Big Data Engineer

The Big Data Engineer will support software developers, database architects, data analysts and data scientists on client projects, and will ensure that a consistent, optimal data delivery architecture is maintained across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimising or re-designing our clients’ data architecture to support the next generation of products and data initiatives.

Key Responsibilities:

  • Develop applications using Big Data and Cognitive technologies, including API development.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using ‘big data’ technologies.
  • Build analytics tools that utilise the data pipeline to provide actionable insights into clients’ key business performance metrics.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Work with stakeholders, including Project Managers and the Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs.

Key Skills and Experience:

  • 4+ years of experience in a Data Engineer role.
  • A traditional Application Development background, along with knowledge of analytics libraries, open-source Natural Language Processing tools, and statistical and big data computing libraries.
  • Strong technical ability to understand, design, write and debug complex code.
  • Advanced SQL knowledge and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases.
  • Experience building and optimising ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organisational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Skills include Python, Spark, Kafka, Hadoop, NoSQL, HBase, Hive, Pig, C++, SQL, Linux, Java, EAI, SOA, CEP, HDFS and ETL.

If this looks like you and you’d like to be considered for this outstanding opportunity, contact Harry Wade on +61 (0) 487 443 130 or click APPLY.
Date posted: 23/09/2020
Location: Melbourne
Job type: Contract
Category: I.T. & T
Reference: BH-357758