Job Description
Responsibilities:
- Work with data and analytics experts to deliver greater functionality in our data systems; install Apache Airflow from scratch, then configure, maintain, and administer it.
- Work independently and keep up to date with enhancements from the Airflow open-source community.
- Collaborate with other team members and communicate plans so that the installation or configuration of Airflow does not impact existing systems.
- Create and maintain optimal data pipeline architecture (a minimal sketch of such a pipeline appears after this list).
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
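For context on the pipeline work described above, the following is a minimal sketch of an Airflow DAG written with the TaskFlow API. It assumes Airflow 2.4 or later and Python 3.9+; the DAG name, task logic, and sample records are hypothetical placeholders, not part of any existing codebase.

from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl_pipeline():
    # Illustrative extract-transform-load pipeline; all names are hypothetical.

    @task
    def extract() -> list[dict]:
        # Pull raw records from a source system (stubbed here).
        return [{"customer_id": 1, "spend": 42.0}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Apply a simple, illustrative business rule to each record.
        return [{**r, "spend_band": "high" if r["spend"] > 40 else "low"} for r in records]

    @task
    def load(records: list[dict]) -> None:
        # A real pipeline would write to a warehouse table instead of printing.
        print(f"Loading {len(records)} records")

    load(transform(extract()))


example_etl_pipeline()

Data passed between tasks this way travels over Airflow's XCom mechanism, so in practice large data sets would be staged in external storage rather than returned directly.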
Requirements:
- Minimum 5 years of experience with Apache Airflow.
- Ability to develop guidelines for Airflow clusters and DAGs.
- Performance tuning of DAGs and task implementations (see the sketch after this list).
- Development of DAG-based data pipelines for dataset onboarding and change management.
- Experience installing, configuring, and monitoring an Airflow cluster.
- Understanding of the Airflow REST API and of integrating Airflow into the platform ecosystem.
- Experience orchestrating Airflow workflows in a hybrid cloud environment.
- Working SQL knowledge and experience with relational databases, including query authoring and familiarity with a variety of database systems.
- Experience building and optimizing 'big data' pipelines, architectures, and data sets.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' stores.
- Good project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
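As a rough illustration of the DAG performance-tuning point above, the sketch below shows common DAG-level tuning knobs. It assumes Airflow 2.4 or later; the DAG id, schedule, and numeric values are hypothetical placeholders rather than recommendations.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(
    dag_id="tuned_pipeline",    # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
    max_active_runs=1,          # serialise runs so backfills cannot pile up
    max_active_tasks=8,         # cap task parallelism within a single run
    default_args={
        "retries": 3,                             # retry transient failures
        "retry_delay": timedelta(minutes=5),
        "execution_timeout": timedelta(hours=1),  # kill runaway tasks
    },
) as dag:
    EmptyOperator(task_id="placeholder")

Limiting max_active_runs and max_active_tasks trades throughput for predictable load on the workers and the metadata database, which is usually the first lever to reach for when a scheduler is saturated.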
Job Type: Permanent
Salary: $140,000.00 per year
Schedule:
- 8 hour shift
Work Authorisation:
- Australia (Required)
Work Location: In person