Senior Data Engineer

Remote - Full time
I.T., digital & online media services
5,001-10,000 employees

Our Data Engineering Team comprises data experts. We build world-class data solutions and applications that power crucial business decisions throughout the organisation. We manage multiple analytical data models and pipelines across Atlassian, covering finance, growth, product analysis, customer analysis, sales and marketing, and more. We maintain Atlassian’s data lake, which provides a unified way of analysing our customers, our products, our operations, and the interactions among them.

We’re hiring a Senior Data Engineer, reporting to the Data Engineering Manager based in Sydney. Here, you’ll enable a world-class engineering practice, shape how we use data, develop backend systems and data models to serve the needs of insights, and help build Atlassian’s data-driven culture. You love thinking about the ways the business can consume data and then figuring out how to build it.


At Atlassian, we strive to design equitable, explainable, and competitive compensation programs. To support this goal, the baseline of our range is higher than that of the typical market range, but in turn we expect to hire most candidates near this baseline. Base pay within the range is ultimately determined by a candidate's skills, expertise, or experience. In the United States, we have three geographic pay zones. For this role, our current base pay ranges for new hires in each zone are:

Zone A: $163,300 - $217,700

Zone B: $147,000 - $196,000

Zone C: $135,600 - $180,700

This role may also be eligible for benefits, bonuses, commissions, and equity.

Please visit for more information on which locations are included in each of our geographic pay zones. However, please confirm the zone for your specific location with your recruiter.

  • BS in Computer Science or equivalent experience, with 8+ years in a Senior Data Engineer or similar role.

  • Strong programming skills in Python.

  • Working knowledge of relational databases and query authoring (SQL).

  • Experience designing data models for optimal storage and retrieval to meet product and business requirements.

  • Experience building scalable data pipelines using Spark (SparkSQL) with Airflow or a similar scheduling tool.

  • Experience working with AWS data services or comparable Apache projects (Spark, Flink, Hive, Kafka).

  • Understanding of data engineering tools, frameworks, and standards that improve the productivity and quality of output for Data Engineers across the team.

  • Well versed in modern software development practices (Agile, TDD, CI/CD).