We’re looking for a Data Engineer to build, optimize, and support data workflows across our Azure ecosystem. You’ll work closely with technical and business stakeholders to deliver reliable, scalable, and well-documented data solutions that power analytics, reporting, and operational decision-making—especially across asset management data domains like securities, holdings, benchmarks, and pricing.
In this role, you will:
- Design and orchestrate data pipelines in Azure Data Factory, from ingestion through transformation to delivery.
- Develop and optimize transformations in Azure Databricks using PySpark and SQL for large-scale processing and validation.
- Model and store data in Azure Data Lake Storage (hierarchical namespace) with robust partitioning, metadata, and governance practices.
- Query and warehouse datasets in Azure Synapse—enabling distributed querying and analytical workloads.
- Implement ETL/ELT workflows, including data cleansing, mapping, integration, and reconciliation.
- Automate routine tasks and operational checks using Python (scripting/automation).
- Ensure data quality—own validation frameworks, exception handling, and documentation across the lifecycle.
- Collaborate with business and technology teams—turn requirements into scalable, production-grade solutions.
To be successful in this role, your skills and experience should include:
- Strong hands-on experience across the Azure data stack, including Azure Databricks, Data Factory, Data Lake, and Synapse.
- Solid understanding of ETL/ELT principles and implementation.
- Advanced SQL and PySpark skills for transformation and validation.
- Experience in Python for automation and scripting.
- Strong problem-solving, attention to detail, and commitment to data quality.
- Excellent communication and documentation skills—comfortable interfacing with both technical and business teams.
- Proven experience managing and troubleshooting end-to-end data workflows within the Azure ecosystem.
- Experience in the following is a bonus:
  - Power BI or Tableau for visualization and reporting support.
  - Familiarity with DevOps concepts, CI/CD workflows, and source control (Azure DevOps preferred).
Benefits
Joining Cognizant will give you the opportunity to learn and collaborate with some of the most talented people in the industry, while having your finger on the pulse of emerging industry trends and working on the cutting edge of technology in your field of expertise.
We recognize that our people perform at their best when they feel valued as significant contributors, which is why taking care of our employees is a priority at Cognizant:
- You can pursue innovative career tracks and opportunities here.
- You can enhance your professional development through education and dedicated training.
- We’ll give you the skills you need to keep pace with the changing workplace while our compensation, benefits and wellness packages help you stay healthy and plan for the future.
Please reach out to our friendly and welcoming team today to apply and register your interest for this full-time hybrid Data Engineer position.
At Cognizant (Nasdaq: CTSH), we engineer modern businesses to improve everyday life because we're dedicated to making a lasting impact. We help our clients modernize technology, reimagine processes and transform experiences so they can stay ahead in our fast-changing world. Together, we're improving everyday life. See how at www.cognizant.com or @cognizant.
