Head of Data - Principal Engineer

JP Morgan

Location: Greater London

Job Type: Full time


We are not your standard bank, we are an innovation hub. We are part of the team that launched Chase as a new UK bank in 2021, and are now starting on a new initiative with big ambitions. More details will be provided during the interview but for now we can enthusiastically say: It’s challenging, it’s high stakes, it’s fun!

Our team is at the heart of building this new venture. We have created a new organisation with diverse professionals that come from a wide set of skills, backgrounds and experiences, including top tech giants, blockchain innovators and FinTech unicorns.

Team members have autonomy to collaborate and work in a cross-functional way. We value your opinion in every matter. Our inclusive culture is hugely important to us, and we are looking for intellectually curious, compassionate people who would like to expand their skills whilst working on an exciting new venture for the firm. We are about bringing new products to market that solve real-world problems for real-life customers, not just innovation for the sake of innovation.

We are human first; we love working with each other, and we need personal connections and in-person bonding. We find we are happier, more motivated and more productive this way. If this sounds like something you would like to be a part of, we'd encourage you to read on and apply.

What we do
  • Structure software so that it is easy to understand, test and evolve.
  • Build solutions that avoid single points of failure, using scalable architectural patterns.
  • Develop secure code so that our customers and ourselves are protected from malicious actors.
  • Promptly investigate and fix issues and ensure they do not resurface in the future.
  • Ensure releases are fully automated.
  • See that our data is written and read in a way that's optimized for our needs.
  • Keep an eye on performance, making sure we use the right approach to identify and solve problems.
  • Ensure our systems are reliable and easy to operate.
  • Keep us up to date by continuously updating our technologies and patterns.
The role

This is a hands-on role for a Data Engineering lead who wants to be part of a flat-structure organisation and influence the design & development of a green-field initiative.

Your responsibilities will include:

  • Team leadership: Lead, manage, and mentor a team of data engineers to design, develop, and implement data platforms, pipelines, and infrastructure for our multi-cloud product across GCP and AWS.
  • Architecture and design: Design and develop scalable, cost-effective, and secure distributed architectures and solutions, utilizing appropriate GCP and AWS services and technologies.
  • Data pipeline development: Design, implement, and maintain data pipelines that efficiently collect, process, and store large volumes of data from various sources, ensuring data quality and integrity.
  • Performance optimization: Continuously monitor, analyze, and optimize data pipelines to improve performance, reduce costs, and ensure reliability and scalability.
  • Security and compliance: Ensure that data solutions comply with relevant data security and privacy regulations, and implement best practices for securing data at rest and in transit.
  • Data governance: Establish and govern a project-wide data catalogue that lists all data sets across the platform, with appropriate descriptions of the data items.
What we expect

We’d like you to have extensive experience in a similar role, along with a few key technical skills:

  • Programming languages: Proficiency in one or more languages commonly used in data engineering, such as Python, R, Java, or Scala. Strong problem-solving and critical-thinking skills, with the ability to break down complex problems and develop innovative solutions.
  • Data engineering skills: Proficiency in designing, building, and optimizing data pipelines, as well as experience with big data processing tools such as Apache Spark, Hadoop, and Dataflow.
  • Platform experience: Experience designing and operating Operational Data Store, Data Lake, and Data Warehouse platforms at scale with high availability.
  • Data integration: Familiarity with data integration tools and techniques, including ETL (Extract, Transform, Load) processes, real-time data streaming (e.g., using Apache Kafka, Kinesis, or Pub/Sub), and exposing data sets via GraphQL.
  • Cloud platforms expertise: Deep understanding of GCP/AWS services, architectures, and best practices, with experience designing and implementing scalable, cost-effective solutions. Relevant cloud certifications are a plus.
  • Data storage and databases: Knowledge of various data storage options (e.g., relational databases, NoSQL, data lakes) and hands-on experience managing and optimizing databases such as PostgreSQL, MySQL, BigQuery, and Redshift.
  • Leadership and communication: Ability to lead a team of engineers, collaborate effectively with cross-functional teams, and communicate complex technical concepts to both technical and non-technical stakeholders.

We’d also like you to have a few key personal skills. Every organisation has its own culture, and we want you to be successful in ours.

  • A desire to teach others and share knowledge. We aren't looking for hero developers, more for team players. We want you to coach other team members on coding practices, design principles, and implementation patterns.
  • Comfortable in uncharted waters. We are building something new. Things change quickly. We need you to learn technologies and patterns quickly.
  • Ability to see the long term. We don't want you to sacrifice the future for the present. We want you to choose technologies and approaches based on the end goals.
  • High standards. We are looking for people who expect personal performance and team performance to be nothing short of the best.
  • Clarity of thought. We operate quickly and efficiently, and we value people who are economical with their time and clear with their opinions.
  • Comfortable working within a geographically distributed team.
Technologies we use
  • We primarily use JVM-based languages (Java/Kotlin), but we also have parts of the platform that use other languages as needed (Go, Python, etc.).
  • We look to use open source/SaaS when it makes sense and build ourselves when it doesn't.
  • We are entirely cloud native and want to build a truly multi-cloud solution.
  • We look at each problem independently and pick the right technology to solve it.
  • We aren't afraid to try new things, but we always remember that we are looking to build something to last, and we focus on solving real-world problems for real-life customers.
You’ve got this!