Why This Job is Featured on The SaaS Jobs
This Senior Data Engineer role stands out in a SaaS setting because it sits at the intersection of product data, AI enablement, and customer-facing insights. The remit is not limited to maintaining pipelines; it is about shaping a unified data platform that can reliably serve both internal AI development and in-application analytics. The tooling mix (Databricks, Azure ecosystem components, Snowflake replication, low-latency serving) reflects the kind of modern SaaS data stack used to operationalise usage data at scale.
From a SaaS career perspective, the work maps closely to the problems that recur as subscription products mature: standardising event and operational data, supporting near-real-time experiences, and raising trust through governance and observability. Building Medallion-style layers, handling batch plus streaming, and implementing RBAC and PII controls are all portable capabilities across SaaS companies investing in AI features and measurable product outcomes.
This position is best suited to someone who prefers hands-on platform building while still thinking in systems and long-term architecture. It will fit an engineer who enjoys cross-functional collaboration with AI and insights stakeholders, and who wants ownership over reliability, security, and data access patterns that directly influence how a SaaS product learns from—and responds to—customer behaviour.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
We are seeking an experienced data engineer who thrives in a fast-paced environment. You will have a unique opportunity to build the new unified data platform that powers our suite of AI tools and insight delivery.
About this role and the work
Karbon is at the start of its Data & AI journey, which means you will have the opportunity to revolutionize our data platform. This role supports both our AI team and our Insights team, each of which is critical in delivering features for the Karbon platform. You’ll improve our new data platform, which is centered on Databricks. The successful candidate will be a hands-on builder and a strategic thinker, capable of designing scalable, robust, and forward-looking data solutions.
Some of your main responsibilities will include:
- Develop a Unified Data Platform: Build our new unified data platform on Databricks. You will be instrumental in establishing the Medallion Architecture (Bronze, Silver, and Gold layers), using DLT for data modeling and transformations (see the first sketch after this list).
- Develop Data Pipelines: Create and manage resilient data pipelines for both batch and real-time processing from various sources in our Azure data ecosystem. This includes building a "hot path" for streaming data and orchestrating complex dependencies using Databricks Workflows.
- Enable Data Integration and Access: Implement and manage data replication processes from Databricks to Snowflake. You will also be responsible for developing a low-latency query endpoint to serve our production Karbon application (both patterns are illustrated in the sketches after this list).
- Champion Data Quality and Governance: Establish best practices for data quality, integrity, and observability. You will build automated quality checks, tests, and monitoring for all data assets and pipelines to ensure trust in our data (the first sketch below shows expectation-style checks).
- Implement Robust Security and Governance Practices: Design and enforce a comprehensive security model for the data platform. This includes managing PII and implementing a fine-grained Role-Based Access Control (RBAC) model through infrastructure-as-code (the final sketch below shows the kinds of grants such code provisions).
- Collaborate Cross-Functionally: Work within a cross-functional team of AI engineers, analysts, and developers to deliver impactful data products.
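
To make the platform and pipeline responsibilities concrete, here is a minimal sketch of what a Medallion-style pipeline might look like, assuming DLT here refers to Databricks Delta Live Tables. The table names, landing path, and columns are illustrative placeholders rather than Karbon's actual schema; the sketch pairs a streaming Bronze ingest with expectation-based quality gates on the Silver layer.

```python
# Minimal Delta Live Tables sketch: streaming Bronze ingest, a validated
# Silver layer, and an aggregated Gold table. Runs inside a DLT pipeline,
# where the `spark` session is provided. All names and paths are placeholders.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Bronze: raw events landed from cloud storage as-is")
def bronze_events():
    # Auto Loader incrementally picks up new files, which also suits a
    # streaming "hot path" for near-real-time data
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/events/")  # hypothetical landing path
    )


@dlt.table(comment="Silver: cleaned, typed events with quality gates")
@dlt.expect_or_drop("valid_account", "account_id IS NOT NULL")
@dlt.expect("recent_event", "event_ts > '2020-01-01'")
def silver_events():
    return dlt.read_stream("bronze_events").select(
        F.col("account_id").cast("string"),
        F.col("event_type"),
        F.col("event_ts").cast("timestamp"),
    )


@dlt.table(comment="Gold: daily usage rollup for downstream consumers")
def gold_daily_usage():
    return (
        dlt.read("silver_events")
        .groupBy("account_id", F.to_date("event_ts").alias("usage_date"))
        .agg(F.count("*").alias("event_count"))
    )
```

In a real deployment, Databricks Workflows would schedule a pipeline like this alongside its upstream and downstream dependencies.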
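Replication to Snowflake could take several forms; one common pattern is a batch write through the Spark Snowflake connector that ships with Databricks runtimes. The account URL, secret scope, warehouse, and table names below are hypothetical placeholders.

```python
# Illustrative batch replication of a Gold table from Databricks to Snowflake.
# Assumes a Databricks notebook/job context where `spark` and `dbutils` exist;
# credentials come from a secret scope rather than literals.
sf_options = {
    "sfUrl": "myorg-myaccount.snowflakecomputing.com",  # placeholder account
    "sfUser": dbutils.secrets.get("snowflake", "user"),
    "sfPassword": dbutils.secrets.get("snowflake", "password"),
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "REPLICATION_WH",
}

(
    spark.read.table("gold_daily_usage")  # Gold table from the sketch above
    .write.format("snowflake")
    .options(**sf_options)
    .option("dbtable", "DAILY_USAGE")     # target Snowflake table
    .mode("overwrite")
    .save()
)
```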
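For the low-latency query endpoint, one plausible shape is application code reading a Gold table through a Databricks SQL warehouse using the databricks-sql-connector package. The hostname, HTTP path, and token below are placeholders, and a production endpoint would add connection pooling, caching, and timeouts.

```python
# Sketch of a low-latency read path for the production application.
# Requires the databricks-sql-connector package; connection details are
# placeholders, and credentials should come from a managed secret store.
from databricks import sql


def fetch_daily_usage(account_id: str) -> list:
    with sql.connect(
        server_hostname="adb-1234567890.12.azuredatabricks.net",  # placeholder
        http_path="/sql/1.0/warehouses/abc123",  # placeholder warehouse path
        access_token="dapi-REDACTED",  # placeholder token
    ) as conn:
        with conn.cursor() as cursor:
            # Named parameters (connector 3.x) keep the query injection-safe
            cursor.execute(
                "SELECT usage_date, event_count FROM gold_daily_usage "
                "WHERE account_id = :account_id",
                {"account_id": account_id},
            )
            return cursor.fetchall()
```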
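Finally, the security bullet names infrastructure-as-code as the delivery mechanism for RBAC. To stay in one language here, the sketch shows the underlying Unity Catalog grants that such infrastructure code (for example, Terraform) would typically provision; all catalog, schema, and group names are made up.

```python
# Illustrative fine-grained RBAC grants executed via Spark SQL. In practice
# these would be declared in IaC and applied by CI/CD, not run imperatively.
# All object and group names are hypothetical.
grants = [
    "GRANT USE CATALOG ON CATALOG analytics TO `insights_team`",
    "GRANT USE SCHEMA ON SCHEMA analytics.gold TO `insights_team`",
    "GRANT SELECT ON SCHEMA analytics.gold TO `insights_team`",
    # PII stays restricted: only a dedicated group may read sensitive tables
    "GRANT SELECT ON TABLE analytics.silver.customers_pii TO `pii_readers`",
]
for stmt in grants:
    spark.sql(stmt)
```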
About you
If you’re the right person for this role, you have:
- 5+ years of relevant work experience as a data engineer, with a proven track record of building and scaling data platforms
- Previous experience with Databricks
- Previous experience architecting ETL and ELT data migration patterns, with strong proficiency in DLT
- Experience scaling data pipelines in a multi-cloud environment
- Strong proficiency in Python
- Strong proficiency in SQL and a deep understanding of relational DBMS
- DevOps experience, including CI/CD, and infrastructure-as-code (e.g., Terraform)
It would be advantageous if you have:
- Previous experience with Azure cloud services (highly desirable)
- Experience with both batch and streaming data technologies
- Experience building and maintaining APIs or query endpoints for application data access
- Practical MLOps experience, such as implementing solutions with MLflow, feature stores, and automated model deployment and evaluation pipelines.
Why work at Karbon?
- Gain global experience across the USA, Australia, New Zealand, the UK, Canada, and the Philippines
- 4 weeks annual leave plus 5 extra "Karbon Days" off a year
- Flexible working environment
- Work with (and learn from) an experienced, high-performing team
- Be part of a fast-growing company that firmly believes in promoting high performers from within
- A collaborative, team-oriented culture that embraces diversity, invests in development, and provides consistent feedback
- Generous parental leave