About Explorium
Explorium is a leading provider of B2B data foundations for AI agents. We offer go-to-market data and infrastructure designed to power context-aware AI products and strategies. Our platform harmonizes diverse data sources to deliver high-quality, structured, and trustworthy insights - empowering businesses to build intelligent systems that drive real growth.
We're at the forefront of applied AI - leveraging LLMs, Generative AI, and modern data engineering practices to solve hard, real-world data problems at scale.
About the Team
Atlas is a data engineering team that owns Explorium's core data products end-to-end - from ingestion and enrichment through transformation, quality, and serving. We build and operate the pipelines, data models, and platform services that power the product.
We work closely with our customers and external data providers to assess, integrate, and enhance third-party data assets.
The Role
We're looking for a Senior Data Engineer to own high-impact data products from architecture through production deployment, monitoring, and continuous improvement. This isn't a pure infrastructure role - you'll combine strong engineering with product thinking, operational excellence, and awareness of data quality, cost, and business impact.
You will design, implement, test, deploy, and maintain production-grade data products - pipelines, transformation layers, data quality and reliability systems - using tools like dbt (on Spark) and Databricks. You'll apply best practices in Python and SQL to build scalable and maintainable data transformations, and leverage technologies like LLMs and GenAI to create innovative solutions for real business problems.
This role is ideal for someone who wants technical leadership responsibilities in an AI-first engineering culture - we use LLMs, GenAI, and AI-native development tools as core parts of our daily workflow.
Key Responsibilities
- Act as a technical leader within the team - raise engineering standards, drive strong architectural choices, and improve how we build
- Own data products end-to-end: design, development, deployment, monitoring, and iteration
- Work closely with senior leadership to translate strategic goals into scalable data solutions
- Develop and maintain production ETL/ELT pipelines using dbt (on Spark) and orchestrated workflows in Databricks
- Build monitoring, alerting, and testing pipelines to ensure reliability and performance in production
- Evaluate and introduce new technologies - including AI-native development tools - and integrate the ones that create real impact
- Collaborate with customers and external data providers to gather requirements and inform product decisions
- Mentor team members through code reviews, pairing, and knowledge sharing