Why This Job is Featured on The SaaS Jobs
This Senior Data Engineer role sits at a core SaaS junction: the event stream that turns customer product usage into analytics, personalization, and relevance signals. In a platform business like Algolia, event ingestion is not a back-office pipeline—it is a customer-facing surface area where latency, reliability, and data quality directly shape what downstream features can deliver.
Career-wise, the work translates strongly across modern SaaS companies because it combines real-time and batch patterns, external connector ecosystems (analytics tools and direct integrations), and production-grade distributed systems. Experience designing ingestion contracts, hardening delivery guarantees, and evolving a mature pipeline without disrupting customers is durable, especially for engineers who want to deepen expertise in data platform fundamentals rather than one-off reporting.
The listing signals a fit for someone who enjoys owning critical infrastructure and improving it incrementally, while partnering closely with product and engineering peers to keep technical choices aligned with customer needs. It also suits a senior engineer comfortable onboarding into an established system and helping newer teammates build confidence through shared practices and architectural clarity.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
The Team
The Events team owns Algolia’s customer-facing Events platform, the entry point for sending user interaction data into our system. These events drive improvements in analytics, personalization, and search relevance for thousands of customers. We are continuing to expand and improve this existing system, which means you’ll play a critical role in onboarding to a mature product, helping a newly built team grow in confidence, and shaping the future of this high-volume, real-time data pipeline.
The Role
As a Senior Data Engineer, you’ll help scale and evolve the backbone of Algolia’s Events platform. This means:
- Designing and maintaining reliable pipelines for ingesting and processing both real-time and batch data from diverse external sources (including Segment, Google Analytics, and direct customer integrations).
- Owning and optimizing systems that run at massive scale, ensuring low-latency event delivery and high reliability.
- Quickly getting up to speed with an established production system, and helping your teammates do the same.
- Partnering with backend, frontend, and product teams to align technical decisions with customer-facing needs.
- Contributing to architectural improvements that make our event ingestion platform more robust, efficient, and easy to extend.
- Sharing knowledge across the team and mentoring new engineers to help them grow.
You might be a fit if you have:
Must-haves
- Solid experience with data pipelines and event-driven architectures at scale.
- Proficiency in Go or another backend language, with the ability to quickly adapt to new codebases.
- Strong knowledge of distributed systems, APIs, and messaging platforms like Pub/Sub.
- Hands-on experience with BigQuery or similar data warehouses for analytics.
- A track record of collaborating with cross-functional teams and contributing to production-critical systems.
Nice-to-haves
- Familiarity with GCP and Kubernetes in production environments.
- Exposure to frontend systems (React, Rails) and how they interact with backend data pipelines.
- Experience integrating with customer-facing APIs or analytics connectors.
- Background in onboarding to inherited systems and driving re-architecture where needed.
Team’s current stack
Go backend on GCP and Kubernetes, pipelines built with BigQuery and Pub/Sub, integrating with a React + Rails frontend.