Why This Job is Featured on The SaaS Jobs
SaaS companies increasingly compete on how quickly teams can turn product and operational data into decisions, and this Senior Software Engineer, Data Platform role sits at that leverage point. The remit spans batch and real-time ingestion across clickstream, database, and third-party sources—work that directly shapes the reliability and accessibility of analytics across a subscription product. Positioned within a dedicated Data Platform function, it reflects a mature approach to internal data-as-a-service for multiple stakeholder groups.
From a SaaS career perspective, the role builds durable platform engineering skills: designing multi-tenant data infrastructure, setting standards for data delivery, and operating pipelines in production with clear ownership from requirements through monitoring and incident response. Experience with AWS services, modern data stack patterns, orchestration (Airflow), and streaming systems maps cleanly to how many SaaS organizations structure their data layer as they scale usage and reporting demands.
This position tends to fit senior engineers who prefer systems thinking and cross-functional collaboration over feature-only delivery. It suits professionals comfortable translating ambiguous stakeholder needs into dependable interfaces and tooling, and who enjoy being accountable for operational outcomes in addition to architecture and implementation.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
We are looking for a Senior Software Engineer to architect, build, and maintain the data infrastructure at Gusto. As part of the Data Platform team, you will collaborate closely with Data Science, Business Intelligence, and analysts across Gusto teams to help them achieve their goals.
The Data Platform position is a software development role that draws on knowledge of data architectures and data delivery. The ideal candidate is passionate about developing software, working with data, and understanding the needs of end users.
Here’s what you’ll do day-to-day:
- Architect, build, and refine our infrastructure and tools that enable other teams to work with data.
- Efficiently handle vast amounts of clickstream, database, and third-party application data, processing it in batch and real time.
- Take full ownership of the solutions you build, working with stakeholders to develop requirements, implement, monitor production, and troubleshoot problems that arise.
- Work as part of a team. We value team players who share their knowledge and like collaborating with others.
Here’s what we’re looking for:
- At least 7 years of software engineering experience.
- Experience building solutions in the cloud, AWS preferred (Redshift, MSK, EMR).
- Experience with OLAP databases (ClickHouse).
- Experience building data pipelines at scale, Airflow and Python preferred.
- Experience with streaming systems desired (Kafka, Kinesis, or similar).
- Ability to turn vague requirements into clear deliverables with minimal guidance.
- Experience building and maintaining a modern data stack in production.
Our cash compensation amount for this role is targeted at $163,000-$204,000/year in Denver, $178,000-$223,000/year in Los Angeles, and $197,000-$247,000/year for San Francisco, New York, and Seattle. Final offer amounts are determined by multiple factors including candidate experience and expertise and may vary from the amounts listed above.