Why This Job is Featured on The SaaS Jobs
This Senior Software Engineer, Data Platform role stands out in SaaS because it sits at the layer that turns product usage and operational signals into reliable, shared infrastructure. With clickstream, database, and third-party data flowing in both batch and real time, the work reflects the data demands common in mature subscription products, where measurement, experimentation, and reporting must be dependable across many teams.
For a long-term SaaS career, platform data engineering experience tends to compound. Building and operating a modern data stack in production, especially across cloud services and streaming systems, develops judgment around tradeoffs in latency, cost, and governance that applies across industries. The emphasis on end user needs and stakeholder collaboration also maps directly to how SaaS organizations scale analytics, data science, and business intelligence without fragmenting tooling.
This role is best suited to an experienced engineer who prefers ownership from requirements through production support and enjoys making ambiguous problems concrete. It fits someone who likes building internal products for technical and non-technical consumers, and who is comfortable working across AWS, orchestration tooling, and analytical storage. Candidates motivated by enabling other teams, rather than optimizing a single application surface, should find the scope a good fit.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
We are looking for a Senior Software Engineer to architect, build, and maintain the data infrastructure at Gusto. As part of the Data Platform team, you will collaborate closely with Data Science, Business Intelligence, and analysts across Gusto teams to help them achieve their goals.
The Data Platform position is a software development role with knowledge of data architectures and data delivery. The ideal candidate is passionate about developing software, working with data, and understanding the needs of end users.
Here’s what you’ll do day-to-day:
- Architect, build, and refine our infrastructure and tools that enable other teams to work with data.
- Efficiently handle vast amounts of clickstream, database, and third-party application data, processing in both batch and real time.
- Take full ownership of the solutions you build, working with stakeholders to develop requirements, implement solutions, monitor production, and troubleshoot problems that arise.
- Work as part of a team. We value team players who share their knowledge and like collaborating with others.
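To make the batch side of this work concrete, here is a toy sketch in Python (which the posting names as preferred). The event shape, field names, and aggregation are illustrative only, not Gusto's actual pipeline or schema.

```python
from collections import Counter
from datetime import datetime

# Toy clickstream events; a production pipeline would read these from
# object storage or a streaming system rather than an in-memory list.
events = [
    {"user": "a", "action": "page_view", "ts": "2024-01-01T00:00:00"},
    {"user": "a", "action": "click", "ts": "2024-01-01T00:00:05"},
    {"user": "b", "action": "page_view", "ts": "2024-01-01T00:00:07"},
]

def daily_action_counts(events):
    """Batch aggregation: count events per (date, action) pair."""
    counts = Counter()
    for event in events:
        day = datetime.fromisoformat(event["ts"]).date().isoformat()
        counts[(day, event["action"])] += 1
    return dict(counts)

print(daily_action_counts(events))
# → {('2024-01-01', 'page_view'): 2, ('2024-01-01', 'click'): 1}
```

The real role would involve the same shape of problem at much larger scale, with orchestration, monitoring, and stakeholder-defined requirements layered on top.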
Here’s what we’re looking for:
- At least 7 years of software engineering experience.
- Experience building solutions in the cloud, AWS preferred (Redshift, MSK, EMR).
- Experience with OLAP databases (ClickHouse).
- Experience building data pipelines at scale, Airflow and Python preferred.
- Experience with streaming systems desired (Kafka, Kinesis, or similar).
- Ability to turn vague requirements into clear deliverables with minimal guidance.
- Experience building and maintaining a modern data stack in production.
Our cash compensation amount for this role is targeted at $163,000-$204,000/year in Denver, $178,000-$223,000/year in Los Angeles, and $197,000-$247,000/year in San Francisco, New York, and Seattle. Final offer amounts are determined by multiple factors, including candidate experience and expertise, and may vary from the amounts listed above.