Why This Job is Featured on The SaaS Jobs
This Software Engineer I role sits at a core SaaS junction where product analytics meets modern data infrastructure. Building integrations with cloud data warehouses and enabling “warehouse-native” analytics reflects a broader shift in SaaS toward meeting customers where their data already lives, rather than forcing ingestion into a separate system. The work touches the connective tissue between customer data ecosystems and a product intelligence platform.
From a SaaS career standpoint, the position develops fluency in the reliability and performance expectations that underpin subscription products at scale. Experience with throughput, low-latency design, and resilience maps directly to common SaaS challenges like multi-tenant stability, predictable query performance, and operational maturity. The stated tooling mix, including Kubernetes, Kafka, and Terraform, also provides transferable grounding in cloud-native patterns used across data-heavy SaaS businesses.
The role is best suited to early-career engineers who want their first or second position to be close to production systems and real customer workflows. It favors a working style that values cross-functional collaboration, comfort with design discussions and code review, and curiosity about distributed systems and data products. Candidates motivated by platform work that directly shapes how SaaS users access and activate data should find the scope aligned.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
The Data Warehouse team is responsible for developing features that allow users to seamlessly integrate disparate data sources with Amplitude and turn data into powerful product intelligence. We created a data platform that lets customers easily bring data from numerous sources into Amplitude and enhances the value of that data by powering workflows outside of Amplitude as well. We are also building a next-generation analytics experience where users can instantly get product insights from data in their warehouses without ingesting anything. Today, the team still works like a small startup, with a fast-paced, collaborative, and adaptive working environment, and we have many product initiatives ahead of us.
In this role, you will solve hard infrastructure challenges such as designing for extreme throughput, optimizing systems for millisecond latencies, and building resilient systems that achieve close to zero downtime. You will also work closely with Product and customers to define the strategy and roadmap for our core product and new features, and partner with designers to deliver the best customer experience for our products.
Our product sits on top of many modern technologies, including Kubernetes, Kafka, Redis/ElastiCache, Amazon S3, DynamoDB, and Terraform. You will share your ideas with a group of similarly innovative and curious engineers.
As a Software Engineer I, You Will:
- Contribute to backend and data product features that integrate with cloud data warehouses.
- Design and build systems that are reliable, performant, and scalable as we grow.
- Collaborate with Product, Design, and other engineers to bring ideas from concept to implementation.
- Participate in code reviews and design discussions to grow your skills and share knowledge.
- Learn modern data engineering tools and best practices while contributing to real production systems.
You’ll Be a Great Addition to the Team If You Have:
- A degree in Computer Science or related technical field, or equivalent practical experience.
- Strong computer science fundamentals (data structures, algorithms, software design).
- Solid programming skills in at least one modern language (Java, Go, or Python preferred).
- An eagerness to learn about distributed systems, large-scale data processing, and data products.
- A passion for solving challenging technical problems and working collaboratively.
Bonus Points
- Prior internship or project experience with backend systems, data pipelines, or cloud services.
- Familiarity with data tools like dbt, Temporal, or Apache Iceberg.
- Experience working with cloud data warehouses (Snowflake, BigQuery, Redshift).
- Contributions to team projects, open-source code, or technical communities.
This role is eligible for equity, benefits, and other forms of compensation.
Based on Colorado law, the following details are for individuals who will work for Amplitude in Colorado. Colorado range: $121,000 - $181,000 total target cash (inclusive of bonus or commission)
Based on legislation in New York City, the following details are for individuals who will work for Amplitude in New York City. New York City salary range: $134,000 - $201,000 total target cash (inclusive of bonus or commission)
Based on legislation in California, the following details are for individuals who will work for Amplitude in San Francisco Bay Area of California. Salary range: $134,000 - $201,000 total target cash (inclusive of bonus or commission)
Based on legislation in California, the following details are for individuals who will work for Amplitude in California outside of the San Francisco Bay Area. California salary range: $121,000 - $181,000 total target cash (inclusive of bonus or commission)
Based on legislation in Washington state, the following details are for individuals who will work for Amplitude in Washington state. Washington salary range: $121,000 - $181,000 total target cash (inclusive of bonus or commission)
Based on legislation in Washington state, the following details are for individuals who will work for Amplitude in Washington only: unlimited PTO; 10 to 13 holidays annually (will vary); medical, dental, and vision PPO and CDHP plans; and a company-sponsored 401(k) retirement plan.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.