Why This Job is Featured on The SaaS Jobs
This Senior Data Engineer role sits at a core SaaS inflection point: the centralized data platform that underpins customer-facing products. In subscription software, the ability to instrument, process, and serve data reliably becomes a product capability in its own right, influencing everything from feature delivery to trust and compliance. The remit described here is platform-oriented rather than analytics-only, which is typically where SaaS companies concentrate long-term leverage.
From a career perspective, the work maps closely to durable SaaS competencies: building batch and streaming pipelines at large scale, operating production services, and designing for security and data privacy. The emphasis on experimentation and rapid MVPs also signals exposure to product-led iteration, where data engineering supports measurement, rollout decisions, and continuous improvement. Experience with AWS, Spark, Kafka, and modern lakehouse components tends to transfer well across SaaS firms with growing data estates.
This position is best suited to an established individual contributor who prefers ownership, ambiguity resolution, and cross-functional design work with product and engineering partners. It will appeal to someone who wants to be close to how data platforms shape customer outcomes, and who is comfortable balancing delivery with operational responsibility when services need investigation and triage.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
About the team/role
We're seeking an experienced Senior Data Engineer to join our Data Platform team. This team plays a crucial role in our mission by developing and maintaining scalable data platforms for fair and safe hiring decisions.
As a Senior Data Engineer on the Data Platform team, you'll work on Checkr’s centralized data platform, which is critical to the company's vision and sits at the heart of all key customer-facing products. You will work on high-impact projects that directly contribute to the next generation of products.
What you’ll do:
- Be an independent individual contributor who can solve problems and deliver high-quality solutions with only high-level oversight and a strong sense of ownership
- Bring a customer-centric, product-oriented mindset to the table - collaborate with customers and internal stakeholders to resolve product ambiguities and ship impactful features
- Partner with engineering, product, design, and other stakeholders in designing and architecting new features
- Experimentation mindset - autonomy and empowerment to validate a customer need, get team buy-in, and ship a rapid MVP
- Quality mindset - you insist on quality as a critical pillar of your software deliverables
- Analytical mindset - instrument and deploy new product experiments with a data-driven approach
- Deliver performant, reliable, scalable, and secure code for a highly scalable data platform
- Monitor, investigate, triage, and resolve production issues as they arise for services owned by the team
What you bring:
- Bachelor's degree in a computer-related field or equivalent work experience
- 6-7+ years of development experience in the field of data engineering (5+ years writing PySpark)
- Experience building large-scale (hundreds of terabytes to petabytes) data processing pipelines, both batch and streaming
- Experience with ETL/ELT, stream and batch processing of data at scale
- Strong proficiency in PySpark, Python, and SQL
- Deep understanding of database systems, data modeling, relational databases, and NoSQL stores (such as MongoDB)
- Experience with big data technologies such as Kafka, Spark, Iceberg, data lakes, and the AWS stack (EKS, EMR, Serverless, Glue, Athena, S3, etc.)
- Knowledge of security best practices and data privacy concerns
- Strong problem-solving skills and attention to detail
Nice to have:
- Experience with data processing platforms such as Databricks or Snowflake
- An understanding of graph and vector data stores
What you get:
- A fast-paced and collaborative environment
- Learning and development allowance
- Competitive cash and equity compensation, and opportunity for advancement
- 100% medical, dental, and vision coverage
- Up to $25K reimbursement for fertility, adoption, and parental planning services
- Flexible PTO policy
- Monthly wellness stipend