Why This Job is Featured on The SaaS Jobs
This Senior Data Engineer role stands out in the SaaS landscape because it sits at the intersection of product analytics, AI/ML enablement, and operational reporting, all of which depend on trustworthy, well-modelled data. The description emphasises production-grade pipelines with observability, testing discipline, and scalable architecture, signalling a platform-minded approach typical of mature SaaS data organisations. Remote delivery within India also reflects how SaaS teams increasingly build core data infrastructure in distributed setups.
From a SaaS career perspective, the work maps to durable problems that recur across subscription businesses: defining data contracts, keeping datasets reliable as features ship, and balancing performance and cost as usage grows. Experience with tiered data models, pipeline SLAs, and incident-driven debugging translates well across analytics engineering, data platform, and ML platform tracks. The focus on standards and design reviews also develops the judgement needed to influence data strategy beyond individual pipelines.
This role is best suited to engineers who prefer operational ownership over one-off builds and who enjoy making systems measurable and resilient. It will fit someone comfortable collaborating across platform, analytics, and ML stakeholders, and who values clear interfaces and repeatable quality controls in a remote-first working rhythm.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
We're looking for a Senior Data Engineer to design, build, and operate the reliable, scalable data pipelines powering analytics, AI/ML, and operational workloads.
This is engineering that matters: production-grade ETL with real observability, rigorous testing discipline, and architectural decisions that scale. You'll work across the full stack of modern data systems, applying strong design principles to build pipelines that don't just run, but run reliably at scale.
If you're passionate about data engineering done right, where monitoring isn't an afterthought, tests are non-negotiable, and system design is foundational, this is the opportunity for you!
Location: Remote in India
What you'll be doing
- Design, build, and maintain scalable ETL/ELT pipelines across batch and streaming workloads.
- Implement and operate pipelines following a tiered data model (e.g., Bronze/Silver/Gold) to ensure clear data contracts, quality boundaries, and reusability (a minimal sketch follows this list).
- Build pipelines that are observable by default, with strong metrics, logging, tracing, and alerting.
- Implement data quality checks, validations, and automated tests at each data tier to ensure correctness, freshness, and reliability.
- Apply strong system design principles to build fault-tolerant, scalable, and maintainable data systems.
- Optimise pipeline performance, cost, and reliability through profiling, monitoring, and tuning.
- Collaborate with platform, analytics, and ML teams to design well-modelled datasets for downstream consumers.
- Participate in architecture and design reviews, contributing to data modelling, ingestion, and observability standards.
- Troubleshoot production issues across pipelines and storage layers using logs, metrics, and traces.
- Ensure data pipelines comply with security, governance, and compliance requirements.
- Other duties as needed.
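To make the tiered-model, quality-check, and observability duties above concrete, here is a minimal sketch in Python of a Bronze-to-Silver promotion step with inline validations, structured log metrics, and an alerting hook. Every name and threshold in it (promote_to_silver, the freshness window, the failure-rate limit) is an illustrative assumption, not a description of NinjaOne's actual stack.

```python
"""Illustrative sketch only: a Bronze -> Silver promotion step with
inline data-quality checks and basic observability. All names and
thresholds here are hypothetical, not taken from the role's codebase."""

import logging
import time

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline.silver")

FRESHNESS_SECONDS = 3600   # assumed SLA: rows older than 1h fail freshness
MAX_FAILURE_RATE = 0.05    # assumed alert threshold: >5% rejects raises an error


def promote_to_silver(bronze_rows: list[dict]) -> list[dict]:
    """Validate raw Bronze rows and return the Silver-quality subset."""
    now = time.time()
    accepted, rejected = [], 0
    for row in bronze_rows:
        # Data contract checks: required fields present, timestamp fresh.
        if row.get("id") is None or row.get("event_ts") is None:
            rejected += 1
            continue
        if now - row["event_ts"] > FRESHNESS_SECONDS:
            rejected += 1
            continue
        accepted.append(row)

    # Observability: emit row counts and failure rate as structured log fields.
    total = len(bronze_rows)
    failure_rate = rejected / total if total else 0.0
    log.info("silver_promotion rows_in=%d rows_out=%d failure_rate=%.3f",
             total, len(accepted), failure_rate)

    # Alerting hook: a real system would page on-call via the alerting stack.
    if failure_rate > MAX_FAILURE_RATE:
        log.error("silver_promotion failure_rate %.3f exceeds limit %.3f",
                  failure_rate, MAX_FAILURE_RATE)

    return accepted


if __name__ == "__main__":
    demo = [
        {"id": 1, "event_ts": time.time()},          # passes both checks
        {"id": None, "event_ts": time.time()},       # fails the data contract
        {"id": 3, "event_ts": time.time() - 7200},   # fails freshness
    ]
    print(f"{len(promote_to_silver(demo))} row(s) promoted to Silver")
```

The point of the pattern is that quality gates and metrics live at the tier boundary itself, so downstream consumers of the Silver layer can rely on the contract without re-validating.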
About you
Must have:
- Strong experience building ETL/ELT pipelines for large-scale data platforms.
- Good understanding of tiered data architectures (e.g., Bronze/Silver/Gold, medallion model) and how to apply them in production.
- Hands-on experience with pipeline observability (metrics, logs, alerts, SLAs/SLOs).
- Solid understanding of distributed systems and system design fundamentals.
- Experience testing data pipelines, including data quality checks, regression testing, and failure scenarios (a minimal test sketch follows these lists).
- Proficiency in one or more programming languages (e.g., Python, Java, Scala).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Strong problem-solving and production debugging skills.
Not necessary but highly regarded:
- Experience with streaming platforms (Kafka, Pulsar, Kinesis).
- Familiarity with data lakes, lakehouse architectures, or OLAP systems.
- Experience with CI/CD for data pipelines and infrastructure-as-code.
- Exposure to regulated or high-availability environments.
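As a companion to the earlier sketch, here is what minimal failure-scenario tests for that hypothetical promote_to_silver step might look like, using Python's built-in unittest and assuming the sketch is saved as pipeline_sketch.py; the module name, function, and thresholds are all assumptions for illustration.

```python
# Illustrative test sketch for the hypothetical promote_to_silver step;
# pipeline_sketch is an assumed module name, not part of any real codebase.
import time
import unittest

from pipeline_sketch import promote_to_silver


class PromoteToSilverTests(unittest.TestCase):
    def test_valid_rows_pass(self):
        rows = [{"id": 1, "event_ts": time.time()}]
        self.assertEqual(len(promote_to_silver(rows)), 1)

    def test_missing_required_field_is_rejected(self):
        rows = [{"id": None, "event_ts": time.time()}]
        self.assertEqual(promote_to_silver(rows), [])

    def test_stale_rows_fail_freshness(self):
        rows = [{"id": 2, "event_ts": time.time() - 7200}]  # 2h old
        self.assertEqual(promote_to_silver(rows), [])

    def test_empty_input_is_handled(self):
        # Failure scenario: empty batch must not raise (e.g., divide by zero).
        self.assertEqual(promote_to_silver([]), [])


if __name__ == "__main__":
    unittest.main()
```

Tests like these exercise the contract, freshness, and edge cases directly, which is what the posting means by testing failure scenarios rather than just the happy path.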
NinjaOne unifies IT to simplify work for more than 35,000 customers in 140+ countries.
The NinjaOne Unified IT Operations Platform delivers endpoint management, autonomous patching, backup, and remote access in a single console to improve efficiency, increase resilience, and reduce spend. By automating IT and managing all endpoints, organizations give employees a great technology experience at work.
NinjaOne is obsessed with customer success and has maintained a 98% customer satisfaction score for more than five years.
What You’ll Love
- Competitive compensation
- Pension scheme
- Employee's Provident Fund
- Private healthcare
- Paid maternity and paternity leave
- 12 days of paid sick leave
- 18 days of annual leave
- Indian public holidays based on your location
- Other leave benefits, such as wedding leave
This position is NOT eligible for Visa sponsorship.
All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, age, disability, genetic information, marital status, veteran status, or any other status protected by applicable law. We are committed to providing an inclusive and diverse work environment.