Why This Job Is Featured on The SaaS Jobs
Within SaaS, observability and security platforms live or die by ingestion: the ability to accept, normalize, and route customer telemetry continuously and at scale. This Staff Software Engineer role sits at that core, focused on data pipelines that handle petabyte-level log volume, high-frequency metrics, and large-scale tracing. The scope signals a mature, production-heavy environment where backend decisions directly influence product reliability and customer trust.
For a long-term SaaS engineering path, ingest is a compounding specialty. Work at this layer builds durable instincts around distributed systems, performance tuning, and operational correctness under real customer load. The emphasis on automated testing, CI/CD, and design documentation maps closely to how successful SaaS companies sustain platform evolution without breaking downstream analytics and detection workflows. Experience here also transfers into adjacent domains such as platform engineering, SRE-aligned backend work, and data infrastructure leadership.
This role is best suited to senior engineers who prefer end-to-end ownership of system behavior in production, from design through measurement and iterative improvement. It will fit professionals who enjoy mentoring and influencing engineering practices, and who are comfortable collaborating across time zones with both technical and non-technical partners.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
Staff Software Engineer - Data Ingest
Location: Noida/Bengaluru (Hybrid)
As a backend staff engineer on the Data Ingest team, you will help build a scalable, reliable, and performant data platform for observability and security products, empowering our customers to rapidly create high-quality analyses and react in real time to events and incidents. The Data Ingest team is responsible for keeping the ingestion pipeline performant at petabytes of log data per day, millions of metric data points per minute, and terabytes of tracing data.
Responsibilities
- Design and implement extremely high-volume, fault-tolerant, scalable backend systems that process and manage petabytes of customer data.
- Analyze and improve the efficiency, scalability, and reliability of our backend systems.
- Write robust code; demonstrate its robustness through automated tests.
- Work as a member of a team, helping the team respond quickly and effectively to business needs.
- Mentor junior engineers and improve software development processes.
- Evaluate and test technologies, and provide technology and design recommendations to management.
- Write detailed design documents and documentation on system design and implementation.
- Take ownership of breaking down requirements into technical tasks and help estimate timelines.
Required Qualifications and Skills
- B.Tech, M.Tech, or Ph.D. in Computer Science or a related discipline.
- 9+ years of industry experience with a proven track record of ownership.
- Object-oriented programming experience, for example in Java, Scala, Ruby, or C++.
- Experience in multi-threaded programming and distributed systems.
- Understanding of the performance characteristics of commonly used data structures (maps, lists, trees, etc.).
- Desire to learn Scala, an established JVM language (scala-lang.org).
- Experience working in teams with a heavy emphasis on automation and quality (CI/CD).
- Experience leading projects and mentoring engineers.
- Comfortable working with a remote team operating in multiple time zones.
- Comfortable communicating about your work with both technical and non-technical team members, including fellow engineers, product managers, designers, and analysts.
- Team player, able to give and take constructive feedback and apply it (from code reviews to 1:1s).
Desired Qualifications and Skills
- Experience with big data and/or running a 24x7 commercial service is highly desirable.
- Agile software development experience (test-driven development, iterative and incremental development) is a plus.
About Us
Sumo Logic, Inc. helps make the digital world secure, fast, and reliable by unifying critical security and operational data through its Intelligent Operations Platform. Built to address the increasing complexity of modern cybersecurity and cloud operations challenges, the platform empowers digital teams to move from reaction to readiness, combining agentic AI-powered SIEM and log analytics into a single platform to detect, investigate, and resolve modern challenges. Customers around the world rely on Sumo Logic for trusted insights to protect against security threats, ensure reliability, and gain deep visibility into their digital environments. For more information, visit www.sumologic.com.
Sumo Logic Privacy Policy. Employees will be responsible for complying with applicable federal privacy laws and regulations, as well as organizational policies related to data protection.