Why This Job is Featured on The SaaS Jobs
This Software Engineer, Data Foundations role sits at a core SaaS problem: turning fragmented enterprise content across many third-party applications into a dependable layer that search and AI features can build on. Work spanning connectors, ingestion, and permission-aware representations reflects the reality of modern SaaS ecosystems, where product value depends on integrating cleanly with tools like Google Workspace, Microsoft 365, and Salesforce while respecting enterprise constraints.
For a long-term SaaS engineering career, this kind of scope compounds. It builds fluency in multi-tenant data movement, API and webhook design, and the operational disciplines that keep customer-facing data fresh and correct at scale. The emphasis on SLO thinking, idempotency, retries, and backpressure also maps directly to how mature SaaS platforms reduce incident risk while onboarding more customers and data sources.
The role tends to suit engineers who enjoy platform-style ownership rather than feature-only delivery, and who like tracing failures across queues, workers, storage, and external systems. It also fits someone motivated by security-sensitive data handling and the practical constraints of enterprise integrations, especially in a hybrid Bay Area environment with close cross-functional collaboration.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
About the Role
We are looking for a Software Engineer to join Glean’s Data Foundations team — the group that owns the end-to-end data ingestion and management layer powering Glean’s Search, AI Assistant, and Agent products across thousands of enterprise apps and billions of documents.
Your work will directly determine the quality, freshness, and trustworthiness of the knowledge that every Glean user interacts with every day.
You will work on:
Ingestion & Connectivity
- Build and scale connectors to a wide variety of SaaS and on-prem systems (Google Workspace, Microsoft 365, Slack, Salesforce, Jira, ServiceNow, GitHub, etc.).
- Handle full syncs, low-latency incremental updates via webhooks/APIs, rate limiting, and complex authentication flows (a minimal sync sketch follows this list).
- Build advanced datasource capabilities such as actions, live-fetch, and query language support.
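To make the incremental-update work above concrete, here is a minimal sketch of webhook-driven sync with client-side rate limiting and bounded retries. It is illustrative only: `fetch_document`, `upsert_document`, and the event shape are hypothetical placeholders, not Glean's actual connector interface.

```python
import random
import time


class RateLimiter:
    """Simple token-bucket limiter to stay under a third-party API quota."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def acquire(self) -> None:
        # Refill tokens based on elapsed time, then spend one (waiting if needed).
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)


def handle_change_event(event: dict, fetch_document, upsert_document, limiter: RateLimiter) -> None:
    """Re-fetch a changed document and upsert it, with bounded retries.

    `fetch_document` and `upsert_document` are placeholders for the
    datasource API call and the index write path.
    """
    doc_id = event["document_id"]
    for attempt in range(5):
        limiter.acquire()
        try:
            upsert_document(doc_id, fetch_document(doc_id))
            return
        except Exception:
            # Exponential backoff with jitter before the next attempt.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError(f"giving up on document {doc_id} after repeated failures")
```

A production connector would also need pagination, cursor checkpoints, and per-tenant quota handling, but the shape of the problem is the same.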
Data Processing & Modeling
- Transform raw, unstructured enterprise content into rich, structured, permission-aware representations optimized for search and LLM reasoning.
- Design document schemas and enrichment pipelines (entity extraction, access-graph propagation, redactions, etc.); an illustrative schema sketch follows this list.
- Expand the capabilities of AI products through deep integrations that allow us to automate tasks, perform complex queries grounded in enterprise data, and enhance our indexed corpus with live data.
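As a rough illustration of what a permission-aware representation can mean in practice, the sketch below carries ACL and sensitivity metadata alongside the content and flattens group membership during enrichment. All field and function names are assumptions made for the example, not Glean's schema.

```python
from dataclasses import dataclass, field


@dataclass
class IndexedDocument:
    """Illustrative permission-aware document representation (field names are hypothetical)."""
    doc_id: str
    datasource: str                        # e.g. "jira", "slack"
    title: str
    body: str
    entities: list[str] = field(default_factory=list)
    allowed_users: set[str] = field(default_factory=set)
    allowed_groups: set[str] = field(default_factory=set)
    sensitivity: str = "internal"          # e.g. "public" / "internal" / "restricted"


def enrich(doc: IndexedDocument, extract_entities, expand_group) -> IndexedDocument:
    """One enrichment pass: entity extraction plus access-graph propagation.

    `extract_entities` and `expand_group` stand in for real services
    (an extraction model and a group-membership resolver).
    """
    doc.entities = extract_entities(doc.body)
    # Flatten group ACLs to concrete users so query-time checks stay cheap.
    for group in doc.allowed_groups:
        doc.allowed_users |= set(expand_group(group))
    return doc
```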
Reliability & Distributed Systems
- Own end-to-end correctness, freshness, and performance for petabyte-scale data flows.
- Solve hard problems in ordering, idempotency, exactly-once processing, backpressure, and retries across distributed queues, workers, and storage.
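One common way these properties fit together is an at-least-once queue consumer made effectively exactly-once through deterministic deduplication keys, with a bounded queue providing backpressure. The sketch below assumes a hypothetical event shape and stands in the dedup store and index write with placeholders.

```python
import hashlib
import queue


def event_key(event: dict) -> str:
    """Deterministic dedup key: duplicate deliveries of the same change map to one key."""
    raw = f"{event['datasource']}:{event['document_id']}:{event['version']}"
    return hashlib.sha256(raw.encode()).hexdigest()


# A bounded queue gives upstream producers natural backpressure: put() blocks
# when the queue is full, so a fast crawler slows down instead of flooding workers.
events: queue.Queue = queue.Queue(maxsize=1000)


def worker(processed_keys: set[str], apply_update) -> None:
    """At-least-once consumer made effectively exactly-once via deduplication.

    `processed_keys` stands in for a durable dedup store and `apply_update`
    for the side-effecting index write; both are placeholders.
    """
    while True:
        event = events.get()
        key = event_key(event)
        if key not in processed_keys:
            apply_update(event)        # must succeed before the key is recorded,
            processed_keys.add(key)    # so a crash between the two just causes a retry
        events.task_done()
```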
Security & Permissions
- Preserve fine-grained ACLs, deletions, and sensitivity constraints so AI answers are always grounded in what users are actually allowed to see.
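In practice this usually means enforcing ACLs at read time as well as at ingestion. A minimal sketch, assuming documents carry `allowed_users`/`allowed_groups` sets and a deletion tombstone (all names are illustrative):

```python
def visible_to(user: str, user_groups: set[str], doc: dict) -> bool:
    """ACL check: visible only if the user or one of their groups is allowed
    and the document has not been deleted."""
    if doc.get("deleted"):                       # deletions win immediately
        return False
    if user in doc.get("allowed_users", set()):
        return True
    return bool(user_groups & set(doc.get("allowed_groups", set())))


def permission_filtered(results: list[dict], user: str, user_groups: set[str]) -> list[dict]:
    """Drop anything the user cannot see before it reaches ranking or an LLM,
    so generated answers are grounded only in permitted content."""
    return [doc for doc in results if visible_to(user, user_groups, doc)]
```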
Cross-Functional Impact
- Partner closely with Search Serving, Product, Platforms, and Security teams to define how enterprise context is exposed to LLMs and agents.
- Continuously improve observability, alerting, and automation to onboard larger customers and more data sources with confidence.
About you:
- 3+ years building production backend or data infrastructure systems (Java, Go, C++, Python, etc.).
- Hands-on experience with distributed systems, data pipelines, queues, and large-scale storage (SQL/NoSQL).
- You think in SLOs, error budgets, failure modes, and correctness guarantees — not just features.
- Comfortable with strict consistency and permission-modeling challenges.
- Prior work on enterprise connectors, search/indexing, information retrieval, or security-sensitive systems is a strong plus.
- Passionate about making AI trustworthy by building the rock-solid data foundation underneath it.
- Power user of LLMs and AI tools in your own workflow.
Location:
- This role is hybrid (4 days a week in one of our SF Bay Area offices).
Compensation & Benefits:
The standard base salary range for this position is $140,000 - $265,000 annually. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for variable compensation, equity, and benefits.
We offer a comprehensive benefits package including competitive compensation, Medical, Vision, and Dental coverage, a generous time-off policy, and the opportunity to contribute to your 401k plan to support your long-term goals. When you join, you'll receive a home office improvement stipend, as well as annual education and wellness stipends to support your growth and wellbeing. We foster a vibrant company culture through regular events, and provide healthy lunches daily to keep you fueled and focused.
We are a diverse bunch of people and we want to continue to attract and retain a diverse range of people in our organization. We're committed to being an inclusive and diverse company. We do not discriminate based on gender, ethnicity, sexual orientation, religion, civil or family status, age, disability, or race.
#LI-HYBRID