Why This Job is Featured on The SaaS Jobs
Applied Machine Learning Engineer roles are increasingly central to SaaS products as “AI features” move from prototypes into core workflows. This listing stands out for its emphasis on production delivery—treating ML as part of the application stack, not a separate research function—and for the explicit collaboration with product and backend engineering to ship user-facing capabilities in a hybrid R&D setting.
From a SaaS career perspective, the work maps to durable patterns: turning messy, real-world data into reliable services; making trade-offs between model quality, latency, and maintainability; and designing systems that can be operated over time. Experience spanning Python backend code, SQL, and cloud architecture also translates well across SaaS companies that run ML as an always-on product capability rather than a batch analytics layer. Exposure to GenAI integration adds relevance as many SaaS teams experiment with new interaction models.
This role fits engineers who prefer end-to-end ownership—moving from exploration to deployment—while staying grounded in software engineering practices. It is well-suited to someone comfortable partnering with product to define impact, and to practitioners who value pragmatic iteration, system design discussions, and operational thinking around monitoring and scalability.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
Employment Type
Full time
What will you do at Sweep?
As an Applied Machine Learning Engineer at Sweep, you’ll play a key role in designing and delivering ML-driven solutions that go directly into production. This is a hands-on, engineering-focused role where you’ll collaborate closely with product and backend teams to bring intelligent features to life.
Build, test, and deploy end-to-end ML systems – from data exploration and prototyping to production-grade deployment
Write high-quality, maintainable Python code, including backend logic (not just model code)
Collaborate with product managers and engineers to align ML solutions with user and business needs
Work with real-world datasets to solve practical problems, focusing on impact and scalability
Participate in system design discussions, including how cloud infrastructure supports scalable ML
Explore and integrate Generative AI capabilities where relevant
Help shape internal ML best practices and contribute to team knowledge-sharing
We are looking for someone who:
Has a strong engineering background – experience as a Software Engineer or Machine Learning Engineer
Is proficient in Python and has written production-level backend code, not just ML scripts
Understands and can write SQL; familiarity with JavaScript is a plus
Has experience working with cloud infrastructure and understands system architecture at scale
Is curious and up to date with Generative AI tools and frameworks (e.g., LangChain, LangGraph, Mastra)
Is practical and impact-driven – focused on shipping and solving real problems
Communicates clearly and works well in cross-functional teams
Has deployed ML models into production environments
(Nice to have) Has experience with MLOps, model monitoring, or experimentation platforms
(Nice to have) Has worked in fast-moving, startup-like environments