Who are we?
Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each of us is responsible for increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.
Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future!
Why this role?
As an Evaluation Frontend Software Engineer, you will play a key role in helping us make modeling decisions based on experimental outcomes for our large language models (LLMs). Your primary focus will be on building tools that enable easy visualization and analysis of model evaluations stored in a central database. You will work closely with cross-functional teams, including researchers and engineers, to surface the insights necessary for the future development of our models.
This role combines expertise in statistics, data science, frontend development and data visualization. If any of these topics sound interesting to you, we encourage you to apply.
Please Note: We have offices in London, Paris, Toronto, San Francisco, and New York, but we also embrace being remote-friendly! There are no restrictions on where you can be located for this role.
As an Evaluation Frontend Software Engineer, you will:
Design tools and visualizations that enable researchers and engineers to compare and analyze hundreds of model evaluations. This includes data visualization tools as well as statistical tools to extract signal from noisy evaluation results.
Develop an understanding of the relative merits and limitations of each of our model evaluations, as well as suggest new facets of model evaluation.
You may be a good fit if you have:
Strong statistical skills and experience evaluating scientific experiments related to data collection and model performance.
Prior experience building front-end visualization systems and dashboards.
Familiarity with ML systems evaluations.
Proficiency in programming languages such as Python and ML frameworks (e.g., PyTorch, TensorFlow, JAX).
Excellent communication skills to collaborate effectively with cross-functional teams and present findings.
One or more papers at top-tier venues (such as NeurIPS, ICML, ICLR, AIStats, MLSys, JMLR, AAAI, Nature, COLING, ACL, EMNLP).
If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply!
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.
Full-Time Employees at Cohere enjoy these Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, offices in Toronto, New York, San Francisco, London and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)