Captions is the leading AI video company—our mission is to empower anyone, anywhere to tell their stories through video. Over 10 million creators and businesses have used Captions to simplify video creation with truly novel and groundbreaking AI capabilities.
We are a rapidly growing team of ambitious, experienced, and devoted engineers, researchers, designers, marketers, and operators based in NYC. As an early member of our team, you’ll have an opportunity to have an outsized impact on our products and our company's culture.
Our Technology
Mirage Announcement: our proprietary omni-modal foundation model
Seeing Voices (technical paper): generating A-roll video from audio with Mirage
Mirage Studio for generating expressive videos at scale
"Captions: For Talking Videos” available in the iOS app store
Press Coverage
Lenny’s Podcast: Interview with Gaurav Misra (CEO)
Latest Fundraise: Series C Announcement
The Information: 50 Most Promising Startups
Fast Company: Next Big Things in Tech
Business Insider: 34 most promising AI startups
TIME: The Best Inventions of 2024
Our Investors
We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Index Ventures, Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Lenny Rachitsky, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, and more.
**Please note that all of our roles require you to be in person at our NYC HQ (located in Union Square).
We do not work with third-party recruiting agencies; please do not contact us.**
About the Role
Captions is seeking an exceptional Member of Technical Staff to advance the state‑of‑the‑art in large‑scale image generation models. You’ll conduct novel research on generative image models for people and storytelling, developing new training techniques and scaling models to billions of parameters and millions of users. As a key member of our AI team, you’ll work at the cutting edge of image generation systems that enable natural, expressive, and high‑fidelity outputs.
Our team has strong expertise in training large‑scale models with demonstrated research and product impact (see our recent whitepaper for details). We’re especially excited to push the use of image synthesis for multimodal video generation, with a focus on photorealistic quality, professional-grade expressivity, and creative iteration. Our models power tools used by millions of creators, and we’re tackling fundamental challenges in generating compelling composition, lighting, and fine‑grained detail across diverse domains.
Key Responsibilities
Research & Architecture Development
Design and implement large‑scale image generation models (transformers, latent diffusion, flow matching, etc.).
Develop new approaches to multimodal conditioning and generation (e.g., audio and video) and controllability (editing, multi-frame consistency, script guidance, etc.).
Research advanced image‑editing and -generation techniques such as content‑preserving edits, multi‑input conditioning, and reference‑based generation.
Establish and validate scaling laws for image diffusion models across resolution and parameter count.
Develop automated evaluation approaches for measuring fidelity and consistency.
Drive rapid experimentation with model architectures, sampling strategies, and training methods.
Validate research directly through product deployment and real user feedback.
Derive insights from data and recommend architectures and training practices that will have a meaningful impact on our products.
Model Training & Optimization
Train and optimize models at massive scale (10s–100s of billions of parameters) across multi‑node GPU clusters.
Push the boundaries of efficiency and hardware utilization to train and deploy models in a cost-effective manner.
Develop sophisticated distributed training approaches using FSDP, DeepSpeed, Megatron‑LM, Triton, and custom CUDA kernels where needed.
Design and implement model‑compression techniques (pruning, distillation, quantization, etc.) for efficient serving.
Create new approaches to memory optimization, gradient checkpointing, and mixed‑precision training.
Research techniques for improving sampling speed (DDIM, PFGM++, SDE‑VE) and training stability at scale.
Conduct systematic empirical studies to benchmark architecture and optimization choices.
Preferred Qualifications
Research Experience
Master’s or PhD in Computer Science, Machine Learning, or a related field, or equivalent practical experience.
Demonstrated experience implementing and improving state‑of‑the‑art generative image models.
Deep expertise in generative modeling approaches (flow matching / diffusion, autoregressive models, VAEs, GANs, etc.).
Strong background in optimization techniques, sampling, and loss‑function design.
Experience with empirical scaling studies and systematic architecture research.
Track record of research contributions at top ML conferences (NeurIPS, CVPR, ICCV, ICML, ICLR).
Technical Expertise
Strong proficiency in modern deep‑learning tooling (PyTorch, CUDA, Triton, FSDP, etc.).
Experience training image diffusion models with billions of parameters.
Familiarity with large language models or multimodal transformers is a plus.
Deep understanding of attention, transformers, latent representations, and modern image‑text alignment techniques.
Expertise in distributed training systems, model parallelism, and high‑throughput inference.
Proven ability to implement and improve complex model architectures end to end.
Engineering Capabilities
Ability to write clean, modular research code that scales from prototype to production.
Strong software‑engineering practices including testing, code review, and CI/CD.
Experience with rapid prototyping and experimental design under tight iteration loops.
Strong analytical skills for debugging model behavior, numerical stability, and performance bottlenecks.
Familiarity with profiling and optimization tools (Nsight, TensorBoard, PyTorch Profiler, etc.).
Track record of bringing research ideas to production and maintaining high code quality in a research environment.
About the Team
You’ll work directly alongside our research and engineering teams in our NYC office. We’ve intentionally built a culture where technical innovation and research excellence are highly valued. Your success will be measured by your contributions to improving our models and advancing the field, not by your ability to navigate politics. We’re a team that loves diving deep into complex technical problems and emerging with practical breakthroughs.
Our Team Values
Open technical discussions and collaboration
Rapid iteration and practical solutions
Deep technical expertise and continuous learning
Direct impact on research and product outcomes
What Sets Us Apart
Opportunity to advance the state‑of‑the‑art in multimodal video generation
Direct impact on products used by millions of creators
Access to massive compute resources and diverse, large‑scale datasets
Environment that values both research excellence and practical impact
Ability to validate research through direct product feedback
Benefits
Comprehensive medical, dental, and vision plans
401K with employer match
Commuter Benefits
Catered lunch multiple days per week
Dinner stipend every night if you're working late and want a bite!
Doordash DashPass subscription
Health & Wellness Perks (Talkspace, Kindbody, One Medical subscription, HealthAdvocate, Teladoc)
Multiple team offsites per year with team events every month
Generous PTO policy
Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Please note that benefits apply to full-time employees only.