By making evidence the heart of security, we help customers stay ahead of ever-changing cyber-attacks.
Corelight is the cybersecurity company that transforms network and cloud activity into evidence: evidence that elite defenders use to proactively hunt for threats, accelerate response to cyber incidents, gain complete network visibility, and build powerful analytics with machine learning and behavioral analysis tools. Easily deployed and available in traditional and SaaS-based formats, Corelight is the fastest-growing Network Detection and Response (NDR) platform in the industry, and the only NDR platform that combines the power of open-source projects with our own technology to deliver Intrusion Detection (IDS), Network Security Monitoring (NSM), and Smart PCAP solutions. We sell to some of the most sensitive, mission-critical large enterprises and government agencies in the world.
As a Lead Cloud Infrastructure Engineer / Site Reliability Engineer (SRE), you will ensure the stability, performance, and security of our Federal region’s cloud platform. You’ll manage infrastructure and operations with a focus on availability, latency, performance optimization, monitoring, incident response, and capacity planning. This role requires maintaining a FedRAMP-compliant environment and working closely with teams to meet the highest standards of security and compliance.
We adopt an "everything as code" approach, leveraging automation and best practices to create an efficient, reliable, and scalable infrastructure. You will be instrumental in maintaining core infrastructure services that are robust, secure, and capable of processing high volumes of data seamlessly.
The successful candidate must be a U.S. citizen and may need to perform work that the U.S. government has specified can only be carried out by a U.S. citizen on U.S. soil.
Responsibilities
- Collaborate with software engineering teams to ensure the reliability, performance, and security of the Federal region’s infrastructure.
- Design, deploy, and scale AI/ML/LLM infrastructure across cloud platforms (AWS, Azure, or GCP), ensuring high reliability and performance.
- Manage and optimize Kubernetes environments (EKS, AKS, GKE) for AI services, data pipelines, and model operations.
- Build and automate end-to-end data and model pipelines for fine-tuning, inference, and RAG workloads using Terraform, Python, and CI/CD tooling.
- Utilize automation tools such as GitOps, CI/CD pipelines, and containerization technologies (Docker, Kubernetes) to streamline tasks across the LLM lifecycle.
- Implement monitoring, observability, and reliability best practices using Prometheus, Grafana, ELK/EFK, Langfuse, and SLI/SLO/SLA frameworks.
- Participate in 24x7 on-call rotations, leading incident response, performance tuning, and cost optimization across the SaaS platform and production workloads.
- Own infrastructure end to end, leading scaling initiatives, deployments, and automation, and providing technical leadership across the team.
Qualifications/Requirements:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, or equivalent experience.
- 8+ years in SRE, DevOps, Platform Engineering, MLOps, or Cloud Infrastructure roles.
- 4+ years of production experience with Kubernetes (EKS, GKE, AKS) and containerization tools like Docker.
- Strong programming skills in Python and proficiency in Bash, Go, or PowerShell.
- Proficiency with Infrastructure-as-Code tools (Terraform, CloudFormation).
- Experience with Kubernetes Operators, Helm, GitOps (ArgoCD, Flux), or Service Mesh (Istio, Linkerd).
- Exposure to serverless compute (AWS Lambda, Azure Functions).
- Experience building or automating data and model pipelines for AI/ML/LLM workloads (e.g., RAG, fine-tuning, inference).
- Strong understanding of observability and monitoring using Prometheus, Grafana, ELK/EFK, Langfuse, or similar platforms.
- Familiarity with SLI/SLO/SLA practices, incident response, and reliability engineering in production environments.
Preferred Qualifications (Nice to Have):
- Cloud certifications (AWS, Azure, or GCP – e.g., Solutions Architect, DevOps Engineer).
- Experience with agentic AI frameworks (CrewAI, LangGraph, AutoGen).
- Background in hybrid or on-prem AI deployments, including OpenShift or Rancher.
- Familiarity with configuration management (Ansible, Chef, Puppet).
- Contributions to open-source AI/ML, DevOps, or platform tooling.
- Experience with multimodal AI or model observability platforms (RAGAS, AgentOps, Langtrace), Distributed Tracing, OpenTelemetry.
- Knowledge of performance tuning, cost efficiency, or capacity planning for AI/LLM infrastructure.
- Understanding of security controls and FedRAMP compliance for cloud environments and workloads.
Additional Requirements
Due to the criteria and security levels required for Corelight’s FedRAMP program, this position requires:
- U.S. citizenship at the time of hire.
- Residence within the contiguous United States.
- Willingness to undergo a Single Scope Background Investigation, if required.
We are proud of our culture and values: driving diversity of background and thought, low-ego results, applied curiosity, and tireless service to our customers and community. Corelight is committed to a geographically dispersed yet connected employee base, with employees working from home and office locations around the world. Fueled by an accelerating revenue stream and investments from top-tier venture capital organizations such as CrowdStrike, Accel, and Insight, we are rapidly expanding our team.
Check us out at www.corelight.com