Our mission is to build multimodal AI to expand human imagination and capabilities
We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
We will deploy these systems to create a new kind of intelligent creative partner that can imagine with us, free from the pressure of having to be creative. It's for all of us whose imaginations have been constrained, who've had to channel vivid dreams through broken words, hoping others will see what we see in our mind's eye. A partner that can help us show, not just tell.
Dream Machine is an early step toward building that.
Try it here
Why you should join us:
- Luma is bringing together the best team in the world to achieve our goal, from researchers to engineers and designers to growth operators
- Luma is not just a lab - we are deeply product focused, and our vision of merging AI models and delightful products is unique in the industry
- We build. We ship. Our early products have been wildly successful
What do we value?
- Expertise in your field
- Urgency, velocity and execution
- Problem-solving mindset
- Clear communication
- Product focus
The SRE role at Luma AI sits with the Infrastructure and Research teams and is responsible for our GPU clusters. Luma runs on thousands of H100 GPUs across multiple providers and clusters for training, data processing, and inference. We need a highly skilled SRE to keep those clusters healthy and to build the monitoring and management tools we need to make full use of them. Successful candidates will want to get deep in the weeds solving performance and maintenance problems in our clusters.
Responsibilities
- Collaborate with researchers and engineers to specify the availability, performance, correctness, and efficiency requirements of the current and future versions of our GPU infrastructure.
- Work with multiple GPU cloud providers to scale up, scale down, maintain, and monitor our thousands of GPUs across many clusters.
- Design and implement solutions to ensure the scalability of our infrastructure to meet rapidly increasing demands.
- Implement and manage monitoring systems to proactively identify issues and anomalies in our production environment.
- Implement fault-tolerant and resilient design patterns to minimize service disruptions.
- Build and maintain automation tools to streamline repetitive tasks and improve system reliability.
- Participate in an on-call rotation to respond to critical incidents and ensure 24/7 system availability alongside other infrastructure developers.
- Develop and maintain service level objectives (SLOs) and service level indicators (SLIs) to measure and ensure system reliability.
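To give a flavor of the SLO/SLI work described above, here is a minimal sketch of an availability SLI and error-budget calculation. All names, targets, and numbers are illustrative assumptions for this example, not Luma's actual objectives:

```python
from dataclasses import dataclass


@dataclass
class SLO:
    """A simple availability SLO over a rolling window."""
    target: float  # e.g. 0.999 means 99.9% of requests must succeed


def availability_sli(successful: int, total: int) -> float:
    """SLI: fraction of requests in the window that succeeded."""
    return successful / total if total else 1.0


def budget_remaining(slo: SLO, successful: int, total: int) -> float:
    """Fraction of the error budget still unspent (negative means the SLO is breached)."""
    budget = 1.0 - slo.target                          # allowed failure rate
    burned = 1.0 - availability_sli(successful, total)  # observed failure rate
    return (budget - burned) / budget


# Illustrative numbers: a 99.9% target with 50 failures out of 100,000 requests
# leaves roughly half the error budget unspent.
slo = SLO(target=0.999)
print(budget_remaining(slo, successful=99_950, total=100_000))
```

In practice, the SLI counters would come from a monitoring system rather than literals, and alerting would typically key off the budget burn rate rather than a single point-in-time value.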
Experience
- 10+ years of proven work experience as a reliability engineer, production engineer, or infrastructure software engineer, or in a similar role at a fast-paced, rapidly scaling company.
- Strong proficiency in GPU cloud infrastructure, including the underlying concepts of scheduling, scaling, cloud storage, networking and security.
- Proficiency in programming/scripting languages.
- Experience with containerization technologies and container orchestration platforms like Kubernetes or equivalent.
- Knowledge of IaC tools such as Terraform, CloudFormation, or equivalent.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Experience with observability tools such as Datadog, Prometheus, Grafana, Splunk, or the ELK stack.
- Knowledge of security best practices in cloud environments.
- Experience as an SRE in the AI/ML space is strongly preferred.