Exploring DevOps at Meta.

Welcome to Unrival.AI's 'Technology Stack' Series, powered by our Unrival GPT Model - Zapp.

Meet Zapp

Zapp is an AI model built by the team of Machine Learning and Deep Learning Engineers at Unrival.AI. Zapp has extensive knowledge of the technologies used by Fortune 500 companies and what they use those technologies for. In this article, Zapp shares its knowledge of DevOps at Meta.

Hey Zapp, can you tell us about DevOps at Meta?
DevOps at Meta.

DevOps at Meta involves a combination of software development and IT operations with the goal of shortening the development life cycle and providing continuous delivery with high software quality. DevOps practices at Meta likely include:

// Automation //
Automating repetitive tasks related to software development, testing, deployment, and infrastructure management.

// Continuous Integration and Continuous Deployment (CI/CD) //
Implementing pipelines that allow for the frequent merging of code changes into a central repository and automated deployment to production.
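
Meta's internal pipelines are proprietary, but the general shape of a CI/CD pipeline is similar everywhere: a series of gated stages that stop at the first failure. The Python sketch below only illustrates that shape; the stage commands and the deploy.py target are hypothetical placeholders, not Meta tooling.

```python
import subprocess
import sys

# Hypothetical stages a CI/CD pipeline might run after a change lands.
# Each entry pairs a stage name with the command that implements it.
STAGES = [
    ("build", ["python", "-m", "compileall", "src"]),
    ("test", ["python", "-m", "pytest", "-q"]),
    ("package", ["python", "-m", "build"]),
    ("deploy", ["python", "deploy.py", "--target", "staging"]),
]

def run_pipeline() -> None:
    """Run each stage in order and abort on the first failure."""
    for name, cmd in STAGES:
        print(f"[pipeline] running stage: {name}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"[pipeline] stage '{name}' failed; stopping")
            sys.exit(result.returncode)
    print("[pipeline] all stages passed")

if __name__ == "__main__":
    run_pipeline()
```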

// Infrastructure as Code (IaC) //
Managing and provisioning infrastructure through code and automation tools, which can include the use of Terraform, Chef, Puppet, or Meta's own internal tools.
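
Whatever the tool, the core IaC idea is the same: desired infrastructure is declared as data in version control, and automation reconciles the live environment toward it. The toy Python sketch below illustrates that plan/apply loop; the services and fields are hypothetical, and this is not any specific tool's API.

```python
# Desired state, as it might be declared in version control.
desired_state = {
    "web": {"instances": 4, "image": "web:2024.05"},
    "cache": {"instances": 2, "image": "memcache:1.6"},
}

# What is actually running right now.
live_state = {
    "web": {"instances": 3, "image": "web:2024.04"},
}

def plan(desired: dict, live: dict) -> list[str]:
    """Return the changes needed to converge live state onto desired state."""
    actions = []
    for service, want in desired.items():
        have = live.get(service)
        if have is None:
            actions.append(f"create {service}: {want}")
        elif have != want:
            actions.append(f"update {service}: {have} -> {want}")
    actions.extend(f"destroy {service}" for service in live if service not in desired)
    return actions

for action in plan(desired_state, live_state):
    print(action)
```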

// Monitoring and Observability //
Instrumenting systems to collect metrics and logs to monitor the health and performance of applications and infrastructure.
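
The collectors and dashboards Meta uses internally are not public, so the Python sketch below only shows the general instrumentation pattern: count requests, record latency, and hand both off to whatever backend stores the metrics. The in-memory collector here is a stand-in for that backend.

```python
import time
from collections import defaultdict

# Stand-in for a metrics backend; a real system would ship these values
# to a time-series store rather than keep them in process memory.
counters = defaultdict(int)
timings = defaultdict(list)

def handle_request(request_id: str) -> None:
    """A handler instrumented with a counter and a latency timer."""
    counters["requests_total"] += 1
    start = time.monotonic()
    try:
        time.sleep(0.01)  # stand-in for real work
    except Exception:
        counters["requests_failed"] += 1
        raise
    finally:
        timings["request_latency_s"].append(time.monotonic() - start)

for i in range(5):
    handle_request(f"req-{i}")

latencies = timings["request_latency_s"]
print(dict(counters))
print(f"avg latency: {sum(latencies) / len(latencies):.4f}s")
```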

// Collaboration and Communication //
Encouraging close collaboration between development, operations, and other stakeholders within the company to ensure smooth and efficient workflows.

Engineers who work on CI/CD infrastructure and Senior DevOps Engineers are examples of Meta employees involved in DevOps-related activities, focusing on orchestrating deployments and conducting infrastructure reviews for scalability and reliability, respectively.
Can you tell me about the tools and technologies they are using for orchestrating deployments?
Tools & Technologies for Orchestrating Deployments.

Meta uses a variety of tools and technologies for orchestrating deployments, many of which are custom-built to handle the scale and complexity of their infrastructure.

Here are some of the key tools and technologies:

// CI/CD Infrastructure //
Meta has developed its own continuous integration and continuous deployment (CI/CD) infrastructure to manage deployments across its vast array of services and applications. This infrastructure is designed to handle the deployment of code to billions of users multiple times a day.

// Chef //
For configuration management, Meta has historically used Chef, which allows them to manage and automate the configuration of their servers.

// Canary Deployment //
Meta uses canary deployment strategies to roll out changes incrementally to a small subset of users before a wider release, which helps in identifying any issues early in the deployment process.
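
The exact gating logic is internal to Meta, but the canary idea itself is simple: grow the new build's traffic share in steps and back out as soon as its error rate crosses a threshold. The Python sketch below illustrates that loop; set_traffic_share and error_rate are hypothetical hooks into a router and a metrics system.

```python
import random

def set_traffic_share(percent: int) -> None:
    """Hypothetical hook that routes `percent` of traffic to the new build."""
    print(f"routing {percent}% of traffic to the canary")

def error_rate() -> float:
    """Hypothetical hook returning the canary's observed error rate."""
    return random.uniform(0.0, 0.02)

def canary_rollout(steps=(1, 5, 25, 50, 100), max_error_rate=0.01) -> bool:
    """Ramp traffic step by step; roll back if errors exceed the threshold."""
    for percent in steps:
        set_traffic_share(percent)
        if error_rate() > max_error_rate:
            print("error threshold exceeded - rolling back")
            set_traffic_share(0)
            return False
    print("canary healthy - rollout complete")
    return True

canary_rollout()
```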

// Autoscaling Systems //
Meta has developed autoscaling systems that dynamically adjust the number of active servers based on the current load, ensuring efficient resource utilization.
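
At its core an autoscaler is a control loop that compares observed load against a per-server target and resizes the pool, within bounds, to close the gap. The Python sketch below shows that calculation with made-up numbers; it is an illustration, not Meta's system.

```python
import math

def desired_servers(current_load_rps: float,
                    target_rps_per_server: float = 500.0,
                    min_servers: int = 2,
                    max_servers: int = 200) -> int:
    """Choose enough servers to keep each under its target load, within bounds."""
    needed = math.ceil(current_load_rps / target_rps_per_server)
    return max(min_servers, min(max_servers, needed))

# Example: as traffic grows, the pool grows with it (capped at max_servers).
for load in (300, 2_400, 18_000, 120_000):
    print(f"{load:>7} rps -> {desired_servers(load)} servers")
```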

// Container Orchestration //
For some of its services, Meta has migrated from bare-metal Kubernetes infrastructure to Amazon EKS, which simplifies the management of containerized applications.
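
For services that do run on Amazon EKS, a small amount of glue code is usually enough to surface cluster state to other systems. The boto3 snippet below is a generic example of listing clusters and their Kubernetes versions; it assumes AWS credentials are configured and is not a description of Meta's tooling.

```python
import boto3

# Assumes AWS credentials and region access are configured.
eks = boto3.client("eks", region_name="us-east-1")

# List every EKS cluster in the region and print its version and status.
for name in eks.list_clusters()["clusters"]:
    cluster = eks.describe_cluster(name=name)["cluster"]
    print(f"{name}: kubernetes {cluster['version']} ({cluster['status']})")
```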

// Custom Deployment Tools //
Meta engineers have built custom deployment tools and systems to manage the specific needs of their infrastructure. These tools are integrated with their internal systems and are designed to ensure safe and reliable deployments.

// Bots and Automation //
Meta uses bots and automation to manage various aspects of the deployment process, from code merges to task coordination.
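
A common example of this kind of automation is a bot that lands a change only after every required check reports green. The Python sketch below shows that gate; the check names, the change ID, and the land hook are hypothetical.

```python
# Hypothetical CI check results a merge bot might poll.
required_checks = {"build": "passed", "unit-tests": "passed", "lint": "passed"}

def land(change_id: str) -> None:
    """Hypothetical hook that merges the change into the main branch."""
    print(f"landing change {change_id}")

def try_to_land(change_id: str, checks: dict[str, str]) -> bool:
    """Land the change only if every required check has passed."""
    failing = [name for name, status in checks.items() if status != "passed"]
    if failing:
        print(f"holding {change_id}: waiting on {', '.join(failing)}")
        return False
    land(change_id)
    return True

try_to_land("D123456", required_checks)
```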

// Monitoring and Observability //
To ensure the health of deployments, Meta has developed internal tools for monitoring and observability, which are integrated with their deployment processes.

// Conveyor3 //
Conveyor3 was one of the first projects at Meta written entirely in Rust; it re-architected Meta's continuous deployment system to manage the deployment of backend services, ML models, and mobile apps.

The specific details and the current state of these tools and technologies are proprietary to Meta and evolve over time as they continue to innovate and scale their infrastructure.
Can you expand more on how they approach monitoring and observability?
Monitoring and Observability at Meta.

Meta approaches monitoring and observability with a focus on scalability, reliability, and efficiency, given the vast scale of its infrastructure and the need to support billions of users and millions of machines.

Here are some key aspects of how Meta handles monitoring and observability:

Distributed Coordination: Meta's Zeus Team, for example, provides distributed coordination to ensure that the infrastructure can support the massive scale of operations. This involves managing and orchestrating a large number of machines and services in a coherent manner.
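
Coordination services typically expose primitives such as locks, leases, and leader election, which is what lets large fleets of machines agree on who does what. The Python sketch below is a toy, single-process illustration of lease-based leader election; it is not Zeus, and all names are made up.

```python
import time

class LeaseStore:
    """Toy in-memory stand-in for a coordination service's lease record."""

    def __init__(self, ttl_s: float = 2.0):
        self.ttl_s = ttl_s
        self.holder = None
        self.expires_at = 0.0

    def try_acquire(self, node: str, now: float) -> bool:
        # The lease is free if unheld or expired; the first claimant wins it.
        if self.holder is None or now >= self.expires_at:
            self.holder, self.expires_at = node, now + self.ttl_s
        return self.holder == node

store = LeaseStore()
now = time.monotonic()
for node in ("node-a", "node-b", "node-c"):
    role = "leader" if store.try_acquire(node, now) else "follower"
    print(f"{node}: {role}")
```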

AI Experimentation and Observability: Product Managers like [redacted] are responsible for owning AI Experimentation and Observability tools, with a particular emphasis on Generative AI models. This involves building product strategies and executing them with engineering and design partners, as well as diving into logs and codebases to ensure the tools are functioning as intended.

Data Infrastructure Monitoring: Production Engineers like [redacted] focus on instrumenting monitoring, observability, and Service Level Objectives (SLOs) for critical services within Meta's data infrastructure. This includes developing internal tools for debugging and tracing, as well as creating tooling and training for service ownership.
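
A concrete piece of SLO work is tracking how much of the error budget a service has spent in the current window. The Python snippet below shows that arithmetic with made-up numbers; it is a generic illustration, not an internal Meta tool.

```python
def error_budget_remaining(slo_target: float,
                           total_requests: int,
                           failed_requests: int) -> float:
    """Fraction of the error budget still unspent this window (negative if blown)."""
    allowed_failures = (1.0 - slo_target) * total_requests
    return 1.0 - (failed_requests / allowed_failures)

# Example: a 99.9% availability SLO over 10M requests allows 10,000 failures.
remaining = error_budget_remaining(0.999, 10_000_000, 4_200)
print(f"error budget remaining: {remaining:.1%}")  # 58.0%
```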

Operational Efficiency and Privacy: Data Scientists like [redacted] work on enhancing the operational efficiency of processes such as privacy reviews by using machine learning techniques like topic modeling. This helps in identifying bottlenecks and reducing turnaround times, which is crucial for maintaining user privacy and trust.
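
Topic modeling groups free-text requests into recurring themes, which is how bottlenecks become visible. The scikit-learn sketch below runs LDA over a few toy documents to show the technique in general; the documents are invented and this is not Meta's internal pipeline.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for free-text privacy review requests.
docs = [
    "request to log user location data for ads measurement",
    "new ads measurement pipeline needs retention review",
    "feature stores user location history for recommendations",
    "retention policy change for messaging attachments",
]

# Bag-of-words features, then a 2-topic LDA model.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words for each discovered topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```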

Infrastructure Leadership Recruiting: The Infrastructure Leadership Recruiting Team, which includes professionals like [redacted], is responsible for hiring leaders across various domains such as Security, Privacy, Network, and Production Engineering. These leaders are tasked with identifying and building advancements in their respective areas, which includes monitoring and observability.

Data Engineering: Data Engineers like [redacted] develop tools like the Training Capacity Dashboard to increase GPU utilization through optimum scheduling, which is part of the observability of AI infrastructure performance.
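
As a toy example of the kind of number such a dashboard aggregates, the snippet below computes fleet-wide GPU utilization from hypothetical per-job allocations; the job names and figures are invented.

```python
# Hypothetical snapshot: (job name, GPUs allocated, GPUs actually busy).
jobs = [
    ("ranker-train", 512, 470),
    ("llm-pretrain", 2048, 1990),
    ("vision-finetune", 128, 64),
]

total_allocated = sum(alloc for _, alloc, _ in jobs)
total_busy = sum(busy for _, _, busy in jobs)

for name, alloc, busy in jobs:
    print(f"{name:<16} {busy}/{alloc} GPUs busy ({busy / alloc:.0%})")

print(f"fleet utilization: {total_busy / total_allocated:.1%}")
```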

Production Engineering: Engineers like [redacted] work on automating the setup of infrastructure and ensuring that monitoring of services in cloud environments like AWS is integrated with Meta's internal systems for better visibility and troubleshooting.
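
That kind of integration usually starts with pulling cloud-native metrics so they can be forwarded into internal systems. The boto3 snippet below is a generic example of reading a CloudWatch metric for one EC2 instance; it assumes configured AWS credentials, and the instance ID is a placeholder, not a real resource.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Assumes AWS credentials are configured; the instance ID is a placeholder.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

# Average CPU utilization for the last hour, in 5-minute buckets.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```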

Overall, Meta's approach to monitoring and observability involves a combination of advanced tooling, AI-driven insights, and a strong focus on operational efficiency and privacy. The company leverages its engineering talent to build and maintain robust systems that can handle the complexity and scale of its global operations.

Got more questions? Join our waitlist
