From custom chips to a secret, cutting-edge model, Zuckerberg is restructuring Meta from the inside out and betting it all on the era of agentic AI.
==================
By: myai4.com | April 2026
AI Pods, New Titles, and a Restructured Workforce
Meta is not just building AI products; it is reorganizing itself around AI. The company has launched an internal experiment inside its Reality Labs division, restructuring a team of 1,000 employees into "AI-native pods" in which every person holds one of three roles: AI Builder, AI Pod Lead, or AI Org Lead. The restructuring formalizes the measurement of AI-assisted work and is designed to make the entire organization fundamentally more productive with AI.
The transformation comes alongside a wave of layoffs. Meta has eliminated hundreds of positions across operations, sales, recruiting, and Reality Labs, part of a broader pattern since 2022 in which the company has reduced its workforce by approximately 25,000 roles. The official rationale has evolved over time, but the current framing is consistent: AI has changed how much work each employee can produce, making certain roles redundant as a matter of technological progress, not business crisis.
Avocado: The Proprietary Model That Could Change Everything
Perhaps the most consequential strategic shift at Meta involves its AI model strategy. After Llama 4 failed to meet internal expectations, Meta is reportedly developing a new frontier model codenamed "Avocado." Unlike the open-source Llama family, Avocado is being built as a proprietary, closed model designed to compete head-to-head with GPT and Claude.
The shift away from open source represents a significant reversal for a company that has long championed open AI development. Multiple factors reportedly drove the decision: frustration that DeepSeek used Llama code in its own model, and pressure from newly recruited senior engineers who believe a proprietary architecture is necessary to build a truly frontier-level model.
Custom Chips and a $130 Billion Infrastructure Bet
To support its AI ambitions, Meta unveiled four new custom processors developed with Broadcom: the MTIA 300, 400, 450, and 500, designed for inference, training, and generative AI workloads across Facebook, Instagram, and WhatsApp. The first large-scale shipments are expected in the second half of 2026, and the chips are a key part of Meta's long-term strategy to reduce its dependence on Nvidia and control its own compute stack.
Meta also acquired a 49% stake in Scale AI, securing a critical pipeline for the data labeling needed to train its next-generation models including Llama 5. And under its Meta Compute program, the company has committed between $115 billion and $135 billion toward AI data center infrastructure, including on-site power generation, making it one of the largest single AI infrastructure investments in the world.
The Vision: Personal Superintelligence
Beyond products and infrastructure, Meta's AI chief Alexandr Wang offered a sweeping vision at the India AI Impact Summit: 2026 as the year of "recursive self-improvement," in which current AI models are used as tools to accelerate the development of the next generation. The destination, in Meta's framing, is "personal superintelligence": AI agents that intimately know each user's health, goals, and relationships and help them get the most out of their daily lives.
Bottom line: Meta is making the largest structural bet of any company in this report. By building its own chips, developing a proprietary frontier model, restructuring its entire workforce around AI-native principles, and committing over $100 billion to infrastructure, Zuckerberg is attempting to transform the world's largest social media company into one of the defining AI powers of the next decade.
*Note: Article produced with AI assistance (Claude by Anthropic)
