Meta is moving beyond the screen and the headset, attempting to build the physical vessels for what it calls “personal superintelligence.” The company is quietly assembling a dedicated hardware team within its Meta Superintelligence Labs (MSL), tapping veteran engineer Rui Xu to lead the effort. Although Meta is already an established player in wearables through Reality Labs, this new push suggests the company is exploring entirely new categories of AI-native devices designed to host a persistent, all-seeing digital agent.
The Architect of the “Constellation”
The hire of Rui Xu is a calculated move to bring deep, multi-disciplinary hardware expertise into the heart of Meta’s AI division. Xu arrives with a pedigree that spans robotics, smartphones, and AI agents. He most recently led hardware at Dreamer, an AI agent startup whose founding team Meta acqui-hired last month. Before that, Xu served as COO of K-Scale, a robotics startup, and held leadership roles at ByteDance, Xiaomi, Lenovo, and Tencent.
This expertise aligns with the vision outlined by Alexandr Wang, Meta’s Chief AI Officer and head of MSL. Wang has described a future where AI isn’t just a tool you open on a phone, but a personalized agent that exists across a “constellation” of devices. The goal is an agent that is always on, seeing what the user sees and hearing what they hear, effectively weaving itself into the fabric of daily life.
Navigating an Internal Reorganization
The creation of MSL represents a massive consolidation of Meta’s AI power. Under the direction of Mark Zuckerberg and Alexandr Wang, MSL now unifies the company’s foundation-model, product, and FAIR (Fundamental AI Research) teams. This reorganization was designed to move Meta away from the “vague assertions” of its rivals and toward tangible tools for individual empowerment.
There is a delicate overlap between this new division and the existing Reality Labs, which handles VR headsets and smart glasses. While MSL is now building its own hardware team, the two divisions are collaborating closely. Some Reality Labs engineers have already transitioned to MSL to help prototype software on existing hardware, suggesting that while the vision for “superintelligence” devices is new, Meta is leveraging its existing wearable infrastructure to get there faster.
This push comes as Meta enters a high-stakes race against competitors like OpenAI; both companies are vying to create the first truly AI-native personal device that can realistically challenge the smartphone’s dominance.
What exactly is Meta Superintelligence Labs (MSL)?
MSL is a unified organization created by Mark Zuckerberg to consolidate all of Meta’s AI efforts—including research (FAIR), product development, and foundational model building—under one umbrella. Led by Chief AI Officer Alexandr Wang and Nat Friedman, its primary mission is to develop “personal superintelligence” for users.
Why is Rui Xu’s background significant for this role?
Xu brings a rare combination of experience in humanoid robotics (K-Scale) and mass-market consumer electronics (ByteDance, Xiaomi). This suggests Meta is looking for hardware that can do more than just display information; the company is likely pursuing devices with high sensory integration or robotic capabilities to support an “always on” AI agent.
How does this differ from Meta’s existing smart glasses?
While smart glasses are a part of the “constellation,” the creation of a dedicated hardware team within MSL indicates Meta is exploring device types beyond glasses. The goal is to create a seamless ecosystem of multiple devices that allow a single, personalized AI agent to follow a user throughout their day.
What are the implications of Meta’s massive AI spending?
The projected spend of up to $135 billion in 2026 suggests that Meta believes hardware-software integration is the only way to win the AI race. By controlling both the “brain” (the superintelligence models) and the “body” (the hardware devices), Meta aims to avoid depending on other platforms, such as Apple or Google, for how users interact with AI.
As AI moves from the chatbox into the physical world, will users be comfortable with a “constellation” of devices that see and hear everything they do?
