Business

Google in talks with Marvell to build new AI chips

by Chief Editor April 19, 2026

The Great Silicon Shift: Why Google’s Move Toward Custom AI Chips Changes Everything

For years, the AI gold rush has had one primary arms dealer: Nvidia. Their GPUs have been the undisputed engine behind the explosion of Large Language Models (LLMs). But the wind is shifting. Google’s recent moves to collaborate with Marvell Technology on specialized AI chips signal a broader, more strategic trend: the era of “General Purpose” AI hardware is ending, and the era of bespoke silicon is beginning.

When a giant like Alphabet decides to build its own memory processing units (MPUs) and next-generation Tensor Processing Units (TPUs), it isn’t just about saving a few dollars on hardware. It’s about solving a fundamental physics problem in AI: the “Memory Wall.”

Did you know? Google was one of the first companies to anticipate the AI boom, developing its first TPU as early as 2015 to accelerate the workloads of TensorFlow, its open-source machine learning framework.

Breaking the Memory Wall: The Rise of the MPU

To understand why Google is developing a memory processing unit (MPU), you have to understand the bottleneck. In traditional computing, the processor (the brain) and the memory (the storage) are separate. Data must travel back and forth between them constantly.


As AI models grow to trillions of parameters, this “commute” becomes a massive energy drain and a speed killer. This is known as the von Neumann bottleneck. By integrating processing capabilities directly into the memory architecture, Google aims to dramatically reduce both latency and power consumption.

This trend toward near-data processing is where the industry is headed. We are seeing a shift from “compute-centric” to “data-centric” architecture. If Google can move the computation to where the data lives, their AI responses will be faster, cheaper, and more sustainable.
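The bottleneck described above is often reasoned about with the roofline model: achievable throughput is capped either by raw compute or by how fast data can be moved from memory. The sketch below illustrates why token-by-token LLM inference tends to be memory-bound while large-batch training is compute-bound. The hardware figures are rough public ballpark numbers used only for intuition, not a benchmark of any specific chip.

```python
# Roofline-style sketch of the "memory wall". Hardware numbers are rough
# ballpark figures for an H100-class accelerator, used only for intuition.

def attainable_tflops(arithmetic_intensity, peak_tflops, bandwidth_tb_s):
    """Achievable throughput is capped by compute OR by memory bandwidth.

    arithmetic_intensity: FLOPs performed per byte moved from memory.
    (FLOPs/byte) x (TB/s) = TFLOP/s, so the units line up directly.
    """
    memory_bound_cap = arithmetic_intensity * bandwidth_tb_s
    return min(peak_tflops, memory_bound_cap)

PEAK, BW = 1000.0, 3.35  # ~1000 TFLOP/s dense FP16, ~3.35 TB/s HBM

# Large-batch training reuses each weight many times -> high intensity,
# so the compute ceiling is the limit:
print(attainable_tflops(300, PEAK, BW))  # capped at 1000.0 by compute

# Token-by-token inference reads every weight once per token -> low
# intensity, so memory bandwidth is the limit:
print(attainable_tflops(2, PEAK, BW))    # capped at 6.7 by bandwidth
```

Moving computation closer to the data, as an MPU aims to do, effectively raises the bandwidth term, which is exactly the lever that matters for the memory-bound case.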

The Strategic Play Against Nvidia

Nvidia’s H100s are incredible, but they are designed to be versatile. They can handle a wide range of tasks, which makes them slightly less efficient than a chip designed for one specific purpose. This is where Google’s TPUs gain an edge.

By designing chips specifically for inference—the process of actually running a trained AI model to provide an answer—Google can optimize for cost-per-query. For a company serving billions of Search and Gemini users, a 10% increase in efficiency translates to billions of dollars in saved electricity and infrastructure costs.
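The back-of-envelope behind that claim is simple to reproduce. Every number below is a hypothetical round figure chosen for illustration, not a Google disclosure; the point is only that at billions of queries per day, small per-query efficiency gains compound into very large absolute sums.

```python
# Back-of-envelope for the cost-per-query argument. All inputs are
# hypothetical round figures, not disclosed Google numbers.
queries_per_year = 10e9 * 365  # assume ~10B AI-assisted queries per day
cost_per_query = 0.003         # assumed blended serving cost in USD
                               # (energy + amortized hardware per query)

annual_serving_cost = queries_per_year * cost_per_query  # ~ $11B/year
savings_from_10pct = 0.10 * annual_serving_cost          # ~ $1.1B/year

print(f"annual serving cost ~ ${annual_serving_cost / 1e9:.1f}B")
print(f"10% efficiency gain ~ ${savings_from_10pct / 1e9:.2f}B saved/year")
```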

Pro Tip: For tech leaders and investors, keep a close eye on “Inference Costs.” While the world focused on the cost of training AI in 2023, the real profit margins in 2025 and beyond will be decided by who can run those models most efficiently.

The Future of AI Inference: Beyond the Data Center

The collaboration with Marvell suggests that the future of AI isn’t just about bigger chips, but smarter ones. We are entering the “Inference Era.” While training a model requires massive clusters of GPUs, running that model (inference) can happen anywhere—from a massive cloud server to a smartphone.

We can expect three major trends to emerge from this shift:

  • Vertical Integration: Like Apple did with its M-series chips, Google is integrating the software (Gemini), the platform (Android/Chrome), and the hardware (TPUs). This creates a “closed loop” of optimization.
  • Energy-Efficient AI: As data centers face scrutiny over power consumption, specialized chips that do more with less wattage will become the only viable way to scale.
  • Domain-Specific Accelerators: We will likely see chips optimized for specific AI tasks—some for image generation, some for logical reasoning, and others for real-time translation.

Real-world examples are already appearing. Amazon is developing its Trainium and Inferentia chips to reduce its reliance on external vendors, mirroring Google’s strategy to protect its margins and maintain control over its supply chain.

How This Affects the Cloud Landscape

For the average business, this hardware war is actually a win. When Google, AWS, and Microsoft compete on silicon, the cost of cloud AI services drops.


Custom silicon allows Google Cloud Platform (GCP) to offer more competitive pricing for AI workloads. If they can provide the same performance as an Nvidia-based cluster but at 60% of the cost, they can aggressively capture market share from other cloud providers.

You can read more about how cloud infrastructure is evolving to support these massive workloads in our deep-dive on data center architecture.

Frequently Asked Questions

What is a TPU?
A Tensor Processing Unit (TPU) is an AI-accelerator application-specific integrated circuit (ASIC) developed by Google specifically to accelerate machine learning workloads.

Why can’t Google just use Nvidia GPUs?
While Nvidia GPUs are powerful, they are general-purpose. Custom silicon allows Google to optimize for their specific software architecture, reducing energy use and increasing speed.

What is the difference between AI Training and Inference?
Training is the process of “teaching” a model using massive datasets. Inference is the process of the model using that knowledge to answer a user’s prompt in real-time.

What does Marvell Technology bring to the table?
Marvell specializes in data infrastructure and semiconductor design, providing the expertise needed to bridge the gap between Google’s architectural vision and actual physical chip production.

Join the Conversation

Do you think custom silicon will eventually render general-purpose GPUs obsolete, or will Nvidia maintain its crown? Let us know your thoughts in the comments below!

Subscribe for More AI Insights

Tech

MatX Raises $500M to Rival Nvidia in AI Chip Market

by Chief Editor February 25, 2026

MatX Secures $500M to Challenge Nvidia’s AI Dominance

AI chip startup MatX has just landed a significant $500 million Series B funding round, positioning itself as a serious contender to Nvidia in the rapidly evolving landscape of artificial intelligence hardware. The investment, led by Jane Street and Situational Awareness – the latter founded by former OpenAI researcher Leopold Aschenbrenner – signals strong confidence in MatX’s potential to disrupt the market.

The Race for LLM Supremacy

MatX, founded by former Google hardware engineers Reiner Pope and Mike Gunter, aims to create processors that dramatically outperform Nvidia’s GPUs in training Large Language Models (LLMs). The company’s stated goal is a 10x improvement in performance. This ambition comes as demand for powerful AI chips continues to surge, fueled by the proliferation of generative AI applications.

The funding will be used to manufacture chips with TSMC, with initial shipments planned for 2027. Pope previously led AI software development for Google’s Tensor Processing Units (TPUs), and Gunter was a lead designer of TPU hardware, giving MatX a strong foundation of expertise.

Valuation and Competitive Landscape

While MatX hasn’t disclosed its current valuation, comparisons are being drawn to Etched, a competitor that recently raised $500 million at a $5 billion valuation. MatX’s Series A round in 2024 valued the company at over $300 million, according to previous reports. This rapid increase in funding and valuation reflects the intense investor interest in the AI chip sector.

Other investors in this latest round include Marvell Technology, NFDG, Spark Capital, and Stripe co-founders Patrick Collison and John Collison.

Why This Matters: The Growing Need for Specialized AI Hardware

Nvidia currently dominates the AI chip market, but its GPUs weren’t specifically designed for the unique demands of LLM training. This creates an opportunity for startups like MatX and Etched to develop specialized hardware that can deliver superior performance and efficiency. The demand for more powerful and efficient AI chips is driven by several factors:

  • Increasing Model Complexity: LLMs are growing larger and more complex, requiring exponentially more computing power.
  • Rising Training Costs: Training these models is incredibly expensive, making efficiency a critical concern.
  • Edge Computing: There’s a growing need to run AI models on edge devices (like smartphones and autonomous vehicles), which requires chips with low power consumption.
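The "increasing model complexity" point can be made concrete with a widely cited approximation for dense transformers: training compute is roughly 6 × parameters × training tokens, in FLOPs. The models below are hypothetical, chosen loosely in the range of public LLM disclosures.

```python
# Widely cited approximation for dense transformers:
# training compute ~ 6 x parameters x tokens (in FLOPs).

def training_flops(params: float, tokens: float) -> float:
    """Rough total FLOPs to train a dense transformer once."""
    return 6 * params * tokens

# Hypothetical models for illustration:
small = training_flops(7e9, 2e12)    # 7B params on 2T tokens  -> ~8.4e22
large = training_flops(70e9, 15e12)  # 70B params on 15T tokens -> ~6.3e24

print(f"{large / small:.0f}x more compute for the larger model")  # 75x
```

A 10x jump in parameters paired with a larger training corpus yields a 75x jump in compute, which is why training cost and efficiency dominate the economics discussed here.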

The Role of Former OpenAI and Google Talent

The involvement of individuals with backgrounds at OpenAI and Google lends significant credibility to MatX. Leopold Aschenbrenner’s Situational Awareness brings a clear understanding of the challenges and opportunities in the AI space, while the founders’ experience with Google’s TPUs provides deep expertise in AI hardware development.

Looking Ahead: Potential Future Trends

The success of MatX and similar startups could lead to several key trends:

  • Increased Competition: More companies will enter the AI chip market, driving innovation and lowering prices.
  • Hardware Specialization: We’ll see a proliferation of chips designed for specific AI tasks, rather than general-purpose GPUs.
  • Rise of Chiplet Designs: Chiplet designs, where multiple smaller chips are combined into a single package, could grow more common, offering greater flexibility and scalability.
  • Focus on Energy Efficiency: Reducing the power consumption of AI chips will be crucial for both cost savings and environmental sustainability.

Frequently Asked Questions

What is an LLM?

LLM stands for Large Language Model. These are AI models trained on massive amounts of text data, capable of generating human-quality text, translating languages, and answering questions.

Who are the founders of MatX?

MatX was founded by Reiner Pope and Mike Gunter, both former Google hardware engineers.

What is TSMC?

TSMC (Taiwan Semiconductor Manufacturing Company) is the world’s largest dedicated independent semiconductor foundry.

When will MatX chips be available?

MatX plans to start shipping its chips in 2027.

What is a TPU?

TPU stands for Tensor Processing Unit, a custom-developed AI accelerator for machine learning, created by Google.

Did you know? The AI chip market is projected to reach hundreds of billions of dollars in the coming years, making it one of the fastest-growing segments of the semiconductor industry.

Pro Tip: Keep an eye on companies developing innovative chip architectures, as they are likely to be at the forefront of the AI revolution.

Want to learn more about the latest advancements in AI hardware? Explore our other articles or subscribe to our newsletter for regular updates.

