Newsy Today
Tech

OpenAI Spark: New Fast Coding Model Powered by Cerebras AI Chip

by Chief Editor, February 12, 2026

OpenAI and Cerebras: A New Era of Speed in AI Coding

OpenAI has launched GPT-5.3-Codex-Spark, a lighter, faster version of its coding tool, powered by a dedicated chip from Cerebras Systems. This marks a significant shift in OpenAI’s infrastructure, moving beyond its traditional reliance on Nvidia and signaling a new focus on ultra-low latency for real-time AI applications.

The Need for Speed: Why Low Latency Matters

For coding, speed isn’t just about convenience; it’s about workflow. Traditional AI models can introduce noticeable delays, disrupting the flow of a developer’s thought process. Codex-Spark aims to eliminate this friction, enabling “rapid iteration” and real-time collaboration. OpenAI emphasizes that this new model is designed for daily productivity, focusing on prototyping rather than complex, long-running tasks.

A $10 Billion Partnership: OpenAI and Cerebras Deepen Ties

The collaboration between OpenAI and Cerebras isn’t new. A multi-year agreement, reportedly worth over $10 billion, was announced last month. OpenAI stated that integrating Cerebras’ technology is “all about making our AI respond much faster.” Spark is described as the “first milestone” in this partnership, utilizing Cerebras’ Wafer Scale Engine 3 (WSE-3), a megachip boasting 4 trillion transistors.

Beyond Nvidia: Diversifying AI Infrastructure

While GPUs remain foundational to OpenAI’s operations, the move to incorporate Cerebras chips represents a strategic diversification. OpenAI acknowledges that Cerebras excels in workflows demanding extremely low latency, complementing the capabilities of GPUs. This suggests a future where different AI tasks are handled by specialized hardware optimized for specific needs.

Cerebras: From Startup to AI Powerhouse

Cerebras Systems, founded over a decade ago, has been gaining prominence in the AI industry. The company recently raised $1 billion in fresh capital, achieving a valuation of $23 billion. This funding underscores the growing demand for alternative AI hardware solutions.

What Does This Signify for Developers?

Currently, GPT-5.3-Codex-Spark is available in a research preview for ChatGPT Pro users within the Codex app. Early reports suggest a 15x speed increase in code generation compared to previous models. This faster response time promises a more fluid and efficient coding experience, allowing developers to experiment and refine their work more quickly.
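To put the reported 15x figure in perspective, a quick back-of-envelope calculation shows what it means for everyday waiting time. The 60 tokens-per-second baseline below is a hypothetical number chosen for illustration, not a published benchmark for either model:

```python
# Back-of-envelope: what a 15x generation speedup means in practice.
# The 60 tokens/s baseline is an assumed figure for illustration only.
baseline_tps = 60          # assumed tokens/second for the standard model
speedup = 15               # speedup reported for Codex-Spark
spark_tps = baseline_tps * speedup

snippet_tokens = 450       # a mid-sized code completion
baseline_wait = snippet_tokens / baseline_tps   # seconds at baseline speed
spark_wait = snippet_tokens / spark_tps         # seconds at 15x speed

print(f"baseline: {baseline_wait:.1f}s, Spark: {spark_wait:.2f}s")
```

Under these assumptions, a completion that took seven to eight seconds arrives in about half a second, which is the difference between waiting on the model and staying in flow.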

The Future of Real-Time AI

Codex-Spark is presented as the first step towards a dual-mode Codex: one for real-time collaboration and rapid iteration, and another for deeper reasoning and long-running tasks. This suggests a future where AI tools adapt to the specific demands of the user, offering both speed and depth as needed.

Sean Lie, CTO and co-founder of Cerebras, expressed excitement about the partnership, stating that it will unlock “new interaction patterns, new use cases, and a fundamentally different model experience.”

FAQ

What is GPT-5.3-Codex-Spark?

It’s a lightweight version of OpenAI’s coding tool, designed for faster inference and real-time collaboration.

Who is Cerebras Systems?

Cerebras Systems is a chipmaker specializing in low-latency AI workloads, known for its Wafer Scale Engine chips.

What is the benefit of lower latency in AI coding?

Lower latency means faster response times, enabling a more fluid and efficient coding experience.

Is OpenAI moving away from Nvidia?

No, OpenAI states that GPUs remain foundational, but Cerebras complements their infrastructure by excelling at specific tasks.

Who can access GPT-5.3-Codex-Spark?

Currently, it’s available in a research preview for ChatGPT Pro users in the Codex app.


Cerebras Systems Valued at $23B in New Funding Round, Eyes 2026 IPO

by Chief Editor, February 7, 2026

Cerebras Systems: A $23 Billion Bet on the Future of AI Compute

AI chipmaker Cerebras Systems has secured $1 billion in fresh funding, catapulting its valuation to $23 billion – a dramatic increase from the $8.1 billion it reached just six months prior. This surge in investment underscores the intensifying competition in the AI hardware landscape and highlights Cerebras’ unique approach to tackling the computational demands of artificial intelligence.

The Scale of Innovation: Wafer Scale Engines

What distinguishes Cerebras isn’t simply building faster chips, but building them differently. Their Wafer Scale Engine (WSE) is a massive processor, approximately 8.5 inches across, containing a staggering 4 trillion transistors. Unlike traditional chips cut from silicon wafers, Cerebras utilizes nearly the entire wafer, maximizing computational density. This architecture boasts 900,000 specialized cores, enabling parallel processing and minimizing data transfer bottlenecks that plague conventional GPU clusters.
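The article's figures make the scale gap easy to quantify. The comparison below uses the WSE-3 numbers stated above; the roughly 80 billion transistor count for a flagship datacenter GPU is the publicly reported figure for Nvidia's H100, used here only as a rough yardstick:

```python
# Scale comparison between Cerebras' WSE-3 (figures from the article)
# and a typical flagship datacenter GPU. The ~80 billion transistor
# count is the publicly reported figure for Nvidia's H100, used here
# only for a rough sense of scale.
wse3_transistors = 4_000_000_000_000   # 4 trillion
wse3_cores = 900_000
gpu_transistors = 80_000_000_000       # ~80 billion (reported H100 figure)

ratio = wse3_transistors / gpu_transistors
per_core = wse3_transistors // wse3_cores

print(f"WSE-3 holds ~{ratio:.0f}x the transistors of a flagship GPU")
print(f"~{per_core:,} transistors per WSE-3 core")
```

By this rough measure a single WSE-3 carries on the order of fifty GPUs' worth of transistors on one slab of silicon, which is why Cerebras can keep so much computation and memory traffic on-chip.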

OpenAI Partnership and the $10 Billion Deal

Cerebras is gaining traction with major players in the AI space. A recent multi-year agreement, worth over $10 billion, will see the company provide 750 megawatts of computing power to OpenAI through 2028. This partnership, fueled by the need for faster AI response times, demonstrates the growing demand for specialized AI infrastructure. Notably, OpenAI CEO Sam Altman is also an investor in Cerebras.

Benchmark Capital’s Long-Term Vision

The latest funding round saw significant participation from Benchmark Capital, an early investor that first backed Cerebras in 2016 with a $27 million Series A investment. Benchmark’s continued commitment is notable: the firm even created two dedicated investment vehicles, under the ‘Benchmark Infrastructure’ name, specifically to support Cerebras. This demonstrates a long-term belief in the company’s potential.

Navigating Regulatory Hurdles and the Path to IPO

Cerebras’ journey hasn’t been without challenges. A significant portion of its revenue was previously tied to G42, a UAE-based AI firm. This relationship triggered a national security review by the Committee on Foreign Investment in the United States, delaying the company’s initial public offering (IPO) plans. With G42 removed from its investor list, Cerebras is now preparing for a public debut in the second quarter of 2026.

The AI Chip War: Cerebras vs. Nvidia

Cerebras directly positions itself as a competitor to Nvidia, claiming its systems offer superior performance for AI workloads. The company’s unique architecture aims to overcome the limitations of traditional GPU-based systems, particularly in handling large-scale AI models. The competition between these companies is driving innovation and pushing the boundaries of AI compute.

Pro Tip: Understanding Wafer Scale Integration

Wafer Scale Integration (WSI) is a complex manufacturing process. Although offering significant advantages in terms of transistor density and performance, it also presents challenges in yield and defect management. Cerebras’ success hinges on its ability to consistently produce high-quality WSEs.

FAQ

Q: What is Cerebras’ primary product?
A: Cerebras’ flagship product is the Wafer Scale Engine (WSE), a massive AI processor designed for high-performance computing.

Q: Who is Cerebras’ biggest customer?
A: OpenAI, through a multi-year agreement worth over $10 billion.

Q: When is Cerebras expected to go public?
A: Cerebras is preparing for an IPO in the second quarter of 2026.

Q: What makes Cerebras different from Nvidia?
A: Cerebras uses a Wafer Scale Engine, utilizing almost an entire silicon wafer for its processor, while Nvidia uses traditional chip designs.

Q: What is Benchmark Capital’s role in Cerebras’ success?
A: Benchmark Capital was an early investor and continues to support Cerebras, even creating dedicated investment vehicles for the company.

Did you know? Cerebras’ WSE contains more transistors than the entire first generation of microprocessors combined.



OpenAI & Cerebras: $10B Deal for AI Compute Power Through 2028

by Chief Editor, January 14, 2026

OpenAI’s $10 Billion Bet on Cerebras: The Dawn of Real-Time AI?

OpenAI’s recent agreement with Cerebras, securing a massive $10 billion+ in compute power through 2028, isn’t just a big deal – it’s a signal flare. It points to a future where AI isn’t just intelligent, but instantaneous. This partnership isn’t about raw processing power; it’s about drastically reducing latency, the delay between a request and a response. Think of it as moving from dial-up internet to fiber optic – the difference is transformative.

The Latency Problem and Why It Matters

Currently, many AI applications, even those powered by giants like ChatGPT, experience noticeable delays. While fractions of a second might seem insignificant, they accumulate and impact user experience, especially in real-time applications. Consider a customer service chatbot – a laggy response feels frustrating and impersonal. Or a self-driving car needing to react to a sudden obstacle – milliseconds can be the difference between safety and disaster.

Cerebras, with its uniquely designed Wafer Scale Engine (WSE), claims to offer significantly faster inference speeds than traditional GPU-based systems like those from Nvidia. Their architecture allows for massive parallelism, processing data directly where it’s stored, minimizing bottlenecks. This is crucial for “real-time inference,” the ability to generate responses almost immediately.
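The metric that "real-time inference" targets is time to first token: how long a user waits before anything appears, as opposed to total throughput. The sketch below measures it against a simulated streaming response; the delay values are stand-ins for illustration and do not reflect measurements of any real system:

```python
import time

def fake_stream(first_token_delay, n_tokens, per_token_delay):
    """Simulated streaming model response. The delays are illustrative
    stand-ins for network/compute latency, not real measurements."""
    time.sleep(first_token_delay)
    yield "token0"
    for i in range(1, n_tokens):
        time.sleep(per_token_delay)
        yield f"token{i}"

def time_to_first_token(stream):
    """Measure the latency low-latency inference hardware optimizes:
    elapsed time until the first token arrives."""
    start = time.perf_counter()
    next(iter(stream))
    return time.perf_counter() - start

ttft = time_to_first_token(fake_stream(0.05, 20, 0.002))
print(f"time to first token: {ttft * 1000:.0f} ms")
```

Cutting this number from hundreds of milliseconds to tens of milliseconds is what changes an AI assistant from something you wait on into something that feels conversational.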


Beyond Chatbots: The Expanding Universe of Real-Time AI

The implications extend far beyond improved chatbots. Imagine:

  • Financial Trading: AI algorithms reacting to market fluctuations in microseconds, executing trades with unparalleled speed and precision.
  • Drug Discovery: Rapidly simulating molecular interactions to identify potential drug candidates, accelerating the development process.
  • Personalized Medicine: Analyzing patient data in real-time to tailor treatment plans based on individual genetic profiles and health conditions.
  • Robotics & Automation: Enabling robots to respond to dynamic environments with human-like agility and precision.

These applications demand low latency, and that’s where Cerebras’ technology, now backed by OpenAI’s scale, could truly shine. A recent report by Grand View Research estimates the global AI inference chip market will reach $75.89 billion by 2030, demonstrating the growing demand for specialized hardware.

The Chip Wars Heat Up: Cerebras vs. Nvidia

This deal throws down the gauntlet in the increasingly competitive AI chip market. Nvidia currently dominates, but Cerebras is positioning itself as a specialized alternative, focusing specifically on inference. Nvidia is responding by developing its own inference-focused solutions, but Cerebras has a head start in this niche.

The fact that OpenAI, a leading AI innovator, is investing so heavily in Cerebras is a strong endorsement of their technology. It also highlights a strategic move towards diversifying OpenAI’s compute infrastructure. Relying solely on one provider (like Nvidia) creates a potential single point of failure and limits negotiating power.

Pro Tip: Keep an eye on the development of new chip architectures. The race for AI dominance will be won, in part, by the companies that can deliver the most efficient and powerful hardware.

Cerebras’ IPO Journey and Sam Altman’s Involvement

Cerebras’ path to an IPO has been bumpy, repeatedly delayed despite significant funding rounds. This suggests the company is prioritizing strategic partnerships, like the one with OpenAI, over immediate public market pressure. The fact that OpenAI CEO Sam Altman is already an investor, and that OpenAI even considered acquiring Cerebras, underscores the deep connection and shared vision between the two companies.

What Does This Mean for the Future of AI?

The OpenAI-Cerebras partnership signals a shift in focus from simply building more powerful AI models to making those models more accessible and responsive. Real-time AI will unlock a new wave of applications, transforming industries and fundamentally changing how we interact with technology. The demand for low-latency solutions will only increase as AI becomes more deeply integrated into our daily lives.

FAQ: OpenAI, Cerebras, and the Future of AI

Q: What is “inference” in AI?
A: Inference is the process of using a trained AI model to make predictions or generate outputs based on new data.

Q: Why is latency important in AI?
A: Low latency is crucial for real-time applications where immediate responses are required, such as self-driving cars, financial trading, and customer service.

Q: What makes Cerebras’ chips different?
A: Cerebras’ Wafer Scale Engine (WSE) is designed for massive parallelism, allowing for faster inference speeds compared to traditional GPU-based systems.

Q: Will this deal make AI cheaper?
A: While the initial investment is substantial, increased efficiency and faster processing times could ultimately lead to lower costs for AI applications.

Did you know? Cerebras’ WSE is one of the largest and most complex chips ever created, containing over 850,000 cores.


