Newsy Today
Tag: High Bandwidth Memory

Tech

Micron’s AI Memory Story Reassessed After Losing Nvidia HBM4 Orders

by Chief Editor February 10, 2026

Samsung Gains Ground in AI Chip Memory, Micron Faces Headwinds

The race for dominance in high-bandwidth memory (HBM) – a critical component for artificial intelligence (AI) processors – is heating up. Recent developments indicate Samsung Electronics is gaining an edge, securing a key position as a supplier for Nvidia’s next-generation AI platform, while Micron Technology (NasdaqGS:MU) has been left out of this initial deal.

Nvidia Chooses Samsung for HBM4

Samsung is set to begin mass production of HBM4 chips as early as this month, with shipments to Nvidia anticipated in the third week of February. This move allows Samsung to supply chips for Nvidia’s upcoming Vera Rubin AI accelerators. The South Korean tech giant has reportedly passed Nvidia’s stringent quality certification process and secured purchase orders. SK Hynix is also expected to be a major supplier, projected to provide around 70% of the HBM4 chips, with Samsung taking approximately 30%.

What This Means for Micron

This development presents a challenge for Micron, which competes with Samsung and SK Hynix in the HBM market. Micron had planned to ramp up its own HBM4 production in the second quarter of 2026, but Samsung’s accelerated timeline could grant the Korean manufacturer a competitive advantage. The loss of a key Nvidia HBM4 slot could limit Micron’s share of the highest-margin AI-memory orders.

HBM: The Engine of AI

HBM chips are crucial for advanced AI processors due to their high bandwidth and power efficiency. They carry higher margins than typical memory components and have been a significant driver of Micron’s stock performance, with the stock more than quadrupling over the past 12 months. However, supplier selection can shift, as this latest announcement demonstrates.

Micron’s Broader Strategy

Despite this setback, Micron continues to invest heavily in AI-related memory technologies. The company is building a new $24 billion fab in Singapore and expanding its HBM packaging capacity. Micron is also diversifying its customer base beyond Nvidia, targeting other hyperscalers and AI use cases. Its HBM output remains sold out for 2026, suggesting continued strong demand for its products.

Potential Impacts and Considerations

Samsung’s earlier HBM4 production may put pricing pressure on Micron as supply moves closer to balance. However, the overall tight supply of DRAM, NAND, and HBM still suggests that buyers have limited alternatives in the near term.

Investor Sentiment and Stock Performance

Micron’s stock experienced a 9.8% decline over the past week, reflecting investor concerns about the impact of losing the Nvidia HBM4 order. However, the stock remains up 25.1% year-to-date and 14.4% over the past 30 days, indicating that investors still see long-term potential in the company.

Frequently Asked Questions

  • What is HBM? High-bandwidth memory is a type of memory designed for high-performance applications like AI and graphics processing.
  • Why is HBM vital for AI? AI processors require fast and efficient memory to handle the massive amounts of data involved in AI workloads.
  • What is Nvidia’s Vera Rubin? Vera Rubin is Nvidia’s next-generation AI accelerator, succeeding Blackwell.
  • What is Micron doing to compete? Micron is investing in new fabs, expanding packaging capacity, and diversifying its customer base.

Pro Tip: Keep a close watch on Micron’s progress in HBM packaging and its ability to secure contracts with other major AI chipmakers.

Did you know? Samsung is the only semiconductor manufacturer capable of providing comprehensive solutions across logic, memory, foundry, and packaging.

Stay informed about the evolving landscape of AI and memory technology. Explore our other articles for in-depth analysis and expert insights.

Tech

Samsung to start production of HBM4 chips next month for Nvidia supply, source says

by Chief Editor January 26, 2026

Samsung Joins the HBM4 Race: What It Means for AI and Beyond

The competition in the high-bandwidth memory (HBM) market is heating up. Samsung Electronics is slated to begin production of its next-generation HBM4 chips next month, with initial supply destined for Nvidia, according to sources. This move signals a critical step for Samsung in catching up to its rival, SK Hynix, which currently dominates the HBM supply chain for AI accelerators.

Why HBM Matters: The Engine of AI

HBM isn’t your typical RAM. It’s a 3D-stacked memory solution designed to deliver significantly higher bandwidth and lower power consumption than traditional memory technologies. This makes it absolutely crucial for demanding applications like artificial intelligence, machine learning, and high-performance computing. Think of it as the supercharger for AI – the more bandwidth available, the faster AI models can train and operate.
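To put “high bandwidth” into rough numbers, here is a minimal Python sketch of how per-stack bandwidth follows from interface width and per-pin data rate. The 1024-bit width and ~9.6 Gb/s pin speed are assumed HBM3E-class figures used purely for illustration, not official specifications.

```python
# Back-of-the-envelope HBM bandwidth: bus width x per-pin rate / 8 bits per byte.
# The figures below are illustrative HBM3E-class assumptions, not vendor specs.

def stack_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

per_stack = stack_bandwidth_gb_s(1024, 9.6)   # ~1,229 GB/s per stack
print(f"One stack: {per_stack:,.0f} GB/s")
print(f"Eight stacks: {8 * per_stack / 1000:.1f} TB/s")  # ~9.8 TB/s for a GPU
```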

Nvidia’s dominance in the AI chip market, fueled by its GPUs, has created an insatiable demand for HBM. Currently, SK Hynix is the primary supplier, but Nvidia is actively diversifying its supply chain, hence Samsung’s crucial entry into HBM4 production. The market is growing exponentially; a report by TrendForce estimated that the HBM market would more than double in 2024.

Samsung’s Comeback: From Delays to Deliveries

Last year, Samsung faced challenges with HBM supply, impacting its earnings and stock performance. The company’s shares saw a 2.2% jump on news of the HBM4 production start, while SK Hynix experienced a 2.9% dip, reflecting investor confidence in Samsung’s renewed momentum. This isn’t just about market share; it’s about national economic implications for South Korea, a global semiconductor powerhouse.

Samsung’s success hinges on consistently delivering high-quality HBM4 chips that meet Nvidia’s stringent requirements. The company reportedly passed Nvidia’s qualification tests for HBM4 and has also secured qualification with AMD, broadening its potential customer base. Both Samsung and SK Hynix are expected to reveal more details about HBM4 orders during their upcoming fourth-quarter earnings announcements.

Pro Tip: Keep an eye on the earnings reports of Samsung, SK Hynix, and Nvidia. These reports will provide valuable insights into the HBM market dynamics and future demand.

SK Hynix Doubles Down: M15X Fab and Future Expansion

While Samsung is playing catch-up, SK Hynix isn’t standing still. The company is investing heavily in expanding its HBM production capacity. It is already moving silicon wafers into its new M15X fab in Cheongju, South Korea, although it hasn’t specified whether HBM4 will be the initial product. This expansion demonstrates SK Hynix’s commitment to maintaining its leadership position in the HBM market.

Nvidia’s Vera Rubin Platform: The HBM4 Destination

The demand for HBM4 is directly tied to Nvidia’s next-generation chips, the Vera Rubin platform. Nvidia CEO Jensen Huang announced earlier this month that Vera Rubin is in “full production,” paving the way for the launch of these powerful new chips later this year. These chips are specifically designed to work in tandem with HBM4, creating a synergistic relationship that will drive advancements in AI and other demanding applications.

Beyond AI: HBM’s Expanding Applications

While AI is the primary driver of HBM demand, its applications are expanding. High-performance gaming, data centers, and even automotive applications are increasingly relying on HBM to deliver the necessary bandwidth and performance. The rise of generative AI, like image and video creation tools, will further accelerate the demand for HBM.

Did you know? HBM’s 3D stacking architecture allows for a much smaller footprint compared to traditional memory, making it ideal for space-constrained applications like GPUs and mobile devices.

The Future of Memory: What’s Next After HBM4?

The industry is already looking beyond HBM4. Research and development are underway for HBM5 and beyond, focusing on even higher bandwidth, lower power consumption, and increased capacity. New materials and architectures are being explored to overcome the limitations of current HBM technology. Expect to see continued innovation in this space as the demand for memory continues to grow.

Frequently Asked Questions (FAQ)

  • What is HBM? High-Bandwidth Memory is a 3D-stacked memory technology offering significantly higher bandwidth than traditional RAM.
  • Why is HBM important for AI? AI models require massive amounts of data to be processed quickly. HBM provides the necessary bandwidth for efficient AI training and inference.
  • Who are the major HBM manufacturers? Currently, SK Hynix and Samsung are the leading manufacturers of HBM.
  • What is HBM4? HBM4 is the next generation of HBM technology, promising even higher performance and efficiency.
  • When will HBM4 be widely available? Samsung plans to start production in February, with wider availability expected throughout 2026.

Reader Question: “Will the increased HBM production lead to lower prices for AI-powered services?” – Increased supply could eventually lead to price reductions, but demand is currently very high, so it’s too early to say. Stay tuned!

Want to learn more about the semiconductor industry and the future of AI? Explore our other articles or subscribe to our newsletter for the latest updates.

Tech

15,000 Watts: AI Accelerator Power Demand Soars

by Chief Editor June 16, 2025

The Power Hungry Future: How AI Accelerators Are Reshaping Data Centers

The relentless march of artificial intelligence is driving a surge in demand for processing power. This, in turn, is leading to an unprecedented increase in the energy consumption of AI accelerators within data centers. According to research from the Terabyte Interconnection and Package Laboratory (Teralab) at KAIST (Korea Advanced Institute of Science and Technology), we’re on the cusp of seeing AI accelerator modules that gulp down a staggering 15,000 Watts.

Decoding the Wattage: Where the Power Goes

Let’s break down where all that power is going. The KAIST Teralab estimates that nearly 10,000 Watts will be consumed by eight AI processor chiplets. Each chiplet, in this scenario, would draw approximately 1,200 Watts. The remaining 5,000 Watts will feed 32 memory chip stacks, each composed of 24 individual DRAM dies, boasting an impressive 80 Gigabits of capacity. This is the future of High Bandwidth Memory (HBM), specifically the seventh generation (HBM7), designed to provide a total of 6 TBytes of AI memory, capable of a data transfer rate of around 1 Petabyte per second (PByte/s).
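As a quick sanity check, the module-level power budget above adds up. Here is the arithmetic as a short Python sketch, using the rounded numbers exactly as quoted in the article:

```python
# Reproducing the KAIST Teralab module power budget from the quoted figures.

chiplets = 8
watts_per_chiplet = 1_200        # ~1.2 kW per AI processor chiplet
memory_power = 5_000             # W across all 32 HBM7 memory stacks
stacks = 32

compute_power = chiplets * watts_per_chiplet     # 9,600 W ("nearly 10,000")
module_power = compute_power + memory_power      # ~14,600 W, i.e. roughly 15 kW
per_stack = memory_power / stacks                # ~156 W per HBM7 stack

print(f"Compute: {compute_power:,} W | Module: {module_power:,} W | "
      f"Per stack: {per_stack:.0f} W")
```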

Did you know? Current top-tier AI accelerators already have power consumption numbers approaching the 15,000-Watt range, like the Cerebras Wafer Scale Engines. However, these are architecturally distinct from the more common AI accelerators from the likes of Nvidia and AMD.

The HBM Roadmap: A Glimpse into the Future

The HBM roadmap from KAIST Teralab isn’t about predicting exact release dates. Instead, it’s a look at upcoming technical challenges and potential solutions. This roadmap provides an informed perspective on the future of DRAM capacity and data transfer rates, alongside chip packaging innovations and expected power consumption levels of combined chips. This forward-thinking approach allows researchers and developers to anticipate the needs of tomorrow.

A key consideration stemming from these projections is the need for advanced cooling. The increasing power density of these chips demands novel cooling methods to ensure optimal performance and longevity, and new approaches are already being explored.

Future AI accelerators could consist of eight logic chips and 32 HBM stacks. (Image: KAIST Teralab)

The Chiplet Puzzle: Breaking Down the Big Picture

KAIST Teralab’s research builds on Nvidia’s roadmap. Nvidia is already pushing the boundaries of single-chip size. Experts anticipate that the “reticle limit” will shrink slightly in the future, potentially due to limitations of High-NA EUV lithography, so expect more chiplets on next-generation AI accelerators. Nvidia is already moving in this direction with its Blackwell (B200) and Rubin (R200) products. These will be followed by Feynman (F400), which will likely consist of four chiplets; within about ten years, the count could grow to eight.

With each generation, the power consumption per GPU chiplet is anticipated to increase, going from roughly 800 Watts to 1,200 Watts.

HBM: The Data Pipeline

To supply each GPU chiplet with enough data, the capacity and speed of HBM must increase significantly. This has been achieved through a combination of higher capacity per chip, more chips per stack (which requires thinner die slicing), and higher clock frequencies. The latter requires lowering the supply and data-signal voltages to keep power consumption in check. The demands on signal processing are also growing, as more dies share a single line even as clock rates increase.
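The voltage-versus-frequency trade-off follows from the first-order CMOS dynamic-power relation P ∝ C·V²·f. A minimal sketch, with arbitrary illustrative values rather than KAIST figures:

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
# Doubling the clock doubles power unless the voltage drops too.
# Voltages and scaling factors here are illustrative, not KAIST data.

def dynamic_power(cap: float, voltage_v: float, freq: float) -> float:
    return cap * voltage_v**2 * freq

base = dynamic_power(1.0, 1.1, 1.0)         # baseline interface at 1.1 V
fast = dynamic_power(1.0, 1.1, 2.0)         # 2x clock, same voltage
fast_lv = dynamic_power(1.0, 0.8, 2.0)      # 2x clock, lowered to 0.8 V

print(f"2x clock alone: {fast / base:.2f}x power")        # 2.00x
print(f"2x clock at 0.8 V: {fast_lv / base:.2f}x power")  # ~1.06x
```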

KAIST Teralab shows expected properties of HBM generations HBM4 to HBM8. (Image: KAIST Teralab)

Pro Tip: HBM4 doubles the number of data signal lines per stack, from 1024 to 2048. This necessitates changes to the memory controllers in GPU chips and to the silicon interposers.
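The payoff of that doubling is easy to see: for a given per-stack bandwidth target, twice the lines lets each line signal at half the rate, which eases signal integrity. A small sketch, where the ~2 TB/s target is an assumed HBM4-class figure, not a JEDEC specification:

```python
# Per-pin rate needed to reach a target stack bandwidth at a given line count.
# The 2 TB/s target is an assumed HBM4-class figure, not a JEDEC spec.

def required_pin_rate_gbps(target_gb_s: float, lines: int) -> float:
    """Signalling rate per line (Gb/s) to hit the target bandwidth."""
    return target_gb_s * 8 / lines

target = 2_000   # GB/s, i.e. ~2 TB/s per stack
print(f"1024 lines: {required_pin_rate_gbps(target, 1024):.1f} Gb/s per pin")
print(f"2048 lines: {required_pin_rate_gbps(target, 2048):.1f} Gb/s per pin")
```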

The number of HBM stacks per GPU will also increase. Currently, many GPUs utilize four stacks; however, we should soon expect to see eight, 16, or even 32.

The Heat Problem: Managing Power Density

Today’s HBM3E stack, with eight or twelve layers of 24-Gigabit chips (24 or 36 GBytes of capacity), already converts up to 32 Watts into heat. The projected HBM4, with the same capacity but double the speed, is expected to generate 43 Watts. For 48 GBytes, this number may rise to 75 Watts.
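Scaled up to a whole accelerator, those per-stack figures add up quickly. A short sketch using the heat numbers above; the 8- and 16-stack configurations are hypothetical, not product specifications:

```python
# Aggregate memory heat for hypothetical stack counts, using the
# per-stack figures quoted above (W). Configurations are illustrative.

heat_per_stack_w = {
    "HBM3E, 36 GB": 32,
    "HBM4, 36 GB": 43,
    "HBM4, 48 GB": 75,
}

for variant, watts in heat_per_stack_w.items():
    for stacks in (8, 16):
        print(f"{variant}: {stacks} stacks -> {stacks * watts:,} W of memory heat")
```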

This means that stacking methods will need to improve heat dissipation. KAIST Teralab’s research is available for review in Version 1.7 of its HBM roadmap, which is also offered as a PDF.

FAQ: Decoding the Future of AI Accelerators

Q: What is the key driver behind the increasing power consumption of AI accelerators?

A: The escalating demands of artificial intelligence and machine learning workloads are driving the need for more powerful and faster processing, which directly translates to higher energy consumption.

Q: What is HBM and why is it important?

A: High Bandwidth Memory (HBM) is a type of memory designed to provide extremely high data transfer rates, essential for feeding data to the powerful AI accelerators. Its performance directly influences the overall efficiency of AI systems.

Q: How are manufacturers addressing the heat generated by these high-powered components?

A: Manufacturers are actively developing and refining advanced cooling solutions, including liquid cooling and other innovative thermal management technologies, to dissipate the significant heat generated by these components.

Q: What are chiplets and why are they being used?

A: Chiplets are smaller, individual chip components assembled together to form a larger processor. This design approach allows manufacturers to create more powerful processors and overcome the limits of single-die manufacturing. It can also reduce costs and improve yields.

Q: Why is the power consumption of AI accelerators a significant concern?

A: The high power consumption of AI accelerators presents several challenges, including increased energy costs, the need for more robust power infrastructure in data centers, and the potential for increased carbon emissions. Efficient power management is crucial for sustainability and cost-effectiveness.

Want to dive deeper into the fascinating world of AI hardware? Share your thoughts in the comments below, and stay tuned for more updates on the ever-evolving landscape of AI acceleration.

