Newsy Today
Tech

High-Capacity PC SSDs: SK Hynix Unveils 2-Tbit Chips with 321 Layers

by Chief Editor August 25, 2025

SK Hynix’s Leap: The Future of High-Capacity Storage

The world of digital storage is constantly evolving. SK Hynix, a major player in the semiconductor industry, is pushing boundaries with its new generation of NAND flash memory. Their recent advancements offer significant implications for both consumers and enterprise-level applications, promising a future of denser, faster, and more efficient storage solutions.

The Quadruple Level Cell Revolution: More Bits, More Capacity

SK Hynix’s move to Quadruple Level Cell (QLC) technology, packing four bits of data per cell, is a pivotal moment. It follows the company’s previous Triple Level Cell (TLC) generation. The immediate impact? A doubling of per-chip storage capacity, to 2 Tbit. Imagine squeezing more storage into the same physical space: a game-changer for solid-state drives (SSDs) and other storage devices.

Did you know? Each “layer” in these flash memory chips is incredibly thin, representing a complex feat of engineering. The more layers, the more data that can be packed in.

321 Layers: Vertical Integration and Capacity Boost

The new QLC chips feature a staggering 321 layers, demonstrating SK Hynix’s prowess in vertical stacking (3D NAND). This layering enables exceptional density, allowing 4 TB drives in surprisingly compact form factors.
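As a quick sanity check of these figures, the arithmetic below derives per-die capacity from the announced 2-Tbit QLC die; the 16-die count is an illustrative assumption about how a 4 TB drive could be assembled, not an SK Hynix specification.

```python
# Back-of-envelope capacity math for SK Hynix's 2-Tbit QLC dies.
# The die capacity comes from the announcement; the die count needed
# for a 4 TB drive is derived from it.

TBIT = 1_000_000_000_000  # one terabit in bits

die_capacity_bits = 2 * TBIT                   # 2-Tbit QLC die
die_capacity_gb = die_capacity_bits / 8 / 1e9  # bits -> bytes -> GB

dies_for_4tb = 4_000 / die_capacity_gb
print(f"{die_capacity_gb:.0f} GB per die -> {dies_for_4tb:.0f} dies for a 4 TB drive")
```

So a 4 TB consumer SSD would need only sixteen such dies, which is why compact form factors become feasible.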

PC SSDs First: The Consumer Wave

The initial focus is on PC SSDs. Expect to see these high-capacity, cost-effective drives hitting the market in the first half of 2026. This means users will be able to enjoy increased storage space at competitive prices.

Pro Tip: With higher capacities, consider regularly backing up your data to external drives or cloud storage services to protect your valuable files. This is good practice regardless of the drive size, but even more critical with larger storage volumes.

The trade-off? While QLC generally offers impressive capacity gains, it often sacrifices some performance compared to its TLC counterparts, at least initially. Expect read/write speeds to be a key consideration, but the advantages of larger storage are undeniable.

Data Centers and AI: The Enterprise Opportunity

The long-term vision for this technology extends into the realm of data centers, particularly for AI training workloads. High-capacity, cost-efficient storage is critical for these resource-intensive tasks. SK Hynix is positioning itself to capitalize on the booming AI market. This represents a significant shift in the demands placed on data storage infrastructure.

Related Reading: Learn more about the current state of AI and its impact on the tech industry from this report: Gartner’s AI Trends

Internal Architecture: Boosting Performance

To mitigate the performance challenges of QLC, SK Hynix has optimized its internal architecture. The new chips are divided into six planes (internal storage areas) instead of four. Each plane can be read and written to in parallel, which helps alleviate QLC’s inherent performance limitations. This architecture shift aims to boost performance without sacrificing storage capacity.
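As a rough illustration of why more planes help, here is a toy throughput model. The plane counts (four vs. six) come from the article, but the page size and program latency are generic placeholder values, not SK Hynix specifications.

```python
# Toy model of plane-level parallelism in NAND flash: if all planes
# program pages simultaneously, sustained throughput scales with the
# plane count. Page size and program time are illustrative placeholders.

def write_throughput_mb_s(planes: int, page_kb: float = 16, t_prog_us: float = 2000) -> float:
    """Sustained program throughput with all planes writing in parallel."""
    bytes_per_op = planes * page_kb * 1024
    return bytes_per_op / (t_prog_us * 1e-6) / 1e6

four = write_throughput_mb_s(4)
six = write_throughput_mb_s(6)
print(f"4 planes: {four:.1f} MB/s, 6 planes: {six:.1f} MB/s ({six / four:.2f}x)")
```

Whatever the absolute numbers, the six-plane layout gives a 1.5x ceiling on parallel throughput over a four-plane design, which is the lever SK Hynix is pulling here.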

Performance Gains: Faster Write Speeds and Increased Efficiency

SK Hynix is claiming significant performance improvements with this new generation of QLC. They promise a 56% increase in write speed and an 18% increase in read performance, along with 23% greater energy efficiency compared to their previous QLC offerings. These advances are encouraging for the future of storage technology.

Here’s a quick look at the advancements:

  • 56% improved write speed
  • 18% higher read performance
  • 23% better energy efficiency

FAQ: Understanding the Future of Storage

Q: What is QLC?

A: QLC (Quadruple Level Cell) is a type of flash memory that stores four bits of data per cell, allowing for higher storage densities.

Q: What are the benefits of SK Hynix’s new QLC technology?

A: Primarily, increased storage capacity and improved performance over previous QLC offerings, leading to more affordable storage.

Q: Where will this technology be used?

A: Initially in PC SSDs and, in the long term, in data centers, especially for AI training.

Q: Will QLC replace other storage technologies?

A: QLC is likely to become a dominant player in the consumer market for high-capacity, cost-effective drives. Other technologies, such as TLC and SLC, will remain relevant for applications demanding the absolute highest performance.

Q: When can we expect to see these products?

A: Products using this technology are expected to hit the market in the first half of 2026.

What are your thoughts on the future of high-capacity storage? Share your opinions in the comments below! Do you anticipate upgrading to a high-capacity QLC drive? Let’s discuss!

Tech

15,000 Watts: AI Accelerators’ Power Demand Soars

by Chief Editor June 16, 2025

The Power Hungry Future: How AI Accelerators Are Reshaping Data Centers

The relentless march of artificial intelligence is driving a surge in demand for processing power. This, in turn, is leading to an unprecedented increase in the energy consumption of AI accelerators within data centers. According to research from the Terabyte Interconnection and Package Laboratory (Teralab) at KAIST (Korea Advanced Institute of Science and Technology), we’re on the cusp of seeing AI accelerator modules that gulp down a staggering 15,000 Watts.

Decoding the Wattage: Where the Power Goes

Let’s break down where all that power is going. The KAIST Teralab estimates that nearly 10,000 Watts will be consumed by eight AI processor chiplets. Each chiplet, in this scenario, would draw approximately 1,200 Watts. The remaining 5,000 Watts will feed 32 memory chip stacks, each composed of 24 individual DRAM dies, boasting an impressive 80 Gigabits of capacity. This is the future of High Bandwidth Memory (HBM), specifically the seventh generation (HBM7), designed to provide a total of 6 TBytes of AI memory, capable of a data transfer rate of around 1 Petabyte per second (PByte/s).
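A quick sanity check of these figures (all inputs are taken from the paragraph above; the derived values follow from simple arithmetic):

```python
# Sanity-checking the KAIST Teralab power and memory figures quoted above.

chiplets, w_per_chiplet = 8, 1200
hbm_stacks, hbm_budget_w = 32, 5000

logic_w = chiplets * w_per_chiplet       # "nearly 10,000 Watts"
total_w = logic_w + hbm_budget_w         # close to the 15,000-Watt headline
w_per_stack = hbm_budget_w / hbm_stacks  # power budget per HBM7 stack

capacity_tb = 6      # 6 TBytes of AI memory
bandwidth_tb_s = 1000  # ~1 PByte/s aggregate transfer rate
full_sweep_ms = capacity_tb / bandwidth_tb_s * 1000  # time to read all memory once

print(f"logic: {logic_w} W, total: {total_w} W, {w_per_stack:.0f} W per stack")
print(f"full memory sweep at peak bandwidth: {full_sweep_ms:.0f} ms")
```

Notably, at the quoted 1 PByte/s the entire 6-TByte memory pool could be swept in about six milliseconds, which is exactly the kind of data-hungry behavior AI training workloads exhibit.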

Did you know? Current top-tier AI accelerators already have power consumption numbers approaching the 15,000-Watt range, like the Cerebras Wafer Scale Engines. However, these are architecturally distinct from the more common AI accelerators from the likes of Nvidia and AMD.

The HBM Roadmap: A Glimpse into the Future

The HBM roadmap from KAIST Teralab isn’t about predicting exact release dates. Instead, it’s a look at upcoming technical challenges and potential solutions. This roadmap provides an informed perspective on the future of DRAM capacity and data transfer rates, alongside chip packaging innovations and expected power consumption levels of combined chips. This forward-thinking approach allows researchers and developers to anticipate the needs of tomorrow.

A key consideration stemming from these projections is the necessity for advanced cooling solutions. The increasing power density of these chips necessitates novel cooling methods to ensure optimal performance and longevity. New methods are already being explored.

Future AI accelerators could consist of eight logic chips and 32 HBM stacks. (Image: KAIST Teralab)

The Chiplet Puzzle: Breaking Down the Big Picture

KAIST Teralab’s research builds on Nvidia’s roadmap. Nvidia is already pushing the boundaries of single-chip size, and experts anticipate that the “reticle limit” will shrink slightly in the future, potentially due to limitations of High-NA EUV lithography. Expect to see more chiplets on next-generation AI accelerators: Nvidia is already moving in this direction with its Blackwell (B200) and Rubin (R200) products. These will be followed by Feynman (F400), which will likely consist of four chiplets; in about ten years, that count could grow to eight.

With each generation, the power consumption per GPU chiplet is anticipated to increase, going from roughly 800 Watts to 1,200 Watts.

HBM: The Data Pipeline

To feed each GPU chiplet with enough data, the capacity and speed of HBM must increase significantly. This has been achieved through a combination of higher capacity per chip, more chips per stack (which requires slicing the dies thinner), and higher clock frequencies. The higher frequencies, in turn, require lowering the supply and data-signal voltages to keep power consumption under control. The demands on signal processing are also growing, as more chips depend on a single line despite the increased clock rates.

KAIST Teralab shows expected properties of HBM generations HBM4 to HBM8. (Image: KAIST Teralab)

Pro Tip: HBM4 will be introducing a doubling of data signal lines per stack, moving from 1024 to 2048. This will necessitate changes to the memory controllers in GPU chips and the silicon interposers.
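To put the wider interface in perspective, this sketch converts the data-line count into per-stack bandwidth. The line counts (1024 vs. 2048) come from the tip above; the 8 Gb/s per-pin data rate is an assumption based on published HBM4 targets, not a KAIST figure.

```python
# Per-stack bandwidth implied by the HBM interface width. The per-pin
# rate of 8 Gb/s is an assumed value for illustration, not a spec quote.

def stack_bandwidth_tb_s(data_lines: int, gbit_per_pin: float = 8.0) -> float:
    """Aggregate stack bandwidth in TB/s: lines x per-pin rate, bits -> bytes."""
    return data_lines * gbit_per_pin / 8 / 1000

print(f"1024 lines: {stack_bandwidth_tb_s(1024):.2f} TB/s")
print(f"2048 lines: {stack_bandwidth_tb_s(2048):.2f} TB/s")
```

Doubling the lines doubles bandwidth at a given per-pin rate, which is why the memory controllers and silicon interposers must be redesigned rather than simply clocked higher.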

The number of HBM stacks per GPU will also increase. Currently, many GPUs utilize four stacks; however, we should soon expect to see eight, 16, or even 32.

The Heat Problem: Managing Power Density

Today’s HBM3E stack, with eight or twelve layers of 24-Gigabit chips (24 or 36 GBytes of capacity), already converts up to 32 Watts into heat. The projected HBM4, with the same capacity but double the speed, is expected to generate 43 Watts. For 48 GBytes, this number may rise to 75 Watts.
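Dividing the quoted stack power by capacity shows how heat density climbs across generations. The wattage and capacity figures are from the paragraph above; the watts-per-gigabyte values are derived.

```python
# Heat density per gigabyte for the HBM stacks cited above.

stacks = {
    "HBM3E (36 GB)": (32, 36),  # up to 32 W for a 12-layer, 36 GB stack
    "HBM4 (36 GB)":  (43, 36),  # same capacity, double the speed
    "HBM4 (48 GB)":  (75, 48),  # projected high-capacity variant
}

w_per_gb = {name: watts / gb for name, (watts, gb) in stacks.items()}
for name, density in w_per_gb.items():
    print(f"{name}: {density:.2f} W/GB")
```

The heat per gigabyte rises sharply even as capacity grows, which is why the stacking methods themselves must start contributing to heat dissipation.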

This means that stacking methods will need to improve heat dissipation. KAIST Teralab’s research is available for review as Version 1.7 of its HBM roadmap, also provided as a PDF.

FAQ: Decoding the Future of AI Accelerators

Q: What is the key driver behind the increasing power consumption of AI accelerators?

A: The escalating demands of artificial intelligence and machine learning workloads are driving the need for more powerful and faster processing, which directly translates to higher energy consumption.

Q: What is HBM and why is it important?

A: High Bandwidth Memory (HBM) is a type of memory designed to provide extremely high data transfer rates, essential for feeding data to the powerful AI accelerators. Its performance directly influences the overall efficiency of AI systems.

Q: How are manufacturers addressing the heat generated by these high-powered components?

A: Manufacturers are actively developing and refining advanced cooling solutions, including liquid cooling and other innovative thermal management technologies, to dissipate the significant heat generated by these components.

Q: What are chiplets and why are they being used?

A: Chiplets are smaller, individual chip components assembled together to form a larger processor. This design approach allows manufacturers to create more powerful processors and overcome the limits of single-die manufacturing. It can also reduce costs and improve yields.

Q: Why is the power consumption of AI accelerators a significant concern?

A: The high power consumption of AI accelerators presents several challenges, including increased energy costs, the need for more robust power infrastructure in data centers, and the potential for increased carbon emissions. Efficient power management is crucial for sustainability and cost-effectiveness.

Want to dive deeper into the fascinating world of AI hardware? Share your thoughts in the comments below, and stay tuned for more updates on the ever-evolving landscape of AI acceleration.
