First HBM3 controller announced: faster, higher, stronger

SmartDV Technologies has introduced the first HBM3 memory controller, available for licensing to developers of systems-on-chip (SoCs). HBM3 memory is expected to offer twice the bandwidth of HBM2E and a per-stack capacity of up to 64 GB, providing headroom for many years to come.

The first HBM3 controller

To date, the JEDEC committee has approved a draft of the HBM3 specification. Formally, the standard has not yet been ratified, but draft version 0.7 is conventionally called a Complete Draft: it supports all the features of the standard and defines all the electrical characteristics of the new technology. As a result, developers can begin designing their controllers, along with the technology needed to verify those controller implementations. SmartDV Technologies is the first company to introduce an HBM3 controller that can be licensed by chip designers. The controller's operation has been validated on FPGAs. Meanwhile, those who wish to verify correct operation of the final SoC can use the corresponding verification IP from Cadence or SmartDV.

SmartDV’s HBM3 controller can be connected to almost any processor, via both standard on-chip interconnects (AMBA APB/AHB/AXI, VCI, OCP, Avalon, PLB, TileLink, Wishbone) and proprietary ones. The controller supports up to 16 AXI ports, DFI 4.0/5.0 interfaces, a 512-bit data bus, error correction (ECC), pseudo-channels, and other features familiar from HBM2/HBM2E.

A licensable HBM3 controller allows system-on-chip developers to add support for the technology to SoCs that will reach the market in one and a half to two years.


HBM2/HBM2E: up to 24 GB, up to 410 GB/s

Multilayer HBM/HBM2 memory stacks consist of several DRAM dies interconnected by thousands of through-silicon vias (TSVs) and mounted on a base/buffer logic die that coordinates their operation. Each HBM/HBM2 stack is connected to the memory controller via a 1024-bit bus (divided into eight 128-bit channels) implemented on a silicon interposer.

This architecture delivers very high memory bandwidth. For example, Samsung's Flashbolt stack, with eight DRAM dies, a 1024-bit bus and a data rate of 3200 MT/s, offers 410 GB/s of bandwidth, and four such stacks together provide 1.64 TB/s (for comparison, the memory bandwidth of the NVIDIA Titan RTX is 672 GB/s). HBM3 goes further.
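The quoted figures follow directly from the bus width and data rate. A minimal sketch of the arithmetic (using decimal gigabytes, as memory vendors quote them):

```python
# Peak bandwidth of one HBM2E (Samsung Flashbolt) stack:
# a 1024-bit interface running at 3200 MT/s.
bus_width_bits = 1024
data_rate_mts = 3200  # megatransfers per second

# bits/s -> bytes/s -> GB/s (decimal GB, as vendors quote it)
stack_gbs = bus_width_bits * data_rate_mts * 1e6 / 8 / 1e9
print(round(stack_gbs))                # ~410 GB/s per stack
print(round(4 * stack_gbs / 1000, 2))  # four stacks: ~1.64 TB/s
```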

HBM3: up to 64 GB, up to 819.2 GB/s

Since the HBM3 standard has not yet been published by JEDEC, the capabilities of the new memory type can only be judged superficially.

Judging by Cadence's materials, the HBM3 designers set out to increase the number of DRAM dies per stack to 16 and the data rate to 6400 MT/s, partly thanks to a doubled burst length (BL = 8). Thus, a top-end HBM3 stack could offer 64 GB of capacity and 819.2 GB/s of bandwidth. It is worth noting that the SmartDV HBM3 controller supports up to 1 GB of memory per 128-bit channel.
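The same arithmetic, applied to the draft targets, reproduces both headline numbers. A sketch; note that the 4 GB-per-die density is implied by the 64 GB total, not stated explicitly in the article:

```python
# Draft HBM3 targets: 6400 MT/s on the same 1024-bit interface,
# with up to 16 DRAM dies per stack.
bus_width_bits = 1024
data_rate_mts = 6400  # megatransfers per second

stack_gbs = bus_width_bits * data_rate_mts * 1e6 / 8 / 1e9
print(stack_gbs)  # 819.2 GB/s per stack

# Capacity: 16 dies at 4 GB (32 Gbit) each -- the die density implied
# by the 64 GB figure, though not spelled out in the draft cited here.
dies_per_stack = 16
gb_per_die = 4
print(dies_per_stack * gb_per_die)  # 64 GB per stack
```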

In terms of operating modes, HBM3 will not differ much from HBM2E, so adopting the new memory should not prove difficult.


A question of price

As experience with HBM2 shows, TSV interconnects are extremely difficult to manufacture, the base logic die is not easy to connect to the memory dies, and the interposer is very expensive.

HBM3 proposes increasing the number of memory dies, which will multiply the TSV connections and complicate the stack's structure. On top of that, the base logic itself will become more complex, and 16 memory dies will dissipate considerable heat, which will not make cooling the assembly any easier. As a rule, added complexity raises prices, so final products based on HBM3 promise to cost more than their HBM2E-based counterparts.

Apparently, HBM3 is aimed primarily at specialized accelerators and complex multi-chip systems that demand enormous memory bandwidth and for which cost is not critical. Only time will tell whether consumer products based on HBM3 will appear, but this is unlikely in the foreseeable future.

As for when HBM3 will arrive, it is worth remembering that the standard is formally not yet finished, and a number of companies are still working on devices that will use HBM2E. Either way, mass adoption of HBM3 is still quite a long way off.
