
Nvidia may have unveiled bits and pieces of its Pascal architecture back in March, but the company has shared some additional details at its GTC Japan technology briefing. Like AMD's Fury X, Pascal will move away from GDDR5; it will adopt the next-generation HBM2 memory standard, a 16nm FinFET process at TSMC, and up to 16GB of memory. AMD and Nvidia are both expected to adopt HBM2 in 2016, but this will be Nvidia's first product to use the technology, while AMD has prior experience thanks to the Fury lineup.

HBM vs. HBM2

HBM and HBM2 are based on the same core technology, but HBM2 doubles the effective speed per pin and introduces some new low-level features, as shown below. Memory density is also expected to improve, from 2Gb per DRAM die (8Gb per four-die stack) to 8Gb per DRAM die (32Gb per four-die stack).
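
As a quick sanity check on those density figures (our own arithmetic, not SK Hynix's), here is what they work out to per stack, assuming the four-die stacks both generations have used so far:

```python
# Per-stack capacity from per-die density (assumes 4-die stacks).
def stack_capacity_gb(gbit_per_die, dies_per_stack=4):
    """Return the capacity of one HBM stack in gigabytes."""
    return gbit_per_die * dies_per_stack / 8  # gigabits -> gigabytes

print(stack_capacity_gb(2))  # HBM1: 2Gb dies -> 1.0 GB per stack
print(stack_capacity_gb(8))  # HBM2: 8Gb dies -> 4.0 GB per stack
```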

SK Hynix HBM vs. HBM2 DRAM comparison

Nvidia's quoted 16GB of memory assumes a 4-wide configuration with four 8Gb dies stacked on top of each other. That's the same basic configuration the Fury X used, though the higher-density DRAM means the hypothetical top-end Pascal will have four times as much memory as the Fury X. We would be surprised, however, if Nvidia pushes that 16GB stack below its top-end consumer card. In our test of 4GB VRAM limits earlier this year, we found that the vast majority of games do not stress a 4GB VRAM buffer. Of the handful of titles that do use more than 4GB, none exceeded the 6GB limit on the GTX 980 Ti while maintaining anything approaching a playable frame rate. Consumers simply don't have much to worry about on this front.
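
For those who want to see how the 16GB figure falls out, here's a minimal sketch of the arithmetic, assuming four stacks per card and four dies per stack as described above:

```python
# Card-level capacity: stacks per card x dies per stack x density per die.
def card_capacity_gb(stacks, dies_per_stack, gbit_per_die):
    return stacks * dies_per_stack * gbit_per_die / 8  # Gb -> GB

fury_x = card_capacity_gb(stacks=4, dies_per_stack=4, gbit_per_die=2)  # 4.0 GB (HBM1)
pascal = card_capacity_gb(stacks=4, dies_per_stack=4, gbit_per_die=8)  # 16.0 GB (HBM2)
print(pascal / fury_x)  # 4.0 -- four times the Fury X's capacity
```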

The other tidbit coming out of GTC Japan is that Nvidia will target 1TB/s of total bandwidth. That's a huge bandwidth increase (2x what the Fury X offers) and, again, a meteoric jump in a short period of time. Both AMD and Nvidia are claiming that HBM2 and 14/16nm process technology will give them a 2x performance-per-watt improvement.
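
That 1TB/s figure likewise falls out of simple arithmetic, assuming Pascal keeps the four-stack, 1,024-bit-per-stack layout the Fury X used and adds HBM2's doubled per-pin rate:

```python
# Total memory bandwidth: stacks x bus width per stack x per-pin data rate.
def bandwidth_gbs(stacks, bus_width_bits, gbps_per_pin):
    return stacks * bus_width_bits * gbps_per_pin / 8  # Gb/s -> GB/s

fury_x = bandwidth_gbs(stacks=4, bus_width_bits=1024, gbps_per_pin=1.0)  # 512 GB/s (HBM1)
pascal = bandwidth_gbs(stacks=4, bus_width_bits=1024, gbps_per_pin=2.0)  # 1024 GB/s, i.e. ~1 TB/s
print(pascal / fury_x)  # 2.0 -- double the Fury X's bandwidth
```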

Historically, AMD has led Nvidia when it comes to adopting new memory technologies. AMD was the only company to adopt GDDR4 and the first manufacturer to use GDDR5: the Radeon HD 4870 debuted with GDDR5 in June 2008, while Nvidia didn't push the new standard on high-end cards until Fermi in 2010. AMD has argued that its expertise with HBM made implementing HBM2 easier, and some sites have reported rumors that the company has preferential access to Hynix's HBM2 supply. Given that Hynix isn't the only company building HBM2, however, this may or may not translate into any kind of advantage.

HBM2 production roadmap

With Teams Red and Green both moving to HBM2 next year, and both apparently targeting the same bandwidth and memory capacity, I suspect that the performance crown next year won't be decided by the memory subsystem. Games inevitably evolve to take advantage of next-gen hardware, but the 1TB/s capability that Nvidia is talking up won't be a widespread feature, especially if both companies stick with GDDR5 for entry-level and midrange products. One facet of HBM/HBM2 is that its advantages are more pronounced the more RAM you put on a card and the larger the GPU is. We can bet that AMD and Nvidia will introduce ultra-high-end and high-end cards that use HBM2, but midrange cards in the 2-4GB range could stick with GDDR5 for another product cycle.

The big question will be which company can take better advantage of its bandwidth, which architecture exploits it more effectively, and whether AMD can finally deliver a new core architecture that leaps past the incremental improvements GCN 1.1 and 1.2 offered over the original GCN 1.0 architecture, which is now nearly three years old. Rumors abound about what kind of architecture that will be, but I'm inclined to think it'll be more an evolution of GCN than a wholesale replacement. Both AMD and Nvidia have moved towards evolutionary advances rather than radical architecture swaps, and there's enough low-hanging fruit in GCN that AMD could substantially improve performance without reinventing the entire wheel.

Neither AMD nor Nvidia has announced a launch date, but we anticipate seeing hardware from both in late Q1 / early Q2 of 2016.