Update (9/26/17): In statements to PCWorld, AMD has elaborated that they are "moving away from the CrossFire tag for multi-GPU gaming," citing that CrossFire technically refers to DX11 applications while mGPU is more apt for DX12. In addition, 3- and 4-way configurations for RX Vega in gaming will not be formally supported.

Today, AMD has released Radeon Software Crimson ReLive Edition 17.9.2, bringing 2-way multi-GPU (mGPU) support for RX Vega cards, as well as support and an mGPU profile for Project CARS 2 ahead of tomorrow's launch.

Featuring Driver Version 17.30.1091 (Windows Driver Store Version 22.19.677.1), Radeon Software 17.9.2 also fixes two bugs: system hangs in Hearts of Iron IV when launching the campaign scenario, and erroneous “1603 error” messages appearing after successful Radeon Software installations.

The release of a multi-GPU enabled Vega driver comes at an interesting inflection point for overall multi-GPU support in the industry. NVIDIA's views on the matter are well known, and more recently, with the Vega launch, AMD stated that it too is deprioritizing multi-GPU support for Vega and future architectures. The company still supports mGPU - as evidenced by today's driver release - but it is no longer as significant a focus as it once was.

The root cause of this dour outlook has been the same on both sides: game engines are increasingly using mGPU-unfriendly rendering techniques, which makes adding mGPU support more difficult for game and driver developers while also limiting the potential performance gains. This has created a downward spiral in mGPU usage; as it's no longer a reliable means of getting more performance, the percentage of mGPU system setups has continued to drop, which has further reduced the relevance of the technology. This is not to write multi-GPU's obituary here and now; rather, it's increasingly clear that mGPU is a niche setup, and AMD is going to treat it as such.

In any case, in line with AMD's earlier sentiment, Vega's multi-GPU support is launching with support only for 2-way configurations, foregoing support for 3+ card configurations. More interestingly - and perhaps more tellingly - there is no mention of CrossFire terminology in the press release or driver notes. Rather, the technology is always referred to as "multi-GPU". While the exact mGPU limitations of Vega weren't detailed, AMD appears to specify that only dual RX Vega 56 or dual RX Vega 64 configurations are officially supported, whereas in the past, different card configurations based on the same GPU were officially CrossFire-compatible.

Like all cards based on GCN 2 and newer, RX Vega's mGPU implementation is bridgeless, presumably using the same XDMA technology introduced with the R9 290X. In selected games, AMD cited performance scaling of over 80%.

The updated drivers for AMD’s desktop, mobile, and integrated GPUs are available through the Radeon Settings tab or online at the AMD driver download page. More information on this update and further issues can be found in the Radeon Software Crimson ReLive Edition 17.9.2 release notes.

Source: AMD

Comments

  • ravyne - Monday, September 25, 2017

    Honestly, the days of AFR are at an end, and AFR was the low-hanging fruit that Crossfire/SLI took advantage of. Rendering techniques today reuse a lot of information from the previous frame in order to drive visual quality up without redoing every computation each frame; AFR shines when frames are independent -- inter-frame dependencies cause the workload to serialize and performance scaling to collapse to near-zero pretty quickly. Combine that with other drawbacks -- greater frame jitter, and the fact that AFR, even with perfect scaling, doesn't decrease frame latency one bit, it only increases the number of frames you see -- and a frame-rate improvement of less than 25% or so really isn't worth it, IMO. Spending money on a better single GPU can give you 25% pretty easily, with quicker, more consistent frame times. That's why the recommendation has always been that dual-GPU should really only be used with the highest-end cards.
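    To put rough numbers on it: a single GPU rendering at 40 fps spends 25 ms on each frame. A second GPU in AFR with perfect scaling gets you to 80 fps, but each individual frame still takes 25 ms from the start of rendering to display; the cards are just working on alternating frames in parallel. Throughput doubles while the latency of any one frame doesn't budge.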

    This (and multi-threading) is also why new graphics APIs expose more granular synchronization primitives, and both explicit and implicit multi-GPU modes. Explicit mode is extremely low-level, to the point that even a discrete AMD GPU can be leveraged together with an Intel integrated GPU, balancing tasks explicitly. Implicit ("linked") mode requires identical GPUs, but can automate a lot more of the details because it can assume identical behavior and even exchange internal-facing (non-API) data formats without conversion (say, hierarchical Z buffers, or proprietary compression, or the kinds of things that only the hardware engineers and driver developers would know about) -- it lessens the burden of full generality and also opens up opportunities to cooperate at that even lower level.
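    To make that concrete, here's a minimal sketch of my own (untested, assuming a Windows/DX12 toolchain, standard DXGI/D3D12 calls only) of how this topology surfaces to a developer: every hardware DXGI adapter is a candidate for explicit mode, while a driver-linked ("implicit") pair shows up as one adapter whose D3D12 device reports a node count greater than 1.

        // Minimal sketch: enumerate GPUs and detect linked-node (implicit) mode.
        #include <dxgi1_4.h>
        #include <d3d12.h>
        #include <wrl/client.h>
        #include <cstdio>
        #pragma comment(lib, "dxgi.lib")
        #pragma comment(lib, "d3d12.lib")

        using Microsoft::WRL::ComPtr;

        int main() {
            ComPtr<IDXGIFactory4> factory;
            if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

            ComPtr<IDXGIAdapter1> adapter;
            for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
                DXGI_ADAPTER_DESC1 desc;
                adapter->GetDesc1(&desc);
                if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip WARP

                // In explicit mode you'd create one device per adapter and
                // schedule work across them yourself.
                ComPtr<ID3D12Device> device;
                if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                             IID_PPV_ARGS(&device))))
                    continue;

                // NodeCount > 1 means the driver linked identical GPUs into a
                // single device; node masks on queues, heaps, and command lists
                // then route work to each physical GPU.
                wprintf(L"Adapter %u: %s, linked nodes: %u\n",
                        i, desc.Description, device->GetNodeCount());
            }
            return 0;
        }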

    What we don't quite have yet is a way for the hardware/driver to combine multiple GPUs and just present them as a single, big GPU. But that's the Holy Grail, and where the manufacturers are headed very soon (1-2 GPU generations). NVLink and Infinity Fabric lay the groundwork. Nvidia's big GPGPU Volta (the one with 1/2-rate double precision) is already right up against lithography reticle limits -- they literally cannot build a GPU with more compute units without a process shrink, and they published a paper (patent?) about multi-die GPUs. AMD is obviously reaping the rewards of the multi-die approach on the CPU side using Infinity Fabric, and they've already got Infinity Fabric at work in Vega's high-bandwidth cache controller (that's why it's got a huge virtual memory space, and why that 8GB of HBM is more like a massive L3 victim cache than traditional VRAM) -- you'll see that leveraged by onboard SSDs in Vega-based GPUs designed for video production (they've already got two generations of products that do this without Infinity Fabric), but it'll be a boon for any GPU workload with really big data sets -- oil and gas, cinematic rendering, certain kinds of big-science problems; might see gobs of traditional DRAM too, one day. Multi-die GPUs probably represent the next leap in GPU advancement, if for no other reason than that silicon process scaling is no longer able to keep pace with how quickly engineers can scale the architecture up. Process will still influence the size of the building blocks, power consumption, and cooling requirements, but multi-die frees engineers of reticle limits and untenable yields.
