NVIDIA's Scalable Link Interface: The New SLI
by Derek Wilson on June 28, 2004 2:00 PM EST - Posted in GPUs
It's Really Not Scanline Interleaving
So, how does this thing actually work? Well, when NVIDIA was designing NV4x, they decided it would be a good idea to include a section on the chip designed specifically to communicate with another GPU in order to share rendering duties. Through a combination of this block of transistors, the connection on the video card, and a bit of software, NVIDIA is able to leverage the power of two GPUs at a time.
NV40 core with SLI section highlighted.
As the title of this section should indicate, NVIDIA SLI is not Scanline Interleaving. The choice of this moniker by NVIDIA comes down to ownership and marketing: when they acquired 3dfx, the rights to the SLI name went along with it. In its day, SLI was very well known for combining the power of two 3D accelerators. The technology worked by rendering even scanlines on one GPU and odd scanlines on another; the analog output of both GPUs was then combined (generally via a network of pass-through cables) to produce a final signal to send to the monitor. Love it or hate it, it's a very interesting marketing choice on NVIDIA's part, and the new technology has nothing to do with its namesake. Here's what's really going on.
First, software (presumably in the driver) analyzes what's going on in the scene currently being rendered and divides the work between the GPUs. The goal of this (patent-pending) load-balancing software is to split the work 50/50 based on the amount of rendering power it will take. It might not be that each card renders 50% of the final image, but it should be that it takes each card the same amount of time to finish rendering its part of the scene (be it larger or smaller than the part the other GPU tackled). In the presentation that NVIDIA sent us, they diagrammed how this might work for one frame of 3DMark's Nature scene.
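To make the idea of time-based load balancing more concrete, here is a minimal sketch in C of how a driver might adjust a horizontal split line from frame to frame. This is purely our own hypothetical illustration, not NVIDIA's patent-pending algorithm: the split drifts toward whichever GPU finished its slice faster, so that both finish in roughly the same time.

```c
#include <stdio.h>

static int split_line = 384; /* initial 50/50 split of a 768-line frame */

/* Hypothetical balancer: give GPU0 (top slice) a share of the frame's lines
 * proportional to GPU1's share of last frame's render time, so work shifts
 * toward whichever GPU is finishing faster. Times are in milliseconds. */
void rebalance(double gpu0_ms, double gpu1_ms, int frame_height)
{
    double total = gpu0_ms + gpu1_ms;
    if (total <= 0.0)
        return;

    int target = (int)(frame_height * (gpu1_ms / total) + 0.5);

    /* Move only part of the way toward the target each frame to avoid
     * oscillating when scene complexity changes suddenly. */
    split_line += (target - split_line) / 4;

    if (split_line < 1)
        split_line = 1;
    if (split_line > frame_height - 1)
        split_line = frame_height - 1;
}

int main(void)
{
    /* Example: the top of the frame (sky) is cheap and the bottom
     * (vegetation) is expensive, so the split drifts downward until
     * both GPUs take about the same time per frame. */
    for (int frame = 0; frame < 6; frame++) {
        double gpu0_ms = 0.02 * split_line;         /* cheap region     */
        double gpu1_ms = 0.05 * (768 - split_line); /* expensive region */
        printf("frame %d: split at line %d (GPU0 %.1f ms, GPU1 %.1f ms)\n",
               frame, split_line, gpu0_ms, gpu1_ms);
        rebalance(gpu0_ms, gpu1_ms, 768);
    }
    return 0;
}
```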
This shows one GPU rendering the majority of the less complex portion of a scene.
Since the work is split on the way from the software to the hardware, everything from geometry and vertex processing to pixel shading and anisotropic filtering is divided between the GPUs. This is a step up from the original SLI, which just split the pixel-pushing power of the chips.
If you'll remember, Alienware was working on a multiple graphics card solution that, to this point, resembles what NVIDIA is doing. But rather than scan out and use pass-through connections or some sort of signal combiner (as is the impression that we currently have of the Alienware solution), NVIDIA is able to send the rendered data digitally over the SLI (Scalable Link Interface) from the slave GPU to the master for compositing and final scan out.
Here, the master GPU has the data from the slave for rendering.
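As a rough sketch of what the compositing step amounts to in a split-frame arrangement (again, our own illustration under assumed details, not NVIDIA's implementation), the master essentially has to place the slave's slice of the frame into its own framebuffer before scanning the complete image out to the display:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical split-frame composite: the master rendered lines
 * [0, split_line) and the slave rendered [split_line, height); the slave's
 * slice has arrived over the link and just needs to be copied into place
 * in the master's framebuffer before scanout. */
void composite_slave_slice(uint8_t *master_fb, const uint8_t *slave_slice,
                           int width, int height, int split_line,
                           int bytes_per_pixel)
{
    size_t row_bytes   = (size_t)width * bytes_per_pixel;
    size_t slice_bytes = (size_t)(height - split_line) * row_bytes;

    memcpy(master_fb + (size_t)split_line * row_bytes, slave_slice, slice_bytes);
}
```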
For now, as we don't have anything to test, this is mostly academic. But unless their SLI has extremely high bandwidth, half of a 2048x1536 scene rendered into a floating point framebuffer will be tough to handle. More commonly used resolutions and pixel formats will most likely not be a problem, especially as scenes increase in complexity and rendering time (rather than the time it takes to move pixels) comes to dominate the time it takes to get from software to the monitor. We are really anxious to get our hands on hardware and see just how it responds to these types of situations. We would also like to learn (though testing may be difficult) whether the load-balancing software takes into account the time it would take to transfer data from the slave to the master.
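As a back-of-the-envelope check on how much data that transfer involves (our own arithmetic with assumed pixel formats, not figures from NVIDIA), half of a 2048x1536 frame in a 16-bit-per-channel floating point RGBA format works out to roughly 12.6 MB per frame, or about 755 MB/s at 60 frames per second, before counting any depth or antialiasing data:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed worst case for illustration: half of a 2048x1536 frame in an
     * RGBA format with 16-bit floating point channels (8 bytes per pixel),
     * sent from the slave to the master every frame at 60 fps. */
    const long long pixels          = 2048LL * 1536 / 2;
    const long long bytes_per_pixel = 8;
    const long long fps             = 60;

    long long bytes_per_frame  = pixels * bytes_per_pixel;
    long long bytes_per_second = bytes_per_frame * fps;

    printf("per frame:  %.1f MB\n",   bytes_per_frame / 1e6);
    printf("per second: %.1f MB/s\n", bytes_per_second / 1e6);
    return 0;
}
```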
40 Comments
Wonga - Monday, June 28, 2004 - link
Hey hadders, I was thinking the same thing. Surely if these cards need such fancy cooling, they need a little bit of room to actually get some air to that cooling??? And to think I used to get worried putting a PCI card next to a Voodoo Banshee...

DigitalDivine - Monday, June 28, 2004 - link
does anyone have any info if nvidia will be doing this for low end cards as well?

klah - Monday, June 28, 2004 - link
"But it is hard for us to see very many people justifying spending $1000 on two NV45 based cards even for 2x the performance of one very fast GPU"Probably the same number of people who spend $3k-$7k on systems from Alienware, FalconNW, etc.
Alienware alone sells ~30,000 units/yr.
http://money.cnn.com/2004/03/18/commentary/game_ov...
hadders - Monday, June 28, 2004 - link
Whoops, duh. Admittedly all that hot air is being exhausted via the cooling vent at the back, but still, my original thought was about overall ambient temperature. I guess there would be no reason why they couldn't put that second PCIe slot further down the board.

hadders - Monday, June 28, 2004 - link
Hmmm, to be honest I hope they intend to widen the gap between video cards. I wouldn't think the airflow would be particularly good on the "second" card if it's pushed up hard against the other. And where is all that hot air being blown?

DerekWilson - Monday, June 28, 2004 - link
The thing about NVIDIA SLI is that the technology is part of the die ... It's on the 6800UE, 6800U, 6800GT, and 6800 non-ultra ... It is possible that they disable the technology on lower clocked versions, just like one of the quad pipes is disabled on the 12-pipe 6800 ...

The bottom line is that it wouldn't be any easier or harder for NVIDIA to implement this technology for lesser GPUs based on the NV40 core. It's a question of whether they will. It seems at this point that they aren't planning on it, but demand can always influence a company's decisions.
At the same time, I wouldn't recommend holding your breath :-)
ET - Monday, June 28, 2004 - link
Even with current games you can easily make yourself GPU limited by running 8x AA at high resolutions (or even less, but wouldn't you want the highest AA and resolution if you could get them?). Future games will be much more demanding.

What I'm really interested in is whether this will be available only at the high end, or at the mid-range, too. Buying two mid-range cards for a better-than-single-high-end result could be a nice option.
Operandi - Monday, June 28, 2004 - link
Cool idea, but aren't these high end cards CPU limited by themselves, let alone paired together?

DerekWilson - Monday, June 28, 2004 - link
I really would like some bandwidth info, and I would have mentioned it if they had offered.

That second "bare bones" PCB you are talking about is kind of what I meant when I was speaking of a dedicated slave card. Currently NVIDIA has given us no indication that this is the direction they are heading in.
KillaKilla - Monday, June 28, 2004 - link
Did they give any info as to the bandwidth between cards?

Or perhaps even to the viability of dual-core cards? (Say, having a standard card and adding a separate PCB with just the bare minimum: GPU, RAM, and an interface, figuring that this would cut a bit of the cost off of manufacturing an entirely separate card.)