NVIDIA's GeForce 8800 (G80): GPUs Re-architected for DirectX 10
by Anand Lal Shimpi & Derek Wilson on November 8, 2006 6:01 PM EST
Virtual Memory
Microsoft is taking tighter control of graphics memory with its new driver model, and is thus able to provide virtual memory support for the graphics memory subsystem. What this means is that games no longer need to worry about running out of graphics memory. When software needs to write something to local memory and local memory is full, Windows will be able to evict something from the graphics card and place it in system memory (this is called paging) until it is needed again. This happens without the software's intervention or knowledge. If system memory becomes full, data will be pushed out to the hard drive. Of course, if that happens, performance will definitely suffer.
Virtual memory isn't as much a performance enhancing tool as it is a way to remove the burden on the developer to manage memory usage around a hard limit of available space. Certainly, lots of paging will degrade performance, but lower performance is generally better than a crash. On the flip side, it is possible that virtual memory could increase performance by effectively replacing local graphics memory size with unused PCIe bandwidth. This has been the idea behind TurboCache and HyperMemory, but with the added advantage that the graphics driver doesn't need to worry about object or texture management between local and system memory.
Engineers have wanted virtualized graphics memory for years, as operating on really huge data sets is significantly easier when the software developer doesn't have to manage moving data in and out of graphics memory by hand. We've seen some limited benefits of utilizing both local and system memory on low-memory TurboCache and HyperMemory cards. With game developers reaching toward ever larger data sets, high end parts will soon begin to benefit from virtualized graphics memory as well. Building the hardware to accommodate the possibility of higher latencies due to paging, and allowing the OS to manage all the memory in the system, will definitely help developers focus on building better games rather than better memory managers. That's not to say that memory management won't still be important to game developers: making sure space and bandwidth are used efficiently is an important factor in performance, but the ability to forget about hard limits in local memory will make it easier to take one efficient approach regardless of onboard memory.
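To make the eviction hierarchy above more concrete, here is a minimal sketch of how a driver-level pager might shuffle allocations between local video memory, system memory, and disk. This is purely illustrative Python: the class, the budgets, and the LRU policy are assumptions made for the example, not how WDDM actually implements graphics memory virtualization.

```python
# Illustrative sketch only: a toy LRU pager mimicking the eviction hierarchy
# described above (local video memory -> system memory -> disk). Names and
# policy are hypothetical; this is not the WDDM implementation.
from collections import OrderedDict

class ToyVideoMemoryManager:
    def __init__(self, local_budget_mb, system_budget_mb):
        self.local_budget = local_budget_mb
        self.system_budget = system_budget_mb
        self.local = OrderedDict()    # allocation name -> size in MB, kept in LRU order
        self.system = OrderedDict()   # allocations paged out to system RAM over PCIe
        self.disk = {}                # allocations spilled all the way to the hard drive

    def _used(self, pool):
        return sum(pool.values())

    def touch(self, name, size_mb):
        """The application references an allocation; page it in, evicting as needed."""
        # Bring the allocation back from system memory or disk if it was paged out.
        self.system.pop(name, None)
        self.disk.pop(name, None)
        self.local[name] = size_mb
        self.local.move_to_end(name)          # mark as most recently used
        # Evict least recently used allocations until the local budget fits.
        while self._used(self.local) > self.local_budget and len(self.local) > 1:
            victim, vsize = self.local.popitem(last=False)
            self.system[victim] = vsize       # first stop: system memory
            while self._used(self.system) > self.system_budget:
                spilled, ssize = self.system.popitem(last=False)
                self.disk[spilled] = ssize    # slowest tier: hard drive

# The game just "touches" resources; the manager decides where they live.
vmm = ToyVideoMemoryManager(local_budget_mb=512, system_budget_mb=1024)
vmm.touch("terrain_textures", 300)
vmm.touch("character_models", 200)
vmm.touch("shadow_maps", 128)                 # forces the oldest data out of local memory
print(sorted(vmm.local), sorted(vmm.system), sorted(vmm.disk))
```

The point of the sketch is simply that the application only ever references resources and never sees the hard limit; the decisions about where data lives are made below it, by the OS and driver.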
Hardware Virtualization
Lately, all the big boys of computing have been infatuated with the idea of virtualization. It makes a whole lot of sense, really. With the advent of multi-core CPUs, AMD and Intel need to find ways to take full advantage of their processing power. Single thread execution time will never disappear as a factor in computing, and some algorithms just can't be parallelized.
Obviously, encouraging users to multitask is a simple way to provide a benefit to multi-core computing. The next step is to encourage developers to write highly multithreaded applications. Beyond that is allowing the user to run multiple operating systems on one set of hardware. One example of how this may be beneficial is using a single system as a normal PC and as a home theater / DVR box at the same time. Another example is one we've already seen: Mac users running both Windows and OS X on Intel-based Macs using a virtual machine manager like Parallels.
In order to really achieve the capabilities hardware providers would like to promote, more work must be done by hardware, software, and operating system providers. One of the major advances necessary is the virtualization of the graphics subsystem. With DirectX 10 and the new WDDM (Windows Display Driver Model), graphics hardware is required to support virtualization. This is not a simple request, as games will no longer be guaranteed exclusive access to the hardware while running. We could potentially share game rendering with something like physics calculations on the same GPU, or run a Folding@Home GPU client in the background while we play a game. At the extreme, multiple full-screen 3D applications could be running concurrently.
Drivers and hardware will have to support context switching on a massive scale due to the huge number of pipelines and registers supported in DX10 class hardware. With the advent of features like TurboCache and HyperMemory (and now graphics memory virtualization), hardware developers are already prepared to handle much larger latencies than we've seen in the past. The ability to preempt a process on the GPU will only increase the potential latency that will need to be addressed.
This is another major step in bringing the GPU closer in functionality to the CPU. More attention must be paid not only to instruction and thread scheduling, but the scheduling of multiple programs. This is no small task when such a high number of pipelines need to be managed. We are very interested in discovering how well NVIDIA has implemented this feature, but we won't be able to test this until we have access to an operating system, API, and software that support it as well.
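As a rough illustration of what preempting and resuming GPU work looks like at a conceptual level, here is a small Python sketch of round-robin time slicing between several contexts. The context names, the time-slice length, and the save/restore model are assumptions made for the example; real WDDM scheduling tracks far more state (pipelines, registers, command buffers, priorities) and lives in the driver and hardware, not in application code.

```python
# Illustrative sketch only: round-robin preemption among several GPU "contexts"
# (e.g. a game, a physics job, and a background compute client).
from dataclasses import dataclass, field
from collections import deque

@dataclass
class GpuContext:
    name: str
    remaining_work_ms: float
    saved_state: dict = field(default_factory=dict)  # stand-in for register/pipeline state

def schedule(contexts, time_slice_ms=2.0):
    """Time-slice the GPU between contexts, saving/restoring state on each switch."""
    run_queue = deque(contexts)
    timeline = []
    while run_queue:
        ctx = run_queue.popleft()
        ctx.saved_state["restored"] = True            # context switch in: restore state
        ran = min(time_slice_ms, ctx.remaining_work_ms)
        ctx.remaining_work_ms -= ran
        timeline.append((ctx.name, ran))
        if ctx.remaining_work_ms > 0:
            ctx.saved_state["saved"] = True           # preempt: save state, requeue
            run_queue.append(ctx)
    return timeline

# Three concurrent GPU clients sharing one device.
jobs = [GpuContext("game_frame", 5.0),
        GpuContext("physics_pass", 3.0),
        GpuContext("folding_kernel", 4.0)]
for name, ms in schedule(jobs):
    print(f"{name} ran for {ms:.1f} ms")
```

Every preemption point in a model like this is a place where the hardware must tolerate added latency, which is why the paging and latency-hiding work described above matters for virtualization as well.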
111 Comments
JarredWalton - Wednesday, November 8, 2006 - link
Page 17:"The dual SLI connectors are for future applications, such as daisy chaining three G80 based GPUs, much like ATI's latest CrossFire offerings."
Using a third GPU for physics processing is another possibility, once NVIDIA begins accelerating physics on their GPUs (something that has apparently been in the works for a year or so now).
Missing Ghost - Wednesday, November 8, 2006 - link
So it seems like by subtracting the highest 8800 GTX SLI power usage result from the one for the 8800 GTX single card, we can conclude that the card can use as much as 205W. Does anybody know if this number could increase when the card is used in DX10 mode?
JarredWalton - Wednesday, November 8, 2006 - link
Without DX10 games and an OS, we can't test it yet. Sorry.
JarredWalton - Wednesday, November 8, 2006 - link
Incidentally, I would expect the added power draw in SLI comes from more than just the GPU. The CPU, RAM, and other components are likely pushed to a higher demand with SLI/CF than when running a single card. Look at FEAR as an example; here are the power differences for the various cards. (Oblivion doesn't have X1950 CF numbers, unfortunately.)
X1950 XTX: 91.3W
7900 GTX: 102.7W
7950 GX2: 121.0W
8800 GTX: 164.8W
Notice how in this case the X1950 XTX appears to use less power than the other cards, but that's clearly not the case in single GPU configurations, where it draws more than everything besides the 8800 GTX. Here are the Prey results as well:
X1950 XTX: 111.4W
7900 GTX: 115.6W
7950 GX2: 70.9W
8800 GTX: 192.4W
So there, GX2 looks like it is more power efficient, mostly because QSLI isn't doing any good. Anyway, simple subtraction relative to dual GPUs isn't enough to determine the actual power draw of any card. That's why we presented the power data without a lot of commentary - we need to do further research before we come to any final conclusions.
IntelUser2000 - Wednesday, November 8, 2006 - link
It looks like adding SLI uses +170W more power. You can see how significant the video card is in terms of power consumption. It blows the Pentium D away by a couple of times.
JoKeRr - Wednesday, November 8, 2006 - link
Well, keep in mind the inefficiency of the PSU, generally around 80%, so as overall power draw increases, the marginal loss of power increases a lot as well. If you actually multiply by 0.8, it gives about 136W. I suppose the power draw is from the wall.
DerekWilson - Thursday, November 9, 2006 - link
Max TDP of G80 is at most 185W -- NVIDIA revised this to something in the 170W range, but we know it won't get over 185 in any case. But games generally don't enable a card to draw max power ... 3DMark, on the other hand ...
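For reference, here is a short sketch of the back-of-the-envelope arithmetic being traded in the comments above: taking a wall-power delta and scaling it by an assumed ~80% PSU efficiency to estimate card-side draw. The 170W delta and the 80% figure are the commenters' assumptions, not measured card-level numbers.

```python
# Sketch of the back-of-the-envelope arithmetic from the comments above.
# The ~170 W SLI delta and ~80% PSU efficiency are assumptions from the thread.
def card_side_estimate(wall_delta_watts, psu_efficiency=0.80):
    """Convert a wall-power delta into a rough estimate of DC power drawn by the card."""
    return wall_delta_watts * psu_efficiency

sli_delta_at_wall = 170.0                       # approx. extra wall draw from a second 8800 GTX
print(card_side_estimate(sli_delta_at_wall))    # ~136 W

# Caveat from the thread: the wall delta also includes extra CPU/RAM/chipset load,
# so simple subtraction overstates what the second card alone draws.
```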
photoguy99 - Wednesday, November 8, 2006 - link
Isn't 1920x1440 a resolution that almost no one uses in real life? Wouldn't 1920x1200 apply to many more people?
It seems almost all 23" and 24" monitors, and many high-end laptops, have 1920x1200.
Yes, we could interpolate benchmarks, but why bother when no one uses 1440 vertical?
Frallan - Saturday, November 11, 2006 - link
Well, I have one more suggestion for a resolution. Full HD is 1920*1080 - that is sure to be found in a lot of homes in the future (after X-mas, anyone? ;0) ) on large LCDs - I believe it would be a good idea to throw that in there as well. Especially right now, since loads of people will have to decide how to spend their money. The 37" Full HD is a given, but on what system will I be gaming: PS3/Xbox/PC... Please advise.
JarredWalton - Wednesday, November 8, 2006 - link
This should be the last time we use that resolution. We're moving to LCD resolutions, but Derek still did a lot of testing (all the lower resolutions) on his trusty old CRT. LOL