NVIDIA's GeForce 8800 (G80): GPUs Re-architected for DirectX 10
by Anand Lal Shimpi & Derek Wilson on November 8, 2006 6:01 PM EST
Final Words
Back when Sony announced the specifications of the PlayStation 3, everyone asked if it meant the end of PC gaming. After all, Cell looked very strong and NVIDIA's RSX GPU had tremendous power. We asked NVIDIA how long it would take until we saw a GPU faster than the RSX. Their answer: by the time the PS3 ships. So congratulations to NVIDIA for making the PS3 obsolete before it ever shipped, as G80 is truly a beast.
A single GeForce 8800 GTX is more powerful overall than a 7900 GTX SLI configuration and even NVIDIA's mammoth Quad SLI. Although it's no longer a surprise to see a new generation of GPU outperform the previous generation in SLI, the sheer performance G80 attains is still breathtaking. Being able to run modern games at 2560x1600 at the highest in-game detail settings completely changes the PC gaming experience. It's an expensive proposition, sure, but it's like no other; games look so much better on a 30" display at 2560x1600 that playing at 1600x1200 seems merely "ok" afterwards. In fact, gaming at 2560x1600 with all the quality settings cranked up in every game we tried impressed us even more than the hardware itself, and that is saying quite a lot. And in reality, that's what it's all about anyway: delivering quality and performance at levels never before thought possible.
Architecturally, G80 is a gigantic leap from the previous generation of GPUs. It's the type of leap in performance we saw with the Radeon 9700 Pro, and launches of that caliber are rare. Like the 9700 Pro, G80 lets us enable features that improve image quality well beyond the previous generation, and it lets us run games smoothly at resolutions we previously couldn't hope for. And, like the 9700 Pro, the best is yet to come.
With developers much more acclimated to programmable shader hardware, we expect to see a faster ramp in the availability of advanced features enabled by DirectX 10 class hardware. This is more because of the performance improvements of DX10 than anything else: game developers can create just about the same effects in SM3.0 that they can with SM4.0. The difference is that DX9 performance would be so low that the features wouldn't be worth implementing. This is different from the DX8 to DX9 transition, where fully programmable shaders enabled a new class of effects. This time, DX10 simply removes the speed limit and straps on afterburners. The only fly in the ointment for DirectX 10 is the requirement that users run Windows Vista. Unfortunately, that means developers are going to be stuck supporting both DX9 and DX10 hardware in their titles for some time, unless they simply want to eliminate Windows XP users as a potential market.
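To make that dual-path burden concrete, here is a minimal sketch (our own illustration, not code from any shipping engine) of the runtime decision a developer faces: attempt to create a Direct3D 10 device, which only succeeds under Windows Vista, and fall back to a Direct3D 9 path for everyone else.

```cpp
// Minimal sketch of a DX10-with-DX9-fallback renderer init.
// Device setup details are elided; error handling is stripped for brevity.
#include <windows.h>
#include <d3d9.h>
#include <d3d10.h>

bool InitRenderer()
{
    ID3D10Device* dev10 = NULL;
    // This call can only succeed on Windows Vista with DX10-class hardware.
    if (SUCCEEDED(D3D10CreateDevice(NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL,
                                    0, D3D10_SDK_VERSION, &dev10)))
    {
        // ... set up the SM4.0 (DX10) render path ...
        return true;
    }

    // Fall back to the DX9/SM3.0 path that Windows XP users still require.
    IDirect3D9* d3d9 = Direct3DCreate9(D3D_SDK_VERSION);
    if (d3d9 == NULL)
        return false;
    // ... CreateDevice() and set up the SM3.0 render path ...
    return true;
}
```

Behind that single branch, every effect in the game then needs both an SM4.0 and an SM3.0 implementation, which is exactly the maintenance cost developers are unhappy about.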
Much of G80's feature set can be taken advantage of through OpenGL on Windows XP today. Unfortunately, OpenGL has fallen out of favor in games these days, but there are still a few developers who cling to its clean interface and extensibility. The ability to make use of DX10 class features is here today for those who wish to do so.
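For the curious, here's a rough sketch (ours, not NVIDIA's sample code, and assuming a current GL context has already been created) of how an XP OpenGL application could probe for those DX10-class capabilities; the extension names are the EXT extensions NVIDIA exposes on G80 for geometry shaders, SM4.0-style shader ops, and texture arrays.

```cpp
// Probe the driver's extension string for G80's DX10-class OpenGL features.
#include <windows.h>
#include <GL/gl.h>
#include <cstdio>
#include <cstring>

bool HasExtension(const char* name)
{
    // Classic extension query; a simple substring check is fine for a sketch.
    const char* all = (const char*)glGetString(GL_EXTENSIONS);
    return all != NULL && strstr(all, name) != NULL;
}

void ReportG80Features()
{
    const char* wanted[] = {
        "GL_EXT_geometry_shader4", // geometry shaders (the DX10 headline feature)
        "GL_EXT_gpu_shader4",      // SM4.0-style integer ops in shaders
        "GL_EXT_texture_array",    // texture arrays
    };
    for (size_t i = 0; i < sizeof(wanted) / sizeof(wanted[0]); ++i)
        printf("%s: %s\n", wanted[i], HasExtension(wanted[i]) ? "yes" : "no");
}
```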
That's not to say that DX9 games won't see benefits from NVIDIA's new powerhouse. Everything we've tested here shows incredible scaling on G80 and proves that a unified architecture is the way forward in graphics. More complex SM3.0 code will run faster on G80 than anything we've seen on G70 and R580, and we certainly hope developers will take advantage of that and start releasing games with the option to enable unheard-of detail.
The bottom line is that we've got an excellent new GPU that enables incredible levels of performance and quality. And NVIDIA is able to do this while using a reasonable amount of power for the performance gained (despite requiring two PCIe power connectors per 8800 GTX). The chip is huge in terms of both transistor count and die area. Our estimates based on the wafer shots NVIDIA provided us indicate that the 681 million transistor G80 die is somewhere between 480 and 530 mm^2 at 90nm. This leaves NVIDIA with the possibility of a spring refresh part based on TSMC's 80nm half-node process, which could enable not only better prices but higher performance and lower power as well.
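Some quick back-of-the-envelope math on those numbers, as a sketch under the assumption of ideal linear scaling (which real process shrinks rarely achieve):

```cpp
// Die math for the figures quoted above. The 480-530 mm^2 range and the
// 681M transistor count come from the article; the ideal-scaling assumption
// for the 80nm half-node is ours.
#include <cstdio>

int main()
{
    const double transistors = 681e6;
    const double area_lo = 480.0, area_hi = 530.0; // mm^2 at 90nm

    // Implied density at 90nm: roughly 1.28 - 1.42 M transistors per mm^2.
    printf("Density: %.2f - %.2f M transistors/mm^2\n",
           transistors / 1e6 / area_hi, transistors / 1e6 / area_lo);

    // An ideal linear shrink to 80nm scales area by (80/90)^2, about 0.79,
    // putting a hypothetical 80nm G80 in the neighborhood of 380-420 mm^2.
    const double scale = (80.0 / 90.0) * (80.0 / 90.0);
    printf("Ideal 80nm die: %.0f - %.0f mm^2\n",
           area_lo * scale, area_hi * scale);
    return 0;
}
```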
While we weren't able to overclock the shader core of our G80 parts, NVIDIA has stated that shader core overclocking is coming. Playing around with the new nTune, we found that overclocking the core clock does impact performance; we'll talk more about this in our retail product review, to be posted in the coming days.
With G80, NVIDIA is solidly in a leadership position, and now we play the waiting game for ATI's R600 to arrive. One thing is for sure: if you were thinking about building a high-end gaming system this holiday season, you only need to consider one card.
Comments
JarredWalton - Wednesday, November 8, 2006 - link
The text is basically complete, and minor spelling issues aren't going to change the results. Obviously, proofing 29 pages of article content is going to take some time. We felt our readers would be a lot more interested in getting the content now rather than waiting even longer for me to proof everything. I know the vast majority of readers don't bother to comment on spelling and grammar issues, but my post was to avoid the comments section turning into a bunch of short posts complaining about errors that will be corrected shortly. :)

Iger - Wednesday, November 8, 2006 - link
Pff, of course we would! If I wanted to read a novel, I would find a book! Results first - proofing later... if ever :) Thanks for the article!

JarredWalton - Wednesday, November 8, 2006 - link
Did I say an hour? Okay, how about I just post here when I'm done reading/editing? :)

JarredWalton - Wednesday, November 8, 2006 - link
Okay, I'm done proofing/editing. If you still see errors, feel free to complain. Like I said, though, try to keep them in this thread.
--Jarred
LuxFestinus - Thursday, November 9, 2006 - link
Pg. 3 under Unified Shaders should read as follows:
"Until now, building a GPU with unified shaders would not have *been* desirable, let alone practical, but Shader Model 4.0 lends itself well to this approach."
Good try though. ;)
shabby - Wednesday, November 8, 2006 - link
$600 for the GTX and $450 for the GTS is pretty good seeing how much they crammed into the GPU; makes you wonder why the previous gen topped 650 bucks at times.

dcalfine - Wednesday, November 8, 2006 - link
How does the 8800GTX compare to the 7950GX2? Not just in FPS, but also in performance/watt?

dcalfine - Wednesday, November 8, 2006 - link
Ignore ^^^ sorry
Hot card by the way!
neogodless - Wednesday, November 8, 2006 - link
I know you touched on this, but I assume that DirectX 10 is still not available for your testing platform, Windows XP Professional SP2, and additionally no games have been released for that platform. Is this correct? If so:
Will DirectX 10 be made available for Windows XP?
Will you publish a new review once Vista, DirectX 10 and the new games are available?
Can we peek into the future at all now?
JarredWalton - Wednesday, November 8, 2006 - link
DX10 will be Vista-only according to Microsoft. What that means, according to some game developers, is that DX10 support is going to ramp somewhat slowly, and it's also going to be a major headache, because for the next 3-4 years they will pretty much be required to maintain a DX9 rendering path alongside DX10.