GeForce 8800 Roundup: The Best of the Best
by Josh Venning on November 13, 2006 11:04 AM EST
Back when a new Intel chipset launch meant excitement and anticipation, we were always impressed by the widespread availability of motherboards based on the new chipset on the day of announcement. These launches with immediate availability were often taken for granted, and it wasn't until we encountered a barrage of paper launches that availability really became a topic worth discussing.
It wasn't too long ago that both ATI and NVIDIA were constantly paper launching new graphics products, but since that unfortunate year both companies have sought to maintain these "hard launches" with immediate retail availability. NVIDIA has done a better job of ensuring widespread availability than ATI, and last week's launch of the GeForce 8800 series is a perfect example of just that.
Weeks before our G80 review went live we were receiving samples of 8800 GTX and GTS GPUs from NVIDIA's board manufacturers, all eager to get their new product out around the time of NVIDIA's launch. It's simply rare that we see that sort of vendor support surrounding any ATI GPU launch these days, and obviously it's a fact that NVIDIA is quite proud of.
The G80 itself is reason enough for NVIDIA to be proud; widespread availability is merely icing on the cake. As we saw in our review of the 681 million transistor GPU, even a single GeForce 8800 GTX is able to outperform a pair of 7900 GTX or X1950 XTX cards running in SLI or CrossFire respectively. The chip is fast and on average an 8800 GTX seems to draw only 8% more power than ATI's Radeon X1950 XTX, so overall performance per watt is quite strong.
The architecture of G80 is built for the future, and as the first DirectX 10 GPU these cards will be used to develop the next generation of games. Unlike with brand new architectures of DirectX generations past, you don't need rewritten games to take advantage of G80. Thanks to its unified shader architecture, this massively parallel powerhouse is able to fully utilize its execution resources regardless of what sort of shader code you're running on it.
NVIDIA's timing with the 8800 launch is impeccable, as it is the clear high end choice for PCs this holiday season. With no competition from ATI until next year, NVIDIA is able to enjoy the crown for the remaining weeks of 2006. If you are fortunate enough to be in the market for an 8800-class card this holiday season, we present to you a roundup of some of the currently available GeForce 8800 graphics cards.
We've got a total of seven G80 based cards in today's roundup, six of which are GeForce 8800 GTX cards along with a single 8800 GTS. All seven G80s are clocked at NVIDIA's stock speeds, which are 1.35GHz/575MHz/900MHz (shader/core/memory) for the GTX and 1.2GHz/500MHz/800MHz for the GTS. Apparently NVIDIA isn't allowing vendor-overclocked 8800 GTX cards (according to one of our OEM contacts), so you can expect all 8800s to perform more or less the same at stock speeds. You will see differences, however, in the cooling solutions implemented by the various manufacturers, which will in turn influence the overclocking capability of these cards. It's worth mentioning that even at stock speeds, these 8800s are fast... very fast. With the power of these cards and the good overclocking we've seen, we expect a return to vendor-sanctioned overclocking with NVIDIA GPUs at some point in the future, but exactly when this will happen is hard to say.
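For a sense of what those stock memory clocks mean in practice, here is a quick back-of-the-envelope bandwidth calculation. The 384-bit (GTX) and 320-bit (GTS) memory bus widths are the commonly published G80 specifications rather than figures from this roundup, so treat them as assumptions.

```python
# Back-of-the-envelope peak memory bandwidth from the quoted stock clocks.
# The bus widths (384-bit GTX, 320-bit GTS) are commonly published G80
# specs assumed here, not figures taken from this roundup.

def bandwidth_gb_per_s(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """GDDR3 transfers data on both clock edges, so the effective rate is 2x."""
    transfers_per_s = mem_clock_mhz * 1e6 * 2
    bytes_per_transfer = bus_width_bits / 8
    return transfers_per_s * bytes_per_transfer / 1e9

print(f"8800 GTX: {bandwidth_gb_per_s(900, 384):.1f} GB/s")  # ~86.4 GB/s
print(f"8800 GTS: {bandwidth_gb_per_s(800, 320):.1f} GB/s")  # ~64.0 GB/s
```

Under those assumptions the GTX ends up with roughly 35% more peak memory bandwidth than the GTS, coming from both the wider bus and the higher memory clock.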
All of the cards in this roundup are fully HDCP compliant thanks to NVIDIA's NVIO chip in combination with the optional crypto-ROM key found on each of the boards. HDMI outputs are still not very common on PC graphics cards and thus HDCP is supported over DVI on each card. Coupled with an HDCP compliant monitor, any of these 8800s will be able to play full resolution HD-DVD or Blu-ray movies over a digital connection where HDCP is required.
Comments
JarredWalton - Monday, November 13, 2006 - link
It appears Oblivion isn't fully able to use all the SPs at present. The stock 8800 GTX should still have about 17% more potential core performance, although maybe not? If the SPs run at 1.35 GHz, what runs at 575 MHz? Or in the case of the OC'ed GTS, at 654 MHz? It could be they have a similar number of ROPs or some other logic that somehow makes the core clock more important in some cases. Or it could just be that the drivers need more optimizations to make the GTX outperform the GTS in all games. Obviously Oblivion isn't GPU bandwidth limited; beyond that, more testing will need to be done.
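As a rough sanity check on the numbers in that comment, here is a minimal back-of-the-envelope sketch. It assumes the commonly published shader counts of 128 (GTX) and 96 (GTS), and that the overclocked GTS keeps its stock 1.2GHz shader clock; neither assumption comes from the article itself.

```python
# Rough theoretical ratios for a stock 8800 GTX vs. the overclocked GTS from
# the comment above. Shader counts (128 GTX / 96 GTS) are commonly published
# G80 figures assumed here; the GTS shader clock is assumed to stay at stock.

gtx_stock = {"sps": 128, "shader_mhz": 1350, "core_mhz": 575}
gts_oc    = {"sps": 96,  "shader_mhz": 1200, "core_mhz": 654}

shader_ratio = (gtx_stock["sps"] * gtx_stock["shader_mhz"]) / \
               (gts_oc["sps"] * gts_oc["shader_mhz"])
core_ratio = gtx_stock["core_mhz"] / gts_oc["core_mhz"]

print(f"Shader throughput, GTX vs OC'ed GTS: {shader_ratio:.2f}x")  # ~1.50x
print(f"Core clock, GTX vs OC'ed GTS:        {core_ratio:.2f}x")    # ~0.88x
```

The gap between those two ratios is essentially the point above: if a game leans on the ROPs or other core-clocked logic rather than on the shader array, an overclocked GTS can close in on a stock GTX despite a large theoretical shader deficit.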
dcalfine - Monday, November 13, 2006 - link
What about the liquid-cooled BFG 8800 GTX? Any news on that? I'd be interested in seeing how it compares in speed, overclockability, temperature and power consumption.
Keep up the good work though!
shamgar03 - Monday, November 13, 2006 - link
I ordered one; hopefully it will do well in the overclocking section. I am a bit concerned with the differences in overclocking the cards from different manufacturers. Does anyone know the cause of that? I mean, if two cards are the exact same as the reference except for the sticker, you have to wonder if there is a bit of variance in the quality of semiconductor production. Maybe favorite distributors get the better cores? Any thoughts on what causes these differences?
yyrkoon - Monday, November 13, 2006 - link
I assume this text about the Sparkle card is in reference to its inability to overclock? In my opinion, I would rather use this card, or another card that ran equally well (or better) and remained as cool (or cooler). I don't know about you guys, or anyone else, but the thought of a graphics card approaching 90C at load (barring the Sparkle) scares the crap out of me, and if this is a sign of things to come, then I'm not sure what my future options are. Let's not forget about 300+ watts under load...
Just as the heat and power consumption are an issue (once again, in my opinion), equally disturbing is the brass it takes to charge $650 USD for a first-generation card that obviously needs a lot of work. Yes, it would be nice to own such a card for pumping out graphics better than anything previous; however, I personally would rather pay $650 for something that ran a lot cooler and offered just as much performance, or better.
Now, to the guy talking about Vista RC2 drivers from nVidia... do you really expect someone to keep up on drivers for a "product" that is basically doomed to die a quiet death? "RC2"... release candidate... as far as I'm aware, the last I checked, a lot of the graphics features of Vista in these betas were not even implemented. This means that quite possibly the drivers between RC2 and release could be a good bit different. Personally, I'd rather have nVidia work on the finished product drivers vs. the release candidate drivers any day of the week. Aside from yourself, I hardly think anyone cares if you want to run RC2 until May 2007 (legally).
Griswold - Thursday, November 23, 2006 - link
I fail to see your issue with temperatures. These cards were designed to run safely at these temperatures. Just because the figures are higher than what you've become used to over the years doesn't mean it's bad.
RMSistight - Monday, November 13, 2006 - link
How come the Quad SLI setup was not included in the tests? Quad SLI owners want to know.
DigitalFreak - Monday, November 13, 2006 - link
LOL. You really want to see how bad a $1200 setup will get spanked by a single card that costs half as much? You must be a masochist.
penga - Monday, November 13, 2006 - link
Hey, I am always interested in the most exact wattage number a card uses, and I find it hard to do the math from the given total system power consumption and work out how much only the card eats. So my idea was: why not use a mainboard with an integrated graphics card and compare the numbers? Hope you get the idea. What do you think, wouldn't that work?
DerekWilson - Monday, November 13, 2006 - link
The only way to do this would be to place extremely low-resistance (but high-current) shunt resistors in the power lines AND build a PCIe riser card to measure the power supplied by the motherboard while the system is running at load. There isn't a really good way to report the power of just the card any other way -- using an onboard graphics card wouldn't do it because the rest of the system would be using a different amount of power as well (different cards require the system to do different types of work -- a higher-powered graphics card will cause the CPU, memory, and chipset to all work harder and draw more power than a lower-performance card).
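To make that measurement approach concrete, here is a minimal sketch of the arithmetic such a rig would involve. The shunt value, rail list, and readings below are purely hypothetical placeholders, not measurements from this roundup.

```python
# Minimal sketch of card-only power from shunt-resistor measurements.
# R_SHUNT and the readings below are hypothetical placeholders; a real rig
# would log the PCIe slot rails (via a riser card) and the external PCIe
# power connector(s) while the card runs at load.

R_SHUNT = 0.005  # ohms -- an extremely low-resistance, high-current shunt

# (rail voltage, measured voltage drop across that rail's shunt)
readings = [
    (12.0, 0.045),  # external 6-pin PCIe connector, 12V
    (12.0, 0.020),  # PCIe slot 12V, measured through the riser card
    (3.3,  0.004),  # PCIe slot 3.3V
]

total_watts = 0.0
for rail_v, drop_v in readings:
    current_a = drop_v / R_SHUNT                  # Ohm's law: I = V / R
    total_watts += (rail_v - drop_v) * current_a  # power delivered downstream of the shunt

print(f"Estimated card-only power: {total_watts:.1f} W")
```

In practice the shunt drops would be logged continuously while the card runs a load, and the slot rails are only reachable through a riser card as described above.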
yyrkoon - Monday, November 13, 2006 - link
Derek, I think he was asking: "why not use an integrated graphics motherboard as a reference system for power consumption tests?" However, it should be obvious that this wouldn't be a good idea from a game benchmark perspective, in that it's been my experience that integrated graphics mainboards don't normally perform as well and often use dated technology/components. Although I haven't really paid that much attention to detail, I would assume you guys use the "best" motherboard for gaming benchmarks, and probably use the same mainboard for the rest of your tests.