NVIDIA's GeForce 7800 GTX Hits The Ground Running
by Derek Wilson on June 22, 2005 9:00 AM EST - Posted in GPUs
No More Memory Bandwidth
Again, we have a 256-bit (4x 64-bit) memory interface to GDDR3 memory. The local graphics memory setup is not significantly different from that of the 6800 series of cards and only runs slightly faster, at a 1.2 GHz effective data rate. This will work out in NVIDIA's favor as long as newer games continue to put a heavier burden on pixel shader processing. NVIDIA sees texture bandwidth as outweighing color and z bandwidth in the not-too-distant future. This doesn't mean that the quest for ever-increasing bandwidth will stop; it just means that the reasons we will need more bandwidth will change.

A good example of the changing needs of graphics cards is Half-Life 2. While the game runs very well even on older graphics cards like the 9800 Pro, the design is such that increased memory bandwidth is far less important than having more shader processing power. This is why we see the 6600GT cards significantly outperform the 9800 Pro. Even more interesting is that in our testing, we found that enabling 4xAA on a 9800 Pro didn't affect HL2 performance much at all, while increasing the resolution from 1024x768 to 1280x1024 had a substantial impact on frame rates. If the HL2 model is a good example of the future of 3D engines, NVIDIA's decision to increase pixel processing power while leaving memory bandwidth for the future makes a lot of sense.
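To put a rough number on that interface: peak theoretical bandwidth is just bus width times effective data rate. A minimal back-of-the-envelope sketch (our own arithmetic, not an NVIDIA-supplied figure):

```python
# Peak theoretical memory bandwidth = bus width (bytes) x effective data rate.
# Back-of-the-envelope arithmetic, not an NVIDIA-supplied figure.
bus_width_bytes = 256 / 8      # 256-bit interface (4x 64-bit channels)
effective_rate = 1.2e9         # 1.2 GHz effective GDDR3 data rate

bandwidth = bus_width_bytes * effective_rate / 1e9
print(f"{bandwidth:.1f} GB/s")  # 38.4 GB/s, vs. 35.2 GB/s for the
                                # 6800 Ultra's 1.1 GHz effective rate
```

That works out to roughly a 9% bump in raw bandwidth over the 6800 Ultra, which underlines just how much of G70's gain has to come from the shader side.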
On an interesting side note, the performance tests in this article are mostly run at 1600x1200 and higher resolutions. Memory usage at 2048x1536 with 32-bit color and z-buffer runs a solid 144MB for double-buffered rendering with 4xAA. This makes a 256MB card a prerequisite for this setup, but depending on the textures, render targets, and other local memory usage, 256MB may still be a little short. PCI Express helps a little to alleviate any burden placed on system memory, but it is conceivable that some games could get choppier when swapping large textures, normal maps, and the like in and out.
We don't feel that ATI's 512MB X850 really brings anything necessary to the table, but with this generation, we could start to see a real use for 512MB of local memory. MRTs, larger textures, normal maps, vertex textures, huge resolutions, and a lack of hardware compression for fp16 and fp32 textures all mean that we are on the verge of seeing games push memory usage way up. Processing these huge stores of data requires GPUs powerful enough to utilize them efficiently, and the G70 begins to offer that kind of power. For the majority of today's games, 256MB of RAM is fine, but moving into the future, it's easy to see how more would help.
In addition to these issues, a 512MB card would be a wonderful fit for Dual-Link DVI. This would make the part a nice companion to Apple's largest Cinema Display (whose resolution is currently beyond the maximum supported by the GeForce 7800 GTX). In case anyone is curious, a double-buffered, 4xAA, 32-bit color+z framebuffer at 2560x1600 is about 190MB.
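For anyone who wants to check those two figures, they work out exactly if the front buffer, back buffer, and z-buffer are all held at full 4x multisample size. That is an assumption on our part; a driver may keep a resolved front buffer instead, which would lower the total somewhat.

```python
# Framebuffer footprint assuming front color, back color, and z buffers
# are all stored at full multisample size (our assumption; a driver may
# keep a resolved front buffer instead, which would lower the total).
SAMPLES = 4        # 4xAA
BYTES = 4          # 32-bit color; z is also 32-bit
BUFFERS = 3        # front + back color (double buffered) + z

def framebuffer_mb(width: int, height: int) -> float:
    return width * height * SAMPLES * BYTES * BUFFERS / 2**20

print(framebuffer_mb(2048, 1536))  # 144.0 -> the 144MB figure above
print(framebuffer_mb(2560, 1600))  # 187.5 -> "about 190MB"
```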
In our briefings on G70, we were told that every part of the chip has been at least slightly updated from NV4x, but the general architecture and feature set remain the same. There have also been a couple of more significant updates, namely the increased performance of a single shader pipe and the addition of transparency antialiasing. Let's take a look at these factors now.
127 Comments
Diasper - Wednesday, June 22, 2005 - link
Oops, posted before I wrote anything. Some of the results are impressive; others aren't at all. In fact, the results seem to be all over the board - I suspect drivers are the culprit. Hopefully, as new drivers come out, we'll see some performance increases, or at least a more uniform distribution of good results.
Live - Wednesday, June 22, 2005 - link
Derek, get cracking - the graphs are all messed up! And the Transparency AA Performance section could use some info on which game it was tested with, plus some more commentary. I also think each benchmark warrants some comments for those of us who have a hard time remembering two numbers at the same time. Keep it simple, folks….

Johnmcl7 - Wednesday, June 22, 2005 - link
I agree something is wrong with these results. I thought they were odd, but when I got to the Enemy Territory ones, they seemed completely wrong - at 2048x1536 with 4xAA, the X850XT is apparently getting 104 fps, while the 6800 Ultra gets 48.3 and the SLI 6800 Ultras only get 34.6 fps! Especially bearing in mind that it's an OpenGL game.

John
rimshot - Wednesday, June 22, 2005 - link
This has got to be an error by AnandTech; all the other reviews show the 7800GTX in SLI hammering the 6800 Ultra in SLI at those same settings.

Lonyo - Wednesday, June 22, 2005 - link
The benchmarks all seem to be a load of crap. Check the Wolfenstein benchmarks.
The X850XT goes from 74fps @ 1600x1200 w/4xAA to 103fps @ 2048x1536 w/4xAA
A nearly 40% increase when the res gets turned up. Good one.
There also seem to be many other similar things which look like errors, but they could just be crappy nVidia drivers, or something wrong with SLI profiles.
Who knows, but there are definitely a lot of things here that look VERY odd/suspicious.
Dukemaster - Wednesday, June 22, 2005 - link
My Iiyama VMP 454 does 2048 no prob, so I'm game :p

vanish - Wednesday, June 22, 2005 - link
Oh, and in several of the benchmarks, the 6800U SLI more than doubles the performance of the single 6800U. Is that normal? I thought SLI gains were generally about 45% or so.

rimshot - Wednesday, June 22, 2005 - link
Is it just me, or is it a little strange that the 6800 Ultra SLI outperforms the 7800GTX SLI at 1600x1200 with 4xAA in every benchmark???

PrinceXizor - Wednesday, June 22, 2005 - link
No comment on the fact that in virtually every game it LOSES to the 6800 SLI at 1600x1200 with 4xAA? All the other scores look very impressive, but in this particular group of settings, the 6800 SLI eats it for lunch.
P-X