NVIDIA's GeForce 7800 GTX Hits The Ground Running
by Derek Wilson on June 22, 2005 9:00 AM EST - Posted in GPUs
No More Shader Replacement
The secret is all in compilation and scheduling. Now that NVIDIA has had more time to work with scheduling and profiling code on an already efficient and powerful architecture, they have an opportunity: this generation, rather than build a compiler to fit the hardware, they were able to take what they've learned and build their hardware to better fit a mature compiler already targeted at the architecture. All of this leads up to the fact that the 7800 GTX with current drivers does absolutely no shader replacement. That is quite a big deal in light of the fact that, just over a year ago, thousands of shaders were stored in the driver, ready for replacement on demand, on NV3x and even NV4x. It's quite an asset to have come this far with hardware and software in the relatively short amount of time NVIDIA has spent working with real-time compilation of shader programs.

All of these factors come together to mean that the hardware is busy more of the time, and getting more things done faster is what it's all about.
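For readers unfamiliar with the practice, shader replacement essentially means that the driver keeps a table of hand-tuned substitutes keyed to the shaders an application submits. The Python sketch below is a purely conceptual illustration, assuming a simple hash-keyed lookup; none of the names reflect NVIDIA's actual driver internals:

```python
import hashlib

# Hypothetical illustration only: a driver doing shader replacement keeps a
# table of hand-tuned shaders keyed by the application's original source.
HAND_TUNED_REPLACEMENTS = {
    # hash_of_app_shader: pre-optimized replacement source
}

def compile_with_replacement(shader_source: str) -> str:
    """Old-style path (NV3x era): swap in a hand-tuned shader if one exists."""
    key = hashlib.md5(shader_source.encode()).hexdigest()
    source = HAND_TUNED_REPLACEMENTS.get(key, shader_source)
    return run_compiler(source)

def compile_directly(shader_source: str) -> str:
    """7800 GTX-style path: trust the mature compiler, no lookup table."""
    return run_compiler(shader_source)

def run_compiler(source: str) -> str:
    # Stand-in for the driver's real-time shader compiler and scheduler.
    return f"compiled({len(source)} bytes)"

print(compile_directly("float4 main() : COLOR { return float4(1,0,0,1); }"))
```

The difference that matters here is that the second path relies entirely on the compiler doing a good job with whatever the application sends, which is exactly what NVIDIA is claiming the 7800 GTX drivers now do.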
So, NVIDIA is offering a nominal increase in clock speed to 430MHz, a little more memory bandwidth (a 256-bit memory bus running at a 1.2GHz data rate), 1.33x the vertex pipelines, 1.5x the pixel pipelines, and various increases in efficiency. These all work together to give us as much as double the performance in extreme cases. If that performance increase can actually be realized, we are looking at a pretty decent speed increase over the 6800 Ultra. Obviously, in the real world we won't be seeing a threefold performance increase in anything but a bad benchmark. In cases where games are CPU limited, we will likely see a much smaller increase in performance, but performance double that of the 6800 Ultra is entirely possible in very shader limited games.
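To put those multipliers in perspective, here is a quick back-of-the-envelope sketch in Python. The 6800 Ultra figures used for comparison (400MHz core, 1.1GHz memory data rate, 16 pixel pipelines) are the commonly cited reference specs rather than numbers from this article, so treat the output as a rough illustration:

```python
# Back-of-the-envelope comparison; the 6800 Ultra numbers are the commonly
# cited reference specs and may differ slightly from individual boards.
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_ghz: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_ghz

def pixel_rate_gpix(pipes: int, core_mhz: int) -> float:
    """Theoretical pixels shaded per second (billions), one pixel per pipe per clock."""
    return pipes * core_mhz / 1000

gtx_7800 = {"bw": mem_bandwidth_gbs(256, 1.2), "px": pixel_rate_gpix(24, 430)}
ultra_6800 = {"bw": mem_bandwidth_gbs(256, 1.1), "px": pixel_rate_gpix(16, 400)}

print(f"Memory bandwidth: {gtx_7800['bw']:.1f} vs {ultra_6800['bw']:.1f} GB/s "
      f"({gtx_7800['bw'] / ultra_6800['bw']:.2f}x)")   # ~38.4 vs 35.2 GB/s, ~1.09x
print(f"Pixel throughput: {gtx_7800['px']:.2f} vs {ultra_6800['px']:.2f} Gpixels/s "
      f"({gtx_7800['px'] / ultra_6800['px']:.2f}x)")   # ~10.3 vs 6.4, ~1.61x
```

The point is simply that peak memory bandwidth grows by less than 10% while theoretical pixel shading throughput grows by more than 60%, which is why the biggest gains should show up in shader limited rather than bandwidth limited situations.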
In fact, Epic reports that under certain Unreal Engine 3 tests, they currently see 2x to 2.4x improvements in framerate over the 6800 Ultra. Of course, UE3 is not finished yet and there won't be games out based on the engine for a while. We don't usually like reporting performance numbers from software that hasn't been released, but even if these numbers are higher than what we will see in a shipping product, it seems that NVIDIA has at least gotten it right for one developer's technology. We are very interested in seeing how next generation games will perform on this hardware. If we can trust these numbers at all, it looks like the performance advantage will only get better for the GeForce 7800 GTX until Windows Graphics Foundation 2.0 comes along and inspires new techniques beyond SM3.0 capabilities.
Right now, for each triangle that gets fed through the vertex pipeline, there are many pixels inside that triangle that need the attention of the pixel pipelines.
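A rough sketch of that imbalance, with an arbitrary triangle size chosen purely for illustration:

```python
# Illustrative only: shows why pixel work dwarfs vertex work for typical scenes.
def pixel_to_vertex_ratio(avg_pixels_per_triangle: float,
                          vertices_per_triangle: float = 3.0) -> float:
    """How many pixels need shading for every vertex transformed."""
    return avg_pixels_per_triangle / vertices_per_triangle

# If an average on-screen triangle covers, say, 30 pixels (an arbitrary number
# chosen for illustration), each vertex implies roughly 10 pixel-shader jobs.
print(pixel_to_vertex_ratio(30))   # 10.0
```

With numbers like these, the pixel pipelines end up with roughly an order of magnitude more work than the vertex units, which is why it makes sense for the 7800 GTX to carry 24 pixel pipelines against only 8 vertex units.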
Bringing It All Together
Why didn't NVIDIA build a part with unified shaders?
Every generation, NVIDIA evaluates alternative architectures, but at this time they don't feel that a unified architecture is a good match for the current PC landscape. We will eventually see a unified shader architecture from NVIDIA, but likely not until DirectX itself is focused around a unified shader architecture. At this point, vertex hardware doesn't need to be as complex or intricate as the pixel pipeline. As APIs develop more and more complex functionality, it will be advantageous for hardware developers to move towards a more generic and programmable shader unit that can easily adapt to any floating point processing need.

As pixel processing is currently more important than vertex processing, NVIDIA is separating the two in order to focus attention where it is due. Making hardware more generic usually makes it slower, while explicitly targeting a specific task can often improve performance a great deal.

When WGF 2.0 comes along and geometry shaders are able to dynamically generate vertex data inside the GPU, we will likely see an increased burden on vertex processing as well. Being able to generate vertex data programmatically will help remove the burden on the system of supplying all the model data to the GPU.
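As a rough illustration of why on-GPU generation helps, the sketch below compares the bytes a frame would push over the bus if the CPU had to submit every final vertex versus submitting only coarse geometry and letting the GPU amplify it. All of the numbers are arbitrary assumptions chosen to make the point, not measurements:

```python
# Illustrative only: compares bytes sent over the bus per frame when geometry
# is expanded on the CPU versus amplified on the GPU by a geometry shader.
VERTEX_SIZE_BYTES = 32          # position + normal + texcoord (an assumption)

def bus_traffic_mb(vertices_sent: int) -> float:
    return vertices_sent * VERTEX_SIZE_BYTES / (1024 * 1024)

coarse_vertices = 100_000       # what the application actually authors
amplification = 16              # extra vertices each input primitive spawns on-GPU

cpu_expanded = bus_traffic_mb(coarse_vertices * amplification)   # everything over the bus
gpu_amplified = bus_traffic_mb(coarse_vertices)                  # only coarse data over the bus

print(f"CPU-expanded: {cpu_expanded:.1f} MB/frame, GPU-amplified: {gpu_amplified:.1f} MB/frame")
# ~48.8 MB vs ~3.1 MB per frame with these assumed numbers
```

The absolute numbers are made up, but the shape of the result is the point: the more geometry the GPU can generate on its own, the less model data the system has to feed it every frame.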
127 Comments
VIAN - Wednesday, June 22, 2005 - link
"NVIDIA sees texture bandwidth as outweighing color and z bandwidth in the not too distant future." That was a quote from the article after saying that Nvidia was focusing less on Memory Bandwidth.Do these two statements not match or is there something I'm not aware of.
obeseotron - Wednesday, June 22, 2005 - link
These benchmarks are pretty clearly rushed out and wrong, or at least attributed to the wrong hardware. SLI 6800s show up faster than SLI 7800s in many benchmarks, in some cases much more than doubling single 6800 scores. I understand NDAs suck with the limited amount of time to produce a review, but I'd rather it not have been posted until the afternoon than have the benchmarks section ignored.

IronChefMoto - Wednesday, June 22, 2005 - link
#28 -- Mlittl3 can't pronounce Penske or terran properly, and he's giving out grammar advice? Sad. ;)

SDA - Wednesday, June 22, 2005 - link

QUESTION: Okay, all caps = obnoxious. But I do have a question. How was system power consumption measured? That is, was the draw of the computer at the wall measured, or was the draw on the PSU measured? In other words, did you measure how much power the PSU drew from the wall or how much power the components drew from the PSU?
Aikouka - Wednesday, June 22, 2005 - link
Wow, I'm simply amazed. I said to someone as soon as I saw this, "Wow, now I feel bad that I just bought a 6800GT ... but at least they won't be available for 1 or 2 months." Then I looked and saw that retailers already have them! I was shocked, to say the least.

RyDogg1 - Wednesday, June 22, 2005 - link
But my question was "who" was buying them. I'm a hardware goon as much as the next guy, but everyone knows that in 6-12 months, the next gen is out and the price is lower on these. I mean, the benches are presenting comparisons with cards that, according to the article, are close to a year old. Obviously some sucker lays down the cash, because the "premium" price is way too high for a common consumer. Maybe this is one of the factors that will lead to the Xbox 360/PS3 becoming the new gaming standard, as opposed to the Video Card market pushing the envelope.
geekfool - Wednesday, June 22, 2005 - link
What, no Crossfire benchies? I guess they didn't want NVIDIA to lose on their big launch day.

Lonyo - Wednesday, June 22, 2005 - link
The initial 6800U's cost lots because of price gouging. They were in very limited supply, so people hiked up the prices.
The MSRP of these cards is $600, and they are available.
MSRP of the 6800U's was $500, the sellers then inflated prices.
Lifted - Wednesday, June 22, 2005 - link
#24: In the Wolfenstein graph they obviously reversed the 7800 GTX SLI with the Radeon. They only reversed a couple of labels here and there, chill out. It's still VERY OBVIOUS which card is which just by looking at the performance!
WAKE UP SLEEPY HEADS.
mlittl3 - Wednesday, June 22, 2005 - link
Derek, I know this article must have been rushed out, but it needs EXTREME proofreading. As many have said in the other comments above, the results need to be carefully gone over to get the right numbers in the right place.

There is no way that the ATI card can go from just under 75 fps at 1600x1200 to over 100 fps at 2048x1536 in Enemy Territory.
Also, the Final Words heading is part of the paragraph text instead of a bold heading above it.
There are other grammatical errors too but those aren't as important as the erroneous data. Plus, a little analysis of each of the benchmark results for each game would be nice but not necessary.
Please go over each graph and make sure the numbers are right.