NVIDIA's GeForce 7800 GTX Hits The Ground Running
by Derek Wilson on June 22, 2005 9:00 AM EST - Posted in GPUs
No More Memory Bandwidth
Again, we have a 256-bit (4x 64-bit) memory interface to GDDR3 memory. The local graphics memory setup is not significantly different from that of the 6800 series of cards and only runs slightly faster, at a 1.2 GHz effective data rate. This will work out in NVIDIA's favor as long as newer games continue to put a heavier burden on pixel shader processing. NVIDIA sees texture bandwidth outweighing color and z bandwidth in the not-too-distant future. This doesn't mean the quest for ever increasing bandwidth will stop; it just means that the reasons we need more bandwidth will change.

A good example of the changing needs of graphics cards is Half-Life 2. While the game runs very well even on older graphics cards like the 9800 Pro, the design is such that increased memory bandwidth is far less important than having more shader processing power. This is why we see the 6600GT cards significantly outperform the 9800 Pro. Even more interesting is that in our testing, we found that enabling 4xAA on a 9800 Pro didn't affect HL2 performance much at all, while increasing the resolution from 1024x768 to 1280x1024 had a substantial impact on frame rates. If the HL2 model is a good example of the future of 3D engines, NVIDIA's decision to increase pixel processing power while leaving memory bandwidth for the future makes a lot of sense.
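To put those interface numbers in perspective, peak theoretical bandwidth is simply the bus width in bytes multiplied by the effective data rate. Here is a minimal back-of-the-envelope sketch; the helper function is ours, purely for illustration, and the 1.1 GHz comparison line uses the 6800 Ultra's published effective memory rate:

```python
# Peak theoretical memory bandwidth for a wide GDDR3 interface:
# bytes moved per transfer times transfers per second.

def peak_bandwidth_gb_s(bus_width_bits: int, effective_rate_ghz: float) -> float:
    """Peak bandwidth in GB/s (1 GB/s = 1e9 bytes per second)."""
    bytes_per_transfer = bus_width_bits / 8           # 256 bits -> 32 bytes
    return bytes_per_transfer * effective_rate_ghz

print(peak_bandwidth_gb_s(256, 1.2))   # 38.4 GB/s for the 7800 GTX's 1.2 GHz effective rate
print(peak_bandwidth_gb_s(256, 1.1))   # 35.2 GB/s for a 6800 Ultra at 1.1 GHz effective
```

The gap between the two is small, which is exactly the point: the 7800 GTX's big gains are meant to come from shader throughput, not raw memory bandwidth.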
On an interesting side note, the performance tests in this article are mostly based around 1600x1200 and higher resolutions. Memory usage at 2048x1536 with 32bit color and z-buffer runs a solid 144MB for double buffered rendering with 4xAA. This makes a 256MB card a prerequisite for this setup, and depending on the textures, render targets, and other local memory usage, even 256MB may be a little short. PCI Express helps a little by speeding up any spillover into system memory, but it is conceivable that some games could get choppier when swapping large textures, normal maps, and the like in and out.
We don't feel that ATI's 512MB X850 really brings anything necessary to the table, but with this generation, we could start to see a real use for 512MB of local memory. MRTs, larger textures, normal maps, vertex textures, huge resolutions, and a lack of hardware compression for fp16 and fp32 textures all mean that we are on the verge of seeing games push memory usage way up. Processing these huge stores of data requires GPUs powerful enough to utilize them efficiently. The G70 begins to offer that kind of power. For the majority of today's games, 256MB of RAM is fine, but moving into the future, it's easy to see how more would help.
In addition to these issues, a 512MB card would be a wonderful fit for Dual-Link DVI. This would make the part a nice companion to Apple's largest Cinema Display (which is currently beyond the maximum resolution supported by the GeForce 7800 GTX). In case anyone is curious, a double buffered 4xAA 32bit color+z framebuffer at 2560x1600 is about 190MB.
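For anyone who wants to check those framebuffer figures, here is a minimal sketch of the arithmetic behind them. It assumes 32-bit color and a 32-bit z-buffer, with both color buffers and the z-buffer held at full 4x sample density and no framebuffer compression; the function name is ours, purely for illustration:

```python
# Rough framebuffer footprint for double-buffered rendering with multisample AA:
# two 32-bit color buffers plus a 32-bit z-buffer, all at full sample density.

def framebuffer_mb(width: int, height: int, aa_samples: int = 4) -> float:
    bytes_per_sample = 4 + 4 + 4        # front color + back color + z, 4 bytes each
    total_bytes = width * height * aa_samples * bytes_per_sample
    return total_bytes / (1024 * 1024)  # convert to MB

print(framebuffer_mb(2048, 1536))       # 144.0 MB, the figure quoted above
print(framebuffer_mb(2560, 1600))       # 187.5 MB, i.e. "about 190MB" at 2560x1600
```

Textures, render targets, and vertex data all come out of whatever local memory remains after the framebuffer takes its share, which is why the jump to huge resolutions with AA eats into a 256MB card so quickly.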
In our briefings on G70, we were told that every part of the chip has been at least slightly updated from NV4x, but the general architecture and feature set are the same. There have been a couple of more significant updates as well, namely the increased performance capability of a single shader pipe and the addition of transparency antialiasing. Let's take a look at these factors now.
127 Comments
Alx - Wednesday, June 22, 2005 - link
Face it, this launch isn't gonna hurt anyone except people with minds too small to accept that there is simply one more option than there was before. If you liked pc gaming yesterday, then there is no reason why this launch should make you stop liking it today. Unless you're a retarded buttbaby who can't handle choices. In that case please get a console and stop coming to this site.

mlittl3 - Wednesday, June 22, 2005 - link
#82, Wesley

Well, that sucks that y'all have lost your web editor for a while, especially when there is so much cool hardware coming out around now. In our research lab, we pass around our publications and conference posters to others in the group so that a fresh pair of eyes sees them before they go live or to the journal editor. But of course, everyone else at AT is also busy, so oh well.
Good work guys and I look forward to the "new CPU speed bump" article (or FX-57 for those not under NDAs).
Mark
PS. If y'all have an opening for another web editor, you should hire #84 (ironchefmorimoto). I hear he can cook really well.
AtaStrumf - Wednesday, June 22, 2005 - link
Nicely corrected Derek, I think there are just a few typos left, like this one (**):

Page 20
Power Consumption
We measured power consumption between the power supply and the wall. This multiplies essentially amplifies any differences in power draw because the powersupply is not 100% efficient. Ideally we would measure power draw of the card, but it is very difficult **determine** to determine the power draw from both the PCIe bus and the 12V molex connector.
AND a few double "Performances" in the title (Performance Performance) starting with page 10.
Nice card nVidia!!! I hope ATi isn't too far behind though. Crossfire --> cheap SLi ;-) I need a nice midrange product out by September when it'll be time to upgrade to a nice E6 stepping S939 A64 and something to take the place of my sweet old GF2 MX (I'm not kidding, I sold my 6600GT AGP, and now I'm waiting for the right time to move to PCIe).
IronChefMoto - Wednesday, June 22, 2005 - link
Amen -- you guys work hard on your articles. Keep up the great work. And don't f*cking bother the web editor. We...er...they don't get enough vacation as it is.

IronChefMorimoto
(another web editor who needs a break)
Wesley Fink - Wednesday, June 22, 2005 - link
Derek was too modest to mention this in his comments, but I think you should know all the facts. Our Web Editor is on vacation and we are all doing our own HTML and editing for the next 10 days. In our usual process, the article goes from an Editor to the Web Editor, who codes the article, checks the grammar, and checks for obvious content errors. Those steps are not in the loop right now.

The next thing is NDAs and launches. We are always under the gun for launches, and lead times seem to get shorter and shorter. Derek was floating questions and graphs last night at 3 to 4 AM with an NDA of 9 AM. Doing 21 pages of meaningful commentary in a short time frame, then having to code it in HTML (when someone else normally handles that task), is not as easy as it might appear.
I do know Derek as a very conscientious Editor and I would ask that you please give him, and the rest of us, a little slack this next week and a half. If you see errors please email the Editor of the article instead of making it the end of the world in these comments. I assure you we will fix what is wrong. That approach, given the short staff, would be a help to all of us. We all want to bring you the information and quality reviews you want and expect from AnandTech.
IronChefMoto - Wednesday, June 22, 2005 - link
#79 -- But why wouldn't it be a high quality article, mlittl3? I thought you told me that AT was infallible? Hmmm? ;-)

Houdani - Wednesday, June 22, 2005 - link
Thanks for the refresh, Derek. I went back and took a peek at the revised graphs. I have a couple of comments on this article before you move on to the next project.

>> When the Splinter Cell page was refreshed, the graph for 20x15x4 apparently disappeared.
>> When you removed the SLI's from the Guild War page, it looks like the 7800GTX changed from 50.5 to 55.1 (which is the score previously given to the 6800 Ultra SLI).
>> Several of the pages have scores for no AA benches listed first, while other pages have scores for the 4xAA listed first. While the titles for the graphs are correct, it's a little easier to read when you stay consistent in the ordering. This is a pretty minor nit-pick, though.
>> Thanks for updating the transparency images to include mouseover switches ... quite handy.
fishbits - Wednesday, June 22, 2005 - link
"They priced themselves into an extremely small market, and effectively made their 6800 series the second tier performance cards without really dropping the price on them. I'm not going to get one, but I do wonder how this will affect the company's bottom line."The 6800s were "priced into an extremely small market." How'd that line turn out? I don't imagine they've released this product with the intention of losing money overall. Why do you think retailers bought them? Because they know the cards won't sell and they're happy to take the loss? It's already been proven that people will pay for you to develop and sell a $300, wait $400, wait $500 video card. It's already been proven that people will pay a $100+ premium for cards that are incrementally better, not just a generation better. Sounds like this target is a natural, especially knowing it'll eventually fall into everyone else's purchasing ability.
Being able to say you have the bar-none best card out there by leaps and bounds is certainly worth something. Look at all the fanboys that are out there. Every week or month you're able to stay on top of the benches means you get more people who'll swear by your products no matter what for years to come. Everyone you can entice into buying your card who sees it as a good product will buy your brand in the future as a preference, all other options being equal. I could be wrong, but suspect Nvidia's going to make money off this just fine.
-----------------------
"I am proud that our readership demands a quality above and beyond the norm, and I hope that that never changes. Everything in our power will be done to assure that events like this will not happen again."
See... that's why I'm a big fan of the site.
mlittl3 - Wednesday, June 22, 2005 - link
#78, I bet you didn't even read the article. How do you know it demonstrated editorial integrity?

IronChefMoto - Wednesday, June 22, 2005 - link
#23 (mlittl3) still can't pronounce "Penske" and "terran" right, regardless of the great editorial integrity demonstrated by the AT team today. Thanks!