LeadTek 6800 and eVGA 6800 Ultra Extreme: New GeForce on the Block
by Derek Wilson on July 9, 2004 1:00 AM EST
Posted in: GPUs
The Test
Our benchmarks this time around consist of our previously tested standard suite. The scores of the older cards have simply been carried over from our first X800 review. The last couple of driver revisions from either camp haven't delivered huge performance gains on those older cards, but their numbers should still be used only as a reference point.
Unfortunately, the Catalyst 4.7 drivers were released the day after testing was completed and could not make it into this article. We will, of course, look into the performance of new drivers in future articles (though ATI's one-per-month release schedule is tough to keep pace with at times).
Performance Test Configuration
Processor(s): AMD Athlon 64 3400+
RAM: 2 x 512MB OCZ PC3200 (2:2:3:6)
Hard Drive: Seagate Barracuda 7200.7 120GB PATA
Video AGP & IDE Bus Master Drivers: VIA Hyperion 4in1 4.51
Video Card(s): eVGA GeForce 6800 Ultra Extreme, NVIDIA GeForce 6800 Ultra, NVIDIA GeForce 6800 GT, LeadTek GeForce 6800, NVIDIA GeForce FX 5950 Ultra, ATI Radeon X800 XT Platinum Edition, ATI Radeon X800 XT, ATI Radeon X800 Pro, ATI Radeon 9800 XT, ATI Radeon 9700 Pro
Video Drivers: NVIDIA 61.45 Beta, NVIDIA 61.11 Beta (5950 Ultra), ATI Catalyst 4.6, ATI Catalyst 4.4 (9800/9700)
Operating System(s): Windows XP Professional SP1
Power Supply: PC Power & Cooling Turbo Cool 510
Motherboard: FIC K8T800 (754-pin)
For easy reference, here are the pixel pipeline count, core clock speed (MHz), and effective memory data rate (MHz) of all the current generation parts:
NVIDIA GeForce 6800: 12 pipes, 325 core, 700 mem
NVIDIA GeForce 6800 GT: 16 pipes, 350 core, 1000 mem
NVIDIA GeForce 6800 Ultra: 16 pipes, 400 core, 1100 mem
NVIDIA GeForce 6800 Ultra Extreme: 16 pipes, 460 core, 1200 mem
ATI Radeon X800 Pro: 12 pipes, 475 core, 900 mem
ATI Radeon X800 XT: 16 pipes, 500 core, 1000 mem
ATI Radeon X800 XT Platinum Edition: 16 pipes, 520 core, 1120 mem
Here is the pricing data that we gathered from Pricewatch and our own RealTime Pricing Engine (all prices are in USD).
NVIDIA GeForce FX 5950: $380
NVIDIA GeForce 6800: $300
NVIDIA GeForce 6800 GT: $410
NVIDIA GeForce 6800 Ultra: $540
NVIDIA GeForce 6800 Ultra Extreme: $?
ATI Radeon 9700 Pro: $180
ATI Radeon 9800 XT: $400
ATI Radeon X800 Pro: $420
ATI Radeon X800 XT: $540
ATI Radeon X800 XT Platinum Edition: $?
We don't yet have any reliable pricing information for the 6800 Ultra Extreme or the X800 XT Platinum Edition. With the 6800 Ultra and X800 XT both at $540, we can expect the beefed-up versions of these cards to be priced a bit higher. We'll guess $600 each as the price point for these ultra high end cards. Whether or not that proves to be the case, it's the best estimate we can make right now.
We have seen 6800 GTs on sale for their $400 MSRP, and there are a few listings on Pricewatch showing $410, but we couldn't touch an X800 Pro for less than $420.
We are always trying to bring more sanity to the decision-making process, so for this series of tests, we will add a value graph to each performance test that essentially ranks all the cards by price/performance.
Even choosing to graph this data requires that we essentially assign a "value" to frame rate. Unfortunately, the way each individual values frame rate is unique, and we can't tailor-make a graph for every reader. The one constant when graphing this data is rank: no matter what you do, higher frame rates raise rank, and lower prices raise rank. Therefore, to avoid attaching a hard dollar value to every frame, we have decided to use a log scale. Specifically, our value graphs are based on the following equation:
Value = 10 * log(100 * performance / cost)
We multiply performance/cost by 100 in order to avoid negative log values (our graphing engine doesn't like those), and we multiply the log by 10 for readability.
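To make the calculation concrete, here is a minimal sketch of the value score in Python; the card names, frame rates, and prices below are placeholders for illustration only, not benchmark results.

```python
import math

def value_score(fps, price_usd):
    # Value = 10 * log10(100 * performance / cost)
    return 10 * math.log10(100 * fps / price_usd)

# Placeholder numbers purely for illustration -- not measured results.
cards = [
    ("Hypothetical card A", 60.0, 300.0),
    ("Hypothetical card B", 75.0, 540.0),
]

for name, fps, price in cards:
    print(f"{name}: {fps:.1f} fps at ${price:.0f} -> value {value_score(fps, price):.2f}")
```

Because the scale is logarithmic, each additional point on the value graph corresponds to roughly a 26% improvement in frames per dollar, no matter where on the graph a card sits.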
It is important, when looking at this data, to remember that performance and value need to be taken into account at the same time. In certain situations where prices are close (for instance, between the 6800 GT and X800 Pro), performance will favor one card and value the other. In these cases, the $10 difference may or may not be an issue; the reader will have to be the final judge.
But, that's enough talk. Let's move on to the numbers.
46 Comments
TrogdorJW - Friday, July 9, 2004 - link
My final comment (for now): On the Warcraft III page, you had this to say: "Even at 16x12, this benchmark is very CPU limited, and yes, vsync was disabled. Oddly, when AA/AF is enabled, the FX 5950U actually outperforms the X800 XT PE. This is an atypical situation, and we will try to look into the matter further."
My thought on this is that the likely reason has to do with optimizations. In most benchmarks, the 6800 series of cards outperforms their X800 equivalents when running at standard settings. Enabling 4xAA and 8xAF often diminishes the gap or shifts the benchmark into ATI's favor. However, you don't really do a full suite of benchmarks, so it's difficult to say why the shift takes place. Having looked at other sites, the shift seems to be related almost entirely to the anisotropic filtering. Turning on/off AA seems to have very little impact on the placing of the cards when you're not CPU limited, while turning on/off AF can put a much larger burden on the Nvidia cards, especially cards of the FX era.
So what does this have to do with Warcraft III? Well, I won't bother arguing which of the two AF methods is actually better out of Nvidia and ATI. They seem to be roughly equivalent. However, ATI seems to get more AF performance out of their hardware. Basically, the ATI algorithm simply appears to be superior in performance.
So again, what does this have to do with Warcraft III and the Geforce FX? One word: perspective. WCIII uses an overhead perspective, so much of the screen is filled with polygons (the ground) that are perpendicular to the camera angle. If I recall correctly from my graphics programming classes, there is less that can be done to optimize the AF algorithms in this scenario. I believe that perpendicular polygons are already almost "perfectly optimized". (Or maybe it's just that Nvidia has better optimizations on the FX architecture in this instance?) The end result is that the GPU doesn't have to do a whole lot of extra work, so in this particular instance, the FX architecture does not suffer nearly as much when enabling AF. Not that any of us would actually go out and buy an FX5950 these days....
Honestly, though, the benchmarking methodology for WCIII (playback of a demo at 8X speed) seems pretty much worthless - i.e. on the level of 3DMark usefulness. It's a DX7 game that will run well even on old Pentium 3 systems with GeForce 2 cards, and anything more recent than a GeForce 4 Ti with a 2 GHz CPU will have no difficulty whatsoever with the game. Running a demo playback at 8X might not work well, but that's not actually playing the game. I'm sure there are plenty of WCIII fans that think this is a meaningful performance measurement, but there are probably people out there that still play the original Quake and think that it gives meaningful results. :)
TrogdorJW - Friday, July 9, 2004 - link
A few other comments from the article: "The 9700 Pro may be a good value for many games, but it just won't deliver the frame rates in current and future titles, at the resolutions to which people are going to want to push their systems."
I really have to disagree with that opinion. These tests were done exclusively at 1280x1024 and 1600x1200, as well as with 4xAA and 8xAF. Only the extreme fringe of gamers actually have a desire to push their systems that far. Well, I suppose we would all *want* to, but most of us simply cannot afford to. First, you would need a much better monitor than the typical PC is equipped with - 19" CRT or 17" LCD would be the minimum. You would also need to run at 4xAA and 8xAF at the maximum resolution your display supports in several of the games. Finally, you would need to max out all the graphics in each game. While some people certainly feel this is "necessary", I'm pretty sure they're in the minority.
My opinion? The difference between 800x600 and 800x600+2xAA is rather noticeable; the difference between 800x600+2xAA and 800x600+4xAA is much less so. I also think that 800x600+4xAA is roughly equivalent to 1024x768+2xAA or 1280x1024 without any AA. Personally, I would prefer higher resolutions up to a point (beyond 1280x1024, it's not nearly as important). For graphical quality, there's a pretty major improvement from bilinear to trilinear filtering, but you don't notice the bump to anisotropic filtering nearly as much. There is also a very drastic change in quality when going from low detail to medium detail, and generally a noticeable change when going from medium to high detail. Beyond that (going to very high or ultra high - assuming the game permits), there is usually very little qualitative difference, while the performance generally suffers a lot.
But hey - it's just one man's opinion against another's. I point this out not as a rebuke of your opinion, but as disagreement with pushing your opinion as though it were something more. Often, writers don't like wishy-washy conclusions, but a more moderate stance is probably warranted at many of the hardware sites. The fastest hardware comes with a major price increase that most people are simply unwilling to pay. The use of a logarithmic scale is also part of this problem, as most people would be more than happy to pay half as much for 75% of the performance.
TrogdorJW - Friday, July 9, 2004 - link
#24 - I'm amazed that you're the only other person that even wondered about that. Basically, using the log of performance/price makes everything a lot closer. There is a reason for this, of course: if you take the straight performance/price (multiplied by 10 or 100 if you want to get the numbers into a more reasonable range), it makes all the expensive cards look really, really bad. However, the reality is that while an X800 Pro or 6800 GT might cost over twice as much as the 9700 Pro, there is a real incentive to purchase the faster cards. Minimum frame rates on a 9700 Pro would often be completely unacceptable at these resolutions. The use of a logarithmic chart makes large differences in price and/or performance less of a deal killer.
For example, let's look at Warcraft III 1600x1200 without AA/AF. The cards range from 58.2 to 61.1 FPS, but the price range is from $300 to $600. In this particular instance, the $300 6800 would be almost twice as "desirable" as the 6800UE or X800XTPE. Apply their log-based calculation to it, though, and the 6800 is now only 30% more desirable than the $600 cards.
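Here's a rough sketch of that arithmetic, treating every card as roughly 60 FPS since the spread is so small (using the exact figures only shifts things by a percent or two):

```python
import math

# All the cards land around 60 fps in this test, so price is the only real variable.
fps = 60.0
cheap_price, expensive_price = 300.0, 600.0

linear_ratio = (fps / cheap_price) / (fps / expensive_price)
cheap_value = 10 * math.log10(100 * fps / cheap_price)
expensive_value = 10 * math.log10(100 * fps / expensive_price)

print(f"Straight fps-per-dollar: {linear_ratio:.1f}x in favor of the $300 card")
print(f"Log-scale values: {cheap_value:.1f} vs {expensive_value:.1f} "
      f"(about {cheap_value / expensive_value - 1:.0%} in favor)")
```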
What it amounts to, though, is their statement at the beginning: every person has a different set of criteria for rating overall "value". In Anandtech's case, they like performance and are willing to pay a lot of extra money for it. (Which of course flies in the face of their comments about the $10 difference in price between the 6800GT and X800 Pro, but that's a different story. As someone already pointed out, if the GT leads in performance, costs a little less, and also has more features, what numbskull wouldn't choose it over the X800 Pro?!? Of course, there are instances where the X800 Pro still wins, so if you value those specific situations more, then you might want the X800 instead of the GT.)
Leuf - Friday, July 9, 2004 - link
How can you leave out the 9800 Pro when talking value, especially when the video card guide right under this article says the 9800 Pro is the best price/performance right now? One thing you don't take into account is that someone buying a lower end card probably doesn't have the same CPU as someone buying a top end card. While it wouldn't make sense to test each card with a different CPU for this article, it's worth mentioning. I'd actually like to see a performance plot of a couple of value cards tested across the gamut of CPUs. Looking at video card value and CPU value completely separately from each other isn't necessarily going to lead to the best choices.
Neekotin - Friday, July 9, 2004 - link
araczynski, I used an Asetek WaterChill v2, dedicated only to the GPU, and a custom coolant (my recipe)... I haven't tried it with a CPU; my 3400+ is barely overclockable.
Marsumane - Friday, July 9, 2004 - link
Yes, that is a good deal, but it doesn't represent the actual price of the card without promotions. The GT was $300 from Best Buy. That's not the overall price though, just a pricing mistake; you can't count those.
snikrep - Friday, July 9, 2004 - link
Did you guys notice the Far Cry numbers? Those are some pretty huge margins... the X800 XT is beating the 6800 Ultra by about 15 FPS from what I could tell. That's huge!!
And those "actual" pricing numbers seem way off... I picked up a retail X800 Pro from Best Buy 2 weeks ago for $399, so I don't see why we'd include price gouging vendors.
And the X800 XT Platinum Edition is below $499 at most places; I personally have it on order from Gateway for $390, which makes it the best deal by FAR (of course, I'll get it sometime in August with my luck, but who cares, it's cheap).
nserra - Friday, July 9, 2004 - link
So many rich people here, discussing the price of $600 cards and worried about their little $10 price differences; funny. After watching this review, I would go for a 9700 Pro or 9800 Pro, or even better, a softmodded 9500/9800 SE.
I always play (if the game permits) at 1024x768 with 2xAA and 4xAF. It's more than enough.
The review doesn't take into account that most monitors (CRTs) do 60 Hz at 1280x1024 and 1600x1200.
araczynski - Friday, July 9, 2004 - link
Neekotin: What hardware are you using for your liquid cooling setup? I've been thinking about possibly incorporating it into my next build.
Drayvn - Friday, July 9, 2004 - link
Sorry to post again, but the cheapest 6800 Ultra (we don't even have the Extreme yet) is... $621.
So the difference is still about $100, which means in my opinion I would still buy the XT-PE, but prices could go down when the UE comes out, dunno...