Half-Life 2 Performance Benchmark Preview
by Anand Lal Shimpi on September 12, 2003 12:34 AM EST - Posted in GPUs
It's almost ironic that the one industry we deal with that is directly related to entertainment has been the least exciting for the longest time. The graphics world has been littered with controversies surrounding very fickle things as of late; the majority of articles you'll see relating to graphics these days don't have anything to do with how fast the latest $500 card will run. Instead, we're left to argue about the definition of the word "cheating". We pick at pixels with hopes of differentiating two of the fiercest competitors the GPU world has ever seen, and we debate over 3DMark.
What's interesting is that all of the things we have occupied ourselves with in recent times have been present throughout history. Graphics companies have always had questionable optimizations in their drivers, they have almost always differed in how they render a scene and yes, 3DMark has been around for quite some time now (only recently has it become "cool" to take issue with it).
So why is it that, in the age of incredibly fast, absurdly powerful DirectX 9 hardware, we find it necessary to bicker about everything but the hardware? Because, for the most part, we've had absolutely nothing better to do with this hardware. Our last set of GPU reviews was focused on two cards - ATI's Radeon 9800 Pro (256MB) and NVIDIA's GeForce FX 5900 Ultra, both of which carried a hefty $499 price tag. What were we able to do with this kind of hardware? Run Unreal Tournament 2003 at 1600x1200 with 4X AA enabled and still have power to spare, or run Quake III Arena at fairytale frame rates. Both ATI and NVIDIA have spent countless millions of transistors and expensive die space, and even sacrificed current-generation game performance, in order to bring us some very powerful pixel shader units with their GPUs. Yet, we have been using them while letting their pixel shading muscles atrophy.
Honestly, since the Radeon 9700 Pro, we haven't needed any more performance to satisfy the needs of today's games. If you take the most popular game in recent history, the Frozen Throne expansion to Warcraft III, you could run that just fine on a GeForce4 MX - a $500 GeForce FX 5900 Ultra was in no way, shape or form necessary.
The argument we heard from both GPU camps was that you were buying for the future; that a card you bought today could not only run all of your current games extremely well, but you'd be guaranteed good performance in the next generation of games. The problem with this argument was that there was no guarantee when the "next generation" of games would be out, and by the time they arrived, prices on these wonderfully expensive graphics cards might have fallen significantly. Then there's the fact that how well cards perform in today's pixel-shaderless games honestly says nothing about how DirectX 9 games will perform. And this brought us to the joyful issue of using 3DMark as a benchmark.
If you haven't noticed, we've never relied on 3DMark as a performance tool in our 3D graphics benchmark suites. The only times we've included it, we've either used it in the context of a CPU comparison or to make sure fill rates were in line with what we were expecting. With 3DMark 03, the fine folks at Futuremark had a very ambitious goal in mind - to predict the performance of future DirectX 9 titles using their own shader code designed to mimic what various developers were working on. The goal was admirable; however, if we're going to recommend something to millions of readers, we're not going to base it solely on one synthetic benchmark that may or may not be indicative of the performance of future games. The difference between the next generation of games and what we've seen in the past is that the performance of one game is much less indicative of the performance of the rest of the market. As you'll see, we're no longer memory bandwidth bound - we're finally going to start dealing with games whose performance is determined by their pixel shader programs and how the GPU's execution units handle them.
All of this discussion isn't for naught, as it brings us to why today is so very important. Not too long ago, we were able to benchmark Doom3 and show you a preview of its performance; but with the game being delayed until next year, we have to turn to yet another title to finally take advantage of this hardware - Half-Life 2. With the game almost done and a benchmarkable demo due out on September 30th, it isn't a surprise that we were given the opportunity to benchmark the demos shown off by Valve at E3 this year.
Unfortunately, the story here isn't as simple as how fast your card will perform under Half-Life 2; of course, given the history of the 3D graphics industry, would you really expect something like this to be without controversy?
111 Comments
dvinnen - Friday, September 12, 2003 - link
#31: I know what I said. DX9 doesn't require 32-bit. It's not in the spec, so you couldn't write a shader that uses more than 24-bit precision.

XPgeek - Friday, September 12, 2003 - link
Well #26, if the next gen of games do need 32-bit precision, then the tides will once again be turned, and all the "my ATi is so much faster than your nVidia" folks will have to just suck it up and buy another new card, whereas the GFFX's will still be plugging along. By then, who knows, maybe DX10 will support 32-bit precision on the nVidia cards better... btw, I'm still loading down my GF3 Ti500, so regardless, I will have crappy perf. But I also buy cards from the company I like, that being Gainward/Cardex nVidia-based boards. No ATi for me, also no Intel for me. Why? Because it's my choice. So it may be slower, whoopty-doo!
For all I know, HL2 could run for crap on AMD CPUs as well, so I'll be in good shape then with my XP2400+ and GF3.
Sorry, I know my opinions don't matter, but I put 'em here anyhow.
Buy what you like, don't just follow the herd... unless you like having your face in everyone's ass.
Anonymous User - Friday, September 12, 2003 - link
#28 Not 24bit, 32 bit.

Anonymous User - Friday, September 12, 2003 - link
Yeah, like mentioned above, what about whether or not AA and AF were turned on in these tests? Do you talk about it somewhere in your article? I can't believe it's not mentioned, since this site was the one that made a detailed (and excellent) presentation of the differences b/w ATI and nVidia's AA and AF back in the day.
Strange that your benchmarks appear to be silent on the matter. I assume they were both turned off.
Anonymous User - Friday, September 12, 2003 - link
>>thus need full 32-bit precision."<<

Huh? Wha?
This is an interesting can of worms. So in the months ahead, if ATI sticks to 24-bit, or cannot develop 32-bit precision, the tables will have turned on the current situation - but even more so, because there would be no workaround (or optimization).
Will ATI users in the future accuse Valve of sleeping with Nvidia because their cards cannot shade with 32-bit precision?
Will Nvidia users claim that ATI users are "non-compliant with DirectX 9"? Will ATI users respond that 24-bit precision is the only acceptable DirectX 9 standard, and that Valve are traitors?
Will Microsoft actually force manufacturers to bloody well wait and follow the standard?
And finally, who shot Colonel Mustard in the Dining Room?
Questions, Questions.
dvinnen - Friday, September 12, 2003 - link
#26: It means it can't cheat and use 16-bit registers to do it, and needs a full 24 bits. So it would waste the rest of the register.

Anonymous User - Friday, September 12, 2003 - link
#26 That was in reference to the FX cards. They can do 16- or 32-bit precision. ATI cards do 24-bit precision, which is the DX9 standard. 24-bit is the DX9 standard because it's "good enough." It's much faster than 32-bit, and much better looking than 16-bit. So 16-bit will wear out sooner. Of course, someday 24-bit won't be enough, either, but there's no way of knowing when that'll be.
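For the curious, here's a rough, back-of-the-envelope way to see why mantissa width matters. This isn't real shader math - just a hypothetical Python helper that rounds a value to a given number of mantissa bits (10 for FP16, 16 for ATI's FP24, 23 for FP32) so you can compare the rounding error at each precision:

```python
import math

def quantize(x: float, mantissa_bits: int) -> float:
    """Round x to the nearest value representable with the given
    number of mantissa bits - a crude model of reduced-precision
    shader arithmetic (ignores exponent range and denormals)."""
    if x == 0.0:
        return 0.0
    exp = math.floor(math.log2(abs(x)))   # exponent of x
    scale = 2.0 ** (mantissa_bits - exp)  # spacing of representable values
    return round(x * scale) / scale

value = 1.0 / 3.0  # not exactly representable at any precision
for bits, label in [(10, "FP16"), (16, "FP24"), (23, "FP32")]:
    err = abs(quantize(value, bits) - value)
    print(f"{label}: rounding error ~ {err:.1e}")
```

The error shrinks by a few decimal digits at each step up, which is why FP16 shows banding in long shader chains well before FP24 or FP32 do.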
Anonymous User - Friday, September 12, 2003 - link
Valve says no benchmarks on Athlon 64! :-/ Booo!
Quote:
http://www.tomshardware.com/business/20030911/inde...
"Valve was able to heavily increase the performance of the NVIDIA cards with the optimized path but Valve warns that such optimizations won't be possible in future titles, because future shaders will be more complex and will thus need full 32-bit precision."
The new ATI cards only have 24-bit shaders!
So would that leave ALL current ATI cards with no way to run future Valve titles?
Perhaps I do not understand the technology fully, can someone elaborate on this?
Anonymous User - Friday, September 12, 2003 - link
I agree with #23: in terms of money-making power, the ATI/Valve combo is astounding. ATI's design is superior, as we can see, but the point is that ATI is going to get truckloads of money and recognition for this. It's a good day to have stock in ATI; let's all thank them for buying ArtX!

Anonymous User - Friday, September 12, 2003 - link
I emailed Gabe about my 9600 Pro, but he didn't have to do all this just for me :D I love it.