ATI Radeon HD 2900 XT: Calling a Spade a Spade
by Derek Wilson on May 14, 2007 12:04 PM EST - Posted in
- GPUs
DX10 Redux
Being late to the game also means being late to DirectX 10; luckily for AMD there hasn't been much in the way of DX10 titles released nor will there be for a while - a couple titles should have at least some form of DX10 support in the next month or two, but that's about it. What does DX10 offer above and beyond DX9 that makes this move so critical? We looked at DirectX 10 in great detail when NVIDIA released G80, but we'll give you a quick recap here as a reminder of what the new transistors in R600 are going to be used for.
From a pure performance standpoint, DX10 offers more efficient state and object management than DX9, resulting in less overhead from the API itself. There's more room to store data for use in shader programs, and this is largely responsible for the reduction in management overhead. For more complex shader programs, DX10 should perform better than DX9.
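As a rough illustration of why this matters (a toy cost model, not the actual Direct3D API), the difference between DX9-style per-call state validation and DX10-style immutable, pre-validated state objects looks something like this:

```python
# Toy model (not real Direct3D): DX9 validates state every time it is set,
# while DX10 groups state into immutable objects validated once at creation.
VALIDATE_COST = 5  # arbitrary units of driver work per validation

def dx9_style_frame(draws, states_per_draw):
    """Every state change is validated when it is set."""
    cost = 0
    for _ in range(draws):
        cost += states_per_draw * VALIDATE_COST  # validate on every set
        cost += 1                                # the draw call itself
    return cost

def dx10_style_frame(draws, state_objects):
    """State objects are validated once, then bound cheaply per draw."""
    cost = state_objects * VALIDATE_COST  # validation paid up front
    cost += draws * (1 + 1)               # bind pre-validated object + draw
    return cost
```

With 1000 draws and four state groups, the DX9-style frame pays validation cost on every draw while the DX10-style frame pays it only four times, which is the kind of API overhead reduction described above.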
A hot topic these days in the CPU world is virtualization, and although it's not as much of a buzzword among GPU makers, there's still a lot to say about the big V when it comes to graphics. DirectX 10 and the new WDDM (Windows Display Driver Model) require that graphics hardware support virtualization, the reason being that DX10 applications are no longer guaranteed exclusive access to the GPU. The GPU and its resources could be split between a 3D game and physics processing, or, in a truly virtualized software setup, multiple OSes could be vying for use of the GPU.
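The sharing model this implies can be sketched as a toy round-robin scheduler (all names here are hypothetical illustrations, not actual WDDM interfaces):

```python
# Toy sketch (not WDDM itself): with DX10/WDDM, the GPU is no longer owned
# exclusively by one application, so the OS can interleave command buffers
# from several clients -- here, a game and a physics workload time-slicing
# one device round-robin.
from collections import deque

def schedule(queues):
    """Round-robin over per-client command queues; returns execution order."""
    order = []
    pending = deque(queues.items())
    while pending:
        client, commands = pending.popleft()
        if commands:
            order.append((client, commands.pop(0)))  # run one time slice
            pending.append((client, commands))       # requeue the client
    return order

# Example: schedule({"game": ["draw1", "draw2"], "physics": ["step1"]})
# interleaves the physics step between the game's two draws.
```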
Virtual memory is also required by DX10, meaning that the GPU can now page data out to system memory if it runs out of memory on the graphics card. If managed properly and with good caching algorithms, virtualized graphics memory can allow game developers to use ridiculously large textures and objects. 3DLabs actually first introduced virtualized graphics memory on its P10 graphics processor back in 2002; Epic Games' Tim Sweeney had this to say about virtual memory for graphics cards back then:
"This is something Carmack and I have been pushing 3D card makers to implement for a very long time. Basically it enables us to use far more textures than we currently can. You won't see immediate improvements with current games, because games always avoid using more textures than fit in video memory, otherwise you get into texture swapping and performance becomes totally unacceptable. Virtual texturing makes swapping performance acceptable, because only the blocks of texels that are actually rendered are transferred to video memory, on demand.
Then video memory starts to look like a cache, and you can get away with less of it - typically you only need enough to hold the frame buffer, back buffer, and the blocks of texels that are rendered in the current scene, as opposed to all the textures in memory. So this should let IHVs include less video RAM without losing performance, and therefore faster RAM at less cost.
This does for rendering what virtual memory did for operating systems: it eliminates the hard-coded limitation on RAM (from the application's point of view.)"
Obviously the P10 was well ahead of its time as a gaming GPU, but now with DX10 requiring it, virtualized graphics memory is a dream come true for game developers and will bring even better looking games to DX10 GPUs down the line. Of course, with many GPUs now including 512MB or more RAM it may not be as critical a factor as before, at least until we start seeing games and systems that benefit from multiple gigabytes of RAM.
Continuing on the holy quest of finding the perfect 3D graphics API, DX10 does away with the hardware capability bits that were present in DX9. All DX10 hardware must support the same features, and furthermore, Microsoft will only allow DX10 shaders to be written in HLSL (High Level Shader Language). The hope is that eliminating cap bits and hand-tuned shader assembly will prevent any truly "bad" DX10 hardware/software from getting out there, similar to what happened in the NV30 days.
Although not specifically a requirement of DX10, unified shaders are a natural result of changes to the API. While DX9 allowed different precision for vertex and pixel shaders, DX10 requires all shaders to use 32-bit precision, making the argument for unified shader hardware more appealing. With the same set of 32-bit execution hardware, you can now run pixel, vertex, and the new geometry shaders (also introduced in DX10). Unified shaders allow greater efficiency to be extracted from the hardware, and although they aren't required by DX10, they are a welcome result now implemented by both major GPU makers.
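The utilization argument for unified shaders can be illustrated with a toy cycle count (the unit counts and workloads are made up for illustration, not R600's or G80's actual configurations):

```python
# Toy illustration: with a fixed partition, dedicated vertex units sit idle
# during a pixel-heavy frame; with one unified pool of identical 32-bit
# ALUs, any unit can take any shader type, so total work just sums.

def fixed_partition_cycles(vertex_work, pixel_work, vertex_units, pixel_units):
    """Each pipeline type runs only on its own dedicated units."""
    vertex_cycles = -(-vertex_work // vertex_units)  # ceiling division
    pixel_cycles = -(-pixel_work // pixel_units)
    return max(vertex_cycles, pixel_cycles)          # frame waits on slowest

def unified_cycles(vertex_work, pixel_work, total_units):
    """Any unit can run any shader type."""
    return -(-(vertex_work + pixel_work) // total_units)
```

For a pixel-heavy frame (10 units of vertex work, 310 of pixel work), a fixed 8+24 split takes 13 cycles while a unified pool of the same 32 units takes 10, because no units are left idle waiting on the other pipeline.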
Finally, there are a number of enhancements to Shader Model 4.0 which simply allow for more robust shader programs and greater programmer flexibility. In the end, the move to DX10 is just as significant as prior DirectX revisions but don't expect this rate of improvement to continue. All indications point to future DX revisions slowing in their pace and scope. However, none of this matters to us today; with the R600 as AMD's first DX10 architecture, much has changed since the DX9 Radeon X1900 series.
86 Comments
GoatMonkey - Monday, May 14, 2007 - link
That's obviously BS. This IS their high end part, it just doesn't perform as well as nVidia's high end part, so it is priced accordingly.

poohbear - Monday, May 14, 2007 - link
sweet review though! thanks for including all the important and pertinent cards in your roundup (the 8800gts 320mb in particular). also love how neutral Anand is in their reviews, unlike some other sites. :p

Creig - Monday, May 14, 2007 - link
The R600 is finally here. I'm sure the overall performance is not what AMD was hoping for. Nobody ever shoots to have their newest product be the 2nd best. But pricing it at $399 and including a very nice game bundle will make the HD 2900 XT a VERY worthwhile purchase. I also have the feeling that there is a significant amount of performance increase to be realized through future driver releases ala X1800XT.

shady28 - Tuesday, May 15, 2007 - link
Nvidia has gone over the cliff on pricing.
I know of no one personally who has an 88xx series card. I know one who recently picked up an 8600 of some kind, that's it. I have the best GPU of anyone I know.
It's a real shame that there is so much focus on graphics cards that virtually no one buys. These are niche products folks - yet 'who is best' seems to be totally dependent on these niche products. That's patently ridiculous.
It's like saying, since IBM makes the fastest computers in the world (they do), they're the best and you should be buying IBM (or now, lenovo) laptops and desktops.
No one ever said that sort of thing because it's patently ridiculous. Why do people say it now for graphics cards? The fact that they do says a lot about the mentality of sites like AT.
DerekWilson - Tuesday, May 15, 2007 - link
We don't say what you are implying, and we are also very upset with some of NVIDIA's pricing (specifically the 8800 ultra).

The 8800 gts 320mb is one of the best values for your money anywhere and isn't crazy expensive -- it's actually the card I'd recommend to anyone who cares about graphics in games and wants good quality and performance at 1600x1200.
I would never tell anyone to buy an 8600 gts because nvidia has the fastest high end card. In fact, in this article, I hope I made it clear that AMD has the opportunity to capitalize on the huge performance gap nvidia left between the 8600 and 8800 series ... If AMD builds a part that performs in this range and is priced competitively, they'll have our recommendation in a flash.
Recommending parts based on value at each price or performance segment is something we take pride in and will always do, no matter who has the absolute fastest hardware out there.
The reason our focus was on AMD's fastest part is because they haven't given us any other hardware to test. We will absolutely be talking a lot and in much depth about midrange and budget hardware when AMD makes these parts available to us.
yacoub - Monday, May 14, 2007 - link
$400 is a lot of money. Not terribly long ago the highest end GPU available didn't cost more than $400. Now they hit $750 so you start to think $400 sounds cheap. It's really not. It's a heck of a lot of money for one piece of hardware. You can put together a 650i SLI rig with 2GB of DDR2 6400 and an E4400 for that much money. I know because I just did that. I kept my 7900GT from my old rig because I wanted to see how R600 did before purchasing an 8800GTS 640MB. Now that we've seen initial results I will wait to see how R600 does with more mature drivers and also wait to see the 640MB GTS price come down even more in the meantime.

vijay333 - Monday, May 14, 2007 - link
http://www.randomhouse.com/wotd/index.pperl?date=1...

"the expression to call a spade a spade is thousands of years old and etymologically has nothing whatsoever to do with any racial sentiment."
yacoub - Monday, May 14, 2007 - link
Yes, a spade was a shovel long before muslims enslaved europeans to do hard labor in north africa and europeans enslaved africans to do hard labor in the 'new world'.

vijay333 - Monday, May 14, 2007 - link
whoops...replied to the wrong one.rADo2 - Monday, May 14, 2007 - link
It is not 2nd best (after 8800ULTRA), not 3rd best (after 8800GTX), not 4th best (after 8800GTX-640), but 5th best (after 8800GTS-320), or even worse ;)

Bad performance with AA turned on (everybody turns on AA), huge power consumption, late to the market.
A definitive failure.