The Radeon HD 4850 & 4870: AMD Wins at $199 and $299
by Anand Lal Shimpi & Derek Wilson on June 25, 2008 12:00 AM EST - Posted in GPUs
AMD's RV770 vs. NVIDIA's GT200: Which one is More Efficient?
It is one thing to be able to sustain high levels of performance and altogether another to do it efficiently. AMD's architecture is clearly more area efficient than NVIDIA's.
Alright now, don't start yelling that RV770 is manufactured at 55nm while GT200 is a 65nm part: we're taking that into account. The die size of GT200 is 576mm^2, but if we scale the core down to 55nm, we would end up with a 412mm^2 part assuming perfect scaling. This is incredibly generous though, as we understand that TSMC's 55nm half-node process scales down die size much less efficiently than one would expect. But let's go with it and give NVIDIA the benefit of the doubt.
First we'll look at area efficiency in terms of peak theoretical performance using GFLOPS/mm^2 (performance per area). Remember, these are just ratios of design and performance aspects; please don't ask me what an (operation / (s * mm * mm)) really is :)
| | Normalized Die Size | GFLOPS | GFLOPS/mm^2 |
|---|---|---|---|
| AMD RV770 | 260 mm^2 | 1200 | 4.62 |
| NVIDIA GT200 | 412 mm^2 | 933 | 2.26 |
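The per-area figures above are just simple ratios; a minimal sketch of the arithmetic, using the values from the table:

```python
# Peak theoretical throughput per unit of die area (values from the table above).
# GT200's die size is normalized to 55nm: its real 576 mm^2 at 65nm is assumed
# to scale perfectly down to ~412 mm^2.
gpus = {
    "AMD RV770": {"die_mm2": 260, "gflops": 1200},
    "NVIDIA GT200": {"die_mm2": 412, "gflops": 933},
}

for name, g in gpus.items():
    ratio = g["gflops"] / g["die_mm2"]
    print(f"{name}: {ratio:.2f} GFLOPS/mm^2")
# AMD RV770: 4.62 GFLOPS/mm^2
# NVIDIA GT200: 2.26 GFLOPS/mm^2
```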
This shows us that NVIDIA's architecture requires more than 2x the die area of AMD's in order to achieve the same level of peak theoretical performance. Of course theoretical performance doesn't mean everything, especially in light of our previous discussion on extracting parallelism. So let's take a look at real performance per area and see what we get in terms of some of our benchmarks, specifically Bioshock, Crysis, and Oblivion. We chose these titles because the relative performance of RV770 compared to GT200 is best in Bioshock and worst in Oblivion (RV770 actually leads the GT200 in Bioshock performance while the GT200 crushes RV770 in Oblivion). We included Crysis because its engine is a popular and stressful benchmark that falls somewhere near the middle of the range in performance difference between RV770 and GT200 in the tests we looked at.
These numbers look at performance per cm^2 (because the numbers look prettier when multiplied by 100). Again, this doesn't really show something that is a thing -- it's just a ratio we can use to compare the architectures.
| Performance per Die Area | Normalized Die Size in cm^2 | Bioshock | Crysis | Oblivion |
|---|---|---|---|---|
| AMD RV770 | 2.6 | 27 fps/cm^2 | 11.42 fps/cm^2 | 10.23 fps/cm^2 |
| NVIDIA GT200 | 4.12 | 15.51 fps/cm^2 | 8.33 fps/cm^2 | 8.93 fps/cm^2 |
While it doesn't tell the whole story, it's clear that AMD does have higher area efficiency relative to the performance they are able to attain. Please note that comparing these numbers directly doesn't yield anything that can be easily explained (the percent difference in frames per second per millimeter per millimeter doesn't really make much sense as a concept), which is part of why these numbers are in a table rather than a graph. So while higher numbers show that AMD is more area efficient, this data really doesn't show how much of an advantage AMD has, especially since we are normalizing sizes and looking at game performance rather than microbenchmarks.
Some of this efficiency may come from architectural design, while some may stem from time spent optimizing the layout. AMD said that some time was spent doing area optimization on their hardware, and that this is part of the reason they could get more than double the SPs in there without more than doubling the transistor count or building a ridiculously huge die. We could try to look at transistor density, but transistor counts from AMD and NVIDIA are both just estimates that are likely done very differently and it might not reflect anything useful.
We can talk about another kind of efficiency though: power efficiency. This is becoming more important as power costs rise, as computers become more power hungry, and as there is a global push towards conservation. The proper way to look at power efficiency is to look at the amount of energy it takes to render a frame. This is a particularly easy concept to grasp, unlike the previous monstrosities, and it turns out that it isn't a tough thing to calculate.
To get this data we recorded both frame rate and watts for a benchmark run. Then we look at average frame rate (frames per second) and average watts (joules per second). We can then divide average watts by average frame rate and we end up with: average joules / frames. This is exactly what we need to see energy per frame for a given benchmark. And here's a look at Bioshock, Crysis and Oblivion.
| Average energy per frame | Bioshock | Crysis | Oblivion |
|---|---|---|---|
| AMD RV770 | 4.45 J/frame | 10.33 J/frame | 11.07 J/frame |
| NVIDIA GT200 | 5.37 J/frame | 9.99 J/frame | 9.57 J/frame |
This is where things get interesting. AMD and NVIDIA trade off on power efficiency in the tests we showed here. Under Bioshock, RV770 requires less energy to render a frame on average in our benchmark. The opposite is true for Oblivion, and NVIDIA also leads in power efficiency under Crysis. Yes, RV770 uses less power to achieve its lower performance in Crysis and Oblivion, but for the power you use NVIDIA gives you more. RV770 leads GT200 in performance under Bioshock while drawing less power, however, which is quite telling about the potential of RV770.
The fact that this small subset of tests shows the potential of both architectures to have a performance per watt advantage under different circumstances means that as time goes on and games come out, optimizing for both architectures will be very important. Bioshock shows that we can achieve great performance per watt (and performance for that matter) on both platforms. The fact that Crysis is both forward looking in terms of graphics features and shows power efficiency less divergent than Bioshock and Oblivion is a good sign for (but not a guarantee of) consistent performance and power efficiency.
215 Comments
araczynski - Wednesday, June 25, 2008 - link
...as more and more people are hooking up their graphics cards to big HDTVs instead of wasting time with little monitors, i keep hoping to find out whether the 9800gx2/4800 lines have proper 1080p scaling/synching with the tvs? for example the 8800 line from nvidia seems to butcher 1080p with tv's. anyone care to speak from experience?
DerekWilson - Wednesday, June 25, 2008 - link
i havent had any problem with any modern graphics card (dvi or hdmi) and digital hdtvs. i haven't really played with analog for a long time and i'm not sure how either amd or nvidia handle analog issues like overscan and timing.
araczynski - Wednesday, June 25, 2008 - link
interesting, what cards have you worked with? i have the 8800gts512 right now and have the same problem as with the 7900gtx previously. when i select 1080p for the resolution (which the drivers recognize the tv being capable of as it lists it as the native resolution) i get a washed out messy result where the contrast/brightness is completely maxed (sliders do little to help) as well as the whole overscan thing that forces me to shrink the displayed image down to fit the actual tv (with the nvidia driver utility). 1600x900 can usually be tolerable in XP (not in vista for some reason) and 1080p is just downright painful. i suppose it could be my dvi to hdmi cable? its a short run, but who knows... i just remember reading a bit on the nvidia forums that this is a known issue with the 8800 line, so was curious as to how the 9800 line or even the 4800 line handle it.
but as the previous guy mentioned, ATI does tend to do the TV stuff much better than nvidia ever did... maybe 4850 crossfire will be in my rig soon... unless i hear more about the 4870x2 soon...
ChronoReverse - Wednesday, June 25, 2008 - link
ATI cards tend to do the TV stuff properly
FXi - Wednesday, June 25, 2008 - link
If Nvidia doesn't release SLI to Intel chipsets (and on a $/perf ratio it might not even help if it does), the 4870 in CF is going to stop sales of the 260's into the ground. Releasing SLI on Intel and easing the price might help ease that problem, but of course they won't do it. Looks like ATI hasn't just come back, they've got a very, very good chip on their hands.
Powervano - Wednesday, June 25, 2008 - link
Anand and Derek, what about temperatures of HD4870 under IDLE and LOAD? page 21 only shows power consumption.
iwodo - Wednesday, June 25, 2008 - link
Given how ATI architecture greatly relies on maximizing its Shader use, wouldn't driver optimization be much more important than for Nvidia in this regard? And what is ATI going to do about Nvidia CUDA? Given CUDA now has a much bigger exposure than whatever ATI is offering... CAL or CTM... i dont even know now.
DerekWilson - Wednesday, June 25, 2008 - link
getting exposure for AMD's own GPGPU solutions and tools is going to be tough, especially in light of Tesla and the momentum NVIDIA is building in the higher performance areas. they've just got to keep at it.
but i think their best hope is in Apple right now with OpenCL (as has been mentioned above) ...
certainly AMD need to keep pushing their GPU compute solutions, and trying to get people to build real apps that they can point to (like folding) and say "hey look we do this well too" ...
but in the long term i think NVIDIA's got the better marketing there (both to consumers and developers) and it's not likely going to be until a single compute language emerges as the dominant one that we see level competition.
Amiga500 - Wednesday, June 25, 2008 - link
AMD are going to continue to use the open source alternative - Open CL. In a relatively fledgling program environment, it makes all the sense in the world for developers to use the open source option, as compatibility and interoperability can be assured, unlike older environments like graphics APIs.
OSX v10.6 (snow leopard) will use Open CL.
DerekWilson - Wednesday, June 25, 2008 - link
OpenCL isn't "open source" ... Apple is trying to create an industry standard heterogeneous compute language.
What we need is a compute language that isn't "owned" by a specific hardware maker. The problem is that NVIDIA has the power to redefine the CUDA language as it moves forward to better fit their architecture. Whether they would do this or not is irrelevant in light of the fact that it makes no sense for a competitor to adopt the solution if the possibility exists.
If NVIDIA wants to advance the industry, eventually they'll try and get CUDA ANSI / ISO certified or try to form an industry working group to refine and standardize it. While they have the exposure and power in CUDA and Tesla they won't really be interested in doing this (at least that's our prediction).
Apple is starting from a standards centric view and I hope they will help build a heterogeneous computing language that combines the high points of all the different solutions out there now into something that's easy to develop for and that can generate code to run well on all architectures.
but we'll have to wait and see.