The Radeon HD 4850 & 4870: AMD Wins at $199 and $299
by Anand Lal Shimpi & Derek Wilson on June 25, 2008 12:00 AM EST - Posted in GPUs
Derek Gets Technical Again: Of Warps, Wavefronts and SPMD
From our GT200 review, we learned a little about thread organization and scheduling on NVIDIA hardware. In speaking with AMD we discovered that sometimes it just makes sense to approach a problem in similar ways. Like NVIDIA, AMD schedules threads in groups (called wavefronts by AMD) that execute over 4 cycles. As RV770 has 16 5-wide SPs (each of which processes one "stream," or thread, or whatever you want to call it) at a time (and because AMD said so), we can conclude that AMD organizes 64 threads into one wavefront, all of which must execute in parallel. From GT200, we learned that NVIDIA further groups warps into thread blocks, and we have just learned that there are two more levels of organization in AMD hardware.
Like NVIDIA, AMD maintains context per wavefront: register space, instruction stream, global constants, and local store space are shared between all threads running in a wavefront, and data sharing and synchronization can be done within a thread block. The larger grouping of thread blocks enables global data sharing using the global data store, but we didn't actually get a name or specification for it. On RV770, one VLIW instruction (up to 5 operations) is broadcast to each of the SPs, which runs it on its own unique set of data and its own subset of the register file.
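As a quick sanity check on the numbers above (a sketch using only the figures quoted in the text, not anything from AMD's documentation), the 64-thread wavefront falls straight out of the 16-wide SIMD and the 4-cycle issue:

```python
# Back-of-the-envelope check of the wavefront arithmetic above.
# All constants are the figures quoted in the text, not from AMD docs.
SIMD_WIDTH = 16     # 5-wide SPs per SIMD array on RV770
ISSUE_CYCLES = 4    # cycles over which one wavefront executes
VLIW_SLOTS = 5      # operations per VLIW instruction

wavefront_size = SIMD_WIDTH * ISSUE_CYCLES
peak_ops_per_wavefront = wavefront_size * VLIW_SLOTS

print(wavefront_size)          # 64 threads per wavefront
print(peak_ops_per_wavefront)  # 320 operations if every VLIW slot is filled
```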
To put it side by side with NVIDIA's architecture, we've put together a table with what we know about resources per SM / SIMD array.
NVIDIA/AMD Feature | NVIDIA GT200 | AMD RV770 |
Registers per SM/SIMD Core | 16K x 32-bit | 16K x 128-bit |
Registers on Chip | 491,520 (1.875MB) | 163,840 (2.5MB) |
Local Store | 16KB | 16KB |
Global Store | None | 16KB |
Max Threads on Chip | 30,720 | 16,384 |
Max Threads per SM/SIMD Core | 1,024 | > 1,000 |
Max Threads per Warp/Wavefront | 32 | 64 |
Max Warps/Wavefronts on Chip | 960 | 256 (with 64 reserved) |
Max Thread Blocks per SM/SIMD Core | 8 | AMD Won't Tell Us |
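Several of these entries are derived quantities. As a sketch (our arithmetic, using the 32-thread warp and 64-thread wavefront sizes from the reviews), the on-chip warp and wavefront counts follow directly from the max-threads figures:

```python
# Deriving on-chip warp/wavefront counts from the max-threads totals.
# Warp size (32) and wavefront size (64) are the per-group sizes from
# the text; the rest is just division.
gt200_max_threads = 30720
rv770_max_threads = 16384
WARP_SIZE = 32
WAVEFRONT_SIZE = 64

gt200_max_warps = gt200_max_threads // WARP_SIZE
rv770_max_wavefronts = rv770_max_threads // WAVEFRONT_SIZE

print(gt200_max_warps)       # 960 warps on chip
print(rv770_max_wavefronts)  # 256 wavefronts on chip
```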
We love that we have all this data, and both NVIDIA's CUDA programming guide and the documentation that comes with AMD's CAL SDK offer some great low-level info. But the problem is that hardcore code tuners really need more information to properly tune their applications. To some extent, graphics takes care of itself, as there are a lot of different things that need to happen in different ways. It's the GPGPU crowd, the pioneers of GPU computing, that will need much more low-level data on how resource allocation impacts thread issue rates and how to properly fetch and prefetch data to make the best use of internal and external memory bandwidth.
But for now, these details are the ones we have, and we hope that programmers used to programming massively data parallel code will be able to get under the hood and do something with these architectures even before we have an industry standard way to take advantage of heterogeneous computing on the desktop.
Which brings us to an interesting point.
NVIDIA wanted us to push some ridiculous acronym for their SM's architecture: SIMT (single instruction multiple thread). First off, this is a confusing descriptor based on the normal understanding of instructions and threads. But more to the point, there already exists a programming model that nicely fits what NVIDIA and AMD are both actually doing in hardware: SPMD, or single program multiple data. This description is most often attached to distributed memory systems and large scale clusters, but it really is what is going on here.
Modern graphics architectures process multiple data sets (such as a vertex or a pixel and its attributes) with single programs (a shader program in graphics or a kernel if we're talking GPU computing) that are run both independently on multiple "cores" and in groups within a "core". Functionally we maintain one instruction stream (program) per context and apply it to multiple data sets, layered with the fact that multiple contexts can be running the same program independently. As with distributed SPMD systems, not all copies of the program are running at the same time: multiple warps or wavefronts may be at different stages of execution within the same program and support barrier synchronization.
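The description above can be caricatured in a few lines of host-side code. This is our toy illustration of the SPMD idea, not anything resembling a real GPU runtime: one program, many data elements, grouped into lockstep "wavefronts" that progress independently of one another.

```python
# Toy SPMD sketch: one program (the kernel) applied to many data
# elements; threads are grouped so each group runs the same
# instruction stream in lockstep, and groups run independently.
def kernel(x):
    # the single program every "thread" executes
    return x * x + 1

def run_wavefront(chunk):
    # every thread in the wavefront runs the same instruction on its
    # own data element
    return [kernel(x) for x in chunk]

data = list(range(8))
WAVEFRONT_SIZE = 4

results = []
for i in range(0, len(data), WAVEFRONT_SIZE):
    results.extend(run_wavefront(data[i:i + WAVEFRONT_SIZE]))

print(results)  # [1, 2, 5, 10, 17, 26, 37, 50]
```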
For more information on the SPMD programming model, Wikipedia has a good page on the subject, even though it doesn't talk about how GPUs would fit into SPMD quite yet.
GPUs take advantage of a property of SPMD that distributed systems do not (explicitly anyway): fine grained resource sharing with SIMD processing where data comes from multiple threads. Threads running the same code can actually physically share the same instruction and data caches and can have high speed access to each others data through a local store. This is in contrast to larger systems where each system gets a copy of everything to handle in its own way with its own data at its own pace (and in which messaging and communication become more asynchronous, critical and complex).
AMD offers an advantage in the SPMD paradigm in that it maintains a global store (present since RV670) where all threads can share result data globally if they need to (this is something that NVIDIA does not support). This feature allows more flexibility in algorithm implementation and can offer performance benefits in some applications.
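To make the sharing hierarchy concrete, here is a toy CPU-side model (our illustration with invented names, not AMD's CAL API): each group of threads shares a local store and synchronizes at a barrier, while a global store is visible across groups, analogous to the RV770-style capability described above.

```python
from threading import Barrier, Thread

# Toy model of the sharing hierarchy: a local store per thread group,
# plus a global store visible to every group (RV770-style). All names
# here are invented for illustration.
GROUP_SIZE = 4
NUM_GROUPS = 2
global_store = [0] * NUM_GROUPS      # shared across all groups

def run_group(gid):
    local_store = [0] * GROUP_SIZE   # shared only within this group
    sync = Barrier(GROUP_SIZE)       # barrier synchronization point

    def worker(tid):
        local_store[tid] = tid * 10  # each thread writes its own slot
        sync.wait()                  # all local writes visible past here
        if tid == 0:
            # one thread publishes the group's result globally
            global_store[gid] = sum(local_store)

    threads = [Thread(target=worker, args=(t,)) for t in range(GROUP_SIZE)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

for g in range(NUM_GROUPS):
    run_group(g)

print(global_store)  # each group's reduced result is globally visible
```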
In short, the reality of GPGPU computing has been the implementation in hardware of the ideal machine to handle the SPMD programming model. Bits and pieces are borrowed from SIMD, SMT, TMT, and other micro-architectural features to build architectures that we submit should be classified as SPMD hardware in honor of the programming model they natively support. We've already got enough acronyms in the computing world, and it's high time we consolidate where it makes sense and stop making up new terms for the same things.
215 Comments
shadowteam - Wednesday, June 25, 2008 - link
Did you know these chips can do up to 125C? 90C is so common for ATI cards, I haven't had one since 2005 that didn't blow my hair dry. Your NV card was just a bad chip I suppose. Why do you think NV or ATI would spend a billion dollars in research work, then let its product burn away due to some crappy cooling? They won't give you more cooling than you actually need. These same very cards go to places like Abu Dhabi, where room temps easily hit 50C+.

soloman02 - Wednesday, June 25, 2008 - link

Sorry, but no human would survive a temp of 50C.
http://en.wikipedia.org/wiki/Thermoregulation#Hot
In fact the highest temp a human has survived was recorded by the Guinness book of world records as: 46.5C (115.7F). Keep in mind that was the internal temp of the guy. The temp on that day was 32.2C (90F).
http://www.powells.com/biblio?show=0553587129&...
http://www.time.com/time/magazine/article/0,9171,9...
If it is 50C in those rooms, the people inside are dead or dying.
The cards are probably fine. All it takes is to search google to back up your figures (or to disprove them like I just did).
shadowteam - Wednesday, June 25, 2008 - link
You're just a dumb pissed off loser. There's a big difference between internal human temperature and its surroundings. In places like the Sahara, temperatures routinely hit 45C, and max out @ 55C. But does that mean people living there just die? No they don't, because they drink a lot of water, which helps their bodies get rid of excess heat to keep their internals at normal temperature (32C). You didn't have this knowledge to share so you decided to Google it instead, and made a fool out of yourself. Here, let me break it down for you. You said: "Keep in mind that was the internal temp of the guy"
Exactly, the guy was sick, and when you're sick, your body temperature rises, in which case 46C is the limit of survival. I suggest you take Bio-chemistry in college to learn more about human body, which is another 4 years before you finish school.
Ilmarin - Wednesday, June 25, 2008 - link
I'm not talking about chips failing altogether... just stability issues, similar to what you experience from over-zealous overclocking. Lots of people have encountered artifacting/crashes with stock-cooled cards over the years. If these are just 'bad chips' that are experiencing stability issues at high temps, then there are a lot of them getting through quality control. Of course NV and ATI do enough to make most people happy... but many of us have good reason to be nervous about temperature. I think they can and should do better. Dual slot exhaust coolers should be mandatory for the enthusiast/performance cards, with full fan control capability. Often it's up to the partners to get that right, and often it doesn't happen for at least a couple of months.

shadowteam - Wednesday, June 25, 2008 - link
I think it's more profitable for board partners to just roll out a stock card rather than go through the trouble of investing time/money into performance cooling. From what I've seen thus far, it's quite apparent that newer companies tend to go with exotic cooling to get themselves heard. Once they're in the game, it's back to stock cooling. For example, Palit and ECS came up with nice coolers for their 9600s. Remember Leadtek from past years? They don't even do custom coolers any more. ASUS, Powercolor, Gigabyte, Sapphire etc. just find it easier to throw in a 3rd party cooler from ZM, TT, TR, and call it a day.

DerekWilson - Wednesday, June 25, 2008 - link
you know we actually received an updated bios for a certain vendors 4850 that speeds the fan up a bit and should reduce heat ...i suspect a lot of vendors will start adjusting their fan tables actually ...
shadowteam - Wednesday, June 25, 2008 - link
I think this reply was meant for the guy right above me. I'm all for stock cooling :).

ImmortalZ - Wednesday, June 25, 2008 - link

"Quake Wars once again shows the 4870 outperforming the GTX 280, but this time it offers essentially the same performance as the GTX 280 - but at half the price."

You mean the 260 in the first instance?
No text in The Witcher page. I assume this is intentional.
Also, I've heard on the web that the 48xx series has dual-link on only one of its DVI ports. Is this true?
Oh and another thing - why is the post comment page titled "Untitled Page"? :P
rahat5810 - Wednesday, June 25, 2008 - link
Nice cards and nice article. But I would like to point out that there are some mistakes in the article, nothing fatal though. Like, not mentioning the 4870 in the list of cards, writing 280 instead of 260, and clicking on the picture to enlarge not working for some of the figures.

feelingshorter - Wednesday, June 25, 2008 - link
AMD almost has a perfect card, but the fact that the 4870 idles at 46.1 more watts than the 260 means the card will heat up people's rooms. At load, the difference of 16.1 watts more for the 4870 is forgivable.

If it's possible to overclock a card using software (without going into the BIOS screen), then why isn't it possible to underclock a card using software when the card's full potential isn't being used? I'd really be interested in knowing the answer, or maybe someone just hasn't asked the question?
I hardly care about Crysis; it's more a matter of will it run Starcraft II with 600 units on the map without overheating. Why doesn't AnandTech also test how hot the 4870 runs? Although the 4850 numbers aren't pretty at all, the 4870 has a dual slot cooler and might give better numbers, right? I only want to know because, like a lot of readers, I have doubts as to whether a card like the 4850 can run super hot and not die within 1+ years of hardcore gaming.