AMD's Radeon HD 5870: Bringing About the Next Generation Of GPUs
by Ryan Smith on September 23, 2009 9:00 AM EST - Posted in
- GPUs
Meet the 5870
The card we’re looking at today is the Radeon HD 5870, based on the Cypress core.
Compared to the Radeon HD 4870, the 5870 has seen some changes to the board design. AMD has now moved to using a full sheath on their cards (including a backplate), very much like the ones that NVIDIA has been using since the 9800 GTX. The card measures 10.5” long, an inch longer than the 4890 and the same length as the 4870X2 and NVIDIA's GTX lineup.
The change in length means that AMD has moved the PCIe power connectors to the top of the card facing upwards, as there’s no longer enough room in the rear. Facing upwards is also a change from the 4870x2, which had them facing the front of the card. This, in our opinion, makes it easier to plug and unplug the PCIe power connectors, since it’s now possible to see what you’re doing.
Since the card has a TDP of 188W, AMD can still get away with using two 6-pin connectors. This is going to be good news for those of you with older power supplies that don’t feature 8-pin connectors, as previously the fastest cards without 8-pin connectors were the 4890 and GTX 285.
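As a back-of-the-envelope check (using the PCIe specification's per-source limits, which the article doesn't spell out), the power budget works out as follows:

```python
# Sketch: why two 6-pin PCIe connectors suffice for a 188W TDP.
# Per the PCIe spec, the x16 slot supplies up to 75W and each 6-pin
# auxiliary connector another 75W (an 8-pin connector supplies 150W).

SLOT_W = 75
SIX_PIN_W = 75

budget = SLOT_W + 2 * SIX_PIN_W  # total power available to the card
tdp = 188

print(budget)        # 225W budget
print(budget - tdp)  # 37W of headroom over the rated TDP
```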
Briefly, the 5850, which we are not testing today, will be slightly smaller than the 5870, coming in at 9.5”. It keeps the same cooler design; however, the PCIe power connectors are back on the rear of the card.
With the 5800 series, DisplayPort is getting a much-needed kick in the pants. DisplayPort (full size) is standard on all 5800 series cards – prior to this it has been rather absent on reference cards. Along with a DisplayPort, the 5870 reference card contains a dedicated HDMI port, and a pair of DVI ports.
Making 4 ports fit on a card isn’t a trivial task, and AMD has taken an interesting direction in making it happen. Rather than putting every port on the same slot of the bracket as the card itself, one of the DVI ports is raised onto the second bracket. ATI could just as easily have equipped these cards with only 1 DVI port and used an HDMI-to-DVI adapter for the second port. The advantage of going this direction is that the 5800 series can still drive two VGA monitors when using DVI-to-VGA adapters, and at the same time having an HDMI port built in means that no special adapters are necessary to get an HDMI port with audio capabilities. The only catch to this specific port layout is that the card still only has enough TMDS transmitters for two ports. So you can use 2x DVI or 1x DVI + HDMI, but not 2x DVI + HDMI. For 3 DVI-derived ports, you will need an active DisplayPort-to-DVI adapter.
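The TMDS constraint described above can be sketched as a quick validity check; the port labels here are our own shorthand, not AMD's:

```python
# Sketch of the display-output constraint: four physical ports but only
# two TMDS transmitters, so at most two TMDS-driven outputs (DVI/HDMI)
# can be active at once. DisplayPort does not use a TMDS transmitter.

TMDS_PORTS = {"DVI-1", "DVI-2", "HDMI"}

def combo_is_valid(active_ports, tmds_limit=2):
    """True if the requested set of active outputs fits the hardware."""
    return sum(1 for p in active_ports if p in TMDS_PORTS) <= tmds_limit

print(combo_is_valid({"DVI-1", "DVI-2"}))                 # True
print(combo_is_valid({"DVI-1", "HDMI"}))                  # True
print(combo_is_valid({"DVI-1", "DVI-2", "HDMI"}))         # False
print(combo_is_valid({"DVI-1", "DVI-2", "DisplayPort"}))  # True
```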
With the configuration AMD is using, fitting that second DVI port also means that the exhaust vent of the 5800 series cards does not span the full length of the card as is usually the case; rather, it’s a hair over half the length. The smaller size had us concerned about the 5870’s cooling capabilities, but as you’ll see with our temperature data, even with the smaller exhaust vent the load temperatures are no different from those of the 4870 or 4850, at 89C. And this is in spite of the fact that the 5870 is rated at 28W more than the 4870.
With all of these changes also come some changes in the loudness of the 5870 as compared to the 4870. The 27W idle power draw means that AMD can reduce the speed of the fan some, and they say that the fan they’re using now is less noticeable (but not necessarily quieter) than what was on the 4870. In our objective testing the 5870 was no quieter at idle than any of the 4800 series cards, at 46.6dB, and it’s actually louder than any of those cards at load, at 64dB. But in our subjective testing it has less of a whine. If you go by the objective data, this is a push at idle and louder at load.
Speaking of whining, we’re glad to report that the samples we received do not have the characteristic VRM whine/singing that has plagued many last-generation video cards. Most of our GTX cards and roughly half of our 4800 series cards generated this noise under certain circumstances, but the 5870 does not.
Finally, let’s talk about memory. Despite doubling just about everything compared to RV770, Cypress and the 5800 series cards did not double their memory bandwidth. Moving from the 4870 and its 900MHz base memory clock, the 5870 only jumps up by 33% to 1.2GHz, in effect increasing the ratio of GPU compute elements to memory bandwidth.
Looking back at RV770, AMD believes that it was not bandwidth-starved on the cards that used GDDR5. And since it had more bandwidth than it needed, it was not necessary to go for significantly more bandwidth on Cypress. This isn’t something we can easily test, but in our benchmarks the 5870 never doubles the performance of the 4870, in spite of being nearly twice the card. Graphics processing is embarrassingly parallel, but that doesn’t mean it scales perfectly. The difference may be a product of that or of the smaller increase in memory bandwidth; we can’t tell. What is certain, however, is that we never see a hard memory-bandwidth cap: the 5870 always outscores the 4870 by a great deal more than 33%.
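As a rough sanity check on that 33% figure (assuming both cards run GDDR5 on a 256-bit bus at 4 bits per pin per base-clock cycle, which matches the clocks quoted above):

```python
# Sketch: effective memory bandwidth for the 4870 vs. the 5870.
# GDDR5 transfers 4 bits per pin per base-clock cycle, so
# effective bandwidth = base clock * 4 * (bus width / 8) bytes/sec.

def gddr5_bandwidth_gbps(base_clock_mhz, bus_width_bits=256):
    """Return effective bandwidth in GB/s for a GDDR5 memory bus."""
    return base_clock_mhz * 1e6 * 4 * (bus_width_bits / 8) / 1e9

hd4870 = gddr5_bandwidth_gbps(900)   # 115.2 GB/s
hd5870 = gddr5_bandwidth_gbps(1200)  # 153.6 GB/s
print(hd4870, hd5870)
print(hd5870 / hd4870 - 1)  # ~0.33, i.e. the 33% increase in the text
```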
327 Comments
SiliconDoc - Thursday, September 24, 2009 - link
Are you seriously going to claim that all ATI cards are not generally hotter than the nvidia cards? I don't think you really want to do that, no matter how much you wail about fan speeds. The numbers have been here for a long time and they are all over the net.
When you have a smaller die cranking out the same framerate/video, there is simply no getting around it.
You talked about the 295, as it really is the only nvidia that compares to the ati card in this review in terms of load temp, PERIOD.
In any other sense, the GT8800 would be laughed off the pages comparing it to the 5870.
Furthermore, one merely needs to look at the WATTAGE of the cards, and that is more than a plenty accurate measuring stick for heat on load, divided by surface area of the core.
No, I'm not the one not thinking, I'm not the one TROLLING, the TROLLING is in the ARTICLE, and the YEAR plus of covering up LIES we've had concerning this very issue.
Nvidia cards run cooler, ati cards run hotter, PERIOD.
You people want it in every direction, with every lying whine for your red god, so pick one or the other:
1.The core sizes are equivalent, or 2. the giant expensive dies of nvidia run cooler compared to the "efficient" "new technology" "packing the data in" smaller, tiny, cheap, profit margin producing ATI cores.
------
NOW, it doesn't matter what lies or spin you place upon the facts, the truth is absolutely apparent, and you WON'T be changing the physical laws of the universe with your whining spin for ati, and neither will the trolling in the article. I'm going to stick my head in the sand and SCREAM LOUDLY because I CAN'T HANDLE anyone with a lick of intelligence NOT AGREEING WITH ME! I LOVE TO LIE AND TYPE IN CAPS BECAUSE THAT'S HOW WE ROLL IN ILLINOIS!
SiliconDoc - Friday, September 25, 2009 - link
Well that is amazing, now a mod or site master has edited my text. Wow.
erple2 - Friday, September 25, 2009 - link
This just gets better and better... Ultimately, the true measure of how much waste heat a card generates will have to look at the power draw of the card, tempered with the output work that it's doing (aka FPS in whatever benchmark you're looking at). Since I haven't seen that kind of comparison, it's impossible to say anything at all about the relative heat output of any card. So your conclusions are simply biased towards what you think is important (and that should be abundantly clear).
Given that, one must look at performance per watt. The only wattage figures we have are for OCCT or WoW, so that's all one can conclude from this article. Since I didn't see the results from the OCCT test (in a nice, convenient FPS measure), we get the following from WoW:
5870: 73 fps at 295 watts = 247 FPS per kilowatt
275: 44.3 fps at 317 watts = 140 FPS per kilowatt
285: 45.7 fps at 323 watts = 141 FPS per kilowatt
295: 68.9 fps at 380 watts = 181 FPS per kilowatt
That means that the 5870 wins by at least 36% over the other 3 cards. That means that for this observation, the 5870 is, in fact, the most efficient of these cards. It therefore generates less heat than the other 3 cards. Looking at the temperatures of the cards, that strictly measures the efficiency of the cooler, not the efficiency of the actual card itself.
You can say that you think that I'm biased, but ultimately, that's the data I have to go on, and therefore that's the conclusions that can be made. Unfortunately, there's nothing in your post (or more or less all of your posts) that can be verified by any of the information gleaned from the article, and therefore, your conclusions are simply biased speculation.
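For what it's worth, the performance-per-watt arithmetic in this thread can be reproduced as follows (the correct unit is fps per kilowatt, and the wattages are total system draw from the review's WoW test, not card-only figures):

```python
# Sketch of the performance-per-watt comparison in the comment above.
# System-level power draws understate the per-card differences, since
# the rest of the test bed draws the same power for every card.

cards = {
    "HD 5870": (73.0, 295),  # (fps in WoW, total system watts)
    "GTX 275": (44.3, 317),
    "GTX 285": (45.7, 323),
    "GTX 295": (68.9, 380),
}

# Rank the cards by efficiency, most efficient first.
for name, (fps, watts) in sorted(cards.items(),
                                 key=lambda kv: kv[1][0] / kv[1][1],
                                 reverse=True):
    print(f"{name}: {fps / watts:.3f} fps per watt "
          f"({1000 * fps / watts:.0f} fps per kW)")
```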
SiliconDoc - Saturday, September 26, 2009 - link
4870, 55nm, 256mm die, 150 watts: HOT
GTX 260, 55nm, 576mm die, 171 watts: COLD
3870, 55nm, 192mm die, 106 watts: HOT
That's all the further I should have to go.
3870 has THE LOWEST LOAD POWER USAGE ON THE CHARTS
- but it is still 90C, at the very peak of heat,
because it has THE TINIEST CORE !
THE SMALLEST CORE IN THE WHOLE DANG BEJEEBER ARTICLE !
It also has the lowest framerate - so there goes that erple theory.
---
The anomalies you will notice if you look are due to nm size, memory amount on board (less electricity used by the memory means the core used more), and one-slot vs two-slot coolers, as examples, but the basic laws of physics cannot be thrown out the window because you feel like doing it, nor can idiotic ideas like framerate come close to predicting core temp and its heat density at load.
Older CPUs may have horrible framerates and horribly high temps, for instance. The 4850 frames do not equal the 4870's, but their core temp/heat density envelope is very close to identical (SAME CORE SIZE > the 4850 having some die shaders disabled and GDDR3, the 4870 with GDDR5, full core active, more watts for mem and shaders, but the same PHYSICAL ISSUES: small core, high wattage for area, high heat)
erple2 - Tuesday, September 29, 2009 - link
I didn't say that the 3870 was the most efficient card. I was talking about the 5870. If you actually read what I had typed, I did mention that you have to look at how much work the card is doing while consuming that amount of power, not just temperatures and wattage. You sir, are a Nazi.
Actually, once you start talking about heat density at load, you MUST look at the efficiency of the card at converting electricity into whatever it's supposed to be doing (other than heating your office). Sadly, the only real way that we have to abstractly measure the work the card is doing is "FPS". I'm not saying that FPS predict core temperature.
SiliconDoc - Wednesday, September 30, 2009 - link
No, the efficiency of conversion you talk about has NOTHING to do with core temp AT ALL. The card could be massively efficient or inefficient at produced framerate, or just ERROR OUT with a sick loop in the core, and THAT HAS ABSOLUTELY NOTHING TO DO WITH THE CORE TEMP. IT RESTS ON WATTS CONSUMED EVEN IF FRAMERATE OUTPUT IS ZERO OR 300 A SECOND. (Your mind seems to have imagined that if the red god is slinging massive frames "out the dvi port" a giant surge of electricity flows through it to the monitor, and therefore "does not heat the card".)
I suggest you examine that lunatic red notion.
What YOU must look at is a red rooster rooter rimshot, in order that your self deception and massive mistake and face saving is in place, for you. At least JaredWalton had the sense to quietly skitter away.
Well, being wrong forever and never realizing a thing is perhaps the worst road to take.
PS - Being correct and making sure the truth is defended has nothing to do with some REDEYE cliche, and I certainly doubt the Gregalouge would embrace red rooster canada card bottom line crumbled for years ever more in a row, and diss big green corporate profits, as we both obviously know.
" at converting electricity into whatever it's supposed to be doing (other than heating your office). "
ONCE IT CONVERTS ELECTRICITY, AS IN "SHOWS IT USED MORE WATTS" it doesn't matter one ding dang smidgen what framerate is,
it could loop sand in the core and give you NO screeen output,
and it would still heat up while it "sat on it's lazy", tarding upon itself.
The card does not POWER the monitor and have the monitor carry more and more of the heat burden if the GPU sends out some sizzly framerates and the "non-used up watts" don't go sailing out the cards connector to the monitor so that "heat generation winds up somewhere else".
When the programmers optimize a DRIVER, and the same GPU core suddenly sends out 5 more fps, everything else being the same, it may or may not increase or decrease POWER USAGE. It can go ANY WAY. Up, down, or stay the same.
If they code in more proper "buffer fills" so the core is hammered solid, instead of flakey filling, the framerate goes up - and so does the temp!
If they optimize, for instance, an algorithm that better predicts what does not need to be drawn because it rests behind another image on top of it, framerate goes up, while temp and wattage used GO DOWN.
---
Even with all of that, THERE IS ONLY ONE PLACE FOR THE HEAT TO ARISE... AND IT AIN'T OUT THE DANG CABLE TO THE MONITOR!
SiliconDoc - Friday, September 25, 2009 - link
You can modify that, or be more accurate, by using core mass (including thickness of the competing dies), since the core mass is what consumes the electricity and generates heat. A smaller mass (or die size, almost exclusively referred to in terms of surface area, with the assumption that thickness is identical or near so) winds up getting hotter in terms of degrees Celsius when consuming a similar amount of electricity. Doesn't matter if one frame, none, or a thousand reach your eyes on the monitor.
That's reality, not hokum. That's why ATI cores run hotter: they are smaller and consume a similar amount of electricity, which winds up as heat in a smaller mass, and that means hotter.
Also, in actuality, the ATI heatsinks in a general sense, have to be able to dissipate more heat with less surface area as a transfer medium, to maintain the same core temps as the larger nvidia cores and HS areas, so indeed, should actually be "better stock" fans and HS.
I suspect they are slightly better as a general rule, but fail to excel enough to bring core load temps to nvidia general levels.
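The power-density argument being made in this thread can be put into numbers using the die sizes and board-power figures the commenters quote (treat both as rough approximations, and note that actual core temperature also depends heavily on the cooler):

```python
# Sketch: power density (watts per mm^2 of die area) for the figures
# quoted in this comment thread. Higher density means more heat
# concentrated in a smaller area, which is the crux of the argument.

dies = {
    "HD 4870 (RV770)": (150, 256),  # (board watts, die area in mm^2)
    "GTX 260 (GT200)": (171, 576),
    "HD 3870 (RV670)": (106, 192),
}

for name, (watts, mm2) in dies.items():
    print(f"{name}: {watts / mm2:.2f} W/mm^2")
# The smaller ATI dies come out at roughly twice the W/mm^2 of GT200.
```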
erple2 - Friday, September 25, 2009 - link
You understand that if there were no heatsink/cooling device on a GPU, it would heat up to crazy levels, far more than would be "healthy" for any silicon part, right? And you understand that measuring the efficiency of a part involves a pretty strong correlation between the input power draw of the card vs. the work that the card produces (which we can really only measure based on the output of the card, namely FPS), right? So I'm not sure that your argument means anything at all?
Curiously, the output wattage listed is for the entire system, not just for the card. Which means that the actual differences between the ATI cards vs. the nvidia cards is even larger (as a percentage, at least). I don't know what the "baseline" power consumption of the system (sans video card) is for the system acting as the test bed is.
Ultimately, the amount of electricity running through the GPU doesn't necessarily tell you how much heat the processors generate. It's dependent on how much of that power is "wasted" as heat energy (that's Thermodynamics for you). The only way to really measure the heat production of the GPU is to determine how much power is "wasted" as heat. Curiously, you can't measure that by measuring the temperature of the GPU. Well, you CAN, but you'd have to remove the Heatsink (and Fan). Which, for ANY GPU made in the last 15 years, would cook it. Since that's not a viable alternative, you simply can't make broad conclusions about which chip is "hotter" than another. And that is why your conclusions are inconclusive.
BTW, the 5870 consumes "less" power than the 275, 285 and 295 GPUs (at least, when playing WoW).
I understand that there may be higher wattage per square millimeter flowing through the 5870 than the GTX cards, but I don't see how that measurement alone is enough to state whether the 5870 actually gets hotter.
SiliconDoc - Saturday, September 26, 2009 - link
Take a look at SIZE my friend: http://www.hardforum.com/showthread.php?t=1325165
There's just no getting around the fact that the more joules of heat in any time period (wattage used = joules over time!) that go into a smaller area, the hotter it gets, faster!
Nothing changes this, no red rooster imagination will ever change it.
SiliconDoc - Saturday, September 26, 2009 - link
"Ultimately, the true measure of how much waste heat a card generates will have to look at the power draw of the card, tempered with the output work that it's doing (aka FPS in whatever benchmark you're looking at)."
NO, WRONG.
---
Look at any of the cards' power draw at idle or load. They heat up no matter how much "work" you claim they do by looking at any framerate, because they don't draw the power unless they USE THE POWER. That's the law that covers what usage of electricity MEANS for the laws of thermodynamics, or for E=MC2.
DUHHHHH.
---
If you're so bent on making idiotic calculations and applying them to the wrong ideas and conclusions, why don't you take core die size and divide by watts (the watts the companies issue or take it from the load charts), like you should ?
I know why. We all know why.
---
The same thing is beyond absolutely apparent in CPU's, their TDP, their die size, and their heat envelope, including their nm design size.
DUHHH. It's like talking to a red fanboy who cannot face reality, once again.