ATI Radeon HD 3870 & 3850: A Return to Competition
by Anand Lal Shimpi & Derek Wilson on November 15, 2007 12:00 AM EST - Posted in GPUs
New Features You Say? UVD and DirectX 10.1
As we mentioned, new to RV670 are UVD, PowerPlay, and DX10.1 hardware. We've covered UVD quite a bit before now, and we are happy to see that UVD is now part of AMD's top-to-bottom product line. To recap, UVD is AMD's video decode engine, which handles decode, deinterlacing, and post processing for video playback. The key feature of UVD is full decode support for both VC-1 and H.264. MPEG-2 decode is also supported, but the entropy decode step for MPEG-2 video is not performed in hardware. The advantage over NVIDIA hardware is the inclusion of entropy decode support for VC-1 video, but AMD tends to overplay this: VC-1 is lighter weight than H.264, and offloading its entropy decode step doesn't make or break playability even on lower-end CPUs.
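To make the codec split concrete, here is a tiny illustrative C++ sketch of which decode stages land on the GPU under UVD as described above; the enum, struct, and table are purely our own shorthand, not any AMD or DXVA interface:

```cpp
// Illustrative only: which decode stages UVD offloads per codec, per the
// description above. The names here are our own, not an AMD or DXVA API.
#include <cstdio>

enum Stage : unsigned {
    EntropyDecode    = 1 << 0,  // CABAC/CAVLC (H.264), bitstream decode (VC-1, MPEG-2)
    InverseTransform = 1 << 1,
    MotionComp       = 1 << 2,
    Deblocking       = 1 << 3,
};

struct CodecOffload {
    const char* codec;
    unsigned    hwStages;  // bitmask of stages handled by the GPU
};

int main() {
    const unsigned kAll = EntropyDecode | InverseTransform | MotionComp | Deblocking;
    const CodecOffload table[] = {
        { "H.264",  kAll },                   // full hardware decode
        { "VC-1",   kAll },                   // full decode, including the entropy step NVIDIA leaves to the CPU
        { "MPEG-2", kAll & ~EntropyDecode },  // entropy decode stays on the CPU
    };
    for (const CodecOffload& c : table)
        std::printf("%-7s entropy decode on GPU: %s\n",
                    c.codec, (c.hwStages & EntropyDecode) ? "yes" : "no");
    return 0;
}
```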
DirectX 10.1 is essentially a point release of DirectX that clarifies some functionality and adds a few features. Both AMD's and NVIDIA's existing DX10 hardware support some of the DX10.1 requirements, but since neither supports everything, neither can claim DX10.1 as a feature. And because DX10 did away with capability bits, game developers can't rely on any individual DX10.1 feature being implemented in DX10 hardware.
It's good to see AMD embracing DX10.1 so quickly, as it will eventually be the way of the world. The new capabilities DX10.1 enables include enhanced developer control over AA sample patterns and pixel coverage, blend modes that can be set per render target rather than shared across all render targets, double the number of vertex shader inputs, required fp32 filtering, and cube map arrays, which can help make global illumination algorithms faster. These features might not make it into games very quickly, as we're still waiting for games that really push DX10 as it is now, but AMD is absolutely leading NVIDIA in this area.
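The practical consequence of the missing capability bits is that an application can't probe for individual 10.1 features: it either gets a full DX10.1 device or it falls back to a plain DX10 path. A minimal sketch of that check against the Direct3D 10.1 runtime might look like the following (swap chain creation and real error handling omitted for brevity):

```cpp
// Minimal sketch: ask for a Direct3D 10.1 device, fall back to 10.0.
// Swap-chain creation and real error handling are omitted for brevity.
#include <d3d10_1.h>
#pragma comment(lib, "d3d10_1.lib")

ID3D10Device1* CreateBestDevice()
{
    ID3D10Device1* device = nullptr;

    // The runtime exposes whole feature levels, not per-feature caps, so the
    // application requests 10.1 and steps down if the hardware can't do it.
    const D3D10_FEATURE_LEVEL1 levels[] = {
        D3D10_FEATURE_LEVEL_10_1,   // RV670-class DX10.1 parts
        D3D10_FEATURE_LEVEL_10_0,   // existing DX10 hardware
    };

    for (D3D10_FEATURE_LEVEL1 level : levels) {
        HRESULT hr = D3D10CreateDevice1(
            nullptr, D3D10_DRIVER_TYPE_HARDWARE, nullptr, 0,
            level, D3D10_1_SDK_VERSION, &device);
        if (SUCCEEDED(hr))
            return device;  // renderer selects its 10.1 or 10.0 code path here
    }
    return nullptr;  // no DX10-class hardware available
}
```

Whichever level succeeds, the renderer then has to carry separate code paths for anything 10.1-specific, which is part of why these features tend to arrive in games slowly.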
Better Power Management
As for PowerPlay, which is usually found in mobile GPUs, AMD has opted to include broader power management support in its desktop GPUs as well. While they aren't able to turn off parts of the chip entirely, clock gating is used, along with dynamic adjustment of core and memory clock speeds and voltages. The command buffer is monitored to determine when power saving features need to kick in. This means that when applications need the power of the GPU it will run at full speed, but when less is going on (or even when something is CPU limited) we should see power, noise, and heat characteristics improve.
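AMD hasn't published the details of PowerPlay's decision logic, so purely as a hypothetical sketch (our own state values and thresholds, not AMD's), a command-buffer-driven clock governor could look something like this:

```cpp
// Hypothetical sketch of a PowerPlay-style governor. None of this reflects
// AMD's actual hardware or driver logic; it only illustrates picking a
// clock/voltage state based on command-buffer occupancy.
#include <cstdint>

struct PowerState {
    uint32_t coreMHz;
    uint32_t memMHz;
    uint32_t coreMilliVolts;
};

// Invented values for illustration; the real performance states aren't public.
static const PowerState kStates[] = {
    { 300,  600,  900 },  // idle / light 2D
    { 500,  900, 1050 },  // video playback, windowed 3D
    { 775, 1125, 1200 },  // full 3D load
};
constexpr int kNumStates = sizeof(kStates) / sizeof(kStates[0]);

// Called periodically by the (hypothetical) driver with the fraction of the
// command buffer currently occupied, in the range 0.0 .. 1.0.
int SelectPowerState(double queueOccupancy, int currentState)
{
    // Hysteresis: step up eagerly, step down only when the queue is nearly
    // empty, so brief gaps between frames don't cause clock thrash.
    if (queueOccupancy > 0.60 && currentState < kNumStates - 1) return currentState + 1;
    if (queueOccupancy < 0.10 && currentState > 0)              return currentState - 1;
    return currentState;
}
```

The hysteresis captures the behavior described above: a busy command buffer keeps the GPU at full clocks regardless of whether the application is fullscreen, while an idle or CPU-limited workload lets clocks and voltage drift down.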
One of the cool side effects of PowerPlay is that clock speeds are no longer determined by application state. On previous hardware, 3D clock speeds were only enabled when a fullscreen 3D application started, which meant GPU computing software (like Folding@home) ran only at 2D clock speeds. Since these programs will no doubt fill the command queue, they will now get full performance from the GPU. This also means that games run in a window will perform better, which should be good news to MMO players everywhere.
But like we said, shipping 55nm parts less than a year after the first 65nm hardware is a fairly aggressive schedule; the shrink is one of the major benefits of the 3800 series and an enabler of the kind of performance this hardware is able to deliver. We asked AMD about their experience with the transition from 65nm to 55nm, and their reply was something along the lines of: "we hate to use the word flawless... but we're running on first silicon." Moving this fast seems to have surprised even AMD, but it's great when things fall into line. This terrific execution has put AMD back on even footing with NVIDIA in terms of release schedule and performance segment. Coming back from the R600 delay to hit the market in time to compete with the 8800 GT is a huge thing, and we can't stress it enough. To spoil the surprise a bit, AMD did not outperform the 8800 GT, but this schedule puts AMD back in the game. Top performance is secondary at this point to solid execution, great pricing, and high availability. Good price/performance and a higher level of competition with NVIDIA than R600 delivered will go a long way toward reestablishing AMD's position in the graphics market.
Keeping in mind that this is an RV GPU, we can expect AMD to have been working on a new R series part in conjunction with it. It remains to be seen what (and whether) this part will actually be, but hopefully we can expect something that will put AMD back in the fight for a high-end graphics part.
Right now, all AMD has confirmed is a single-slot, dual-GPU 3800 series part slated for next year, which makes us a little nervous about the prospect of a solid high-end single-GPU product. But we'll have to wait and see what's in store for us when we get there.
117 Comments
Agent11 - Sunday, November 18, 2007 - link
I was very disappointed with the use of a P35 chipset to compare CrossFire to SLI. You use a motherboard with 16x by 16x PCIe lanes for SLI but one with 16x by 4x for CrossFire... and then make a point of CrossFire not scaling as well!
Ask any bencher, it does matter.
SmoulikNezbeda - Sunday, November 18, 2007 - link
Hi, I would like to know what the numbers in the graphs really represent. Are those average FPS or something like (min + max + avg)/3 FPS?
Thanks
Agent11 - Monday, November 19, 2007 - link
If it isn't average then there's a problem.
wecv - Monday, August 14, 2017 - link
Hello, I am from the future. We now have 2GB GDDR5 GPUs as entry level, 4GB-8GB GDDR5 GPUs for midrange, and 8GB GDDR5/GDDR5X/HBM2 or 11GB GDDR5X for high-end and enthusiast!
You may go and live back in the past.
TheOtherRizzo - Saturday, November 17, 2007 - link
What would you need a frame buffer of 512 MB for? That's enough room for about 80 1080p images. Sounds to me like someone at ATI is stuck in 1994, when framebuffers were the only memory on a graphics card...
wecv - Monday, August 14, 2017 - link
Hello, I am from the future. We now have 2GB GDDR5 GPUs as entry level, 4GB-8GB GDDR5 GPUs for midrange, and 8GB GDDR5/GDDR5X/HBM2 or 11GB GDDR5X for high-end and enthusiast!
You may go and live back in the past.
ZipFreed - Friday, April 13, 2018 - link
Lol, this comment is awesome and cracked me up. I am reading these older GPU reviews researching something and have been thinking similar sentiments to myself as I go. Glad you necro'd this.
0roo0roo - Saturday, November 17, 2007 - link
The convoluted naming systems of GPUs guarantee that pretty much only geeks in the know will make good purchasing decisions. This matters to the health of the PC game industry; I'm sure many have been turned off by the experience of going to their local store, buying a card within their budget with little other useful information, and getting a lousy experience. I'm sure retailers actually benefit from the confusion, since they can charge more and just hope the customer bases their decision on their price range.
Shark Tek - Saturday, November 17, 2007 - link
Finally, GPU manufacturers are thinking right. Instead of making oven-like, power-hog GPUs, they're trying to make things right, like Intel and AMD are doing with their CPU lines, with less heat and power consumption.
Let's see how the upcoming generations will perform. ;)
araczynski - Friday, November 16, 2007 - link
I'm assuming this is a midrange card with better stuff coming out? Otherwise I don't see the point of getting anything other than an 8800 GT; prices are too close to give up top of the line for merely 60 or so bucks. Or better yet, wait a few more months till the 8900s roll out.