AMD's Radeon HD 6990: The New Single Card King
by Ryan Smith on March 8, 2011 12:01 AM EST
The launch driver for the 6990 is a preview version of Catalyst 11.4, which has been made available today; the final version will launch sometime in April. Compared to the earlier drivers we’ve been using, performance in most of our games is up by at least a few percent, particularly in CrossFire. For the launch of a dual-GPU card like the 6990, the timing couldn’t be better.
Along with these performance improvements, AMD is also adding a few new features to the Catalyst Control Center, the first time they’ve touched it since the introduction of the new design in January. Chief among these features, and also timed to launch with the 6990 today, is 5x1 portrait Eyefinity mode. Previously AMD has supported 3x1 and 3x2, but never anything wider than 3 monitors (even on the Eyefinity 6 series).
The 6990 is of course perfectly suited for the task, as it's able to drive 4+1 monitors without any miniDP MST hubs; indeed, the rendering capabilities of this card are wasted a good deal of the time when only driving one monitor. Other cards will also support 5x1 portrait, but only Eyefinity 6 cards can work without an MST hub at the moment. Notably, in spite of requiring one fewer monitor than 3x2 Eyefinity, this is easily the most expensive Eyefinity option yet, as portrait modes require monitors with wide vertical viewing angles to avoid color washout; you’d be hard-pressed to build a suitable setup with cheap TN monitors like you can for the landscape modes.
The other big change for power users is that AMD is adding a software update feature to the Catalyst Control Center, which will allow users to check for driver updates from within the CCC. It will also have an automatic update feature that checks for driver updates every two weeks. At this point there seems to be some confusion at AMD over whether this will be enabled by default: our drivers have it enabled by default, while we were initially told it would be disabled. From AMD’s perspective, having auto update enabled improves the user experience by helping to get users onto newer drivers that resolve bugs in similarly new games, but at the same time I could easily see this backfiring by being one more piece of software nagging for an update every month.
Finally, AMD is undergoing a rebranding (again), this time for the Catalyst Control Center. If you use an AMD CPU + AMD consumer GPU, the Catalyst Control Center is now the AMD VISION Engine Control Center. If you use an Intel CPU + AMD consumer GPU it’s still the Catalyst Control Center. If you use a professional GPU (regardless of CPU), it’s the Catalyst Pro Control Center.
The Test
Due to the timing of this launch we haven’t had an opportunity to do in-depth testing of Eyefinity configurations. We will be updating this article with Eyefinity performance data within the next day. In the meantime, we have our usual collection of single-monitor tests.
CPU:             Intel Core i7-920 @ 3.33GHz
Motherboard:     Asus Rampage II Extreme
Chipset Drivers: Intel 9.1.1.1015
Hard Disk:       OCZ Summit (120GB)
Memory:          Patriot Viper DDR3-1333 3 x 2GB (7-7-7-20)
Video Cards:     AMD Radeon HD 6990
                 AMD Radeon HD 6970
                 AMD Radeon HD 6950 2GB
                 AMD Radeon HD 6870
                 AMD Radeon HD 6850
                 AMD Radeon HD 5970
                 AMD Radeon HD 5870
                 AMD Radeon HD 5850
                 AMD Radeon HD 5770
                 AMD Radeon HD 4870X2
                 AMD Radeon HD 4870
                 NVIDIA GeForce GTX 580
                 NVIDIA GeForce GTX 570
                 NVIDIA GeForce GTX 560 Ti
                 NVIDIA GeForce GTX 480
                 NVIDIA GeForce GTX 470
                 NVIDIA GeForce GTX 460 1GB
                 NVIDIA GeForce GTX 460 768MB
                 NVIDIA GeForce GTS 450
                 NVIDIA GeForce GTX 295
                 NVIDIA GeForce GTX 285
                 NVIDIA GeForce GTX 260 Core 216
Video Drivers:   NVIDIA ForceWare 262.99
                 NVIDIA ForceWare 266.56 Beta
                 NVIDIA ForceWare 266.58
                 AMD Catalyst 10.10e
                 AMD Catalyst 11.1a Hotfix
                 AMD Catalyst 11.4 Preview
OS:              Windows 7 Ultimate 64-bit
130 Comments
smookyolo - Tuesday, March 8, 2011
My 470 still beats this at compute tasks. Hehehe. And damn, this card is noisy.
RussianSensation - Tuesday, March 8, 2011
Not even close, unless you are talking about outdated distributed computing code like Folding@Home. Try any of the modern DC projects like Collatz Conjecture, MilkyWay@home, etc., and a single HD4850 will smoke a GTX580. This is because GeForce Fermi cards are limited to 1/8th of their single-precision rate for double precision. In other words, an HD6990, which has 5,100 Gflops of single-precision performance, will have 1,275 Gflops of double-precision performance (since AMD's DP rate is 1/4th of SP). In comparison, the GTX470 has 1,089 Gflops of SP performance, which translates into only 136 Gflops in DP. Therefore, a single HD6990 is 9.4x faster in modern computational GPGPU tasks.
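The arithmetic behind those figures is easy to check. Below is a minimal Python sketch plugging in the theoretical peak numbers quoted in the comment above; the DP:SP ratios are the architectural rates cited, not measured throughput, and the helper name is just for illustration.

# Back-of-the-envelope check of the figures above. Peak GFLOPS numbers
# are the theoretical values quoted in the comment, not measurements.

def dp_gflops(sp_gflops, dp_sp_ratio):
    # Theoretical double-precision throughput from the single-precision
    # peak and the architecture's DP:SP rate.
    return sp_gflops * dp_sp_ratio

hd6990_dp = dp_gflops(5100, 1 / 4)  # Cayman: DP runs at 1/4 the SP rate
gtx470_dp = dp_gflops(1089, 1 / 8)  # GeForce Fermi: DP capped at 1/8 of SP

print(f"HD 6990 DP: {hd6990_dp:.0f} GFLOPS")   # 1275 GFLOPS
print(f"GTX 470 DP: {gtx470_dp:.0f} GFLOPS")   # ~136 GFLOPS
print(f"Ratio: {hd6990_dp / gtx470_dp:.1f}x")  # ~9.4x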
palladium - Tuesday, March 8, 2011
Those are just theoretical performance numbers. Not all programs *even newer ones* can effectively extract ILP from AMD's VLIW4 architecture. Those that can will no doubt be faster; others that can't would be slower. As far as I'm aware, lots of programs still prefer nV's scalar arch, but that might change with time.
MrSpadge - Tuesday, March 8, 2011
Well.. if you can only use 1 of 4 VLIW units in DP then you don't need any ILP. Just keep the threads in flight and it's almost like nVidia's scalar architecture, just with everything else being different ;)
MrS
IanCutress - Tuesday, March 8, 2011
It all depends on the driver and compiler implementation, and the guy/gal coding it. If you code the same but the compilers are generations apart, then the compiler of the higher generation wins out. If you've had more experience with CUDA-based OpenCL, then your NVIDIA OpenCL implementation will outperform your ATI Stream implementation. Pick your card for its purpose. My homebrew stuff works great on NVIDIA, but I only code for NVIDIA - same thing for big league compute directions.
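The vendor dependence described here is visible even at the API level: the same OpenCL kernel source is compiled at runtime by whichever platform's compiler you build it against. A minimal sketch using the pyopencl bindings follows; the trivial kernel is a hypothetical example, not code from the article.

import pyopencl as cl

# Trivial kernel; the point is that whichever vendor's OpenCL stack
# backs the chosen platform compiles this source at build() time.
SRC = """
__kernel void scale(__global float *x, const float a) {
    int i = get_global_id(0);
    x[i] *= a;
}
"""

for platform in cl.get_platforms():
    print(platform.name, platform.version)
    for device in platform.get_devices():
        ctx = cl.Context([device])
        # The vendor's compiler (NVIDIA's or AMD's) runs here, so the
        # generated code can differ in quality between implementations.
        cl.Program(ctx, SRC).build()
        print("  built for", device.name)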
stx53550 - Tuesday, March 15, 2011
off yourself idiot
m.amitava - Tuesday, March 8, 2011
".....Cayman’s better power management, leading to a TDP of 37W"- is it honestly THAT good? :P
m.amitava - Tuesday, March 8, 2011
oops...re-read...that was idle TDP!!
MamiyaOtaru - Tuesday, March 8, 2011
my old 7900GT used 48W at load. D:
Don't like the direction this is going. In GPUs it's hard to see any performance advances that don't come with equivalent increases in power usage, unlike what the Core 2 was compared to the Pentium 4.
Shadowmaster625 - Tuesday, March 8, 2011
Are you kidding? I have a 7900GTX I don't even use, because it fried my only spare large power supply. A 5670 is twice as fast and consumes next to nothing.