Unreal Tournament 3 CPU & High End GPU Analysis: Next-Gen Gaming Explored
by Anand Lal Shimpi & Derek Wilson on October 17, 2007 3:35 AM EST
Posted in: GPUs
It's been a long time coming, but we finally have Epic's first Unreal Engine 3-based game out on the PC. While the final version of Unreal Tournament 3 is still a little further out, last week's beta release has kept us occupied over the past several days as we benchmarked the engine behind Rainbow Six: Vegas, Gears of War and BioShock.
Epic's Unreal Engine 3 has been bringing us some truly beautiful, next-generation titles, and it is significantly more demanding on the CPU and GPU than Valve's Source engine. While far from the impossible-to-run beast that Oblivion was upon its release, UE3 is still more stressful on modern hardware than most of what we've seen thus far.
The Demo Beta
Although Unreal Tournament 3 is due out before the end of the year, what Epic released is a beta of the UT3 demo, so it's not as polished as the final demo will be. The demo beta can record demos but it can't play them back, so conventional benchmarking is out. Thankfully, Epic left in three scripted flybys that fly a camera around each level along a set path, devoid of all characters.
Real-world UT3 performance will be more strenuous than what these flybys show, but they're the best we can muster for now. The final version of UT3 should have full demo playback functionality, which will let us provide better performance analysis. The demo beta also ships with only medium-quality textures, so the final game can be even more stressful (and beautiful) if you so desire.
The flybys can run for an arbitrary period of time; we standardized on 90 seconds for each flyby in order to get repeatable results while still keeping the tests manageable to run. Three flyby benchmarks come bundled with the demo beta: DM-ShangriLa, DM-HeatRay and vCTF-Suspense.
As their names imply, the ShangriLa and HeatRay flybys cover the Shangri La and Heat Ray deathmatch levels, while vCTF-Suspense is a flyby of the sole vehicle CTF level that comes with the demo.
Our GPU tests were run at the highest quality settings and with the -compatscale=5 switch enabled, which puts all detail settings at their highest values.
Our CPU tests were run at the default settings without the compatscale switch as we're looking to measure CPU performance and not GPU performance.
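If you want to script the same runs yourself, a small harness like the sketch below will do it. Take it as a sketch under assumptions: the demo executable name (UT3Demo.exe) and the causeevent=FlyThrough, quickstart, -seconds, -nosound and -unattended switches are reconstructions from contemporary UT3 benchmarking guides rather than anything Epic documents in the beta; only the -compatscale switch mentioned above is confirmed here. Verify each against your install before trusting the numbers.

```c
/* Sketch of a batch runner for the three demo beta flybys.
 * ASSUMPTIONS: the executable name and every switch except -compatscale
 * are unverified reconstructions; check them against your install. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *flybys[] = { "DM-ShangriLa", "DM-HeatRay", "vCTF-Suspense" };
    enum { RUNS = 3 };                  /* average several runs for repeatability */
    char cmd[512];

    for (int i = 0; i < 3; i++) {
        for (int run = 1; run <= RUNS; run++) {
            /* -compatscale=5 puts all detail settings at their highest values
             * (GPU tests); leave it off when measuring CPU performance. */
            snprintf(cmd, sizeof cmd,
                     "UT3Demo.exe %s?causeevent=FlyThrough?quickstart=1 "
                     "-seconds=90 -compatscale=5 -nosound -unattended",
                     flybys[i]);
            printf("[%s] run %d: %s\n", flybys[i], run, cmd);
            if (system(cmd) != 0)
                fprintf(stderr, "run failed: %s\n", flybys[i]);
        }
    }
    return 0;
}
```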
The Test
Test Setup
CPU | Intel Core 2 Extreme QX6850 (3.33GHz, 4MB, 1333MHz FSB)
Motherboard | Intel: Gigabyte GA-P35C-DS3R; AMD: ASUS M2N32-SLI Deluxe
Video Cards | AMD Radeon HD 2900 XT, AMD Radeon X1950 XTX, NVIDIA GeForce 8800 Ultra, NVIDIA GeForce 8800 GTX, NVIDIA GeForce 8800 GTS 320MB, NVIDIA GeForce 7900 GTX
Video Drivers | AMD: Catalyst 7.10; NVIDIA: 163.75
Hard Drive | Seagate 7200.9 300GB (7200RPM, 8MB cache)
RAM | 2x1GB Corsair XMS2 PC2-6400 (4-4-4-12)
Operating System | Windows Vista Ultimate 32-bit
72 Comments
decalpha - Wednesday, October 17, 2007
Why not compare CPUs with similar cache sizes, since the Athlon 64 X2 6000+ has 2MB of cache whereas the Core 2 Duo E6850 has 4MB, and cache size does seem to matter?
drebo - Wednesday, October 17, 2007
I think it's even more relevant to point out that clock-for-clock comparisons have been worthless for a very long time, and they only seem to have come back on this site now that Intel has the more efficient pipeline.
PrinceGaz - Wednesday, October 17, 2007
The X2 6000+ actually has 2x 1MB caches, which in most cases is worse than 2MB shared, so the cache situation is even worse for AMD in the comparison that was performed.
drebo - Thursday, October 18, 2007
Well, cache size in general is less important for AMD processors, as the path from CPU to RAM is much, much quicker. It would be interesting (and very, very difficult) to gauge what the difference would be. This is most likely why they left AMD off the cache comparison charts: with architectures this dissimilar, it's impossible to isolate ONLY the memory subsystems, which is what a cache comparison would be attempting to do.
Cache misses on an Intel architecture are far more expensive than on AMD's, but without otherwise identical chips there's simply no way to make the comparison.
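A pointer-chasing loop is the usual way to put a rough number on that path from CPU to RAM: chase a randomized cycle of indices so every load depends on the one before it, and the time per step at a given working-set size approximates the raw latency of whichever cache level (or main memory) that set fits in. Below is a minimal sketch of the idea; the sizes, iteration counts and helper names (wide_rand, ns_per_load) are illustrative choices, not anything from the article or the comments.

```c
/* Rough pointer-chasing sketch: every load depends on the previous one,
 * so ns/load approximates raw latency at each working-set size,
 * exposing the L1 -> L2 -> DRAM cliffs. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* rand() may only give 15 bits (e.g. on Windows); widen it for big arrays */
static size_t wide_rand(void)
{
    return ((size_t)rand() << 15) ^ (size_t)rand();
}

static double ns_per_load(size_t n, long iters)
{
    size_t *next = malloc(n * sizeof *next);
    for (size_t i = 0; i < n; i++)
        next[i] = i;
    /* Sattolo's algorithm: a single randomized cycle through the whole
     * array, so hardware prefetchers can't predict the walk. */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = wide_rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    volatile size_t p = 0;
    clock_t start = clock();
    for (long k = 0; k < iters; k++)
        p = next[p];                   /* serially dependent loads */
    double ns = 1e9 * (double)(clock() - start) / CLOCKS_PER_SEC / iters;
    free(next);
    return ns;
}

int main(void)
{
    /* 32KB should sit in L1; 32MB is comfortably DRAM-bound on 2007 CPUs */
    for (size_t kb = 32; kb <= 64 * 1024; kb *= 4) {
        size_t n = kb * 1024 / sizeof(size_t);
        printf("%8zu KB: %6.1f ns/load\n", kb, ns_per_load(n, 20000000L));
    }
    return 0;
}
```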
bloc - Wednesday, October 17, 2007
I think if you compared the 8600 GTS and the HD 2600 XT, the performance would be pretty close, with the 2600 XT being $50 cheaper.
The architecture is there; some games like CoD4 just haven't taken advantage of it yet.
ImmortalZ - Wednesday, October 17, 2007
The second set of graphs on page 3 seems to be all confused. Mixed-up title text?
Also, regarding the ATI midrange part, surely you guys have heard about the 2900 Pro?
JarredWalton - Wednesday, October 17, 2007
P3 graphs fixed. I'd imagine trying to get a 2900 Pro for testing is proving more difficult than anticipated; looking online, the few places I've seen that list them are out of stock.
ImmortalZ - Wednesday, October 17, 2007
Well, it's easy to test a 2900 Pro: underclock a 2900 XT to 600MHz core and 1600MHz memory and test away! :D (There are 512MB GDDR3 and 1GB GDDR4 versions, so...) Just change the price from $389.99 to $249.99 for the 512MB and $319.99 for the 1GB.
Of course, I'd personally wait for the 2950s to show up - single-slot coolers are teh win :P
Bremen7000 - Wednesday, October 17, 2007
What about the page 6 graphs? Am I missing something, or are they lacking something?
RobberBaron - Wednesday, October 17, 2007
Second that. The second set of charts on page 6 is Intel CPUs only. A little confusing.