Memory Scaling on Core i7 - Is DDR3-1066 Really the Best Choice?
by Gary Key on June 24, 2009 9:00 AM EST - Posted in Memory
Game Performance – Overclocked with SLI
Once again, we overclock our Core i7 920 to 3.8GHz (19x200) and run our standard game benchmarks at 1680x1050 2xAA HQ settings in single card and SLI configurations. This is a short synopsis of the results; our other game benchmarks, along with runs at 1920x1080, behaved in a similar manner. In most of our games, minimum frame rates, and sometimes average frame rates, responded to the latency advantage inherent in 1200 C5 operation rather than to the pure bandwidth advantage of 2000 C8.
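For readers who want to see how those two properties compare on paper, here is a quick back-of-the-envelope sketch (an illustration only, not part of our test methodology) that converts each kit's data rate and rated CAS latency into an approximate first-word latency and peak theoretical triple-channel bandwidth. Real-world behavior also depends on the secondary timings, the uncore and QPI ratios, and the workload itself.

```python
# Back-of-the-envelope numbers for the DDR3 kits discussed in this article.
# Simplified: ignores tRCD/tRP, command rate, uncore/QPI ratios, and controller overhead.

kits = {
    "DDR3-1066 C7": (1066, 7),
    "DDR3-1200 C5": (1200, 5),
    "DDR3-1600 C6": (1600, 6),
    "DDR3-2000 C8": (2000, 8),
}

for name, (data_rate, cas) in kits.items():
    io_clock_mhz = data_rate / 2              # DDR transfers twice per clock
    cas_ns = cas / io_clock_mhz * 1000        # CAS latency in nanoseconds
    peak_gb_s = data_rate * 8 * 3 / 1000      # 64-bit channel x 3 channels
    print(f"{name}: ~{cas_ns:.1f} ns CAS, ~{peak_gb_s:.1f} GB/s peak")
```

The CAS figures land within roughly a nanosecond of each other for everything above DDR3-1066, while peak theoretical bandwidth nearly doubles from the slowest kit to the fastest; that gap between what the kits can move and what the applications actually request is the story the benchmarks below keep telling.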
FarCry 2
We set the performance feature set to Very High, graphics to High, and enable DX10 with AA set to 2x. The in-game benchmark tool is utilized with the Ranch Small level being selected for demo duties.
Average frame rates are up an astounding (had to make it interesting) 1% utilizing 1200 C5 over 2000 C8 while minimum frame rates improve by a ground shattering 0% when moving from 1200 C5 to 2000 C8 in single card results. In SLI operation, average frame rates improve by a familiar 1% as we crank up memory speed while minimum frame rates are 1% better when using 1200 C5 compared to 2000 C8. Obviously, the impact on actual game performance was nonexistent with any of our memory choices.
Warhammer 40K: Dawn of War II
We crank all options to High, enable AA, and then run the built-in performance benchmark for our result.
In our stock tests, this game responded very well to memory changes as average frame rate performance increased by 3% and minimum frame rates by 41% when moving from 1066 C7 to 1600 C6. However, in single card mode we see a minimal impact on average frame rates while minimum frame rates improve by 12% using 1200 C5 compared to 2000 C8. In SLI mode, average frame rates increased almost 3% moving from 1200 C5 to 2000 C8 but minimum frame rates improved a little over 2% with 1200 C5 having the advantage over 2000 C8.
World in Conflict - Soviet Assault
We set our options to High, DX10, 2xAA, 4xAF, and then utilize FRAPS to track a repeatable game sequence for our results.
Our single card results show average frame rates flat-lining, but we do see a drop in minimum frame rates with DDR3-1600 C8. In the SLI test, average frame rates improve by 2% and minimum frame rates by 5% when moving from 1200 C5 to 1600 C6.
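The World in Conflict numbers above come from a FRAPS capture of a repeatable sequence rather than a built-in benchmark tool. For anyone curious how the average and minimum figures fall out of such a capture, the sketch below shows one way to derive them from a FRAPS-style frametimes log; the file name is hypothetical, and we assume a CSV with a header row followed by a frame index and a timestamp in milliseconds.

```python
# Minimal sketch: derive average and minimum FPS from a FRAPS-style frametimes log.
# "frametimes.csv" is a placeholder name; columns assumed: frame index, time in ms.
import csv

with open("frametimes.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)                                   # skip the header row
    times_ms = [float(row[1]) for row in reader]   # frame completion timestamps

duration_s = (times_ms[-1] - times_ms[0]) / 1000.0
avg_fps = (len(times_ms) - 1) / duration_s         # frame intervals per second

# Minimum FPS: frames rendered in the slowest full second of the run.
per_second = {}
for t in times_ms:
    bucket = int(t // 1000)
    per_second[bucket] = per_second.get(bucket, 0) + 1
full_seconds = sorted(per_second)[1:-1]             # drop partial first/last second
min_fps = min(per_second[s] for s in full_seconds) if full_seconds else avg_fps

print(f"Average FPS: {avg_fps:.1f}, Minimum FPS: {min_fps}")
```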
Comments
Seikent - Wednesday, June 24, 2009 - link
I'm not very sure if it's relevant, but I missed a load times comparison. I know that the bottleneck there should be the HDD, but I still think that there can be a performance boost.
deputc26 - Wednesday, June 24, 2009 - link
ave and min lines are mixed up.
MadBoris - Wednesday, June 24, 2009 - link
I'll be considering upgrading in October at the same time I go from XP to Win 7. So this is good to know if/when I go Core i7.
I guess I can see how the WinRAR RAM workload stays high since it grabs the buffers of compressed data chunks and writes them to disk as fast as the HW permits, so bandwidth matters then.
While it looks like very few apps can saturate the bandwidth, latency benefits/penalties always have an effect as usual.
Maybe I missed it, but I didn't see anything in the article that tried to explain the technical reasons "why" 2000 doesn't provide an advantage over 1066.
I understand the differences of latency and bandwidth. Is it really because no software is using RAM workloads large enough to benefit from increased bandwidth (except compression) or is there another bottleneck in the subsystem or CPU that doesn't allow moving all the data the RAM is capable of?
vol7ron - Wednesday, June 24, 2009 - link
Your question is long, so I didn't read it all, but does the bottom of pg. 2 answer it: "That brings us to another story. We had planned to incorporate a full overclocking section in this article but our DDR3-1866 and DDR3-2000 kits based on the Elpida DJ1108BASE, err Hyper ICs, have been experiencing technical difficulties as of late."
They said some other stuff, but it seems like it wouldn't be right to post info on faulty chips.
TA152H - Wednesday, June 24, 2009 - link
I'd like to see a test between the crippled i5 memory controller with very fast memory, and the i7 with low cost 1333 MHz memory. There's really no point in the 1066 memory, except for Dell, HP, etc... to throw in generic machines; it's not much cheaper than 1333 MHz, and the performance bump really seems to be biggest there. I think 1333 MHz (low latency) is a reasonable starting point for most people; the cost seems to warrant the performance. After that, you definitely see diminishing returns. It seems anyone buying an i5 with very expensive memory is probably a fool, but a few benchmarks might be interesting to validate or invalidate that. Of course, the i5 might be better when released, so even then it wouldn't be proof.
Gary Key - Wednesday, June 24, 2009 - link
I wish I could show i5 numbers, but that ability is officially locked down now. I can say that our results today will not be that much different when i5 launches; low latency 1333 or possibly 1600 will satisfy just about everyone. :)
strikeback03 - Thursday, June 25, 2009 - link
Of course, by the time you can share those numbers we will most likely have to specify whether we are talking about LGA-1366 i7 or LGA-1156 i7. Thanks Intel.
kaoken - Wednesday, June 24, 2009 - link
I think there is a mistake with the farcry graph. The min and avg lines should be switched.
hob196 - Thursday, June 25, 2009 - link
Looking closer, it might be that you have the SLI min on there instead of the non-SLI min.
halcyon - Wednesday, June 24, 2009 - link
It's so nice to see AT calling things as they are. This is why we come here.
Straight-up honest talk from adults to adults, with very little marketing speak, and the numbers do most of the talking.
Excellent test round-up, mucho kudos.