Memory Scaling on Core i7 - Is DDR3-1066 Really the Best Choice?
by Gary Key on June 24, 2009 9:00 AM EST - Posted in Memory
Sometimes Memory Bandwidth Makes a Difference
Thankfully, for the memory makers at least, things aren't always as bleak as the PCMark Vantage results. Out of the forty or so applications we tested (23 reported on) in compiling this article, these are the applications that displayed differences worth mentioning when either increasing memory bandwidth or decreasing latencies.
WinRAR 3.9b3 x64
This benchmark compresses our AT workload consisting of a main folder that contains 954MB of files in 15 subfolders. The result is a file approximately 829MB in size.
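For readers who want to replicate this type of run, the timing itself is simple to script. The snippet below is a minimal sketch rather than our actual test harness: it assumes WinRAR's command-line Rar.exe is available on the PATH, and the at_workload folder name is a hypothetical stand-in for the 954MB file set described above.

import subprocess
import time
from pathlib import Path

# Hypothetical names -- point these at your own workload folder and output archive.
WORKLOAD = Path("at_workload")      # ~954MB of files spread across 15 subfolders
ARCHIVE = Path("at_workload.rar")   # compresses to roughly 829MB with our data set

def timed_compression() -> float:
    """Compress the workload with WinRAR's command-line tool and return elapsed seconds."""
    if ARCHIVE.exists():
        ARCHIVE.unlink()            # start each run from a clean slate
    start = time.perf_counter()
    # 'rar a' adds files to an archive; '-r' recurses into subfolders.
    subprocess.run(["rar", "a", "-r", str(ARCHIVE), str(WORKLOAD)], check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Average a few passes so disk caching and background tasks even out.
    times = [timed_compression() for _ in range(3)]
    print(f"runs: {', '.join(f'{t:.1f}s' for t in times)}; average: {sum(times) / len(times):.1f}s")

Storage speed still influences the absolute times, which is why the relative scaling between memory settings is the more interesting number.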
WinRAR loves bandwidth and latency improvements. Of all the applications we tested, this one responded best to improved memory performance. Going from DDR3-1066 C7 to DDR3-1866 C7 resulted in a 20% decrease in processing time by just varying memory speeds. Latency improvements within a given memory speed were most noticeable with 1066 C5 being about 6% quicker than 1066 C7 and 1333 C6 around 8% faster than 1333 C9.
Cinema 4D R11 x64
Cinema 4D R11 is one of our favorite programs for creating high-end 3D images and animations. We track the time it takes to render a swimming pool layout.
Maxon's top-flight program is bottlenecked at DDR3-1066 speeds and responded well to improved memory bandwidth, with a 7% performance increase going from 1066 C7 to 1866 C7.
LightWave 3D 9.6 x64
Another popular 3D rendering program is LightWave 3D 9.6. In this test we time the rendering of a single frame from an office building animation; the full scene takes approximately four and a half hours to render.
NewTek's premier application responded well to improved memory bandwidth, with a 6% advantage for DDR3-1866 C7 over DDR3-1066 C7.
Tom Clancy's H.A.W.X.
While not a true flight simulation or even a serious air combat game, H.A.W.X. is a lot of fun and looks visually stunning on a 30” monitor with all options turned up. In our case, we set all options to high, enable 2xAA and DX10, and then use FRAPS to time a custom demo sequence. We run three loops of the benchmark and average our scores for the results.
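To show how those three loops reduce to the numbers in the charts, here is a small sketch of the post-processing step. It is an illustration only: the hawx_loop*.txt file names and the one-frame-duration-per-line format are assumptions, and FRAPS' own logs may need an extra parsing step depending on how they are exported.

import statistics
from pathlib import Path

# Hypothetical log files: one frame duration in milliseconds per line, one file per benchmark loop.
LOG_FILES = [Path(f"hawx_loop{i}.txt") for i in (1, 2, 3)]

def frame_rates(log: Path) -> tuple[float, float]:
    """Return (average fps, minimum fps) for a single benchmark loop."""
    frame_ms = [float(token) for token in log.read_text().split()]
    fps = [1000.0 / ms for ms in frame_ms if ms > 0]
    return statistics.mean(fps), min(fps)

if __name__ == "__main__":
    per_loop = [frame_rates(log) for log in LOG_FILES]
    avg_fps = statistics.mean(result[0] for result in per_loop)
    min_fps = statistics.mean(result[1] for result in per_loop)
    print(f"average fps: {avg_fps:.1f}, minimum fps: {min_fps:.1f}")

Note that the minimum here is the single slowest frame in each loop; tools that report the slowest one-second interval instead will give somewhat higher minimums.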
We noticed in several games that decreased latencies and/or increased bandwidth tended to improve minimum frame rates more so than average frame rates. In the case of H.A.W.X., minimum frame rates improved about 14% going from 1066 C7 to 1866 C7, while average frame rates improved 6%.
47 Comments
Seikent - Wednesday, June 24, 2009 - link
I'm not very sure if it's relevant, but I missed a load times comparison. I know that the bottleneck there should be the HDD, but I still think that there can be a performance boost.
deputc26 - Wednesday, June 24, 2009 - link
The avg and min lines are mixed up.
MadBoris - Wednesday, June 24, 2009 - link
I'll be considering upgrading in October at the same time I go from XP to Win 7. So this is good to know if/when I go Core i7.
I guess I can see how WinRAR's RAM workload stays high, since it grabs the buffers of compressed data chunks and writes them to disk as fast as the HW permits, so bandwidth matters then.
While it looks like very few apps can saturate the bandwidth, latency benefits/penalties always have an effect, as usual.
Maybe I missed it, but I didn't see anything in the article that tried to explain the technical reasons "why" 2000 doesn't provide an advantage over 1066.
I understand the differences of latency and bandwidth. Is it really because no software is using RAM workloads large enough to benefit from increased bandwidth (except compression) or is there another bottleneck in the subsystem or CPU that doesn't allow moving all the data the RAM is capable of?
vol7ron - Wednesday, June 24, 2009 - link
Your question is long, so I didn't read it all, but does the bottom of page 2 answer it: "That brings us to another story. We had planned to incorporate a full overclocking section in this article but our DDR3-1866 and DDR3-2000 kits based on the Elpida DJ1108BASE, err Hyper ICs, have been experiencing technical difficulties as of late."
They said some other stuff, but it seems like it wouldn't be right to post info on faulty chips.
TA152H - Wednesday, June 24, 2009 - link
I'd like to see a test between the crippled i5 memory controller with very fast memory and the i7 with low-cost 1333MHz memory. There's really no point in the 1066 memory, except for Dell, HP, etc. to throw in generic machines; it's not much cheaper than 1333MHz, and the performance bump really seems to be biggest there. I think 1333MHz (low latency) is a reasonable starting point for most people; the cost seems to warrant the performance. After that, you definitely see diminishing returns. It seems anyone buying an i5 with very expensive memory is probably a fool, but a few benchmarks might be interesting to validate or invalidate that. Of course, the i5 might be better when released, so even then it wouldn't be proof.
Gary Key - Wednesday, June 24, 2009 - link
I wish I could show i5 numbers, but that ability is officially locked down now. I can say that our results today will not be that much different when i5 launches; low-latency 1333 or possibly 1600 will satisfy just about everyone. :)
strikeback03 - Thursday, June 25, 2009 - link
Of course, by the time you can share those numbers we will most likely have to specify whether we are talking about LGA-1366 i7 or LGA-1156 i7. Thanks, Intel.
kaoken - Wednesday, June 24, 2009 - link
I think there is a mistake with the Far Cry graph. The min and avg lines should be switched.
hob196 - Thursday, June 25, 2009 - link
Looking closer, it might be that you have the SLI min on there instead of the non-SLI min.
halcyon - Wednesday, June 24, 2009 - link
It's so nice to see AT calling things as they are. This is why we come here.
Straight-up honest talk from adults to adults, with very little marketing speak; the numbers do most of the talking.
Excellent test round-up, mucho kudos.