MultiGPU Update: Finding the True Halo with 4-way
by Derek Wilson on February 28, 2009 11:45 PM EST
Posted in: GPUs
What We Couldn't Cover
Our tests include the GeForce GTX 295 Quad SLI, GeForce 9800 GX2 Quad SLI, Radeon HD 4870 1GB Quad CrossFireX, and Radeon HD 4850 Quad CrossFireX. Unfortunately, we were not able to test the 4850 Quad with 1GB per GPU because we didn't have two of the 4850 X2 2GB cards. That configuration would undoubtedly have made the 4850 look a little stronger in Quad at 2560x1600 (where it really counts). While it wouldn't compete for the highest-end performance, the higher-memory Quad 4850 is certainly of interest to us after seeing the value in two of them. But we really don't expect any Quad option to deliver on bang-for-buck metrics.
Beyond that, we also didn't include Race Driver GRID this time around. Due to our continuing issue with FRAPS, we couldn't record performance data for either of our Quad NVIDIA solutions. We didn't feel that presenting data from the game for AMD hardware alone was very useful, but it is worth mentioning that, just looking at the numbers, we could tell the Quad NVIDIA solutions performed worse than the AMD solutions. I do apologize for the lack of quantitative data, but sometimes that's how it goes. We will continue trying to collect this data, and we may do something with it down the line if we are successful.
Our first article explored scaling from one GPU to two. The second looked at scaling from one to three and from two to three GPUs. This one focuses only on the performance improvement from two to four GPUs, because there are no single-GPU cards that exactly match half a GTX 295 or half a 9800 GX2. Readers who want to see where things don't scale, or how 4-way scaling differs from 3-way, can look back at the previous articles; a lopsided analysis that included some of those metrics for AMD but not NVIDIA didn't seem quite right.
Just as we saw diminishing returns when moving from 2-way to 3-way, we see further diminishing returns when moving from 3-way to 4-way. The most direct way to get a feel for this is to note that scaling from 2-way to 4-way is much weaker than scaling from a single GPU to two, even though both steps double the GPU count and thus offer the same theoretical 2x improvement.
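To make "less scaling" concrete, here is a small back-of-the-envelope sketch in plain C. The frame rates in it are hypothetical placeholders, not numbers from our benchmarks; the point is only how the same theoretical 2x step can yield very different realized scaling factors.

```
#include <stdio.h>

/* Hypothetical frame rates for illustration only -- not benchmark results. */
int main(void)
{
    double fps_1way = 30.0;   /* one GPU   */
    double fps_2way = 54.0;   /* two GPUs  */
    double fps_4way = 78.0;   /* four GPUs */

    /* Each step doubles the GPU count, so the theoretical gain is 2x both times. */
    double scale_1_to_2 = fps_2way / fps_1way;   /* 1.80x of a possible 2x */
    double scale_2_to_4 = fps_4way / fps_2way;   /* 1.44x of a possible 2x */

    printf("1 -> 2 GPUs: %.2fx (%.0f%% of theoretical)\n",
           scale_1_to_2, 100.0 * scale_1_to_2 / 2.0);
    printf("2 -> 4 GPUs: %.2fx (%.0f%% of theoretical)\n",
           scale_2_to_4, 100.0 * scale_2_to_4 / 2.0);
    return 0;
}
```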
This time around, we didn't zero out the value data when performance failed to meet a threshold. We know some people liked that way of doing it, but value really isn't the focus of a 4-way GPU shootout anyway, so we feel the data is better taken at face value as an academic comparison. This article rounds out our data set with performance numbers for all the parts we've looked at, while our analysis focuses on 4-way. We are still actively refining our approach to representing value going forward, so your feedback is not only welcome, it is greatly appreciated.
44 Comments
JarredWalton - Sunday, March 1, 2009 - link
Fixed, thanks. Note that it's easier to fix issues if you can mention a page, just FYI. :)
askeptic - Sunday, March 1, 2009 - link
This is my observation based on their review over the last couple of years
ssj4Gogeta - Sunday, March 1, 2009 - link
It's called being fair and not being biased. They did give the due credit and praise to AMD for RV770 and Phenom II. You probably haven't been reading the articles.
SiliconDoc - Wednesday, March 18, 2009 - link
He's a red fan freak-a-doo, with his tenth+ name, so anything he sees is biased against ati. Believe me, that one is totally goners, see the same freak under krxxxx names.
He must have gotten spanked in a fps by an nvidia card user so badly he went insane.
Captain828 - Sunday, March 1, 2009 - link
In the last couple of years, nVidia and Intel have had better performing hardware than the competition. So I don't see any bias and the charts don't show any either.
lk7200 - Wednesday, March 11, 2009 - link
Shut the *beep* up f aggot, before you get your face bashed in and cut
to ribbons, and your throat slit.
SiliconDoc - Wednesday, March 18, 2009 - link
Another name so soon raging red fanboy freak? Going to fantasize about murdering someone again, sooner rather than later? If ati didn't suck so badly, and be billion dollar losers, you wouldn't be seeing red, huh, loser.
JonnyDough - Tuesday, March 3, 2009 - link
Hmm... X1900 series ring a bell? Methinks you've been drinking...
Razorbladehaze - Sunday, March 1, 2009 - link
Wow, what I was really looking forward to here disappeared entirely. I was expecting to see more commentary on the subjective image quality of the benchmarks, and there was even less discussion relating to that than in the past two articles, kinda a bummer.
On the side note, what was shown was what I expected from piecemeal of a number of other reviews. Nice to see it combined though.
The only nugget of information I found disturbing is to hear the impression that CUDA is better than what ATI has promoted. This in light of my understanding that nVidia just hired a head tech officer from the University where Stream (what ati uses) computing took root. Albeit that CUDA is just an offshoot of this, it would seem to me that this hiring would lead me to believe that nvidia will be migrating towards stream rather than the opposite. Especially if GPGPU computing is to become commonplace.
I think that it would be in nVidia's best interest to do this as I am afraid that Intel is right and that nvidia's future may be bleak if GPGPU computing does not take hold and this is one strategy to migrate towards their rival AMD's GPGPU to reduce resource usage to explore this tech.
Well yeah... i think i went way way off on a tangent on this one so...yeah im done.
DerekWilson - Monday, March 2, 2009 - link
Sorry about the lack of image quality discussion. It's our observation that image quality is not significantly impacted by multiGPU. There are some instances of stuttering here and there, but mostly this is in places where performance is already bad or borderline; otherwise, we did note where there were issues.
As far as GPGPU / GPU computing goes, CUDA is a more robust and more widely adopted solution than ATI Stream. CUDA has made more inroads in the consumer space, and especially in the HPC space, than Stream has. There aren't that many differences in the programming model, but CUDA for C does have some advantages over Brook+. I prefer the fact that ATI opens up its ISA down to the metal (alongside a virtual ISA), while NVIDIA only offers a virtual ISA.
The key is honestly adoption though: the value of the technology only exists as far as the end user has a use for it. CUDA leads here. OpenCL, in our eyes, will close the gap between NVIDIA and ATI and should put them both on the same playing field.
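As a point of reference for the "CUDA for C" programming model mentioned above, here is a minimal, generic sketch: the host copies data to the GPU, launches an explicit grid of threads, and copies the result back. This is an illustrative SAXPY example (y = a*x + y), not code from the article; the array size and launch parameters are arbitrary.

```
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Kernel: each thread computes one element of y = a*x + y. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;                 /* 1M elements (arbitrary size) */
    size_t bytes = n * sizeof(float);

    /* Host-side data. */
    float *h_x = (float *)malloc(bytes);
    float *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    /* Device-side copies. */
    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    /* Launch enough 256-thread blocks to cover all n elements. */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, d_x, d_y);

    /* Copy the result back and spot-check one element (2*1 + 2 = 4). */
    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 4.0)\n", h_y[0]);

    cudaFree(d_x); cudaFree(d_y);
    free(h_x); free(h_y);
    return 0;
}
```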