MultiGPU Update: Finding the True Halo with 4-way
by Derek Wilson on February 28, 2009 11:45 PM EST
Who Scales ... And Timing
In previous articles, we looked at how many tests scaled above or below a given threshold. It gets trickier here to summarize everything, so rather than picking a few cutoff points ourselves, we've listed a range of them. In the chart below, we list the number of tests that fail to scale better than the percentage at the top of each column. Many tests fail to scale at what we would call a reasonable rate, but people shopping for these parts may have a different definition of reasonable.
The maximum scaling percentage is 100%, just as it is when moving from one GPU to two. But fewer games scale past two GPUs at all, and of those that do, fewer scale anywhere near linearly. On top of that, many tests that do scale run straight into a system limitation. A good chunk of games fail to scale past 5%, and fully 13 of the 18 tests fail to scale beyond 50% on every configuration we tested.
Tests Failing to Scale Better Than: | <2.5% | <5% | <10% | <15% | <20% | <25% | <33.3% | <50% |
NVIDIA GeForce GTX 295 Quad SLI | 5 | 5 | 6 | 7 | 8 | 9 | 10 | 13 |
NVIDIA GeForce 9800 GX2 Quad SLI | 7 | 7 | 8 | 8 | 8 | 8 | 8 | 13 |
ATI Radeon HD 4870 1GB Quad CrossFire | 5 | 5 | 5 | 7 | 9 | 11 | 12 | 13 |
ATI Radeon HD 4850 Quad CrossFire | 6 | 7 | 9 | 10 | 10 | 10 | 12 | 13 |
Looking at the low end of the chart, we can see that a number of tests fail to scale at all. At the 33.3% mark, far fewer configurations reach that rate than when moving from one GPU to two. Clearly, 4-way multi-GPU solutions are designed with nothing but maximum performance in mind: scaling efficiency matters less than the fact that these solutions can provide some degree of higher performance in some situations.
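For readers who want to run this kind of tally on their own benchmark results, here is a minimal Python sketch of the bookkeeping behind the table above. The frame rates are hypothetical placeholders, not our test data; scaling from 2 to 4 GPUs is simply the percent gain over the 2-GPU result, so a perfect doubling counts as 100%.

```python
# Hypothetical per-test frame rates: (2-GPU fps, 4-GPU fps).
results = {
    "Game A": (60.0, 115.0),  # ~92% scaling
    "Game B": (80.0, 84.0),   # 5% scaling
    "Game C": (45.0, 44.0),   # negative scaling (a real possibility)
}

# The cutoffs used as column headers in the table above.
thresholds = [2.5, 5, 10, 15, 20, 25, 33.3, 50]

def scaling_percent(two_gpu_fps: float, four_gpu_fps: float) -> float:
    """Percent improvement moving from 2 GPUs to 4; doubling is 100%."""
    return (four_gpu_fps / two_gpu_fps - 1.0) * 100.0

for t in thresholds:
    failing = sum(
        1 for two, four in results.values()
        if scaling_percent(two, four) < t
    )
    print(f"Tests failing to scale better than {t}%: {failing}")
```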
We would also note that when paying a ridiculous amount of money for a not-quite-as-ridiculous performance gain, the robustness of the solution is of very high importance. No one wants to spend over $1000 on a setup that sometimes provides good scaling and sometimes degrades performance. Neither AMD nor NVIDIA is immune to this, but we would like to see the issue tackled in earnest rather than simply noting that SLI and CrossFire can be disabled if trouble arises.
NVIDIA does have an advantage at this level, though. We would love to see AMD get its driver act together and consistently ship drivers that provide good scaling and performance in newly released AAA titles on launch day. We would also love to see AMD refine its driver development model to ensure that improvements released as hotfixes always make it into the very next WHQL driver (which is currently not the case). Everywhere else in the market, this is a slight annoyance that people can take or leave. At the highest of the high end, however, a delay in getting good scaling, or the need to juggle older official drivers against newer hotfixed ones, can prove more than a trifle. For such a high price, NVIDIA delivers the better experience on this count.
Additionally, until OpenCL matures, CUDA is a better GPU computing option than what AMD offers, and PhysX can provide additional flexibility now that more titles are beginning to adopt it. In fact, this is the space in which we currently see the most value in CUDA and PhysX: those in the market for equipment this high end are the most likely to be interested in niche features that don't yet have broad enough support, or a large enough current impact, for us to heartily recommend them as must-haves for everyone.
Technophiles (like me) who are willing to put this kind of money into hardware often get excited about it on a more than practical level. The technology itself, rather than the experience it delivers, can be a source of enjoyment in its own right. I enjoy playing with PhysX and CUDA despite the fact that these technologies still need broader support before they will compel the average gamer.
Performance itself cannot be ignored, and it is indeed of the highest importance when it comes to the highest-end configurations. We will include the value graphs, but we expect the line closest to the top of the performance charts to be the key factor in decision making when it comes to quad-GPU options. The trouble of maintaining a 4-GPU configuration is not worth it if the system doesn't provide a consistently top-of-the-line experience.
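For reference, those value graphs boil down to a simple ratio: average frame rate divided by price, expressed as frames per second per $100 spent. Below is a minimal Python sketch of that computation; the card names, frame rates, and prices are hypothetical placeholders, not figures from our testing.

```python
# Value metric: fps per $100 spent. All names and numbers below are
# hypothetical placeholders, not measured results.
cards = {
    "Quad-GPU setup A": (92.0, 1050.0),  # (average fps, total price in USD)
    "Quad-GPU setup B": (85.0, 850.0),
}

for name, (fps, price_usd) in cards.items():
    fps_per_100 = fps / (price_usd / 100.0)
    print(f"{name}: {fps_per_100:.2f} fps per $100")
```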
44 Comments
vailr - Sunday, March 1, 2009 - link
Any testing of 8x GPUs? 4x Radeon 4870 X2 cards?
or:
4x NVIDIA GTX 295 (dual-GPU) cards?
Combined with an Intel Skulltrail board using a pair of quad-core CPUs.
LinkedKube - Wednesday, March 4, 2009 - link
I'm running tri-SLI GTX 295s. My energy bill has gone up 110 USD a month since December. With that to think about, why on earth would someone test 4x GTX 295s? Totally inefficient. This article, IMO, was about price/performance across competitors, giving us a new way to look at fps with the fps-per-100-USD chart.

Jorgisven - Sunday, March 1, 2009 - link
That technology does not yet exist. The Skulltrail board supports Quad SLI, meaning 4 total GPUs (the X2 boards count for 2 each). Nothing supports 8 GPUs; that would create ridiculous overhead, as you can probably tell from the scaling going from 2 to 4 GPUs.

Hrel - Sunday, March 1, 2009 - link
This was a GREAT series of articles and I'm so glad you guys decided to make them. I'm pretty sure I've never heard anyone on a hardware review site actually admit it's a wash between AMD/ATI and NVIDIA and that it all comes down to brand preference, so props for coming out and saying the truth.

One thing I've said many times before in these comments, that I'm still not seeing: I would really love to see 3DMark scores for all these cards included with each GPU article. You show the subjective tests of the hardware, the games; please show the objective test for the hardware, 3DMark.
So yeah, amazing articles, thank you for writing them. My only, very minor, complaints are that you didn't include hardware down to the 9600 GT level (at least) or lower, and that you didn't include 3DMark scores.
Yes, I know it's supposed to be a multi-GPU review, but you included enough other single GPUs that I would have really liked to see how the other cards stacked up; kind of a "whole market" GPU comparison.
P.S. Sorry, a third complaint; I remembered it after mentioning the lower-end hardware. Had you included those cards, it would have been nice to see tests at 1440x900 and maybe 1366x768 too, seeing as that's becoming a standard. And yes, I understand the amount of work that goes into testing that many configurations, and the time required to test at so many resolutions. And I really, truly appreciate all the work put into articles like this; I swear, I recruit more people to come visit this site than a TV ad could.
On an article design note: I really like the value comparison based on performance per dollar, or per 100 dollars in this case; very good idea. I also REALLY like that I could switch between resolutions just by clicking a link. And I've liked bar graphs WAY more than line graphs ever since first grade. Later guys, great work!
LinkedKube - Monday, March 2, 2009 - link
I agree with the fps per 100 dollars section, very cool. Something new to look at and think about.

7Enigma - Sunday, March 1, 2009 - link
I have to agree with you on the 3DMark scores (and any of the other major ones; AquaMark or something?). I think anyone crazy enough to purchase 4 cards, or 2 duals, is probably doing it more for the competition of benchmarking than actual gaming, or at minimum the two are of equal importance. So if the quad AMD/NVIDIA decision is a wash based on game performance, maybe the synthetic benchmarks would sway the decision.

SiliconDoc - Wednesday, March 18, 2009 - link
Well, you shouldn't. Software, especially benchware, favors this or that method or type of hardware, and given the differences pointed out between the GPU designs of NVIDIA and ATI, no test is going to eliminate bias in its gauging, as should be absolutely obvious to you after seeing the massive variance in game scores here for the same two opposing GPUs. If you had a scientific mind, you would realize that 3DMark also uses a GAME it "created" that will favor one architecture or the other, definitively.

So, you may "have to agree", but you may also "change your mind" about that.
Razorbladehaze - Sunday, March 1, 2009 - link
Actually, there are no "subjective" tests in this article. Subjective means non-empirical (not data-based) testing. Another sense of subjective testing is when the reported outcome is not supported by the data because of other mitigating factors (i.e., the best card is not X, because of graphical glitches, despite having the best FPS). So FPS benchmarking, as all the tests here demonstrate, is in fact objective testing.

Furthermore, 3DMark scores are really redundant and not practical. I for one am really glad that Anand has left them out; they are a waste of testing time in most cases. I used to really like 3DMark scores for benchmarking my own stuff, and used to look forward to them in articles. Over time, though, I have noticed that although they do provide a comparison between cards, they do not translate to much in terms of real-world performance. The comparison between cards is still easily made using a common benchmark from a game, and it allows more differentiation and demonstrates more "across the board" performance when testing multiple games and, as mentioned in the first line of this paragraph, provides practical results.
Hrel - Thursday, March 5, 2009 - link
Yeah... no. You're wrong. Tests based on games are subjective because the results you get from that testing are subject to that game. Each game is programmed differently and utilizes the GPU hardware differently. You can test three cards and have one card be the fastest by a large margin in one test and the slowest in another. (Subjective: Characteristic of or belonging to reality as PERCEIVED rather than as independent of mind.) The results show up as PERCEIVED by the game, rather than as independent results.
(Subjective: Peculiar to a particular individual.) That individual is the game. (These were taken from Merriam-Webster's dictionary online.)
(Objective: Expressing or dealing with facts or conditions as perceived without distortion by personal feelings, prejudices, or interpretations.) Testing using only games causes distortion. That distortion comes from "feelings", "prejudices", and "interpretations". Feelings of the programmers who wrote the game: some like to program for NVIDIA hardware, some prefer AMD. Also, some game studios are paid or given preferential treatment to favor one company's hardware over another's; that has to do with prejudice too. Interpretations: NVIDIA and AMD hardware is designed differently. A blatant example of this is that AMD uses 800 SPs where NVIDIA uses 128 SPs, and they both have similar performance; the code of the game, generally DirectX 9, interprets each set of hardware differently, ergo we have a non-objective interpretation of each GPU's performance capability.
Games are meant to be played and to perform the way each individual game studio wants them to; there are so many variables across companies, employees, and the games themselves that you can't possibly use a small subset of video games to determine the performance differences between a set of GPUs. At least not reliably.
(Objective: Limited to choices of fixed alternatives and reducing subjective factors to a minimum.) Every scientific experiment strives to remove variables from the testing process; video games simply don't do that.
3DMark and the newer 3DMark Vantage are as objective as software testing hardware can be. One test, programmed one way, that runs only one way no matter what GPU it is on. Also, 3DMark is designed to stress the GPU hardware as much as possible, no matter what card it is, which means it will take full advantage of every card you test with it.
No, 3DMark doesn't equate to real-world results in any way. But that doesn't matter; it's the most scientific, least variable test anyone can perform on multiple GPUs to determine the performance differences between them. And isn't that all AnandTech is trying to do with this whole series of articles? Yes, yes it is. Of course it is always good to look at the games, to see that subjective measurement and to determine which card works best with the games YOU play. But it is imperative to look at 3DMark as well to get a complete idea of the DIFFERENCE IN PERFORMANCE between the cards; to see the whole big picture.
To make it simple for you: if one card outperforms another by 15% or more in 3DMark, it's a good bet that card will outperform the other in the majority of games on the market, regardless of programming inconsistencies.
On another note, most people will never run a resolution beyond 1920x1080, so I'd really like to see more testing at resolutions lower than that, and the inclusion of lower-end cards to see if they can play the latest games... even if I do have to lower the resolution a little.