Last year Lucidlogix came to us with a rather amazing claim: they could do multi-GPU better than the companies that make the video cards in the first place. Through their Hydra technology, Lucid could intercept OpenGL and DirectX API calls, redistribute objects to multiple video cards, and then combine the results into a single video frame. This could be done with dissimilar cards from the same company, or even with cards from different companies altogether. It would be multi-GPU rendering, but not as you currently know it.
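The object-level distribution Lucid describes can be sketched roughly as follows. This is a toy model, not Lucid's actual implementation: the GPU names, per-object cost figures, and the `dispatch` function are all hypothetical, and real compositing happens in hardware rather than by merging lists.

```python
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    queued_cost: int = 0                       # estimated work already assigned
    objects: list = field(default_factory=list)

def dispatch(draw_calls, gpus):
    """Assign each intercepted draw call to the least-loaded GPU,
    then 'composite' the partial results into one frame."""
    for obj, cost in draw_calls:
        target = min(gpus, key=lambda g: g.queued_cost)
        target.objects.append(obj)
        target.queued_cost += cost
    # Compositing stand-in: merge each GPU's partial render into one frame
    return [obj for g in gpus for obj in g.objects]

# Dissimilar cards working together, per the article's premise
gpus = [Gpu("GeForce"), Gpu("Radeon")]
calls = [("terrain", 8), ("character", 3), ("skybox", 1), ("particles", 5)]
frame = dispatch(calls, gpus)
print(sorted(frame))
```

The key idea the sketch captures is that work is split per object rather than per frame, which is what distinguishes Hydra's approach from the alternate-frame rendering used by SLI and CrossFire.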
That was in August of 2008, when the company was first showcasing its technologies in hopes of finding a suitor. In 2009 they found that suitor in MSI, who are anchoring their new high-end Big Bang line of motherboards with the Hydra. After some bumps along the way, Lucid and MSI are finally ready to launch the first Hydra-equipped board: The Big Bang Fuzion.
We’ve had the Fuzion in our hands for over a month now, as the hardware has been ready well ahead of the software. Lucid has been continuing to develop the software side, and the two parties are finally ready to sign off on the finished product, although Hydra is still very much a work in progress.
The Big Bang Trinergy, the Fuzion's identical twin
As we’re currently in Las Vegas for CES (where MSI is launching the Fuzion), today we’ll be taking a quick look at the performance and compatibility of the Hydra, to answer the most burning of questions about the technology. Once we’re back from CES, we will be following that up with an in-depth look at image quality, edge cases, and other deeper issues. We’ve only had the newest drivers for a few days now, so we haven’t had a chance to give it a complete workover.
Finally, this is just a look at the Hydra technology itself. We'll have a separate review of the Fuzion as a motherboard at a later time. However, it's virtually identical to MSI's other Big Bang board, the NVIDIA NF200-equipped Trinergy. The only significant difference between the boards is that the Fuzion has the Hydra chip, while the Trinergy has the NF200.
With that out of the way, let’s get started.
47 Comments
krneki457 - Friday, January 8, 2010 - link
Sorry Ryan, just noticed you wrote the article. Well, it was just an idea for how to get at least some SLI results with as little hassle as possible. Presuming Hydra can be turned off to work only as a PCIe bridge, then this ought to work.

chizow - Thursday, January 7, 2010 - link
Have you tried flashing the Trinergy BIOS for SLI support? It might kill off Hydra capabilities in the meantime and relegate the Hydra 200 to its basest form, a PCIe controller, but for the purposes of measuring N-mode performance that should suffice. The other alternative would be to simply use the Trinergy's SLI results as a plug-in doppelganger, since it is identical to the Fuzion save for the NF200 vs. Hydra 200 serving as the PCIe switch.

jabber - Thursday, January 7, 2010 - link
I think it has some promise. I think the ultimate aim is to be able to 'cobble' together a couple of GPUs of similar capability, have them work efficiently together, and not have to worry about profiles. The profiles could just be handled seamlessly in the background. If they can push towards that, then I'll give them the time.
chizow - Thursday, January 7, 2010 - link
The technology does still rely on profiles though. You don't get to set up game-specific profiles like with Nvidia, even though that kind of granularity is probably the best option; your choices are limited to a handful of somewhat generic performance/optimization profiles provided by Lucid. The scariest part of it all is that these profiles will rely on specific profiles/drivers from both Nvidia and AMD too. I'm pretty sure it's covered in this article, but it's covered for sure in Guru3D's write-up: Lucid only plans to release updates *QUARTERLY*, and those updates will only support specific drivers from Nvidia and ATI.
Obviously, depending on Lucid's turnaround time, you're looking at significant delays in their compatibility with Nvidia/ATI, but you're also potentially looking at up to 3 months before an update supports the Nvidia/ATI driver for a newer game you're interested in playing. Just way too many moving parts, added complexity, and reliance on drivers/profiles, all for a solution that performs worse and costs more than the established AFR solutions.
danger22 - Thursday, January 7, 2010 - link
maybe the amd 5000 cards are too new to have support for Hydra? what about trying some older, lower-end cards? just for interest... i know you wouldn't put them in a $350 mobo

vol7ron - Thursday, January 7, 2010 - link
I like the way this technology is headed. Everyone is saying "fail," and maybe they're right because they want more from the release, but I think this still has potential. I would say either keep the funding going, or open it up to the community at large to hopefully adopt/improve it.
The main thing is that down the road this will be cheaper, faster, better. When SSDs came out stuttering, people were also saying "fail."
shin0bi272 - Thursday, January 7, 2010 - link
I know how you feel, but their original claim was near-linear scaling with a 7-watt chip on the mobo. It's not even as good as standard CrossFire (and probably not even standard SLI), so that's what's prompting the fail comments. Instead of getting 75fps in Call of Juarez with a pair of 5850's, they should be getting 99 or 100 according to their original claim. Don't get me wrong, it functions, and for a chip that's literally only a couple of months old (maybe 24 since its announcement) that's great, but the entire point of Hydra was to do it better out of the box than the card makers were doing it.

shin0bi272 - Thursday, January 7, 2010 - link
I had high hopes for this technology, but alas, it appears it is just not meant to be. Maybe it's the single PCIe x16 uplink they are using to try to feed two PCIe 2.0 x16 video cards... just saying. Would have been nice to be able to keep my 8800GTX and add in a 5870, but oh well.

AznBoi36 - Thursday, January 7, 2010 - link
Why would you spend $350 on this mobo and then spend another $350 for a 5870, just so you can use your old 8800GTX for a minimal gain? You could spend $150 on a CF mobo plus two 4890's at $150 each, for a total of $450, and that would give a 5870 a run for its money.

shin0bi272 - Thursday, January 7, 2010 - link
oh and the reason for the 5850's is because I am really wanting the dx11 capabilities... I could go with 2 4890's and end up paying less, yes, but it wouldn't be dx11.