Lucid's Multi-GPU Wonder: More Information on the Hydra 100
by Derek Wilson on August 22, 2008 4:00 PM EST - Posted in GPUs
Let's Talk About Applications
Obviously it will accelerate games. What about GPGPU? That isn't Lucid's focus right now. They said they want to target the largest market for the part first, and gaming is certainly it. The hardware and software could in principle load balance other kinds of work across multiple GPUs, but that isn't currently being explored or developed.
It will also accelerate games using multiple GPUs while outputting to multiple displays. Imagine 4 GPUs sharing the load over 3 monitors for a flight sim. Neither NVIDIA nor AMD can pull something like this off right now with their technology.
The chip can end up both on graphics cards and on motherboards, and Hydra chips can be cascaded. There is a limit to how deep you can cascade before you start introducing too much latency (though Lucid didn't define that limit), but one level deep is apparently reasonable. And this means it would seem possible (except for the power requirements) to build a motherboard with 4 slots holding 4 cards, each with 2 GPUs (let's say GTX 280s), connected by a Hydra 100 chip.
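To put numbers on that scenario, here's a minimal sketch of the cascade math; the four-way fan-out and the per-hop latency figure are our assumptions for illustration, not anything Lucid has specified:

```python
# Hypothetical model of cascaded Hydra 100 chips. The four-way fan-out
# and the per-hop latency figure are illustrative assumptions, not
# published Lucid specifications.
FAN_OUT = 4           # assumed downstream PCIe ports per Hydra chip
HOP_LATENCY_US = 1.0  # assumed latency added per cascade level (microseconds)

def cascade(depth: int, gpus_per_endpoint: int = 2) -> tuple[int, float]:
    """Return (total GPUs, added latency in us) for a given cascade depth."""
    endpoints = FAN_OUT ** depth  # leaf slots after `depth` levels of chips
    return endpoints * gpus_per_endpoint, depth * HOP_LATENCY_US

for depth in (1, 2):
    gpus, latency = cascade(depth)
    print(f"depth {depth}: {gpus} GPUs, ~{latency:.1f} us of extra latency")
```

At one level deep, a four-port Hydra feeding dual-GPU cards gets you exactly the eight GPUs in the scenario above.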
And if scaling is really linear, 8x GTX 280 would certainly deliver way more than we could possibly need for a pretty good while. We'd be CPU and system limited until the cows come home (or at least a good 2 or 3 generations of hardware out into the future). Well, either that or developers would catch on that they could allow ridiculous features to be enabled for the kind of super ultra mega (filthy rich) users that would pick up such a crazy solution.
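A trivial frame-time model shows where that ceiling would be. Assume each frame needs a fixed slice of CPU time plus GPU time that splits perfectly across cards (all timings below are invented for illustration), with the slower stage setting the pace:

```python
# Minimal sketch of why scaling flattens once the CPU becomes the
# bottleneck. All timings are invented for illustration.
CPU_MS = 8.0   # assumed CPU time to prepare one frame
GPU_MS = 33.0  # assumed time for a single GTX 280 to render one frame

def fps(num_gpus: int) -> float:
    gpu_ms = GPU_MS / num_gpus           # ideal linear GPU scaling
    return 1000.0 / max(CPU_MS, gpu_ms)  # the slower stage sets the frame rate

for n in (1, 2, 4, 8):
    print(f"{n} GPU(s): {fps(n):.0f} fps")
# -> 30, 61, 121, 125: past four GPUs the CPU caps everything.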
Upgrading hardware would be stupidly simple. Forget PhysX or anything like that: leave your older card in the system, upgrade to the latest generation, and both will contribute to the rendering of frames (and since graphics is usually the largest bottleneck in the system, this will improve performance more than any other use of the old card anyway). If we added a GTX 280 to a card with half its performance, we'd see a 50% performance improvement over a single GTX 280 alone. Not bad at all. There would be less downside in buying a high end part, because it could continue to serve you for much longer than usual. And low end parts would still contribute as well (with a proportionally smaller gain, but a gain nonetheless).
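That arithmetic is easy to sanity check. Assuming the load balancing really does hand each card work in proportion to its throughput, which is what Lucid describes, the combined pool scales as the sum of its parts:

```python
# Sanity check on mixed-GPU scaling under ideal proportional load
# balancing. Throughput is in arbitrary units where 1.0 = one GTX 280.
def speedup_over_fastest(throughputs: list[float]) -> float:
    """Speedup of the whole pool relative to the fastest card alone."""
    return sum(throughputs) / max(throughputs)

print(speedup_over_fastest([1.0, 0.5]))  # GTX 280 + half-speed card -> 1.5x
print(speedup_over_fastest([1.0, 1.0]))  # two identical cards       -> 2.0x
```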
Lucid also makes what seems like a ridiculous claim: in some cases they could see higher than linear scaling. The reason they claim this should be possible is that their hardware offloads work from the CPU, which then has less to worry about, so overall system performance goes up. We sort of doubt this, and hearing such claims makes us nervous. They did state that this was the exception rather than the norm. If it happens at all it would have to be the exception, but it still seems too far out there for us to buy.
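For what it's worth, the claim isn't mathematically impossible, it just requires a narrow set of conditions. Extending the frame-time model above: if a game is CPU-bound on one GPU and the Hydra offloads enough CPU work to make the multi-GPU case GPU-bound, scaling can exceed linear. The catch is that the offload fraction has to be implausibly large, which is part of why we're skeptical (every number below is conjecture):

```python
# Conjectural model of the super-linear claim. For two GPUs to beat 2x,
# the single-GPU case must be CPU-bound AND the Hydra's offload must be
# large enough to flip the dual-GPU case to GPU-bound. Numbers invented.
CPU_MS, GPU_MS = 11.0, 10.0  # assumed per-frame CPU and single-GPU times
OFFLOAD = 0.55               # assumed fraction of CPU work Hydra absorbs (huge)

single = 1000.0 / max(CPU_MS, GPU_MS)                      # CPU-bound: ~91 fps
dual   = 1000.0 / max(CPU_MS * (1 - OFFLOAD), GPU_MS / 2)  # GPU-bound: 200 fps

print(f"scaling with 2 GPUs: {dual / single:.2f}x")  # ~2.20x, super-linear
```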
Aside from utterly invalidating SLI and CrossFire, this thing opens up a whole realm of possibilities. If Intel adopts it for their high end motherboards, they would have the ultimate solution for gaming. Period. If it's left up to board vendors, the choice of chipset will matter less to multi-GPU performance than whether or not the board includes the Lucid Hydra 100.
But can they really do it? And how would they even attempt it? They've told us a little, and we'll brainstorm and see what we can come up with.
57 Comments
pool1892 - Saturday, August 23, 2008 - link
i think it is possible to build a solution like this, but this thing has a lot to do: on-the-fly qos, scheduling, optimizing and so on, with data rates in the gigabits/s. sounds like a heavy duty cisco switch. i can imagine this working, but the chip will be a heavyweight - and it will be power consuming and expensive.
and it only has potential in the marketplace if the performance gained from a mainboard with hydra beats the faster graphics card you could buy for the same premium. that will be tough.
larrabee is, as usual, a totally different animal; hydra could very well be a software feature for it (esp. with qpi in gen 2)
pool1892 - Saturday, August 23, 2008 - link
gotta correct myself - after a little diggin: the hydra is a tensilica diamond based programmable risc controller with custom logic around it, running at 225mhz. it uses about 5 watts. this is a tiny chip, it might be affordable. (but how is lucid going to earn money? and: they have to optimize their driver and the programmable parts of the chip for different rendering techniques in different games - who is paying for that?)
Goty - Saturday, August 23, 2008 - link
I don't see this as a bad thing for GPU makers, personally. Since ATI no longer has anything like the "master card" for crossfire, as long as they're selling two GPUs to people running multi-card systems, they're not losing out. Sure, they may lose a bit of money on the mainboard side of things since consumers will be able to use any chipset they want with this technology, but the margin on the GPU silicon is probably higher than that on the chipset side, anyhow.
yyrkoon - Saturday, August 23, 2008 - link
"Lucid also makes what seems like a ridiculous claim. They say that in some cases they could see higher than linear scaling. The reason they claim this should be possible is that the CPU will be offloaded by their hardware and doesn't need to worry about as much so that overall system performance will go up. We sort of doubt this, and hearing such claims makes us nervous. They did state that this was not the norm, but rather the exception. If it happens at all it would have to be the exception, but it still seems way too out there for me to buy it."Come now guys . . . if a CPU dependent game such as World in Conflict could offload the CPU 10%, would it not make sense that the CPU could do an additional 10%, thus offering more performance ? I am not saying I believe this is possible myself, but taking Lucid at their word, this just makes sense to me.
"The demo we saw behind closed doors with Lucid did show a video playing on one 9800 GT while the combination of it and one other 9800 GT worked together to run Crysis DX9 with the highest possible settings at 40-60 fps (in game) with a resolution of 1920x1200. Since I've not tested Crysis DX9 mode on 9800 GT I have no idea how good this is, but it at least sounds nice."
Just going from this review, and assuming you meant a 9800GTX/GTX+: 47-41 FPS average with 16x AF/ 0x AA.
"An explanation for this is the fact that the Hydra software can keep requesting and queuing up tasks beyond what graphics cards could do, so that the CPU is able to keep going and send more graphics API calls than it would normally. This seems like it would introduce more lag to us, but they assured us that the opposite is true. If the Hydra engine speeds things up over all, that's great. But it certainly takes some time to do its processing and we'd love to know what it is."
Wait a minute . . . did you not just mention on a previous page somewhere that the number of cards that could be cascaded was limited due to latency implications? . . .
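Speculating a bit, maybe the two statements square because queue depth trades one kind of latency for another: every frame queued ahead of the GPUs adds roughly one frame-time of lag between input and display, even while throughput goes up. A toy model (numbers made up):

```python
# Toy model (all numbers made up): queuing API calls further ahead keeps
# the GPUs fed, but every queued frame adds about one frame-time of lag
# between input sampling and display.
FRAME_MS = 16.7  # assumed per-frame render time at ~60 fps

for queued_frames in (1, 2, 3):
    lag_ms = queued_frames * FRAME_MS
    print(f"{queued_frames} frame(s) in flight -> ~{lag_ms:.0f} ms input lag")
```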
"Of course, while it seems like an all or nothing situation that would serve no purpose but to destroy the experience of end users, NVIDIA and ATI have lots of resources to work on this sort of "problem" and I'm sure they'll try their best to come up with something. Maybe one day they'll wake up and realize (especially if one starts to dominate over the other other) that Microsoft and Intel got slammed with antitrust suits for very similar practices."
OR, they could just purchase the company outright, which seems to me what Lucid may have been aiming for to begin with. After that the buying company could do whatever they please, such as kill the project, or completely decimate the opposite camp *if* the hardware truly does what it claims. At least where gaming is concerned . . . and we all know that IGPs make up a very large portion of home systems.
Now what I have to say is that this totally smells like the gaming physics "fiasco". Buy the hardware now, and the hardware is dead in a year or two. Sure, a few games implemented features that leveraged those cards, but do you think developers are going to write code for hardware that has gone the way of the dodo? Probably not.
The idea is interesting yes, but I will believe it when I see the hardware on sale at the egg . . .
DerekWilson - Saturday, August 23, 2008 - link
it was not 9800 gtx cards -- they were GT cards ... lower performance, single slot. also, game devs won't have to optimize for it, so there is no problem with them ignoring the situation -- if it works it works
yyrkoon - Saturday, August 23, 2008 - link
9800GTX/GTX+ benchmarks ---> http://www.guru3d.com/article/geforce-9800-gtx-512...
JarredWalton - Saturday, August 23, 2008 - link
9800 GT FTW! (http://www.newegg.com/Product/ProductList.aspx?Sub...)
Basically, performance is close to (in fact identical to) that of the 8800 GT. You know, this goes along with the whole "let's rename 8800 GT and 8800 GTS 512MB to 9800 parts, because after all G92 is GeForce 9 hardware." Why the 8800 GT was ever launched with that name remains something of a mystery... well, except that performance was about the same as 8800 GTX.
yyrkoon - Saturday, August 23, 2008 - link
So basically just an 8800 GTS with fewer ROPs? nVidia's naming convention definitely leaves a lot to be desired : /
Lakku - Saturday, August 23, 2008 - link
Who are nVidia and AMD/ATi supposed to strong arm in this situation? I don't think they would be in any kind of position to strong arm ANYONE, if this works as advertised. Why? Because they'd have to strong arm Intel (apparently a very big investor into this tech and company) to do so, and that's just not going to happen. Intel only need put this on their own Intel branded gaming or consumer boards, and/or Intel can strong arm Asus and the others into putting this chip onto their motherboards if they want Intel chipsets, still by far the best selling PC chipsets. If this works as advertised, it's probably Intel who will be the biggest winner... and maybe us end users in some way, provided Intel and this company don't charge outrageous prices for this tech.
djc208 - Monday, August 25, 2008 - link
Easy, like the author stated: nVidia just writes in some code that looks for the Hydra software or hardware and shuts down parts of the driver. Therefore you can't use their hardware on a system running or equipped with Hydra. If it was a unified front, then Intel would have only Larrabee to use with this for gaming.
Problem I see is that it could upset the market if the boycott isn't universal. If ATI let their hardware work with this and nVidia didn't, then it could seriously hurt nVidia, as there would be even less reason to go with their chipsets or graphics cards at the high end, where nVidia likes to play.
More likely is that ATI/nVidia will quickly push out something along the same lines and now we'll have three competing solutions, and then ATI and nVidia will lock out Hydra since they offer an alternative, just like now.
All this assumes that Hydra works the way it's said to, if not then all bets are off.