Exclusive: ASUS Debuts AGEIA PhysX Hardware
by Derek Wilson on May 5, 2006 3:00 AM EST - Posted in GPUs
AGEIA PhysX Technology and GPU Hardware
First off, here is the lowdown on the hardware as we know it. AGEIA, being the first and only consumer-oriented physics processor designer right now, has not given us as much in-depth technical detail as other hardware designers. We certainly understand the need to protect intellectual property, especially at this stage in the game, but here is what we know.
PhysX Hardware:
125 Million transistors
130nm manufacturing process
128MB 733MHz Data Rate GDDR3 RAM
128-bit memory bus interface
20 giga-instructions per second
2 Tb/sec internal memory bandwidth
"Dozens" of fully independent cores
There are quite a few things to note about this architecture. Even without knowing all the ins and outs, it is quite obvious that this chip will be a force to be reckoned with in the physics realm. A graphics card, even with a 512-bit internal bus running at core speed, has less than 350 Gb/sec internal bandwidth. There are also lots of restrictions on the way data moves around in a GPU. For instance, there is no way for a pixel shader to read a value, change it, and write it back to the same spot in local RAM. There are ways to deal with this when tackling physics, but making highly efficient use of nearly 6 times the internal bandwidth for the task at hand is a huge plus. CPUs aren't able to touch this type of internal bandwidth either. (Of course, we're talking about internal theoretical bandwidth, but the best we can do for now is relay what AGEIA has told us.)
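To put those bandwidth figures in perspective, here is a quick back-of-envelope sketch. The 650MHz core clock is our own assumption for a current high-end GPU, used only to show where the "less than 350 Gb/sec" figure comes from; AGEIA's 2 Tb/sec number is taken at face value.

```python
# Back-of-envelope check of the bandwidth figures quoted above.
# The ~650MHz core clock is an assumption, not a spec from either vendor.

gpu_bus_width_bits = 512           # internal bus width
gpu_core_clock_hz = 650e6          # assumed core clock (hypothetical)
gpu_internal_gbps = gpu_bus_width_bits * gpu_core_clock_hz / 1e9   # ~333 Gb/sec

physx_internal_gbps = 2000         # AGEIA's claimed 2 Tb/sec internal bandwidth

print(f"GPU internal bandwidth:   {gpu_internal_gbps:.0f} Gb/sec")
print(f"PhysX internal bandwidth: {physx_internal_gbps} Gb/sec")
print(f"Advantage: {physx_internal_gbps / gpu_internal_gbps:.1f}x")   # roughly 6x
```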
Physics, as we noted in last year's article, generally presents itself in sets of highly dependent small problems. Graphics has become sets of highly independent, mathematically intense problems. It's not that GPUs can't be used to solve these problems where the input to one pixel is the output of another (performing multiple passes and making use of render-to-texture functionality is one obvious solution); it's just that much of the power of a GPU is wasted when attempting to solve this type of problem. Making use of a great number of independent processing units makes sense as well. In a GPU's SIMD architecture, pixel pipelines execute the same instructions on many different pixels. In physics, it is much more often the case that different things need to be done to every physical object in a scene, and it makes much more sense to attack the problem with a proper solution.
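As a toy illustration of that dependency difference (simplified for clarity, not anyone's actual engine code), consider the two loops below: the graphics-style loop can run the same instruction across every element in lockstep, while the physics-style loop has each update feeding the next.

```python
# Toy illustration of the dependency difference described above
# (simplified for clarity -- not anyone's actual engine code).

# Graphics-style work: each output depends only on its own input, so the
# same instruction can run across every element in lockstep (SIMD-friendly).
def shade_pixels(pixels):
    return [p * 0.5 for p in pixels]

# Physics-style work: each update feeds the next. Resolving a contact
# between bodies a and b changes both of them before the next contact can
# be handled correctly, so the updates cannot simply run in lockstep.
def relax_contacts(velocities, contacts, iterations=10):
    for _ in range(iterations):
        for a, b in contacts:                  # order matters here
            impulse = (velocities[a] - velocities[b]) * 0.5
            velocities[a] -= impulse
            velocities[b] += impulse
    return velocities

print(shade_pixels([0.2, 0.8, 1.0]))
print(relax_contacts([2.0, -1.0, 0.5], [(0, 1), (1, 2)]))
```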
To be fair, NVIDIA and ATI are not arguing that they can compete with the physics processing power AGEIA is able to offer in the PhysX chip. The main selling point of physics on the GPU is that everyone who plays games (and would want a physics card) already has a graphics card. Solutions like Havok FX, which use SM3.0 to implement physics calculations on the GPU, are good ways to augment existing physics engines. These types of solutions will add a little more punch to what developers can do. This won't create a revolution, but it will get game developers to look harder at physics in the future, and that is a good thing. We have yet to see Havok FX or a competing solution in action, so we can't go into any detail on what to expect. However, it is obvious that a multi-GPU platform will be able to benefit from physics engines that make use of GPUs: there are plenty of cases where games are not able to take 100% advantage of both GPUs. In single GPU cases, there could still be a benefit, but the more graphically intensive a scene, the less room there is for the GPU to worry about anything else. We are already seeing titles like Oblivion that can bring everything we throw at them to a crawl, so balance will certainly be an issue for Havok FX and similar solutions.
DirectX 10 will absolutely benefit AGEIA, NVIDIA, and ATI. For physics-on-GPU implementations, DX10 will decrease overhead significantly. State changes will be more efficient, and many more objects will be able to be sent to the GPU for processing every frame. This will obviously make it easier for GPUs to handle tasks other than graphics more efficiently. A little less obviously, PhysX hardware-accelerated games will also benefit from a graphics standpoint. With the possibility for games to support orders of magnitude more rigid body objects under PhysX, overhead can become an issue when batching these objects to the GPU for rendering. This is a hard thing for us to test explicitly, but it is easy to understand why it will be a problem when developers are already complaining about the overhead issue.
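To make the batching concern concrete, here is a deliberately simplified cost model. The per-draw-call costs are placeholder assumptions of ours, not measurements; the point is simply how quickly per-object submission overhead grows as a PhysX title pushes the rigid body count up.

```python
# Deliberately simplified cost model for the batching overhead mentioned
# above. The per-draw-call costs are placeholder assumptions, not
# measurements; the point is how fast per-object submission cost grows.

def frame_submit_cost_ms(num_objects, cost_per_draw_us):
    return num_objects * cost_per_draw_us / 1000.0

for objects in (200, 2000, 20000):             # "orders of magnitude more rigid bodies"
    dx9 = frame_submit_cost_ms(objects, cost_per_draw_us=40)    # assumed current overhead
    dx10 = frame_submit_cost_ms(objects, cost_per_draw_us=10)   # assumed reduced DX10 overhead
    print(f"{objects:>6} objects: ~{dx9:.0f} ms vs ~{dx10:.0f} ms per frame just for submission")
```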
While we know the PhysX part can handle 20 GIPS, this figure likely counts simple, independent instructions. We would really like to get a better idea of how much actual "work" this part can handle, but for now we'll have to settle for this ambiguous number and some real world performance. Let's take a look at the ASUS card and then take a look at the numbers.
101 Comments
Ickus - Saturday, May 6, 2006 - link
Hmmm - seems like the modern equivalent of the old-school maths co-processors. Yes, these are a good idea and correct me if I'm wrong, but isn't that $250 (Aus $) CPU I forked out for supposedly quite good at doing these sorts of calculations, what with its massive FPU capabilities and all? I KNOW that current CPUs have more than enough ability to perform the calculations for the physics engines used in today's video games. I can see why companies are interested in pushing physics add-on cards though...
"Are your games running crap due to inefficient programming and resource hungry operating systems? Then buy a physics processing unit add-in card! Guaranteed to give you an unimpressive performance benefit for about 2 months!" If these PPU's are to become mainstream and we take another backwards step in programming, please oh please let it be NVidia who takes the reigns... They've done more for the multimedia industry in the last 7 years than any other company...
DerekWilson - Saturday, May 6, 2006 - link
CPUs are quite well suited for handling physics calculations for a single object, or even a handful of objects ... physics (especially game physics) calculations are quite simple. When you have a few hundred thousand objects all bumping into each other every scene, there is no way on earth a current CPU will be able to keep up with PhysX. There are just too many data dependencies and too much of a bandwidth and parallel processing advantage on AGEIA's side.
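For a rough sense of the scale involved (illustrative numbers only, not AGEIA data):

```python
# Rough sense of scale: with n bodies that can potentially interact, even a
# naive broad phase has on the order of n*(n-1)/2 candidate pairs to
# consider every simulation step. Illustrative numbers only.

def naive_pairs(n):
    return n * (n - 1) // 2

for n in (1000, 10000, 100000):
    print(f"{n:>7} objects -> {naive_pairs(n):>15,} candidate pairs per step")
```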
As for where we will end up if another add-in card business takes off ... well that's a little too much to speculate on for the moment :-)
thestain - Saturday, May 6, 2006 - link
Just my opinion, but this product is too slow. Maybe there need to be minimum ratios to CPU and GPU speeds that Ageia and others can use to make sure they hit the upper performance market.
Software looks OK; looks like I might be going with software PhysX if available, along with software RAID, even though I would prefer to go with the hardware... if the bridged PCI bus did not screw up my sound card with noise and wasn't so slow... maybe... but my thinking is this product needs to be faster and wider... PCIe x4 or something like it, like I read in earlier articles it was supposed to be.
PCI... forget it... 733MHz... forget it... for me... 1.2GHz and PCIe x4 would have this product rocking.
any way to short this company?
They really screwed the pooch on speed for the upper end... should rename their product a graphics decelerator for faster CPUs... and a poor man's accelerator... but what person who owns a CPU worth $50 and a video card worth $50 will be willing to spend the $200 or more Ageia wants for this...
Great idea, but like the blockheads who gave us RAID hardware... not fast enough.
DerekWilson - Saturday, May 6, 2006 - link
afaik, RAID hardware becomes useful for heavy error checking configurations like RAID 5. With RAID 0 and RAID 1 (or 10 or 0+1) there is no error correction processing overhead. In the days of slow CPUs, this overhead could suck the life out of a system with RAID 5. Today it's not as big an impact in most situations (especially consumer level). RAID hardware was quite a good thing, but only in situations where it is necessary.
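As a toy illustration of the RAID 5 overhead in question (a simplified sketch, not how any real driver is written): every write to the array means recomputing parity across the stripe, which is exactly the work a hardware controller takes off the CPU.

```python
# Simplified sketch of why RAID 5 writes cost CPU time without a hardware
# controller: every write means recomputing parity (an XOR across the
# stripe). RAID 0/1 just split or mirror data, so there is no such step.
# Illustration only, not how any real driver is written.

def raid5_parity(stripe_blocks):
    """XOR all data blocks in a stripe to produce the parity block."""
    parity = bytearray(len(stripe_blocks[0]))
    for block in stripe_blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

stripe = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
print(raid5_parity(stripe).hex())   # parity written alongside the data blocks
```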
Cybercat - Saturday, May 6, 2006 - link
I was reading the article on FiringSquad (http://www.firingsquad.com/features/ageia_physx_re...) where Ageia responded to Havok's claims about where the credit is due, performance hits, etc. On the performance hits, they essentially responded immediately with a driver update that supposedly improves performance.
http://ageia.com/physx/drivers.html
Driver support is certainly a concern with any new hardware, but if Ageia keeps up this kind of timely response to issues and performance with frequent driver updates, in my mind they'll have taken care of one of the major factors determining their success, swinging the balance of advantages and obstacles in their favor as they try to make it in the market.
toyota - Friday, May 5, 2006 - link
I don't get it. The Ghost Recon videos WITHOUT PhysX look much more natural. The videos I have seen with it look pretty stupid. Everything that blows up or gets shot has the same little black pieces flying around. I have shot up quite a few things in my life and seen plenty of videos of real explosions, and that's not what it looks like.
DeathBooger - Friday, May 5, 2006 - link
The PC version of the new Ghost Recon game was supposed to be released alongside the Xbox 360 version but was delayed at the last minute for a couple of months. My guess is that PhysX implementation was a second thought while developing the game and the delay came from the developers trying to figure out what to do with it.
shecknoscopy - Friday, May 5, 2006 - link
Think *you're* part of a niche market? I gotta tell you, as a scientist, this whole topic of putting 'physics' into games makes for an intensely amusing read. Of course I understand what's meant here, but when I first look at people's text in these articles/discussions, I'm always taken aback: "Wait? We need to *add* physics to stuff? Man, THAT's why my experiments have been failing!"
Anyway...
I wonder if the types of computations employed by our controversial little PhysX accelerator could be harnessed *outside* of the gaming environment. As someone who both loves to game, but also would love to telecommute to my lab, I'd ideally like to be able to handle both tasks using one machine (I'm talking about in-house molecular modeling, crystallographic analysis, etc.). Right now I have to rely on a more appropriate 'gaming' GPU at home, but hustle on in to work to use an essentially identical computer which has been outfitted with a Quadro graphics card to do my crazy experiments. I guess I'm curious if it's plausible to make, say, a 7900GT + PhysX perform calculations comparable to a Quadro/FireGL-style workstation graphics setup. 'Cause seriously, trying to play BF2 on your $1500 Quadro card is seriously disappointing. But then, so is trying to perform realtime molecular electron density rendering on your $320 7900GT.
SO - anybody got any ideas? Some intimate knowledge of the difference between these types of calculations? Or some intimate knowledge of where I can get free pizza and beer? Ah, Grad School.
Walter Williams - Friday, May 5, 2006 - link
It will be great for military use as well as the automobile industry.
escapedturkey - Friday, May 5, 2006 - link
Why don't developers use the second core of many dual core systems to do a lot of physics calculations? Is there a drawback to this?