Exclusive: ASUS Debuts AGEIA PhysX Hardware
by Derek Wilson on May 5, 2006 3:00 AM EST - Posted in GPUs
Introduction
A little over a year ago, we first heard about a company called AGEIA whose goal was to bring high quality physics processing power to the desktop. Today they have succeeded in their mission. For a short while, systems with the PhysX PPU (physics processing unit) have been shipping from Dell, Alienware, and Falcon Northwest. Soon, PhysX add-in cards will be available in retail channels. Today, the very first PhysX-accelerated game has been released: Tom Clancy's Ghost Recon Advanced Warfighter, and to top off the excitement, ASUS has given us an exclusive look at their hardware.
We have put together a couple of benchmarks designed to illustrate the impact of AGEIA's PhysX technology on game performance, and we will certainly comment heavily on our experience while playing the game. The potential benefits have been discussed quite a bit over the past year, but now we finally get a taste of what the first PhysX-accelerated games can do.
With NVIDIA and ATI starting to dip their toes into physics acceleration as well (with Havok FX and in-house demos of other technology), knowing the playing field is important for all parties involved. As should be expected, many developers and hardware manufacturers will give this technology some time to prove itself before jumping on the bandwagon. Will our exploration show enough added benefit for PhysX to be worth the investment?
Before we hit the numbers, we want to take another look at the technology behind the hardware.
101 Comments
Ickus - Saturday, May 6, 2006 - link
Hmmm - seems like the modern equivalent of the old-school math co-processors. Yes, these are a good idea, and correct me if I'm wrong, but isn't that $250 (AUD) CPU I forked out for supposedly quite good at doing these sorts of calculations, what with its massive FPU capabilities and all? I KNOW that current CPUs have more than enough ability to perform the calculations for the physics engines used in today's video games. I can see why companies are interested in pushing physics add-on cards though...
"Are your games running crap due to inefficient programming and resource hungry operating systems? Then buy a physics processing unit add-in card! Guaranteed to give you an unimpressive performance benefit for about 2 months!" If these PPU's are to become mainstream and we take another backwards step in programming, please oh please let it be NVidia who takes the reigns... They've done more for the multimedia industry in the last 7 years than any other company...
DerekWilson - Saturday, May 6, 2006 - link
CPUs are quite well suited to handling physics calculations for a single object, or even a handful of objects... physics (especially game physics) calculations are quite simple. When you have a few hundred thousand objects all bumping into each other every scene, there is no way on earth a current CPU will be able to keep up with PhysX. There are just too many data dependencies and too much of a bandwidth and parallel processing advantage on AGEIA's side (a rough sketch of the pairwise scaling problem follows below).
As for where we will end up if another add-in card business takes off ... well that's a little too much to speculate on for the moment :-)
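To put the scaling point in concrete terms, here is a minimal, hypothetical C++ sketch (an illustration only, not AGEIA's code or any game's engine) of a naive broad-phase collision pass. Every body is tested against every other body, so the work grows as O(n^2): a few hundred objects are trivial for a CPU, but a few hundred thousand interacting objects are not.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical rigid body: position and a bounding-sphere radius only.
struct Body {
    float x, y, z;
    float radius;
};

// Naive O(n^2) broad phase: test every pair of bodies for overlap.
// Pair count is n*(n-1)/2, so 500 bodies -> ~125 thousand tests,
// but 100,000 bodies -> ~5 billion tests per frame.
static std::size_t CountOverlaps(const std::vector<Body>& bodies)
{
    std::size_t overlaps = 0;
    for (std::size_t i = 0; i < bodies.size(); ++i) {
        for (std::size_t j = i + 1; j < bodies.size(); ++j) {
            const float dx = bodies[i].x - bodies[j].x;
            const float dy = bodies[i].y - bodies[j].y;
            const float dz = bodies[i].z - bodies[j].z;
            const float r  = bodies[i].radius + bodies[j].radius;
            if (dx * dx + dy * dy + dz * dz < r * r) {
                ++overlaps;
            }
        }
    }
    return overlaps;
}

int main()
{
    // 1,000 bodies along a line: already ~500,000 pair tests per frame.
    std::vector<Body> bodies(1000);
    for (std::size_t i = 0; i < bodies.size(); ++i) {
        bodies[i] = Body{ static_cast<float>(i) * 0.5f, 0.0f, 0.0f, 0.3f };
    }
    std::printf("overlapping pairs: %zu\n", CountOverlaps(bodies));
    return 0;
}
```

Real engines prune pairs with spatial partitioning, but the contact and constraint work that remains is still wide, regular floating-point arithmetic, which is exactly what a massively parallel PPU (or GPU) is built to chew through.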
thestain - Saturday, May 6, 2006 - link
Just my opinion, but this product is too slow. Maybe there need to be minimum ratios to CPU and GPU speeds that AGEIA and others can use to make sure they hit the upper performance market.
Software looks OK; it looks like I might be going with software PhysX if available, along with software RAID, even though I would prefer to go with the hardware... if the bridged PCI bus didn't screw up my sound card with noise and wasn't so slow... maybe. But my thinking is this product needs to be faster and wider... PCIe x4 or something like it, like I read in earlier articles it was supposed to be.
PCI... forget it... 773MHz... forget it... For me, 1.2GHz and PCIe x4 would have this product rocking.
Any way to short this company?
They really screwed the pooch on speed for the upper end... they should rename their product a graphics decelerator for faster CPUs, and a poor man's accelerator... but what person who owns a $50 CPU and a $50 video card will be willing to spend the $200 or more AGEIA wants for this?
Great idea, but like the blockheads who gave us RAID hardware... not fast enough.
DerekWilson - Saturday, May 6, 2006 - link
AFAIK, RAID hardware becomes useful for heavy error-checking configurations like RAID 5. With RAID 0 and RAID 1 (or 10 or 0+1) there is no error correction processing overhead. In the days of slow CPUs, this overhead could suck the life out of a system running RAID 5; today it's not as big an impact in most situations (especially at the consumer level). RAID hardware was quite a good thing, but only in situations where it is necessary (a small sketch of the parity math follows below).
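As an aside, here is a small, hypothetical C++ sketch (an illustration only, not any particular controller's firmware) of the XOR parity RAID 5 computes for every stripe it writes. It is this parity work, plus the read-modify-write cycles around it, that a hardware RAID controller offloads; RAID 0 and 1 merely split or mirror data, so there is nothing comparable to offload.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>

// Hypothetical 4-drive RAID 5 stripe: three data blocks plus one parity block.
constexpr std::size_t kBlockSize = 8;   // tiny blocks for illustration
using Block = std::array<std::uint8_t, kBlockSize>;

// Parity is the byte-wise XOR of all data blocks in the stripe.
// Any single lost block can be rebuilt by XORing the survivors.
Block ComputeParity(const Block& a, const Block& b, const Block& c)
{
    Block parity{};
    for (std::size_t i = 0; i < kBlockSize; ++i) {
        parity[i] = a[i] ^ b[i] ^ c[i];
    }
    return parity;
}

int main()
{
    Block d0{ 'R', 'A', 'I', 'D', '5', ' ', 'i', 's' };
    Block d1{ ' ', 'p', 'a', 'r', 'i', 't', 'y', ' ' };
    Block d2{ 'b', 'a', 's', 'e', 'd', '.', ' ', ' ' };

    Block parity = ComputeParity(d0, d1, d2);

    // Simulate losing d1 and rebuilding it from the other blocks plus parity.
    Block rebuilt = ComputeParity(d0, parity, d2);
    std::printf("rebuilt block: %.8s\n",
                reinterpret_cast<const char*>(rebuilt.data()));
    return 0;
}
```

With RAID 0/1 there is no parity to compute at all, which is why software RAID in those modes barely registers on a modern CPU.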
Cybercat - Saturday, May 6, 2006 - link
I was reading the article on FiringSquad (http://www.firingsquad.com/features/ageia_physx_re...) where AGEIA responded to Havok's claims about where the credit is due, performance hits, and so on. On the performance hits, they essentially responded immediately with a driver update that supposedly improves performance.
http://ageia.com/physx/drivers.html
Driver support is certainly a concern with any new hardware, but if AGEIA keeps up this kind of timely response to issues and performance with frequent driver updates, in my mind they'll have taken care of one of the major factors determining their success, and tipped their advantages to outweigh their obstacles for making it in the market.
toyota - Friday, May 5, 2006 - link
I don't get it. The Ghost Recon videos WITHOUT PhysX look much more natural; the videos I have seen with it look pretty stupid. Everything that blows up or gets shot has the same little black pieces flying around. I have shot up quite a few things in my life and seen plenty of videos of real explosions, and that's not what it looks like.
DeathBooger - Friday, May 5, 2006 - link
The PC version of the new Ghost Recon game was supposed to be released alongside the Xbox 360 version but was delayed at the last minute for a couple of months. My guess is that the PhysX implementation was an afterthought during development and the delay came from the developers trying to figure out what to do with it.
shecknoscopy - Friday, May 5, 2006 - link
Think *you're* part of a niche market? I gotta tell you, as a scientist, this whole topic of putting 'physics' into games makes for an intensely amusing read. Of course I understand what's meant here, but when I first look at people's text in these articles/discussions, I'm always taken aback: "Wait? We need to *add* physics to stuff? Man, THAT's why my experiments have been failing!"
Anyway...
I wonder if the types of computations employed by our controversial little PhysX accelerator could be harnessed *outside* of the gaming environment. As someone who loves to game but would also love to telecommute to my lab, I'd ideally like to handle both tasks on one machine (I'm talking about in-house molecular modeling, crystallographic analysis, etc.). Right now I rely on a more appropriate 'gaming' GPU at home, but hustle in to work to use an essentially identical computer that has been outfitted with a Quadro graphics card to do my crazy experiments. I guess I'm curious whether it's plausible to make, say, a 7900GT + PhysX perform calculations comparable to a Quadro/Fire-style workstation graphics setup. 'Cause seriously, trying to play BF2 on your $1500 Quadro card is seriously disappointing. But then, so is trying to perform realtime molecular electron density rendering on your $320 7900GT.
SO - anybody got any ideas? Some intimate knowledge of the difference between these types of calculations? Or some intimate knowledge of where I can get free pizza and beer? Ah, Grad School.
Walter Williams - Friday, May 5, 2006 - link
It will be great for military use as well as the automobile industry.
escapedturkey - Friday, May 5, 2006 - link
Why don't developers use the second core of the many dual-core systems out there to do a lot of the physics calculations? Is there a drawback to this?
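For what it's worth, this is roughly what multithreaded engines began doing. Below is a minimal, hypothetical C++ sketch (modern std::thread syntax, not code from any shipping engine) of the usual pattern: a dedicated physics thread steps the simulation so the OS can schedule it on the second core while the main thread handles rendering. The drawbacks are the locking and copying needed to share state safely, and the fact that a general-purpose core still has far less parallel floating-point throughput and bandwidth than a dedicated PPU.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical shared simulation state, double-buffered so the render
// thread never reads a half-updated frame.
struct WorldState {
    std::vector<float> positions;   // one float per object, for brevity
};

std::mutex        g_swapMutex;
WorldState        g_frontBuffer;    // read by the "render" (main) thread
WorldState        g_backBuffer;     // written by the physics thread
std::atomic<bool> g_running{true};

// Physics thread: steps the simulation at a fixed 60 Hz rate on whatever
// core the OS schedules it to, typically the second core.
void PhysicsThread()
{
    const float dt = 1.0f / 60.0f;
    while (g_running.load()) {
        for (float& p : g_backBuffer.positions) {
            p += 1.0f * dt;                       // trivial stand-in "integration"
        }
        {   // Publish the finished step to the render thread.
            std::lock_guard<std::mutex> lock(g_swapMutex);
            g_frontBuffer = g_backBuffer;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

int main()
{
    g_backBuffer.positions.assign(10000, 0.0f);
    g_frontBuffer = g_backBuffer;

    std::thread physics(PhysicsThread);

    // Main thread stands in for the render loop: read the latest state.
    for (int frame = 0; frame < 5; ++frame) {
        {
            std::lock_guard<std::mutex> lock(g_swapMutex);
            std::printf("frame %d: object 0 at %.3f\n",
                        frame, g_frontBuffer.positions[0]);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(32));
    }

    g_running = false;
    physics.join();
    return 0;
}
```

Double-buffering keeps the two threads loosely coupled; tighter integration, such as gameplay code reading physics results mid-frame, is where the real synchronization headaches start.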