NVIDIA's Bumpy Ride: A Q4 2009 Update
by Anand Lal Shimpi on October 14, 2009 12:00 AM EST - Posted in GPUs
There’s a lot to talk about with regards to NVIDIA and no time for a long intro, so let’s get right to it.
At the end of our Radeon HD 5850 Review we included this update:
“Update: We went window shopping again this afternoon to see if there were any GTX 285 price changes. There weren't. In fact GTX 285 supply seems pretty low; MWave, ZipZoomFly, and Newegg only have a few models in stock. We asked NVIDIA about this, but all they had to say was "demand remains strong". Given the timing, we're still suspicious that something may be afoot.”
Less than a week later, there were stories everywhere about NVIDIA’s GT200b shortages. Fudo said that NVIDIA was unwilling to drop prices low enough to make the cards competitive. Charlie said that NVIDIA was going to abandon the high end and upper mid range graphics card markets completely.
Let’s look at what we do know. GT200b has around 1.4 billion transistors and is made at TSMC on a 55nm process. Wikipedia lists the die at 470mm^2, roughly 80% the size of the original 65nm GT200 die. Either way, it’s a lot bigger and still more expensive to make than Cypress’ 334mm^2 40nm die.
Cypress vs. GT200b die sizes to scale
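As a quick back-of-the-envelope check on those figures (this uses the commonly quoted ~576mm^2 number for the original 65nm GT200, which isn't stated above):

```latex
\frac{470~\text{mm}^2}{576~\text{mm}^2} \approx 0.82,
\qquad
\frac{470~\text{mm}^2}{334~\text{mm}^2} \approx 1.41
```

So even after the 55nm shrink, GT200b still uses roughly 40% more silicon per chip than Cypress, which is what drives the cost gap discussed below.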
NVIDIA could get into a price war with AMD, but given that both companies make their chips at the same place and NVIDIA’s per-chip costs are higher, it’s not a war that makes sense to fight.
NVIDIA told me two things. One, that they have told some OEMs that they will no longer be making GT200b based products. That’s everything from the GTX 260 all the way up to the GTX 285. The EOL (end of life) notices went out recently, and NVIDIA is asking the OEMs to submit their allocation requests as soon as possible; otherwise they risk not getting any cards.
The second was that despite the EOL notices, end users should be able to purchase GeForce GTX 260, 275 and 285 cards all the way up through February of next year.
If you look carefully, neither of these statements directly supports or refutes the two articles above. NVIDIA is very clever.
NVIDIA’s explanation to me was that current GPU supplies were decided on months ago, and in light of the economy, the number of chips NVIDIA ordered from TSMC was low. Demand ended up being stronger than expected and thus you can expect supplies to be tight in the remaining months of the year and into 2010.
Board vendors have been telling us that they can’t get allocations from NVIDIA. Some are even wondering whether it makes sense to build more GTX cards for the end of this year.
If you want my opinion, it goes something like this. While RV770 caught NVIDIA off guard, Cypress did not. AMD used the extra area (and then some) allowed by the move to 40nm to double RV770, not an unpredictable move. NVIDIA knew they were going to be late with Fermi, knew how competitive Cypress would be, and made a conscious decision to cut back supply months ago rather than enter a price war with AMD.
While NVIDIA won’t publicly admit defeat, AMD clearly won this round. Obviously it makes sense to ramp down the old product in expectation of Fermi, but I don’t see Fermi with any real availability this year. We may see a launch with performance data in 2009, but I’d expect availability in 2010.
While NVIDIA just launched its first 40nm DX10.1 parts, AMD just launched $120 DX11 cards
Regardless of how you want to phrase it, there will be lower than normal supplies of GT200 cards in the market this quarter. With higher per-card costs than AMD and AMD’s DX11 parts delivering better performance, would you expect things to be any different?
Things Get Better Next Year
NVIDIA launched GT200 on too old a process (65nm) and was late in moving to 55nm. Bumpgate happened. Then we had the issues with 40nm at TSMC and Fermi’s delays. In short, it hasn’t been the best 12 months for NVIDIA. Next year, though, there’s reason to be optimistic.
When Fermi does launch, everything from that point on should theoretically be smooth sailing. There aren’t any process transitions in 2010; it’s all about execution at that point and how quickly NVIDIA can get Fermi derivatives out the door. AMD will have virtually its entire product stack out by the time NVIDIA ships Fermi in quantity, but NVIDIA should have competitive product out in 2010. AMD wins the first half of the DX11 race; the second half will be a bit more challenging.
If anything, NVIDIA has proved to be a resilient company. Other than Intel, I don’t know of any company that could’ve recovered from NV30. The real question is how strong will Fermi 2 be? Stumble twice and you’re shaken, do it a third time and you’re likely to fall.
106 Comments
neomatrix724 - Wednesday, October 14, 2009 - link
Were you looking at the same cards as everyone else? AMD has always aimed for the best price for performance. nVidia has always won hands down on performance...but those wins came at the expense of costlier cards. AMD hit one out of the park with their new cards. OpenCL, Eyefinity and a strong improvement over previous cards is a very strong feature set. I'm not sure about Fermi and I'm curious to see where nVidia is going with it...but their moves have been confusing me lately.
shin0bi272 - Thursday, October 15, 2009 - link
Actually, nvidia hasn't always won. Their entire first two generations of DX9 cards were slower than ATi's, because nvidia boycotted the meetings on the specs for DX9 and made a card based on the beefier specs that they wanted, and that card (the 5800) turned out to be 20% slower than the ATi 9700 Pro. This trend sort of continued for a couple of years, but nvidia got closer with the 5900 and eked out a win with the 6800 a little later. Keep in mind I haven't owned an ATi card for gaming since the 9700 Pro (and that was a gift), so I am in no way an ATi fan, but facts are facts. Nvidia has made great cards but not always the fastest.
Griswold - Wednesday, October 14, 2009 - link
That didn't make a lot of sense...
vlado08 - Wednesday, October 14, 2009 - link
I am wondering about WDDM 2.0 and multitasking on the GPU: are they coming soon? Maybe Fermi is better prepared for it?
Scali - Wednesday, October 14, 2009 - link
WDDM 2.0 is not part of Windows 7, so we'll need to wait for at least another Windows generation before that becomes available. By then Fermi will most probably have been replaced by a newer generation of GPUs anyway.
Multitasking on the GPU is possible for the first time on Fermi, as it can run multiple GPGPU kernels concurrently (I believe up to 16 different kernels).
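For readers wondering what running multiple kernels concurrently looks like from the programmer's side, here is a minimal sketch using CUDA streams, the mechanism CUDA exposes for submitting independent work that Fermi-class hardware can overlap. The kernel, buffer sizes and stream count below are arbitrary illustrations, not anything taken from Fermi documentation.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial, artificial workload; the name and the math are purely illustrative.
__global__ void busyKernel(float *data, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        float v = data[idx];
        for (int i = 0; i < 1000; ++i)
            v = v * 1.0001f + 0.5f;   // keep the SMs busy for a while
        data[idx] = v;
    }
}

int main()
{
    const int nStreams = 4;   // arbitrary; Fermi is said to overlap up to 16 kernels
    const int n = 1 << 16;
    cudaStream_t streams[nStreams];
    float *buffers[nStreams];

    for (int i = 0; i < nStreams; ++i) {
        cudaStreamCreate(&streams[i]);
        cudaMalloc(&buffers[i], n * sizeof(float));
        cudaMemset(buffers[i], 0, n * sizeof(float));
    }

    // Kernels launched into different streams have no ordering dependency on
    // each other, so hardware that supports concurrent kernels is free to run
    // them side by side instead of back to back.
    for (int i = 0; i < nStreams; ++i)
        busyKernel<<<(n + 255) / 256, 256, 0, streams[i]>>>(buffers[i], n);

    cudaDeviceSynchronize();   // wait for all streams to finish

    for (int i = 0; i < nStreams; ++i) {
        cudaFree(buffers[i]);
        cudaStreamDestroy(streams[i]);
    }
    printf("done\n");
    return 0;
}
```

The same code runs correctly on pre-Fermi hardware too; the launches simply get serialized instead of overlapping.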
vlado08 - Wednesday, October 14, 2009 - link
You are right that we are going to wait, but what about Microsoft, and what about NVIDIA? They should be working on it. NVIDIA probably doesn't want to be late again; maybe they want to be first this time, seeing where things are going. If their hardware is better prepared for WDDM 2.0 today, then they will have more time to gain experience and polish their drivers. ATI (AMD) had a hardware-only launch of DirectX 11; they are missing the "soft" part of it (drivers not ready). ATI needs to win, they have to make DX11 work, and they are putting a lot of effort into it, so NVIDIA is skipping the DX11 battle and getting ready for the next one. Everything is getting more complex and needs more time to mature. We are also getting more demanding and less forgiving. So for the next Windows to be ready in 2 or 3 years, they need to start now. At least planning.
Mills - Wednesday, October 14, 2009 - link
Couldn't these 'extra transistors' be utilized in games as well, similar to how NVIDIA handled PhysX? In other words, incorporate NVIDIA-specific game enhancements that utilize these functions in NVIDIA sponsored titles? Is it too late to do this? Perhaps they will just extend the PhysX API.
Though, PhysX has been out for quite some time and there are only 13(?) PhysX supported titles. NVIDIA better pick up its game here if they plan to leverage PhysX to out-value ATI. Does anyone know if there are any big name titles that have announced PhysX support?
Griswold - Wednesday, October 14, 2009 - link
PhysX is a sinking ship, didn't you get the memo?
shin0bi272 - Thursday, October 15, 2009 - link
nvidia says that switching between graphics and CUDA is going to be 10x faster in Fermi, meaning that PhysX performance will more than double.
Scali - Thursday, October 15, 2009 - link
Yup, and that's a hardware feature, which applies equally to any language, be it C/C++ for Cuda, OpenCL or DirectCompute. So not only PhysX will benefit, but also Bullet or Havok, or whatever other GPU-accelerated physics library might surface.