At this year’s Consumer Electronics Show, NVIDIA had several things going on. In a public press conference they announced 3D Vision Surround and Tegra 2, while on the show floor they had products aplenty, including a GF100 setup showcasing 3D Vision Surround.
But if you’re here, then what you’re most interested in is what wasn’t talked about in public, and that was GF100. With the Fermi-based GF100 GPU finally in full production, NVIDIA was ready to talk to the press about the rest of GF100, and at the tail end of CES we got our first look at GF100’s gaming abilities, along with a hands-on look at some as-yet-unnamed GF100 products in action. The message NVIDIA was trying to send: GF100 is going to be here soon, and it’s going to be fast.
Fermi/GF100 as announced in September of 2009
Before we get too far ahead of ourselves though, let’s talk about what we know and what we don’t know.
During CES, NVIDIA held deep dive sessions for the hardware press. At these deep dives, NVIDIA focused on three things: discussing GF100’s architecture as it’s relevant to a gaming/consumer GPU, discussing their developer relations program (including the infamous Batman: Arkham Asylum anti-aliasing situation), and finally demonstrating GF100 in action on some games and some productivity applications.
Many of you have likely already seen the demos, as videos of what we saw have been up on YouTube for a few days now. What you haven’t seen, and what we’ll be focusing on today, is what we’ve learned about GF100 as a gaming GPU. We now know everything about what makes GF100 tick, and we’re going to share it all with you.
With that said, while NVIDIA is showing off GF100, they aren’t showing off the final products. As such we can talk about the GPU, but we don’t know anything about the final cards. All of that will be announced at a later time – and no, we don’t know that either. In short, here’s what we still don’t know and will not be able to cover today:
- Die size
- What cards will be made from the GF100
- Clock speeds
- Power usage (we only know that it’s more than GT200)
- Pricing
- Performance
At this point the final products and pricing are going to heavily depend on what the final GF100 chips are like. The clock speeds NVIDIA can get away with will determine power usage and performance, and by extension, pricing. Make no mistake though: NVIDIA is clearly aiming to be faster than AMD’s Radeon HD 5870, so form your expectations accordingly.
For performance in particular, we have seen one benchmark: Far Cry 2, running the Ranch Small demo, with NVIDIA running it on both their unnamed GF100 card and a GTX 285. The GF100 card was faster (84fps vs. 50fps), but as Ranch Small is a semi-randomized benchmark (certain objects appear in some runs and not others) and we’ve seen Far Cry 2 be CPU-limited in other situations, we don’t put much faith in this specific benchmark. When it comes to performance, we’re content to wait until we can test GF100 cards ourselves.
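To illustrate why we’re wary of a single pass of a semi-randomized benchmark, here’s a minimal sketch of the sort of multi-run averaging we’d want to see before drawing conclusions. This is our own illustration, not NVIDIA’s methodology, and the frame rates in it are hypothetical.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical FPS results from repeated runs of a semi-randomized
    // benchmark; spawned objects differ between runs, so results vary.
    std::vector<double> runs = {84.0, 79.5, 86.2, 81.1, 83.7};

    double sum = 0.0;
    for (double r : runs) sum += r;
    const double mean = sum / runs.size();

    double sq = 0.0;
    for (double r : runs) sq += (r - mean) * (r - mean);
    const double stddev = std::sqrt(sq / (runs.size() - 1));  // sample std dev

    std::printf("mean %.1f fps, stddev %.1f fps over %zu runs\n",
                mean, stddev, runs.size());
    return 0;
}
```

A single run from a benchmark with this much spread can easily land a few fps away from the true average, which is why one number from one pass tells you little.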
With that out of the way, let’s get started on GF100.
115 Comments
dentatus - Monday, January 18, 2010 - link
" Im sure ATi could pull out the biggest, most expensive, hottest and fastest card in the world"- they have, its called the radeon HD5970.Really, in my Australia, the ATI DX11 hardware represents nothing close to value. The "biggest, most expensive, hottest and fastest card in the world" a.k.a HD5970 weighs in at a ridiculous AUD 1150. In the meantime the HD5850 jumped up from AUD 350 to AUD 450 on average here.
The "smaller, more affordable, better value" line I was used to associating with ATI went out the window the minute their hardware didn't have to compete with nVidia DX11 hardware.
Really, I'm not buying any new hardware until there are some viable alternatives at the top and some competition to burst ATI's pricing bubble. That's why it'd be good to see GF100 make a "G80" impression.
mcnabney - Monday, January 18, 2010 - link
You have no idea what a market economy is. If demand outstrips supply, prices WILL go up. They have to.
nafhan - Monday, January 18, 2010 - link
It's mentioned in the article, but NVIDIA being late to market is why prices on ATI's cards are high. Based on transistor count, etc., there's plenty of room for ATI to drop prices once they have some competition.

Griswold - Wednesday, January 20, 2010 - link
And that's where the article is dead wrong. For the most part, the ridiculous prices were dictated by low supply vs. high demand. Now we've finally arrived at decent supply vs. high demand, and prices are dropping. The next stage may be good supply vs. normal demand. That, and not a second earlier, is when AMD themselves could willingly start price gouging due to no competition.

However, the situation will stay like this long after Thermi launches, for the simple reason that there is no reason to believe Thermi won't have yield issues for quite some time after they've been sorted out for AMD - it's the size of chipzilla that will give it a rough time for the first couple of months, regardless of its capabilities.
chizow - Monday, January 18, 2010 - link
I'm sure ATI would've if they could've, instead of settling for 2nd place for most of the past 3 years, but GF100 isn't just about the performance crown; it's clearly setting the table for future variants based on its design changes, for a broader target audience (think G92).

bupkus - Monday, January 18, 2010 - link
"So why does NVIDIA want so much geometry performance? Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better. With more geometry power, NVIDIA can use tessellation and displacement mapping to generate more complex characters, objects, and scenery than AMD can at the same level of performance. And this is why NVIDIA has 16 PolyMorph Engines and 4 Raster Engines, because they need a lot of hardware to generate and process that much geometry."

Are you saying that ATI's viability and funding resources for R&D are not supported by the majority of sales, which traditionally fall into the lower-priced hardware that, btw, requires smaller and cheaper GPUs?
Targon - Wednesday, January 20, 2010 - link
Why do people not understand that with a six-month lead in the DX11 arena, AMD/ATI will be able to come out with a refresh card that could easily exceed what Fermi ends up being? Remember, AMD has been dealing with the TSMC issues for longer, and by the time Fermi comes out, the production problems SHOULD be done. Now, how long do you think it will take to work the kinks out of Fermi? How about product availability (something AMD has been dealing with for the past few months)? Just because a product is released does NOT mean you will be able to find it for sale.

The refresh from AMD could also mean that in addition to a faster part, it will also be cheaper. So while the 5870 is selling for $400 today, it may be down to $300 by the time Fermi is finally available for sale, with the refresh part (same performance as Fermi) available for $400. Hmmm, same performance for $100 less, and with no games available to take advantage of any improved image quality of Fermi, you see a better deal with the AMD part. We also don't know what the performance will be from the refresh from AMD, so a lot of this needs to take a wait-and-see approach.
We have also seen that Fermi is CLEARLY not even far enough along for any performance information to leak, which implies that it may be six MORE months before the card is really ready. Showing a demo isn't the same as letting reviewers tinker with the part themselves. Really, if it will be available for purchase in March, then shouldn't it be ready NOW, since it will take weeks to go from ready to shipping (packaging and such)?
AMD is winning this round, and they will be in the position where developers will have been using their cards for development, since NVIDIA clearly can't supply theirs. AMD will also be able to make SURE that their cards are the dominant DX11 cards as a result.
chizow - Monday, January 18, 2010 - link
@bupkus, no, but I can see a monster strawman coming from a mile away.

Calin - Monday, January 18, 2010 - link
"Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better"No it won't.
If the game ships with "high resolution" displacement maps, NVIDIA could make use of them (and AMD might not, because of the geometry power involved). If the game doesn't ship with the "high resolution" displacement maps to use for tessellation, then NVIDIA will only have a lot of geometry power going to waste, and the same graphical quality as AMD.
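[Ed: for readers wondering what displacement mapping actually does with all that tessellation output, here's a minimal conceptual sketch of our own, not code from either vendor's SDK: each vertex the tessellator generates is pushed along its surface normal by a height read from a displacement map. If no high-resolution map ships with the game, there's nothing meaningful to read, which is the point being made above. All names and values here are hypothetical.]

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Conceptual displacement-mapping step: push a tessellation-generated
// vertex out along its surface normal by a height sampled from a
// displacement map. Without a high-resolution map shipping with the
// game, the extra tessellated geometry has nothing to displace.
Vec3 displace(const Vec3& pos, const Vec3& normal, float height, float scale) {
    return { pos.x + normal.x * height * scale,
             pos.y + normal.y * height * scale,
             pos.z + normal.z * height * scale };
}

int main() {
    Vec3 p{0.0f, 0.0f, 0.0f};   // vertex created by the tessellator
    Vec3 n{0.0f, 1.0f, 0.0f};   // surface normal at that vertex
    float h = 0.25f;            // hypothetical sample from the map

    Vec3 d = displace(p, n, h, 1.0f);
    std::printf("displaced vertex: (%.2f, %.2f, %.2f)\n", d.x, d.y, d.z);
    return 0;
}
```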
Remember that in big game graphics engines there are multiple "video paths" for multiple GPUs - DirectX 8, DirectX 9, DirectX 10 - and NVIDIA and AMD both have optimized execution paths.
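[Ed: as a rough illustration of what such per-API render paths look like in engine code, here's a simplified sketch of our own, not taken from any real engine; the enum and the detection arguments are hypothetical, standing in for whatever capability query an engine actually performs at startup.]

```cpp
#include <cstdio>

// Simplified sketch of an engine choosing a render path at startup.
// Real engines key off the reported API/feature level (and sometimes
// the vendor ID); this enum and selection logic are hypothetical.
enum class RenderPath { Dx8, Dx9, Dx10, Dx11 };

RenderPath pick_path(int featureLevelMajor, bool tessellationSupported) {
    if (featureLevelMajor >= 11 && tessellationSupported)
        return RenderPath::Dx11;  // tessellation + displacement path
    if (featureLevelMajor == 10) return RenderPath::Dx10;
    if (featureLevelMajor == 9)  return RenderPath::Dx9;
    return RenderPath::Dx8;       // lowest-common-denominator fallback
}

int main() {
    // Hypothetical detected hardware: DX11-class with tessellation.
    RenderPath p = pick_path(11, true);
    std::printf("selected render path: %d\n", static_cast<int>(p));
    return 0;
}
```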