Better Image Quality: Jittered Sampling & Faster Anti-Aliasing
As we’ve stated before, the DX11 specification generally leaves NVIDIA’s hands tied. Without cap bits they can’t easily expose additional hardware features beyond what DX11 calls for, and even if they could, there’s always the risk of building hardware that almost never gets used, as happened with AMD’s tessellator on the Radeon HD 2000-4000 series.
So the bulk of the innovation has to come from something other than offering non-DX11 functionality to developers, and that starts with image quality.
We bring up DX11 here because while it strongly defines what features need to be offered, it says very little about how things should work on the back end. The PolyMorph Engine is of course one example of this, but there is another case where NVIDIA has done something interesting behind the scenes: jittered sampling.
Jittered sampling is a long-standing technique used in shadow mapping and various post-processing effects. Here it is most commonly used to create soft shadows from a shadow map: take a set of randomly offset samples of the neighboring texels, and from those you can compute a softer shadow edge. The biggest problem with jittered sampling is that it is computationally expensive, so its use has been limited to situations where there is enough performance to pay for it.
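To make the idea concrete, here is a minimal CPU-side sketch of jittered shadow-map sampling, assuming a simple depth-comparison (PCF-style) test. The data structure, jitter radius, and sample count are illustrative assumptions, not anything NVIDIA has specified, and a real shader would use a precomputed jitter table rather than rand().

#include <algorithm>
#include <cstdlib>
#include <vector>

// Hypothetical shadow map: a square grid of depth values in [0,1].
struct ShadowMap {
    int size;
    std::vector<float> depth;
    float fetch(int x, int y) const {
        x = std::max(0, std::min(size - 1, x));   // clamp to the map edges
        y = std::max(0, std::min(size - 1, y));
        return depth[y * size + x];
    }
};

// Jittered sampling: instead of one hard depth comparison, take several
// randomly offset samples around the lookup point and average the results.
// The fraction of samples that pass the depth test gives a soft shadow
// factor in [0,1] instead of a hard lit/unlit edge.
float softShadow(const ShadowMap& map, float u, float v, float fragmentDepth,
                 int numSamples = 8, float jitterRadius = 1.5f) {
    float lit = 0.0f;
    for (int i = 0; i < numSamples; ++i) {
        // Random offset in texels around the nominal lookup position.
        float dx = ((std::rand() / (float)RAND_MAX) * 2.0f - 1.0f) * jitterRadius;
        float dy = ((std::rand() / (float)RAND_MAX) * 2.0f - 1.0f) * jitterRadius;
        int x = (int)(u * map.size + dx);
        int y = (int)(v * map.size + dy);
        // 1 if the fragment is at least as close as the stored occluder depth.
        lit += (fragmentDepth <= map.fetch(x, y)) ? 1.0f : 0.0f;
    }
    return lit / numSamples;   // 0 = fully shadowed, 1 = fully lit
}

Every one of those neighbor reads is a separate texture fetch, which is where the cost comes from and where Gather4 comes in below.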
In DX10.1 and beyond, jittered sampling can be accelerated via the Gather4 instruction, which, as the name implies, gathers the four neighboring texels used for jittered sampling. Since DirectX does not specify how this must be implemented, NVIDIA has implemented it in hardware as a single vector instruction. The alternative is to fetch each texel separately, which is how this has to be done manually under DX10 and DX9.
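As a rough illustration of the difference, here is a hedged C++ sketch of the two fetch patterns described above. The texture type and function names are invented for this example; the point is only that the DX9/DX10 path issues four independent texel reads, while a Gather4-style request returns the whole 2x2 footprint at once, which GF100 maps to a single vector instruction. (The real Gather4 defines a specific component ordering for its four results; that detail is omitted here.)

#include <array>

// Hypothetical texel source; callers are assumed to pass in-bounds coordinates.
struct Texture2D {
    int width, height;
    const float* data;
    float fetch(int x, int y) const { return data[y * width + x]; }
};

// DX9/DX10-style path: four separate texture operations to cover the
// 2x2 block of texels starting at (x, y).
std::array<float, 4> fetchFourSeparately(const Texture2D& tex, int x, int y) {
    return {{ tex.fetch(x,     y),
              tex.fetch(x + 1, y),
              tex.fetch(x,     y + 1),
              tex.fetch(x + 1, y + 1) }};
}

// Gather4-style path: one request returns the same 2x2 footprint together.
// This loop is only a conceptual stand-in; in GF100's hardware it is a
// single vector fetch rather than four scalar ones.
std::array<float, 4> gatherFour(const Texture2D& tex, int x, int y) {
    std::array<float, 4> quad;
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx)
            quad[dy * 2 + dx] = tex.fetch(x + dx, y + dy);
    return quad;
}

In a jittered-sampling loop like the shadow sketch above, each tap that needs its neighbors can swap four fetches for one gather, which is where a vectorized implementation saves its time.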
NVIDIA’s own benchmarks put the performance advantage of this at roughly 2x over a non-vectorized implementation on the same hardware. The benefit for developers is that those who implement jittered sampling (or any other technique that can use Gather4) will find it much less expensive on GF100 than it was on NVIDIA’s previous-generation hardware. For gamers, this should mean better image quality through greater use of jittered sampling.
Meanwhile, anti-aliasing performance overall has received a significant speed boost. As with AMD, NVIDIA has tweaked its ROPs to reduce the hit of 8x MSAA, which on previous-generation GPUs could mean a massive performance drop. Specifically, NVIDIA has improved the ROPs’ color compression efficiency, and the company also cites the additional ROPs themselves, which let the hardware better digest small primitives that can’t compress well.
NVIDIA's HAWX data - not independently verified
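For a sense of the scale of the data that compression is working on, here is a quick back-of-the-envelope sketch. The resolution and color format are example assumptions, and this says nothing about how NVIDIA's compression actually behaves; it only shows why storing every sample uncompressed at 8x MSAA gets expensive fast.

#include <cstdio>

// Rough footprint of an uncompressed 8x MSAA color buffer at an example
// resolution. Depth/stencil samples would add to this on top.
int main() {
    const long long width = 1920, height = 1200;   // example resolution
    const long long samples = 8;                   // 8x MSAA
    const long long bytesPerSample = 4;            // 32-bit RGBA color

    long long bytes = width * height * samples * bytesPerSample;
    std::printf("Uncompressed 8x MSAA color buffer: %.1f MiB\n",
                bytes / (1024.0 * 1024.0));        // ~70 MiB of color alone
    return 0;
}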
This is something we’re certainly going to be testing once we have the hardware, although we’re still not sold on the idea that the quality improvement from 8x MSAA is worth any performance hit in most situations. There is one situation, however, where additional MSAA samples do make a stark difference, which we’ll get to next.
115 Comments
marc1000 - Tuesday, January 19, 2010 - link
hey, Banshee was fine! I had one because at the time the 3dfx API was better than DirectX. But suddenly everything became DX compatible, and that was one thing 3dfx GPUs could not do... then I replaced that Banshee with a Radeon 9200, later a Radeon X300 (or something), then a Radeon 3850, and now a Radeon 5770. I'm always in for the mainstream, not the top of the line, and Nvidia has not been paying enough attention to the mainstream since the GeForce FX series...
Zool - Monday, January 18, 2010 - link
The question is when they will come out with mid-range variants. The GF100 seems to be a 448 SP variant, and the 512 SP card will only come after the A4 revision, or who knows. http://www.semiconductor.net/article/438968-Nvidia...
The interesting part of the article is the graph showing the exponential increase in leakage power at 40nm and below (which of course hurts more if you have a big chip and different clocks to maintain).
They will have even more problems now that DX11 cards will only be on the GT300 architecture, so no rebrand choices for the mid-range and lower.
For consumers GF100 will be great if they can buy it somewhere in the future, but Nvidia will bleed more on it than on the GT200.
QChronoD - Monday, January 18, 2010 - link
Maybe I'm missing something, but it seems like PC gaming has lost most of its value in the last few years. I know that you can run games at higher resolutions and probably faster framerates than you can on consoles, but it will end up costing more than all 3 consoles combined to do so. It just seems to have gotten too expensive for the marginal performance advantage.
That being said, I bet that one of these would really crank through Collatz or GPUGRID.
GourdFreeMan - Monday, January 18, 2010 - link
I certainly share that sentiment. The last major graphical showcase we had was Crysis in 2007. There have been nice looking PC exclusive titles (Crysis Warhead, Arma 2, the Stalker franchise) since then, but no significant new IP with new rendering engines to take advantage of new technology.
If software publishers want our money, they are going to have to do better. Without significant GPGPU applications for the mainstream consumer, GPU manufacturers will eventually suffer as well.
dukeariochofchaos - Monday, January 18, 2010 - link
No, I think you're totally correct, from a certain point of view.
I had the thought that DX9 support is probably more than enough for console games, and why would developers pump money into DX11 support for a product that generates most of its profits on consoles?
Obviously, there is some money to be made in the PC game sphere, but is it really enough to drive game developers to sink money into extra quality just for us?
At least NV has made a product that can be marketed now, and into the future, for design/enterprise solutions. That should help them extract more of the value out of their R&D if there are very few DX11 games for the lifespan of Fermi.
Calin - Monday, January 18, 2010 - link
If Fermi works well, NVidia is in a great place for the development of their next GPU - they'll only need to update some things here and there, based mostly on where the card's performance is lacking (improve this, improve that, reduce this, reduce that). Also, they are in a very good place for making lower-end cards based on Fermi (cut everything in two or four, no need to redesign the previously fixed-function blocks).
As for AMD... their current design is in the works and probably too far along for big changes, so their real Fermi-killer won't come sooner than a year or so (that is, if Fermi proves to be as great a success as NVidia wants it to be).
toyota - Monday, January 18, 2010 - link
what I have saved on games this year has more than paid for the difference between the price of a console and my PC.
Stas - Tuesday, January 19, 2010 - link
that ^^^^^^^
Besides, with Steam/D2D/Impulse there is new life in PC gaming: constant sales on great games, automatic updates, active support, forums full of people, all integrated with a virtual community (profiles, chats, etc.), and a place to release demos, trailers, etc. I was worried about PC gaming 2-3 years ago, but I'm absolutely confident that it's coming back better than ever.
deeceefar2 - Monday, January 18, 2010 - link
Are the screenshots from Left 4 Dead 2 missing at the end of page 5?
[quote]
As a consequence of this change, TMAA’s tendency to have fake geometry on billboards pop in and out of existence is also solved. Here we have a set of screenshots from Left 4 Dead 2 showcasing this in action. The GF100 with TMAA generates softer edges on the vertical bars in this picture, which is what stops the popping from the GT200.
[/quote]
Ryan Smith - Monday, January 18, 2010 - link
Whoops. Fixed.