Better Image Quality: Jittered Sampling & Faster Anti-Aliasing
As we’ve stated before, the DX11 specification generally leaves NVIDIA’s hands tied. Without cap bits they can’t easily expose additional hardware features beyond what DX11 calls for, and even if they could, there’s always the risk of building hardware that almost never gets used, such as AMD’s tessellator on the Radeon HD 2000-4000 series.
So the bulk of the innovation has to come from something other than offering non-DX11 functionality to developers, and that starts with image quality.
We bring up DX11 here because while it strongly defines what features need to be offered, it says very little about how those features must be implemented. The PolyMorph Engine is of course one example of this, but there is another case where NVIDIA has done something interesting on the backend: jittered sampling.
Jittered sampling is a long-standing technique used in shadow mapping and various post-processing effects. For shadow maps it is usually used to create soft shadows – take a set of randomly offset samples of the neighboring texels, and from those samples you can compute a softer shadow edge. The biggest problem with jittered sampling is that it’s computationally expensive, so its use has been limited to situations where there is enough performance to pay for it.
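To make the idea concrete, here is a minimal CPU-side sketch of jittered shadow-map sampling (percentage-closer filtering with random offsets). This is our illustration rather than anything from NVIDIA: the toy shadow map, sample count, and jitter radius are all made-up values, and in a real engine this logic would live in a pixel shader rather than on the CPU.

```cpp
// A minimal sketch of jittered shadow-map sampling. Purely illustrative:
// the toy shadow map, sample count, and jitter radius are made-up values.
#include <algorithm>
#include <cstdio>
#include <random>

const int   kMapSize = 4;      // toy 4x4 shadow map
const int   kSamples = 8;      // jittered samples per shaded pixel
const float kJitterR = 1.0f;   // jitter radius, in texels

// Stored depth per texel: 1.0 = empty, 0.2 = an occluder on the right half.
float shadowMap[kMapSize][kMapSize] = {
    {1.0f, 1.0f, 0.2f, 0.2f},
    {1.0f, 1.0f, 0.2f, 0.2f},
    {1.0f, 1.0f, 0.2f, 0.2f},
    {1.0f, 1.0f, 0.2f, 0.2f},
};

float fetchDepth(int x, int y) {
    x = std::max(0, std::min(x, kMapSize - 1));   // clamp-to-edge addressing
    y = std::max(0, std::min(y, kMapSize - 1));
    return shadowMap[y][x];
}

// Fraction of jittered samples the receiver passes; fractional results are
// what produce a soft edge instead of a hard in/out shadow test.
float softShadow(float u, float v, float receiverDepth, std::mt19937& rng) {
    std::uniform_real_distribution<float> jitter(-kJitterR, kJitterR);
    int lit = 0;
    for (int i = 0; i < kSamples; ++i) {
        int x = static_cast<int>(u + jitter(rng));
        int y = static_cast<int>(v + jitter(rng));
        if (receiverDepth <= fetchDepth(x, y))
            ++lit;
    }
    return static_cast<float>(lit) / kSamples;
}

int main() {
    std::mt19937 rng(42);
    // Walk a receiver at depth 0.5 across the occluder's edge: the shadow
    // factor fades smoothly instead of snapping from lit to shadowed.
    for (float u = 0.5f; u < kMapSize; u += 1.0f)
        std::printf("u=%.1f  light=%.2f\n", u, softShadow(u, 1.5f, 0.5f, rng));
}
```

Walking the sample point across the occluder’s edge prints a shadow factor that fades from 1.0 to 0.0 instead of stepping, which is exactly the soft edge the technique buys – at the cost of several texture fetches per shaded pixel.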
In DX10.1 and beyond, jittered sampling can be accelerated via the Gather4 instruction, which as the name implies gathers the four neighboring texels needed for jittered sampling in a single operation. Since DX does not specify how this must be implemented, NVIDIA has implemented it in hardware as a single vector instruction. The alternative is to fetch each texel separately, which is how this has to be done manually under DX10 and DX9.
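As a rough illustration of what the instruction buys, the sketch below emulates Gather4’s semantics on the CPU and compares it against the four separate point fetches that DX9/DX10 hardware has to issue. The texture contents and helper names here are ours for illustration, not any real API:

```cpp
// A sketch of Gather4's semantics, emulated on the CPU. One gather takes a
// single (u,v) coordinate and returns one channel of the 2x2 texel quad
// that bilinear filtering would touch.
#include <cstdio>

const int kW = 4, kH = 4;
float tex[kH][kW] = {                  // toy single-channel texture
    {0.0f, 0.1f, 0.2f, 0.3f},
    {0.4f, 0.5f, 0.6f, 0.7f},
    {0.8f, 0.9f, 1.0f, 1.1f},
    {1.2f, 1.3f, 1.4f, 1.5f},
};

struct Quad { float x, y, z, w; };     // the four texels of the footprint

// One "instruction": the 2x2 quad around (u,v) in Gather4's documented order
// (x = lower-left, y = lower-right, z = upper-right, w = upper-left).
// No edge clamping in this toy; callers stay in the interior.
Quad gather4(float u, float v) {
    int x0 = static_cast<int>(u - 0.5f);
    int y0 = static_cast<int>(v - 0.5f);
    return { tex[y0 + 1][x0], tex[y0 + 1][x0 + 1],
             tex[y0][x0 + 1], tex[y0][x0] };
}

int main() {
    // The DX9/DX10 path: four separate point fetches for the same footprint.
    float a = tex[1][1], b = tex[1][2], c = tex[2][1], d = tex[2][2];
    // The DX10.1 path: the same four texels from one call.
    Quad q = gather4(2.0f, 2.0f);
    std::printf("4 fetches: %.1f %.1f %.1f %.1f\n", a, b, c, d);
    std::printf("gather4  : %.1f %.1f %.1f %.1f\n", q.w, q.z, q.x, q.y);
}
```

Both paths produce the same four texels; the difference on real hardware is that the first costs four fetch instructions while the second costs one.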
NVIDIA’s own benchmarks put the performance advantage of this at roughly 2x over the non-vectorized implementation on the same hardware. For developers, the benefit is that jittered sampling (or any other technique that can use Gather4) becomes a much less expensive technique than it was on NVIDIA’s previous-generation hardware. For gamers, this should mean better image quality through the greater use of jittered sampling.
Meanwhile, anti-aliasing performance overall has received a significant speed boost. As AMD did, NVIDIA has tweaked their ROPs to reduce the performance hit of 8x MSAA, which on previous-generation GPUs could cause a massive performance drop. Specifically, NVIDIA has improved the compression efficiency of the ROPs to reduce the cost of 8x MSAA, and also cites the additional ROPs themselves as a benefit, since they allow the hardware to better digest the smaller primitives that can’t compress well.
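How much compression helps depends heavily on primitive size, and a toy model shows why. The sketch below is purely our illustration, not NVIDIA’s actual compression scheme: it assumes a pixel whose 8 samples all come from one triangle can be stored as a single color, while a pixel straddling a triangle edge must store all 8 samples.

```cpp
// Toy model of MSAA color compression in a ROP. Our illustration only, not
// NVIDIA's actual scheme: a pixel whose samples all share one color is
// stored once; an edge pixel must store every sample.
#include <cstdio>

const int kSamples = 8;   // 8x MSAA

struct Pixel { unsigned color[kSamples]; };

// Bytes needed under the toy compression: 1 color if uniform, else all 8.
int compressedSize(const Pixel& p) {
    for (int i = 1; i < kSamples; ++i)
        if (p.color[i] != p.color[0]) return kSamples * 4;  // edge pixel
    return 4;                                               // interior pixel
}

int main() {
    const int pixels = 1000;
    // Assumed interior/edge splits purely for illustration: a big triangle
    // fully covers most of its pixels, a small one is mostly edge pixels.
    const int interiorCounts[] = {950, 400};
    for (int interior : interiorCounts) {
        Pixel full = {{1,1,1,1,1,1,1,1}}, edge = {{1,1,1,2,2,2,2,2}};
        int bytes = interior * compressedSize(full)
                  + (pixels - interior) * compressedSize(edge);
        std::printf("interior pixels=%d  traffic: %d bytes (raw %d)\n",
                    interior, bytes, pixels * kSamples * 4);
    }
}
```

With mostly interior pixels the framebuffer traffic collapses; as primitives shrink and edge pixels dominate, compression loses its bite, which is where NVIDIA says the extra ROPs pick up the slack.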
NVIDIA's HAWX data - not independently verified
This is something we’re certainly going to be testing once we have the hardware, although we’re still not sold on the idea that the quality improvement from 8x MSAA is worth any performance hit in most situations. There is one situation however where additional MSAA samples do make a stark difference, which we’ll get to next.
Comments
Ryan Smith - Wednesday, January 20, 2010
At this point I'm not sure where that would be, and part of that is diminishing returns. Tessellation will produce better models, but each additional polygon adds less than the last. We're going to have to see what games do in order to tell whether the extra geometry that GF100 is supposed to be able to generate can really make a noticeable difference.
Will game makers take advantage of it? That's the million-dollar question right now. NVIDIA is counting on them doing so, but it remains to be seen just how many devs are going to make meaningful use of tessellation (beyond just n-patching things for better curves), since DX11 game development is so young.
Consoles certainly have a lot to do with it. One very real possibility is that the bulk of games continue to be at the DX9 level until the next generation of consoles hits with DX11-like GPUs. I'll answer the rest of this in your next question.
The good news is that it takes very little work. Game assets are almost always designed at a much greater level of detail than what they ship at. The textbook example is Doom 3, where the models were designed on the order of 1 million polygons; they needed to be that detailed in order to compute proper bump maps and parallax maps. A displacement map for tessellation is just one more derived map in that regard - for the most part you only need to export an appropriate displacement map from your original assets, and NV is counting on this.
The only downsides to NV's plan are that: 1) Not everything is done at this high a detail level (models are usually highly detailed, the world geometry not so much), and 2) Higher quality displacement maps aren't "free". Since a game will have multiple displacement maps (you have to MIP-chain them just like you do any other kind of map), a dev is basically looking at needing to include at least one more level that's even bigger than the others - a full MIP chain only adds about a third over the base level, but one level above a 2048² base is a 4096² map with four times the texels. Conceivably, not everyone is going to have extra disc space to spend on such assets, although most games currently still have space to spare on a DVD-9, so I can't quantify how much of a problem that might be.

From my perspective, unless they can deliver better than 5870 performance at a reasonable price, their image quality improvements aren't going to be enough to seal the deal. If they can meet those two factors however, then yes, image quality needs to be factored in to some degree.
FITCamaro - Monday, January 18, 2010
It will be fast. But from the size of it, it's going to be expensive as hell. I question how much success NVIDIA will have with yet another fast but hot and expensive card, especially with the entire world in recession.
beginner99 - Monday, January 18, 2010
Sounds nice, but I doubt it's useful yet. DX11 will probably take at least 1-2 years to take off, which is when the geometry power could become useful - meaning they could have easily waited a generation longer. Power consumption will probably be the deciding factor, and the new Radeons do rather well in that area.
But anyway, I'm gonna wait. Unless it is complete crap, it will at least help push Radeon prices south, even if you don't buy one.
just4U - Monday, January 18, 2010
On AMD pricing: it seems pretty fair for the 57xx line - cheaper overall than the 4850 and 4870 at their launches, with similar performance plus the added DX11 features. It would be nice to see the 5850 and 5870 priced about one third cheaper... but here in Canada the cards are always sold out or in very limited stock, so I guess there is some justification for the higher pricing.
I still can't get a 275 cheap either. It's priced 30-40% higher than the 4870.
The only card(s) I've purchased so far are the 5750s, as I feel the last-gen products are still viable at their current pricing... and I buy a fair number of video cards (20-100 per year).
solgae1784 - Monday, January 18, 2010
Let's just hope this GF100 doesn't become another disaster like the GeForce FX was.
setzer - Monday, January 18, 2010
While on paper these specs look great for the high-end market (>500€ cards), how much will the mainstream market lose - as in the cards that sell in the 150-300€ bracket, which coincidentally are the cards most people tend to buy? NVIDIA tends to scale down the specifications, but by how much? And what is the interest of the new IQ improvements if you can only use them on high-end cards because the mainstream cards can't handle them?

The 5-series Radeons are similar: the new generation only has appeal if you go for the 58xx cards, which are overpriced. If you already have a 4850 you can hold off from buying a new card for at least one extra year. Take the 5670: it has DX11 support but not the horsepower to use it effectively, neutering the card from the start as far as DX11 goes.
So even if NVIDIA goes with a March launch for GF100, I'm guessing it will not be until June or July that we see a GeForce 10600GT (or GX600GT - pun on the ATI 10000 series :P), which will just have the effect of keeping Radeon prices where they are (high) and not where they should be in terms of performance (roughly on par with the HD 4000 series).
Beno - Monday, January 18, 2010
Page 2 isn't working.
Zool - Monday, January 18, 2010
It will be interesting to see how much of the geometry performance turns out to be true in the end, after all this hype. I wouldn't put my hand in the fire for NVIDIA's PR slides and in-house demos, like the PR graph showing a 600% tessellation performance increase over the ATI card. It will surely have some dark sides too, like everything does - nothing is free. Until real benchmarks arrive, you can't put too much trust in PR graphs these days.
haplo602 - Monday, January 18, 2010
This looks similar to what the Riva TNT used to be: NVIDIA was promising everything, including a cure for cancer, and it turned out to be barely better than 3dfx at the time because of clock/power/heat problems.
Seems Fermi will be a big bang in the workstation/HPC markets. Gaming, not so much.
DominionSeraph - Monday, January 18, 2010
Anyone with at least half a brain had a TNT. Tech noobs saw "Voodoo" and went with the gimped Banshee, and those with money to burn threw in dual Voodoo 2s.
How does that at all compare to Fermi, whose performance will almost certainly not justify its price? The 5870's doesn't, not with the 5850 in town. Such is the nature of the bleeding edge.
Do you just type things out at random?