Tessellation: Because The GS Isn't Fast Enough
Microsoft and AMD tend to get the most excited about tessellation whenever the topic of DX11 comes up. AMD jumped on the tessellation bandwagon long ago, and perhaps it does make sense for consoles like the Xbox 360. Adding fixed function hardware to quickly and efficiently handle a task that reduces memory footprint has major advantages in the living room. We still aren't sold on the need for a tessellator on the desktop, but who's to argue with progress?
Or is it really progress? The tessellator itself is fixed function rather than programmable. Sure, the input to and output of the tessellator can be manipulated a bit through the Hull Shader and Domain Shader, but the heart of the beast is just not that flexible. The Geometry Shader is the programmable block in the pipeline that is capable of tessellation as well as much more, but it simply doesn't have the throughput to do tessellation on any useful scale. So while almost everything else in the rendering pipe has been moving toward programmability, here we have something of a step backward. But why?
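To make that division of labor concrete, here is a minimal sketch of the programmable side of the new pipeline: a Shader Model 5.0 hull shader whose patch constant function hands tessellation factors to the fixed function tessellator. The structure names, the CameraPos constant, and the distance-based LOD heuristic are our own illustrative assumptions, not code from any shipping title.

struct ControlPoint
{
    float3 pos : POSITION;
};

struct PatchConstants
{
    float edges[3] : SV_TessFactor;        // subdivision level per triangle edge
    float inside   : SV_InsideTessFactor;  // subdivision level for the interior
};

cbuffer PerFrame : register(b0)
{
    float3 CameraPos; // assumed constant: camera position in world space
};

// Runs once per patch; this is the programmable "input manipulation"
// that sits in front of the fixed function tessellator.
PatchConstants ConstantsHS(InputPatch<ControlPoint, 3> patch)
{
    PatchConstants pc;
    // Crude distance-based LOD: nearer patches get more triangles.
    // A real implementation would derive per-edge factors from edge
    // midpoints so neighboring patches never disagree (and crack).
    float3 center = (patch[0].pos + patch[1].pos + patch[2].pos) / 3.0f;
    float  tess   = clamp(64.0f / (1.0f + distance(center, CameraPos)), 1.0f, 64.0f);
    pc.edges[0] = pc.edges[1] = pc.edges[2] = tess;
    pc.inside = tess;
    return pc;
}

[domain("tri")]
[partitioning("fractional_odd")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("ConstantsHS")]
ControlPoint MainHS(InputPatch<ControlPoint, 3> patch,
                    uint i : SV_OutputControlPointID)
{
    return patch[i]; // pass the coarse control points through unchanged
}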
The argument between fixed function and programmable hardware is always one of performance versus flexibility. In the beginning, fixed function was necessary to get the desired performance. As time went on, it became clear that piling more fixed function hardware into graphics chips just wasn't feasible: the transistors devoted to specialized hardware simply go unused if developers don't program to take advantage of them. This pushed architectures toward an expanding pool of compute resources that can be shared among many different tasks. In the general case, anyway. But that doesn't mean fixed function hardware doesn't have its place.
We still have the problem that all the transistors put into the tessellator are worthless unless developers take advantage of the hardware. But the reason it makes sense here is that the ROI (return on investment: what you get for what you put in) on those transistors is huge if developers do use them: it is far easier to get massive tessellation performance out of a fixed function tessellator than to pour enough resources into the Geometry Shader for it to reach the same performance programmatically. This doesn't mean we'll see a renaissance of fixed function blocks in our graphics hardware; just that significantly advanced features may still require sacrificing programmability in favor of early adoption. The majority of tasks will continue to be enabled in a flexible, programmable way, and in the future we may see more flexibility introduced into the tessellator until it becomes fully programmable as well (or ends up merged into some future version of the Geometry Shader).
Now don't let this technical assessment of fixed function tessellation make you think we aren't interested in reaping its benefits. Currently, artists need to create different versions of their objects for different LODs (Levels of Detail: reducing or increasing complexity as the object moves farther from or nearer to the viewer), and geometry simulation through texturing at each LOD needs to be done by pixel shaders. This requires extra work from both artists and programmers and costs a good bit in terms of performance. There are also some effects that can only be done with more geometry.
Tessellation is a great way to get that geometry in there for more detail, shadowing, and smooth edges. Dense geometry also enables really cool displacement mapping effects. Currently, much geometry is simulated through textures, using techniques like bump mapping or parallax occlusion mapping. Even with dense geometry we will still want large normal maps for our lighting algorithms, but we won't need to work so hard to make cracks, bumps, ridges, and small detail geometry appear to be there when it isn't, because we can just tessellate and displace in a single pass through the pipeline. This is fast and efficient, can produce very detailed effects, and frees up pixel shader resources for other uses. With tessellation, artists can create one subdivision surface that gets a dynamic LOD free of charge; a simple hull shader and a displacement map applied in the domain shader will save a lot of work, increase quality, and improve performance quite a bit, as the sketch below illustrates.
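Here is a hedged sketch of what that domain shader step might look like in Shader Model 5.0. The DisplacementMap texture, DisplacementScale constant, and ViewProj matrix are illustrative assumptions, and the control points here carry a normal and UV on top of the position used in the earlier hull shader sketch.

Texture2D    DisplacementMap : register(t0); // assumed height map
SamplerState LinearSampler   : register(s0);

cbuffer PerObject : register(b1)
{
    float4x4 ViewProj;          // assumed world-to-clip transform
    float    DisplacementScale; // world-space height of a full-white texel
};

struct ControlPoint
{
    float3 pos    : POSITION;
    float3 normal : NORMAL;
    float2 uv     : TEXCOORD0;
};

struct PatchConstants
{
    float edges[3] : SV_TessFactor;
    float inside   : SV_InsideTessFactor;
};

// Runs once per vertex the fixed function tessellator generates. A real
// shader would also output the normal and UV for the pixel shader; we
// return only the position to keep the sketch short.
[domain("tri")]
float4 MainDS(PatchConstants pc,
              float3 bary : SV_DomainLocation,
              const OutputPatch<ControlPoint, 3> patch) : SV_Position
{
    // Interpolate the coarse patch attributes at the new vertex.
    float3 pos    = patch[0].pos * bary.x + patch[1].pos * bary.y + patch[2].pos * bary.z;
    float3 normal = normalize(patch[0].normal * bary.x + patch[1].normal * bary.y + patch[2].normal * bary.z);
    float2 uv     = patch[0].uv * bary.x + patch[1].uv * bary.y + patch[2].uv * bary.z;

    // Push the vertex along its normal by the sampled height: the cracks
    // and ridges become real geometry instead of a pixel shader illusion.
    float height = DisplacementMap.SampleLevel(LinearSampler, uv, 0).r;
    pos += normal * (height * DisplacementScale);

    return mul(float4(pos, 1.0f), ViewProj);
}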
If developers adopt tessellation, we could see some very cool things, and with the move to DX11 class hardware both NVIDIA and AMD will be making parts with tessellation capability. But we may not see developers start using tessellation (or the compute shader, for that matter) right away. DirectX 11 will run on down-level hardware, and at its release there will already be a huge number of cards on the market capable of running a subset of DX11, which brings with it a better, more refined programming language in the new version of HLSL along with seamless parallelization optimizations. So we will very likely see the first DX11 games implement only features that can run completely on DX10 hardware.
Of course, at that point developers can be fully confident in exploiting all the aspects of DX10 hardware, which they still aren't completely taking advantage of. Many people still want and need a DX9 path because of Vista's failure, which means DX10 code tends to be more or less an enhanced DX9 path rather than something fundamentally different. So when DirectX 11 finally debuts, we will at last start to see what developers could really do with DX10.
Certainly some developers will experiment with tessellation, but at first this will probably be limited to simple amplification to get rid of the jagged edges around curved surfaces. It will take time for the truly advanced tessellation techniques everyone is excited about to come to fruition.
109 Comments
epyon96 - Sunday, February 1, 2009 - link
That's very insightful. Can you go into more detail?

I am confused because there appeared to be significant differences between DX9c and DX9b, since NVIDIA made it sound like the difference was like the difference between DX8.1/2/3 and DX8.4, which did seem very significant if memory serves me right.
The difference between 8.4 and 9 seemed minimal in quality of the final output.
GourdFreeMan - Monday, February 2, 2009 - link
The guidelines I spoke of were mentioned on the MSDN Forums circa 2003 regarding how changes to Direct3D would affect DirectX versioning, but seem to have been abandoned in favor of the bimonthly SDK updates following the DX 9.0c release. Bimonthly updates led to faster bug fixes, which in prior versions of DirectX sometimes required a letter update.

If you are interested in the exact technical changes between DirectX versions, I suggest downloading the old SDK versions prior to the move to bimonthly updates and looking at the Changes section of the documentation.
Regarding the move between DirectX 8 and DirectX 9, Shader Model 2.0 was introduced making way for games such as Far Cry (admittedly Far Cry was a DX 9.0b game, but the changes from 9.0 to 9.0b mainly involved SM 2.0a and SM 2.0b which for Far Cry meant enhanced performance on ATi and nVIDIA cards). Far Cry would later be patched to support DX 9.0c and SM 3.0, adding features like HDR, but I would argue that the unpatched game still looked considerably better than DX8 titles.
(Incidentally there is no DirectX 8.3 and 8.4 -- there was 8.1a and 8.1b in the progression instead).
epyon96 - Saturday, January 31, 2009 - link
I wish the article had more background on what you just hypothesized (obviously with some substantiated facts) instead of the unnecessary Vista bashing. It would satisfy an actual curiosity.

I remember that's one of the reasons why the in-depth analysis of the development cycle of the RV770 was so well liked.
gamerk2 - Saturday, January 31, 2009 - link
The issue with DX11 is this: you need to supply a DX10 codepath for those who won't update GFX cards (you can't release a game no one has hardware for), but you would also need a DX9 codepath for XP.

Why would anyone release a game with three separate graphics code paths? It's for that reason I see slow uptake of DX11, as long as XP holds 15-20% market share.
ltcommanderdata - Saturday, January 31, 2009 - link
If I remember those OS market share reports correctly, as of the end of last year Windows XP had about 65% market share, Vista had about 20% after 2 years, and Mac was nearing 10%. Even if Windows 7 is a roaring success, XP just has too much built-up market share to disappear overnight, so XP and DX9 compatibility will be required for at least another 2 years. The other thing that works against Windows 7 is that even if it isn't released until next year, its introduction looks to be right in the middle of this economic recession, since things probably won't really pick up until late 2010 or 2011. When the economy does pick up again, there will be huge demand as companies finally switch from XP, which would be 10 years old by then, but the first year of Windows 7 sales will probably be slow.

bobvodka - Saturday, January 31, 2009 - link
Well, to be fair, you don't have to have a DX10 path and a DX11 path as such. A few important features work on DX10 cards anyway, such as the multi-threaded rendering stuff, so you need a DX11 and a DX9 path at most; you just have to do some feature detection to find out if you are on a DX11, DX10 or DX10.1 card.

Still a slight pain, but not as much as 3 real code paths.
DarkMadMax - Saturday, January 31, 2009 - link
And the main reason is consoles. There are practically no PC exclusives anymore among large budget titles (e.g. the ones that concentrate on graphics). So all games target Xbox 360 hardware (if they don't, they are PS3 exclusives). So until a new generation of consoles appears there will be no progress in graphics. Period.

haukionkannel - Saturday, January 31, 2009 - link
To me, this article mostly talks about the new features of DX11 and how some fundamental features can also benefit DX10 and DX10.1 hardware... To me it seems that the Vista part was only there to explain why there are not any real DX10 games now, even though the features are there. I didn't read it as Vista hate like many people here seem to.
All in all it was a very good article about how DX11 can allow the things DX10 promised to flourish better this time.
scruffypup - Saturday, January 31, 2009 - link
Though this article was supposed to be about DirectX 11, Derek's bias and opinion about Vista overshadowed the subject of the article... This article shows poor writing at its finest... after all, doesn't Writing 101 teach one to make the article about the subject you are writing about and not something else?
Again I say... Derek does a disservice to AnandTech with this bias. If you want to put in your bias toward an unrelated subject, at least clearly show the links (relevance) to your intended subject material and how you come to a conclusion supporting that claim, rather than just spouting off needlessly... for that is what you have done essentially, as it held no relevance to the subject material the way you wrote the article.
chizow - Saturday, January 31, 2009 - link
Article summary:

1) DX11 offers nothing new over DX10; as quoted in the article, it's just a strict superset that builds on and adds features to DX10 capability.
2) Vista and DX10 sucked because no one wanted to use them.
Derek, like many others, I disagree with your assessment of Vista's importance in the overall OS hierarchy. Here's just a quick list:
1) First OS to bring 64-bit support to the mainstream.
2) First OS to offer multi-threaded driver improvements. Look at Rel 180 and 8.12 Hot Fix, where multi-threaded drivers are all the rage.
3) First OS to offer DX10 support. We're finally seeing some of the performance benefits we were promised in DX10 with multi-threaded drivers and improved AA with reading of the multi-sample depth buffer.
4) Much better OS stability compared to XP. It wasn't always the case, but contrary to your article, most of the problems were fixed in July/August with the various video hot fixes (Ryan Smith can probably confirm or deny this).
I think Win7 just emphasizes how good Vista is, and how many light years ahead both are compared to XP. You could say Win7 is like Mojave SE, not Vista SE, as you can clearly see all the Vista-haters who are running Win7 glowing about all the features and stability they've missed out on for at least a year (since Vista SP1).