Wrapping It Up
So there you have it. Triple buffering gives you all the benefits of double buffering with no vsync, plus all the benefits of enabling vsync: we get smooth, complete frames with no tearing. Frames are swapped to the front buffer only on refresh, yet at the start of output to the monitor they carry just as little input lag as double buffering with no vsync. Even though "performance" doesn't always get reported correctly with triple buffering, the graphics hardware is working just as hard as it does with double buffering and no vsync, and the end user gets all the benefit without the potential downside. Triple buffering does take up a handful of extra memory on the graphics hardware, but on modern hardware this is not a significant issue.
Just to recap, here is how the three frames we looked at rendering in our previous example stack up side by side.
[Frame timing diagrams: Triple Buffering, Double Buffering, and Double Buffering with vsync]
We've presented both the qualitative and the quantitative argument in support of triple buffering. So, now the question is: does this data change things? Are people going to start looking for that triple buffering option more often than they would have without this information?
The major difference in the technique we've described here is the ability to drop frames when they become outdated. Render ahead forces older frames to be displayed. Queues can help with smoothness and stuttering, since a few very quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is lag: the more frames in the queue, the longer it takes to empty the queue and the older the frames are when they finally reach the screen.
In order to maintain smoothness and reduce lag, it is possible to hold on to a limited number of frames in case they are needed but to drop them if they are not (if they get too old). This requires a little more intelligent management of already rendered frames and goes a bit beyond the scope of this article.
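To make the distinction concrete, here is a rough sketch in C of the drop-based triple buffering we've been describing. The names (render_frame, on_vblank, the buffer bookkeeping) are hypothetical placeholders rather than any real graphics API, and thread synchronization is omitted: the point is simply that the renderer keeps drawing into whichever back buffer is not on screen and not holding the freshest frame, and at each refresh the display flips to the freshest completed frame, so anything older is silently dropped.

```c
/* Rough sketch of triple buffering with frame dropping, as described in
 * this article. render_frame() and the buffer bookkeeping are hypothetical
 * placeholders, not any real graphics API; synchronization between the
 * render loop and the vblank handler is omitted for clarity. */

enum { NUM_BUFFERS = 3 };

extern void render_frame(int buffer_index);  /* draws the newest game state */

static int front = 0;        /* buffer currently being scanned out          */
static int completed = -1;   /* most recently finished back buffer, if any  */

/* Pick a back buffer that is neither on screen nor holding the freshest frame. */
static int next_free_back_buffer(void)
{
    for (int i = 0; i < NUM_BUFFERS; i++)
        if (i != front && i != completed)
            return i;
    return (front + 1) % NUM_BUFFERS;  /* never reached with three buffers */
}

/* Render side: never waits on the display. It keeps drawing into the buffers
 * that are not on screen, overwriting any stale frame that was never shown. */
void render_loop(void)
{
    for (;;) {
        int target = next_free_back_buffer();
        render_frame(target);
        completed = target;    /* this is now the freshest whole frame */
    }
}

/* Display side: at every vertical refresh, flip to the freshest completed
 * frame. Older completed frames are simply dropped, never displayed. */
void on_vblank(void)
{
    if (completed >= 0 && completed != front)
        front = completed;     /* whole frames only: no tearing, minimal lag */
}
```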
Some game developers implement a short render ahead queue and call it triple buffering (because it uses three total buffers). They certainly cannot be faulted for this, as there has been a lot of confusion on the subject and under certain circumstances this setup will perform the same as triple buffering as we have described it (but definitely not when framerate is higher than refresh rate).
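For comparison, here is a similarly hedged sketch of that render-ahead flip queue built from three total buffers (one on screen plus a FIFO of up to two pending flips). Again the names are hypothetical and synchronization is omitted; the structural difference from the sketch above is that completed frames queue up instead of overwriting each other, which is exactly where the extra lag comes from when framerate exceeds refresh rate.

```c
/* Rough sketch of a render-ahead flip queue built from three total buffers.
 * All names are hypothetical and synchronization is omitted; this is not a
 * real swap-chain API, just the shape of the technique. */

enum { NUM_BUFFERS = 3, MAX_PENDING = NUM_BUFFERS - 1 };

extern void render_frame(int buffer_index);

static int front = 0;                 /* buffer currently being scanned out   */
static int pending[MAX_PENDING];      /* completed frames waiting to be shown */
static int pending_count = 0;

/* Find a buffer that is neither on screen nor waiting in the queue. */
static int free_buffer(void)
{
    for (int i = 0; i < NUM_BUFFERS; i++) {
        int busy = (i == front);
        for (int j = 0; j < pending_count; j++)
            if (pending[j] == i)
                busy = 1;
        if (!busy)
            return i;
    }
    return -1;  /* queue is full: every buffer is spoken for */
}

/* Render side: unlike the drop-based approach above, the renderer stalls
 * whenever the queue is full and must wait for a refresh to drain it. */
void render_loop(void)
{
    for (;;) {
        int target;
        while ((target = free_buffer()) < 0)
            ;                          /* wait for a vblank to free a buffer */
        render_frame(target);
        pending[pending_count++] = target;
    }
}

/* Display side: frames leave the queue strictly in the order they were
 * rendered. Nothing is ever dropped, so the frame on screen can be up to
 * MAX_PENDING refresh intervals older than the newest completed frame. */
void on_vblank(void)
{
    if (pending_count > 0) {
        front = pending[0];            /* oldest pending frame first */
        for (int j = 1; j < pending_count; j++)
            pending[j - 1] = pending[j];
        pending_count--;
    }
}
```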
Both techniques allow the graphics card to continue doing work while waiting for a vertical refresh once one frame is already completed. With plain double buffering (no render queue) and vertical sync enabled, once a frame is completed nothing more can be rendered, which can cause stalling and degrade actual performance.
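As a quick illustration of that stall, here is a minimal sketch of plain double buffering with vsync, again with hypothetical helper names rather than a real API: with only one back buffer, the renderer finishes a frame and then has nothing to do but sit in wait_for_vblank() until the flip.

```c
/* Minimal double buffering with vsync: one front buffer, one back buffer.
 * render_frame() and wait_for_vblank() are hypothetical placeholders. Once
 * the back buffer is finished there is nowhere left to draw, so the GPU
 * idles until the flip at the next refresh. */

extern void render_frame(int buffer_index);
extern void wait_for_vblank(void);   /* blocks until the next vertical refresh */

void render_loop(void)
{
    int front = 0, back = 1;
    for (;;) {
        render_frame(back);          /* draw the next frame                   */
        wait_for_vblank();           /* the stall: no third buffer to work on */
        int tmp = front;             /* flip at the refresh                   */
        front = back;
        back = tmp;
    }
}
```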
When vsync is not enabled, nothing more than double buffering is needed for performance, but a render queue can still be used to smooth framerate, at the cost of keeping a few older frames around. This can keep instantaneous framerate from dipping in some cases, but it will (even with double buffering and vsync disabled) add lag and input latency. Even without vsync, render ahead is required for multi-GPU systems to work efficiently.
So, this article is as much for gamers as it is for developers. If you are implementing render ahead (aka a flip queue), please don't call it "triple buffering"; that name should be reserved for the technique we've described here in order to cut down on the confusion. There are games out there that list triple buffering as an option when the technique actually used is a short render queue. We realize this causes confusion, and we very much hope that this article and discussion help to alleviate the problem.
Comments
rna - Sunday, June 28, 2009
From my own fiddling around: Left 4 Dead with "V-sync with Triple Buffering" = unbearable input lag.
Doom 3 with Triple Buffering forced on in the nVidia control panel and v-sync turned on feels as responsive as with v-sync disabled.
DerekWilson - Wednesday, July 1, 2009
I still haven't confirmed with the developer, but I now think the "triple buffering" that L4D uses is actually a flip queue with 1 frame render ahead (two back buffers; three total buffers). Doom 3 with triple buffering forced in the NVIDIA control panel with vsync will work exactly as described in this article ...
To double check, I asked NVIDIA for specifics -- triple buffering as forced in their control panel (which only works for OpenGL games) performs exactly the way this article describes that it should.
DerekWilson - Sunday, June 28, 2009
I will do my best to develop a quantitative input lag test. If I can achieve that goal then I will test this and other reported issues.
Dospac - Sunday, June 28, 2009
It may be due to Crossfire or ATI's drivers, but enabling vsync and forcing triple buffering with D3DOverrider wrecks the input responsiveness on my system (Vista64 and 3870X2). I used to always play with vsync and triple buffering when I was on a 120Hz CRT. With a 60Hz LCD, shooters are unplayable. This article is giving inaccurate advice when it states that input lag is not increased.
DerekWilson - Sunday, June 28, 2009
Multi-GPU options and triple buffering do not play nice together at this point in time.
bobjones32 - Sunday, June 28, 2009
I just fired up Left 4 Dead and tested the various vsync options:
-vsync disabled
-vsync enabled, double buffering
-vsync enabled, triple buffering
-vsync disabled in game, forced through D3DOverrider with triple buffering
My observations (note - I can retain a perfect 60fps on my 60Hz monitor):
1) triple-buffered vsync still had a noticeable amount of mouse lag
2) double-buffered vsync seemed to have *less* lag, oddly enough
3) There was some odd hitching that took place every second with vsync on, regardless of triple buffering settings.
Oddly enough, mouse lag in Half-Life 2: Episode Two (with either double buffering or triple buffering) was much less noticeable, but that hitching every second was still there.
Derek - any idea why this might be the case?
Scalarscience - Sunday, June 28, 2009
Are you using Crossfire, SLI, or a dual-GPU card?
bobjones32 - Sunday, June 28, 2009
No, single-card 4870 setup.
DerekWilson - Wednesday, July 1, 2009
I have no idea why you would see the hitching issue. I do believe my guess about how L4D does it was wrong, though: I now think they use a flip queue with three total buffers rather than the technique described in this article.
Ruud van Gaal - Friday, May 25, 2012
One thing that produced a 1-second hitch in my own game was exposure calculation. Mipmapping a single frame down to 1 pixel (on the graphics card) actually took quite a bit of time and was noticeable as a dip in the framerate. Turning off this auto-exposure mipmapping solved it (for me).