Socket-A Chipset Comparison - April 2001
by Anand Lal Shimpi on April 5, 2001 2:16 AM EST - Posted in CPUs
Unreal Tournament is a Direct3D (DX7) first person shooter that is generally not fill rate limited on most graphics accelerators. This makes it a perfect benchmark for our purposes, since we aren't being held back by our test bed's graphics card, in this case a GeForce2 Ultra.
The fastest platform is clearly the AMD 760, using the 133MHz DDR FSB (effectively 266MHz) with PC2100 DDR SDRAM. This configuration runs the memory and FSB clocks synchronously with one another, which helps reduce latency according to what we've seen from our cachemem benchmarks.
Coming in a close second place is the KT133A chipset running at the same 133/133 setting, except this time with PC133 SDRAM (since the KT133A doesn't support DDR memory). Unreal Tournament is not memory bandwidth intensive enough to truly take advantage of DDR SDRAM's edge over PC133 SDRAM; it is more sensitive to memory latency.
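As a rough sanity check, peak theoretical bandwidth scales with the memory clock, the number of transfers per clock, and the 64-bit bus width. The sketch below (using nominal, rounded spec figures) shows why PC2100 DDR has roughly double the peak bandwidth of PC133:

```python
# Peak theoretical memory bandwidth: clock (MHz) x transfers per clock x bus width (bytes).
# Figures are nominal spec values; MB here means 10^6 bytes.

def peak_bandwidth_mb_s(clock_mhz, transfers_per_clock, bus_width_bytes=8):
    """Peak bandwidth in MB/s for a 64-bit (8-byte) memory bus."""
    return clock_mhz * transfers_per_clock * bus_width_bytes

pc133 = peak_bandwidth_mb_s(133, 1)   # PC133 SDRAM: one transfer per clock
pc2100 = peak_bandwidth_mb_s(133, 2)  # PC2100 DDR SDRAM: two transfers per clock

print(pc133)   # 1064
print(pc2100)  # 2128
```

PC2100 gets its name from that ~2.1GB/s peak figure; the catch, as the UT results show, is that peak bandwidth says nothing about latency.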
It is this sensitivity to latency rather than memory bandwidth that allows the ALi MAGiK1 with PC2100 DDR SDRAM to actually be slightly slower than (realistically, about the same speed as) the same setup with PC133 SDRAM.
Another real-world example of the incredible latency penalty you'll pay when running your memory and front side buses out of sync is the comparison between the MAGiK1 running at 100/100 and 100/133. The latter configuration uses PC133 SDRAM, offering 33% more peak memory bandwidth, yet it is actually 8% slower than the 100/100 setup with PC100 SDRAM. Given that UT is more a test of latency than of bandwidth, this actually makes a great deal of sense.
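The 33% figure is just the ratio of the two memory clocks on the same 64-bit, single-data-rate bus; a one-liner makes the arithmetic explicit:

```python
def extra_bandwidth_pct(fast_mhz, slow_mhz):
    """Percentage of extra peak bandwidth from a faster memory clock,
    assuming the same bus width and transfers per clock."""
    return (fast_mhz / slow_mhz - 1) * 100

# PC133 vs PC100 on the MAGiK1
print(round(extra_bandwidth_pct(133, 100)))  # 33
```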
The KT133A running at 100/133 (FSB/memory bus) is identical to the majority of KT133 based systems currently out there, and as you can tell from its fourth place standing in the chart above, the performance this year-old solution offers isn't shabby. It performs at around 94% of the PC2100 equipped AMD 760 solution, a gap that isn't large enough to warrant an upgrade for current KT133 owners.
Realistically speaking, unless you're a truly hardcore gamer, you're not going to want to play at 640x480x16; you spent all that money on your graphics card to not only play your games at a fast pace but have them look good as well. Let's take a look at the picture at 1024x768x32:
Things change slightly from a performance perspective at 1024x768x32. From the perspective of what is happening inside your computer, there are now over 2.5x as many pixels on the screen, and since we are running in 32-bit color, twice as much space is needed to store information about every pixel (32 bits of color data vs. 16 bits per pixel). This translates into more work for the graphics card, but very little more (in most cases) for the rest of the system. With the graphics card as the "weak link" here, frame rates drop but the performance standings remain the same, because all of the systems use the same graphics card.
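The pixel arithmetic behind those figures is simple; a quick sketch (MB here meaning 2^20 bytes):

```python
def pixels(width, height):
    """Total pixels per frame at a given resolution."""
    return width * height

low = pixels(640, 480)     # 307,200 pixels
high = pixels(1024, 768)   # 786,432 pixels

print(high / low)          # 2.56 -- just over 2.5x as many pixels
print(low * 2 / 2**20)     # 640x480 at 16bpp: ~0.59 MB per frame
print(high * 4 / 2**20)    # 1024x768 at 32bpp: 3.0 MB per frame
```

So one frame's worth of color data grows roughly fivefold between the two settings, nearly all of which lands on the graphics card rather than the CPU, chipset, or memory bus.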
This is also a small indication of the AGP performance of the various chipsets and configurations tested here. In theory, at 1024x768x32 more data should be transferred over the AGP bus, because less of the video card's local frame buffer memory is available and the graphics chip must store some data in main memory. With 64MB of on-board memory on the GeForce2 Ultra and many of today's newer graphics accelerators, this happens very rarely; however, if AGP performance were an issue, we would have also seen more dramatic changes in the standings here.
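To see why a 64MB card almost never spills over into main memory at this resolution, here is a rough on-card memory budget. The buffer layout (double buffering plus a 32-bit depth buffer) is an assumption for illustration; real drivers vary:

```python
MB = 2**20
width, height = 1024, 768
bytes_per_pixel = 4  # 32-bit color

# Assumed layout: front buffer + back buffer + 32-bit depth buffer.
front = width * height * bytes_per_pixel
back = width * height * bytes_per_pixel
depth = width * height * 4
buffers_mb = (front + back + depth) / MB

print(buffers_mb)  # 9.0
```

Roughly 9MB of a 64MB card, leaving some 55MB for textures and geometry before anything has to fall back to AGP texturing out of main memory.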