Apple's Mac Pro: A Discussion of Specifications
by Anand Lal Shimpi on August 9, 2006 3:54 PM EST
Understanding FB-DIMMs
Since Apple built the Mac Pro out of Intel workstation components, it unfortunately has to use more expensive Intel workstation memory. In other words, cheap unbuffered DDR2 isn't an option; it's time to welcome ECC-enabled Fully Buffered DIMM (FBD) memory to your Mac.
Years ago, Intel saw two problems happening with most mainstream memory technologies: 1) as we pushed for higher-speed memory, the number of memory slots per channel went down, and 2) the rest of the world was going serial (USB, SATA and, more recently, HyperTransport, PCI Express, etc.), yet we were still using fairly antiquated parallel memory buses.
The number of memory slots per channel isn't really an issue on the desktop; currently, with unbuffered DDR2-800 we're limited to two slots per 64-bit channel, giving us a total of four slots on a motherboard with a dual channel memory controller. With four slots, just about any desktop user's needs can be met with the right DRAM density. It's in the high-end workstation and server space that this limitation becomes an issue, as memory capacity can be far more important, often requiring 8, 16, 32 or more memory sockets on a single motherboard. At the same time, memory bandwidth is also important, as these workstations and servers will most likely be built around multi-socket, multi-core architectures with high memory bandwidth demands, so simply limiting memory frequency in order to support more memory isn't an ideal solution. You could always add more channels; however, parallel interfaces by nature require more signaling pins than faster serial buses, so adding four or eight channels of DDR2 to get around the DIMMs-per-channel limitation isn't exactly easy.
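To put rough numbers on that, peak bandwidth for a parallel memory interface is simply bus width times transfer rate times channel count. Here's a minimal sketch of the arithmetic in Python (the DDR2-800 figures are the standard ones; the helper function is ours for illustration):

```python
# Peak bandwidth of a parallel DDR interface:
# bytes/sec = (bus width in bytes) * (transfers per second) * (channels)
def peak_bandwidth_gb_s(bus_width_bits, mega_transfers, channels):
    return bus_width_bits / 8 * mega_transfers * 1e6 * channels / 1e9

# Dual-channel unbuffered DDR2-800: two 64-bit channels at 800 MT/s
print(peak_bandwidth_gb_s(64, 800, 2))  # 12.8 (GB/s)
```

Doubling or quadrupling that by adding channels is easy on paper; it's the extra signaling pins per 64-bit channel that make it impractical.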
Intel's first solution was to totally revamp PC memory technology. Instead of going down the path of DDR and eventually DDR2, Intel wanted to move the market to a serial memory technology: RDRAM. RDRAM offered significantly narrower buses (16 bits per channel vs. 64 bits per channel for DDR), much higher bandwidth per pin (at the time, a 64-bit wide RDRAM memory controller would offer 6.4GB/s of memory bandwidth, compared to a 64-bit DDR266 interface which could only offer 2.1GB/s) and, of course, the ease-of-layout benefits that come with a narrow serial bus.
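The bandwidth-per-pin advantage is easy to see from the figures quoted above; a quick sketch (Python, using only the numbers in the paragraph):

```python
# Bandwidth per data pin: 64 bits of RDRAM (four 16-bit PC800 channels)
# at 6.4GB/s vs. a 64-bit DDR266 interface at 2.1GB/s.
rdram_gb_s, ddr266_gb_s, data_pins = 6.4, 2.1, 64
print(rdram_gb_s / data_pins * 1000)   # ~100 MB/s per data pin
print(ddr266_gb_s / data_pins * 1000)  # ~33 MB/s per data pin
```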
Unfortunately, RDRAM offered no tangible performance increase, as the demands of processors at the time were nowhere near what the high bandwidth RDRAM solutions could deliver. To make matters worse, RDRAM implementations were plagued by higher latency than their SDRAM and DDR SDRAM counterparts; with no use for the added bandwidth and with higher latency, RDRAM systems were no faster, and often slower, than their SDR/DDR counterparts. The final nail in the RDRAM coffin on the PC was pricing: at the time you could either spend $1000 on a 128MB stick of RDRAM, or spend $100 on a stick of equally performing PC133 SDRAM. The market spoke and RDRAM went the way of the dodo.
Intel quietly shied away from attempting to change the natural evolution of memory technologies on the desktop for a while, eventually transitioning away from RDRAM entirely, even after its price had dropped significantly, and embracing DDR and more recently DDR2 as the memory standards supported by its chipsets. Over the past couple of years, however, Intel got back into the game of shaping the memory market of the future with the idea of Fully Buffered DIMMs.
The approach is quite simple in theory: what caused RDRAM to fail was the high cost of using a non-mass-produced memory device, so why not develop a serial memory interface that uses mass-produced commodity DRAMs such as DDR and DDR2? In a nutshell, that's what FB-DIMMs are: regular DDR2 chips on a module, paired with a special chip that communicates over a serial bus with the memory controller.
The memory controller in the system no longer has a wide parallel interface to the memory modules; instead, it has a narrow 69-pin interface to a device known as an Advanced Memory Buffer (AMB) on the first FB-DIMM in each channel. The memory controller sends all memory requests to the AMB on the first FB-DIMM in each channel, and the AMBs take care of the rest. By fully buffering all requests (data, command and address), the memory controller no longer sees a load that increases significantly with each additional DIMM, so the number of memory modules supported per channel goes up substantially. The FB-DIMM spec says that each channel can support up to 8 FB-DIMMs, although current Intel chipsets can only address 4 FB-DIMMs per channel. And with a significantly lower pin count, you can cram more channels onto your chipset, which is why the Intel 5000 series of chipsets features four FBD channels.
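A rough sense of the pin savings, sketched in Python: the 69-pin channel figure comes from the text above, while the ~240 pins per conventional DDR2 channel is our own assumption (based on the 240-pin DIMM connector, used here only as a crude proxy for controller-side cost):

```python
# Chipset pin budget for four memory channels (rough sketch).
# 69 pins per FBD channel is from the text; ~240 pins per DDR2
# channel is an assumption, not an exact controller pin count.
fbd_channel_pins, ddr2_channel_pins = 69, 240
print(4 * fbd_channel_pins)   # 276 pins for four FBD channels
print(4 * ddr2_channel_pins)  # ~960 pins for four DDR2 channels
```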
Bandwidth is a little more difficult to determine with FBD than it is with conventional DDR or DDR2 memory buses. During Steve Jobs' keynote, he put up a slide that listed the Mac Pro as having a 256-bit wide DDR2-667 memory controller with 21.3GB/s of memory bandwidth. Unfortunately, that claim isn't totally honest, as no 256-bit wide interface exists between the memory controller and the FB-DIMMs. The memory controller in the Intel 5000X MCH communicates directly with the first AMB it finds on each channel, and that interface is actually only 24 bits wide per channel, for a total bus width of 96 bits (24 bits per channel x 4 channels). The bandwidth part of the equation is a bit more complicated, but we'll get to that in a moment.
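For what it's worth, the keynote's 21.3GB/s figure is just the arithmetic you'd do if the four DDR2-667 channels behind the AMBs really were one wide parallel interface; a quick sanity check in Python:

```python
# Where the keynote's 21.3GB/s comes from: treat the memory as one
# 256-bit (32-byte) wide DDR2-667 interface.
bus_bytes = 256 / 8           # 32 bytes per transfer
transfers_per_sec = 667e6     # DDR2-667: ~667 million transfers/s
print(bus_bytes * transfers_per_sec / 1e9)  # ~21.3 (GB/s)
```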
Below we've got the anatomy of an AMB chip:
The AMB has two major roles: communicating with the chipset's memory controller (or with other AMBs), and communicating with the memory devices on its own module.
When a memory request is made, the first AMB in the chain figures out whether the request is a read/write to its own module or to another module. If it's the former, the AMB parallelizes the request and sends it off to the DDR2 chips on the module; if the request isn't for this specific module, it passes the request on to the next AMB, and the process repeats.
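To make the daisy-chain behavior concrete, here's a toy model of that routing logic in Python. This is purely illustrative (the class, fields and request format are our own simplification, not the real FBD protocol):

```python
# Toy sketch of AMBs in one FBD channel forwarding a request down
# the daisy chain until it reaches the targeted module.
class AMB:
    def __init__(self, module_id, next_amb=None):
        self.module_id = module_id  # which FB-DIMM this AMB sits on
        self.next_amb = next_amb    # next buffer down the chain, if any

    def handle(self, request):
        if request["module"] == self.module_id:
            # Request targets this module: "parallelize" it out to the
            # DDR2 chips on the DIMM (modeled here as a simple return).
            return f"module {self.module_id} serviced {request['op']}"
        if self.next_amb is None:
            raise ValueError("request targets a module not on this channel")
        # Not ours: forward the buffered request to the next AMB.
        return self.next_amb.handle(request)

# One channel populated with three FB-DIMMs chained together:
channel = AMB(0, AMB(1, AMB(2)))
print(channel.handle({"module": 2, "op": "read"}))
```

The memory controller only ever talks to the first AMB; each hop down the chain adds a little latency, which is part of the FBD trade-off.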
Comments
mesyn191 - Wednesday, August 9, 2006
They already said it was a supply and not a performance issue that made them go with Intel...

hmurchison - Thursday, August 10, 2006
IBM wanting more money to develop the PPC 970 didn't help either. Moving to Intel was good.

michael2k - Wednesday, August 9, 2006
Why not? The only thing closed about the Mac Pro is the motherboard; every other component can be replaced (CPU via socket, memory via sticks, video cards via PCIe, HDD via SATA, ODD via IDE), and the thing boots Mac OS X, Windows XP, and Linux.