NVIDIA nForce Professional Brings Huge I/O to Opteron
by Derek Wilson on January 24, 2005 9:00 AM EST - Posted in
- CPUs
NVIDIA nForce Pro 2200 MCP and 2050 MCP
There will be two different MCPs in the nForce Professional lineup: the nForce Pro 2200 and the nForce Pro 2050. The 2200 is a full-featured MCP; the 2050 doesn't have all the functionality of the 2200, but the two are based on the same silicon. The feature set of the NVIDIA nForce Pro 2200 MCP is just about the same as that of the nForce 4 SLI and is as follows:
- One 1GHz 16x16 HyperTransport link
- 20 PCI Express lanes, configurable across up to 4 physical connections
- Gigabit Ethernet with TCP/IP Offload Engine (TOE)
- 4 SATA 3Gb/s ports
- 2 ATA-133 channels
- RAID and NCQ support (RAID can span SATA and PATA)
- 10 USB 2.0 ports
- PCI 2.3
The 20 PCI Express lanes can be spread across up to 4 controllers at the motherboard vendor's discretion via NVIDIA's internal crossbar connection. For instance, a board based on the 2200 could employ 1 x16 slot and 1 x4 slot, or 1 x16 and 3 x1 slots. It cannot host more than 4 physical connections or 20 total lanes. Technically, NVIDIA could support configurations like x6 that don't match the PCI Express spec. This may prove interesting if vendors decide to bend the rules, but server and workstation products will likely stick to the guidelines.
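To make those partitioning rules concrete, here is a minimal sketch in Python (our own hypothetical helper, not NVIDIA code; the function name and the strict_spec flag are ours) that checks a proposed lane split against the constraints above:

```python
# Hypothetical sketch of the nForce Pro lane-partitioning rules:
# at most 4 physical connections, at most 20 total lanes, and
# (if sticking to the PCI Express spec) only standard link widths.

SPEC_WIDTHS = {1, 2, 4, 8, 12, 16, 32}  # link widths defined by the PCIe spec
MAX_CONNECTIONS = 4
MAX_LANES = 20

def validate_partition(widths, strict_spec=True):
    """Return (ok, reason) for a proposed list of link widths."""
    if len(widths) > MAX_CONNECTIONS:
        return False, f"{len(widths)} connections exceeds {MAX_CONNECTIONS}"
    if sum(widths) > MAX_LANES:
        return False, f"{sum(widths)} lanes exceeds {MAX_LANES}"
    if strict_spec and any(w not in SPEC_WIDTHS for w in widths):
        return False, "non-spec link width (e.g. x6) used"
    return True, "ok"

# The configurations mentioned above:
print(validate_partition([16, 4]))            # 1 x16 + 1 x4 -> ok
print(validate_partition([16, 1, 1, 1]))      # 1 x16 + 3 x1 -> ok
print(validate_partition([8, 6]))             # x6 fails the strict spec check
print(validate_partition([8, 6], strict_spec=False))  # ok if the rules are bent
print(validate_partition([8, 8, 4, 2, 1]))    # 5 connections -> rejected
```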
Maintaining both SATA and PATA support is a good thing, especially with 4 SATA 3Gb/s ports, 2 PATA channels (for 4 devices), and support for RAID on both. Even better, NVIDIA's RAID solution can be applied across a mixed SATA/PATA environment. Our initial investigation of NCQ wasn't all that impressive, but hardware is always improving, and applications in the professional space are a good fit for NCQ's features.
This is the layout of a typical system with the nForce 2200 MCP.
The nForce Pro 2050 MCP, the cut-down version of the 2200 that will serve as an I/O add-on, supports these features:
- One 1GHz 16x16 HyperTransport link
- 20 PCI Express lanes, configurable across up to 4 physical connections
- Gigabit Ethernet with TCP/IP Offload Engine (TOE)
- 4 SATA 3Gb/s ports
Again, the PCI Express controllers and lanes are fully configurable. Dropping a 2050 into a system to add 20 more PCIe lanes, another GbE port, and 4 more SATA channels is an obvious advantage, but there is more.
As far as we can tell from this list, the only new feature introduced since nForce 4 is the TCP/IP Offload Engine on the GbE controller. Current nForce 4 SLI chipsets are capable of all of the other functionality found in the nForce Pro 2200 MCP, although there may be some server-level error reporting built into the core logic of the Professional series that we are not aware of. After all, those extra two million transistors had to go somewhere.
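As a rough illustration of the kind of per-packet work a TOE moves off the host CPU, here is a plain-Python version of the RFC 1071 ones'-complement checksum that TCP requires on every segment (illustrative only; this is not NVIDIA driver code):

```python
# The 16-bit ones'-complement Internet checksum (RFC 1071): one of the
# per-segment chores a TCP/IP Offload Engine computes in hardware so
# that the host CPU doesn't have to.

def internet_checksum(data: bytes) -> int:
    """Compute the RFC 1071 ones'-complement checksum of data."""
    if len(data) % 2:                # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# Sanity check: data plus its own checksum must sum (and fold) to zero.
payload = b"offload this segment"
csum = internet_checksum(payload)
print(hex(csum))
print(internet_checksum(payload + csum.to_bytes(2, "big")) == 0)  # True
```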
But that is definitely not all there is to the story. In fact, the best part is yet to come.
55 Comments
Dubb - Monday, January 24, 2005 - link
You should probably specify that the Iwill DK8ES is NOT a dual x16 board. It's x16 + x2, with the x2 on an x16 connector. The DK8EW that will be released in a few months is x8 + x8. The Tyan is the only x16 + x16 board I know of so far...
Feel free to correct me if I'm wrong, but the folks at 2cpu.com are pretty sure of this.
henry - Monday, January 24, 2005 - link
> #32 ... heh ... that's only 4 x1 lanes not 5 ;-) the config i mentioned is not possible.
Check this: 1x16 + 3x1 / 1x4 + 2x1 (+ 1x8 for the fun ;-)
DerekWilson - Monday, January 24, 2005 - link
#32 ... heh ... that's only 4 x1 lanes not 5 ;-) the config I mentioned is not possible.
And the Intel PCI-X idea is definitely funky :-) I suppose that would work. Rather than using an HT link for AMD's tunnel, that could be interesting in a pinch. No matter how unlikely :-)
henry - Monday, January 24, 2005 - link
Hi Derek,
Just two remarks:
> On the flip side, it's not possible to put 1 x16, 1 x4, and 5 x1 PCIe slots on a dual processor workstation.
Why shouldn't this be possible? Just partition the PCIe lanes this way: 1x16 + 3x1 on the first nForce (one lane wasted) and 1x4 + 1x1 on the second chip (still 15 lanes and two controllers left).
Regarding PCI-X: As you said, mainboard makers can choose the obvious route and directly attach AMD's PCI-X tunnel chips.
Nevertheless, there is a more insane option: use a spare x4 or x8 PCIe link to hook up a PCI-X bridge chip (e.g., the Intel 41210).
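For what it's worth, henry's split does fit each chip's budget; a quick arithmetic check with a hypothetical helper of our own (each nForce Pro chip allows up to 4 connections and 20 lanes):

```python
# Hypothetical helper, not vendor code: does a list of link widths fit
# one nForce Pro chip's budget of 4 connections / 20 lanes?
def fits(widths, max_links=4, max_lanes=20):
    return len(widths) <= max_links and sum(widths) <= max_lanes

print(fits([16, 1, 1, 1]))  # first chip: 19 lanes, 4 links -> True
print(fits([4, 1]))         # second chip: 5 lanes, 2 links -> True
```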
DerekWilson - Monday, January 24, 2005 - link
NCQ is native command queuing for SATA ... TCQ is tagged command queuing for SCSI. WD called the Raptor's initial support TCQ because they just pulled their SCSI solution over, which served to confuse people. SATA command queuing is NCQ. People sometimes call it TCQ, and maybe that's fine. Really, they may as well be the same thing, except that one is for SCSI.
#25, SDA -
I meant PCI-X -- NVIDIA didn't build legacy PCI-X support into their MCPs. In order to support it, the MCP must be paired with the AMD-8000 series. Intel has PCI-X support off the MCH. If many PCI-X slots are required, the Intel solution must sacrifice some of its PCIe lanes for the 6700PXH 64-bit PCI Hub. This hub hooks into the E75xx through either a x4 or x8 PCIe link to provide additional PCI/PCI-X buses. I know, it's a lot of PCI/PCIe/PCI-X ... sorry for the confusion.
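For rough context on what that lane sacrifice costs, here is a back-of-the-envelope bandwidth comparison in Python (our own arithmetic, not NVIDIA or Intel figures):

```python
# Back-of-the-envelope numbers for hanging PCI-X buses off a PCIe link
# (our own arithmetic, not vendor figures).

PCIE_GEN1_LANE_GBPS = 2.5 * 0.8        # 2.5 GT/s with 8b/10b -> 2 Gb/s usable

def pcie_link_gbps(lanes: int) -> float:
    """Usable PCIe 1.0 bandwidth per direction for a given link width."""
    return lanes * PCIE_GEN1_LANE_GBPS

PCIX_133_GBPS = 64 * 133.33e6 / 1e9    # 64-bit bus at 133 MHz, shared/half-duplex

print(f"x4 PCIe uplink: {pcie_link_gbps(4):.1f} Gb/s each way")   # 8.0
print(f"x8 PCIe uplink: {pcie_link_gbps(8):.1f} Gb/s each way")   # 16.0
print(f"PCI-X 133 bus:  {PCIX_133_GBPS:.1f} Gb/s shared")         # 8.5
```

So a x4 uplink roughly matches a single PCI-X 133 bus, while a x8 uplink leaves headroom for the hub's second bus.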
Cygni - Monday, January 24, 2005 - link
btw, i was kidding about the windows thing...

Cygni - Monday, January 24, 2005 - link
Nvidia is also releasing a new videocard that does all of that, plus the GPU can run Windows!

Countdown to the point where the video card becomes everything and the motherboard is a tiny piece of plastic that holds everything in place....
tumbleweed - Monday, January 24, 2005 - link
#26 - rumour has it that SS will be showing up in future NV 'video' cards, rather than on motherboards. With the ridiculous bandwidth overkill that is PCIe x16, that's a good place to put it, IMO. Save a slot, save mobo space, and put unused bandwidth to use.

tumbleweed - Monday, January 24, 2005 - link
Derek - Dissonance over at TR says he specifically asked NV about it, and was told it supported TCQ as well as NCQ, so somebody is confused. :)

AbRASiON - Monday, January 24, 2005 - link
I've made myself a little saying, which I now apply to nvidia motherboards... It's "no SoundStorm, no sale".
Until they re-implement it, I'm not buying one, period.