NVIDIA 680i: The Best Core 2 Chipset?
by Gary Key & Wesley Fink on November 8, 2006 4:45 AM EST - Posted in CPUs
DualNet
DualNet's suite of options brings several enterprise-class network technologies to the general desktop, such as teaming, load balancing, and fail-over, along with hardware-based TCP/IP acceleration. Teaming doubles the network link by combining the two integrated Gigabit Ethernet ports into a single 2-Gigabit Ethernet connection, giving the user improved link speeds while providing fail-over redundancy. TCP/IP acceleration reduces CPU utilization by offloading CPU-intensive packet processing tasks to a dedicated hardware processor, combined with optimized driver support.
While all of this sounds impressive, the actual impact for the general computer user is minimal. On the other hand, a user setting up a game server/client for a LAN party or implementing a home gateway machine will find these options very valuable. Overall, features like DualNet are better suited for the server and workstation market. We believe these options are being provided (we are not complaining) since the NVIDIA professional workstation/server chipsets are based upon the same core logic.
NVIDIA now integrates dual Gigabit Ethernet MACs on the same physical chip. This allows the two Gigabit Ethernet ports to be used individually or combined, depending on the needs of the user. The previous NF4 boards offered a single Gigabit Ethernet MAC interface, with motherboard suppliers having the option to add an additional Gigabit port via an external controller chip. This too often resulted in two different driver sets, with various controller chips residing on either the PCI Express or PCI bus, and typically worse performance than a well-implemented dual-PCIe Gigabit Ethernet solution.
Teaming
Teaming allows both of the Gigabit Ethernet ports in NVIDIA DualNet configurations to be used in parallel to set up a 2-Gigabit Ethernet backbone. Multiple computers can be connected simultaneously at full gigabit speeds while the resulting traffic is load balanced. When teaming is enabled, each gigabit link within the team maintains its own dedicated MAC address while the combined team shares a single IP address.
Transmit load balancing uses the destination (client) IP address to assign outbound traffic to a particular gigabit connection within the team. When data needs to be transmitted, the network driver uses this assignment to determine which gigabit connection will carry the traffic, ensuring that connections are balanced across all the gigabit links in the team. If at any point one of the links is underutilized, the algorithm dynamically reassigns connections to restore the balance. Receive load balancing uses a connection steering method to distribute inbound traffic between the two gigabit links in the team: when the gigabit ports are connected to different servers, the inbound traffic is spread across the links.
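NVIDIA does not publish the exact assignment algorithm, but the destination-based scheme described above can be sketched as a stable hash over the client IP address. The function and link names below are illustrative, not NVIDIA's driver interface:

```python
import zlib

def pick_link(dest_ip: str, links: list[str]) -> str:
    """Deterministically map a destination IP to one gigabit link in the team.

    A stable hash keeps each client's traffic on one link while spreading
    different clients across the team; a real driver would additionally
    rebalance when one link sits idle.
    """
    index = zlib.crc32(dest_ip.encode()) % len(links)
    return links[index]

team = ["gige0", "gige1"]
# Each client is pinned to one link; different clients spread across both.
assignments = {ip: pick_link(ip, team)
               for ip in ("10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5")}
```

Because the hash is deterministic, packets for a given client always leave on the same port, which keeps per-connection packet ordering intact.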
The integrated fail-over technology ensures that if one link goes down, traffic is instantly and automatically redirected to the remaining link. If a file is being downloaded, for example, the download will continue without packet loss or data corruption. Once the lost link has been restored, the grouping is re-established and traffic begins to flow over the restored link again.
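A toy model of this fail-over behavior, with hypothetical link names rather than NVIDIA's actual driver internals, looks like this:

```python
class Team:
    """Track which links in a team are up; traffic only uses healthy links."""

    def __init__(self, names):
        self.up = {name: True for name in names}

    def set_link(self, name, is_up):
        """Mark a link as failed or restored."""
        self.up[name] = is_up

    def active_links(self):
        """Links currently eligible to carry traffic."""
        return [name for name, ok in self.up.items() if ok]

team = Team(["gige0", "gige1"])
team.set_link("gige1", False)   # link failure: all traffic moves to gige0
failed_state = team.active_links()
team.set_link("gige1", True)    # link restored: the team re-forms
restored_state = team.active_links()
```

Since the team shares a single IP address, redirecting traffic to the surviving link is invisible to the remote end, which is why an in-progress download survives the failure.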
NVIDIA quotes an average 40% improvement in throughput when using teaming, although this number can go higher. In their multi-client demonstration, NVIDIA was able to achieve a 70% improvement in throughput utilizing six client machines. In our own internal test we realized about a 36% improvement in throughput with our video streaming benchmark while playing Serious Sam II across three client machines. For those without a Gigabit network, DualNet can also team two 10/100 Fast Ethernet connections. Once again, this is a feature set that few desktop users will truly be able to exploit at the current time. However, we commend NVIDIA for forward thinking in this area, as we see this type of technology becoming useful in the near future.
TCP/IP Acceleration
NVIDIA TCP/IP Acceleration is a networking solution that includes both a dedicated processor for accelerating network traffic processing and optimized drivers. The current nForce 590 SLI and nForce 680i SLI MCP chipsets have TCP/IP acceleration and hardware offload capability built into both native Gigabit Ethernet controllers. This capability will typically lower CPU utilization when processing network data at gigabit speeds.
In software solutions, the CPU is responsible for processing all aspects of the TCP protocol: checksumming, ACK processing, and connection lookup. Depending upon network traffic and the types of data packets being transmitted, this can place a significant load upon the CPU. With hardware offload, all packet data is processed and checksummed inside the MCP instead of being moved to the CPU for software-based processing, which improves overall throughput and CPU utilization.
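To make the offloaded work concrete, this is the standard Internet checksum (RFC 1071) that software TCP/IP stacks compute over every packet; the MCP's offload engine performs the equivalent in hardware so the CPU never touches it:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit words.

    This per-packet arithmetic is the kind of work a TCP offload engine
    moves off the CPU.
    """
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# Example from RFC 1071: the checksum of this payload is 0x220D, and
# appending that checksum makes the whole buffer verify to zero.
packet = b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7"
csum = internet_checksum(packet)
```

Trivial per word, but at gigabit speeds the stack does this millions of times per second, which is why offloading it measurably lowers CPU utilization.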
NVIDIA dropped the ActiveArmor slogan with the nForce 500 release, and the nForce 600i series is no different. Thankfully the ActiveArmor firewall application was jettisoned to deep space, as NVIDIA pointed out that the basic features provided by ActiveArmor will be a part of Microsoft Vista. We also feel NVIDIA was influenced to drop ActiveArmor due to the reported data corruption issues with the nForce4 caused in part by overly aggressive CPU utilization settings, customer support headaches, issues with Microsoft, and quite possibly hardware "flaws" in the original nForce MCP design.
We have found a higher degree of stability with the new TCP/IP acceleration design, but this stability comes at a price: if TCP/IP acceleration is enabled via the control panel, certain network traffic will bypass third-party firewall applications. We measured CPU utilization near 14% with the TCP/IP offload engine enabled and near 26% without it.
60 Comments
yyrkoon - Thursday, November 9, 2006 - link
From my little experience with an Asrock board that can use this program, it WILL adjust clock frequency on the fly; however, I think that voltage changes can only be made by rebooting. Regardless of whether I'm remembering correctly, I'm fairly certain at least one possible change needs to be done during or after a reboot. Could be thinking of the clock multiplier, maybe?

Pirks - Thursday, November 9, 2006 - link
that sucks. guess I'll have to wait till nVidia makes a 100% no-reboot-OC mobo, or an on-the-fly-OC mobo where you just click a couple of buttons in Windows and voila - your machine turns from a quiet office machine into a Crysis fireball, and vice versa - I can dream, can't I? ;)

ssiu - Wednesday, November 8, 2006 - link
Since NVIDIA claims the 680i has better FSB overclocking than the 650i, and the 680i results are on par with the mainstream P965's, I am afraid that the 650i will be significantly worse than the DS3s/P5Bs. In other words, I am afraid that the 650i is not really a new competitive option for budget/mainstream overclockers.

yyrkoon - Wednesday, November 8, 2006 - link
I don't think any true enthusiast is going to be buying a mid-range board (chipset) to begin with. If the Intel numbering scheme is anything like the AM2 numbering scheme, the 650i will probably have fewer available PCI-E lanes as well, which would be a major factor in my personal decision on buying any such hardware, and I know I'm not alone ;)

Jedi2155 - Wednesday, November 8, 2006 - link
I don't think your definition of enthusiast is wholly correct, but rather the manufacturer's idea of an enthusiast. I personally think many enthusiasts do indeed have a limited budget, and after seeing the pricing of the Asus 680i board, I think mid-range is the way to go... hoping for a cheap < $250 680i board >_>.

yyrkoon - Wednesday, November 8, 2006 - link
Yeah, he wasn't talking about true enthusiasts though; I realized this after re-reading his post.

On a side note, if my board brand of choice suddenly went away (ABIT), I would seriously consider buying a Gigabyte board, but the DS3 doesn't seem to be making a lot of people happy in the stability category. What I'm trying to say here is that perhaps the board MAY not OC as well, but according to what I've read (reviews, forum posts, and A LOT of Newegg user reviews), it couldn't do much worse than the Gigabyte board in this area.
The second question I'd be asking myself is WHO THE HELL is EVGA... we all know they make video cards (probably the best for customer support for nVidia products).
I'm definitely interested in the 680i chipset, but I think my brand of choice for MANY years now would remain the same, and that I'll be sticking with ABIT :)
Gary Key - Wednesday, November 8, 2006 - link
1. The reference board is designed and engineered by NVIDIA. Foxconn manufactures the boards for the "launch" partners, which include BFG and others. Asus, Abit, DFI, Gigabyte, and others will have their custom-designed boards out in a few weeks.

2. The Abit board is very interesting; here is a pic of it - http://img474.imageshack.us/img474/2044/in932xmaxy...">Abit 680i - ;)
yyrkoon - Thursday, November 9, 2006 - link
Didn't even know there was one this close to release, Gary, lol, thanks for the link. Judging by the 5 SATA II connectors, previously released ABIT boards, and what LOOKS like an eSATA connector on the back panel, I suppose this board will support eSATA, and possibly a SATA PM?

Stele - Friday, November 10, 2006 - link
That Abit 680i board looks very interesting indeed... if nothing else because it looks like it sports digital PWM power supply circuitry similar to that used by DFI in the latter's LANParty UT NF590 SLI-M2R motherboard (the Pulse PA1315NL coupled inductor array is a dead giveaway, as it is designed for use only with Volterra's VT11x5M digital PWM circuitry). Unfortunately, more information on such circuitry is proving very difficult to find (Volterra themselves restrict their product details and datasheets to design partners only)... it'd be great to know how such a power circuit compares in performance and capabilities with the traditional PWM-MOSFET-based ones.
Curiously, the Abit 680i seems to have dropped the AudioMax daughter board.
yyrkoon, I'm guessing the 5th SATA II and the eSATA port are there courtesy of an SiI3132 controller - which is likely the little square IC under the upper heatpipe, just beside the audio connector block. As such, the usual capabilities and features of the said IC would apply, I think :)
yyrkoon - Wednesday, November 8, 2006 - link
I'd just like to point out that DualNet technology is NOT true NIC teaming, or rather link aggregation (802.3ad, I think). When I first heard about DualNet I was extremely excited, since I had been doing TONS of research on NIC bonding etc., but after doing some homework, I found that DualNet only supports outgoing packets. It was my hope that you could link two of these boards via a regular GbE switch and get instant 2GbE connections, but this is not the case (unless they've recently redone DualNet).
Now to the question: Since SATA port multiplier HBAs require specific SIL chip(s) on the device they communicate with (to give the full speeds of a true RAID), what are the chances that nVidia boards will work with these devices?
In the past, I've seen two AM2 boards that have a built-in SIL chip with eSATA connectors on the board back panel (ABIT and Asus), but onboard SIL 'chipsets' seem to be rather limited (as in only supporting PM on two SATA connections). I'd personally REALLY like to see this technology standardized, so it doesn't matter WHAT SATA controller chipset you're using. I also think that once nVidia realizes that onboard PM support is a major plus, and once they implement it, they COULD be taken seriously by many Intel fans.
Also, some Intel chipset fans believe that Intel chipsets are best for a rock-solid system (for the record, I'm not one of these people); I guess we'll see if nVidia will change their minds.