Intel's Dual Core Strategy Investigated
by Anand Lal Shimpi on October 22, 2004 3:09 PM EST - Posted in CPUs
Been hearing conflicting dual core information lately? Here's a compilation of everything we know about Intel's dual core plans for the next two years.
Dual Core for Desktops in 2005
Intel has yet to determine what brand they will market their first dual core desktop chips under, although we'd expect them to continue using the Pentium 4 brand with some sort of appendage like Extreme Edition or Lots of Cores Version. Intel has, however, already determined the specifications and model numbers of their dual core chips.
Intel has three dual core chips on their desktop roadmap, all currently set for release in Q3 2005: the x20, x30 and x40. The only difference between these three chips is clock speed: the x20 runs at 2.8GHz, the x30 at 3GHz and the x40 at 3.2GHz. All of the chips are LGA-775 compatible and run off of an 800MHz FSB. Hyper-Threading is not enabled on Intel's dual core chips.
As far as architecture goes, the x-series of dual core CPUs from Intel is built on the little talked-about Smithfield core. While many have speculated that Smithfield may be Banias or Dothan based, it's now clear that Smithfield is little more than two 90nm Prescott cores built on the same die. A very small amount of arbitration logic is required to balance bus transactions between the two CPUs, but for the most part, Smithfield is basically two Prescotts.
But doesn't Prescott run too hot already? How could Intel possibly build their first dual core chip out of the 90nm beast that is Prescott? The issue with Prescott hitting higher clock speeds ends up being thermal density: too many transistors generating too much heat in too small a space. Intel's automated layout tools do help reduce this burden a bit, but what's important is that the thermal density of Smithfield is no worse than Prescott's. If you take two Prescotts and place them side by side, the areas of the die with the greatest thermal density will still be the same; there will simply be twice as many of them. Overall power consumption will obviously double and there will be much more heat to dissipate, but the thermal density of Smithfield will remain the same as Prescott's.
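To make the arithmetic concrete: thermal density is just power dissipated per unit area, so doubling both the die area and the power leaves the density unchanged. A quick sketch using illustrative round numbers (not official Intel figures):

```cpp
#include <iostream>

int main() {
    // Illustrative round numbers for a single Prescott core; not official specs.
    const double prescottPowerW  = 100.0;  // watts dissipated
    const double prescottAreaMm2 = 112.0;  // die area in mm^2

    // Smithfield modeled as roughly two Prescotts side by side on one die.
    const double smithfieldPowerW  = 2.0 * prescottPowerW;
    const double smithfieldAreaMm2 = 2.0 * prescottAreaMm2;

    std::cout << "Prescott density:   " << prescottPowerW / prescottAreaMm2
              << " W/mm^2\n";
    std::cout << "Smithfield density: " << smithfieldPowerW / smithfieldAreaMm2
              << " W/mm^2\n";  // identical: twice the heat, but twice the area
}
```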
Because Smithfield needs to run with conventional cooling, Intel dropped its clock speed down to the 2.8 - 3.2GHz range, from the 3.8GHz Prescott that will be the fastest chip out at the time. The reduction in clock speed will keep Smithfield's temperatures and power consumption at more reasonable levels.
Smithfield will also feature EM64T (Intel's version of AMD's x86-64 extensions), EIST (Enhanced Intel SpeedStep Technology) and Intel's XD bit support. Chipset support for Smithfield will come from Glenwood and Lakeport, both of which support the 1066MHz FSB (as well as 800MHz) and Dual Channel DDR-2 667 and 533. Glenwood (the successor to 925X) will support up to 8GB of memory, making it the perfect candidate for EM64T-enabled processors that want to break the 4GB barrier (the 2^32-byte limit of a 32-bit address space).
59 Comments
GhandiInstinct - Friday, October 22, 2004 - link
So why not test this technology and leave it in the labs instead of wasting consumers' time? Obviously it's a waste of money if we don't have any software utilizing it. So, oh wow! KUDOS to the company to release it first, but remember the Prescott? First 90nm.... crossing the finish line first doesn't mean you earn that place.
dak - Friday, October 22, 2004 - link
Good points :) Glad we don't use single cpu's at work lol
Brian23 - Friday, October 22, 2004 - link
#25 There are several reasons why games aren't written multithreaded:
1. multithreaded apps have more overhead, so they run slower on single CPU systems.
2. most gaming systems are single CPU.
3. the threads need to communicate with each other to get the frames drawn. Since the threads have critical sections, running them on a single CPU will make the critical sections queue up, causing major lag and a drop in framerate (see the sketch after this comment).
Once multi CPU systems are the norm, I'm sure games will be written for them.
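To illustrate Brian23's third point, here's a minimal hypothetical sketch (modern C++ threads, not code from any actual game engine) of a physics thread and a render thread serializing on a shared critical section; on a single CPU, the lock just queues the threads up one behind the other:

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Shared frame state guarded by a critical section. On a single CPU the
// physics and render threads take turns on this lock, queueing up
// instead of running in parallel.
struct FrameState {
    std::mutex lock;
    std::vector<float> positions = std::vector<float>(1024, 0.0f);
};

void physicsThread(FrameState& fs, int frames) {
    for (int f = 0; f < frames; ++f) {
        std::lock_guard<std::mutex> guard(fs.lock);  // critical section
        for (float& p : fs.positions) p += 0.016f;   // integrate one tick
    }
}

void renderThread(FrameState& fs, int frames) {
    float sink = 0.0f;
    for (int f = 0; f < frames; ++f) {
        std::lock_guard<std::mutex> guard(fs.lock);  // waits on physics
        for (float p : fs.positions) sink += p;      // "draw" the frame
    }
    std::cout << "rendered checksum: " << sink << '\n';
}

int main() {
    FrameState fs;
    std::thread physics(physicsThread, std::ref(fs), 600);
    std::thread render(renderThread, std::ref(fs), 600);
    physics.join();
    render.join();
}
```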
dak - Friday, October 22, 2004 - link
Hmm, I never really saw the big deal about thread creation. Who really cares if it takes a freaking tenth of a second to spawn a thread, if you're only doing 20 or so threads at the startup of a game? I can't think of the last time I used a temporary thread. I usually spawn 'em at startup for a pre-defined role. Can't be the overhead of thread creation, they could split one off for texture loading in the background, and obviously the network clients. Personally I think it would be harder to NOT thread games, but I guess I'm too used to threading...
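A minimal sketch of the spawn-once-at-startup pattern dak describes, with hypothetical texture-loading and networking roles (again modern C++ threads, purely illustrative):

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> running{true};

// Long-lived workers created once at startup, each with a fixed role;
// no per-frame thread creation, so the spawn cost is paid exactly once.
void textureLoader() {
    while (running) {
        // ... stream textures from disk in the background ...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

void networkClient() {
    while (running) {
        // ... pump multiplayer packets ...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::thread loader(textureLoader);  // spawned at startup...
    std::thread net(networkClient);     // ...for a pre-defined role

    // Main thread would run the game/render loop here.
    std::this_thread::sleep_for(std::chrono::seconds(1));

    running = false;                    // signal shutdown
    loader.join();
    net.join();
    std::cout << "clean shutdown\n";
}
```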
stephenbrooks - Friday, October 22, 2004 - link
Windows thread creation has a bigger overhead than Linux threading, but you can still shuffle them about quite a bit and get benefits. I'd imagine if they could keep it at one fork per tick or frame, it'd be pretty good.
No reason I can think of why video games aren't being designed for multi-processors. Apart from the fact someone should take their shiny FX-55s away and give them quad-2.0GHz things to work on instead - _then_ they'd take advantage of it.
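Thread creation cost is easy to measure directly. A rough micro-benchmark sketch; it times only spawn-plus-join of empty threads, and the numbers will vary widely by OS and hardware:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const int kThreads = 100;

    auto start = clock::now();
    for (int i = 0; i < kThreads; ++i) {
        std::thread t([] { /* empty worker: measure creation only */ });
        t.join();  // join immediately so only spawn/teardown is timed
    }
    auto elapsed = clock::now() - start;

    auto us = std::chrono::duration_cast<std::chrono::microseconds>(elapsed);
    std::cout << "avg spawn+join: " << us.count() / kThreads << " us\n";
}
```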
dak - Friday, October 22, 2004 - link
Strange, I'm kinda surprised that video games are single threaded. We write flight sims at work (*nix only), and we thread/fork all over the place. Flight sims are really just really big video games :) I would think with AI, physics engines, network clients for multiplayer, and oh yeah, that rendering loop thingy, that they'd be all over threading. I don't know about winders programming really, is the scheduler too borked for that? I can't imagine it would be, and I'm not one to give anything to microsoft....
stephenbrooks - Friday, October 22, 2004 - link
#19, I assume you mean XP Home. I'm running XP Pro on dual hyperthreaded Xeons and get 4 showing up in task manager.
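That logical CPU count (two physical Xeons times two Hyper-Threading siblings each = 4) can be queried from code as well; a minimal check using the portable C++ call rather than the Win32 API:

```cpp
#include <iostream>
#include <thread>

int main() {
    // Reports logical processors: physical cores x Hyper-Threading siblings.
    // On dual hyperthreaded Xeons this prints 4, matching Task Manager.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << "logical CPUs: " << n << '\n';  // 0 means "unknown"
}
```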
stephenbrooks - Friday, October 22, 2004 - link
I was looking around some presentations on Intel's site - it seems that we're in a dead zone before some fundamental changes are made to their transistors in the 2007-08 time frame (metal gate, tri-gate and high-K something-or-other), which might give real performance and clock speed improvements again (mention is made of reducing leakage 100x, for example). All the weird stuff happens in the 45nm and 32nm processes, with the 65nm one being another "boring" one like 90nm, hence the focus on dual-core for the next few years, I guess.
HardwareD00d - Friday, October 22, 2004 - link
Overclocking a dual core would be a waste because until software developers start to write games in a way that uses multiple cores, you're just going to have one OC'd core sitting there looking dumb (and probably putting out a shedload of heat).
HardwareD00d - Friday, October 22, 2004 - link
er I mean #15 sorry