The Quest for More Processing Power, Part One: "Is the single core CPU doomed?"
by Johan De Gelas on February 8, 2005 4:00 PM EST - Posted in CPUs
CHAPTER 4: The Pentium 4 crash landing
The Prescott failure
The Pentium 4 "Prescott" is, despite its innovative architecture, a failure. Intel expected to scale this Pentium 4 architecture to 5 GHz, and derivatives of this architecture were supposed to come close to 10 GHz. Instead, the Prescott was only able to reach 3.8 GHz after numerous revisions. And even then, the 3.8 GHz is losing up to 115 Watt, and about 35-50% (depending on the source) is lost to leakage power.
The Prescott project failed, but that doesn't mean that the architecture itself was no good. In fact, the philosophy behind the enhanced Netburst architecture is very innovative, even brilliant. To understand why we say this, let us quickly refresh your memory on the software side of things.
IPC unfriendly software
First, consider that average code does not allow the CPU to process many instructions in parallel. To give you an idea, we found that video encoding achieves only about 0.6-0.8 instructions per clock cycle (IPC) on modern CPUs. Secondly, note that almost 20% of the instructions are branches and 50% of them are memory operations. In the case of video encoding, you may have less than 10% branches and about 60% memory operations. Most of the instructions that are neither branches nor memory operations are additions ("ADDs"). Some of the memory operations need to use the same units that perform the ADD instructions.
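To make that mix concrete, here is a minimal sketch in C (our own illustration, not code from the article): a simple array-addition loop in which nearly every instruction is a load, a store, an ADD, or a branch, much like the profile described above.

/* Illustrative only: the instruction mix of this loop resembles the
 * profile above - mostly memory operations, an ADD and a branch per
 * iteration. */
void add_arrays(const int *a, const int *b, int *dst, int n)
{
    for (int i = 0; i < n; i++) {   /* compare + conditional branch  */
        dst[i] = a[i] + b[i];       /* two loads, one ADD, one store */
    }
}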
You should also know that many algorithms contain calculations that need the result of a previous one: a dependency. So, you cannot issue the second calculation until the first is done.
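A minimal C sketch of such a dependency (again our own illustration, with hypothetical function names):

/* Illustrative only: the second calculation needs the result of the
 * first, so the two cannot execute in parallel. */
double dependent(double x)
{
    double a = x * 1.5;   /* calculation 1                     */
    double b = a + 2.0;   /* calculation 2: must wait for 'a'  */
    return b;
}

/* These two calculations are independent, so a superscalar CPU can
 * issue them in the same clock cycle. */
double independent(double x, double y)
{
    double a = x * 1.5;
    double b = y + 2.0;
    return a + b;         /* only this final ADD has to wait   */
}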
Most studies show that, realistically, even a very sophisticated CPU would only be able to reach an IPC of a little more than 2, about twice as much as today's CPUs achieve.
Up close and personal
Now, take a look at the diagram of the Prescott architecture below. Let us see how Prescott addresses the problems mentioned above.
Fig 7. Prescott's architecture.
First of all, you want memory operations to happen quickly. Therefore, Prescott doubles both the L1 (data only) and the L2 cache. It also has two dedicated Address Generation Units, one for stores and one for loads.
Built for 4 GHz and more, accesses to main RAM are costly in terms of clock cycles (latency), considering that DDR-II 533 runs at a 266 MHz clock. So, Prescott tries to minimize the damage of waiting for cache misses by increasing Northwood's store buffers from 24 to 32 and by doubling the load request buffers. As a result, Prescott can have many cache misses outstanding simultaneously. An intelligent hardware prefetcher is another way to avoid slowdowns due to high memory latency.
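As a rough illustration of why the extra buffers and the prefetcher matter (a sketch of ours, not Intel documentation): a sequential scan has predictable addresses, so the prefetcher can run ahead and several misses can be outstanding at once, while chasing a linked list produces one dependent miss after another, each paying the full memory latency.

#include <stddef.h>

/* Sequential scan: addresses are predictable, so the hardware prefetcher
 * can fetch ahead and multiple cache misses can overlap. */
long sum_array(const long *a, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

/* Pointer chasing: each load depends on the previous one, so misses
 * cannot overlap and every miss costs the full memory latency. */
struct node { long value; struct node *next; };

long sum_list(const struct node *p)
{
    long sum = 0;
    for (; p != NULL; p = p->next)
        sum += p->value;
    return sum;
}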
To battle branch mispredictions, Prescott's branch predictor has been tuned so that it correctly predicts 10% of the branches that Northwood mispredicted. That can result in up to 20% better performance! And of course, the trace cache makes sure that a mispredicted branch does not need to restart the decoding stages. As a result, the misprediction penalty is not 39 stages, but 31: the 8 decoding stages do not need to be repeated because, in most cases, the trace cache already holds the decoded instructions.
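To see why mispredictions hurt so much on a 31-stage pipeline, consider this small C sketch (ours, not from the article): when the branch condition is essentially random, the predictor guesses wrong often and the pipeline has to be flushed each time; a branchless rewrite trades a couple of extra ALU operations for a branch that cannot be mispredicted.

/* Data-dependent branch: if the sign of 'x' is effectively random, the
 * predictor often guesses wrong and the pipeline must be flushed. */
int abs_branchy(int x)
{
    if (x < 0)
        return -x;
    return x;
}

/* Branchless alternative: a few extra ALU operations, but nothing to
 * mispredict. Assumes the common arithmetic right shift for signed ints. */
int abs_branchless(int x)
{
    int mask = x >> 31;          /* 0 for positive, -1 for negative */
    return (x + mask) ^ mask;
}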
65 Comments
sandorski - Tuesday, February 8, 2005 - link
While reading the article I couldn't help but think that when Intel states something it becomes all the buzz in the Industry and is accepted as fact. OTOH, AMD has been way ahead of Intel concerning these issues, adopting the Technologies in order to avoid the issues while Intel ran ahead right into the wall. Given the history between the 2, I'd hope that AMD's musings on the future become more relevant as they seem more in tune with the technology and its limitations. Likely won't happen though.

Mingon - Tuesday, February 8, 2005 - link
I thought originally it was reported that Prescott's ALUs were single pumped vs double for Northwood et al.

segagenesis - Tuesday, February 8, 2005 - link
Heh heh heh, good timing with the recent news. Very well written and good insight on low-level technology. It is starting to become obvious even to the average joe user now that computing power for PCs has plateaued over the past year or so. You can have a perfectly functional and snappy desktop with just 2 GHz or less if you use the right apps.
I think the recent walls hit by processor technology should be an indication for developers to work better with what they have rather than keep demanding more power. We used to make jokes about how much processor power is needed for word processing, but considering that MS Word really runs no faster than it did on a P2-266 MHz in Office 97... urrrgh.
sandorski - Tuesday, February 8, 2005 - link
hehe, you said, "clocks peed" hehe :D (Chapter 1)
good article.
Ender17 - Tuesday, February 8, 2005 - link
Interesting. Great read.