Understanding the Cell Microprocessor
by Anand Lal Shimpi on March 17, 2005 12:05 AM EST - Posted in CPUs
Usage Patterns
Before getting into the architecture of Cell, let's talk a bit about the types of workloads for which Cell and other microprocessors are currently being built. In the past, office application performance was a driving factor behind microprocessor development. Before multitasking and before email, what mattered was single application performance, and for the most part that meant office applications: word processors, spreadsheets, etc. Thus, most microprocessors were designed for incredible single application, single task performance.
As microprocessors became more powerful, the software followed - multitasking environments were born. The vast majority of computer users, however, were still focused on single application usage, so microprocessor development continued to focus on single-threaded performance (single application, single task performance).
Over the years, single-threaded performance demands grew. Microsoft Word was no longer the defining application; things like games, media processing and dynamic content creation became the applications that ate up the most CPU cycles. That is where we are today: the workloads consuming our CPU cycles are a mix of office applications, 3D games, 3D content creation and media encoding/decoding/transcoding. But in order to understand the creation of a new architecture like Cell, you have to understand where these workloads are headed. Just as the applications demanding performance today are much different from those run 10 years ago, the same will apply to the applications of the next decade. And given that a new microprocessor architecture takes about 5 years to develop, it makes sense to introduce an architecture geared towards these new usage models now.
Intel spoke a lot about future usage models at their most recent IDF, things like real time voice recognition (and even translation), unstructured search (e.g. Google image search), even better physics and AI models in games, more feature-rich user interfaces (e.g. hand gesture recognition), etc. These are the usage models of the future, and as such, they have a different set of demands on microprocessors and their associated architectures.
The type of performance required to enable these usage models is significantly higher than what we have available today. Conventionally, performance increases from one microprocessor generation to the next by optimizing single-threaded performance. There are a number of ways to improve single-threaded performance: driving up the clock speed, or increasing the number of instructions executed per clock (IPC). Taking it one step further, the more parallelism you can extract from a single thread, the better your performance will be. This type of parallelism is known as instruction level parallelism (ILP), as it involves executing as many instructions from a single thread at the same time as possible.
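How much ILP the hardware can find depends on the dependency structure of the code. As a rough illustration (a sketch only; real ILP is extracted by the CPU from compiled machine code, and Python is used here purely to show the shape of the dependencies), the same reduction can be written as a serial chain, where every step waits on the previous result, or as a tree whose halves are independent and could be overlapped by a superscalar core:

```python
# Serial chain: each multiply depends on the previous result
# (a loop-carried dependency), so the operations serialize --
# little ILP for the hardware to extract.
def chained(vals):
    acc = 1.0
    for v in vals:
        acc = acc * v
    return acc

# Tree-shaped reduction: the two halves share no data, so in
# compiled code a superscalar core could execute them at the
# same time -- more ILP from the exact same amount of work.
def tree(vals):
    if len(vals) == 1:
        return vals[0]
    mid = len(vals) // 2
    return tree(vals[:mid]) * tree(vals[mid:])

vals = [1.0, 2.0, 3.0, 4.0]
print(chained(vals), tree(vals))  # both print 24.0
```

Both forms compute the same product; only the second exposes independent work, which is exactly the kind of restructuring that compilers and out-of-order hardware perform to squeeze out that 10% - 20% per generation.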
The problem with improving performance through increasing ILP is that from one generation to the next, you’re only talking about a 10% - 20% increase in performance. Yet, the usage models that we’re talking about for the future require significantly more than the type of gains that we’ve been getting in the past. With power limitations preventing clock speeds from scaling too high, it’s clear that there needs to be another way of improving performance.
The major players in the microprocessor industry have all pretty much agreed that the only way to get the type of performance gains that are necessary is by moving towards multi-core architectures. Through a combination of multithreaded applications and multi-core processors, you can get the types of performance increases that should allow for these types of applications to be developed. Instead of focusing on extracting ILP to improve performance, these multi-core processors extract parallelism on a thread level to improve performance (thread level parallelism - TLP).
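The TLP idea can be sketched in a few lines of modern Python (an illustration of the concept only, not anything Cell-specific): one large task is split into independent chunks, and each chunk becomes a thread that a multi-core processor can schedule on its own core.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Sum one independent slice of the overall range."""
    start, end = bounds
    return sum(range(start, end))

# Split one big job into four independent threads -- thread level
# parallelism. Note: CPython's GIL means pure-Python arithmetic
# won't actually speed up here; this shows the structure of a TLP
# workload, which in practice would use native threads or processes.
chunks = [(i * 1_000_000, (i + 1) * 1_000_000) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)
```

The key property is that the four chunks share no data, so no core ever waits on another: this is the parallelism that multi-core designs like Cell are built to exploit, and it is the programmer (not the hardware) who has to expose it.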
It’s not as straightforward as that, however. There are a handful of decisions that need to be made. How powerful do you make each core in your multi-core microprocessor? Do you have a small array of powerful processors or a larger array of simpler processors? How do they communicate with one another? How do you deal with feeding a multi-core processor with enough memory bandwidth?
The Cell implementation is just one solution to the problem...
70 Comments
Poser - Thursday, March 17, 2005 - link
There were moments while reading this article that I expected there to be a "Test Yourself" quiz at the end of the chapter ... er, article. Which isn't to say that articles like this are too textbookish, it's to say that they're wonderfully educational. And very, very cool for being so.

I'm half joking when I say this (but only half) -- a real "test" at the end of the article would be fun. I could see if I really understood what I read, and even get to compare my score to the rest of the, uhm, class.
drinkmorejava - Thursday, March 17, 2005 - link
very nice, how long did it take to write that thing?

Eug - Thursday, March 17, 2005 - link
#42, That's an interesting page, cuz everyone on OS X already knows that Word is slow on the Mac. It brings us back to the original statement that some ported software may be problematic performance-wise.
And the generic comment on the Mac side about Premiere is, well... use Final Cut Pro. :) Here is a test that seems a bit more useful, since it tests Cinema4D and After Effects, two apps that people use on the Mac and both of which are reasonably well optimized:
http://digitalvideoediting.com/articles/viewarticl...
That's a good point about the memory scaling though. The IMC with AMD's chips is a definite advantage. I'm sure the G5 970MP dual-core won't get an IMC either.
Anyways, as far as this article is concerned, the G5 is kinda irrelevant. The interesting part for Apple in Cell is the PPE unit. It's also interesting that Anand says the original SPE was supposed to be VMX/Altivec. But the current SPE is not Altivec so it's less applicable for Apple, at least in the near term.
It would be interesting to know how fast a dual-core 3 GHz PPE would be in general laptop-type code, and how much power it would put out.
MDme - Thursday, March 17, 2005 - link
#39, 40, 41

http://www.pcworld.com/news/article/0,aid,112749,p...
remember that the athlon 64 chips scale better at higher clock speeds due to the mem controller scaling as well.
Eug - Thursday, March 17, 2005 - link
Well, one example is Cinebench 2003: the dual G5 2.0 GHz is about the same speed as a dual Opteron 246 2.0 GHz, with a score at around 500ish.
http://www.aceshardware.com/read.jsp?id=60000284
BTW, a dual G5 2.5 GHz scores 633.
suryad - Thursday, March 17, 2005 - link
Hmm that is interesting what you say Eug. I see your point - do you have any links on straight comparos between an FX and a top of the line Mac? Or from personal experience folding and such...

Eug - Thursday, March 17, 2005 - link
#38. It's a mistake to say an AMD FX 55 smokes a dual G5 2.5. For instance, if you like scientific dual-threaded stuff, the G5 does very well. However, the AMD FX 55 IS faster than a single G5 2.5. It's got a slight edge clock-for-clock, and it's clocked slightly higher too.

The real problem is when you have stuff built for x86 ported over to PPC. It just isn't great on the Mac side performance-wise in that situation. And Macs aren't tweaked for gaming either. The AMD is going to smoke the Mac in Doom 3 of course.
I think with the performance advantage of the Opteron, I'd put a single G5 2.5 in the range of a single Opteron 2.2-2.4 GHz, depending on the app. The really interesting part, though, will be the coming quarter, when the new G5s are released. They should get a significant clock speed bump (20%?), and information on dual-core G5s is already out there (as with AMD and their dual-core Athlons). They also get a cache boost: right now they only have 512 KB, but they're expected to get 1 MB L2.
suryad - Thursday, March 17, 2005 - link
Well scrotemaninov I am not disputing that the POWER architecture by IBM is brilliantly done. IBM is definitely one of those companies churning out brilliant and elegant technology, always in the background.

But my problem with the POWER technology, from what I very limitedly understand, is that the POWER processors in the Mac machines are a derivative of that architecture, right? Why the heck are they so damn slow then?
I mean you can buy an AMD FX 55 based on the crappy legacy x86 arch and it smokes the dual 2.5 GHz Macs easily!! Is it cause of the OS? Because so far from what I have seen, if the Macs are any indication of the performance capabilities of the POWER architecture, the Cell will not be a big hit.
I did read, though, benchmark reviews at www.aceshardware.com of the POWER5 architecture with some insane number of cores, if I recall correctly, and the benchmarks were off the charts. They are definitely not what the Macs have installed in them...
scrotemaninov - Thursday, March 17, 2005 - link
#35: different approaches to solving the same problem.

Intel came up with x86 a long time ago and it's complete rubbish but they maintain it for backwards compatibility (here's an argument for Open Source Software if ever there was one...). They have huge amounts of logic to effectively translate x86 into RISC instructions - look at the L1 trace cache in the P4 for example.
IBM aren't bound by the same constraints - their PowerPC ISA is really quite nice, and so there's nowhere near the same amount of pain suffered in dealing with the same problem. It does seem, however, that IBM are almost at the point that Intel want to be in 10 years' time...
Verdant - Thursday, March 17, 2005 - link
here is a question... the article mentions (or alludes to the idea) that having no cache makes it possible to know exactly when an instruction will be executed. Is the memory interface therefore a strict "real time system"?