
Is multi-core the new MHz myth? (Part 1)

Tech.co.uk - Jeremy Laird, 24 Dec 2007:

The PC industry is betting big time on a future built on multi-core PC processors. AMD recently launched its own affordable quad-core chip, Phenom.

Intel is already cranking out second-generation 45nm quad-core CPUs. But do multi-core CPUs deliver real-world performance benefits? Or are they merely the latest marketing ruse, a respin of the age-old MHz myth that crashed and burned with Intel's Pentium 4 processor?

Part 1 includes nine arguments in favour of multi-core.

pacman615 3497d ago

CELL PROCESSORS = PWNAGE

demolitionX 3496d ago (Edited 3496d ago)

CELL is the future! The future is distributed computing. We saw the benefits of distributed computing with Folding@home: the PS3's Cell dominates any other consumer CPU there.

think, play, and go B3yond

DJ 3496d ago

Dual-core is better than single, quad is better than dual, etc. But that comparison stops being accurate once you look at more important factors, such as cross-chip and cross-core communication bandwidth and how powerful the individual cores are.

We already have 8-core CPU offerings from Intel and AMD that have yet to come close to the performance of IBM/Toshiba/Sony's Cell processor, and the main reason is that they didn't start from scratch. They've been using fairly similar chip technology that's been around since the 1980s.

ravenguard88 3496d ago

When you compare "performance" you of course mean single-precision floating-point performance. In scalar operations it's not tough to beat the Cell. The Cell wouldn't make a very good server processor.

You missed the most important factor: Is the software built to maximize the potential?

Running an application such as WinRAR, or rendering video, all 4 of my cores are put to good use. Run a game like Crysis or Supreme Commander and I get little CPU activity, though it's evenly spread between the 4 cores. Once games are built to take advantage, and once the requirements are there (I've seen no game that can max out a decently rated Core 2 processor, be it Duo or Quad), we'll see a huge performance boost from multiple cores.
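The WinRAR point is easy to sketch: a compressor can split its input into independent blocks and hand one block to each core. A minimal, hypothetical illustration (here `block_work` is just a stand-in for real per-block compression work, and real CPU-bound Python would use processes rather than threads because of the GIL):

```python
from concurrent.futures import ThreadPoolExecutor

def block_work(chunk):
    # hypothetical stand-in for per-block compression/encoding work
    return sum((b * 31) % 257 for b in chunk)

def parallel_work(data, cores=4):
    # split the input into one chunk per core, process the chunks
    # concurrently, then combine the partial results -- the pattern
    # WinRAR-style apps use to keep all cores busy
    size = max(1, len(data) // cores)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=cores) as pool:
        return sum(pool.map(block_work, chunks))
```

The key property is that the blocks are independent, so the result is the same no matter how many cores chew on them.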

Ju 3496d ago

Multicore is a myth. Either massively parallel or 2 (or 3) cores will do. For normal desktop applications it ends at one core; maybe you'll gain some benefit from a second or third. But no matter what, to really get a huge benefit out of parallel processing, algorithms need to be adjusted to new paradigms. And in that respect, I would assume, the final solution is a grid of cores that distributes work dynamically.

For Joe Average, it won't make a difference. It's a use-case problem. You run Word and the browser at the same time? Fine, multicore will help. You want to run a really high-end game? It will help to a certain degree - but massively parallel is already there: SLI and CrossFire both link GPGPUs into a massive network. Multicore will work around the frequency limit for some time, until it reaches the same limitations again in cost/performance terms. I think even 4 cores are over the top already; 2 might be a little low, 3 might turn out to be the user-land target market, at a reasonable price.
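The "grid of cores which distributes dynamically" idea can be sketched in a few lines. This is a hypothetical toy, not any real scheduler: idle workers pull the next task from a shared queue instead of being assigned work statically, so the load balances itself.

```python
import queue
import threading

def run_on_grid(tasks, n_cores=3):
    """Run callables on a pool of worker threads that pull work dynamically."""
    todo = queue.Queue()
    for t in tasks:
        todo.put(t)
    results = []
    lock = threading.Lock()

    def core():
        # each simulated core grabs the next task whenever it goes idle,
        # so no core sits empty while work remains on the queue
        while True:
            try:
                task = todo.get_nowait()
            except queue.Empty:
                return
            r = task()
            with lock:
                results.append(r)

    workers = [threading.Thread(target=core) for _ in range(n_cores)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results
```

Note the results come back in completion order, not submission order - a price of dynamic distribution.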

Besides that, a "next-gen" Cell (or the like) will become standard, probably with some sort of software abstraction so the ISA can be replaced; at its core, what's left is a transparent system to spawn "tasks" across a grid of nodes (cores).

I give multicore 5 years, max. A full "node" CPU that could replace a standard CPU probably won't appear within 10 years, IMO. Multicores might remain in place as a local control CPU, though. - Abstract theory - :-)

Merry Christmas

Rice 3496d ago

Once programs and applications get more complex, we might need those types of processors. Think about it: humankind is always changing, and so are technology and software. But right now, an eight-core CPU seems useless for our needs.

Salvadore 3496d ago

Is the Cell processor built around multi-core architecture, or does it have nothing to do with it?

LJWooly 3496d ago

Yeah, it has multiple cores, but I think this article is talking about processors with multiple full CPUs, not SPEs like the Cell has.

Lol, I CANNOT believe someone disagreed with you. It was a f*cking question. I swear, they just do it to cause problems...

Guwapo77 3496d ago

The Cell processor is a multi-threaded CPU. It has one main CPU with 8 mini helpers (one disabled on the PS3 for yields). The cool thing about the Cell processor is that you can add or subtract the number of SPEs assigned to a task.

True multi-core processors have multiple full cores, not additional helpers. If I took a dual-core chip and disabled one core, the computer would still run on the other core. With a Cell processor, if you disabled the CPU, the SPEs couldn't do the job on their own.

Ju 3495d ago

Let me throw in another opinion. Multicores (and their predecessor, SMP) are designed in a coherent way: each core is transparent in the sense that its ISA (instruction set architecture) is the same across the CPUs, and data and instructions can be shared dynamically between them. Multi-core now moves these CPUs closer together, onto the same die (the piece of silicon where the actual transistors sit), rather than using multiple separate CPUs. What makes "multicores" different from SMP is that they are not only on the same die, but they (obviously) use local interconnects (cache coherency is external for SMP), and they share the caches. That simplifies some cache coherency problems, but introduces new ones with each core added: you can imagine that two cores sharing a cache is "easy", four is a bit more trouble, and so on.

This cache problem is exactly why multicore will reach its boundaries the same way high-frequency CPUs did. At some point, it is simply not feasible to share that cache across n cores. Well, I have no idea at what level this becomes a real problem. I think POWER6 uses 16 cores already, and so does some SPARC (Niagara), but that's just what I remember so far. I also don't know whether those CPUs share their caches or have per-core caches - but nonetheless, you have to sync those caches across all cores to guarantee data integrity.
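The sync cost is easy to see in miniature. A hedged toy example (software only, not real cache hardware, and both function names are made up for illustration): one version funnels every update through shared state under a lock, the other keeps per-"core" private state and merges it once at a sync point - roughly why per-core caches scale better right up until they have to be made coherent.

```python
import threading

def count_shared(n_cores=4, per_core=10000):
    # every increment touches shared state under a lock --
    # loosely analogous to coherency traffic on a shared cache line
    total = 0
    lock = threading.Lock()

    def core():
        nonlocal total
        for _ in range(per_core):
            with lock:
                total += 1

    threads = [threading.Thread(target=core) for _ in range(n_cores)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

def count_private(n_cores=4, per_core=10000):
    # each "core" updates only its own slot; results are reconciled once,
    # like flushing per-core state at a single sync point
    slots = [0] * n_cores

    def core(i):
        for _ in range(per_core):
            slots[i] += 1

    threads = [threading.Thread(target=core, args=(i,)) for i in range(n_cores)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(slots)
```

Both produce the same total; the difference is how often the "cores" have to agree with each other along the way.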

Well, compared to that, the Cell is a pretty new idea. It does not have multiple cores; it has one (on the roadmap I think I remember seeing a 2-core-PPU Cell with probably 12 or 16 SPUs). "Multithreaded" has nothing to do with it. A common CPU usually has 2 or 3 special execution units, those being integer, floating point, and (sometimes) vector units (SSE or VMX or the like - early SSE was actually a reconfigured FPU). Multithreading in that respect usually means the execution units can run in parallel. That's at least the case on the Cell (that is, you can execute an integer and a floating-point instruction at the same time). Basically it depends on whether the instruction pipelines end in their own execution units; if so, each pipeline can execute at the same time (e.g. multiple integer units or the like). Maybe somebody has a reference to a more accurate description - this is just the basics. These are not full CPUs, but rather parts of the same CPU executing at the same time, while "multicore" means real multiple CPUs that always run in parallel.

The idea of the SPUs was to reduce the risk of cache coherency problems. That's also why the SPUs come with neither their own cache nor direct memory access. The SRAM is what they call a "software-controlled cache". You could abstract the SPUs to behave like real cores, but all load/store operations would need to be simulated in software, and a jump cannot reach outside the SRAM address space. Now, I have no idea how far these new "SPU" kernels are, but they already hide a lot of those issues and let you treat the SPUs like real cores. This is confusing, because in the traditional sense the Cell is not a multicore CPU; however, the new Cell kernel(s?) make it a virtual multicore CPU.
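A hypothetical sketch of what "software-controlled cache" means in practice: the program, not the hardware, decides what gets copied into the small local store before any computation can touch it. (The names and the tiny store size here are inventions for illustration; a real SPU local store is 256 KB and transfers happen via DMA.)

```python
LOCAL_STORE_SIZE = 16  # tiny on purpose, just to force streaming

def spu_process(main_memory, kernel):
    """Stream data through a simulated SPU local store.

    The simulated SPU may only operate on data that was explicitly
    copied into its local store (a software-managed "DMA"), never on
    main memory directly -- so there is no cache to keep coherent.
    """
    out = []
    for offset in range(0, len(main_memory), LOCAL_STORE_SIZE):
        local_store = main_memory[offset:offset + LOCAL_STORE_SIZE]  # "DMA" in
        out.extend(kernel(x) for x in local_store)                   # compute locally
    return out                                                       # "DMA" results out
```

Real SPU code double-buffers these transfers (fetch the next block while computing on the current one), but the principle is the same: the software schedules every byte that enters the local store.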

The second argument for the Cell/SPU was size. Cache and the additional control logic need space; the SPUs are pretty small, and not a lot of additional logic is required. This is also why it scales very well, keeping die size low and yields (?) high. There are still nine CPUs on that chip, on a die smaller than its current x86 counterparts.

So the Cell and GPUs share the same approach: a high number of small, special-purpose CPUs. That's what I called "massively parallel" above. The SPUs have the more general ISA, while the GPUs have very specific instructions. Sooner or later the two concepts, Cell and/or GPUs, will merge, and that will be these (what I would call) node CPUs. Intel has something similar, BTW.

GPGPUs showed that such a solution does not necessarily need to depend on the ISA. Both ATI and NVIDIA have introduced their own shading languages - not cross-compatible, admittedly - but they seem to work throughout each vendor's whole product range. Something like that could be the base for future node CPUs, IMO.

Could. We'll see. So I'd say multicore is a myth, same as frequencies. The technology, however, is not. I'm curious how this will eventually turn out.
