
Cell is no longer HPC material

As the news broke at Heise.de, you could almost feel the Cell collective - that is, the Sony PS3 developer community and gamers - reel in shock at the sharp, jagged bits of an interview with IBM's Deep Computing VP, David Turek, saying Cell was to be no more. Of course, since he is IBM's VP of HPC, he was only talking about HPC rather than everything else.

Read Full Story >>
theinquirer.net
MetalGearRising3183d ago

Cell = Loss of Profit
Cell = Last Gen
Cell = 3RD Place (PS3)
Cell = Multi-Platform titles will suffer
Cell = Sony's Nightmare.

Mista T3182d ago

Good God, have you nothing to play? Instead you're pouncing around this site saying how your console is so good when you don't play it at all!

ssipmraw3182d ago

Did you know that there have been Metal Gear games announced for dying consoles which, of course, never came out? The N-Gage, for example. See the trend?

HolyOrangeCows3182d ago (Edited 3182d ago )

MetalGearRising = Has no games to play
MetalGearRising = Only has an Xbox360
MetalGearRising = trolls instead
MetalGearRising = Favorite game is the stock page
MetalGearRising = Sanity's Nightmare

MAiKU3181d ago (Edited 3181d ago )

I come home from work. Read an article about supercomputer PS3s. I see a stupid bot who obviously didn't read the article. I smell his fear. And it smells wonderful.

The army bought a bunch of PS3s for supercomputing purposes, not 360s... those can't even do that. It'd be like a big wall of red and green lights. Hahahaha!

Raptura3181d ago (Edited 3181d ago )

Sounds exactly like all the Sony fanboys who come here, on every article, commenting about how good the PS3 and its exclusives are, and disagreeing with comments which go against Sony or the PS3. What the hell, guys? Have you got nothing to play?

baum3181d ago

MGS:R has no life. Seriously, this hypocritical moron just said "HALO IS THE BEST GAME EVER, YOU SONY FANS HAVE NO SAY" and then he goes and trolls all the PS3 articles. Lmao, what a sad life.

Probably a pimple-faced 14-year-old who is never going to get laid.

n to the b3181d ago (Edited 3181d ago )

The 3rd place comment would almost make sense if I give you the benefit of the doubt on your '1 yr ahead' argument. But "Xenon = Multi-Platform titles will suffer" makes absolutely no sense, since the vast majority of multiplats look/run better on 360.

@MetalGearRising: happy now? See how behaving like the worst of the Sony trolls doesn't really help things?

darkequitus3183d ago

This was obvious. The flexibility of GPUs, etc. Now that OpenCL is standardized and being adopted by everybody, CUDA can go off and die somewhere.

Santa Hirai3182d ago (Edited 3182d ago )

Xenon= Loss of Profit (9 Billion)
Xenon = Last Gen
Xenon = 3RD Place (xBOX360)
Xenon = Multi-Platform titles will suffer
Xenon = M$'s Nightmare

Frankenberg3181d ago

I read an article on this site last week that said Sony was not going with the Cell for the PS4, in favor of an easier CPU to program. Something about wanting to make the devs happier.

What this means is, the old Sony that was built on backwards compatibility is forever lost in these new generations.

sikbeta3181d ago

I've been saying this from the beginning: CELL is the way to GO if we want to see an evolution in computer programming

gaffyh3181d ago

@1.1 - Doesn't it make more sense for Sony to stick with a Cell-type processor? Devs will be used to the architecture by the time PS4 comes around, meaning they won't have to go and learn a brand new one again.

Nihilism3181d ago

Wouldn't it make more sense for them to use a universal, scalable, easy-to-program-for, efficient architecture? If they did, there wouldn't be a big divide between exclusives and multiplats; multiplats would look and run better, as would exclusives.

FantasyStar3181d ago (Edited 3181d ago )

It's not so much about Cell, but what Cell represents: what we, as gamers, want in our games for the future. The answer being "truly multi-threaded" apps.

It's the next evolution of computing and needs to happen sooner rather than later. We've been at this for 5 years and barely made progress! Not enough support from the industry yet. I was really happy with Cell because it represented the push that the industry needs. The rumor of the PS4 going with another CPU worries me, but that's years away, so my worries might be unfounded. Just no more tri-core CPUs... c'mon now.

Nihilism3181d ago (Edited 3181d ago )

Most PC games rarely use more than 3 cores, so a tri-core is more than sufficient. DVD9 is the main limiting factor of the Xbox; it also has a more powerful GPU than the PS3.

EDIT: disagreeing with fact?

jack_burt0n3181d ago

Wow, ignorance seems to be your speciality, chalfont. The PS3's structure is completely different. You will see more and more games using 8 threads as the ICE team gets more and more revisions in.

The cell is too expensive and powerful to be that mass market, and the skill involved in working with it is reflected in the hiring salaries for dedicated PS3 devs, but it is pushing the industry more than any other development.

6 threads on the SPEs (of the 8 SPEs, 1 is turned off and 1 is reserved for OS security), plus 2 hardware threads on the PPE.

FantasyStar3181d ago (Edited 3181d ago )

You think so? I think some of the games look and play pretty impressively for games on a DVD9. I was pleasantly surprised by the length and quality of games like Gears of War 2 (except the last boss fight).

I can believe that PC games prefer to use 3 cores because they keep the 4th for general purposes like background apps and such. But when I boot up World in Conflict, I forget what I previously stated. That, and the fact that not all PC games stress the 3 cores 100%, so there's always idle processing readily available.

Nihilism3181d ago (Edited 3181d ago )

"The cell is too expensive and powerful to be that mass market "\

That right there is the dumbest excuse i've heard in a long time. You're trying to say the cell is too expensive to be mass market.....even though there are many cpu's that cost more than an entire ps3. The cell is cheap. That's why it's in a $300 console. Maybe the reason is because the cell is flawed as a standard processor. A standard quad core is much better for gaming. If the ps3 had a 'regular' cpu games would run much better.

Keep telling yourself that the cell is uber tech though...I had a good laugh when someone tried to say that a server based cell costing $7000, was the same as the one they use in the ps3, i suppose you think something similar. Dumb sh!t

also SPE'S ARE NOT CORES, by your reasoning I could say an i7 is an 8 core processor..it is not it has 4 physical cores ( the cell has 1), and it has hyperthreading. But the hyperthreading adds more threading capability, but it's still more limited than an 8 core cpu would be.

The cell will NEVER be mainstream tech because of it's limited functionality.

kwyjibo3181d ago

"The cell is too expensive and powerful to be that mass market"

I'm just going to join chalfont in calling you an idiot. If your statement were true, Sony wouldn't be using it in their machine.

It has not been pushing the industry more than any other development. The future is clearly symmetric multiprocessing (SMP) coupled with a GPU. The CPU is staying SMP; it's the GPU that's evolving to take onboard GPGPU functions, via things such as OpenCL and Intel's Larrabee project.

I hate that people think that just because an architecture is more obtuse to program for (a disability), that makes the thing better.

ProblemSolver3181d ago (Edited 3181d ago )

@sikbeta: "I'm telling this from the beginning, CELL is the way to GO if we
want to see an Evolution in Computer programing"

Couldn't have said it any better. It seems that the world isn't ready for real
parallel programming utilizing an explicit memory interface (DMA'ing). That's
odd. The way to program the Cell processor is the way parallel programming must
have been done. But this requires real knowledge. The current state of affairs
is that most programmers lack real parallel programming knowledge such that the
industry has to fall back on implicit memory architectures and user-space
unified architectures that do rely on good compiler back-ends to squeeze the most
out of it (via auto-parallelization). But this will never be as efficient like
the other way around since those back-ends can only parallelize rather trivial
constructs. In general, a compiler back-end on a unified architecture essentially
tries to mimic how one would program the Cell processor. But they do this
without any domain information and hence can only parallelize trivial workloads.
As a result, the give hardware / program won't perform as efficient as it could
be, leading to a waste of resources. That's the price to pay for making
programming on parallel processors easier. The Cell processor has shown (shows)
how the 'real world' has to be threaded. From an architectural point of view
this processor is far ahead of its time. What lacks behind is software (intelligent
libraries) and knowledge. A long way to go....
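
To make the "rather trivial constructs" point concrete, here is a toy C sketch (hypothetical buffers, not from any particular codebase):

    /* The first loop is the kind of trivial construct an auto-parallelizing
     * back-end can split across cores by itself: every iteration is
     * independent. */
    void trivially_parallel(float *out, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = a[i] * b[i];        /* independent iterations */
    }

    /* The second has a loop-carried dependency: each step needs the previous
     * result. Without the programmer's domain knowledge (e.g. rewriting it
     * as a parallel prefix sum), no back-end can naively parallelize it. */
    void loop_carried(float *acc, const float *in, int n)
    {
        for (int i = 1; i < n; i++)
            acc[i] = acc[i - 1] + in[i];
    }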

Ju3181d ago

It doesn't really matter. Sony had the right idea at the time: a main general-purpose core and offloaded special processors for all kinds of computing tasks (vector processors for gaming). At the time, GPUs were just that - graphics chips - but now they are getting closer and closer to becoming those vector processors (with more general-purpose performance due to shader processors).

What's more, with OpenCL there is basically a "virtual" platform for exactly that purpose. It doesn't really matter which chip executes OpenCL code. This could even run on the CELL/SPU (or even mixed on SPU + shader processors), or just the CPU, or mixed on CPU/GPU.

The computing model is far closer to what the CELL is today than to what CPU+GPU was in the past. But the GPUs are taking over the SPUs' number-crunching tasks. So, basically, the CELL idea was very successful - but as a programming model rather than a HW solution.

It's just understandable that IBM moves further in that direction (especially because IBM does not own the SPUs, only the PPC core of the CELL).

Also, OpenCL is "OPEN", not an AMD-owned property (and, that said, it's basically CUDA for AMD, since they had no C-like programming environment). On NVidia this was already there (CUDA is a full C-type interface to shaders); OpenCL more or less just wraps around CUDA on NVidia.

And, well, even OpenCL needs to be optimized for the proper HW below (bandwidth differences, alignment restrictions, etc). But over all, it's a virtual ("SPU") vector-CPU cluster; you can even run OpenCL code in "simulation mode" on the CPU.

I can imagine, however, that we won't see a follow-up model of today's CELL CPU. Pity, that is. Somebody has to write an "SPU" emulator in OpenCL at some point...

DaTruth3181d ago (Edited 3181d ago )

I have to side with ProblemSolver on this. The problem is not the hardware, but the lack of software to program for it. I think IBM realized that if getting software developers to move over to the Cell's 8-SPE processing was difficult, moving to 32 SPEs would be impossible.

This tech has been here for 4 years and has proven its worth in benchmarks and applications, and we still see the programming community fighting it tooth and nail! (Not just in gaming.) This is the reason scientists are creating PS3 clusters with Linux and lone programmers are blazing trails on their own: they are not willing to wait for the giant corporations and the rest of the world to play catch-up!

IdleLeeSiuLung3181d ago (Edited 3181d ago )

I'd like to point out this 2-year-old article:

http://www.ddj.com/architec...

It explains in detail how the programmer wrings 22x performance out of breadth-first searches compared to a single-core Pentium IV, comparable to a BlueGene/L supercomputer.

The problem? It took 20x as many lines of code as x86: the Pentium IV code took 60 lines and the PS3 code a massive 1,200, because the problem had to be split into bite-size calculations for the SPEs.

Despite the 22x performance gain, there is an explosion in code complexity on a very simple problem. The cost is shifted over to manpower instead. So take it with the good and the bad!

Guitarded3181d ago

40 GIG PS3, $400. 120 GIG PS3 Slim, $300.

Watching the MANY fervent PS3 supporters finally realize the CELL is not the future, priceless!

Ju3181d ago

@IdleLeeSiuLung

You pointed out a 2-year-old article. This has nothing to do with the CELL, but with how you'd need to redesign the algorithm to run effectively on a parallel system.

This will not change. This is the way to go. No matter if CELL/SPU or OpenCL, you need to split vector and scalar parts, make sure some parts can run highly in parallel, and implement those on the proper programming environment.

CELL was ahead of its time. The only simplification is OpenCL, which you can see as some sort of "parallel OS" that simplifies "vector processor" access.

But it's a smaller step from SPU to OpenCL than from scalar to vector.

OpenCL is considered by some a solution to the non-coherent programming models introduced with things like CELL/CUDA and other shader languages. It gives you one unified C/C++ interface.

Pain3181d ago

So why would programming the Cell be any different? You want rewards? You put effort into it, and that results in greatness.

The proof is in the pudding, and the pudding tastes great.

frostypants3181d ago (Edited 3181d ago )

I find it amusing that you love to troll these PS3 threads when your own platform, the PC, really IS dying as a gaming solution.

And yes, I do game on a PC as well. But your rabid anti-console bias is childish. Move on.

The Lazy One3181d ago

Like Ju said, there will still be huge opportunity for multi-threading and parallel processing with a GPGPU architecture. That isn't being abandoned, as it clearly has huge potential strides in performance; there is purely a change in the hardware model they're using to accomplish it.

To be frank, I'll take the word of the doctors at IBM on which is the more productive model over the word of a bunch of forum-goers.

MGSR THE HD VERSION3181d ago

Simplicity, efficiency, and power are the future.

And the technology as we know it this gen will no doubt look last-gen compared to 2013 stuff; that's Moore's law for you.

I'd rather see these console developers investing heavily in breaking new ground in research and development.

A gaming console should be made easy for developers to develop for and for gamers to get into.

hay3181d ago

Sony no longer puts the Emotion Engine in consoles. Are they doomed? Do you think they will be doomed even if they stop using Cell? Think again.

There are experts at Sony who know what to do a bit better than a bunch of zitted fanboys. I'd suggest you worry more about getting rid of your virginity than about getting into Sony's business.

Be it Sony, MS or Big N, they all know what they're doing, and none of them's gonna drop out of this generation against all your fanboy hate.

LostDjinn3181d ago

How many of you read the article?
David Turek makes mention of OpenCL being part of the new focus. A big selling point of OpenCL is that when coding you have the ability to write once, execute anywhere. (It doesn't quite work that way in practice, but it's still a better system than the one it's replacing.)
Large numbers of cores don't require mad coding skills when using OpenCL. Parallel compute workloads can be tasked to the CPU or GPU; it doesn't matter. Starting to see how it works yet? New hardware is required, though. What's IBM going to produce? I don't know, and neither does anyone else outside of IBM at the moment.
That said, who here thought IBM would create the Cell and say "alright, we can stop now, this is as good as it gets"?
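
To make the "write once, execute anywhere" point concrete, here is a minimal host-side sketch (illustrative only; it assumes an OpenCL 1.x implementation is installed and omits all error handling):

    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);

        /* Swap CL_DEVICE_TYPE_GPU for CL_DEVICE_TYPE_CPU and the very same
         * kernel source below is compiled for and run on the other
         * processor -- that's the selling point. */
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

        const char *src =
            "__kernel void scale(__global float *v) {"
            "    v[get_global_id(0)] *= 2.0f;"
            "}";
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);

        printf("kernel built for the selected device\n");
        return 0;
    }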

ProblemSolver3181d ago (Edited 3181d ago )

@Ju:
Yeah, OpenCL is actually a good move after about 25 years of trying to build a computer language/framework for parallel architectures. I've never been into Nvidia's CUDA, since it is proprietary and, even worse, it's an augmented language serving only the purpose of their (Nvidia's) products. IBM recently released an OpenCL back-end that compiles for the Cell processor, which is nice. However, just throwing code at OpenCL in the hope that the units get utilized efficiently won't work as easily as one might think. Good knowledge of parallel programming is required as well. But OpenCL will make the transition much smoother for most programmers. I think Cell could have done much better if frameworks like OpenCL had existed before the Cell processor was released.

Programming the Cell processor directly can be compared to assembler programming on a generic processor, meaning one has to build the load/store instructions oneself. The Cell processor can be viewed as an extension of a native single-core processor where the load/store operations for the various units are fixed. Hence, Cell takes the load/store concept to a higher level. The 'problem' with Cell is that it exposes the load/store concept explicitly to the programmer, similar to how assembler programming does on a general CPU, which makes it harder to program for.

The reason for high-level programming languages like C was not only to get away from hardware dependencies, but also to abstract the load/store mechanism via simple assignments like a = b. The compiler then takes care of generating the needed load/store instructions, and it's up to the processor to load, assign, and store the data efficiently. The cache concept was introduced to enhance those operations. But there is a drawback to the cache concept when implemented in hardware: it makes the whole memory interface implicit, i.e. each load/store operation (data) has to go through the cache (on Pentium processors, for example). Depending on the code, this can work out quite well, but it suffers when one already knows what data is needed next -- which is most often the case for graphics code (streaming large chunks of data, etc.). The problem here is that one cannot explicitly call in a chunk of data ahead of time, since the caches are not programmable. This has led to deficiencies for graphics code on implicit (cache-) memory architectures. Intel tried to circumvent this problem by introducing so-called memory 'prefetch' instructions, such that one can prefetch a chunk of memory into the cache ahead of time. However, those prefetch operations are not always executed; they are only a hint to the processor.

The most efficient programs on an Intel processor are those that are programmed the way one would program an explicit memory architecture. If one looks at efficient graphics C code, one can see that the data is organized in such a way that it can be loaded ahead of time (and cache-aligned), tweaking the implicit memory architecture.
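
To make the prefetch point concrete, a small C sketch (using the GCC/Clang __builtin_prefetch builtin, which maps to those prefetch instructions; the 16-element lookahead is an arbitrary illustrative choice):

    #include <stddef.h>

    /* Streaming over a large array: we know exactly what is needed next,
     * so we hint the cache ahead of time. As noted above, the prefetch is
     * only a hint -- the processor is free to ignore it. */
    float sum_stream(const float *data, size_t n)
    {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&data[i + 16]);  /* call in data early */
            sum += data[i];
        }
        return sum;
    }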

How does this compare to Cell?
On Cell the memory interface is explicit. One can load and store data whenever one wants. For example, one can load new data into the memory of an SPE while the SPU (the computational unit of an SPE) computes on the current data. This technique is called overlays, double-buffering, or load/store latency hiding. It becomes possible because one can explicitly program the memory controller, which runs independently of the main computational units. Further, the small memory of an SPE doesn't work as a cache; hence, one can say exactly what data to put in and what not. This really enhances the computation of graphical and scientific workloads. For sure, there are some workloads (mostly integer-based) where a cache fits nicely. No problem: one can implement a cache in software and use the SPE's local store just as a cache. This has already been done by some programmers. There are even some libraries for Cell that allow one to program the Cell as a standard processor, abstracting away the explicit memory interface, trading speed/efficiency for ease of programming. OpenCL, MARS, and other libraries do such things.
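
A rough double-buffering sketch in the style of the Cell SDK's spu_mfcio.h interface (compute_on is a hypothetical stand-in for the actual work; alignment and size checks are omitted):

    #include <spu_mfcio.h>  /* Cell SDK: mfc_get and tag-status intrinsics */

    #define CHUNK 4096
    static char buf[2][CHUNK] __attribute__((aligned(128)));

    extern void compute_on(char *data, int size);  /* hypothetical */

    /* While the SPU computes on one buffer, the MFC (the programmable
     * memory controller) is already DMA-ing the next chunk from main
     * memory into the other buffer -- the latency hiding described above. */
    void process_stream(unsigned long long ea, int nchunks)
    {
        int cur = 0;
        mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);  /* fetch chunk 0 */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();                /* wait for it */

        for (int i = 0; i < nchunks; i++) {
            int nxt = cur ^ 1;
            if (i + 1 < nchunks)                  /* start fetching i+1 ... */
                mfc_get(buf[nxt], ea + (unsigned long long)(i + 1) * CHUNK,
                        CHUNK, nxt, 0, 0);
            compute_on(buf[cur], CHUNK);          /* ... while computing on i */
            mfc_write_tag_mask(1 << nxt);
            mfc_read_tag_status_all();            /* next buffer is ready */
            cur = nxt;
        }
    }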

One can view OpenCL as a high-level language, similar to the C <-> assembler example I've given above, for a parallel processor that exhibits a (not necessarily) explicit memory interface. The efficiency of OpenCL depends on its back-end and on how the programmer lays out his data. So there is hope that programming with OpenCL makes life much easier on sophisticated hardware.

But the Cell processor's architecture will always be the model for on-chip parallel systems. It can't get any simpler than a load/store architecture. Each processor, at its core, is a load/store architecture (RISC). The differences lie only in how far each vendor has abstracted the load/store concept away.

So we have seen that a general processor is of the load/store type. We have seen that a parallel processor, at its core, is of the load/store type. What about cluster computing (multiple computers linked together)? The same load/store concept also applies here. Cluster computing can be viewed as an extension of Cell, or Cell can be viewed as cluster computing on a chip. In the world of parallel computing on clusters, a similar split between explicit and implicit exists. At its core, data transfer in cluster computing is always explicit: messages/data are transferred from one node to another. There are libraries/frameworks like MPI that allow one to write explicit message transfers to utilize a cluster much more efficiently, at the expense of making programming a bit more difficult. On the other hand, there are also libraries/frameworks, e.g. OpenMP, that try to hide the explicit nature, allowing for ease of programming.
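
A toy sketch of that split, using the two frameworks just named (illustrative only):

    #include <mpi.h>

    /* Explicit (MPI): the programmer moves the data between nodes by hand,
     * just as an SPE program moves data by hand with DMA. */
    void explicit_transfer(int rank, double *chunk, int n)
    {
        if (rank == 0)
            MPI_Send(chunk, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(chunk, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    /* Implicit (OpenMP): one pragma and the runtime decides how the shared
     * array is split across threads -- easier, but the data movement is
     * hidden from the programmer. */
    void implicit_scale(double *v, int n)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            v[i] *= 2.0;
    }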

As you can see, we always have the following pattern:
(1) a general processor : type load/store
(2) a multi-core processor : type load/store
(3) a cluster computer : type load/store

The Cell processor was actually the first multi-core processor that exposes the explicit load/store architecture to the user. It closes the gap between points (1) and (2). If you look at it, points (1), (2), and (3) form a cascade. That is to say, once you have understood how a load/store processor in (1) works at its core level, you will also understand how a multi-core processor and a cluster computer work at their core level. This knowledge is essential to gain efficiency on each level, and it serves as a good guide when using libraries/frameworks that abstract the load/store mechanism away in favor of ease of programming.

The reason programming a multi-core processor is considered difficult is that most programmers' view is an Intel one. Intel has successfully hidden the load/store architecture away from the programmer. No matter how one programs on an Intel processor, the processor (its complex instruction set and its implicit memory interface) takes care of everything. However, there are reasons why Intel processors don't perform that well on graphical algorithms. Given the Intel view, it becomes quite difficult to apply the same view to a point-(2) processor in the hope of getting any good results out of the box. For example, Valve's programmers were actually so fixed on Intel's view that they didn't know how to program a point-(2) processor, the Cell, without help that would abstract its explicit nature away. So it's clear that most programmers have a lot of issues when faced with the Cell processor, since most of them have never understood how even their single-core processor works internally. Hence, the jump from point (1) to point (2) is rather big for many of them. Point (3) is actually the realm of scientists. They have no problem adapting their view to the Cell processor. Hence, it was no wonder that the Cell processor was considered a nice gift by many scientists for doing complex computation, since they knew from their cluster-programming experience how to treat such a processor. These programmers have actually shown the outstanding performance of the Cell processor, documented in various articles. But there are also some game developer studios who have overcome the Intel viewpoint of point (1). Naughty Dog (Uncharted 2), Guerrilla Games (Killzone 2), Studio Liverpool (WipEout HD), etc. have shown what one can get when leaving behind the old view of programming a single-core processor (with an implicit memory architecture). They have successfully shown that one can overcome the gap between points (1) and (2). Frameworks like OpenCL will smooth out the transition even for many other programmers. But anyway, knowledge of parallel programming concepts and parallel architectures is key. Libraries and frameworks are just what they are: tools.

Kalowest3183d ago

Let me get this right: IBM is teaming up with AMD to make some amazing CPU+GPU type stuff, and I'm not even at the funny part yet. IBM left Sony's side and the Cell to help AMD, who is working on the so-called rumored Microsoft "Phoenix" console project. I find this kind of weird... no, it all makes sense now: MS wants some of that Cell tech to use.

P.S. This is just my crazy-ass theory.

RememberThe3573182d ago (Edited 3182d ago )

I'm sure in Sony's mind they're saying, "Go for it; look how DVD worked out."

Look at how Sony does business: they don't just let themselves get screwed over. GTA4 broke exclusivity, and Sony got 3 more exclusives in its place. Nintendo screws them over, and they make their own console. Every time it seems they've gotten the short end of the stick, they turn it around and come out on top.

I'm sure IBM was by no means obligated to continue development of the Cell, but I'm also sure both IBM and Sony saw this coming and made the necessary business decisions so that it wouldn't hurt either side.

I'm sure Sony gets a cut of any use of the Cell technology, just as IBM and Toshiba do.

Plus, everyone seems to be pissed at Nvidia because they continue to screw people over. Word is they've been pissing off Sony as well, so maybe Sony doesn't have any issue letting IBM and AMD sucker-punch them in the gonads.

RedDragan3181d ago

Sony are one of IBM's biggest customers, if not IBM's biggest customer; both parties had probably decided this over a year ago. And if Sony is to keep with its apparent history in consoles, where the next one is ridiculously more powerful than the last, then they knew that IBM needed to be cut loose from Cell to develop something else that is truly special.

Is Sony gutted by this? Of course not. Whatever comes of this new IBM/AMD partnership will end up in the PS4, should the recent reports of the Intel chip be wrong.

sikbeta3181d ago

Knowing HOW powerful IBM is when we talk about TECH, they could insist on the CELL; even I can think of ways to keep making the CELL architecture better.

They could MAKE a hybrid CELL-GPU architecture, since that's what Sony was thinking when they put in the CELL + Nvidia GPU, or a quad-PPE architecture, kind of emulating conventional programming for the whiners while letting the other people use the SPEs.

The Lazy One3181d ago

That wouldn't make any sense. A GPGPU and the Cell serve practically the same processing purpose.

You'd get very little gain doing it that way, and you'd make it excessively complicated for the sake of being complicated. You may as well just use 2 Cells.

Kalowest3181d ago

AMD is working on the next Xbox hardware again, so it can be BC with 360 games. So IBM and AMD working together means whatever they make will have a good chance of being in MS's next console.

Magnus3182d ago

So what does that mean in English? Can someone translate it for me, plz?

ssipmraw3182d ago (Edited 3182d ago )

HPC = high-performance computing.
So basically supercomputer stuff, not really PS3-related, unless you're one of those people who build clusters out of several PS3s to create a supercomputer.

LeonSKennedy4Life3181d ago

I'm one of those people. : )

I hook up a good 45 of them in my room sometimes. It's just something fun to do on the weekends.

Guitarded3181d ago

Cell cost to performance ratio FTL!

baum3181d ago

It's actually the best on the market as far as CPUs go, dipsh1t; that's why it's in a $299 console. Your name suits you, GUI-tard.

ChrisW3181d ago

How about:

It takes approximately 8 months for just about any electronic technology to become obsolete (replaced by newer technology).