
AMD avoiding Larrabee route on road to CPU/GPU "Fusion"

At a financial analyst day on Wednesday, AMD gave out a little more detail on its "Fusion" plans, making the word "Fusion" the centerpiece of its marketing push for the post-honeymoon, post-GlobalFoundries AMD/ATI relationship. The first product to feature both a CPU core and a GPU core on the same die is codenamed Llano, and will appear in 2011. Llano features one or more Phenom-derived CPU cores combined with a GPU that supports DirectX 11 and OpenCL.

Read Full Story >>
arstechnica.com
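
(For the curious: on a part like Llano, a single OpenCL platform should expose the CPU cores and the on-die GPU as separate devices. A minimal host-side sketch using the plain OpenCL 1.x C API - nothing AMD-specific, and error handling omitted for brevity:)

```c
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint num_devices = 0;

    /* Grab the first platform and every device it exposes. */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);

    for (cl_uint i = 0; i < num_devices; i++) {
        char name[256];
        cl_device_type type;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
        /* On a Fusion-style APU you'd expect one CPU and one GPU entry. */
        printf("%s device: %s\n",
               (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU", name);
    }
    return 0;
}
```

Build with something like `gcc list_devices.c -lOpenCL` against a vendor SDK.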
Nihilism 3683d ago (Edited 3683d ago)

The only CPU of interest to me is the Sandy Bridge iteration in Q1 2011. In 2010 all we'll see from Intel is the 32nm dual-core Westmeres and the one Extreme Edition i9, clocked at 2.4GHz and costing $999 US... (no die shrink for quads in 2010)

AMD will have their six-core offerings, rumoured to clock at 2.8GHz, but an i7 with OC potential of 4GHz will offer much better performance than a more-core, lower-clocked CPU. The Dragon Age benchmarks are a great indication: games are becoming more CPU-bound again as multiplat development has slowed the graphical push. But quite often only 3 of 4 cores are used in games (they can't make too many threads or dual cores would crash and burn); until quad core is standard we won't see too many improvements.

Sandy Bridge is rumoured to reach up to 4GHz, along with 4-8 cores/8-16 threads, massive caches, etc. 2010 will be the year AMD plays catch-up while Intel does little to nothing, and in 2011 Intel will blow them back to the stone age. 2011 will be my next CPU upgrade... it may even be on a new chipset, so I'm not going to rush out, especially seeing how fast we went from DDR2 to DDR3. By 2011 we may have DDR4 for those CPUs, or just massively improved RAM and motherboard features - PCI-E 3.0 ;)

The second gen of X58 motherboards is rolling out now (Gigabyte's ID7), with SATA 3.0 and USB 3.0. The future of Intel is looking amazing; sadly for AMD, not so much. It would be devastating for all PC users if AMD/ATI were to go bankrupt, but it looks like AMD is holding out till 'Bulldozer' in 2011 for some real competition... a long, painful wait. Hopefully ATI's RV900, coming mid-to-late 2010, will also win back some of the ground Nvidia will have swept from under them. Competition = smiles and sunshine.

EDIT: Future AMD server CPUs and notebook 'Bulldozer' CPUs seem promising, but they are a minority of the PC market compared to desktop CPUs.

kaveti6616 3682d ago

Well, right now I'm writing a paper on the evolution of transistors, and I'm at the part where I talk about integrated circuits. I'll offer my prediction that the centralization of components onto a single die is indeed something that will eventually happen. Once the GPU and CPU occupy the same chip, it is more likely, as you say, that the CPU will be able to take up the bulk of the graphics processing, leaving the GPU to maximize on special graphical procedures.

Nihilism 3682d ago (Edited 3682d ago)

I'm not convinced that they will ever merge completely. I'm absolutely sure they will merge, but only for mid-level graphics requirements, laptops, etc., because a combined unit could never offer the same level of customisability.

For example: someone doing 3D rendering of course needs a powerful CPU, but more importantly they need a great GPU. If that meant buying the best combined CPU/GPU money can buy just to get the right GPU power, they would pay a huge premium, whereas buying a moderate CPU and a separate high-end GPU will always be cheaper.

More and more as time goes on they will trade off tasks between the two and better complement each other (or both will have similar capabilities, with one superior to the other, and the two will compound).

But a complete merger is not good for consumers or for performance. Only in the low/mid segment will it see usability or even demand. Of course that's the majority of the market, but it's not good for my needs.

Intel is having enough trouble cramming 12MB of cache onto their CPUs, never mind GPU memory and shader units as well. It will be a long time (10 years, maybe) before we see combined CPU/GPUs capable of current (at time of release) gaming, but it will start at the end of next year... the baby steps.

iMad 3682d ago

Billdozer wll be used in the next XBOX in 2011...the advanced version of it...Larrabe aprouch will be skeeped by devs,because MS and AMD got all majour developers and male good eco-system for them to stay no-Larrabe architecture.

Nihilism 3682d ago (Edited 3682d ago)

"iMad - 4 days 23 hours ago
360 > PC versions
"

Forgive me if I don't take you seriously, you console retard. Not only are you playing a console, but you're playing the worst console there is. You don't know anything about PCs or their components, so exit stage left.

Thumbing down things you know nothing about - tsk tsk.

"Billdozer wll be used in the next XBOX in 2011...the advanced version of it."

Proof of your stupidity: the first versions will only be out in Q1/Q2 2011, so there will be no 'advanced version' in the next Xbox...

also lern 2 grammarz frenchie

LostDjinn 3682d ago (Edited 3682d ago)

Back to your corners, boys. Round one goes to dchalfont. That was brutal.
dchalfont, the only thing I see this being useful for is mid-to-low-end laptops, netbooks and anything running on limited battery power. It'll be great on power because the CPU and GPU share the same silicon real estate. Due to the architecture, overclocking will be tricky, though; different clock speeds on processors sharing the same bed are really going to mess with one's tweaking. They'll have to use side porting on the GPU or they're very likely to get yesterday's output at tomorrow's price point. I see this as being AMD/ATI's answer to Intel's "onboard" graphics BS.

Anyway, I'll let you two get back to it. Sorry to interrupt. ;)

Nihilism 3682d ago

haha cheers, and I agree about the laptop point. I can only ever see it taking off with either low-end business PCs that need some GPU element (even OSes need a GPU these days) or laptops. Like I said, with an integrated part you would have either a CPU bottleneck or a GPU bottleneck, and it would be an expensive waste to throw away essentially both your CPU and GPU just to upgrade one or the other.

I can see Nvidia's idea of offloading the CPU onto the GPU being far more efficient than Intel's and ATI's mixed versions. CPUs just aren't equipped for graphics yet, whereas a GPU just needs to be programmed for (and Nvidia is progressing this way with their Fermi cards). The main reason is that current GPUs are basically massively parallel processors (the next Nvidia card will have 512 cores :D ), so given time for the code to be written for it, it could obliterate the need for a CPU altogether. Well, I'm sure there would need to be a minuscule one to power basic functions, but for the most part the GPU could take all the weight; that's what I hope to see, anyway.

GPUs are progressing a lot faster than CPUs are. Like I said at the top, my next CPU upgrade isn't coming for more than a year; there isn't a lot of point. I could upgrade to an i7 from my 3.2GHz C2Q... and gain a whole 1 frame per second in one or two games... and that difference is negligible for what I use it for.
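
(An aside to make the "massively parallel" point concrete: below is a hedged sketch of a data-parallel OpenCL kernel - the sort of thing both Fermi-class cards and the Fusion parts are meant to run. Each of those hundreds of cores executes one copy of the kernel on its own array element; the kernel name and signature here are purely illustrative.)

```c
/* One work-item per array element: the GPU launches thousands of
 * these in parallel, while a CPU would have to loop over them with
 * only a handful of cores. This is why throughput-style work maps
 * so well to GPUs and branchy serial work does not. */
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global float *y)
{
    size_t i = get_global_id(0);  /* this work-item's global index */
    y[i] = a * x[i] + y[i];       /* y = a*x + y for one element   */
}
```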

LostDjinn 3682d ago

The only thing holding GPGPUs back at the moment is the damn mobo bus speed. As you'd be aware, bus speed dictates CPU-GPU interaction. You won't be able to squeeze the most out of the upcoming cards until we get better bus speeds.

After that it's all gravy!

Nihilism 3682d ago (Edited 3682d ago)

Yeah, CPUs can pretty much take anything you throw at them; voltage is irrelevant when you have proper cooling, since it's the heat from the increased voltage that damages the CPU, not the voltage increase itself. i7s showed such good OCing potential because the higher multiplier alleviated the low FSB wall. The next motherboards will hopefully take the standard to well over 2000MHz FSB; I think the last Nvidia boards already had 2000MHz standard.
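
(A back-of-the-envelope illustration of the multiplier point, using the stock i7-920 numbers since those are public; the 191MHz base clock below is just a plausible overclock picked for the example, not a guarantee:)

```c
#include <stdio.h>

int main(void)
{
    /* Core clock = base clock (BCLK) x multiplier.
     * Stock i7-920: 133 MHz x 20 = 2.66 GHz. */
    double bclk_mhz = 191.0; /* hypothetical overclocked base clock */
    int multiplier  = 20;    /* i7-920's stock multiplier           */

    /* 191 x 20 = 3820 MHz, i.e. ~3.8 GHz, without touching
     * the (locked) multiplier at all. */
    printf("core clock: %.2f GHz\n", bclk_mhz * multiplier / 1000.0);
    return 0;
}
```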

PCI-E 3.0 should also increase the max PCI-E clock to 200MHz from 100. Clock speed is currently the slowest GPU element, and also the most irrelevant... but needless to say, faster is always better...

LostDjinn 3682d ago (Edited 3682d ago)

I'm hoping for above 2000MHz on the FSB - a lot more. No bottleneck is a good bottleneck. Casting parallels from the CPU to the GPGPU on top of the current workload through anything smaller is going to start limiting performance. The fact that PCI-E is 100MHz or 200MHz really doesn't mean crap: it's CISC-based, so it's large packets at slower clock speeds. If they keep the packet size the same and up the MHz, then that would be something to see. You sort of have to go at it differently by looking at throughput rather than clock speed.
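
(To put rough numbers on the throughput-versus-clock point: here is the standard arithmetic for a PCIe 2.0 x16 slot, the current graphics interface - 5 GT/s per lane with 8b/10b line coding, so only 8 of every 10 bits on the wire are payload. A toy calculation, nothing vendor-specific:)

```c
#include <stdio.h>

int main(void)
{
    double gt_per_s   = 5.0;        /* PCIe 2.0: transfers/s per lane (G) */
    double efficiency = 8.0 / 10.0; /* 8b/10b encoding overhead           */
    int    lanes      = 16;         /* x16 graphics slot                  */

    /* bits/s -> bytes/s, hence the divide by 8 */
    double gbytes = gt_per_s * efficiency * lanes / 8.0;
    printf("PCIe 2.0 x16: %.1f GB/s per direction\n", gbytes); /* 8.0 */
    return 0;
}
```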

You're out of bubbles, bud. If you want to keep talking we'll have to take it to the open zone. O.o
No! Not like that. :P

Nihilism 3682d ago (Edited 3682d ago)

Haha, I get what you're saying, but my knowledge is limited; I'm going off things I've read. I've seen people claim that GPU clock speed was limited by the PCI-E frequency (not directly, and I know there is no multiplier with GPUs), but as the clock speeds are increasing so little compared to every other spec, I thought it possible. It's like increasing a GPU's bus width but not increasing the clocks or core efficiency. I thought it was strange that Nvidia lowered the bus width for the Fermi cards, but I believe the MIMD instruction set is mostly responsible: no more brute-force processing of the days of old, now a delicate dance between the core clusters. I have every reason to believe that the fastest single-GPU card in the next Nvidia line will double the performance of the GT200 series. There's supposed to be a demo of it on Tuesday, so hopefully we get some leaked info... desperate times call for desperate measures...

I have a money box for saving for new computer components... that's what I get for quitting my job, I suppose, but service station work wasn't exactly fulfilling ;) Fingers crossed I have enough for a new GPU come January.

LostDjinn 3682d ago (Edited 3682d ago)

Your actions do that. I enjoy the fact that your comments make sense.
Good luck with your savings. I hope you get there.
As for your knowledge, you seem to have more than certain others I could name.
Now, to what you were saying: yes, I'm very much looking forward to Fermi. It has the chance to really make a big difference. Letting your CPU have Tflop-sized legs on top of its current performance is going to bring tears to the eyes.

2DXtreme 3682d ago (Edited 3682d ago)

Maybe you could clarify something for me.

Didn't they essentially get rid of the FSB limitation with the Core i7?
If I recall, when they first came out, one of the things they used to point out was that the FSB was no longer an issue on Core i7 systems, in part due to the memory controller integration.

For example, on my Rampage II Extreme mobo, the FSB is listed as QPI 6.4 GT/s. With a value that high, how can the FSB be an issue?
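
(For anyone converting that spec-sheet figure: QPI moves 2 bytes per transfer in each direction, and both directions run simultaneously, so the published numbers work out as below - which is a big part of why it compares so favourably with the old shared FSB. A toy calculation:)

```c
#include <stdio.h>

int main(void)
{
    /* QPI at 6.4 GT/s carries 16 data bits (2 bytes) per transfer,
     * per direction, with both directions active at once. */
    double qpi_gbs = 6.4 * 2.0;  /* 12.8 GB/s each way */

    /* The fastest front-side bus (1600 MT/s, 64 bits wide) managed
     * 12.8 GB/s TOTAL, shared between reads and writes. */
    double fsb_gbs = 1.6 * 8.0;

    printf("QPI: %.1f GB/s per direction (%.1f GB/s aggregate)\n",
           qpi_gbs, qpi_gbs * 2.0);
    printf("FSB: %.1f GB/s total, shared\n", fsb_gbs);
    return 0;
}
```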

Also, regarding the i9, isn't that supposed to be a 32nm, six-core CPU that is supposed to be released sometime in 2010? I thought that was supposed to be the last top-of-the-line chip for X58-based systems, basically replacing the i7 975. After that, Sandy Bridge will be introduced.

Did you mean 3.4GHz for the i9?