
HD 5870 Performance Comparison Tips Up

Some new photos as well

The Czechs did it again. Some HD 5870 performance numbers have popped up on czechgamer.com, and it seems the new Radeon is quite a performer.

In HAWX running in DX10.1 mode, it outperforms the HD 4890 by around 60 percent, just as Fudo predicted a few weeks back. Things look even worse for the GTX 285, which ends up only half as fast as the HD 5870. However, bear in mind this is a DX10.1 title - a feature Nvidia's current cards lack - so take the numbers with a grain of salt.

In 3DMark Vantage the HD 5870 is around 40 percent faster than the GTX 285 and around 60 percent faster than an HD 4890. The HD 5870 is also around 17 percent faster than the HD 4870X2 in both tests.
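
For context, here is a quick sketch of how such percentages fall out of raw benchmark scores. The scores below are made-up placeholders chosen to reproduce the reported ratios, not the leaked results:

```python
# Illustrative only: deriving relative-performance percentages from
# raw benchmark scores. These scores are placeholders, NOT the leaks.
scores = {
    "GTX 285": 10000,   # hypothetical 3DMark Vantage GPU score
    "HD 4890": 8750,    # hypothetical
}
hd5870 = 14000          # hypothetical

for card, score in scores.items():
    advantage = (hd5870 / score - 1) * 100
    print(f"HD 5870 is about {advantage:.0f}% faster than the {card}")
```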

The HD 5870 certainly offers a major improvement over the old RV7xx generation, but we're expecting Nvidia's so-called GT300 to offer an even bigger boost over current Nvidia cards.

You can see the benchmarks here, and a few photos of what is supposedly a stripped-down 5870 here.

Read Full Story >>
fudzilla.com
Major_Tom2956d ago

It's gonna be affordability versus raw power again.

Tekton0142956d ago

And the 295 is a dual-GPU card too, lulz. Waiting to see what Nvidia has now.

Samer3052956d ago

Wow, AMD/ATI really stepped it up.

Waiting to see what Nvidia's got.

OpenGL2956d ago

It looks like the 5870 is basically two Radeon 4890s on the same die, sharing a memory controller. It's a much smarter design than your typical dual-GPU solution since it behaves as a single GPU. Let's hope Nvidia releases something competitive so prices stay low!

Pandamobile2956d ago

The beast they call the 5870 is only going to be $400 at launch, too.

Quite frankly, I'm scared of what the 5870X2 will be capable of. It's got more processing power than like 3.5 PlayStation 3s, on one add-in PCI-Express card.

FantasyStar2956d ago

Oh Panda, you just had to mention the PS3. Prepare yourself.

Pandamobile2956d ago

It's a good metric for console gamers to grasp on to.

OpenGL2956d ago

If you're comparing GPU performance alone, the 8800 GTX was already 3x+ faster than the G71-based "RSX" and the R500 "Xenos" back in late 2006.

Pandamobile2956d ago

No, total PS3 system performance.

This GPU is 13x faster than the PS3's GPU.

OpenGL2956d ago

They aren't really comparable when you bring the CPU into the equation though. That's like saying that the 4870 is faster than the Core i7 975, despite the fact that they perform different functions.

Pandamobile2956d ago

I'm using total system output as my metric: FLOPS, floating-point operations per second. The PS3 has a peak output of about 1.9 to 2 teraFLOPS, and the 5870X2 should be able to peak at 6.5 teraFLOPS.
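
For reference, this is roughly how such peak figures are usually derived, sketched with the HD 5870's published specs (1600 stream processors at 850 MHz); the assumption that the X2 keeps the single-card clock is mine:

```python
# Back-of-the-envelope theoretical peak using the standard
# shaders x ops-per-clock x clock formula. Each ALU can issue a
# multiply-add, i.e. 2 floating-point ops per clock.
def peak_tflops(shaders, clock_ghz, ops_per_clock=2):
    return shaders * clock_ghz * ops_per_clock / 1000.0

single = peak_tflops(1600, 0.85)   # ~2.72 TFLOPS
dual = 2 * single                  # ~5.44 TFLOPS if the X2 keeps the same clock
print(f"HD 5870:   {single:.2f} TFLOPS")
print(f"HD 5870X2: {dual:.2f} TFLOPS")
```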

OpenGL2956d ago

That PS3 figure is so grossly exaggerated it's not even funny, and the ATI one is exaggerated as well. ATI is still using their superscalar shader architecture, so hitting the peak number of instructions per clock cycle per shader is very difficult - and that peak is what they used for their multi-teraflop performance estimate.
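
To put rough numbers on that packing problem: the 5870's 1600 ALUs are organized as 320 groups of 5, and the peak figure assumes all 5 slots issue every clock. The slot-utilization values below are illustrative assumptions, not measurements:

```python
# Effective throughput on a 5-wide "superscalar" (VLIW5) shader when
# the compiler can't fill every slot each clock.
PEAK_TFLOPS = 2.72  # HD 5870 theoretical peak

for slots_filled in (5, 4, 3, 2):
    effective = PEAK_TFLOPS * slots_filled / 5
    print(f"{slots_filled}/5 VLIW slots filled -> ~{effective:.2f} TFLOPS")
```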

When talking about OpenCL- or CUDA-style development, it's even less likely that peak performance can really be utilized. Folding@home would still be insanely fast on the 5870, despite the fact that it wouldn't push the GPU to its limits.
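
A rough way to see why: kernels with low arithmetic intensity hit the memory wall long before the ALUs saturate. This roofline-style sketch uses the 5870's published ~153.6 GB/s memory bandwidth; the vector-add kernel is a hypothetical example:

```python
# Crude roofline estimate: attainable FLOPS is capped by either ALU
# throughput or (bandwidth x arithmetic intensity), whichever is lower.
PEAK_GFLOPS = 2720.0    # HD 5870 theoretical peak
BANDWIDTH_GB_S = 153.6  # published memory bandwidth

# Single-precision vector add c[i] = a[i] + b[i]:
# 1 FLOP per element, 12 bytes moved (two 4-byte loads, one 4-byte store).
intensity_flops_per_byte = 1 / 12.0

attainable = min(PEAK_GFLOPS, BANDWIDTH_GB_S * intensity_flops_per_byte)
print(f"Attainable: ~{attainable:.1f} GFLOPS "
      f"({attainable / PEAK_GFLOPS:.2%} of peak)")
```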

aGameDeveloper2955d ago

I agree that comparing GPU FLOPS to CPU FLOPS is, at present at least, pretty useless - different strengths, different application domains. It would be tortuous to get the original Unreal or Doom running on nothing but this new GPU (though you would still need the CPU to interface with the OS and system hardware), yet 100 MHz CPUs with about 1/50 to 1/100 the horsepower of modern CPUs managed it - without any GPU.

aGameDeveloper2955d ago (Edited 2955d ago )

Suppose I design a computer (call it "Deep Thought") to compute the Meaning of Life, the Universe, and Everything - and set some FLOPS records... I could create an integrated circuit consisting of 50,000 floating-point adder circuits (at about 20k transistors each, and with none needed for memory, that gives us 1 billion transistors, fewer than modern processors), all computing the same operation (using the same inputs, "20" and "22") in parallel. It should run at anywhere from 5 GHz to 10 GHz (due to their simplicity). So we're looking at a minimum of 250-500 teraFLOPS of raw processing power! But will it run "Crysis"? Of course not; it only computes "42".

BTW, I am not an IC architect. I got the 20k-transistor and 5-10 GHz figures for my "design" from http://www.giacomotto.com/d... (scaling up the frequency due to improvements in process technology since 2007).
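
For anyone who wants to check the arithmetic, a quick sketch plugging in the comment's own numbers:

```python
# Sanity check of the "Deep Thought" figures above.
adders = 50_000
transistors_per_adder = 20_000
print(f"Transistors: {adders * transistors_per_adder:,}")  # 1,000,000,000

for clock_ghz in (5, 10):
    flops = adders * clock_ghz * 1e9  # one add per adder per cycle
    print(f"At {clock_ghz} GHz: {flops / 1e12:.0f} teraFLOPS")  # 250 / 500
```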

commodore642955d ago

@ above...

lol

I wonder how many people will get it?

FantasyStar2956d ago (Edited 2956d ago )

Those numbers are impressive, but it all means nothing if people can't afford it. It doesn't matter what Nvidia does better or what cards they have if they're not priced properly. When the 8800 GTX came out, it was all fair. But when the 8800 GT came out, it sold like hotcakes; demand skyrocketed and prices went up as a result.

EDIT: Ya, $250 and below sounds about right.

evrfighter2956d ago

$250 and under is probably the ideal price point for PC gaming. I'm sure that price point goes out the window if Valve says Half-Life 3 is DX11 tomorrow.

FantasyStar2956d ago

At least in the GPU segment, anyways.

Pandamobile2956d ago

The 8800 GTX was like $700 CAD at launch :I

FantasyStar2956d ago (Edited 2956d ago )

The 8800 Ultra was $750 CAD; it was a horrible time. We should be thankful that AMD/ATI forced Nvidia to drop their prices, otherwise we'd still be paying the price of a complete mid-range tower for a graphics card.

Xi2956d ago (Edited 2956d ago )

Glad to see that the card still scales well with AA/AF.

http://img12.imageshack.us/...

The performance drop in L4D from 4xAA/8xAF to 8xAA/16xAF is commendably small: 5 fps for double the AA and AF speaks volumes.

I can't wait to see the performance of two 5870X2s in Crossfire, along with a game that takes full advantage of DX11.

Nihilism2956d ago

Hmm, that is a decent gain, but when the GT300 comes out it had better be more than a 50% improvement. Seriously, going from a 30 fps average in Crysis Warhead on Very High (and that's in DX9 mode) to 45, and still with no antialiasing... that's just weak, especially seeing as both cards will cost $500 AUD+.

...we must wait my precious....soon we will know

Nihilism2956d ago (Edited 2956d ago )

Something tells me these are either fake... or just don't make sense. GTX 295 SLI only gains about 5 frames, and I know they scale better than that. The 5870 scales just as badly, if these benchmarks are anything to go by.

@GIJeff

That was my only thought too. Maybe they're using a P55 motherboard with only 8 PCI-E lanes per card; it wouldn't be the first time I've seen reviewers do something stupid like that. This series of cards will really get everything it can out of dual PCI-E 2.0 x16... better start saving for that new mobo.
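
For reference, a quick sketch of the bus bandwidth at stake in the x8-vs-x16 question; the ~500 MB/s per-lane rate is the published PCI-E 2.0 figure:

```python
# PCI-E 2.0 moves ~500 MB/s per lane per direction, so halving the
# lanes halves the available bus bandwidth per card.
def pcie2_bandwidth_gb_s(lanes, mb_per_lane=500):
    return lanes * mb_per_lane / 1000.0  # GB/s, each direction

for lanes in (8, 16):
    print(f"PCI-E 2.0 x{lanes}: {pcie2_bandwidth_gb_s(lanes):.0f} GB/s per direction")
```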

@STONEY4

I can't believe microstuttering still exists. My last 2 cards were an 8800GT 512MB and a GTX 280 1GB, and the reason I've never gone SLI is exactly that. I'd love to, but the thought of laying down that much money and then seeing a major graphical glitch would drive me insane... you should check the videos on YouTube, it makes me cringe.

GIJeff2956d ago

It's a CPU bottleneck. It's hard enough to find a CPU that isn't the bottleneck when using today's cards...

STONEY42956d ago (Edited 2956d ago )

From what I've heard, GTX 295s usually either scale terribly in SLI or have massive amounts of microstuttering. I was gonna go 3-way SLI with GTX 275s since I already have 2 of them, but I'm definitely getting one of these DX11 cards instead. Either the HD 5870X2, or the GT300 if it's any good, since I'm an Nvidia fanboy ;)
