DX12 support has just been added to 3DMark, showing unbelievable results, with DX12 delivering up to 20 times the performance of DX11.
The performance difference between DX11 and DX12 is astonishing. But what is also surprising is that DX12 appears to be more beneficial to AMD's GPUs than Mantle is. That should explain why AMD is no longer pushing Mantle for Windows. It also debunks the posters claiming the two do the same thing. "Similar" should be the word of choice when talking about them.
Still... AMD is treated like the ugly stepchild.
OK, I'm not good at this kind of stuff, but if it's as fast as the Titan X, will games perform as well on the AMD R9 as on the Titan X?
Don't quote me on this one, man, but I'm fairly certain that is heavily dependent on the games in question and how well the devs optimize said games specifically for AMD cards. It mostly comes down to optimization, and with Nvidia being the lead GPU choice and the most common, you're still most likely going to see better GPU utilization/optimization for Nvidia cards over the competition.
It seems unlikely. This is a synthetic test designed to measure only API performance. To keep other bottlenecks from affecting the result, the GPUs really aren't doing much work compared to typical usage (e.g. simple shaders, low resolution, minimal VRAM bandwidth usage, etc.). Even though I'm not certain why the API scores differ (especially when looking at the GTX 960 stock/OC'd numbers), reaching that point again in games is unlikely. A quote from the '3DMark Technical Guide', page 86: "This an artificial scenario that is unlikely to be found in games, which typically aim to achieve high levels of detail and exceptional visual quality". The surrounding pages are worth reading if you're interested in more detail. http://s3.amazonaws.com/dow... Edit: Provide a reference to official literature along with a short summary and someone still doesn't like the info. Good ol' N4G.
The big difference is that the earlier Mantle tests used games written to perform well on DX11 (like BF4). This new test is written to perform well on DX12. The difference? In DX11 games you have a limited number of objects; once they are sent, the CPU can't find more work to do and has to wait for the frame to render. Result: mostly rendering limited. This test instead tries to draw as much as possible within one frame time, i.e. keeping frames per second constant. Result: mostly draw-call limited. This benchmark should suit Mantle/Vulkan very well too. Edit: Note that Mantle scales better to more CPU cores and that DX12 flattens out earlier... The future will bring more cores...
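To make the "draw as much as possible within one frame time" idea concrete, here's a toy Python model (my own illustration, not 3DMark's actual code, and the per-call overhead numbers are invented purely to show the shape of the result):

```python
# Toy model of a draw-call-limited benchmark: frame time is held
# constant, and we count how many draw calls fit in the budget when
# the only bottleneck is per-call CPU overhead.
# The overhead figures below are made up for illustration.

FRAME_BUDGET_MS = 1000.0 / 30.0  # constant 30 fps frame time

def max_draw_calls(per_call_overhead_us):
    """Number of draw calls that fit in one frame at the given CPU cost per call."""
    return int(FRAME_BUDGET_MS * 1000.0 / per_call_overhead_us)

dx11_calls = max_draw_calls(per_call_overhead_us=40.0)  # hypothetical DX11 cost
dx12_calls = max_draw_calls(per_call_overhead_us=2.0)   # hypothetical DX12 cost

print(dx11_calls, dx12_calls)  # lower per-call overhead -> ~20x more calls/frame
```

The point is that when the frame rate is pinned and the GPU work per call is tiny, the score measures almost nothing but API overhead, which is exactly why a 20x gap can show up here and not in a real game.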
How is it astonishing? What is 33% in gaming? It's almost the difference between the PS4 and the Xbox One, but Xbox fanboys can't see it...
Why is the Xbox One tag on this when there is no mention of it at all?
Amazing things will happen on PC! A low-level, console-like, close-to-the-guts API on a PC? It's what I dreamed about but was always told wasn't possible on PC. Now comes the dawn of HBM, with HBM2 next year! PC gamers will have memory coming out of their arses... All we need now are OLED FreeSync 144Hz monitors. I would pay $1000 for a 27-inch. I have little time to care about what this will do for the Xbox's APU when all this power is coming.

@maxor Well, Mantle will benefit every PC gamer who will enjoy Half-Life 3! Mantle laid an egg and Vulkan hatched from it. Mantle 2.0 will be for experimental and innovative efforts. AMD gave the industry a boost! They forced MS's hand, made the Khronos Group leave OpenGL behind, and I'm glad Mantle existed and is still being used internally. Now Linux gaming will also get a boost. Not to mention AMD was the company that helped SK Hynix develop HBM. They have changed the PC gaming industry as a whole going forward. What did Nvidia do? Oh, some new AA algorithm for GTX cards? I like the green team too, but damn, give some credit to AMD. In fact, say thank you, to be polite, if you love gaming. Anyway, if I live to see Cyberpunk 2077 with DX12/Vulkan, my life will be complete.
Nvidia will lose sleep over this. Watch how they spin this news if it's true.
Prices would go down and everyone would win. But the people who should lose sleep are the ones who bought into AMD's Mantle hype.
Mantle runs great; it's not hype. It gets at least 20 fps more in BF4.
No they won't. There is a massive notification on that benchmark saying it is not representative of real performance and shouldn't be used to compare graphics cards. They will probably just continue working on their DX12 drivers.
heard that before, like MS finally cares about PC gaming.
Lovely, lovely... this is news I like to hear.
Pretty cool. I'd like to see benchmarks with mixed GPUs, that thing they were talking about.
The Titan X is massively overpriced regardless; the AMD R9 390X/390 will win on price/performance alone.
Yeah, I want to get it so bad, but that price is just brutal.
This also clearly proves that "coding to the metal" is indeed beneficial: it's something the PC elitists scoffed at when console owners claimed higher performance than PC graphics cards with similar specifications. It's really good that those GPUs will now use their potential to a fuller extent!
Anyone who scoffed at coding to the metal was foolish. They should have been longing for the day when their powerful hardware could actually be used to its full potential.
Coding to the metal is the way to go. As primarily a PC player, I've never scoffed at it. I've only wished the consoles had better hardware so more games ran at 60fps.
The boost in draw calls isn't achieved by coding to the metal; it's achieved by redesigning the API to share the work across cores more efficiently, allowing parallel draw call submission from individual CPU cores to the GPU. I have been trying to explain this to you guys for months, and you still don't get it. Coding to the metal gives developers more control over manually adjusting how much info is sent in each call and the stress on the CPU vs. the GPU. This multi-core management is done automatically by the API. That is why you can download and run the benchmark yourself even if you have no idea how to code, let alone code to the metal.
Way to misread my comment. Coding to the metal also implies, I hope you understand, that you are not hindered by an API, that an API CAN drain performance, and that you CAN get more out of the same number of transistors. Consoles have APIs too, but with fixed hardware the programmers can "bypass" the API as much as it allows, or use hardware-specific features the API exposes, without having to worry about other hardware that lacks them.
I'm really confused, guys. Why is the 290X performing better than the 980 and Titan X in DX12? I can't figure it out; if someone knows, please explain.
Optimised drivers are likely the biggest part of this. Nvidia cards were ahead in DX12 benchmarks until AMD released these new drivers. Expect to see the 'lead' flip-flop a few times. Plus, synthetic benchmarks are no substitute for real game testing, and we won't see those comparisons for a while. I am definitely not saying AMD (or Nvidia) are doing this now, but there have been examples in the past where drivers were released that were specifically optimised to give good synthetic benchmark scores, while in-game the reality was quite different. In other words, it's too early to call a 'winner' here. The more important news is that DX12 gives a huge boost, and it'll be interesting to see how that translates into actual games. But just maybe, AMD have hit the jackpot and have a GPU architecture that works extremely well with DX12... or Nvidia's architecture is not suited to this API. We will see in time.
Thanks man, appreciate the reply.
But put the 980 and Titan X on DX12...
Hopefully DX12 boosts my R9 290 to new levels too :D
Guys, this is draw calls only. It's very reliant on FP64 compute performance, and we all know Nvidia has massively neutered that part of its cards; that's why AMD was always the choice for Bitcoin miners. Draw calls will only impact games with tons of things on screen at the same time. This is also why DX12 isn't going to boost the Xbox One 100x. It will be more like a 10%-30% improvement, depending on how it's used and the needs of the specific game.
They're draw calls, not FPS; real-world performance can be way different.
This time Nvidia will be the ones to suffer. Let's face it, DirectX 12 and Windows 10 will favor AMD hardware more.
N4G is a community of gamers posting and discussing the latest game news. It’s part of NewsBoiler, a network of social news sites covering today’s pop culture.