Over 300% improvement on 3-year-old hardware, and more details shared.
When asked how the new API will impact the Xbox One, he stated, “The problem there is the Xbone is hideously GPU bound and while multithreaded performance SHOULD be good, the CPU is operating at a painfully slow clock rate.” I really hate the CPUs used in this gen's consoles.
Yep, that's my main problem with 8th gen.
It was the main problem with 7th gen as well. 6th gen was the last one to release with tech that was on par with PCs at the time.
7th gen had decent CPUs; hell, they were top tier when they released.
@nionai Do you have any idea how horrendously powerful the cell in the ps3 was? The problem with it was that it was difficult to develop for but in no way was it underpowered.
The 360 was state of the art when it launched. Ahead of PC hardware at the time of launch.
The Cell was not "difficult" to program for, and I find it funny as hell that people still believe this nonsense. And yes, both the 360 and the PS3 CPUs released behind PC tech at the time; the 6th gen was the last to release with then-current tech.
@Nio It wasn't hard to develop for, but it did require using techniques that weren't common in game programming, or even regular programming. The biggest obstacle was that multi-plat development became much more of a thing last gen, and many of the things that could be done on a PowerPC architecture couldn't be done directly on a CELL, so a lot of code had to be rewritten. Nowadays, many of the techniques from CELL are actually in modern processors, so developers learned to take advantage of them. It wasn't that much different with prior PS consoles either. The Emotion Engine was somewhat unique, but at least it was built off of what was already being done. However, a lot of the specialized things it could do took a few years for developers to get a handle on.
So many articles telling us how a new system update or SDK will dramatically improve performance. But stop using words to try and persuade us this is true; just show us.
Replying to your comment about CELL being difficult to program for: we've had it for, what, 9 years? I don't think that holds true anymore. Developers got a great handle on it and it's much easier now. The results of late-generation games are a testament to that. Still, I can agree that it's better for the industry to have gone with x86 architectures. Everything is uniform now and multiplatform titles can be released faster. This benefits people moving from consoles to PC and vice versa. It also helps out indies quite a bit. I imagine the porting process is much cheaper and faster now.
@rainslacker "many of the things that could be done on a PowerPC architecture couldn't be done directly on a CELL" Cell *is* (a specialized) PowerPC architecture. PPC code runs on Cell, but not necessarily the other way around.

What made the Cell difficult is that it was only a single-core CPU with satellite SPEs, which is uncommon. It wasn't only uncommon; these SPEs weren't general-purpose cores, so they weren't good for everything. Also, there was no direct language support for it, so developers had to learn how to go low-level on Cell to get the performance out of it. That usually involved specialized assembly and really knowing about the limited memory available in an SPE core. There were some C++ APIs for the SPEs, but you couldn't directly code the SPEs in C++ as you would a general-purpose CPU core.

Last but not least, in the first years there wasn't really any development tool support or good documentation for the SPEs to speak of. Mark Cerny has gone on record saying that in the early days, the PS3 was this board of exotic components with some sparse documentation, in Japanese. Go figure... Gabe Newell had something to say about the PS3 as well, and it wasn't about how easy it was...

So in short, yes, the Cell *was* difficult for the average developer. And it still is.

"Nowadays, many of the techniques from CELL are actually in modern processors, so developers learned to take advantage of it." Which ones are those? x86 doesn't have anything comparable to Cell SPEs, does it?
@funk The main core was a PowerPC core. But it wasn't a full PowerPC core. It used a subset of the PowerPC architecture, and there is no single PowerPC instruction set to begin with, just like there is no single x86 instruction set. All x86 processors can run legacy code, but sometimes older processors cannot run newer code. It's rare that you see this, though, as most of those things are rather specialized and wouldn't exist in general-purpose computing.

Anyhow, the idea behind CELL is that some of the things a regular PowerPC could do could be done more effectively on the SPEs instead of the PPE. Those SPEs required that they be programmed directly, because there was no high-level way to make it happen early on. The APIs to make it work existed in rudimentary form for regular development and got better over time. Sony's own PS3 APIs most certainly had libraries which handled many SPE functions. I don't know how robust they were out of the gate, but they were pretty well refined later in the generation. Otherwise, you are correct: assembly was the better way to go with the SPEs. Assembly, though, isn't exactly rocket science. You simply figure out your algorithm, find the instructions to make it work, then program the instructions with the variables and derive a result. That's all assembly really is.

That being said, for the first few years I can concede that the documentation was lacking, although I found a ton of it when I was looking into CELL programming. It wasn't terribly well organized, though.
For the average developer, I can say you're probably right. The principles involved in CELL would be extremely foreign. However, AAA devs aren't average developers. They're well versed in assembly and know how to manage data. But the time it takes to learn the new stuff could be seen as difficult either way.

"So in short, yes, the Cell *was* difficult for the average developer. And it still is." I'd have to disagree here. I wouldn't consider myself a genius-level programmer, and I had no issue finding and understanding how to utilize the CELL. I never got up to the advanced stuff like you would see in games, due to it being more an area of interest than a job, but CELL is hardly foreign once you understand the basic principle of how to make it work. I can understand there may be a lot of resistance towards programming for CELL, but that likely comes from programmers who learn one thing and then don't adapt well. Most programmers I know find one thing they like, then have a hard time moving on to other concepts because it's just not what they're used to. I don't think a quality game engineer would have this issue, though.

"Which ones are those? x86 doesn't have something comparable to Cell SPEs, does it?" Many modern processors actually translate their instructions into RISC-like micro-operations internally. This allows more to be done with less need to feed instructions in strict sequence. This started back in the Pentium chips (can't remember which one), but was rather rudimentary and didn't involve offloading tasks like the CELL design can do. A ring-style interconnect like CELL's EIB is becoming common in modern CPUs, as it removes a level of hardware for interconnecting various parts of the system bus. Hyperthreading also became relatively common with the CELL chip, while past ventures didn't really get utilized due to restraints on the memory bus and were reserved for more specialized applications. Nowadays, hyperthreading is far more mainstream.
This particular thing isn't really a CELL trait, but CELL was the first one to utilize it for general purpose computing. CELL is mostly defunct now outside of some specialized applications, but there are things from it that have improved modern computing. There's a good scholarly article on it somewhere on the net. When I have some time to hunt it down I'll link it to you.
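The PPE-and-SPEs programming model being described above can be sketched in plain Python. This is an analogy only, not Cell code: `spe_job` and `ppe_dispatch` are hypothetical names, and a thread pool stands in for the SPEs and their DMA transfers.

```python
# Analogy only -- not Cell code. A control core ("PPE") splits work into
# self-contained jobs, ships each chunk of data to a worker ("SPE"), and
# gathers the results back. On a real Cell, each chunk would be DMA'd into
# an SPE's small local store and processed by a hand-tuned kernel.
from concurrent.futures import ThreadPoolExecutor

def spe_job(chunk):
    # A self-contained kernel: everything it needs arrives with the job.
    return sum(x * x for x in chunk)

def ppe_dispatch(data, n_workers=4):
    # Carve the data into per-worker chunks, dispatch, then reduce.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(spe_job, chunks))

if __name__ == "__main__":
    print(ppe_dispatch(list(range(1000))))  # sum of squares of 0..999
```

The awkward part on real hardware was that the kernel had to fit, code and data, into an SPE's 256KB local store, which is where the assembly-level tuning the comments mention came in.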
Buy a PC then and you can choose your own CPU ^_^
The thing is, most people are truly computer illiterate outside of using them to type documents and get online. Most people don't know hardware specs and differences, don't know how to download driver updates, etc... all things a simple Google/YouTube search can bring up. And then most people just don't want to deal with having a PC as a gaming platform when they can have everything bundled right there for them in a console, which ensures that most gamers have an identical gaming experience. Just two different mindsets. I enjoy both personally, but I also don't expect my consoles to compete with my PC; they both have their uses.
Build yourself a PC and you can choose your motherboard, CPU, memory, graphics card, hard and/or solid-state disks, not to mention a decent monitor, power supply, keyboard and mouse. Buy a PC and you get what it has, and if you get one with really good specs then you are going to pay for it. BTW, before anyone chimes in with "But keyboard and mouse are cheap", I would say you really aren't serious about PC gaming.
Last gen it was a good CPU, decent GPU and bad RAM. Now we have great RAM, an OK GPU, but a bad CPU. Will they ever get it right?
It's ridiculous to expect all 3 of those things to be great for $400. Something needs to be sacrificed somewhere.
True, but the CPU should not be an afterthought, like the GPU was on the PS3. While that still worked out, it was anything but a popular move for third-party devs.
Well, next gen should get the right balance, but they won't be high end at all. The following specs should be the minimum if they stick with AMD:

FX 8320+ (they need a single-core performance boost)
R9 390x-equivalent GPU (8+ TFLOPS)
8GB–16GB DDR4 system RAM (most likely 8GB)
4GB–8GB HBM (probably 8GB)
$399–$499

So: a mid-range CPU (even without the single-core boost), a top-end GPU, a good amount of system RAM from a much better source, and much better VRAM, by today's standards. By 2020 when the consoles release, it'll still be a solid CPU (with the single-core boost, it'll still be mid-range), a mid-range GPU (in 5 years' time GPU tech will be much better; think an R9 770x), and the RAM should still be good, if a bit on the smaller side pool-wise (although 8GB of VRAM is still good considering the most demanding games of today are capping at 4GB, and 8GB of system RAM is still plenty).
They should have kept the Blu-ray drives out and added extra horsepower.
Yep, X1 at 1.75GHz per core, PS4 at 1.6GHz per core, meanwhile PCs are pushing over 4GHz easily :/
PCs have been running at 4GHz for 15 years. It was originally believed that the Intel Pentium 4 could hit 10GHz. However, the reason CPUs haven't surpassed those marks is that it's not beneficial. We're forced to overclock our CPUs because they aren't being used correctly. The speeds aren't the problem; it's just a problem of them not being used the way they should be.
@shloobmm3 I'm pretty sure in 2000 we barely hit 1GHz. I know because I had a Pentium III that was one of the first, along with the AMD Athlon. No way we had 4GHz CPUs in 2000, at least in the consumer market.
So what? Console hardware should not be compared to PC hardware. It's like comparing PC hardware to that of space rockets... Look at the PS2: it had a freaking 300MHz processor (single core)! It played every multiplatform game like a charm, and at the end of its life it had games like Black that looked just like PC games at the time, while the PCs of that era were 3.0GHz and up, and Core 2 Duos were around with 3.2GHz on 2 cores. PCs need that extra memory, frequency, etc. not only to play games but also to do 9412412789 other things at the same time that consoles simply don't do. This, plus the open platform that the PC is. No game developer will optimise their game for 4124152 PC builds that each have just one hardware part different from the others (and as a soon-to-be IT engineer, believe me, it MATTERS), so that's another reason consoles can have "slower" CPUs.
You can dislike my previous comment as many times as you like, I don't care, but these are the facts. And I know butthurt PC fanboys don't like them ;).
I've run the API test through the preview build. It's significant. Anyone with the Win10 Technical Preview and 3DMark can run the benchmark. However, his answer about the Xbox One is ludicrous. Since when do we care about not only a random person's benchmark but also his assumptions about what it will do for the X1?
So we're supposed to care about your benchmark, but not his? Let me guess, your benchmark says DX12 turns the Xbox One into a high end PC right?
@dk Did you read the article? Neither the guy in the article nor shloobmm3 has benchmarked the X1 with DX12.
Well, all someone has to do is benchmark a rig with a similar CPU and a DDR3-equipped GPU.
But who will pay for a higher-spec CPU in consoles? It doesn't come for free; a £500 console simply won't cut it.
What console was £500? The PS4 I got on launch was £379 with 2 games. I would have happily paid £400 or £450 if it would have made a huge improvement. I love my PS4, but I would have been happy to spend up to £450.
In past gens console makers would make a loss on each console sold and recover it with the premium fee they add to games; this gen it's been all about making a profit from day one, hence why the machines are massively underpowered. You have to remember these console manufacturers attach huge fees to the games sold, not to mention the other stuff like subscription fees and advertising revenue they make per console.
I will. Not wasting my gaming time playing crap at 30fps. Screw this gen
I think this is also probably why we don't see the PS4, with its presumably faster GPU and GDDR5 memory, blowing the Xbox One out of the water in multi-platform games. The CPUs are probably the bottleneck in many benchmarks. In the PC gaming world, 3.4GHz is now the base, low-end speed for a CPU. Raw CPU horsepower matters. A lot.
@MonkeyOne "I think this is also probably why we don't see the PS4, with its presumably faster GPU and GDDR5 memory, blowing the Xbox One out of the water in multi-platform games." Huh??? Where at??? I think our understanding of blowing something out of the water is tooootally different from one another.

The PS4 and the X1 both have the same everything when it comes to multiplat games, with a slight bump in resolution from time to time in favor of the PS4. But 900p to 1080p to the naked eye (depending on the size of your TV and the distance you're sitting from it) won't let you see the difference (this is why most people rely on machines and companies like Digital Foundry to tell us what the true performance of a game is). Now if it had better textures, better lighting, more polygons, better particle and liquid effects, then you might be able to say that one blows the other out of the water, but 900p to 1080p isn't enough to say that, sir. 720p to 1080p isn't even enough. You need way more than a resolution bump on a multiplat game to blow one console out of the water. You can say that in the resolution department the PS4 has a slight advantage (at the moment), but that's about it, and even someone with half a brain wouldn't contest that.
@ head black man I think you need to read his comment again.
OK, now show me a game that has a 330% boost over DirectX 11.
So, I see you do not understand the statement's meaning OR the fact that DX12 games aren't out yet.
I know the games aren't out yet; they need to stop saying how much of a performance boost DX12 is and actually show us a game with real-time performance, not just some benchmark.
I was going to build a new gaming rig, but if this is true then my current PC should be able to play all games maxed out once DirectX 12 is out.
"DirectX 12 Gives A Boost of 330% To Draw Calls On 3 Year Old Hardware" What this means is ONLY the draw calls are 330% faster. It doesn't mean the frame rate is 330% faster.
Very true. But if the draw calls are that much better, and the game is optimized to use the CPU in that manner, then the frame rate will increase. They are by no means directly proportional, though. Draw calls will do more on the screen at a given frame rate with the DX12 API as opposed to DX11.
A draw call affects the complexity of a given scene, including things like geometry and texturing. More texture layers applied to the same surface increase the number of draw calls, resulting in a more detailed image. 330% more draw calls, however, does not make the whole system 330% more powerful.
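To put rough numbers on the point above (the frame-time figures below are hypothetical, purely for illustration): only the CPU time spent submitting draw calls gets the ~3.3x speedup, so the overall frame time follows an Amdahl's-law-style curve rather than scaling by 3.3x.

```python
# Hypothetical numbers: a 30fps frame where 10ms of CPU time goes to
# submitting draw calls and 23.3ms goes to everything else (GPU work,
# game logic, ...). Only the draw-call portion benefits from faster
# submission, so the whole frame is far less than 3.3x faster.

def frame_time_ms(draw_call_ms, other_ms, speedup):
    """Frame time after speeding up only the draw-call portion."""
    return draw_call_ms / speedup + other_ms

old = frame_time_ms(10.0, 23.3, 1.0)   # DX11 baseline
new = frame_time_ms(10.0, 23.3, 3.3)   # draw-call submission 3.3x faster
print(f"{old:.1f}ms -> {1000 / old:.0f}fps")  # 33.3ms -> 30fps
print(f"{new:.1f}ms -> {1000 / new:.0f}fps")  # 26.3ms -> 38fps
```

The real win, as the comments above say, is spending the freed-up CPU budget on more draw calls, i.e. a more complex scene at the same frame rate, rather than a 3.3x frame rate.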
They aren't saying it's a 330% fps boost. You do know what draw calls are, right? Well, that's what they are talking about. Geez... do some more reading before you make simpleton comments.
So my laptop's 820M is gonna become a 750 Ti? Joking aside, I'm sure this will make gaming quite a bit easier for low- to middle-end PCs.
Avoid clicking it. Click bait. It is an article focusing on PC hardware, yet the N4G pic shows an Xbox One making gamers think the benchmark tests are for Xbox when it was really PC tests.
Gamingbolt at their finest...
I know the Xbox doesn't have PC hardware, and the Xbox does run DX12. Microsoft made DX12, not Xbox. Wtf
"It is an article focusing on PC hardware, yet the N4G pic shows an Xbox One making gamers think the benchmark tests are for Xbox when it was really PC tests." "Avoid clicking it. Click bait." This^^^ Nothing more needs to be said.
Now I can finally play The Witcher 3 at ultra 4K on my old laptop!! Thank you GamingBolt! You're always right and you always state true facts and never fake ones :)
Does Witcher 3 support DX12? I think the game has to support it first right?
Why did we ever buy new consoles when we can just download an update?! /s
I doubt a game engine will really see a 300% increase in real-life performance, but hey, it should help significantly.
Okay. I just read the article. Can someone here help out with this math problem? Where did they get 330% from? I have 1.5 million draw calls. It gets increased to 8.5 million with DX12. Isn't 8.5 ÷ 1.5 around 566%? Am I missing something here?
2.5 million multithreaded = 100%. 8.5 million with DX12 makes around 330%. But it's actually only an increase of around 230% (to around 330%), if I get the English language convention right.
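Plugging in the figures quoted in this thread (1.5M draw calls single-threaded, 2.5M multithreaded under DX11, 8.5M under DX12) makes the two conventions easy to compare; the headline's 330% is presumably this ratio computed from the article's exact, unrounded numbers.

```python
# Draw-call figures as quoted in this thread.
single, multi, dx12 = 1.5e6, 2.5e6, 8.5e6

ratio_single = dx12 / single * 100             # "% of baseline" vs single-threaded
ratio_multi = dx12 / multi * 100               # "% of baseline" vs multithreaded
increase_multi = (dx12 - multi) / multi * 100  # "increase of %" vs multithreaded

print(f"{ratio_single:.0f}% of the single-threaded baseline")  # 567%
print(f"{ratio_multi:.0f}% of the multithreaded baseline")     # 340%
print(f"a {increase_multi:.0f}% increase over multithreaded")  # 240%
```

So the ~566% in the question above comes from using the single-threaded figure as the baseline, while the headline uses the multithreaded DX11 figure.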
Are they serious about it giving the XB1 a boost of 330% with DX12? I would like to see what it would do, but damn. Oh look, it's the graphics-loving website GamingBolt at it again. Not gonna even bother reading it. Can't wait to see what it can do with the (first-party) games that'll be releasing.
No, they aren't; it's about PC as usual. The Xbone pic is there to give false hopes, like always.
As usual, it's just GamingBolt/Rashid trying to make some info about DX12 look like it's about the Xbone when it is vehemently not.
I believe AMD's Jaguar cores were designed for high performance at low power requirements. Their max clock is 2.7GHz; I wonder why MS and Sony didn't keep the default clock speed for their consoles' CPUs, while their 7-year-old PS3 and Xbox 360 were clocked at 3.2GHz in small boxes without any problem.
Any increase or improvement is good in my opinion... one way or the other, the XB1 is good as it is right now, so any upgrade is welcome... always.
I believe this is going to affect PC the most; that's why I recently went X99 and got a 5820K, 6 cores with 12 threads clocked @ 4.4GHz... beast of a CPU.
Why are they showing an Xbox one picture lol.
Trying to get people's hopes up, I guess; they do it with pretty much every DX12 PC article.
When the PS4 was revealed, Mark Cerny said that the PS4 was inspired by gamers and developers from the gaming industry, that this gen would not suffer from hardware limitations, and that we would see wonderful things. Did someone tell him about the weak CPU in the PS4? Or maybe he knows something we don't? If the price of the PS4 had been $450–$500 with a better CPU (not talking about the Xbox One CPU, because they are the same), sure, we would still have bought the PS4.