"Trailers and feature-length movies simply have a much higher budget per second than what the full game can afford," says Filmic Worlds boss John Hable.
The idea is tempting, but simply offloading assets to the cloud won't work, says Filmic Worlds boss and ex-Naughty Dog developer John Hable.
I can’t comment on any specific applications
In other words, I can't really comment on what's going on at MS because I have no idea what they are doing. -___-
Well, the last demo they showed was cloud-based physics calculations. They never said anything about graphics processing in the cloud, but they have demonstrated other uses like cloud-based software updating.
But indeed, the issue is that Microsoft hasn't actually shown anything new or compelling yet. This is part of the problem. They made a lot of comments at the beginning of the gen about the potential but it hasn't come to fruition yet.
http://www.pcgamer.com/nvid...
He must have forgotten Nvidia was also messing with cloud computing for lighting effects in games. So I call B.S. on his statement.
@vega275 That is not graphics though, it's just doing the calculations then sending the data back to be rendered.
“for computing indirect lighting in the Cloud to support real-time rendering for interactive 3D applications on a user's local device.”
says so right there
And he would know about everything that's going on behind the scenes at MS. I mean he does work for Naughty... Oh wait. He wouldn't know a d@mn thing would he? lol
This is not a specific comment on MS... It's to do with offloading graphical computations to remote servers.
Did you even read and think about the article or did you just see the thumbnail and go into "console war mode"? Pretty low of the submitter to do that but I would expect almost everyone to see past this media manipulation by now. RTFA.
Of course there are limitations in trying to perform low-latency graphical computations over a network - this is all that has been stated and is common sense.
It will be cool to see what does emerge from cloud computing in terms of gaming. Maybe some AI can finally make big leaps, esp. as the generation unfolds and there are more servers and power.
Tbh it is hard to think of something that isn't required almost immediately in gaming though - I don't know what the average would be but something like a >100ms round trip time seems reasonable... and is an awfully long time. But perhaps there are tricks and techniques to overcome this. Maybe some things can be calculated in advance for the next few frames.
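To make that last idea concrete, here's a minimal sketch (hypothetical numbers, no real networking) of requesting non-urgent work a few frames ahead so the round trip hides behind rendering:

#include <future>
#include <deque>
#include <thread>
#include <chrono>
#include <iostream>

// Stand-in for a remote call: pretend the server takes ~100 ms to compute
// something non-urgent (e.g. ambient crowd behaviour for a later frame).
int computeOnCloud(int frame) {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    return frame * 2; // dummy result
}

int main() {
    const int lookahead = 8;                 // ~133 ms at 60 fps, more than the 100 ms round trip
    std::deque<std::future<int>> inFlight;

    for (int frame = 0; frame < 60; ++frame) {
        // Kick off the request for the frame 'lookahead' frames from now.
        inFlight.push_back(std::async(std::launch::async, computeOnCloud, frame + lookahead));

        // Once enough frames have passed, the oldest request should already be
        // finished, so picking it up rarely stalls the current frame.
        if ((int)inFlight.size() > lookahead) {
            int result = inFlight.front().get();
            inFlight.pop_front();
            std::cout << "frame " << frame << " consumed result " << result << "\n";
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // fake 60 fps frame
    }
}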
It doesn't make sense to even talk about offloading graphical assets to the cloud since that was never the goal of cloud compute in the first place. It was meant to take care of some of the CPU tasks (most very small) in order to give the CPU more room to take on other things. The cloud is not rendering graphics, the console is still doing that. All it's doing is taking bits and pieces of data that the console needs for things like AI or physics, and sending it to the console. Not rendering it in the cloud.
The paper that Nvidia provided with their demonstration has a lot of great information on what is going on, as well as the requirements to make it work.
http://graphics.cs.williams...
It's a pretty technical read, but good nonetheless.
Nvidia claims that standard ping times are sufficient in most cases; however, the actual real-world results would be highly dependent on a lot of things. Certain applications work better with lower ping.
This of course only took into account 3 different types of lighting systems, but they are fairly common approaches in today's games, and they often don't happen on the same scale as it'd be resource-prohibitive.
This is of course only one implementation of this kind of tech. There are others out there, and I'm sure there are some that have yet to be revealed.
@Lennox
That's actually true. However, Wardell, in his infinite wisdom, decided to discuss offloading the lighting, which many in the media took as MS actually saying it. MS has mentioned this procedure, but never specifically in regard to the X1. Much of the expectation some people have for the tech comes from misrepresentation by a 3rd party who was only talking about theoretical possibilities, not actual intended application.
Cloud compute to MS was 3X the resources; however, 3X the resources does not equate to 3X the power. For example, 3X the resources in a physics engine means that you can calculate 3X the number of physics calculations per interval. 3X the power would mean that you could likely process 300X the number of physics calculations per interval, or process much more complex physics calculations than what would be necessary for a game.
That being said, whether or not games need to calculate 3X the amount of physics per game is questionable. I imagine there are times when it could be nice, but I'm not sure of the overall practicality of such a feat, as there does need to be an object for every physics calculation.
AI makes a lot more sense, as it can be quite complex, but the results, and variables to derive those results are typically very small and allow for quick and easy transport through standard latency scenarios. Any lag introduced would likely be imperceptible to the end user, unless there was a huge spike or disconnect, and there would likely be backup routines should the data not arrive in time.
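As a rough illustration of that "backup routine" idea (purely hypothetical, not from any actual SDK): fire the AI query off asynchronously and fall back to a cheap local result if the cloud answer hasn't arrived by the time it's needed.

#include <future>
#include <thread>
#include <chrono>
#include <iostream>

struct AIDecision { int targetId; };

// Pretend remote planner: better decisions, but subject to network latency.
AIDecision cloudPlan() {
    std::this_thread::sleep_for(std::chrono::milliseconds(80)); // simulated RTT + compute
    return {42};
}

// Cheap local fallback so the game never stalls waiting on the network.
AIDecision localPlan() { return {1}; }

int main() {
    auto pending = std::async(std::launch::async, cloudPlan);

    // ...a few frames later, the AI needs an answer *now*...
    AIDecision decision{};
    if (pending.wait_for(std::chrono::milliseconds(0)) == std::future_status::ready) {
        decision = pending.get();   // cloud result arrived in time
    } else {
        decision = localPlan();     // didn't arrive: use the backup routine
    }
    std::cout << "acting on target " << decision.targetId << "\n";
}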
It does make sense to talk about offloading graphical assets to the cloud if you're asked the question in an interview though lol. And it's also an interesting topic outwith any narrow console war you may or may not be involved in.
The goal of cloud computing (in particular console gaming) is to provide a better experience to the user by working around the limitations of the hardware in whatever way is feasible.
No doubt work is being done on distributed graphics right now and if it is a feasible option, you can bet it will be tried at some point.
Asking the question to a developer (ex or otherwise) for their thoughts on this subject seems okay to me.
Lennoxb63 says, "It doesn't make sense to even talk about offloading graphical assets to the cloud since that was never the goal of cloud compute in the first place."
Ars Technica interviews Matt Booty.
http://arstechnica.com/gami...
I'm sorry, what was that?
I like how old developers from Sony are commenting on the new things MS is trying to do. lol
I don't see a direct reference to MS... You just made the assumption that the cloud tech is exclusive to them.
That being said, I think it's actually Sony who are pushing it forward at this point with game streaming. I've seen very little other than a couple demos touting one thing or another from the super bs pr team over at MS.
Game streaming is as different from cloud processing as playing games online is.
MS has built the XB1 architecture around this function. Each of the 4 multipurpose DMEs can do data movement at 26GB/sec to/from any 2 locations at no cost to the CPU, GPU, or memory bandwidth. My understanding is that server systems use these types of accelerators to communicate between nodes within a server bank. MS will show Crackdown at E3, so they will talk more about the cloud's usefulness in gaming then.
Read the article; to sum it all up, it all depends on internet speed and reliability.
Cloud graphics and physics improvements are still too early because of the slow and unreliable internet speeds that 80% of gamers have.
MS is onto something, but it's way ahead of its time.
This, exactly. Microsoft is right, but it's not practical yet; when it becomes practical it will be amazing.
Eighty percent is way overblown. The Washington D.C./Baltimore region and northern Virginia have over 10-13 million people. The average internet speed here is easily 30Mbps. I would bet most of the major cities are similar. I would bet a third of this country has around that speed or higher. The problem is low-population states like North and South Dakota and Montana bring the average down. And we have too many states like that. But more than half of us live in the major cities.
On topic I think this guy is too busy with his new business to be up on what Microsoft is doing. Some of his comments make that clear.
He mentioned server cost.
It's like he doesn't know what Azure is. So the answer to who will pay for all those servers is the companies that use Azure. Microsoft offers them to developers for free. He has a business; he knows businesses pay a premium for every service they use.
Bandwidth.
Microsoft and Duke University have already cut bandwidth needs by 83%. Plus, if he looks up some of the information on Project Orleans, a big part of that is instantaneous hydration and dehydration of information to reduce bandwidth needs.
Wi-Fi?
Not even sure that is a real problem. If your Wi-Fi sucks, buy a long Cat 5 cable.
Server goes down.
Again Azure has protocols in place to switch anything running instantly over to another server. He has to remember Azure is being sold as a business tool with Quality of Service guarantees. They want to use this to create mobile disaster infrastructure that can quickly switch over to a new host if needed. Look up agents in Project Orleans.
Latency is an issue.
Some of the other things will help address that, but the predictive technology Microsoft has been working on may play a part in that, as well as in dealing with lost packets.
The reality is it may be really, really hard. But there are people thinking outside the box working on it. One of the things they were working on was the console handling everything in a certain area around the player and the cloud handling the things further away. Plus I think people should remember some of the rumors we've heard over the years. It could be a matter of the cloud just adding more details to things the console draws.
I think people need to wait a couple of more weeks.
I think the round-trip time is the big problem though, not bandwidth or reliability. And unless servers are going to be in your country and close to your location you might not be able to benefit from cloud computing.
It's a really interesting problem to overcome. So many variables that will affect everyone's experience differently.
50ms RTT vs 250ms RTT.
Internet cutting out.
Internet traffic at peak times.
I wouldn't like to be the guys programming that... actually that's a lie it would be awesome but very very difficult!
It's always funny seeing people talk about RTT but never bring up how MMOs receive data, update state and game logic, apply the data, and then return said packet.
Why, if a Cloud Compute approach is held back by RTT, would any MMO be playable if it requires a similar approach for data?
That's a good point... also any online game in general must do the same.
I'm not an expert on this at all but I think there are a lot of tricks to achieve this and that's why the experience is not always optimal and varies per game.
For example, the Client-Side Prediction and Server Reconciliation discussion here ( http://www.gabrielgambetta.... )
This kind of workaround to lag may not be possible because you would need the result (at least for graphical computations) immediately. But maybe things like AI could lag a few frames behind and be better than local AI.
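For anyone curious, the trick described in that Gambetta article looks roughly like this (a toy 1D sketch, not production netcode): apply inputs locally right away, remember them, and when an authoritative state arrives, snap to it and replay the inputs the server hasn't acknowledged yet.

#include <deque>
#include <iostream>

struct Input { int seq; float move; };

struct Client {
    float position = 0.f;
    std::deque<Input> pending;   // inputs sent but not yet acknowledged

    // Client-side prediction: apply the input immediately, don't wait for the server.
    void applyLocal(Input in) {
        position += in.move;
        pending.push_back(in);
    }

    // Server reconciliation: accept the authoritative position, then re-apply
    // every input the server hadn't processed yet when it sent this state.
    void onServerState(float serverPos, int lastAckedSeq) {
        position = serverPos;
        while (!pending.empty() && pending.front().seq <= lastAckedSeq)
            pending.pop_front();
        for (const Input& in : pending)
            position += in.move;
    }
};

int main() {
    Client c;
    c.applyLocal({1, 1.0f});
    c.applyLocal({2, 1.0f});
    c.applyLocal({3, 1.0f});
    // Server state arrives late: it only saw inputs up to seq 1.
    c.onServerState(/*serverPos=*/1.0f, /*lastAckedSeq=*/1);
    std::cout << c.position << "\n"; // 3.0 again: no visible rubber-banding
}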
It could be a great thing for some people, just like game streaming could be, but in my experience with my internet connection the lag sometimes creates noticeable problems.
In terms of physics, beyond a feasibility standpoint, the developers still have to want to implement these things on such a grand scale. Implementing all these little extras takes time and resources that honestly could often be spent better elsewhere. What's the point in having a billion pieces of a destructible window, when one million will be sufficient?
Over a great period of time, obviously things will become available which make this kind of stuff more feasible on a development level, but there comes a point of diminishing returns. No matter how much a computer system may be able to do something, that something still has to be implemented at some point, and that takes time and money. One of the basic tenets of AAA game design is to make the complex out of the simple, because the simple is cheaper and more flexible across multiple implementations.
In graphics there is a term called "Level of Detail", or LOD. The premise behind this is that objects that are very close to the user's view have a higher level of detail applied to them, whereas things that are very far away have very little detail applied to them. The same is true of physics calculations. How many objects can a user reasonably have within their immediate view that require such vast amounts of physics processing to make them move? Again, diminishing returns for what amounts to lots of work on the development side.
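A bare-bones version of that distance test (illustrative only, with made-up thresholds) looks something like this; real engines use screen-space size, hysteresis and so on, but the idea is the same.

#include <cmath>
#include <iostream>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Pick which level of detail to render (and how much physics to simulate)
// based on how far the object is from the camera.
int selectLOD(const Vec3& camera, const Vec3& object) {
    float d = distance(camera, object);
    if (d < 20.f)  return 0;  // full mesh, full physics
    if (d < 80.f)  return 1;  // reduced mesh, coarse physics
    if (d < 250.f) return 2;  // low-poly mesh, no per-piece physics
    return 3;                 // billboard/impostor, static
}

int main() {
    Vec3 cam{0, 0, 0};
    std::cout << selectLOD(cam, {10, 0, 0}) << " "
              << selectLOD(cam, {100, 0, 0}) << " "
              << selectLOD(cam, {500, 0, 0}) << "\n"; // 0 2 3
}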
On top of all this, for graphics rendering, GPUs are gaining power faster than internet infrastructure is getting faster and more ubiquitous, so over time the idea of rendering in the cloud may actually hold back what a graphics processor is designed to do almost routinely. The idea of remote rendering makes sense on certain types of devices, say mobile, due to the issues with heat in a very compact device, as one day those devices are not going to be able to go any further based on today's technology. Because of this, Moore's law is actually coming to an end there at twice the speed of the average PC component. However, should a device have a reasonably recent GPU, then chances are its abilities are going to far outpace what the extra cloud rendering could provide.
Here we go. An actual developer, you know someone who's made games and has experience with tech necessary to do so, is saying that this is going to be really hard. And true to form, the Defense Squad will try to undermine the statement by saying "oh he doesn't work for MS" or "oh he doesn't know what's going on behind the scenes" and focus on stuff like specific applications. Because you all know better than him right? I mean, MS says it it must be true, developer says it would be hard must be false because he's never made games before right?
So many heads in the clouds.
"hard" =/= "impossible"..
If Microsoft has figured out a way for this to work, I'm sure it was not only hard but it was VERY hard...
Point?
We just had an article on here the other day where a dev said cloud gaming could be amazing if done right, and it seemed to get dismissed by a bunch of people on this site, but we should all of a sudden listen to this guy. Funny how that works, huh?
Wow. Your argument is flawed. First of all, I don't recall any MS developer ever claiming improved graphics with the cloud. They are saying things like bigger worlds, improved AI, offloading processing, and nothing regarding graphics upgrades. If anyone has seen the Crackdown 3 demo with the cloud that was on display last May, they showed a version without the cloud and a version upgraded with the help of the cloud engine; the speed and destruction are the only differences, to show what offloading processing with the help of the cloud could do. The graphics are the same, the only difference is physics and AI.
This Naughty Dog dev said offloading assets to the cloud is hard, but the Crackdown demo shows it can be done. Sony doesn't have the skilled people or money to run cloud servers like MS does, and maybe that is where the "hard" part of this Sony developer's comment comes from.
Here is the Eurogamer part describing the new engine for Crackdown 3.
http://www.eurogamer.net/ar...
"WE also know that Dave Jones, who directed the first game for Xbox 360 at his now closed studio Realtime Worlds, is on board. Jones' latest company, Cloudgine, is working with Microsoft Studios on the game. Cloudgine specialises in cloud gaming technology and using data centres to boost computationally intensive game components such as PHYSICS and AI."
But wouldn't a software team that only makes software know more than a game developer? Just curious.
@Dudebro: So this dev is wrong but a dev you agree with is right? Classic.
@Antwan3k: "Point?" That's what I'm trying to figure out from your post. Was it your goal to make an irrelevant post? You're taking the position that MS has already made this work. Well, they haven't really shown that in uncontrolled conditions now have they? But not only are you taking the position that they've made it work, you're also being asinine with your "Hurr durr, I'm sure it was very hard" nonsense.
@marlinfan10: "Could be amazing" is not the same as "da cloud will make Xbox One have 4 times better grfx" which is the general attitude coming from you Defense Force people. "Could be amazing" is an open ended, uncommitted statement. It creates no actual position, and isn't a concrete opinion. Basically that dev was not taking a position in the debate, he was saying it "could" be great. This dev is not saying it's going to be great or not, he's saying it's going to be hard to do. And immediately the inundation has begun to dismiss what he said by all the veteran developers here on N4G.
@Rookie_Monster: My argument that the Defense Force is dismissing what this dev is saying because he doesn't work at Redmond? That argument? Because I didn't say anything about the cloud being able or unable to improve graphics. Read what I said again.
@castillo: When you're trying to use that software in game development, you're making it to be used by game developers.
Dragonknight
"Because I didn't say anything about the cloud being able or unable to improve graphics. Read what I said again."
LOL, then why are you commenting on what the article is about? The article is about a Naughty Dog developer talking about the difficulty of improving graphics with the cloud. Agenda much?
"LOL, then why are you commenting on what the article is about?"
I'm sorry but are you really this stupid? Why am I commenting on what the article is about? Did you seriously just ask this question? *sigh* Alright then.
1. This is a public website. Part of this public website is the ability to make comments. The reason people make comments on what the article is about is literally because they can.
2. You and the rest of the Defense Force are flat out dismissing what the dev says for no actually legitimate reason other than to kiss Microsoft's big green posterior. Ranging the gamut from "He's a former Sony dev" to "He doesn't work for Microsoft" absolutely NONE of you have actually been able to refute what he has said. All you've done is chant "In Microsoft We Trust" and left it at that. My initial comment reflects that attitude.
3. Who the hell do you think you are to dictate who can comment on what, where, when, and why? Do you honestly think that the only people who should comment on news are those with an affirmative/positive attitude toward the news being reported? Imagine what would have happened if no one had criticized Microsoft for their B.S. DRM at the reveal/launch of the Xbox One, if everyone had acted like you and the gang and insisted that only people with agendas could ever say anything bad about the Lord and Savior Microsoft.
Get over yourself and take off the green goggles.
Everything about programming on the XB1 is really hard compared to Sony. What's your point? While Sony went the easy route, walking around AMD with a shopping cart saying, "I'll take an extra bus and some more of those ACEs," MS went with specialized processors up the wazoo, 15 in total, modifying the system for things like massive data movement between components. Besides, I'm sure MS wouldn't invest in 300,000 servers, or be laying their own internet infrastructure, just so their lies would seem more believable.
If everyone took the Sony approach to life, nothing new would ever get accomplished and we'd eventually end up living in a world resembling Idiocracy. Conversely, taking risks sometimes results in making mistakes but is also how new ideas are born. These fanboy conflicts are much like Edison vs. Tesla, or Newtonian Physics vs. Quantum Science when it was trying to gain traction some years ago. New ideas often get shunned by those content to walk the same outdated approaches in the name of tradition. Win10, DX12, and the Cloud are just getting started and MS predicted a 10 year life cycle for XB1, which if they are to be believed, means this debate will carry on for a long time.
"Everything about programming on XB1 is really hard compared to Sony."
Anyone with any remote knowledge about the general makeup of both the PS4 and Xbox One would stop reading your comment right there. You don't know what you're talking about.
"While Sony went the easy route, walking around AMD with a shopping cart and said, "I'll take an extra bus and some more of those aces," MS went with specialized processors up the wazoo, 15 in total, modifying the system for things like massive data movement between components."
LMAO! Do you even read the stuff you post?
"Besides, I'm sure MS wouldn't invest in 300,000 servers, or be laying their own internet infrastructure just so their lies would seem more believable."
The majority of those servers are not used for Xbox, they are used for business applications and renting them out to other businesses. You can throw out "300,000 servers" all you want to, it means diddly when most of them aren't even for the Xbox One.
"If everyone took the Sony approach to life, nothing new would ever get accomplished and we'd eventually end up living in a world resembling Idiocracy."
Do you remember how earlier you said that it was "harder" to program on the Xbox One? Assuming that you're not completely wrong, which you are, guess where they would have got that idea from? The PS2 and PS3, both consoles being the most difficult to program for of their respective generations. Who pioneered game sharing, share play, and DVR in consoles? Hint: It wasn't Microsoft.
"Conversely, taking risks sometimes results in making mistakes but is also how new ideas are born."
Which all 3 of the companies have done. What's your point?
"These fanboy conflicts are much like Edison vs. Tesla, or Newtonian Physics vs. Quantum Science when it was trying to gain traction some years ago."
Hmm, Edison tried to either completely steal, or discredit, Tesla's ideas so I suppose that fits Microsoft to a tee right? I mean, I already listed what Sony pioneered this gen, and all you're doing is defending unproven cloud tech like it's been proven and is the most glorious tech ever.
"Win10, DX12, and the Cloud are just getting started and MS predicted a 10 year life cycle for XB1, which if they are to be believed, means this debate will carry on for a long time."
And there's the pamphlet speech. Win10 and DX12 are not going to do what you think they're going to do for the Xbox One. I'm content to let that 10 years prove it to you because I'm very patient. The long game is always more entertaining than the present boasting and cult like mentality coming from brainwashed followers.
@dragonknight
Lol. "The wheel on the bus goes round and round, all the live long day."
Replying to a reply is just a courtesy, it doesn't necessarily denote having an interest. I know I just used some complex words there, so let me see if I can put it in words you'll understand.
"Me no care about you song."
Azure was put in place first and foremost as a customer cloud for OneDrive and other MS services.
You're implying that all those servers were built exclusively for Xbox with cloud processing in mind. That's ridiculous.
@TheCommentator, you really, really need to lay off mrxmedia's site.
And comparing fanboy arguments to Edison and Tesla, wtf? Talk about hyperbole. Gotta be the funniest thing I've read all day lol.
It's common knowledge the Xbone uses pretty much bog-standard components because, as MS themselves admitted, they didn't aim for graphical fidelity with the Xbone as the market they were targeting doesn't really care about it.
Time to wake up n smell the coffee.
@midget_gem
A GPU with 2 GCPs and 2 CCPs, 4 DMEs, and eSRAM now qualifies as bog standard? Where did I mention anything about graphics fidelity? About "secret sauce"? All of the info can be found on sites like DF, Extremetech, and Anandtech. It's common ignorance that you "no sauce" guys just fling poop like monkeys at the zoo. You're not even smart enough to see that the PS4 uses only standard PC parts while the XB1 is hardly standard at all, so it's no surprise you have trouble understanding analogies either.
I have not moved on to next gen as yet because both the PS4 and XB1 don't offer me anything that I would say is truly next gen.
Before anyone starts to throw out a list of games, let me explain myself.
Yes, we have had some games with top-notch graphics like Driveclub and The Order, but I am talking about games where physics are taken to the next level, where we can enter or destroy any building or vehicle, where NPCs are not copy and paste, where every playthrough is different and not scripted, where interactions and decisions actually mean something... I could go on but you get the point.
Hopefully this E3 the second wave of next-gen games proves me wrong and offers games that will make me a believer.
No Man's Sky is something to that degree and has the potential for greatness.
Meh. I'm a gamer. The best versions of all the new games are on them, and lots of games have already been released and are coming that aren't on the old consoles. Waiting 2 years to save myself 100 dollars at the expense of a top-notch experience seems pointless. It was a no-brainer for me to make the jump immediately.
He does speak on a general technical level. For PS Now to work, you have to have a consistent and solid internet connection. As long as you have that then you can offload things.
It'll be interesting to see the approach MS takes with this. Do you go for a full offload of graphics to the cloud, so the console only has to communicate reference locations and input, or do you go a hybrid approach with some local and some remote...or even both.
If we want games to run at 60fps, which is ~16.7ms a frame, then communicating the data, processing it, and receiving the frame needs to take roughly that long (in an ideal world). IMO it's doable, but I agree it's a hard problem.
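Spelling that budget out (rough, assumed numbers): at 60 fps a frame is 1000/60 ≈ 16.7 ms, so whatever goes over the wire has to fit in what's left after the console's own work.

#include <iostream>

int main() {
    const double frameMs   = 1000.0 / 60.0;  // ~16.7 ms per frame at 60 fps
    const double localWork = 10.0;           // assumed ms the console spends itself
    const double budget    = frameMs - localWork;

    std::cout << "frame budget: " << frameMs << " ms\n";
    std::cout << "left for a cloud round trip: " << budget << " ms\n";
    // Even a good 30 ms RTT blows this budget, so per-frame results would have
    // to be requested frames in advance or tolerate arriving a few frames late.
    std::cout << (30.0 <= budget ? "fits" : "does not fit") << " in one frame\n";
}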
Did he just happen to opine on something he actually works with, or is it a case of name-dropping Naughty Dog to get attention??
If he were at least still responsible for this kind of thing at ND it would be one thing, but he isn't, so what purpose does it serve?
I don't think cloud computing was ever intended to improve graphics performance... It's a way to calculate things like small AI code and a way to store things that aren't entirely necessary for the game to function. Things you could use it for would be something like a dynamic accident in GTA that happens in front of you. Instead of your PC calculating that AI, the server could do it and send the response to you. It's difficult to do for major things, though, due to latency. So until optimizations are made or they figure out some tricks, it won't be much more than cloud save data and Drivatars, pretty much.
No tricks will ever get around network issues. Without a stable, low-latency network connection you're screwed.
I know Microsoft had suggested some nonsense about sending multiple pre-calculated frames for the various possible user inputs but that's going to massively increase the bandwidth requirements and would likely be detrimental rather than helpful.
Let's not disregard the fact that MMO's could fully utilize cloud-computing for special aspects of their games. It may not be viable for a singleplayer game or even a general multiplayer to use it, but some things could get use.
I'm sure it will be if Naughty Dog uses it to the perfection it usually does. All for the better though, they'd do amazing things on it, I'm sure.
People who know their stuff have been saying this since day one. Worldwide internet speeds are way too bad today.
It's gonna be hard for Naughty Dog because they are a PlayStation-exclusive company and Sony doesn't have that type of technology yet for the PlayStation brand, so of course it will be hard for Naughty Dog, but not for any first-party developer for Xbox. Love it or hate it, Microsoft is light years ahead of Sony in terms of servers, networking, and even cloud processing (offloading graphical tasks over the network). So I'm sure Xbox will see this as a possibility before Sony does. They have the brains and the money to make it happen, and a lot sooner rather than later.
Well said but you will probably get disagrees because of your avatar and you said something negative about Sony.
I stated above why I don't believe we are ready for cloud gaming, but if MS can somehow manage to make cloud gaming that does not rely 100% on the internet, then yes, MS will be a pioneer of cloud gaming.
Edit: @PS4our I do agree that MS still has to show us instead of telling us; the demo with the building falling apart was very impressive and it's said to be part of the new Crackdown game.
This E3 will be critical for MS as they have no choice but to show us what all the hoopla is about and that they are not just blowing smoke.
Great time to be a gamer.
This is not the only person to debunk graphics processing in the cloud. A simple Google search will show you that. But whatever keeps Xbox fans happy, I guess. MS has, after all, shown tangible proof of actual work-in-progress games where the cloud increases Xbox One performance threefold, as they stated at a Japanese stage presentation... oh wait...
http://www.pcgamer.com/nvid...
So Nvidia showing off global illumination using cloud computing is a lie now? Because apparently it was debunked, yet Nvidia actually showed what it can do.
Like the cloud is an MS idea. lol
It was an idea put forward long ago by many people; Sony were even going to use Cell processors linked up worldwide to produce power.
It's not possible and won't be for 10 years.
I was under the impression the cloud was to assist in CPU-based functions like physics, animation, etc. I never heard anything about the cloud improving graphics. Even that demo they showed a while back was all about how many destruction objects were being calculated in real time with the cloud, split-screen with a computer not using the cloud. That demo was all about frame rate vs. object count, not graphics, and in that case the frame rate was vastly superior. Seems like a straw-man debate article to me.
“What graphics features are in a typical game where you could live with getting the results a few seconds after starting the calculation? Not many. I think that a compelling cloud rendering technique would have to be an amazing new feature that no one has really done before. If we think outside the box there might be some really cool things that we could do. But simply offloading existing work into the cloud is hard to justify because of the roundtrip latency and all the things that can go wrong on a network.”
Read more at http://gamingbolt.com/impro...
Well he isn't really criticizing Microsoft. He's just criticizing offloading computations via the cloud and he does have a point.
Streaming is a heck of a lot easier to do than cloud computations that's for sure.
The question you should all be asking is: will developers make the extra effort to use the cloud to obtain a minimal gain in performance?
In my opinion I don't think so. However I do believe the cloud is useful for other things like streaming for example.
That quote? Been saying it since day 1. Apparently it makes me a Sony fanboy and Xbox hater.
I thought I was just using common sense.
What bothers me the most is that some people believe that the cloud is only Xbox Live compute when the reality is the cloud includes streaming services and storage as well.
From what I've seen the cloud is pretty much proven for streaming and data storage. But as for offloading computations, to be honest I have never seen that done before from an average Joe's system. Microsoft still has to prove that, and I thought they would last E3, but so far they haven't done it yet.
@MasterCornholio Actually you have: almost every MMO uses cloud compute. The server calculates the damage done, then sends that data to the player.
@Dark_king: Yeah, but that's almost entirely streaming. And that's sorta the point.
How many flawless MMOs are there that don't run into lag issues and such? And this is for games generated almost entirely server-side. If they still suffer due to internet limitations, what makes you think games that do more locally won't have the same limitations?
It's the same inadequate internet being used, after all.
MMOs actually are a perfect example of cloud compute, as MMOs do exactly what cloud compute is said to be intended for.
You have a local client application, which does all the work of displaying the final picture and receiving/sending input commands, and holds the resources to display that picture, as well as the code to turn the calculations the servers send back into something for the end user. That is asynchronous computing to a tee. It's not some magical invention of MS that recently came about.
Almost the entirety of the game logic of an MMO, outside perceptible things like animation or whatnot, is calculated on the cloud. User position, user action, enemy action, damage calculation, event triggering, etc., are all handled on the server, and those calculations are processed into the calculations of every user on the server to provide the MMO experience.
In the end, it doesn't really matter how much of the game is processed server side, or what those server side calculations are, the premise of cloud compute is the same.
The idea of applying it to graphics has only gone mainstream since the reveal of the X1, although the theory has been out there for a few years before that.
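That split can be pictured with a tiny server-authoritative sketch (toy code, not any particular engine or MMO): the server owns the damage math, and the client only applies and displays the number it gets back.

#include <algorithm>
#include <iostream>

// --- runs on the server: the authoritative game logic ---
struct AttackRequest { int attackPower; int targetArmor; };

int serverComputeDamage(const AttackRequest& req) {
    // The server owns the rules; the client never runs this itself.
    return std::max(1, req.attackPower - req.targetArmor / 2);
}

// --- runs on the client: purely presentation of what the server decided ---
struct ClientTarget { int health = 100; };

void clientApplyDamage(ClientTarget& t, int damageFromServer) {
    t.health -= damageFromServer;  // apply the authoritative result
    std::cout << "floating combat text: -" << damageFromServer
              << " (hp now " << t.health << ")\n";
}

int main() {
    ClientTarget boss;
    int dmg = serverComputeDamage({35, 20});  // in reality, a network round trip
    clientApplyDamage(boss, dmg);             // client only renders the outcome
}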
@Spotie specifically
Yes, MMO's are often hindered by server or internet lag. And that has been a pretty constant criticism throughout this whole cloud debate. Only the most ardent fan boy says that latency isn't an issue, or that no one will ever experience issues, or that everyone has good enough internet to take advantage of cloud compute. I imagine these are the same people who don't have issues with the MP portion of their exclusive games on day one, or months after.
Cloud Processing helps the CPU not GPU. This has already been explained many times before.
But Physics etc can be done efficiently on GPU these days. Everyone seems to have forgotten how GPGPU will grow in usage and be way faster than any cloud application.
What annoys me is that Microsoft has announced a game that will offload destruction to the cloud, and then you have people claiming it's not real?
Crackdown not real?
GamingBolt, a game in development is real, it's not nothing. We even know the name of the studio doing it, Cloudgine, and the names of some of the devs developing it are known.
PS4 fans are still waiting on The Last Guardian. Who is making that? Who are the devs developing it? Name the studio. Sony stuff is all fake.
Microsoft's plans are real. Sony's plans?
Exactly MC, well said. Actual, real world, in use today ~ like right now. Not later, in the future, we're working on it etc...
It's not that it's not real, but it's just not practical for tons of players to use at once, and even then after they buy all the servers to do this who's going to pay for it? Does it just come out of the profit they make from LIVE (what's the benefit to them?)?
Next, think about the level of interaction with this content, because of network latency and bandwidth, you can't just use this as you would a local processor. It can only be used for things like big set piece destruction, and in that case it's not much better than pre-baking the physics calculations like games today already do.
I admit, I don't know exactly how they plan to advertise this to developers/ how they intend for developers to use it, but looking at it from a developers viewpoint (developer in training here) I don't see the point other than a little more eye candy.
Deja-vu. I remember comments like:
"What annoys me Microsoft have announced a game that will show how well Kinect interactivity will work and then you have people claiming it's not real?
Milo not real?
They even had a live demo, Microsoft plans are real"
That worked out real good for guys like you didn't it?
GamingBolt is stupid and should be banned from this site. That said, it's true. We won't see any significant application of this tech on the Xbox One (or the PS4 for that matter). I guarantee it. It's just a PR unicorn, at least for this gen. Believe me or not, whatever. I'm supremely positive about this. Watch.
Waiting till Crackdown 3 is out to form a judgement about the cloud. If Crackdown succeeds in the destruction offloading and it's what we think it is, then the hype is real.
Only if other devs use it. If it's only used in one game because it's too difficult to be practical, that's still hype.
I think one of the main issues with using servers to store game data is the risks they would need to take in order to take advantage of the extra storage.
If something happens to those servers, does it affect everyone? And if so, how long would it take to fix? I mean, there's server security, support team(s), and developers who need to manage it all the time in order to keep the servers stable so they don't impact users.
Well, the people doing the tests disagree. Naughty Dog is a fine studio, but they are not the end-all be-all of studios... and they are DEFINITELY not known for their work with cloud resources. Now true, any time you start a new tech it will be difficult, but if we are looking at the tests that just came out a week or two ago... it's a VERY real possibility. I think one of the misconceptions is that the actual graphics will all be offloaded to the cloud... that doesn't seem to be the approach at all. The stuff offloaded to the cloud is stuff that isn't needed immediately, leaving ALL the resources in the console itself to focus on the immediate-need aspects.
I agree with you. ND are a great studio but they have ZERO experience with this kind of tech. They are not testing it and they are not making any games that push this kind of tech. So I don't think his opinion holds much weight here. I wouldn't let a rally driver lecture me about the intricacies of driving an F1 car. That doesn't mean the rally driver is not good. It simply means he doesn't know the finer detail, even if he is in the same relative field of expertise. Cloud compute is a different beast to simply wringing performance out of a static console.
MS are the company with REAL experience and resources invested in this area (Sony's game streaming is NOT the same thing)... so who do you listen to? The guy playing on the field who has real experience, or the guy who is just shouting "it won't work" from the sidelines, knowing damn well he has no experience in this area? It's laughable that we even have people here agreeing with him. It's just lip service on Sony's behalf, playing down cloud tech as a slight to MS. This guy should just stick to giving us updates on UC4 and ND-related news.
Graphically it has zero application... they could do something with AI, and even that only on some turn-based stuff... even physics computation does not make much sense...
You need 33ms for a 30 fps game... I don't see how that is applicable for gaming purposes...
Maybe the same as cloud saving of character profiles and perhaps faster matchmaking... but that is about it... (even Uncharted 2 did that on PS3)...
Improving graphics performance with the cloud is bullshit; what I want to see from the cloud is lightning-fast loading times in gaming.
Oh you kids. I consider you a kid if you're under the age of 25. I have seen these game consoles come and go for the last 35 years. Evolution happens in this industry; it doesn't remain stagnant.
Did someone WAY up there say that the Internet has achieved the Speed of Light?
That is just..... awesome.
Cloud is good and I did notice a difference playing against AI, however that's where it ends for this generation.
Xbox One will not improve graphics-wise with the cloud.
FACT.
... once again.. anyone who knows how tech works or how to program knows this XD. It's the ignorant people who don't understand this and fall for this ridiculous gimmick. Streaming a game from a supercomputer is one thing... cloud computing... that's stupid ridiculous.
John Hable on how DX12 will impact consoles and PC.
Why would a random ex-Naughty Dog dev who isn't currently working with DX12 on XB1 or PC, or in gaming at all (film graphics), be a reliable source to quote from? Seems pointless.
Not to mention both consoles are strongly CPU bound as their core speeds are terrible. Not saying he is wrong or right, but someone who is actually using the software would make much more sense to quote, these comments from him are essentially guesses.
I feel like GamingBolt decided to interview this guy for no reason other than that he worked for Naughty Dog and they want to start a flame war.
"The short answer is that newer APIs will make the CPU faster, but will probably not have much effect on the GPU,” Hable said to GamingBolt. “The improved APIs change how long it takes for you to tell the GPU what you want it to do, but they have no effect on how long it takes the GPU to actually execute those instructions. “ - ex naughty dog dev
"
They might be able to push more triangles to the GPU but they are not going to be able to shade them, which defeats the purpose. " cd projekt red
http://www.cinemablend.com/...
Same thing?
An APU is gonna suck no matter what you do to it.
Sorry.
What will DX12 do for a toaster?
What will DX12/Vulkan do for a GTX 960 and an i3/i5? Or an i7 with an R9 390X with HBM?
Now that is a worthy discussion.
You're getting disagrees for being right...the fanboys are strong with this article.
The reason the CPUs are struggling in the new consoles is not because of the overhead... it's because the CPUs are weak.
A $229 CPU from 2008, the i7 920, at stock 2.66GHz, let alone the easily achievable 4GHz on air cooling, runs circles around the consoles.
That is why the CPUs are struggling... it's because they are weak.
8 cores doesn't mean much when each is pretty weak.
Otherwise some people might take one of the new octa-core chips in a phone and think it's powerful.
I love my PS4, but a $329 GPU added to a nearly 7-year-old PC will run circles around it.
I don't see why people are still in denial; we've had these discussions for nearly a year, that PCs will get a much bigger boost than any console, because PCs have the overhead wasting the power they contain.
Why people think Naughty Dog or even a layman can't know this and need further proof is the height of ignorance.
So when developers who never worked with DX12 say DX12 will do nothing for the GPU, their speculation is magically right? Brad Wardell, on the other hand, has worked with DX12 and he is repeatedly discredited by people here. Love the double standard.
We'll see at E3 who was right when DX12 gets its XB1 debut.
Well, I'm pretty sure both AMD and Microsoft were well aware of the GPU bottleneck from the DDR3 memory prior to building the custom chips for the XB1. I'm pretty sure M$ prepared the XB1 to handle the bottleneck that Mr. Hable discussed. The XB1 was never built to run the older DX11 API, which is pretty obvious from the earlier games. It was built with DX12/Win10 in mind. The system launched some 18 months early. So we'll see what happens after the key is inserted into the system.
LOL, quoting CinemaBlend as a source. Pathetic, that site is garbage and is run by fanboys.
And I'm supposed to believe some random guy that "USED" to work for Naughty Dog? Nah, pass! Microsoft isn't going to "cheerlead" DirectX 12 unless it actually does help out the Xbox in a noticeable way. If it didn't, it would come back to haunt them worse than E3 2013, and Phil Spencer isn't that stupid.
You know what's pointless? Microsoft's cheerleaders talking about how DX12 will change the world and up the graphics and framerate... even though DX12 isn't released yet. Neither is Windows 10 or any games with DX12. If anybody can put in their 2 cents, so can he. Deal with it.
I agree, those people are also stupid. Are you suggesting fanboys in a comment section somehow make an article based on an interview with someone who has no experience with the software they are talking about is validated by such?
If so that is just as stupid. In fact, you claiming it will do nothing is also equally stupid as you have no clue, and it has been proven that there will be improvements. (Just how much is the question) The reality is, the software is neither released, completed or implemented yet. How about we wait and see instead of pumping more into it. People are going to believe what they believe until shown otherwise so these articles are pointless unless given by someone using it.
Funny that you think your assumption is better than everyone else's... to the point of chastising those who have different assumptions, and excusing those that don't.
@Tsubasa
That would be true......if there weren't dozens of benchmarks already shown to the world...
...but there are.
And we can already see a massive improvement from DX11 to DX12.
There will be fewer cheerleaders if there are fewer haters.
*logic*
MS fanboys aren't just console gamers, which means they're the ones getting the best out of DX12.
@Nicksetzer
And yet you have no idea what experience he has with DX12 or what he knows about it. You do know that a lot of these guys all know each other, right? And that they all talk, right? I'm pretty sure that by now everybody who is anybody in the know knows a whole lot about DX12. It's not like a low-level API is even some new mystical thing. It's been done in consoles forever now. So let's not fall back on that default fanboy argument of "how would he know if he hasn't used it yet"? How do you know he hasn't?
@Out
"And we can already see a massive improvement from DX11 to DX12"
On the PC! Repeat, PC, not X1. You are making assumptions. We don't know what it's going to do for the X1.
Well of course they did. I'm just surprised the clickbait went the other way this time; it's normally asking every indie dev why the XB1 sucks lol. He really has no idea what tools are in DX12 for the XB1.
lol, both are strongly CPU bound, lol. The weakest part of both is the CPU, an average mobile CPU...
Well... not "average mobile CPU". More like the latest smartphone.
The Galaxy S6 (which was released recently) has both a Quad-core 1.5 GHz and a Quad-core 2.1 GHz. The PS4 and Xbox One Octa-core CPUs are respectively 1.6 and 1.75GHz per core.
Sure does look comparable, though.
He's a high level developer who worked at one of the most difficult game companies to get hired at. He sure as hell knows what he's talking about, whether or not he's currently developing something on it or not.
People seem to be fine taking the word of MS employees who aren't even developers regarding DX12. I'd hold the opinion of a talented, unbiased developer a little bit higher.
He may be a high-level developer, but he still hasn't had anything to do with DX12. And why does it matter if those MS employees aren't devs? At least they actually had hands-on time with DX12, yet you're saying that his opinion on something he has no first-hand knowledge of is more valid than that of those who have.
Many take Wardell's word for everything, despite him clearly stating that he doesn't know the X1 well enough to say for certain. I can respect Wardell's comments because he does at least have the knowledge to reason out what is likely to be the case, and I also realize that many of the things he says simply get attributed to the X1 despite most of the time he's only talking about PC.
This guy could probably get picked up at any MS studio if he wanted to, and go in without more than a couple days to get up to speed on DX12 specific syntax. People really don't realize how talented game developers have to be to get jobs at studios like Naughty Dog. It's not like he was some sort of intern who worked there for 3 months working on linking the menus to different parts of the game.
I've done both console programming and DX12 programming (for PC) and I can tell you there isn't a major difference in how operations are handled between the two. Syntax is different, implementation is different, but DX12 operates pretty much the same way consoles do.
It's funny though. This is a great break down of a major difference in DX12, and it's a great thing, but some people are more concerned with discrediting the statements without the knowledge or the research to refute it with something factual.
@rain C++ and Visual Basic are the primary languages for D3D of any kind, very few changes in that. Weird you claim to be some god-like programmer but don't know that...
Not to mention Crytek, Unreal and Square have all had tech demos showing there is an impactful change. So should people believe you (the random self-proclaimed pro) or the people who actually presented something with the software?
http://m.windowscentral.com...
http://wccftech.com/king-wu...
So if you want to believe it does nothing, enjoy your misbelief. The only question is the effect it will have on XB1.
I'm with Nick! It would be dumb to take seriously the words of those (even with huge reputations) that have absolutely no experience dealing with DX12 over those that have some.
By learning new syntax I meant the new functions that exist within DX12. For console programming, they're either going to use C, C++, or the assembly-level API. There are also some game engine scripts that they will likely use, and many, many 3rd-party tools which will get licensed to make things work. Visual Basic won't be used because it works off a framework which isn't suitable for AAA games, but it can be used for simpler games.
When I say there isn't a major difference, what I mean is that overall, the differences are on levels that aren't actually programmed in individual games. To the average developer, they're just going to use the engine, and then provide special functions through the low level API if necessary. It's EXACTLY the way console programming is done now. Not much will change. Sorry for being unclear.
I never claimed to be some god-like programmer. In fact, that's my point. It doesn't even take a genius programmer to go from one to the other if you know the basics of one of them. No people shouldn't listen to me, but they should at least verify or research what I say to determine for themselves if what I say has merit. I don't often dismiss other people's comments without at least trying to verify if they may have some merit.
Did crytek, Unreal, and SE show off anything for the X1? because that's really what this discussion is about. This guy is discredited for his work at ND, yet you point to all those developers who haven't made any DX12 games for X1? Seems legit.
On PC I have said many, many times the differences in how it operates are substantial. I can attest to this based on my own work, and I am very impressed at what it can do, and I'm a little miffed that PCs have been gimped for so long due to this kind of stuff not being available years ago without 3rd-party tools.
And that's what I'm saying. DX12 brings to PC exactly what has been on consoles for decades. It's a touch higher level, but it has an extremely efficient low-level API, just like consoles.
Let me know if you want to misread and misrepresent my comment to discredit me some more. I'll be happy to respond.
If you wish to continue on with your eyes closed, and fingers in your ears going lalalala, while ignoring anyone with a comment contrary to your own, and can't bother to provide me with any kind of response that actually does address anything I say with factual information then please just put me on ignore. Let my comments be used by those who want to take the effort to learn more, and not be like yourself where you throw out a few computing terms hoping that you seem knowledgeable enough to discuss the topic properly.
I don't care what your perceptions of what it will do are, but I do care when you make it out to be something that it's not. You set an expectation for others which MS can not possibly match. At least do me the common courtesy to respond with actual facts pertinent to my comment, and not with more exaggeration and PR nonsense which only validates what you already want to believe.
What??? MS built DX12, but some guy who worked for Sony is more credible? That's just stupid.
I'll tell you what. I've got some gold to sell you... yeah, I know the Periodic Table says it's lead but they don't know what they're talking about. Trust me.
Come on now. It depends, but for the most part those MS employees actually work at Microsoft and are communicating with, and/or have had experience with, DX12 or those working on DX12, unlike the ex-Naughty Dog dev. I mean, geez, the difference is that simple.
Man, I wonder how you all would react if some ex-Halo dev started talking about Nintendo's new API or a Mario dev started talking about Vulkan. The tune would change here.
And there is a reason no one else is talking about this besides Gamingbolt lol.
Why indeed? Probably because gamingbolt asked him. He's not wrong with what he says though. It's not like you need to know each specific API in and out to be able to work with them or understand what they'll be doing. Good game developers don't live in a bubble, and they all have access to forums or documentation or personal contacts where they talk about this stuff, and if they're good they keep up on everything that may be available to them.
I knew nothing about DX12 when I started working with it, there wasn't even documentation available to me until after I started work, but when I had it, I was able to see what it does different from both OpenGL and DX11, and adapt my algorithms to implement what I needed to in DX12. Since that time, MS has provided a lot of that information more publicly to developers if you know where to look. It doesn't make their PR rounds as much though.
Also, consoles are not CPU bound. They haven't been for a while now. The PS3 was designed to be GPU bound with offloading onto the CPU/SPEs when needed, and the 360 had a similar setup but did have some overhead with its memory controller in the cache, nothing major though. The PS4 is most definitely supposed to be GPU bound, as stated by Mark Cerny himself, and I'd imagine the X1 is supposed to be as well, as it's the best way to get the most performance out of these machines.
Otherwise, I didn't notice this was a GamingBolt article before clicking on it, but it is a pretty good breakdown and simple explanation of a basic difference between current APIs and the new ones coming in DX12 and Vulkan. His bunny example pretty much nails it, but he doesn't really explain what overdraw is to help you understand why it's not optimal.
This quote is pretty much what a lot of people have been saying for a while.
“The improved APIs change how long it takes for you to tell the GPU what you want it to do, but they have no effect on how long it takes the GPU to actually execute those instructions.”
In other words, you can't make hardware do more than it's designed to do.
Even Wardell has said something similar.
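Hable's point can be reduced to a little back-of-the-envelope model (made-up numbers, purely illustrative): a leaner API shrinks the CPU's cost of issuing draw calls, not the GPU's cost of executing them, so only CPU-bound frames get faster.

#include <algorithm>
#include <iostream>

// Toy frame-time model: the CPU submits work and the GPU executes it; they
// overlap, so the frame is limited by whichever side is slower.
double frameTimeMs(int drawCalls, double cpuCostPerCallMs, double gpuWorkMs) {
    double cpuMs = drawCalls * cpuCostPerCallMs;
    return std::max(cpuMs, gpuWorkMs);
}

int main() {
    const int    drawCalls = 5000;
    const double gpuWorkMs = 14.0;   // assumed fixed GPU load for the scene

    double oldApi = frameTimeMs(drawCalls, 0.004, gpuWorkMs);  // 20 ms: CPU-bound
    double newApi = frameTimeMs(drawCalls, 0.001, gpuWorkMs);  // 14 ms: GPU-bound

    std::cout << "old API: " << oldApi << " ms, new API: " << newApi << " ms\n";
    // Submission got cheaper, so the CPU-bound frame improved, but the GPU
    // still needs its 14 ms; no API change pushes the frame below that.
}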
With all due respect, I'm pretty sure that guy knows a little more about programming than you do.
Let me explain what he was trying to say. And I'm a computer scientist, so please consider that.
Imagine you're trying to send a set of orders to your employee. You have two options: you can do it via a middleman, asking someone to pass the orders to the employee, or you can send them directly to your employee. Naturally the second option is way faster because... well, it's just naturally faster.
Programming is the set of orders, the employee is your hardware, and the middleman is any middleware, such as DirectX 12. So what DirectX 12 is trying to do is speed up the middleman, but it does nothing to speed up the employee. Does it make the game render better? Of course! But does it make the hardware better? No!
So to sum up, DirectX 12 WILL make Xbox One's games better, but the PS4 is already better hardware, so it's impossible for the Xbone to top the PS4.
And considering game development (especially for triple-A games) is much closer to the hardware (which minimizes the role of middleware), the effect would be very minimal in console dev.
Hmm, I find that hard to believe. Especially considering "computer scientist" is the most outlandish job position description I have ever heard. It would be like calling the people at NASA "spaceship guys".
That all said, you contradict yourself regardless. DirectX 12 is a kernel and it does transmit data between the hardware and an application, but bettering that process can actually benefit the quality of a game. Will it make the physical hardware more capable? No. It does, however, have the same result, as it makes the hardware work more efficiently.
A better example would be a helicopter's propellers. If you make the propellers directly horizontal they have little effect, but if you skew them slightly they allow for much more lift. Same wings, same engine, same weight.
Proof of this would be to use a similar piece of hardware running old software and one running new (let's say DX9 vs DX12); there would be an absolutely massive difference, despite the hardware being the same.
Point is, we know DX12 is a massive improvement, we just don't know how much it expands on the current toolset for the XB1. From what we have heard the toolset for the XB1 is a bit under par. So this could be a twofold upgrade: a stronger API with a more reliable set of tools. Not to mention if the XB1 uses DX12 properly it could allow devs to make engines that accommodate it more readily. It would be like last gen: the PS3 was better hardware, but the 360 was easier to create for. I think that is the goal MS is shooting for. That said, this gen, the PS4 is streamlined enough AND powerful enough that I doubt any gains will overpower it.
It would appear nick, that you want to discredit anyone who thinks differently than you.
While I can't claim ng's credentials, Computer Scientist is indeed an actual thing. It is more on the theoretical side of computing, as opposed to an engineer who works on the hardware or software. A CS would have to have enough knowledge of computer engineering, but the reverse isn't always true.
Here's a list of jobs looking for computer scientist positions.
http://www.indeed.com/q-Com...
Everything he said is true though, whereas your comment is just you throwing out some computing terms again trying to look smart. So let's look at what you say.
"DirectX 12 is a kernel and it does transmit data between the hardware and an application,"
No, DirectX is an API. An API is an application program interface; it is simply a set of protocols and routines which facilitate the building of software applications. There are thousands of APIs which are used every day, and their main purpose is to dictate how the software interacts with the OS (emphasis on OS).
A kernel is a program which manages I/O requests from software and translates those requests into instructions for the hardware.
A kernel sits between the hardware and the OS, whereas an API sits between the user (software) and the kernel. The idea of low level access bypasses the kernel to a degree, but on a PC that's not ideal as it can cause serious security issues, which is why you need a low level API: opening up a system fully to low level access can cause all sorts of unwanted side effects. I don't even think consoles allow for complete bypassing of the OS kernel, unless it's through the hypervisor. Outside of closed systems, I can't think of any API that directly interacts with the hardware. Almost everything is done through a level of abstraction, and DX is no different. The whole idea behind DX was to remove the hardware from the equation, to make games more compatible and easier to program for on PC.
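If it helps, here's a crude way to picture that layering (a toy sketch I'm making up; none of this is real DirectX, driver, or kernel code): the application only ever talks to the API, the API talks to the kernel/driver, and only that layer touches the hardware. "Low level" APIs thin those layers out, they don't remove them.

```cpp
// Toy model of the layering described above (illustrative only).
#include <cstdio>

namespace hardware {            // the GPU itself
    void execute(const char* cmd) { std::printf("GPU executes: %s\n", cmd); }
}

namespace kernel {              // kernel/driver: turns I/O requests into hardware instructions
    void submit(const char* cmd) { hardware::execute(cmd); }
}

namespace api {                 // the graphics API: the only layer the game calls directly
    void draw(const char* mesh) { kernel::submit(mesh); }
}

int main() {
    api::draw("bunny");         // the app never touches the hardware directly
}
```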
" but bettering that process can actually benefit the quality of a game"
Yes. And DX12 does that. That's what the article says, that's what I've said, that's what ng said. It can benefit the quality of the game, and it will, but the effect is that the hardware can get more data faster, or more efficiently, or spend less time waiting for data to perform its task.
That isn't the same effect, because better hardware can do more, whereas the current hardware can only do what it's designed to do. The efficiency is that it's not waiting on data, or that the data can be output without waiting, or the data can be supplied in a more acceptable manner for processing, but the data itself is still processed based on the rendering algorithm used. I know it's a nuance, and you aren't exactly wrong, but you are making assumptions about the gains.
"From what we have heard the toolset for XB1 is a bit under par"
What we've also learned is that an implementation of DX12 is already installed in the X1 API. It's not the full DX12 because that's still not finalized, but the core is implemented from what I understand, and the current low level API probably isn't going to change either way, as it was already built for the X1 and never existed in DX11. Same as last gen, the X1 has a custom version of DX.
@nick wow... Are you being serious... A computer scientist is technically anyone with a job position titled that (research based I'd assume) and/or anyone with a degree, or studying for a degree, which here in the UK anyway is known as "Computer Science". I know several people studying it currently, as I myself study physics and therefore know many people at uni studying other sciences because... well, you're just more likely to meet them. You seem to like discrediting others' comments without any real evidence, so I'm going to assume you have no background in any form of science, because if you did you'd know: evidence is key, and without some, your credibility is zero. Both the guys above seem to have evidence for their comments and potentially experience in the field. Same goes for the guy being interviewed.
Computer scientist is not a position nor a degree. It is a description of such. Anyone who has a degree or position would know that. Generally all it means is a major in mathematics or a computer-related field. Really simple to understand. Just like saying you work on Broadway: there are lots of jobs on Broadway, not just one.
http://en.m.wikipedia.org/w...
The Xbox One, "beta tested in the future", was specifically made with DX12 in mind. The Xbox One is a Windows 10 device. DX12 will not only make use of all the cores (currently only one core being used with DX11) but it will also make much better use of its super fast ESRAM, tiled resources, etc. That's the main reason some 3rd party games have had difficulty reaching full 1080p. Moving to DX12 is huge in this regard and it's something many conveniently leave out when discussing the true benefits of games developed using DX12 on the console.
Another huge benefit not talked about much, though expect MS to really do a deep dive into this at E3, is the sheer ease for developers when using DX12 on PC: literally with the push of a button, your game is now scaled & optimized for ALL Windows 10 devices. Devs will save money, time, etc. And when consumers/gamers buy a game on ONE Windows 10 device, they own it for ALL their Windows 10 devices. That to me is the game changer.
Lastly the fact that if a game is built using DX12 & windows 10, cross play will be simple & easy. For years I've always dreamed of a day when PC gamers could play with console gamers. Well after July that dream will be a reality.
Dx12 & windows 10 is definitely a game changer for xbox one. The xbox one will finally have software for developers that the xbox was built for from the beginning. It's amazing to me how with the low level dx11, the xbox one has been able to keep up with the Ps4 and in some cases do better. Fanboys say that xbox one can't do 1080p. Strange, because I have over 10 xbox one games that are native 1080p.
Dx12 will open up all kind of doors for the xbox one. Soon enough, gamers will see.
he probably knows a thing or two about PC programming numbnuts. what do you think the PS4 and xbox1 basically are?
Oh gamingblart. Playing both sides off against each other again. Classy stuff.
Strange that anyone with even a modicum of API knowledge has been saying what's posted here (and more) for months. The whole while you've been posting BS about massive gains to Xbone, pc and even mobile.
Serious question? Do you ever feel bad for the hit whore tactics you employ? All they do is cause division and angst.
What was that old chestnut about the ends justifying the...something?
Edit: Lol at the damage control attempt by nic (first comment).
This isn't a Naughty dog dev. It's the guy who runs filmic worlds (a graphics solution based company thats open to pc mostly). He worked for Naughty dog at some point. That (the ND reference) was used to generate hits and cause an argument. Seems you fell for it..'coz...you know. Creating fighting fanboys is how this site does business.
Edit 2: nice edit nic. Keep up the good work. *smfh*
Just think, a pack of script kiddies are going to tear this news outlet apart along with GamingBolt.
Honestly GamingBolt is a farce of a site. People put way too much stock into the differences between DirectX and OpenGL; when a new version comes out of course it's better, but in the end it all pretty much evens out.
I think the ex-naughty dog dev is mixing old technological advances with today's new technological advances of gaming & hardware/software.
Thing is, MS took a completely different approach with designing the X1's hardware. That said, they went above the standard engineering process to make some of the X1's parts custom built. By doing this, they gain a bit more hardware control over the X1 CPU/GPU, and over how it should send and process data back and forth.
The X1 was designed with DX12 in mind. That makes a world of difference. The way I look at it, we don't know enough about the X1 hardware to talk about its capabilities.
That's what has me interested. We don't know enough about the hardware, and whatever details we don't have I feel like we will around the time windows 10 & Directx 12 comes out. I'm not saying there will be some magical boost, but like you said the X1 was built with Directx 12 in mind. They planned it out from the beginning and all they talked about seems to go together like pieces in a puzzle.
It's all very interesting imo.
It's hardly custom built just because it has esRAM..
They didn't take a completely different approach whatsoever. It's literally just like the 360, just stronger, obviously with added functionality.
I don't see anywhere where they went above any standard engineering process. It's a console. Of course, it has more hardware control access, but it's no different than any other console. It sends data back and forth no differently, aside from esRAM, which requires work to get anything out of it.
Yes, it was designed with DX12 in mind.. no. Wait. DX12 was designed with consoles in mind. Not the other way around. DX12 gives PC's the lower level access that consoles already have. That is all.
Nothing is revolutionary about its design.
Nothing is revolutionary about its design. Oh really?
The X1 GPU is custom built. It's the only GPU in the world that has dual lanes. This tech feature is the first of its kind; there's no other GPU like it. Besides, MS wouldn't spend 2 billion bucks on the X1's custom built GPU for no reason. Keep in mind, you can't test something that isn't readily available on the market. That said, I can clearly see why MS had to wait until the X1's custom built GPU was born before writing DX12. I think if MS had used a regular GPU in the X1's design, they could have started writing DX12 well before the X1's hardware was completed.
http://www.reddit.com/r/xbo...
Another thing: logically speaking, what I find obvious about the X1's custom built GPU is that since it has dual lanes, some of the other X1 hardware components also had to change to sustain data communication between the two lanes that the X1 GPU has.
I'm guessing that's why MS had to implement a move engine to take full advantage of those two data lanes. I could be wrong on that note.
Anyways, my point is that there's some hidden mystery about the X1 hardware. So making claims about what it can do is plain crazy thinking, especially if you don't have (or know) the X1's full specification.
Jhoward...what you've just written and linked to is complete crap.
The "dual lane" you're waffling on about is for gpgpu compute and refers to ACEs. They're used for parallel computing. They each run 8 queues (lanes). It has 2 ACEs for a total of 16 queues (lanes). Pretty impressive huh?
For reference the PS4 (the R9 280 and up as well) run 8 ACEs for a total of 64 queues (lanes).
One of the reasons MS would have chosen to go for the older/fewer ACEs configuration is to accommodate the eSRAM. Said RAM takes up more than half the transistor count on the die. There is no secret sauce in the hardware.
Now that kinect has been 86'd they may attempt to repurpose the move engines. Given their ultra-limited bandwidth with no direct access to the eSRAM their usefulness will be limited.
All (yes, all) the Xbone's specs can be found online. Beyond3D and other sites have all that for you if you like.
Now did you really want to talk about hardware or were you just grasping at straws? Tell ya what. I'll wait for your reply and we can continue this if you like.
Here's a tip though. Don't link to Mr X style reddits if you want to be taken seriously. ;)
Edit: LOL, all the info in the link you provided proves my point if you just look at what's given. OMG! You couldn't make this stuff up (but someone did).
@jhoward585
For one.. You are talking about complete speculation from a rough translation. that still hasn't been clarified.
Logically speaking.. You're still talking about "new" and "one of a kind" tech that isn't actually confirmed, yet, anyways. So, I don't see why you are trying to explain how it works.
Again, still unconfirmed. Don't harp about secret X1 sauce like everyone else is. Always have to grasp onto something..
And, just as LostDjinn said..
So. Yeah.
Love your console. Don't lie about it or find excuses though. The excuses are what games and features you like. Not what you hope it to be.
@sinspirit
YOU:For one.. You are talking about complete speculation from a rough translation. that still hasn't been clarified.
speculation?LOL
ME: Fact is, the X1 does have a dual lane custom built GPU. No one knows how it works but MS.
-----
YOU: Logically speaking.. You're still talking about "new" and "one of a kind" tech that isn't actually confirmed, yet, anyways. So, I don't see why you are trying to explain how it works.
ME: I brought it up because in my mind I know any piece of new technology will eventually improve as it passes through the trial and error phase. The X1 GPU is the first of its kind, so there are going to be a lot of test runs on the software side of things to refine its performance.
You:Again, still unconfirmed. Don't harp about secret X1 sauce like everyone else is. Always have to grasp onto something..
ME: I was thinking the same. Until I get more info on the x1 hardware specification I won't take in another false rumor as fact.
You: Love your console. Don't lie about it or find excuses though. The excuses are what games and features you like. Not what you hope it to be.
ME: for the record I own a PS4 not an x1. maybe in time I will.
Another thing, I do my best to make sense of everything I read or hear, especially when I gather information on the internet.
A lie is always a lie. But one thing I do know is that most major companies won't take the blame for another company's false claims (misinformation).
With that being said, MS has made some claims in the past that involve AMD as far as the engineering & hardware design of the X1 goes.
And yes, I'm talking about the secret sauce stuff that's spreading all over the internet.
Fact is, AMD has their reputation to protect, and MS has theirs. One thing that is for sure: AMD would've ended some of those false claims to protect their image.
So, That my friend, is all I need to draw my conclusion on what is fact, and what is not a fact.
@LostDjinn
Ok, the link I provided in my previous post may not be the greatest information as far as what the X1 hardware can actually do.
Honestly though, I don't think anyone can say what the X1's full specifications really are, because there are just too many contradictions regarding the X1 hardware specs on the internet. One site says one thing while another site says another.
Truth is, I was more interested in the past business decisions MS made to fund & create the X1's hardware, more than the physical hardware (and specifications) itself.
Fact is, MS spent well over 2 billion dollars on the X1 GPU technology. That alone says a lot to me. To me, it means MS took a chance which could've either worked to their advantage or not.
Truth is, we really don't know if it was a bad or good business call. We don't yet have all the details; until then I will remain completely optimistic.
I think what u meant before about dual lanes is that the X1 GPU is split into two sets of CUs with 2 individual gfx command processors. That's what Brad Wardell meant when he referred to the XB1 having dual lanes and how the PS4 and PC don't.
@LostDjinn
If my memory serves me correctly, I think Brad Wardell was the one who stated that the X1's custom built GPU cost close to 2 billion dollars.
The way he explained it, the deal between MS and AMD to engineer & build the X1 hardware cost MS over 3 billion. Two billion dollars went into the development of the custom GPU for the X1, while the remaining 1 billion dollars went into the rest of the X1's hardware components & design.
http://www.gamespot.com/art...
http://www.vg247.com/2013/0...
Genuine - if it's command processors, I think you'll find they pertain to OS task distribution, with the discrete (system) OS and gaming OS requiring high and low priority access. It's the only way they'd be efficient. The hypervisor would simply access the discrete-level OS priority solution.
What's the point?
Well, think of something like the snap function. The game would be given a high priority while, say, the browser would be given a low priority (as missing your render budget on a web page would be preferable to the game doing it).
Edit: Jhow, neither of your links says anything of the sort. A deal between MS and AMD is all that's mentioned. Nothing about GPUs. Please make sure you provide proof of your claims in future. Otherwise you'll paint yourself in a bad light.
You just completely edited your comment to cover the fact you can't provide a link. Jhoward, I now have a very different view of you. It's not about the truth. You were simply clutching at straws this whole time.
Thanks for that.
From that thread, as much as I cared to read since it kind of devolved after a bit, all I can gather is that no one seems to know what the dual data lanes are for.
For those that don't know, a data lane is just a controller for data, and in this case, the X1 has two supplying its CUs in the GPU. I'm not sure the move engines are really important for two data paths, as it may just add more overhead since the data controller has direct access to memory and the CPU. In the case of the X1, the different controllers appear to control a split set of CUs.
Anyhow, I'm not going to speculate for now, because I'd have to do some more research. I understand what's being talked about, but not enough in relation to the X1 to be able to form an assumption. I do think the article which reported the leak was a bit presumptuous.
To me, the best thing to take from that thread was a comment from iroboto,
"It's very important to not get stuck into conspiracy style thinking. It's like if MS denies the functionality of the second graphics command processor you take that as the opposite. If they agree with you, you take as truth, and if they say nothing about it that means you also take that as admission that you are correct. In all scenarios you are only agreeing with what aligns to wishful thinking and that severely hampers your abilities to make good sound decisions."
Not saying you're wrong or anything, just that it may be wise to temper your expectations and wait until more information is out before postulating a conclusion on what MS did and didn't do. I'll also readily admit that I should probably follow my own advice sometimes. There are a couple people on that thread which seem really knowledgeable about hardware, and while they're offering possibilities, none of them are saying anything definite.
Otherwise, I wouldn't really call it a revolutionary design. It's a different design, done to achieve some task which is currently unknown. One person speculated that the extra channel is for the media and overlay features of the X1, so the extra data path may simply be a workaround so as not to take away from the actual abilities of the GPU.
I can think of several reasons why a second data controller would be beneficial in gaming, but it's not a feature of DX12 that I've seen. It may be specific to X1 though, in which case I wouldn't have bothered looking too much into it yet.
However, even with two data controllers, unless there is some specific gaming purpose for them, I can't really think of any reason why you would need a second one on the GPU, given the rather low number of compute units. The data controller that came standard with the GPU should be able to handle it perfectly fine, since graphics are serial in nature. But when it comes to GPU compute, it can actually help tremendously.
Edit@LostDjnn
Appears you went into more detail about it possibly being needed for multi-tasking. While that's perfectly reasonable, I still wonder if it's actually necessary to have two controllers. Overlay has such low overhead, it seems rather unnecessary to split the CUs and memory controllers. I can't imagine that system features would run off GPU compute.
Rain, that's not where I was going with it. Overlay indeed takes FA overhead. The efficiency increase I was referring to pertains to running 2 controllers with a conventional overlay, as opposed to running them with hardware-based prioritization. It has nothing to do with compute-based packet distribution to the GPU. Simply the simultaneous rendering of assets from two separate OSes on a hardware level.
Nice to chat with someone who actually just cares about the facts though. If I run outta bubbles just pm me. :)
Ah. Yeah, I think I missed a bit of your comment there. Indeed it does make sense to do that, as it would require hardware to be dedicated to the actual secondary rendering to prevent slowdowns with the game render. Since console developers have the ability to control memory controllers, it would be reasonable to isolate a secondary controller that is managed by the OS to perform its functions. This leaves everything still available to the developers and prevents unintended conflicts.
I think one example of why this might be beneficial is looking at the PS3. While a game can render in the background when you hit the home button, it can also have a stuttery frame rate should it keep running. It's mostly obscured so it can be hard to notice, but it is there in some games. I'd imagine in a multi-tasking view, the stuttering would be extremely noticeable.
It's an interesting approach to handle what could be done with a rather inexpensive secondary graphics chip running synchronously with the main GPU.
The thing these developers are missing is that the GPU bottleneck he's talking about won't be in the XB1. Because the XB1 was designed specifically for the DX12/Win10 API/OS, there are things in the XB1 that will eliminate most of the bottleneck that would normally happen on a next-gen console. I expect a lot of PC games will get transferred to the XB1 with few problems. I'm interested to see how The Witcher 3 gets updated for Win10/DX12. Most of these developers who don't work for M$ haven't a clue how the XB1 will handle DX12, because they don't know everything about the hardware.
What these APIs do is lessen the strain on the CPU. So if you have a game that is bottlenecked by it, which basically means the GPU cannot unfold its full potential, you get some gains with a better API. But the GPU stays limited to its specs. There is nothing you can do about it.
Example: BF4 with Mantle
Slow CPU + Fast GPU ... gave you a crappy experience in DX11, since the GPU was bottlenecked by the limited CPU. Mantle helped there and freed up some capacity on the CPU, so the GPU could better unfold.
Fast CPU + slow GPU ... no gains with Mantle.
Fast CPU + fast GPU ... if there is no CPU bottleneck, you get no gains with Mantle.
Slow CPU + slow GPU ... minimal gains or no gains, if the GPU was already at its limit with DX11.
DX12 will basically work the same way. On PC and X1 the GPU will limit its effect. It really only shines on old CPUs combined with mid- to top-range graphics cards.
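Put as a back-of-the-envelope model (numbers invented purely for illustration, not benchmarks): a frame can't finish faster than its slowest stage, so halving the CPU/submission cost only moves the needle in the CPU-bound cases above.

```cpp
// Rough model of the four cases above: frame time is limited by the slowest stage.
#include <algorithm>
#include <cstdio>

double frame_ms(double cpu_ms, double gpu_ms) { return std::max(cpu_ms, gpu_ms); }

int main() {
    // Suppose a thinner API halves the CPU-side cost (purely hypothetical figures).
    struct Case { const char* name; double cpu, gpu; } cases[] = {
        {"slow CPU + fast GPU", 33.0, 16.0},   // CPU-bound: big win
        {"fast CPU + slow GPU", 10.0, 33.0},   // GPU-bound: no win
        {"fast CPU + fast GPU", 10.0, 16.0},   // GPU-bound: no win
        {"slow CPU + slow GPU", 33.0, 33.0},   // mixed: small win at best
    };
    for (const auto& c : cases)
        std::printf("%-22s old %.0f ms -> new %.0f ms\n",
                    c.name, frame_ms(c.cpu, c.gpu), frame_ms(c.cpu / 2, c.gpu));
}
```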
Can current and ex Naughty Dog employees stop commenting on DX12? It's none of their concern. I don't see MS constantly talking about PS4's API.
I know right, PS4 guys talk more about the Xbox One than the system they make games for.
Graphics programmers aren't bound to platforms like you somehow think they're supposed to be.
Being a graphic programmer doesn't make you an expert of every graphics API. A dev that only makes iOS games doesn't necessarily know everything about developing for Android OS.
Naughty Dog devs have never worked with DX12, any Xbox game, or PC game. So they would know little to nothing about it. So asking Naughty Dog current and ex employees is useless.
@Pandamobile
While that is true, at the same time it makes no sense to ask a developer from a totally different studio, with totally different programming methods. They still don't know much about it. Even AMD, the company that worked with them on it, doesn't know much about its capabilities. And you suspect someone who hasn't touched DX12 at all knows something? You can't really compare previous versions to DX12 either, as it unlocks low level access on PCs that has been dormant for years. So it's a completely different case this time.
Every single game developer on the planet has made games for PC. Do you really think that graphics programmers jump right out of university or previous jobs onto a PS4 dev kit without learning DirectX and OpenGL?
Seriously?
APIs are transparent to graphics programmers. Just because they've never used DX12 doesn't mean their opinion on it is completely invalid. They know all the shortcomings and pitfalls of graphics architectures, regardless of whether they're employed by Sony or not.
@Lennoxb63
"Being a graphic programmer doesn't make you an expert of every graphics API. A dev that only makes iOS games doesn't necessarily know everything about developing for Android OS."
True that but in the case of Naughty Dog believe it or not, they have Xbox One and X360 development kits at their office. If you don't believe me, look online.
@Lennox
I'll agree with you if you can agree that Phil Spencer isn't qualified to talk technical specs on DX12. He's not even a game programmer, he's more of a hardware-side technical engineer, yet if he said something about DX12, you would take it as gospel.
Otherwise, graphics programmers that understand how an API works, regardless of which one they primarily use, are far beyond the level of the less astute programmers who simply use pre-made functions to draw a screen. If you think programmers at Naughty Dog don't attend classes and go to things like Build, then you are sadly misinformed. Naughty Dog is part of the ICE Team, which makes the PlayStation APIs and SDKs. I don't know if this guy was part of the ICE Team, but do you truly believe that these guys are clueless about different APIs which are doing exactly what console APIs have been doing for almost 30 years now? Any console graphics programmer probably knows this stuff better than any PC graphics programmer, because it's what they already do.
By your own reasoning, there aren't many DX12 developers out there at all, so no one we've heard from is actually qualified to talk about it in regards to consoles, except for maybe a few privileged devs who got early access to it for the DX12 X1 games. There aren't many of those out there right now, and most of it is isolated to engine makers for the time being. So who exactly should we listen to? MS? That'd be fine, but they aren't exactly unbiased. So that leaves no one to listen to when relating DX12 to the X1.
I'm sure you didn't even bother to read the article.
He's a programmer who worked at an acclaimed studio, he knows his stuff. Gamingbolt asked him questions, he answered.
This article was a step up from their usual 'posting a series of tweets' style of journalism.
All the little sub-processors you're referring to are mostly the equivalent of "blast processing" from the Genesis, which was a DMA controller. So yeah, maybe a small advantage in some instances, but it's hobbled by weaker GPU cores (and fewer of them), and even the 6% overclock can't replace 6 compute units. Slower RAM for the GPU doesn't help either; its lower latency for the CPU makes the XB1 faster at opening standard apps and such, but it costs in gaming. Compare DDR3 GPU performance to GDDR5.
I don't know who your comment is geared towards but I have issues with yours on its own.
Sony uses a 14:4 configuration for their CUs. While devs can use the extra 4 CUs for their games, Sony doesn't recommend it, and says it will offer little benefit since the system was balanced around 14 CUs (not 18) anyway. You don't even know how the PS4 works, let alone how many of the unique elements of the XB1 will function yet, because their purpose remains unknown. Until MS clarifies, any argument is purely speculative. Data sheets can be misleading.
It may be balanced for 14 CUs, but the die contains 18, backed by 8 Jaguar CPU cores clocked at 1.6 GHz. The GPU is close in numbers to a 77xx series part, and MS reduced the number of CUs from 18 to 12 to make room for other stuff.
I love how, when people bring up DDR3 vs GDDR5 as an argument, they fail to mention that the Xbox One also has eSRAM.
Of course having ONLY DDR3 compared to GDDR5 isn't as good for gaming, but throw eSRAM into the mix and it's a different story. It's not easy to use, which is why there have been varying results in resolution.
But MS aims to end the struggle once and for all. They are giving eSRAM its own API with DX12, and this, along with the updated PIX tool, should see an end to eSRAM under-utilization.
Great to see a dev who worked on DX12 give his thoughts... oh wait. SMH, this dev has not worked with DX12, so he is just speculating; unless he has worked with the new API he can't make any credible claims.
http://stream1.gifsoup.com/...
did you work on DX12 On Xbox one And PC ? no
so shut up
Notice that is less than the PC benefits. I'm still waiting for someone to state the actual gains.
No doubt the One will get a 50% boost to performance just by adding DX12. Then we have the cloud coming, probably next year, to add another 50% boost to performance. That would put the One at about 1 petaflop of computing power. 4K gaming is going to be sweet on the Xbox One.
But we all know it's true, wink wink ;) right right. One developer even said it's easier to develop for the Xbox One. Reading between the lines, that would mean squeezing out all the performance from 1 petaflop would in reality give you about 5 petaflops of computing power. It's just like squeezing the juice from a lemon. Just imagine when they have squeezed all of it out of the Xbox One. That will probably blow my mind.
@piff, Such incredible gains! Xbox One will become a force to be reckon with in the future. Stay tuned for the Spencer effect!
What a troll, you clearly don't believe that yourself, you are just stealth trolling to identify yourself as some MS drone.
He is being quite obviously over the top...he isnt trying to fool anyone about it except you it seems.
Your passion for all things console wars is inspiring.
You're basically Mel Gibson in Braveheart, console wars version.
Game of Thrones is actually a show about nicksetzer1 winning the Console Wars Throne.
At the end of The Lion King, nicksetzer1 killed his uncle and took back control of the Console Kingdom.
They deny it because they hear about it, read about it, know what it can and does do, but it wont benefit them in any way. Every dev praises it so their only option is to troll, downplay, and deny.
Didn't this guy get interviewed awhile ago on the same topic? Nothing really new learned from this to be honest.
Even based on his comments there will be gains without a doubt. PC has been seeing gains of 300+%; if the Xbox One sees just a 50% gain it's still great news.
I don't think people have a feel for those numbers though, and 50% is nowhere near the kind of gains PC has seen. A 100% gain is going from 30fps to 60fps; a 50% gain is going from 30fps to 45fps. In some PC cases, it's been said there are 500 to 600% performance gains!
Example: if a game like Forza Horizon 2 is 1080p 30fps and sees a 50% increase, it would run at 45fps. A 100% boost would take it to 60fps, 400% would take it to 150fps, etc.
If the Xbox One can get at least a 300% gain, I'd be impressed. :)
PS: I did not down vote you..
Where are you getting these fps numbers from? And when you say a 50% increase, a 50% increase of what, exactly?
@kstuffs
Why, this is from the two APIs (DX11 to DX12): Intel ran a game at 7 fps in DX11, and in DX12 they had it running at 42 fps!! So that's six times the performance,
7 x 6 = 42 (a 500% increase).
The 50% is the Xbox One performance increase lastking95 was referring to, but that's not much of an increase by comparison. Forza Horizon 2 at 1080p/30fps going from DX11 to DX12 was my example.
Anyone interested, fast forward to 23:00 in this video.
http://youtu.be/47cnFWK0dRM
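To spell out the percentage arithmetic behind my example (just the math, nothing from the video): a p% performance gain multiplies the frame rate by (1 + p/100), so 50% on 30fps is 45fps, and the Intel demo's 7fps to 42fps works out to a 500% gain.

```cpp
// Quick sanity check on what "X% more performance" means in frame-rate terms.
#include <cstdio>

double boosted_fps(double fps, double gain_percent) {
    return fps * (1.0 + gain_percent / 100.0);   // a p% gain multiplies fps by (1 + p/100)
}

int main() {
    std::printf("30 fps + 50%%  -> %.0f fps\n", boosted_fps(30, 50));    // 45
    std::printf("30 fps + 100%% -> %.0f fps\n", boosted_fps(30, 100));   // 60
    std::printf(" 7 fps + 500%% -> %.0f fps\n", boosted_fps(7, 500));    // 42 (the Intel demo case)
}
```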
So because he worked on PS3, his whole entire career of programming is meaningless and he shouldn't have any opinion or comment on a subject that is related to his line of work? Do you realize how silly this sounds?
I have no idea what people are expecting of these APIs. APIs cannot work miracles, they can only do what developers tell them to do.
Armchair experts coming out of the wood works to shoot down this guy's words.
Just because he used to work for Naughty Dog, doesn't mean he was some sort of fanboy trying to talk bad of Xbox/Microsoft. In fact, the fact that this guy worked for Naughty Dog should suggest to you maybe he has a little more knowledge on the technology. It's not like a graphics engineer who used to work with OpenGL is ONLY going to know OpenGL.
is that good or bad? I never got my chart on which days it is good, and which days it's bad.
Also, bubbles for funny.:)
oh yeah, mine as well. best purchase of the century. I'm amazed I can still run modern games at 60 fps with it :)
There are more internal processors that will be lifted with DX12. Remember now, you'll have more processing power from the CPU to strengthen the offloading processors MSFT has yet to even talk about or unlock for developer use.
None of the released SDK notes detail them in the slightest.
I remember the Hotchips and I remember the head scratching...don't sleep on Xbox and don't sleep on DX12.
People have been hating on DX12 for a while now. I bet if it was for the PS4 it would be a whole different story, smh. My guess is, if DX12 wasn't going to bring any kind of improvement, then what would be the purpose of even bringing it out? You people make no sense, smh. What's the point in putting a turbo in a Honda if you know it still won't beat a Ferrari?
The argument is that the improvement is a minimal one and not a secret sauce, and that PC stands to gain more, which is true. Also, you are still limited by the hardware, and MS brass made a fatal mistake by putting cheaper tech in the Xbone (DDR3) and have spent this whole gen trying to make up for it, trying to convince people that some secret tech will turn the Xbone into a powerhouse. They said the cloud would enable devs to harness 3x the power of the Xbone and that Titanfall would use the cloud for rendering. Supposedly the cloud would handle all the AI to give Titanfall the best AI ever seen in a game; funny, because Titanfall both looked bad and had poor AI. Really, MS is taking minor features that will bring marginal improvements and telling people they are major features that will bring mind-blowing (unrealistic) improvements.
Hater aid on DirectX 12? With all the stories being posted about DirectX 12 on N4G... and you're saying it's getting tons of hate? LMAO
What's amusing is that Vulkan is also a brand new, built-from-scratch API, and yet hardly a peep is said about it. When it does get mentioned, many still claim Sony will not use Vulkan on the PS4, but does that mean no one will be able to use Vulkan on the PS4?
Many gamers have claimed the PS4 won't be able to use Vulkan based on what Brad stated ("if Sony allows it"), yet those very same gamers see an ex-ND dev say what he did and go "he has no experience with DirectX 12, he should just shut up"... but.. but.. Brad knows all about it...
LMAO
all the while GamingBolt is making bank on both sides for stories like this...lol
The fanboyism on the Microsoft side is strong here. All this ex ND dev is doing, is echoing what everyone else has been saying.
The problem is that he was once affiliated with a Sony first party studio, so they're using that to call foul. What's foul is their stupidity.
The guy is an ex developer for a major 1st party game studio. He isn't required to work with DX12 to have some knowledge on it. It's an API. He's worked with them before. He's as qualified as anyone else. It's like saying a Domino's pizza cook isn't qualified to comment on Pizza hut.
I went looking into this guy, and he really knows his stuff. He's no fly by night mobile programmer using some game engine to make games. He knows intimate details of how hardware works, and how software in general works.
People who want to discredit him only do so because they see Naughty Dog and think that somehow instantly nullifies his statements.
I can't imagine any developer in their right mind, regardless of which API they work with, or what level of game they make, wouldn't want his expertise on their team.
Seriously people, go read his blogs on technical stuff. I doubt any of the haters here could understand anything once he starts talking technical. You don't know this stuff without understanding how hardware AND software works. Computers work in specific ways, they aren't some magical all knowing sophisticated AI that can adapt to whatever you may throw at it. Software has to be written for hardware, not the other way around.
Is it not possible that hardware can be made to process data in ways that haven't been conceived before? That a programmer looking at this new hardware, even through experienced eyes, still won't comprehend all of the ways that the data can move through that system? Or that DX12 can enable not just some, but all of it's intended hardware functions?
Even Einstein couldn't see past his own data that proved the universe was expanding, years before Hubble, because it conflicted with the Universal Constant.
DX12 is so great that it will allow the Xbone to render multiplatform games in 1080p! No more 792p or 900p, but 1080p! Wait... it's coming... "but The Witcher is 1080p." The Witcher is a game that MS paid a lot of $ for to be in the Xbox camp; it was at their E3 conference last year, on stage. Also, every dev makes a decision: either dumb down their game to the Xbone in order to achieve "parity" (lazy), or build it up on PS4 and iterate on the Xbone version, much like they do with cross-gen. Talk is cheap, unless you are paying for positive coverage, then it's expensive, but MS would never do that... would they? *cough* paying YouTubers *cough* non-disclosure *cough*. Only big greedy American companies would use propaganda and illegal tactics, not Macrosoft. Yes, I owned the original Xbox (Fable) and 3 360's because they kept breaking and wouldn't read the discs anymore. The only reason I went with the PS4 over the Xbone is because I couldn't afford to put AA batteries in my Xbox controller anymore (batteries are a fortune in my country).
FFS, this stuff has been reported on a million times now. This is not news anymore. GamingBolt posts the most repetitive stuff.
Why is N4G already drawing such outlandish conclusions about DX12 when the API hasn't even been released, nor have developers gotten their hands on it yet? It is interesting to hear an experienced developer's take on DirectX 12, but everybody uses different techniques and philosophies when developing games.
The only way to really know if Direct X 12 is the next leap in game development or a steaming pile of lies is to wait and see how developers implement this API in future games.
In no way is this news...and it doesn't matter who said it...even IF it was someone that actually knows about it. We all knew from day one/second one when DX12 was announced that it would benefit PCs more than Consoles...this should be a shock to no one.
I've gotta say that you xboxwun guys are goin' wayyy too overboard thinking DX12 is gonna change stuff on the Xbone. You're probably going to see smoother gameplay in PC versions of games, but that's about it. From what I've seen of Halo 5 it's not looking too hot in the visuals dept. (oddly, Halo 2 Anniversary edition looks like it has better graphics). If that's the first DX12 game on Xbox, then I'm not holding out hope for Gears 4 to look like anything other than an up-rezzed GoW Judgment or whatever.
If you really want to see improvements and changes and all that new stuff just get some cheap gaming PC off craigslist with last years dx12 stuff in it.
"Ive gotta say that you xboxwun guys are goin wayyy too overboard thinking DX12 is gonna change stuff on xbone."
Huh? Nobody is saying it's going to be doing 4K AAA games all of a sudden. Why do PS fans ALWAYS take everything said to the most extreme of assumptions? It's like you guys need to be reassured by Xbox fans that their console won't be more powerful than yours... weird. Furthermore, the X1 was designed for DX12. It's a known fact that they launched earlier than intended. It's also a known fact that the X1 is currently using a derivative of DX11, and therefore the hardware is not being utilised to anything like its full potential... FACT... but go ahead and read that as 4K gaming. PS fans are uninformed nut jobs who can't stop talking about the X1. You guys can no longer be reasoned with. Why would it even bother you that Xbox fans are enthusiastic about improvements to their console? I don't understand that.....
"From what ive seen of Halo 5 its not looking too hot in the visuals dept. (oddly halo2 anniversary edition looks like it has better graphics)"
So you're comparing a completed game's visuals to the graphics of a beta of a game that was a year from launch? A beta that was filled with placeholder textures and assets, a beta that was built on an even earlier build (public beta builds are always older than the day they are released; Halo 5 probably looked better than the beta back in December, but we don't get that build, for stability reasons)... You can't really be this stupid, can you?
The rest of your comment was pure nonsense based on your own silly, uninformed rhetoric. It doesnt even warrant responding to. It reads like pure nonsense. Seriously. Go get a clue.
"The improved APIs change how long it takes for you to tell the GPU what you want it to do, but they have no effect on how long it takes the GPU to actually execute those instructions."
When are xbox fans going to learn? You are totally deluded to think the system will change. The DX12 hype train really is going for a massive crash.
Like it or not what this person says is totally true.
I'm just about sick of articles talking about what DX12 will and will not potentially do......
Why do they always ask ND staff about their thoughts on DX? Lol, what does ND know about something they never use? I don't recall an ND game using DX since... ever.
I found it, this is what MS are referring to.
http://www.dolphinfitness.c...
Yeah, this is why I stopped getting gaming news from GamingBolt. There are only a few websites I read when it comes to gaming news; GamingBolt is so hung up on resolution and DX11 and DX12 that they'll interview ANYONE about the subject, even people that have probably never used it to make games, smh. They've been on my gaming tabloid list for quite some time and this article just proved why. Just a clickbait article.
Also, this is the reason why I've stopped coming to this website so much, because you'll approve this crap article like it's news, and it's not news at all. Why don't you pull some developer from Media Molecule and Guerrilla Games and ask them about DX12 too while you're at it.
Newer APIs will make the CPU faster, and consoles have a shitty AMD 8-core CPU that is beaten by an i3 (an Intel dual core with hyper-threading), so no wonder the PC would gain more performance.
It won't make them faster; "faster" indicates an upclock. What it will do is allow optimization for better coding, which will bring performance gains by doing more per cycle, not more cycles.
Enough with the arguing. They say this, they said that about it etc.. How do we know it's going to be a massive improvement for the Xbox one? We will not know until it comes out but for the PC it will help.
Filmic Worlds boss, John Hable also talks about the selection process at Naughty Dog.
They weren't just talented... they also weren't too bored (like many lazy others) to work on the PS3, which was a beast... Kojima, FromSoft and Naughty Dog made their own engines specially for the PS3 (Sony helped them a bit too), which maybe was difficult at the start, but they benefited a lot from it later, as now they have their own engines perfected and are putting out some very well made games (Kojima started with the MGS4 engine for PS3 and evolved it into the Fox Engine; Sony gave FromSoft the Phyre engine, which evolved into the Bloodborne engine, etc.).
How is the PS3 maxed out with Uncharted 2 or TLoU when there is this?
crysis 3 on console realtime
http://www.gamersyde.com/po...
http://www.gamersyde.com/po...
http://www.gamersyde.com/po...
uncharted 2 pre rendered cutscenes
https://itani15.files.wordp...
http://media1.gameinformer....
yeah I am blind lol...
Third parties never maxed out the PS3; they had to change a lot about Crysis 3 in terms of attributes to even make it portable.
Crytek uses high end solutions and doesn't necessarily optimize them for pretty much anything. Uncharted 2 was developed with the CBE in mind and took full advantage of its capabilities, well, in comparison.
I would love to see a next gen CBE with more bandwidth and a better graphics solution; it's a very underrated chip.
Crysis 3 looked so bad when running. The framerate was in the 15-20 range the vast majority of the time and the jaggies were oppressive.
For all of its effects, Crysis 3 looked worse than Crysis 1 and Crysis 2 on PS3. Crytek tried to do too much, and the horrible performance ruined the experience and the graphics.
You can max out a system, but you can also optimize your engine on a maxed out system. You can only get so much power out of a system; your next step is to optimize your engine to get more performance out of it. I think you're getting the two confused.
It was obvious that it was maxed out with Uncharted 2 as it will be maxed out with Uncharted 4 and Star Wars Battlefront
I messed up, but I meant the PS4 will be maxed out with Uncharted 4 and Uncharted 3, as both games are coming out around the system's third birthday.
A system can be "maxed out" and still show improvement, as he is saying (I'd advise reading beyond the title). Despite what Uncharted 2 did/looked like, God of War III, Heavy Rain, Uncharted 3, and Beyond: Two Souls looked better. Why? Maxing out a machine doesn't mean you can't make improvements in other areas.
The latter portion of the console's life had developers (as Hable put it), "squeezing out" what they could, though he admits, there wasn't really much left. I don't expect Uncharted 4 to be Naughty Dog pushing the PS4 to Uncharted 2's point on the PS3, but their second game? No doubt.
Not really. "maxed out" means the limit has been reached. If you've reached the outer limits of what it can do, you can't go beyond that. If they were able to tweak something here and optimize something there to get another 1fps out of it or improve textures or improve loading times, it means they didn't hit the max with U2.
What he wanted to say was worded incorrectly.
If the PS3's limit had been 100% reached, you wouldn't have seen better games than Uncharted 2. That is not what maxed out means. Its RESOURCES can't be pushed any further, but the engine code can be modified in a way that would allow for optimization (as plenty on the PS3 proved). The Naughty Dog 2.0 engine had reached its limit on the PS3, but there was still enough juice to make games look and run better after Uncharted 2 released.
The engine, to that point, was maxed. The PS3's capabilities were not. Something I see people get confused on all of the time.
You can max out any machine due to crappy code. It's easy to do, any idiot can do it.
A computer is only truly maxed out once the software has been fully optimized, and that takes a long time as someone will keep finding clever solutions to certain problems.
You're not getting it. If you can max out something that's it. You keep bringing up optimization. If it can be optimized, it didn't hit max. Max is the end of the road. Like I said above, if they can optimize to get better performance, it means it was never maxed to begin with. Max is the end. Being able to optimize means it wasn't at max.
... and if graphics were the most important thing to me in a game, this would mean something.
I would say Beyond: Two Souls is, by far, the best looking game on PS3. It didn't have the large explorable areas that Uncharted and The Last of Us had, but wow is it a looker.
Yes it was a nice looking game indeed.
It lost me though. The story was absurd and the last part of the story felt so rushed and silly that it ruined the game for me.
I prefer Heavy Rain to BTS any day.
Haha, you guys are funny and don't understand... a system can reach its limits with an engine, but then you optimize. Really optimizing is when you clean up your engine and start taking bits from what you can't see to use on what you can see... backgrounds become static, lighting becomes baked, etc... bla bla bla.
People in this comments section must have autism because Uncharted 2 did not max out the PS3.
Hell, God of War III looked better than Uncharted 2.
I think The last guardian would've taken the title for the most maxed out game on the PS3.
With over a decade of development time it had better be....
The last games that wowed me were probably God of War 3's opening gameplay, climbing Gaia and then fighting Poseidon, and Uncharted 3's airplane-to-desert sequence, plus the big cruise ship capsizing. I don't think I have seen any game with that polish and scale in a while.
Go play an Uncharted 2 map on Uncharted 3 MP then go back and play it on uncharted 2 you will see a huge difference.
The PS4's hardware is 28nm, and the lowest node available in the next couple of years is 14nm, and only once foundries are experienced enough with it to bring initial failure rates down to an acceptable level, or even to the same level as current 28nm. So expect two times the performance at most, if you want a console with reasonable performance and power draw in an acceptable form factor for a home console.
Except everyone is willing to dish out $600 on a home console.
The Order: 1886 already almost looks CGI.
This just reminded me how excited I am to see Quantic Dream's PS4 game. What they achieved on PS3 was pretty crazy, I could see their PS4 game getting pretty close to CGI.
I'm fine with the PS4's hardware... let's develop great games before we worry about superficial stuff.
Guy is just butthurt he doesn't work for Naughty Dog anymore.
We all know this and it's 2 to 3 generations away