"When the PlayStation 4 was first announced, several games from the free to play and other genres were ported to the new console. Some of those games reportedly took only a few months to port on the PlayStation 4."
Interesting comment. Is he saying GDDR5 isn't ideal for certain memory patterns? Is it a latency issue? @Rashid: From looking at the diagrams, the access paths are unique, is that not the case? It is the same memory, but it looks like the GPU and CPU can access it simultaneously; the bridges do not cross. The GPU enters from one direction "at full speed", the CPU from the other bridge, "which is much slower". I would think that for unified memory to truly work well this would always be the case. Otherwise it seems the GPU would saturate the smaller "CPU bus" bandwidth big time.
I heard about that before but with the way the PS4 is playing multiplats I wouldn't worry about it.
I don't think it is a case for worry; as he said, it is very easy to develop for the PS4. I think it's more of a developer preference.
It's about know-how. ND gave a presentation on the best use of PS4 memory; they should check it out.
? This has to do with RAM, not graphic effects. RAM-intensive tasks such as textures, draw distance, and character models are equal between X1 and PS4. PS4 is superior on GPU resources such as resolution and steadier framerates, as well as better graphic effects in general like lighting and shadows. For PS4's RAM being more than twice as fast, it does not show in texture loading, especially in Battlefield 4 and Infamous, as well as Watch Dogs, which all suffer the same texture and object pop-in as X1 does :/
@Gutz The PS4's RAM is faster than X1's, period! The GDDR5 has a bandwidth of 176GB/s vs 68GB/s for the X1's DDR3, and the eSRAM, while it helps, is still only 32MB, which causes its own problems/bottlenecks. And your assumptions about texture loading are wrong, because the reason for the texture pop-in is the HDD, not the RAM speed.
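For what it's worth, those headline numbers fall straight out of bus width times effective data rate. A quick sketch; the 5500 MT/s and 2133 MT/s figures are the commonly cited effective rates for these consoles, taken here as assumptions rather than official specs:

```python
# Peak theoretical bandwidth = bus width (bytes) * effective transfer rate.
# Rates below are the commonly cited figures for PS4 GDDR5 and X1 DDR3;
# treat them as assumptions, not official specs.

def peak_bandwidth_gb_s(bus_width_bits, megatransfers_per_sec):
    """Peak bandwidth in GB/s (1 GB = 1e9 bytes, matching marketing figures)."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * megatransfers_per_sec * 1e6 / 1e9

ps4_gddr5 = peak_bandwidth_gb_s(256, 5500)  # 256-bit bus, 5.5 GT/s effective
x1_ddr3 = peak_bandwidth_gb_s(256, 2133)    # 256-bit bus, 2.133 GT/s effective

print(f"PS4 GDDR5: {ps4_gddr5:.0f} GB/s")   # 176 GB/s
print(f"X1  DDR3:  {x1_ddr3:.0f} GB/s")     # ~68 GB/s
```

So the 176 vs 68 figures in the comment above are both 256-bit buses; the difference is purely the data rate of the chips.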
It's old news; that is why Xbox chose their memory solution, the DDR3 works better for their OS.

@dantesparda "The PS4's RAM is faster than X1's period!" That is just not true. You like to conveniently leave out the eSRAM, because it shows your statement is false, then justify it by saying it's a bottleneck. That isn't what developers are saying. We already know for a fact that the 32MB of eSRAM can deal with 6GB of textures. It may be a small quantity, but how you use it is everything. Time is going to tell on this one; we'll see what happens. Thus far, though, we are watching a closing gap (Ghosts 1080p vs 720p at the beginning of the cycle, to now Wolfenstein 1080p vs 1080p and Watch_Dogs 792p vs 900p) and not a widening one.

The fact is the most visually stunning games on both platforms, Ryse and Killzone, both look spectacular. We are going to end up with phenomenally great looking games on both platforms, and it's up to one's budget and taste preferences what they want. I personally own both systems, but I can say at this point (and likely going forward, because of the known franchises that will see sequels) that I wouldn't be missing anything by selling my PS4. But I do own it, so I'll get a few games for it here and there.

Go look again at the bandwidth of the PS4's GDDR5 vs the X1's 8GB DDR3 + 32MB eSRAM. You'll find out you are indeed wrong.
Wolfenstein isn't 1080p vs. 1080p. It uses a dynamic framebuffer up to 1920 (x1080), where the X1 version can drop to 700 columns while the worst on the PS4 is 1720 or so. That is in fact half the PS4's resolution under heavy load, and pretty much what other games indicate. Of course it's hardly visible, because lower resolution is less pronounced in motion, and the full 1920x1080 is reached in idle frames. eSRAM can't keep up with GDDR5. It can match it for small amounts of data, and with a tricky tile setup that bandwidth is often not required for the full texture, which is where all this comes into play. But in no way will it ever support e.g. 200MB render targets with full bandwidth and resolution. Also, these guys: if you port games, this might totally be true. But an engine designed for UMA would simply not move any memory at all and thus actually take advantage of the shared memory; by the sound of it, this is not the case with their engine.
GDDR5 is good for graphics, which is why it is used on graphics cards, while the other type is better for more general CPU-intensive code. As far as I know, anyway.
@incendy35 Yes, this is indeed an interesting comment, since in practice developers will prefer different access paths for the CPU and GPU for better performance. This is something Cerny discussed during the PS4 development meetings, about having a separate eSRAM pool for the PS4, but they went unified instead.
It was eDRAM, not eSRAM.
This has been debunked so many times; the PS4's memory isn't latency-burdened. The main reason DDR3 is used as primary memory in computers is its price.
There are benefits and drawbacks to unified memory. The drawbacks are what he mentioned, the benefits are when CPU and GPU are working together on the same data. That data does not have to be copied into the other device's memory pool first, which saves a heck of a lot of time in the grand scheme of things. In gaming, the CPU and GPU team up like that quite often, so it isn't a "split memory is better, period" situation by any means. Perhaps for their particular game, it works better, which would make sense since the engine was built for that kind of setup. However, as the console cycle continues on, GPGPU functions and CPU/GPU interaction will increase, so the benefits of unified memory will become more apparent.
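The copy-elimination point can be illustrated with a trivial sketch: with split pools, data the CPU produced has to be duplicated before the "GPU" side can touch it; with a unified pool, both sides reference the same allocation. This is purely conceptual Python, not real GPU code:

```python
# Illustrative only: model "split pools" as a full copy and "unified
# memory" as a zero-copy view of the same buffer.
buf = bytearray(16 * 1024 * 1024)  # 16 MB of "CPU-produced" data

# Split pools: the GPU works on its own copy, so the data must be
# duplicated across the bus every time it changes.
gpu_copy = bytes(buf)

# Unified pool: the GPU reads the very same allocation. No transfer at all.
gpu_view = memoryview(buf)

assert gpu_copy == buf          # same contents, but a second 16 MB allocation
assert gpu_view.obj is buf      # zero-copy: just another handle to buf
```

The saving isn't just the copy time; it is also that the duplicated data doesn't occupy a second chunk of memory, which matters when the CPU and GPU ping-pong over the same structures every frame.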
Honestly, people aren't thinking with their brains on this one. If GDDR5 were ideal for all situations, PCs would already have GDDR5 only. But they don't.
And in the end this decision turned out great for first-, second-, and third-party developers alike.
Best decision from Sony ever for greater looking multiplats.
With how small the performance gain on multiplats is, I'd say it doesn't matter. Most people, even gamers, can't tell the difference in motion. Everyone thought the UFC demo was running at 1080p, only to find out it was 900p once Digital Foundry did their analysis.
@Lifeisgamesok So you are saying that being ignorant negates the power and performance differences between the consoles? Nice to know.
@incendy35 I don't have the diagram in front of me, but what I can say is that being unified means there is probably going to be a one-dimensional way of accessing data (which is not a bad thing). All the calls from the CPU and GPU happen through a single memory.
So the entry point is the issue: the two bridges meet and enter at one point but have to wait on each other for access? That makes sense why it would be good and bad. Good in that they can access the same pointers but bad in that they have to share entry even when they are not updating the same bits.
@incendy35 CPUs transfer data through threads, but when the CPU and GPU access RAM they go through a bus. Most buses nowadays are 128-bit, but the PS4's is 256-bit, so twice the amount of data can be transferred at the same time. There is plenty of access to RAM, and both can access it simultaneously.
@sinspirit quad channel memory?
I honestly read it as being caused by the architectural changes required to unify the whole system. I.e., if they had 8GB of GDDR5 but it was split into a dedicated 3GB for the OS and 5GB for games, it's still the same type of RAM, but it's not having to use multiple types of access patterns on the same pool. That's just what I read into it.
"On the other hand even making a PC exclusive game you can’t target high end configurations, so it doesn’t matter in practice.” Which shoots down every "My PC will smoke your PS4" argument... the developers aren't even targeting your PC. That isn't to say I don't like PC gaming, the mod community is something you'll never find on consoles. I just find some of the PC master race comments to be silly.
"the developers aren't even targeting your PC" And the big reason is that they have no idea what "your PC" is. It can be infinite combinations of CPU, GPU, RAM etc. Developers CAN target all of our PS4s though as they all have the same hardware. That is why when someone says "the PS4 is just a mid range PC", it is false. They can't go to the metal on a PC, they can on a console. They just brute force it on the PC by throwing lots of hardware at it. In the end the PC comes out ahead because they are so powerful, but the consoles aren't as weak as the specs on paper would indicate.
many of them are downright stupid imo
"They can't go to the metal on a PC" They actually can. That's why both OpenGL and DirectX are getting an overhaul. Both AMD and Intel have common instruction sets to improve speed. Granted, some CPUs don't have certain instruction sets; some do. So it comes down to the speed at which data can be sent between hardware. The only advantage consoles have is that they can be tweaked more, since you're working with one specific hardware set. This does not mean that PC games do everything through layers upon layers of high-level APIs.
Nearly every top dev has been asking for unified memory. You can't please everyone.
The ultimate (realistic) system that could have been put out would most likely be something like 5GB DDR3, 3GB GDDR5, then 512MB of eSRAM. You can swing the balance between the GDDR and DDR however you like; you will never please everyone, hence the pressure to go unified. We could then use the main bulk of the GDDR for the graphical work a la PS4, but also have the added advantage of an actually usefully sized chunk of eSRAM to do back buffer work. Sucker Punch were quoted as having a 200MB back buffer in Second Son. Hopefully that helps people appreciate why the Xbox One's 32MB is such a problem.
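To put numbers on why 32MB is tight: a single 1080p 32-bit render target is already about 8MB, so a deferred setup with a handful of targets plus depth overflows the eSRAM on its own. A quick back-of-the-envelope sketch; the "4 colour targets + depth" layout is an illustrative assumption, not Second Son's actual configuration:

```python
# Back-of-the-envelope: 1080p render target sizes vs 32MB of eSRAM.
# The 5-target layout below (4 colour + 1 depth) is an illustrative
# assumption for a typical deferred renderer.

WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4            # 32 bits per pixel, e.g. RGBA8 or D24S8

one_target_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / (1024 * 1024)
gbuffer_mb = one_target_mb * 5

print(f"one 1080p target:  {one_target_mb:.1f} MB")  # ~7.9 MB
print(f"5-target G-buffer: {gbuffer_mb:.1f} MB")     # ~39.6 MB > 32 MB eSRAM
```

Which is why the Xbox One ends up either tiling render targets in and out of eSRAM or dropping resolution so that more of them fit at once.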
Does eSRAM have to be included in the APU? Because by doing that they would have to reduce the size of the GPU, which results in a less capable system.
Realistically, yes it does. And yes, size is an issue. There are ways around it, such as layering the fabrication, but I suspect the tech was just too new and costly back when the specs were being fixed. If MS had known what they know now, they probably would have gone for 4GB DDR3 + 4GB GDDR5 and kept the 18 GPU cores. This would have made the Bone equal to, potentially better than, the PS4. They cut the graphical power to make way for the eSRAM. They needed the eSRAM because they needed to use cheap, cool and quiet DDR3 memory. They needed cool and quiet memory because of TV TV TV. The picture is quite clear when you follow the breadcrumbs.
But the 32MB of eSRAM on the One's APU is huge. That's why they get their 5 million transistors claim, or whatever they said when they introduced the console. It has been said by various tech analysis outlets that they could not include more even if they wanted to. I'm not sure if having eSRAM off-die would be worthwhile, if it is even possible. But I don't know, I'm just parroting info.
It's 5 Billion transistors
This is true. In the case of the way the XBO APU is made, they could not fit any more in there. They could, however, look at other alternatives, such as an additional layer. As I say, though, the tech just isn't mature enough yet for consoles.
Ok. I knew it was more than 200, but I was guessing from there.
I wonder why they couldn't have just made the die a bit bigger in the Bone? I guess cost, as then you would get fewer chips per wafer, plus more defective chips maybe? It makes you wonder: if they hadn't insisted on Kinect, maybe that extra $100 could have gone into a bigger/better SoC.
512MB of eSRAM, lol, and then the whole die would be used for eSRAM.
"512MB of eSRAM" Wut.
There's always a trade off. But it doesn't seem like this decision causes too much of a bottleneck
A well-designed console that'll bring great experiences for a long time to come.
I made sure that the PS4 will have the power to change the world.
He also spoke about the unified memory architecture of the PS4, and although it has generally been praised, Krzysztof believes that separate memories would actually have improved the PS4's performance, since they would allow for different memory access patterns. As the name suggests, memory access patterns are the ways that developers access memory, whether it is a recently accessed portion or memory that follows a more predictable pattern. In short, memory access patterns are among the most important things to consider in games development.

"Unified memory architecture simplifies development, but personally I would prefer two separate memories. It would allow to get a better performance as CPU and GPU have different memory access patterns, which require different memory types," states Krzysztof.

Interesting read. So unified simplifies development, as has been the case with the PS4 at launch. But having two separate memories is what he prefers: the Xbox One's setup. I guess as devs gain more knowledge of how to use both memory types on the XB1, it might not be that bad after all.
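For anyone wondering what "memory access pattern" actually means here: a GPU mostly streams through memory in order, so bandwidth is what matters, while a CPU mostly hops around following indices and pointers, so latency is what matters. A rough Python illustration of the two shapes of code; interpreter overhead swamps real cache effects, so take it as a conceptual sketch only:

```python
import random

# Two shapes of memory access over the same data:
#   stream(): GPU-style - walk memory in order (bandwidth-bound)
#   chase():  CPU-style - hop around via an index table (latency-bound)
# Same total work, very different access pattern.

N = 1_000_000
data = list(range(N))
shuffled_idx = list(range(N))
random.shuffle(shuffled_idx)

def stream():
    # Sequential, predictable: prefetchers and wide buses love this.
    return sum(data)

def chase():
    # Scattered, dependent lookups: each access waits on memory latency.
    return sum(data[i] for i in shuffled_idx)

assert stream() == chase()  # identical result, different pattern
```

GDDR5 is tuned for the first shape and pays for it with higher latency, which is the trade-off the developer is alluding to when he says the CPU and GPU "require different memory types".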
"But having two separate memories he prefers. The x box 1." That spin, though. Edit: He never mentioned the Xbox One, so you assuming the Xbox One is preferred because it has two pools of memory is spin.
It's not spin; that's what the Xbox One has. He's only one dev, and it doesn't mean every dev prefers it this way. Don't be so defensive; this is what this dev said, and he gave his reasons why he believes it.

Resolution is done by the GPU; it's got nothing to do with memory. Devs have full access to 100 percent of the GPU on the PS4; on the XB1 they didn't till now, it was 90 percent. A lot of games have ended up being 900p on PS4 and 792p on XB1. Will ten percent more GPU power be enough to match the PS4's performance? Let's wait and see.
@KNWS "Will ten percent more GPU power been enough to match the PS4 performance. Lets wait and and see." Seriously? lol...no.
@KNWS Well no, but nice try though. It seems logical: two memories preferred, the X1 has two. But there is one *small* problem with it on the Xbone: the eSRAM is just too small to do that much. It is only 32MB, which is like a quarter of the RAM in a 3DS. Sure, you could load things in and do the processing there; the problem is that you need to keep shifting data in and out of that small bit of fast RAM. With all of those read and write cycles you lose the advantage of it. Now if it were like 512MB, THEN it would have been a factor.
"Lot of games have ended up being 900p on PS4 and 792p on xb1." Watch Dogs. Yep, *lots*... Still spreading BS, I see.
@KNWS Metal Gear Solid V: Ground Zeroes is 720p on Xbox One and 1080p on PS4. It's going to be the same for The Phantom Pain as well. Big difference; we're talking over 1 million pixels from 720p to 1080p. Your comments lately are all about defending Microsoft and hating on Sony in every article, spinning facts around. Pretty desperate, I must say.
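The "over 1 million pixels" figure is easy to verify; it's plain arithmetic on the two resolutions:

```python
# 1080p vs 720p pixel counts - the gap the comment above refers to.
full_hd = 1920 * 1080   # 2,073,600 pixels
hd_720 = 1280 * 720     # 921,600 pixels

print(full_hd - hd_720)   # 1152000 extra pixels
print(full_hd / hd_720)   # 2.25x: 1080p pushes 2.25 times the pixels of 720p
```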
"But having two separate memories he prefers. The x box 1." Ummm... so he prefers the Xone so much that he couldn't get the game to 1080p on the Xone :/
Where in the article does it state that he prefers Xbox One?
God damn dude, you are truly stupid
Unified memory is the future, this developer is basically stuck in his ways and doesn't want change.
Unified memory may be the future when it comes to APUs, but until they are capable of putting out the same graphics as a high-end dedicated graphics card, you won't be seeing unified memory on consumer PCs anytime soon.
Lots of PCs have unified memory - where have you been? I would say that having a separate GPU with dedicated memory is the exception nowadays. Notebooks are PCs too! Most office PCs use unified memory, as do entry-level ones.
So all those developers lied to Sony when they were asked about this very topic? Interesting.
Some people will say literally anything.
He is just saying so because they've been PC developers for a very long time, and unified memory is not very popular with PC developers. Most PC rigs are split GDDR5 and DDR3 for the GPU and CPU respectively. In due time he'll get to love the unique unified memory in the PS4... Let me remind you all that Shadow Warrior runs at 900p on Xbox One and 1080p on PS4, with both versions locked at 60fps.
Yep, split is better. The overall system runs better with DDR3, and very soon DDR4 is taking its place; it's already been 7 years since DDR3 came out! We probably won't see GPUs with GDDR6 until 1-2 years from now. The AMD R9 390 is looking to be a complete beast when it comes out, but it's still gonna use GDDR5. Hell, the R9 290 is a nice card; I'm just glad we have AMD to give sensible pricing for high-end GPUs, unlike Nvidia. And now that AMD have their hardware in the consoles, it will help fund future PC GPUs = keeps me and many others happy. AMD may be way behind Intel on the processor side of things, but I'm so glad they compete on the high-end GPU side.
"Most PC rigs are split GDDR5 and DDR3 for GPU and CPU respectively." That's because it's a PC and it can run multiple programs at once, which DDR3 is perfect for. The PS4 is indeed a well-designed architecture for gaming; the developers making suggestions for the PS4 knew what they wanted.
GDDR5 was meant for gaming, as this is what is used in graphics cards. DDR3 is for more general-purpose CPU tasks. As far as eSRAM is concerned, putting in 512MB or even 128MB of eSRAM would have been far too expensive and taken up way too much space. If you look at the die for the APU in the Xbox, or any CPU for that matter, you will see that most of the space is occupied by memory, be it eSRAM, L1, L2 caches, etc. These types of memory are very fast but very expensive to produce. It's just not cost-effective to throw memory into everything. Even if they decided to increase the die size to accommodate more eSRAM, this would in turn increase power consumption, increase heat, increase production failure rates, and decrease CPU lifetime.
GDDR5 was the best decision that they ever made, just like when they had to sell two of their million-dollar buildings and lay off thousands of workers because they couldn't make a profit for 7 years straight. Go Sony.
I think the PS4 was very well built. It's the best GAME machine out there, and I would even have paid $100 more if they had put in a stronger GPU, something that could reach like 2.5 or 3TF, but I know they needed to build a machine that all gamers can afford. This way millions and millions are buying the console, it's number one all over the world, and I know the wizards of Sony's studios will do amazing things, because look what they did with that monster, «THE CELL»...
Remember that the PS4 uses ONE integrated chip. Splitting memory between CPU and GPU would require two separate memory buses from that one chip; it is a better use of pins to make the single memory path wider. When I first saw the architecture I thought that the CPUs would be memory-bandwidth starved, but compared with actual PCs they are about the same. Normal access patterns are probably not a perfect match for the PS4, but game developers can afford to move all mass calculations to GPU compute: physics, and even AI. Left on the CPU: story and higher AI (what MS suggests moving to the cloud), controller interfacing, and communication stacks. See this presentation written two years ago... http://dice.se/wp-content/u...
N4G is a community of gamers posting and discussing the latest game news. It’s part of NewsBoiler, a network of social news sites covering today’s pop culture.