I know this is an Xbox forum, but I think talking about graphics tech should be interesting to everyone.
The PS3's Cell BE is well known, but I wonder what could have been achieved if Sony had stuck with it and revised it to better suit graphics.
The SPUs were extremely powerful, yet a total pain in the arse to code for. They required a lot of code to perform even simple operations.
On top of that, the PPE was an in-order chip.
So what if Sony had kept the Cell idea, but changed the PPE to an out-of-order design and made changes to the SPUs to make them far more efficient?
With die shrinks they could have moved to three PPEs and increased the number of SPUs.
As a long-time console gamer, I miss the exotic tech that was used in consoles. Even as a 360 player in that gen, I have to admit that what the Cell was capable of when coded well was amazing. It blew past the 360's tri-core CPU and even made up for the advantage the 360's GPU had over the PS3's.
In other words, those six little SPUs had more compute than the extra CPU cores in the 360 plus the 360's GPU advantage combined.
That's impressive.
I wonder what could have been done with Cell if Sony continued to refine and improve it.
Yes, if Sony, Toshiba and IBM had kept at it, the Cell could have become a powerhouse to rival x86. For programming, though, it was akin to asking engineers to hand-sew a dress instead of build a bridge. Nothing was fit for purpose, and optimisation was a very big undertaking for PS3 game production.
The other issue with Sony's work on Cell was pushing a new programming platform out there with next to no tools or documentation. I met a Sony ICE team member when they were in my home town working on a PS3 port of a big third-party title. He was critical of how bad things were at the start: he and many others went from working in a single studio offering support and guidance to being flown around the world, studio to studio, every few months to help evaluate titles and plan out how to make them work on PS3.
For BC work on PS5, where there is a will, there is a way but I personally do not see Sony investing the people/engineering hours to make it a reality. I will be happy to be proven wrong.
Just imagine if Sony had stuck with the initial plan of having two Cell processors instead of a GPU.
I wonder if it would have had more potential than the PS3 ended up with.
I am always amazed at just how powerful the SPEs were when utilised.
One of them was used for audio, one for physics, leaving only four to help the GPU and do CPU work.
But what if, instead of spending the transistor budget on a bigger Cell with more PPE or SPU cores, they had used it for a beefier GPU with a modern feature set? Then nobody would have had to use a shitty console CPU for normal graphics tasks. What a novel idea.
Keeping the Cell would have killed the PlayStation as we know it. They may have found a Nintendo-like niche with first party, but smaller third parties would likely have dropped it.
"Exotic tech" was/is basically used as a means to handcuff developers to a platform (exclusives!). Invest time and resources into a complicated platform and you'll have no way of co-developing or porting to another platform. This worked when development projects were smaller and cost less, but as budgets got larger and gamers less tolerant of poor performance, developers couldn't afford to keep fighting the hardware and the development tools.
It honestly would have doomed the PS3, with developers skipping it because of how different it would have been. The best thing Sony did was to see sense and stick a traditional GPU in the console, even if it was less capable than the 360's.
It wasn't as good as people think. Yes, it could do simple calculations fast, but even in some basic graphical calculations it came up lacking compared to the new unified shaders. Basically, those things in the 360 are very similar to what Sony aimed to do with the Cell BE, but approached from a graphical perspective. They are why we now have compute on our GPUs, and AMD/ATI had functionality in there to help you do compute on these more generic shaders.
Anything the Cell could do, the Xenos could do too, and do better (more compute power by far). The Cell BE was hyped, and yes, for a server/supercomputer it made sense, in a time before GPU compute. But since GPU compute arrived at about the same time, the whole thing was dated when it came out. The industry knew where it was going, and it wasn't towards the Cell's complex design.
People don't give the 360 credit for what it was: a system MADE for running games. Yes, comparing raw numbers it looks far weaker, but keep in mind that Sony always quotes peak numbers while MS quotes more sustained numbers. This goes back as far as the 360 and further. MS knew what it wanted, knew how to get it, and had a laser focus on getting it. That is why the 360 was just a better designed, more adaptable system.
Not saying Sony makes bad hardware, far from it. But the Cell BE was a mistake, at least for a gaming machine. As a server CPU it had merit, but only while it lasted. Multi-core CPUs came onto the market very quickly after the PS3 was introduced, and GPUs (as mentioned) were becoming more and more accepted as the go-to for parallel floating-point calculations.
Yes, the Cell will cream the Xenon in raw floating-point calculations, but the Xenon still wins in real-world situations because it simply has more general-purpose flexibility. That isn't just an "easier to program" thing; there is more to computing than multiplying one FP16 value with another. And that kind of work the Xenon could offload to the Xenos when needed.
So no, it wasn't just a pain in the arse to code for; it was the wrong chip for a game system. It was Krazy Kooky Ken Kutaragi at his finest, being a stubborn arse and refusing to see that the future lay in GPU generalisation, not in CPUs helping out with graphical tasks.
I am not in any way technically minded, but I can appreciate the gist of the more technical aspects of gaming, and everything I have read over the years agrees with your post: the Cell BE was a fork in the road that ultimately led nowhere, because the things it was good at would be taken over by GPU compute.
In the end the Cell was a fascinating piece of silicon in terms of what it could do, but it was a dead end for the silicon industry and wasted R&D for Sony. Mark Cerny was definitely right when he ditched the idea of a super Cell and went with x86 for the PS4.
Yeah, in that regard Cerny was absolutely right to push for a more tried-and-tested component. The Cell BE was basically dead about four years after its introduction. Multi-core CPUs and GPGPUs had not only caught up but surpassed it to such a degree that you couldn't even begin to compete with a Cell-like design. CUDA (released in 2007) was mature by then and OpenCL was out. It had not only been caught up with, it had been passed!
I'm pretty sure a super Cell was never considered for PS4. IBM could not make one; they stopped all Cell development early in 2009. And the PS4 would have needed a radically different Cell design with unified memory and an integrated GPU, which was never on IBM's roadmap.
It would have been a disaster in complexity and transistor count. GPGPUs can do anything the Cell BE excelled at, and do it better. IBM knew this, especially by 2009. Sony also knew it, and I guess that is why Krazy Kooky Ken Kutaragi was told to "take a long unpaid vacation, and please don't call us, we'll call you".
IBM also knew this during the design phase, as they also designed the Xenon, and it is an open secret that research done on the Cell BE's technologies (cache, ALUs, etc.) benefited the Xenon as well. Especially concerning the GPU, something IBM does not develop in house, they knew the writing was on the wall; they knew what MS and AMD/ATI were developing at the time.
Again, for a server/supercomputer it might have had merit, especially since CUDA didn't exist yet during the Cell BE's research stage. But the moment they saw the direction (GP)GPUs were going in? You can bet your arse all the engineers went "Yeah… we're done."
Donât get me wrong, the 360 was by far the smarter design for gaming, and was undersold by MS at its launch. Some of the software shown by MS when they announced the 360 was pathetic, while Sony as usual was showing CGI as PlayStation games.
The 360's CPU was not as powerful in real life as the specs suggested. A multithreaded tri-core CPU clocked at 3.2 GHz sounds like a beast; in reality it was maxed out pretty early in the 360's lifespan.
The SPEs on the Cell were extremely powerful. They were a horrible thing to code for, and what would take 60 lines of code on the 360 could require some 1200 lines on the Cell for the same operation. But if you were able to put your full resources into the PS3, it was capable of things the 360 wasn't.
Have a look at the particle effects such as explosions and smoke on Killzone 2. Even today, particle effects hit performance hard. The 360 would have a real hard time replicating those same effects, let alone the animations as well.
These effects were done on the SPEs, not on the GPU.
And then look at TLOU and UC2 on PS3, and you can see what was possible.
My point is that if the Cell had been overhauled and refined so that the SPEs didn't require as much work to get firing, and the PPE had been made faster and out-of-order, I wonder what it could have been capable of.
But those three or four games on PS3 that were amazing don't make up for the hundreds of multiplat games that sucked hard on PS3.
Please don't just look at it from a "lines of code" perspective. For the time, the 360's CPU was actually very powerful, and it let devs get better results as the tools matured. No, it wasn't maxed out; "maxing out" a CPU is something you hardly ever do. In a development cycle it usually comes down to cost versus benefit: how much more time do I need to further improve performance, and if I have already hit the target, why should I?
The SPEs were also not extremely powerful; they were good at a limited set of tasks (as I mentioned), but those tasks are done on GPUs now. Even the 360's GPU could do similar work to an SPE on the Cell. And yes, this was used, not just in 360 development but even more on the PC side of things.
Not long after unified shaders came onto the market, Nvidia released CUDA. Before that, Cg was the main language, and it had its limits, but with the arrival of more dedicated APIs, using the GPU for these tasks became routine. Even in Cg you can do many of the things the Cell can do. Microsoft already had HLSL (High Level Shader Language) as part of its dev kits, which was also improved over time.
The Xenos also had features to push and pull data directly from system memory. In case you wonder what that gives you: pull vector data directly from memory set up by the CPU, do whatever processing you want on it, and write it back in place. That is basically the very same thing you'd do on an SPE; really, anything you'd do on an SPE you could do on the Xenos. You are focusing too narrowly on the CPU side of the equation here.
Also, by using the extra vector instructions available on the CPU (VMX128) you could optimise further, but very often it became a "why would you?". There is a lot more to game development than just power; cost and time are a major part of the equation.
I don't really get why you only look at it from the CPU perspective. For the Xbox 360, the architecture as a whole was why it shone. Okay, there were some mistakes (the eDRAM was too small for deferred rendering, but deferred rendering wasn't a thing when it was designed), whereas for the PlayStation 3, Sony HAD to bring in Nvidia because the Cell by itself simply couldn't cut it (as they had originally thought it would).
What you saw with the 360 was a glimpse into the future of GPUs. It was WAY ahead of its time; some of its features didn't make it to PC until the R600 series.
Now, about the explosions and particle effects: anything you could do on a PS3, you could do on a 360, if both were exclusive showcase titles (where you don't mind spending a lot of money and time). Microsoft simply "gave up" on game development after a while. They really did, whereas Sony doubled down. Microsoft tried getting more into third party, whereas Sony at the time was heavily invested in the game devs they had. And it shows.
I think you underestimate what the 360 actually was and what its GPU could do. No, those effects couldn't be done on the PS3's GPU, as it was an older design with fixed pixel and vertex shaders, and yes, that is where the SPEs did their work. But on the 360, the GPU COULD do those things.
Please stop looking at these two systems as just their respective main processors. There is so much more to either system, and architecture is important. VERY important. In a way, modern SoCs are exactly what they tried to do with the Cell BE, but better.
(More info on memexport).
PS: Don't just look at numbers like "3.2 GHz"; there is a lot more to it than a high number. One of the Xenon's weaknesses was having no branch prediction (one of the reasons it wasn't just "maxed out pretty early"), whereas the original Xbox's CPU had it.
Sorry, I am at work and would love to go into this deeper. But no… the Cell BE was NOT a good design the moment modern GPU design paradigms hit the market. And that happened with a console that launched a year earlier than the PS3.