PS3 Cell BE, what if?

I know this is an Xbox forum, but I think talking about graphics tech should be interesting to everyone.

The PS3 Cell BE is well known, but I wonder what could have been achieved if Sony had stuck with it and revised it to better suit graphics.

The SPUs were extremely powerful, yet were a total pain in the arse to code for. They required a lot of code to get even simple work done.

On top of that, the PPE was an in-order chip.

So what if Sony had kept the Cell idea, but changed the PPE to an out-of-order one, and made changes to the SPUs to make them far more efficient?

With die shrinks they could have moved to three PPEs and increased the number of SPUs.

As a long-time console gamer, I miss the exotic tech that used to go into consoles. Even as a 360 player that gen, I have to admit that what the Cell was capable of when coded well was amazing. It blew past the 360's tri-core CPU and even made up for the advantage the 360's GPU had over the PS3's GPU. In other words, those six little SPUs had more compute than the 360's extra CPU cores and its GPU advantage combined. That's impressive.

I wonder what could have been done with Cell if Sony continued to refine and improve it.

Yes, if Sony, Toshiba and IBM had kept at it, the Cell could have become a powerhouse to rival x86. However, programming it was akin to asking engineers to hand-sew a dress instead of build a bridge: nothing was fit for purpose, and optimisation was a very big undertaking in PS3 game production.

The other issue that came out of Sony's work on Cell was pushing a new programming platform out there with next to no tools or documentation. I met a Sony ICE team member when they were in my home town working on a PS3 port of a big third-party title. He was critical of how bad things were at the start, and said that he and many others went from offering support and guidance within a single studio to being flown around the world, studio to studio, every few months to evaluate titles and help plan how to make them work on PS3.

For BC work on PS5, where there is a will, there is a way but I personally do not see Sony investing the people/engineering hours to make it a reality. I will be happy to be proven wrong.


Just imagine if Sony had stuck with the initial plan of having two Cell processors instead of a GPU.

I wonder if it would have had more potential than what the PS3 ended up with. I am always amazed at just how powerful the SPEs were when utilised: one was typically used for audio, one for physics, leaving only four to help the GPU and do general CPU work.

But what if, instead of a bigger Cell with more PPE or SPU cores, they had spent that transistor budget on a beefier GPU with a modern feature set? Then nobody would have had to use a shitty console CPU for normal graphics tasks. What a novel idea.

This would have killed the PS3.

Keeping the Cell would have killed the PlayStation as we know it. They might have found a Nintendo-like niche with first party, but smaller third parties would likely have dropped it.

"Exotic tech" was/is basically a means to handcuff developers to a platform (exclusives!). Invest time and resources into a complicated platform and you'll have no way of co-developing or porting to another platform. This worked when development projects were smaller and cost less, but as budgets grew and gamers became less tolerant of poor performance, developers couldn't afford to keep fighting the hardware and the tools.


It honestly would have doomed the PS3, with developers skipping it because of how different it would have been. The best thing Sony did was see sense and stick a traditional GPU in the console, even if it was less capable than the 360's.

It wasn't as good as people think. Yes, it could do simple calculations fast, but even in some basic graphics calculations it came up lacking compared to the new "unified shaders". The unified shaders in the 360 were aimed at very similar work to what the Cell BE was meant to do, but from a graphics perspective. They are part of the reason we now have compute on our GPUs, and AMD/ATI included functionality that helped you do compute on these more "generic" shaders.

Anything the Cell could do, the Xenos could do too, and do it better (more compute power by far). The Cell BE was hyped, and yes, for a server or supercomputer it made sense, in a time before GPU compute. But since GPU compute arrived at about the same time, the whole thing was dated when it came out. The industry knew where it was going, and it wasn't toward the Cell's complex design.

People don't give the 360 credit for what it was: a system MADE for running games. Yes, comparing raw numbers it looks far weaker, but keep in mind that Sony always quotes peak numbers while MS quotes more sustained numbers. This goes back to the 360 and further. MS knew what it wanted, knew how to get it, and had a laser focus on getting it. That is why the 360 was the better designed, more adaptable system.

Not saying Sony makes bad hardware, far from it. But the Cell BE was a mistake, at least for a gaming machine. As a server CPU it had merit, but only for a while: multi-core CPUs came onto the market very quickly after the PS3 was introduced, and GPUs (as mentioned) became more and more accepted as the go-to for parallel floating-point calculations.

Yes, the Cell will cream the Xenon in raw floating-point calculations, but the Xenon still wins in real-world situations because it simply has more general-purpose flexibility. That isn't just an "easier to program" thing; there is more to computing than multiplying one FP16 value by another. And anything beyond that, the Xenon could offload to the Xenos if needed.

So no, it wasn't just a pain in the arse to code for, it was the wrong chip for a game system. It was Krazy Kooky Ken Kutaragi at his finest, being a stubborn arse, refusing to see that the future was GPU generalisation, not CPUs doing graphics tasks.

I am not in any way technically minded, but I can appreciate the gist of the more technical aspects of gaming, and everything I have read over the years concurs with your post: the Cell BE was a fork in the road that ultimately led nowhere, because the things it was good at would be taken over by GPU compute.

In the end the Cell was a fascinating piece of silicon, but it was a dead end for the industry and wasted R&D for Sony. Mark Cerny was definitely right when he ditched the idea of a super Cell and went with x86 for the PS4.

Yeah, on that point Cerny was absolutely right to push for a more tried-and-tested component. The Cell BE was basically dead about four years after its introduction. Multi-core CPUs and GPGPUs had not only caught up but surpassed it so thoroughly that you couldn't even begin to compete with a Cell-style design. CUDA was mature by then (2007 release) and OpenCL had been released. It hadn't just been caught up with, it had been passed!

I'm pretty sure a super Cell was never considered for PS4. IBM could not make one; they stopped all development early in 2009. And the PS4 would have needed a radically different Cell design with unified memory and an integrated GPU, which was never on IBM's roadmap.

It would have been a disaster in complexity and transistor count. GPGPUs could do everything the Cell BE excelled at, and do it better. IBM knew this, especially by 2009. Sony also knew it, and I guess that is why Krazy Kooky Ken Kutaragi was told to "take a long unpaid vacation, and please don't call us, we'll call you".

IBM also knew this during the design phase, as they also designed the Xenon, and it is an open secret that research done on the Cell BE's technologies (like the cache and ALUs) benefited the Xenon as well. Especially concerning the GPU, something IBM does not develop in house, they knew the writing was on the wall. They knew by then what MS and AMD/ATI were developing.

Again, for a server or supercomputer it might have had merit, especially since CUDA didn't exist yet during the Cell BE's research stage. But the moment they saw the direction (GP)GPUs were going in? You can bet your arse that all the engineers went "Yeah… we're done."


Don't get me wrong, the 360 was by far the smarter design for gaming, and was undersold by MS at its launch. Some of the software shown by MS when they announced the 360 was pathetic, while Sony, as usual, was showing CGI as PlayStation games. The 360's CPU was not as powerful in real life as the specs suggested: a multithreaded tri-core CPU clocked at 3.2 GHz sounds like a beast, but in reality it was maxed out pretty early in the 360's lifespan.

The SPEs on the Cell were extremely powerful, but they were horrible to code for: what would take 60 lines of code on the 360 could require some 1,200 lines on the Cell for the same task. But if you could put your full resources into the PS3, it was capable of things the 360 wasn't.

Have a look at the particle effects, such as the explosions and smoke in Killzone 2. Even today, particle effects hit performance hard. The 360 would have had a real hard time replicating those effects, let alone the animations as well, and they were done on the SPEs, not on the GPU. Then look at TLOU and UC2 on PS3 and you can see what was possible.

My point is that if the Cell had been overhauled and refined so that the SPEs didn't require as much work to get firing, and the PPE had been made faster and out-of-order, I wonder what it could have been capable of.
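To give a feel for why SPE code ballooned like that: the SPEs had no direct view of main memory, only a 256 KB local store you had to stream data through by hand. Below is a rough plain-C sketch of that double-buffered streaming pattern; real SPU code would use MFC DMA intrinsics (mfc_get/mfc_put) and SIMD math, and the names and sizes here are illustrative, not from any actual PS3 codebase.

```c
#include <string.h>

/* Sketch of the SPE "software-managed cache" pattern: the small local
 * store forces you to stream data in and out explicitly. On real
 * hardware the memcpy calls below would be asynchronous MFC DMA
 * transfers; this plain-C version just shows the double-buffered
 * structure. All names are illustrative. */

#define CHUNK 64  /* elements per local-store buffer (hypothetical size) */

static void process_chunk(float *buf, int n) {
    for (int i = 0; i < n; i++)
        buf[i] *= 2.0f;               /* stand-in for real SPU math */
}

void stream_process(float *main_mem, int total) {
    float local[2][CHUNK];            /* two local-store buffers */
    int cur = 0;

    for (int base = 0; base < total; base += CHUNK) {
        int n = (total - base < CHUNK) ? (total - base) : CHUNK;
        /* "DMA in" a chunk; on real hardware this overlaps with
         * processing the previous buffer. */
        memcpy(local[cur], main_mem + base, n * sizeof(float));
        process_chunk(local[cur], n);
        /* "DMA out" the results back to main memory. */
        memcpy(main_mem + base, local[cur], n * sizeof(float));
        cur ^= 1;                     /* swap buffers */
    }
}
```

All the buffer management, alignment and synchronisation that real DMA adds on top of this is exactly where those extra lines of code went.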

But those three or four games on PS3 that were amazing don’t make up for the hundreds of multiplat games that sucked hard on PS3.

Please don't just look at it from a "lines of code" perspective. For the time, the 360's CPU was actually very powerful, and it allowed devs to get better with it as the tools matured. No, it wasn't maxed out; "maxing out" a CPU is something you hardly ever do in practice. In a development cycle it's usually cost versus benefit: how much more time do I need to improve performance further, and if I've already hit the target, why should I?

The SPEs were also not extremely powerful in general; they were good at a limited set of tasks (like I mentioned), but those tasks are done on GPUs now. Even the 360's GPU had the ability to do similar work to an SPE on the Cell. And yes, this was used, not just in 360 development but even more in the PC sphere.

Not long after unified shaders came onto the market, Nvidia released CUDA. Before that, Cg was the main language and it had its limits, but with the arrival of more dedicated APIs, using the GPU for these tasks became routine. And even in Cg you could do many of the things the Cell could do. Microsoft already had HLSL (High Level Shader Language) as part of its dev kits, and that was improved upon too.

It also had features to push and pull data directly from system memory. In case you wonder what that's for: pull vector data directly from memory locations set up by the CPU, do whatever work you want on that data, and return it to its location. That is basically the very same thing you'd do on an SPE; really, anything you'd do on an SPE you could do on the Xenos. You are staring too blindly at the CPU part of the equation here.
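As a rough illustration of that pull-process-return pattern (the kind of access that memexport, and later GPU compute, enabled), here is a hedged plain-C sketch. The function and type names are made up for illustration; this is not actual Xenos shader code.

```c
/* Gather-process-scatter: indices set up by the CPU select scattered
 * vectors in memory; each is transformed and written back in place.
 * On the GPU side every iteration would be an independent shader
 * invocation. Names are illustrative. */

typedef struct { float x, y, z; } vec3;

void gather_scale_scatter(vec3 *mem, const int *indices, int count, float s) {
    for (int i = 0; i < count; i++) {
        vec3 v = mem[indices[i]];     /* gather from scattered addresses */
        v.x *= s; v.y *= s; v.z *= s; /* per-element work (the "kernel") */
        mem[indices[i]] = v;          /* scatter the result back */
    }
}
```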

Also, by using the extra vector instructions available on the CPU (VMX128) you could optimise further still, but very often it became a "why would you?". There is a lot more to game development than just "power"; cost and time are a major part of the equation.
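For a sense of what VMX128 bought you: it is 128-bit SIMD, so one instruction operates on four floats at once. This portable C sketch contrasts a scalar multiply-add loop with a manually four-wide version to show the lane structure; real code would use VMX/AltiVec intrinsics such as vec_madd, and the function names here are illustrative.

```c
/* Scalar vs 4-wide multiply-add, illustrating what a 128-bit VMX128
 * register buys you: four float lanes per instruction. This portable
 * sketch just unrolls by four to show the lane structure; real code
 * would use vector intrinsics. */

void madd_scalar(float *out, const float *a, const float *b, float c, int n) {
    for (int i = 0; i < n; i++)
        out[i] = a[i] * b[i] + c;
}

void madd_4wide(float *out, const float *a, const float *b, float c, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {      /* one "vector op" = 4 lanes */
        out[i+0] = a[i+0] * b[i+0] + c;
        out[i+1] = a[i+1] * b[i+1] + c;
        out[i+2] = a[i+2] * b[i+2] + c;
        out[i+3] = a[i+3] * b[i+3] + c;
    }
    for (; i < n; i++)                /* scalar tail */
        out[i] = a[i] * b[i] + c;
}
```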

I don't really get why you only look at it from the CPU perspective, because for the Xbox 360 the architecture itself was why it shone. Okay, there were some mistakes (the eDRAM was too small for deferred rendering, but deferred rendering wasn't a thing when it was designed). Whereas for the PlayStation 3, they HAD to bring in Nvidia because the Cell by itself simply couldn't cut it (as they had originally thought it would).

What you saw with the 360 was a glimpse into the future of GPUs. It was WAY ahead of its time; some of the features it had didn't make it to PC until the R600 series.

Now to reply about the explosions and particle effects: anything you could do on a PS3, you could do on a 360, if both were exclusive showcase titles (where you don't mind spending a lot of money and time). Microsoft simply "gave up" on game development after a while. They really did, whereas Sony doubled down: Microsoft tried getting more into third party, while Sony at the time was heavily invested in the game devs they had. And it shows.

I think you underestimate what the 360 actually was and what its GPU could do. No, those effects can't be done on the PS3's GPU, as it followed an older design with fixed pixel and vertex shaders, and yes, this is where the SPEs did their work. But on the 360, the GPU COULD do those things.

Please stop looking at these two systems as just their respective main processors. There is so much more to either system, and architecture is important. VERY important. In a way, modern SoCs are exactly what they tried to do with the Cell BE, but better.

(More info on memexport).

PS: Don't just look at numbers like "3.2 GHz"; there is a lot more to it than a high number. One of the weaknesses the Xenon had was no branch prediction (one of the reasons it wasn't just "maxed out pretty early"), whereas the original Xbox's CPU had it.
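To illustrate why missing branch prediction mattered: on an in-order core, every data-dependent branch risks a pipeline stall, so hot code was often written branchlessly. A hedged C sketch follows (illustrative names; on PowerPC a compiler can turn the selects below into the branch-free fsel instruction).

```c
/* Two ways to compute the same clamp. The first takes a conditional
 * jump on every out-of-range value, which is costly on an in-order
 * core without a branch predictor. The second uses selects, which a
 * PowerPC compiler can lower to fsel, with no jumps at all.
 * Illustrative sketch only. */

float clamp_branchy(float v, float lo, float hi) {
    if (v < lo) return lo;            /* conditional jump */
    if (v > hi) return hi;            /* conditional jump */
    return v;
}

float clamp_branchless(float v, float lo, float hi) {
    float t = (v < lo) ? lo : v;      /* select, lowerable to fsel */
    return (t > hi) ? hi : t;         /* select, lowerable to fsel */
}
```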

Sorry, I am at work and would love to go into this deeper. But no, the Cell BE was NOT a good design the moment modern GPU design paradigms hit the market. And that happened with a console that launched a year earlier than the PS3.
