..no, it's not that CISC is the problem, or that just having a RISC core of some sort is going to solve everything. ARM chips are RISC designs, and they still have the same problems as a PC bridge when paired with external modules (newer versions such as the Tegra-based chipsets solve this, though; looking forward to getting one, to see what can be done with it..).
And CISC comes from the usefulness of having a general instruction set, so that the cores still execute what appears to the abstraction layer as atomic instructions. CISC is basically a way of collapsing sequences of atomic assembly operations into single, common instructions. If moving two registers and rotating by a specific operand always resolves the same way, for example, the sequence can be reduced to execute as one instruction instead of three. The program layer sees no difference, except that things go faster.
Problem is that you can't extend this endlessly if you want to keep a general instruction set. It's also not consistent, mathematically speaking.
RISC takes the opposite approach: a small set of simple, fixed-length instructions that execute fast, with the compiler left to compose them into higher-level operations. But it's not a solution to every problem either. If the instruction set gets trimmed or specialised too far toward one workload, it affects the flexibility of the high level abstraction.
So does any of this really concern developers of the actual programs?.. My opinion: not at all. You get hybrids like the PPC that support both 64- and 32-bit lands, and they're essentially used in the same way. Or at least you need to be extremely dedicated before you start writing code that specifically only compiles in 64-bit land. It's different at the hardware layer, though, which is why I don't think Sony or MS were extremely concerned with the licensing costs; IBM would probably have needed to use longer instructions to guarantee the response from the elements on the bus.
That's really where RISC vs. CISC gets interesting at all, and why the Cell is so interesting: it's arguably the first programmable processor of this kind with execution fast enough to be used in multimedia contexts. Similar designs are typically based on parallel execution with much longer time-slices, so they're useless for graphics and so on. (Take a look at graphics cards and you'll find the same principle as in the Cell, except with less complex instruction sets and less flexible processor elements.)
Meaning that for current processors we've hit the clock limit, see.. around 5 GHz. The only way to push past that was CISC-style instruction combining, which has been going on since the first Pentium (seriously, remember the bashing over that as well? Jesus Christ, it was at least as bad as the Cell bashing). RISC is another way to go, but it breaks compatibility and leaves you with specialised instruction sets that eventually affect the SDK anyway.
So the solution is EPIC.. Explicitly Parallel Instruction Computing, and a complete merge of graphics and CPU tasks. But... that's not going to happen until Intel stops suing people who infringe on their "cpu" patent.. I mean, seriously, that happened. They sued Nvidia over their Tegra prototype, apparently.