
Thread: interesting Mark Cerny interview with the Japanese press about PS4

  1. #1
    Senior Member Space Cat's Avatar

    Default interesting Mark Cerny interview with the Japanese press about PS4

    Part 1:

    Focusing on the “positive aspects” and Moving to the x86 Architecture
    Cerny states that he started thinking about a “next generation console” in Fall 2007. SCE would likely have gone into basic R&D on next-gen technologies soon after PS3’s release, and this falls in line with when Cerny started investigating.

    Cerny: I had started discussions regarding the next generation following PS3 in 2007. At that time, I was investigating what should be done for next-generation [technologies]. It was then that I wondered if we couldn’t use the x86 architecture for the next generation. I spent the entirety of Thanksgiving weekend looking into this (lol). For Americans, this holiday is extremely important. But that’s how I sacrificed (lol) the holiday to think about the future and what possibilities this might bring for our organization.

    After that, I went to Phil Harrison since he was at the top of the game development division. I was also introduced to Masayuki Chatani who was SCE’s CTO at that time and was directing the next-gen project. What was surprising was that he said “yes” to me being involved with the next generation console.

    Moving to the x86 architecture also means losing backwards compatibility with PS3. The basis of Cerny’s vision was the use of this x86 architecture. This is a huge tradeoff, but SCE accepted this vision.

    Cerny: We struggled with this point. As a matter of fact, this was the major point I thought about over Thanksgiving. What to do with the current CPU and x86…

    We decided to focus on the “positive aspects” arising from switching to x86. x86 has instruction sets which are of significant importance for games: multimedia instruction sets, specifically SSE 4.1 and 4.2. And of course, the existence of an APU gives us the ability to come close to the results obtained from the SPU.

    The decision to move to x86 involved an extremely complex set of requirements. Of course there are issues of backwards compatibility, and issues from the vendor’s side as well. But that said, I believe the biggest topic for us was how much affinity developers would have for this change. In the past 3 years, a large number of refined tools and technologies have been released for the x86 architecture. If another architecture had been selected, it probably would have been even more problematic. The x86 architecture is well known and development is relatively easy.

    Ito: Backwards compatibility, particularly in Japan, is something that is brought up frequently and strongly, so we thought long and hard about this. Realistically, to support backwards compatibility with PS3, the CELL Broadband Engine would have needed to be part of the new console. Currently, it’s not possible to simulate it via software. If CELL were the only requirement, that wouldn’t have been much of an issue; we can freely manufacture CELL if it is deemed necessary. However, that’s not the case with the supporting hardware, which we would also need to sustain indefinitely. There are parts which will become difficult to obtain, since 7 years is already considered long in the IT industry…

    Using this opportunity, we decided to stop going down this path, and as Mark said, to focus our efforts on simplifying developer efforts.

    Essentially, SCE’s thinking was that, rather than committing to sustaining and maintaining PS3 (and prior) hardware long-term, it needed to transition to a “more ordinary” platform. x86 offered easier development and a way off the “proprietary” track, which led to this decision.
    SCE hasn’t yet come to a conclusion regarding the BC problem. Long-term use of the Cloud is said to be part of their vision, but more accurately, SCE is evaluating various content in various forms, including sustaining BC.

    GPU Customization with use of GPGPU in Mind. Difference in Launch Title Numbers
    Use of the x86 architecture also means that, externally, it becomes difficult to distinguish PS4 development from PC development. How does Cerny think about showcasing the difference and value of PS4?

    Cerny: Our primary target is to provide a powerful system that developers are familiar with. It goes without saying that an x86 CPU has high familiarity. From a power perspective, and for providing new possibilities, it will become more important to realize technologies benefiting from the GPU. GPUs increase graphics performance and have traditionally been used in that manner. But the computing capabilities of the GPU will be harnessed in various areas, in manners we can’t even begin to think of now.

    This essentially means [PS4] will be a console that focuses not only on CPU performance but also on GPU performance…essentially the realization of a console built around a GPGPU. In fact, at the PS4 press conference, a physics demo using the GPGPU was shown, and PS4 has the added value proposition of a high-performance GPGPU as a core feature of the platform. For that purpose, the PS4 CPU and GPU have a few proprietary tricks up their sleeves.

    Cerny: The GPGPU for us is a feature of utmost importance. For that purpose, we’ve customized the existing technologies in many ways.

    Just as an example…when the CPU and GPU exchange information in a generic PC, the GPU must first flush its caches before reading the CPU’s data, and flush them again before returning results to the CPU. We’ve created a cache bypass: the GPU can return results directly through it. With this design, we can send data directly from main memory to the GPU shader cores, essentially bypassing the GPU L1 and L2 caches. Of course, this isn’t just for reads, but also for writes. Because of this, we have an extremely high bandwidth of 10GB/sec.

    We’ve also added a small tag to the L2 cache. We call this the VOLATILE tag. We are able to control data in the cache based on whether or not it is marked VOLATILE. If this tag is used, the data can be written directly to memory. As a result, the entirety of the cache can be used efficiently for graphics processing.
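To make the VOLATILE-tag idea concrete, here is a minimal toy model in Python. Everything here (the class name, eviction policy, addresses) is invented for illustration and is not the actual PS4 cache design; it only shows the behavior being described: tagged data goes straight to memory and never occupies a cache line, leaving the cache free for graphics data.

```python
# Toy model of a cache with a VOLATILE tag: volatile writes bypass the
# cache and go straight to backing memory; normal writes occupy cache lines.
class TaggedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}    # addr -> value (cached, e.g. graphics data)
        self.memory = {}   # addr -> value (backing main memory)

    def write(self, addr, value, volatile=False):
        if volatile:
            # VOLATILE data bypasses the cache entirely.
            self.memory[addr] = value
        else:
            if len(self.lines) >= self.capacity:
                # Evict an arbitrary line back to memory (toy policy).
                old_addr, old_val = self.lines.popitem()
                self.memory[old_addr] = old_val
            self.lines[addr] = value

    def read(self, addr):
        return self.lines.get(addr, self.memory.get(addr))

cache = TaggedCache(capacity=2)
cache.write(0x10, "texture", volatile=False)        # stays cached
cache.write(0x20, "compute_result", volatile=True)  # straight to memory
print(0x20 in cache.lines)  # False: volatile data never pollutes the cache
print(cache.read(0x20))     # compute_result
```

The point of the design choice, as described in the interview, is that compute results passing through don't evict graphics working sets from the cache.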

    This function allows graphics processing and computing to operate in harmony, and allows both to function efficiently. Essentially “harmony” in Japanese. We’re trying to replicate the SPU Runtime System (SPURS) of the PS3 by heavily customizing the cache and bus. SPURS is designed to virtualize and independently manage SPU resources. On the PS4 hardware, the GPU can likewise be used, alongside x86-64, to manage resources at various levels. The design has 8 pipes, and each pipe has 8 computation queues, 64 queues in total. Each queue can execute workloads such as physics-computation middleware and other proprietary designed workflows, all while simultaneously handling graphics processing.
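As a rough sketch of the 8-pipe, 8-queue front end just described, the following Python simulation shows 64 independent queues being drained round-robin. The scheduling policy, function names, and job names are all hypothetical; the real hardware arbitration is certainly more sophisticated than this.

```python
# Illustrative sketch: 8 compute pipes, each with 8 queues (64 total),
# feeding jobs to the GPU alongside graphics work. Not the real scheduler.
from collections import deque

NUM_PIPES, QUEUES_PER_PIPE = 8, 8

# 8 pipes x 8 queues of pending compute jobs.
pipes = [[deque() for _ in range(QUEUES_PER_PIPE)] for _ in range(NUM_PIPES)]

def submit(pipe, queue, job):
    pipes[pipe][queue].append(job)

def drain_one_round():
    """Pop at most one job from every queue (simple round-robin pass)."""
    executed = []
    for pipe in pipes:
        for q in pipe:
            if q:
                executed.append(q.popleft())
    return executed

submit(0, 0, "physics_step")    # e.g. physics middleware
submit(0, 1, "audio_raycast")   # hypothetical compute workload
submit(7, 7, "particle_update")
print(drain_one_round())  # ['physics_step', 'audio_raycast', 'particle_update']
```

Having many independent queues is what lets middleware and game code submit compute work without coordinating with each other or with the graphics pipeline.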

    This type of functionality isn’t used widely in the launch titles. However, I expect this to be used widely in many games throughout the life of the console and see this becoming an extremely important feature.

    Note: I took out most of the commentary....

    Part 2

    GPU Customization with use of GPGPU in Mind. Difference in Launch Title Numbers (cont’d)
    Cerny: In the next few years, we’ll also be supporting a different approach.

    We have our own shader APIs, but in the future, we’ll provide functions which will allow deeper access to the hardware level and it will be possible to directly control hardware using the shader APIs. As a mid-term target, in addition to common PC APIs such as OpenGL and DirectX, we’ll provide full access to our hardware.

    Regarding the CPU, we can use well known hardware, and regarding the GPU, as developers devote time to it, new possibilities which weren’t possible before will open up.

    The properties of the CPU and GPU are quite different, so at the current stage, if you were to use a unified architecture such as HSA, it would be difficult to use the CPU and GPU efficiently. However, once the CPU and GPU are able to use the same APIs, development efficiency should increase exponentially. This will be rather huge. Thus, we see this as somewhat of a long-term goal.

    Regarding easier development, talking about the action game KNACK

    Cerny: I’ve spoken with a lot of developers, and most of them are saying that creating a game is considerably easier.

    Working on a game myself, I feel that is true. KNACK is still in development, but the PS4, compared to the PS3, really makes game development easy.

    This will also lead to the main difference with the PS3 era. The main difference is, we will have many titles for launch. Because game development is easier, there shouldn’t be a barrier as there had been previously. PS3 had the image that it was difficult to develop for. Even the PS2 wasn’t that easy. PS4 has a PC CPU and a GPU that’s been enhanced from a PC so the game lineup should become very rich.

    The most important difference is that it won’t take as much technical training, so developers can focus more on the game-play aspects. That’s ideal, isn’t it? As a result, [gamers] should see a world with a richer gaming experience.
    Regarding 4K TV, he is a little cautious

    Cerny: hm…(lol). Personally, I’m very interested in 4K.

    We’re still in the initial stages of supporting 4Kx2K in games. Our focus is to provide for a solid FullHD experience. We can secure the display buffer for Game and OS separately, and can provide for independent scaling of both as well. (Regarding 4K) We can provide an extremely smooth user interface.

    If we consider purely memory bandwidth, with 4K, securing 2 displays’ worth of display buffers requires 10GB/sec. That’s just for simply displaying.
    This is our simple answer for why we’re focusing on just the FullHD experience.
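A back-of-envelope check shows the quoted figure is plausible. The exact assumptions behind SCE's number aren't stated, so the bit depth, refresh rate, and read-plus-write accounting below are guesses; this only demonstrates the order of magnitude, not their actual calculation.

```python
# Rough reconstruction of the 4K display-buffer bandwidth estimate.
# All parameter choices are assumptions, not SCE's published math.
width, height = 3840, 2160      # "4Kx2K"
bytes_per_pixel = 4             # assuming 32-bit color
refresh_hz = 60
num_buffers = 2                 # separate Game and OS display buffers

one_buffer = width * height * bytes_per_pixel          # ~33 MB per buffer
scanout_read = one_buffer * num_buffers * refresh_hz   # reading both each frame
composite_write = scanout_read                         # if both are also rewritten

total_gb_per_sec = (scanout_read + composite_write) / 1e9
print(f"{total_gb_per_sec:.1f} GB/sec")  # ~8.0 GB/sec, in the ballpark of 10
```

With overheads, alpha blending, or deeper color formats, the figure climbs toward the 10GB/sec cited in the interview, which is a substantial slice of total memory bandwidth spent before any rendering happens.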

    PS4 will read CDs, but will not play back audio CD music.

    Realizing Energy Efficiency and Smoothness using a Second Custom Chip with Embedded CPU
    Cerny: The second custom chip is essentially the Southbridge. However, it also has an embedded CPU. This is always powered, and even when the PS4 is powered off, it monitors all IO systems. The embedded CPU and Southbridge manage download processes and all HDD access. Of course, even with the power off.

    Ito: The second custom chip also takes into consideration environmental problems. For background downloading, if the main CPU needs to be started every time, energy consumption increases significantly, so we run this with the second chip. Particularly in Europe, there are strict energy consumption regulations, so handling consumption in this manner is also one of our goals.

    Cerny: There are also network bandwidth considerations. Background downloading allows for smooth downloading of large files even when bandwidth is limited.

    More importantly, this helps reduce the time required until a game can be played. Simultaneously, this also allows for decreased initial downloads. Only the first few GB are downloaded during the initial play session and while the game is being played, the remaining portions will be downloaded. Of course, even with the power off, the remaining download will continue. So, the primary goal is to decrease the amount of download time before initial play.

    Cerny: The data is logically divided into a few chunks and uploaded [to the server by the dev?] with a specially annotated script. Further, based on how the script is written, additional customization is possible, for example, downloading the single-player portion or the multiplayer portion first… Related to this: system memory has increased 16x since PS3, but the BD drive transfer speed has only increased a few fold. Because of this, using a similar technique, it’s possible to copy just the important parts from the BD to the HDD and start the game, loading more smoothly directly from the faster HDD. Of course, it’s possible to stream the data from a ginormous BD and play a game as well.
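The chunk-annotation idea above can be sketched as a simple priority sort. The script format, tags, and chunk names here are entirely invented; the interview only says developers annotate chunks so the system knows what to fetch first.

```python
# Sketch of prioritized chunk downloading: required chunks first, then
# whichever mode the player wants to start with, then everything else.
# Chunk layout and tag names are hypothetical.
game_chunks = [
    {"name": "core",         "tags": {"required"},      "gb": 2},
    {"name": "single_intro", "tags": {"single_player"}, "gb": 3},
    {"name": "multi_maps",   "tags": {"multiplayer"},   "gb": 4},
    {"name": "late_levels",  "tags": {"single_player"}, "gb": 6},
]

def download_order(chunks, play_first):
    """Order chunks: required, then the requested mode, then the rest."""
    def rank(chunk):
        if "required" in chunk["tags"]:
            return 0
        if play_first in chunk["tags"]:
            return 1
        return 2
    return [c["name"] for c in sorted(chunks, key=rank)]

# A multiplayer-focused player can start after core + multi_maps (6 GB)
# while the remaining chunks download in the background.
print(download_order(game_chunks, "multiplayer"))
```

This mirrors the stated goal: minimize the time before first play, and let the Southbridge CPU finish the rest even with the console "off".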

    Note:Commentary removed again...getting late so sorry for typos
    Part 3 (last one)

    Built-In Video Encoder for Video Sharing and Vita Remote-Play
    Cerny: The PS4 has a dedicated encoder for video sharing and the like. There are a few dedicated encode and decode functions available which use the APU minimally. This is also used for playback of compressed in-game audio such as MP3, and for audio chat.

    When the system is fully on, the x86 CPU core controls the video sharing system. However the Southbridge has features to assist with network traffic control.

    Cerny: While investigating the initial hardware design, we thought about which aspects would become important in the future. All hardware components have been designed with the goal of enveloping the gamer in a wonderful user experience.

    Our team thought deeply about the concept of “computer entertainment”. People from other Sony groups participated, and we investigated this from many different angles. Since we are the “game” people, we have UI specialists, and Richard Marks (father of SCE’s Natural UI design) was involved as well. This multi-faceted team spent a few weeks discussing how amazing a user experience we could realize.

    So, how does this affect games?

    Ito: For example, even without a PS4, this time you can use the PlayStation App to see game details (content?). Even without a PS4, you would be able to feel “man, this game looks fun” or “this game might be pretty good”. With this, it is our hope that the allure of the PS4 will be evident.
    Of course, if a gamer shares images on Facebook, you can see them on Facebook without the PlayStation App.

    Regarding Vita Remote-Play

    Cerny: Vita Remote Play is special.
    Smartphones and tablets can be used to see PS4 game information and experiences in various places. In addition, this type of content can be seen on a PC as well, via a web platform.

    PS4 has video encoding hardware, and this is used with the Video [Miracast?-type] feature. Vita’s control inputs are sent to the PS4, and using these functionalities, with minimal overhead and no pain, it becomes possible to remotely play PS4 games. At least, this is what we’re aiming for, and compared to the PS3 era, we’re aiming for significantly wider support of remote play. Of course, this function applies to games using the DualShock. Games that use the camera (such as PS Move and PS4 location recognition) cannot utilize remote play.
    Leaving that aside, Vita remote play was developed to provide as close to perfect PS4 gameplay within a household as possible. It requires connection to a Wi-Fi network and should be used in a low-latency environment. The thinking here is that even if someone else is using the TV, you can continue playing PS4 games.
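The loop described above (hardware-encoded video out, controller input back) can be sketched as a toy simulation. The function names, the fake "encoder", and the state representation are all illustrative stand-ins; the real system streams compressed video over Wi-Fi with dedicated silicon.

```python
# Toy model of the Remote Play tick: input flows Vita -> PS4, and an
# encoded frame flows PS4 -> Vita. All names here are hypothetical.
def encode(frame):
    # Stand-in for the PS4's hardware video encoder.
    return f"h264({frame})"

def remote_play_tick(game_state, vita_input):
    # 1. Apply the controller input that arrived from the Vita.
    game_state = game_state + [vita_input]
    # 2. Render and hardware-encode the next frame for streaming back.
    frame = f"frame_after_{vita_input}"
    return game_state, encode(frame)

state = []
state, stream = remote_play_tick(state, "press_X")
print(stream)  # h264(frame_after_press_X)
print(state)   # ['press_X']
```

Offloading the encode step to fixed-function hardware is what keeps the "minimal overhead" claim plausible: the game itself pays almost nothing for being streamed.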

    Use of Real Names for Gaming without Walls, Use of a BSD base for a Rich OS Layer
    Cerny: We’re investigating using the network to switch control between players. Just keep in mind, this doesn’t mean these types of features will all be present on Day 1. Please understand that we’re preparing and investigating these as features supportable on the platform.
    Either way, the use of social interactions to stimulate gameplay should become a huge weapon in our arsenal.

    Just my image of things but…think of it like being in a living room. When you’re playing a game with your friends, there’s no physical wall to prevent interaction. It’s kind of like having hundreds of friends gaming with you nearby, but experiencing that feeling while sitting at the far end of a network. We want to act as the facilitators to enable this: the feeling of actually meeting and enjoying gaming with your real-world friends around the world. Over the course of a few years, we’ll support all the features required to achieve this goal.
    For this purpose, PS4’s OS layer is very rich compared to PS3 or Vita.

    Cerny: The OS is based on BSD. I believe this is the first game console using this architecture.
    From the OS side, the PS4 will allow use of many multiple features simultaneously. Our goal is something like the following:

    Send a video of the game you’re playing, and return to the game immediately to continue playing. Then, watch a game play video from your friend, then switch to video chat with that friend right away. If you see interesting DLC that your friend has, move to the store, then be able to ask your friend if it’s the right one.
    In this manner, we envision it being possible to come and go between, and make use of, many features. Even for multi-player games, you would be able to move the game to the background without quitting and go through a similar routine.

    The facilitation by the OS will allow for a rich set of actions.

    Regarding the “real name” policy

    Cerny: The concept of aliases for online games is the current paradigm. For example, in a multiplayer deathmatch, it’s better to have an alias, right? However, when co-op aspects or communications leveraging social interactions are brought into the game, it’s better not to have an alias. For example, let’s say you gift an item you earned in a game to a friend. That type of interaction should evoke a completely different feeling than when you’re playing the game.

    Of course, we respect the desire to use an alias for game play as well. We support that type of gaming too. Having a deathmatch using aliases is possible on the PS4 as well. But at the same time, we want to support aspects which increase the fun of gaming with real world friends. For example, wouldn’t it be wonderful to meet a college friend you haven’t seen in ages?
    source: neogaf

  2. #2


    this is too long whats goin on

  3. #3
    Senior Member Jetstreamx's Avatar


    I'm not entirely sure I understand. I demand peer review.

    The Wii U uses the GPGPU as a means to compensate for a lacking CPU, while the PS4 uses the GPGPU setup to process and use information from the CPU more quickly by bypassing the L1 and L2 cache? The GPGPU still takes the traditional role of the GPU, but is able to interact with the CPU more directly, thus increasing speed and efficiency?

