Interview with Alexander Sytchev, Lifemode Interactive’s Director of Production

LifeStudio:HEAD technology is aimed at rendering computer character animation in real time. The technology's SDK has already been licensed by several top Russian and UK studios, including Nival Interactive, 1C, Gaijin Entertainment, Buka, G5 Software, and the UK-based FireFly Studios.
We speak with one of the brains behind LifeStudio, Alexander Sytchev, Lifemode Interactive’s Director of Production about the software.
XG: Hi, can you introduce yourself and tell us how you are related to the company?
AS: I am Alexander Sytchev, Lifemode Interactive’s Director of Production.
XG: How about telling us more about LifeStudio:Head 2.6?
AS: It is a tool kit for creating heads and mastering facial animation, including automated lip-synch. Computer games, video presentations, talking heads, and computer secretaries are just a few examples of applications where the tool kit can be used. Quick results can easily be produced within the LifeStudio:Head environment with the help of a comprehensive set of prototypes, a texture constructor, and animation libraries. On the other hand, the tool kit is extremely open-ended: a head model can be mastered in 3ds max or Maya, and almost every other component, such as animation libraries and textures, can be customized. You can create a talking animal or even a talking soap box with the same technology. One of the most remarkable features of the technology is the cross-compatibility of all graphical assets (models, textures, and animations), so that any character can play any animation. The result can be output as an AVI in the tool, rendered in 3ds max or Maya, or exported to the facial animation engine, a part of the LifeStudio:Head SDK that can be embedded into a user application.
XG: Can you tell us more about the Macro-Muscles technology LifeStudio:Head is based on?
AS: Today an artist dealing with facial animation mostly creates emotional characters by hand. We are trying to get an AI routine to do some of that work for the artist. Our Macro-Muscles system was therefore made well balanced and smart enough that human intelligence can design a set of parameterized emotions, the Macro-Muscles, which a simple artificial intelligence can then use in real time. Thanks to our software, mixing different Macro-Muscles gives a visually recognizable effect, so instead of hand-crafting the whole animation, an artist creates bricks of emotions, and the complete animation can be generated later in real time. This is the key to non-linear animation. Of course, an artist may still create an animation instead of the AI; that process becomes easy because it is almost mechanical.
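The idea of mixing parameterized emotion "bricks" at runtime can be sketched roughly as a weighted blend of presets. Everything below is illustrative: the preset names, channels, and `blend` function are assumptions for the sake of the example, not the actual LifeStudio:Head API.

```python
# Hypothetical sketch of blending "bricks of emotions" in real time.
# Each preset maps low-level facial channels to target weights;
# the channel names are invented for illustration.
SMILE = {"mouth_corner_up": 1.0, "cheek_raise": 0.6}
FEAR = {"brow_raise": 0.9, "eye_widen": 1.0, "mouth_corner_up": -0.3}

def blend(presets_with_intensity):
    """Mix several emotion presets into one pose by weighted sum.

    presets_with_intensity: list of (preset_dict, intensity in [0, 1]).
    Returns a dict of channel -> blended weight, clamped to [-1, 1].
    """
    pose = {}
    for preset, intensity in presets_with_intensity:
        for channel, weight in preset.items():
            pose[channel] = pose.get(channel, 0.0) + weight * intensity
    # Clamp so stacked emotions never over-drive a channel.
    return {c: max(-1.0, min(1.0, w)) for c, w in pose.items()}

# A mostly-happy face with a touch of fear, mixed on the fly:
pose = blend([(SMILE, 0.8), (FEAR, 0.3)])
```

The point of the sketch is the workflow the answer describes: the artist authors the presets once, and the runtime picks intensities frame by frame, so the full animation never has to exist in advance.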
XG: So, we know that LifeStudio:Head is sophisticated. But how easily can you make use of the product?
AS: It is sophisticated in depth but easy to use. The interface is intuitive, and many templates are supplied, so it is easy to get results from ready-made components. Customizing a single component takes more effort, but the sequence of steps is easy to explain. The most difficult part is changing several components at once: you may have to repeat our entire work of balancing and orthogonalizing the ingredients.
XG: What about the SDK. How does it complement the tool?
AS: The SDK provides the means to incorporate all the power of the technology into your own application. You get the benefit of cross-compatibility and non-linear animation. If the player of a game pushes a button and something blows up, at exactly that moment the AI can make the game character look frightened.
Only lip-synch based on speech recognition cannot be generated in real time. This is not a limitation for games, because if you have a collection of speech sound files you can pre-render the lip-synch. But if the application uses a text-to-speech package that can generate phonemes on the fly, then lip-synching can also be done in real time.
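Real-time lip-sync from a phoneme stream usually comes down to mapping each incoming phoneme to a mouth shape (a viseme). The table and function names below are assumptions made up for this sketch, not the SDK's actual interface, and the phoneme labels are borrowed from the common ARPAbet convention.

```python
# Illustrative sketch: drive lip-sync from phonemes emitted on the fly
# by a text-to-speech engine. The mapping is deliberately coarse;
# a real table would cover the full phoneme set.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "IY": "smile",
    "UW": "round", "OW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth_on_lip", "V": "teeth_on_lip",
}

def visemes_for(phonemes):
    """Convert a phoneme stream to mouth shapes, defaulting to 'rest'."""
    return [PHONEME_TO_VISEME.get(p, "rest") for p in phonemes]

# The word "map" is roughly the phonemes M, AE, P:
print(visemes_for(["M", "AE", "P"]))  # → ['closed', 'open', 'closed']
```

Because the lookup is trivial per phoneme, it fits comfortably in a per-frame game loop, which is why phoneme-emitting TTS makes real-time lip-sync possible while raw speech recognition does not.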
In addition to animation, the SDK allows head construction and even head generation inside the application.
XG: Could you tell us about some of the studios and games that have used this technology?
AS: Silent Storm by Nival Interactive uses our SDK, so you can create the faces of your own characters, choosing a personality, culture, facial features, and individual haircuts. The edited characters are extremely lifelike and respond with lip-synching and facial animation. Moreover, the SDK is now licensed by several successful developers, including Firefly Studios, 1C, Buka Entertainment, and G5 Software, and some titles are still in development.
XG: Is there anything else you would like to tell our readers?
AS: I am extremely satisfied with our development. As it happened, we foresaw the future demands of the game market. As you know, the first-person-shooter share is shrinking, while interest in virtual speaking characters and human-life simulation is growing. We anticipated the change in genre and offered game developers the ability to use realistic characters in their games, so that a player can be emotionally involved in a dialogue with a virtual human. We dream of raising the content of computer games to a psychological level. And if we see that characters in games become clearly but inappropriately emotional, we will then switch to solving Artificial Intelligence problems.