
SAE Magazine 12-2

With classical motion capturing ("MoCap"), the actor or performer wears a suit fitted with markers and sensors and is filmed by multiple special cameras. The movements of the performer, and with them the markers' changes in position and orientation, are recorded and imported into a 3D programme. There the data is transferred onto a kind of skeleton, which is then combined with the object to be animated. In contrast to the techniques previously used in animation studios, the interpolation between key frames makes the animation far more fluid and natural.

In the early days only the performer's body could be recorded, but MoCap technicians have since extended the technique to so-called performance capture: sensors are also placed on the face in order to achieve realistic facial expressions for the character to be animated. The most famous example of this procedure is Gollum from the "Lord of the Rings" films. The gestures and facial expressions of Andy Serkis were transferred so perfectly that a credible character was created in the film by entirely computer-animated means.

The video games developer Rockstar Games took things a step further with its game "L.A. Noire", released in 2011. This detective game used the so-called MotionScan method, in which the actor's face was recorded by a total of 32 video cameras covering every angle around the actor, including above and below the actor's field of vision. The vast amount of captured data was then transferred onto a 3D model and displayed in real time. This set-up produced incredibly realistic facial features, so that later in the game the characters' facial gestures could be used to tell who was lying and who was telling the truth.

Motion capturing can now be found in all areas; however, classical MoCap with sensor suits comes with a big catch: the cost.
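The key-frame interpolation mentioned above can be sketched in a few lines. This is a minimal illustration, not any studio's actual pipeline: joint names and positions are invented, and real MoCap software would interpolate joint rotations (e.g. with quaternion slerp) rather than raw positions.

```python
"""Minimal sketch of key-frame interpolation: joint positions recorded
at key frames are blended linearly to produce the in-between frames.
All joint names and values here are hypothetical."""

def lerp(a, b, t):
    """Linearly interpolate between two 3D points for blend factor t in [0, 1]."""
    return tuple(a_i + (b_i - a_i) * t for a_i, b_i in zip(a, b))

def interpolate_frames(key_a, key_b, steps):
    """Generate intermediate skeleton poses between two key frames.

    key_a, key_b: dicts mapping joint name -> (x, y, z) position.
    steps: number of in-between frames to produce.
    """
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        yield {joint: lerp(key_a[joint], key_b[joint], t) for joint in key_a}

# Two hypothetical key frames for a single elbow joint
frame_0 = {"elbow": (0.0, 1.0, 0.0)}
frame_1 = {"elbow": (0.4, 1.2, 0.0)}

for pose in interpolate_frames(frame_0, frame_1, 3):
    print(pose["elbow"])
```

Because the in-between poses are computed rather than hand-drawn, the motion flows smoothly from one key frame to the next, which is exactly the effect the article describes.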
The acquisition of hardware, meaning specialist cameras and sensor suits, but also renting premises in one of the countless MoCap studios, is expensive and currently beyond the reach of small companies with limited budgets. The introduction of Microsoft's motion-sensing input device "Kinect" was therefore an interesting development. The name Kinect is derived from the terms "kinetic" and "connect". The flat black housing holds two infra-red sensors and a 640 x 480-pixel camera, and the device processes approx. 30 images per second. Released in November 2010, approx. 2.5 million units were sold worldwide in the first month.

The camera was intended solely as a new input medium for Xbox games, but shortly after its introduction the first driver hacks appeared, extending the Kinect's use in some interesting directions. Microsoft initially threatened not to tolerate such modifications, but the software giant has since come to see these hacks as inspiring, and the imaginations of programmers now know no bounds. A team of Chinese students caused a sensation when they used the Kinect's camera for motion capturing and came up with some interesting results.

Infobox — the beginnings of motion capturing: "Mike the Talking Head" was a breakthrough in the field of facial motion capture in 1988. The head of Mike Gribble was scanned and digitised and could be used to visualise spoken words.

The Russian company iPiSoft brought out the first useful programme that allows movements to be captured and transferred using the Kinect. With the assistance of the two infra-red sensors, depth images can be generated, which are then used as a template to transfer movements onto a virtual skeleton.
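The step from a depth image to a virtual skeleton starts by turning each depth pixel into a 3D point. A minimal sketch of that back-projection, using the standard pinhole camera model, is shown below; the intrinsic values are rough, commonly cited figures for the Kinect's depth camera and are used purely for illustration, not taken from iPiSoft's software.

```python
"""Sketch of back-projecting a depth pixel into a 3D point via the
pinhole camera model, the first step from a Kinect-style depth image
towards fitting a virtual skeleton. Intrinsics below are assumed
ballpark values for a 640 x 480 depth camera."""

# Assumed intrinsics: focal lengths and principal point, in pixels
FX, FY = 585.0, 585.0          # focal lengths
CX, CY = 320.0, 240.0          # optical centre of the 640 x 480 image

def depth_to_point(u, v, depth_mm):
    """Back-project pixel (u, v) with depth in millimetres to a 3D
    point (x, y, z) in metres in the camera's coordinate frame."""
    z = depth_mm / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return (x, y, z)

# A pixel at the image centre, 2 m away, lies on the optical axis:
print(depth_to_point(320, 240, 2000))   # -> (0.0, 0.0, 2.0)
```

Doing this for every pixel yields a point cloud of the performer; software such as iPiSoft's then fits a skeleton model to that cloud frame by frame, which is what makes markerless MoCap possible.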
The results of this marker-less MoCap are astoundingly good and can easily be transferred to today's popular 3D programmes such as 3ds Max and Maya. Because a set-up of two Kinect cameras is very flexible in terms of room size and structure, this form of MoCap is now used in virtually all German SAE Institutes and is a fixed part of teaching in the new curriculum. This gives students the opportunity to embed intricate animations in their final projects.

A breakthrough in the area of gesture recognition is expected at the end of 2012 / beginning of 2013: Leap (http://www.leapmotion.com), a start-up company from San Francisco, plans to bring its 3D gesture controller Leap Motion onto the market. This is a small box which is connected to a PC by USB or Bluetooth and recognises movements precisely.
