Lip synchronization is one of the most time-intensive character animation challenges we face. Making an animated character appear to speak means working out the timing of the speech and then animating the lips and mouth to match the dialogue track. Selling the illusion can be very difficult: select the wrong shape, or land one frame off with your timing, and your audience is reminded that the on-screen character isn’t really alive. So how do we get mouth animation that both a) sells the illusion of speech and b) doesn’t require an enormous budget?
Internally, our most popular solution is the ‘Lip Flap’ utility that Kipp introduced years back. The utility works dynamically, reading the waveform of an audio file and scaling the mouth between open and closed states based on the volume at a given moment in time. This is far and away our most cost-effective solution to the challenge, as it all but eliminates the cost of hand-keying mouth movement. The shape of the mouth (the phoneme), however, is never altered with this approach, leaving you with a generic “flapping” effect wherein the words ‘dog’ and ‘elf’ play out identically at run time. So how do we get mouth animation that a) sells the illusion of speech, b) doesn’t require an enormous budget, and c) features accurate phoneme swapping?
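The volume-driven idea behind Lip Flap can be sketched in a few lines. This is a minimal illustration in TypeScript (Flash itself uses ActionScript); the function names and threshold values are hypothetical, not Lip Flap’s actual internals:

```typescript
// Hypothetical sketch of volume-driven "lip flap": map the loudness of an
// audio window to a mouth-open scale between 0 (closed) and 1 (fully open).

/** Root-mean-square amplitude of a slice of PCM samples (values in -1..1). */
function rms(samples: number[], start: number, length: number): number {
  let sum = 0;
  const end = Math.min(start + length, samples.length);
  for (let i = start; i < end; i++) {
    sum += samples[i] * samples[i];
  }
  return Math.sqrt(sum / Math.max(end - start, 1));
}

/** Map loudness to mouth openness, with a noise floor and a loudness ceiling. */
function mouthScale(loudness: number, floor = 0.02, ceil = 0.5): number {
  const t = (loudness - floor) / (ceil - floor);
  return Math.min(Math.max(t, 0), 1); // clamp to [0, 1]
}

// One frame of animation: sample the audio under the playhead, scale the mouth.
const samples = [0, 0.4, -0.4, 0.3, -0.3, 0.2];
const openness = mouthScale(rms(samples, 0, samples.length));
```

Run once per frame against the audio under the playhead, that scale value drives the mouth symbol, which is why louder moments open the mouth wider regardless of what sound is actually being made.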
SmartMouth is the answer. Using SmartMouth, you can automatically analyze audio content and assign corresponding mouth shapes: it processes the audio and matches a mouth shape to each frame using a speech-analysis algorithm. Your Flash Timeline remains 100% editable, leaving you in full control of your animation. You’ll still want to go in and polish things where needed, but the bulk of the work is done for you, as though you had painstakingly set it up on your own.
Now go enjoy making your clients happy!