Mikey Please's Styrofoam and light, The Boxtrolls' stop-motion and digital effects, Animation studios using scientists, Real-time facial tracking on a laptop

Wednesday, September 10, 2014

Mikey Please creates amazing dark comedy out of Styrofoam and Light

"In the beginning it was literally nothing, and then there wasn't... nothing. There was stuff." Comedian Josie Long voices this gorgeous piece of animation, which follows a girl who tries to make something perfect and ends up simply giving up. The six-minute stop-motion short, Marilyn Myller, was a year in the making at Clapham Road Studios. This Mikey Please-directed labour of love features countless creations, intricately carved from Styrofoam and beautifully lit, something that must have been both a technical and a creative headache. Marilyn Myller pokes a finger at the pretentiousness of the art world, as the titular character smashes the work she has so lovingly created in front of a crowd of champagne-sipping morons, while the translated African soundtrack tells her she is being "kind of stupid". In an interview with Designboom, the BAFTA award-winning Please explains: "I try to keep a graphical quality in my work, using clean materials and illustrative compositions. But there's a danger in that it can become over clinical and clean, which defeats the point of making this stuff for real. So the painting with light was a great way to inject a little organic-ness back into the image without losing the clarity (which I like) of the physical objects." This is a marvellous piece of animation that has already been featured at a number of festivals, including Sundance, and is sure to win Please and Parabella Animation Studio numerous accolades.

 

The Boxtrolls fuses stop-motion animation with digital effects

The Boxtrolls is the latest 3D stop-motion film from LAIKA, the creators of the Oscar-nominated Coraline and ParaNorman. Adapted from Alan Snow's book Here Be Monsters!, the story follows the adventures of an orphan boy named Eggs as he tries to save his adopted family, the Boxtrolls, from the evil exterminator Archibald Snatcher (voiced by Sir Ben Kingsley). The design of the mythical town of Cheesebridge was inspired by the work of French graphic novelist Nicolas de Crécy. Co-director Anthony Stacchi explains on the Boxtrolls website: "His organic use of lines, patterns and shapes gives his cityscapes the same complexities that you find in nature". Michael Breton, a French-Canadian concept illustrator, was recruited to create the world of the Boxtrolls. He said that after he started drawing, his "approach was to think about the psychology behind the image and play with the audience's perception of what might be hiding in the dark". The film immaculately fuses traditional stop-motion animation with digital technology. Creative supervisor of puppet fabrication Georgina Hayns said one of the biggest challenges for her team was finding a way to move the Boxtrolls' limbs in and out of their boxes, adding that special gears were created for some of the puppets to manage this. The realistic movement of the characters' faces was achieved through replacement animation, a technique in which the face, or parts of it, is swapped for a minutely different version in every frame. Brian McLean, director of rapid prototyping, estimates that around 53,000 facial parts were made for the film. These were fashioned using a powder-based colour 3D printer, a piece of kit not available to animators just a few years ago. To create a fully populated imaginary world, the animators used CGI for the crowds and backgrounds. The film is slated for release at the end of the month.
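For readers curious what replacement animation looks like in practice, here is a purely illustrative Python sketch, not LAIKA's pipeline: the part names, library sizes and keyframes below are invented. The idea it demonstrates is that splitting a face into independently swappable regions multiplies the expressions available from a modest library of printed parts, and that each frame simply looks up which part is currently being "held".

```python
# Illustrative sketch of replacement animation, with hypothetical part
# libraries. A puppet's face is rebuilt each frame from pre-made pieces.

# Hypothetical libraries of printed brow and mouth pieces.
brow_pieces = [f"brow_{i:03d}" for i in range(40)]
mouth_pieces = [f"mouth_{i:03d}" for i in range(60)]

# Independent regions multiply: 40 x 60 = 2,400 distinct face combinations.
print(len(brow_pieces) * len(mouth_pieces), "possible face combinations")

def face_for_frame(frame, brow_track, mouth_track):
    """Return the brow and mouth piece scheduled for a given frame.

    brow_track / mouth_track map frame numbers to part names; between
    keyframes the most recent part is held, mimicking how animators swap
    in a new piece only when the expression changes.
    """
    def held(track):
        keys = [f for f in sorted(track) if f <= frame]
        return track[keys[-1]] if keys else None
    return held(brow_track), held(mouth_track)

# Tiny usage example with made-up keyframes.
brows = {0: "brow_001", 12: "brow_017"}
mouths = {0: "mouth_003", 4: "mouth_010", 12: "mouth_021"}
for f in range(0, 16, 4):
    print(f, face_for_frame(f, brows, mouths))
```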

Animation studios turn to top scientists to solve long-standing problems

The past few years have seen film animation develop in leaps and bounds. This is in no small part thanks to big studios like DreamWorks and Disney-owned Pixar recruiting top scientists to work on the problems associated with creating 3D reality inside a computer. In an interview with the LA Times, physicist Ron Henderson, formerly a faculty member at Caltech, explained that the move from the laboratory to the film industry is attractive to scientists because of the challenge of creating realistic effects such as dust, fire and water, and the satisfaction of seeing the results on the big screen. USC Institute for Creative Technologies chief visual officer Paul Debevec said: "The physics behind what's happening in these movies is incredibly complicated". The computer scientist added: "You need real scientists to understand what's going on. These are PhD-level folks who could have been publishing papers in Physics Today. Instead, they are working on Hollywood blockbuster films." Henderson is currently working on an algorithm to help animators create 3D bubbles for DreamWorks' upcoming animated feature Home. The film is about tiny aliens called the Boov who live in soap bubbles. To understand how bubbles work, Henderson invited physicist colleague Alejandro Garcia from San Jose State to give a lecture to his team. Garcia did various tricks with liquid soap and party bubbles, including exploding a bubble made from hydrogen. As the demand for more realism in animation increases, it's likely that studios will continue to recruit scientists from the worlds of mathematics, chemistry, aeronautical engineering, astrophysics and cognitive science. DreamWorks has already poached a dozen scientists from NASA's Jet Propulsion Laboratory!
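As a flavour of the physics these teams have to respect (and emphatically not Henderson's algorithm), the Young-Laplace relation gives the pressure excess inside a soap bubble, Δp = 4γ/R, where the factor of four accounts for the film's two surfaces. The toy Python snippet below uses an approximate surface tension for soapy water; it simply shows that smaller bubbles hold higher internal pressure, one of the behaviours a believable bubble solver has to reproduce.

```python
# Toy illustration of the Young-Laplace relation for a soap bubble:
# excess pressure = 4 * gamma / R (two film surfaces, each contributing 2*gamma/R).
gamma = 0.025   # surface tension of soapy water, N/m (approximate)

def pressure_excess(radius_m):
    """Pressure inside a soap bubble above ambient, in pascals."""
    return 4 * gamma / radius_m

for r in (0.005, 0.02, 0.1):  # 5 mm, 2 cm and 10 cm bubbles
    print(f"radius {r * 100:.0f} cm -> excess pressure {pressure_excess(r):.1f} Pa")
```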

Facial tracking and animation available in real-time on a laptop

Newly developed software from a research team at the Graphics and Parallel Systems Laboratory at Zhejiang University in China allows realistic real-time facial tracking and animation for anyone who has a camera on their computer or mobile device. This amazing programme does not require an RGB-D (depth) camera or calibration to the individual's face. Instead, a regression-based algorithm built around a Displaced Dynamic Expression (DDE) model is used to represent both the 3D facial expressions of the user and the 2D facial landmarks. The camera matrix and the user's identity are corrected on the fly as frames arrive, which negates the need for calibration by new users. In a paper entitled Displaced Dynamic Expression Regression for Real-time Facial Tracking and Animation, Chen Cao, Qiming Hou and Kun Zhou explain that the automatic approach "learns a generic regressor from public image datasets, which can be applied to any user and arbitrary video cameras to infer accurate 2D facial landmarks as well as the 3D facial shape from 2D video frames, assuming the user identity does not change across frames." "The inferred 2D landmarks are then used to adapt the camera matrix and the user identity to better match the facial expressions of the current user. The regression and adaptation are performed in an alternating manner, effectively creating a feedback loop. With more and more facial expressions observed in the video, the whole process converges quickly with accurate facial tracking and animation."
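The sketch below is a rough structural outline, in Python, of the alternating loop the authors describe: a generic regressor infers landmarks and expression per frame, and the camera matrix and user identity are adapted between frames. It is not the authors' code; every function body is a hypothetical placeholder (the real regressor is learned from image datasets), and the landmark and blendshape counts are assumptions rather than figures from the paper.

```python
# Structural sketch of the regression/adaptation feedback loop described in
# the DDE paper. All numeric sizes and function bodies are placeholders.
import numpy as np

def regress_dde(frame, prev_state, camera, identity):
    """Stand-in for the learned DDE regressor: returns 2D landmarks and
    3D expression parameters for the current frame."""
    landmarks_2d = np.zeros((73, 2))   # 73 landmarks is an assumption
    expression_3d = np.zeros(46)       # blendshape weights, assumed size
    return landmarks_2d, expression_3d

def adapt(landmarks_2d, camera, identity):
    """Stand-in for the adaptation step: refine the camera matrix and the
    user identity so later regressions fit this user better."""
    return camera, identity

def track(frames):
    camera = np.eye(3)                 # rough initial camera matrix
    identity = np.zeros(50)            # neutral identity coefficients
    state = None
    for frame in frames:
        # Regression: infer this frame's landmarks and expression.
        landmarks, expression = regress_dde(frame, state, camera, identity)
        # Adaptation: update camera and identity from the new evidence,
        # closing the feedback loop; no per-user calibration is needed.
        camera, identity = adapt(landmarks, camera, identity)
        state = (landmarks, expression)
        yield expression               # drive an avatar with these weights

# Usage with dummy frames standing in for webcam images.
for weights in track([np.zeros((480, 640, 3)) for _ in range(3)]):
    pass
```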
