Future Technologies

Computer graphics expert shares new technique for creating digital humans

Dr Hao Li gave the IET Appleton EngTalk in September, describing a new artificial-intelligence-driven technique that can create digital humans more quickly and easily than traditional approaches. A researcher at the Institute for Creative Technologies, University of Southern California, Dr Li shared his work on human digitisation and facial animation with an audience at IET London: Savoy Place.

ADVANCES IN THE ENTERTAINMENT INDUSTRY

Dr Li showed that the VFX industry has already created near-photorealistic digital humans, referencing his work on the film Furious 7. Working with visual effects company Weta Digital, Dr Li helped to create a digital replica of the actor Paul Walker, who passed away midway through filming.

Dr Li indicated that the gaming industry, too, has pioneered developments in human digitisation. “What is possible nowadays is a state-of-the-art system that allows real people to drive virtual characters in real time – an avatar can be driven in real time,” he said. “The only issue is that creating such an asset requires weeks, if not months, of work…”

The challenge for the entertainment industry is not so much creating photorealistic or real-time content as having to invest huge amounts of time, expertise and money in the work.

A “SHORTCUT” FOR GENERATING DIGITAL CONTENT

Dr Li suggests that the solution is deep learning: “If you use a technique such as deep learning, one of the fields in artificial intelligence where all the processing is driven solely by massive amounts of data, you can in some ways shortcut the problem of generating digital content.”

The approach involves providing a deep neural network with training data, which enables it to generate content. Dr Li has already applied this technique to projects that he’s worked on with colleagues at the Institute for Creative Technologies, the wider University of Southern California and his start-up Pinscreen. They’ve used deep learning to transfer high-resolution features from a database to a face in a low-resolution photograph. They’ve also used it to synthesise new views of a face, for instance generating the side of a face when only the front has been provided as input. They’ve even used it to generate hair for a subject, which Dr Li says is “notoriously difficult for computer graphics”.

NEW MEDIA FOR THE MASSES

While the approach is intended to make content creation quicker and easier for industry, Dr Li is also interested in enabling everyday people to create similar kinds of content. “I think these types of content creation tools will move towards something that is accessible to everyone,” he said.

Dr Sarah Atkinson, a Senior Lecturer in Digital Cultures at King’s College London, contributed to the discussion with her insight talk, indicating that we can expect a period of uncertainty when audiences are introduced to new media that blurs the line between reality and fiction. She referred to examples from the past where audiences reacted with confusion and uncertainty to new media – from early photographs to ‘found footage’ films such as The Blair Witch Project.

[Photo: Dr Hao Li, Director of the Vision and Graphics Lab, Institute for Creative Technologies at the University of Southern California]

A “THOUGHT-PROVOKING” EVENT

Over 350 delegates attended September’s EngTalk, with many providing positive feedback on the event.
“I have not attended an IET EngTalk since its re-brand from the Prestige Lecture series,” one delegate said. “The arrangements, the visuals and the atmosphere were fresh, modern and vibrant. The topics presented represented the technologies and research, but also brought in wider policy and ethical dimensions that are a core part of what it means to influence the future. A very enjoyable, informative and thought-provoking event.”

For more detail on the technologies used in human digitisation and their impact, watch the EngTalk at: www.theiet.org/human-digitisation