This month, we have two weeks of projects to share with you. This week, we focus on the Unreal film projects we found. The breadth of work folks are creating with this toolset is astounding – all these films highlight a range of talent, evolving workflows and the accessibility of the tools being used. They also demonstrate the great creative storytelling talent among indie creator communities across the world. Exciting times!
NOPE by Red Render
Alessio Marciello (aka Red Render) used UE5, Blender and iClone 8 to create NOPE, a Jordan Peele-inspired film released on 11 December 2022. The pace and soundscape are impressive, the lucid dream of a bored schoolboy is an interesting creative choice, and we love the hint of Enterprise at the end! Check it out here –
The Perilous Wager by Ethan Nester
Our next short project pick, Ethan Nester’s The Perilous Wager, released 28 November 2022, uses UE’s MetaHumans. It is reminiscent of film noir and crime drama, mixed with a twist of lime. It’s a well-managed story with some hidden depths, only really evidenced in the buzzing of flies. It ends a little abruptly but, as its creator says, it’s about ideas for larger projects. It demonstrates great voice acting, and we also love that Ethan voiced all the characters himself, which he says he did using Altered.AI to create vocal deepfakes. He highlights how going through the voice acting process helped him improve his acting skills too – impressive work! We look forward to seeing how these ideas develop in due course. Here’s the link –
Gloom by Bloom
Another dark and moody project (it’s also our feature image for this post), Gloom was created for the Australia and New Zealand short film challenge 2022, supported by Screen NSW and Epic. The film is by Bloom, released 17 December 2022, and was created in eight weeks. The sci-fi concept is great, the voice acting impressive, and the story is well told with some fab jumpscares too. The sound design is worth taking note of, and we recommend you wear a headset to get the full sense of the expansive soundscape the team has devised. Overall, a great project and we look forward to seeing more work from Bloom too –
Adarnia by Adarnia Studio
Our next project is one that turns UE characters into ancient ones – a slightly longer format project, this has elements of Star Wars, Blade Runner and just a touch of Jason and the Argonauts mixed together, with an expansive cityscape to boot. Adarnia is a sci-fi fantasy created by Clemhyn Escosora and released 19 March 2021. There’s an impressive vehicle chase which perhaps goes on just a little too long, but there’s also an interesting use of assets replicated in different ways across the various scenes, which is brought together nicely towards the end of the film. The birdsong is a little distracting in places, one of those ‘nuisance scores’ we highlighted in last week’s blog post (Tech Update 2). There’s clearly a lot of work that’s gone into this, and perhaps there’s scope for a game to be made with the expansiveness demonstrated in the project, but the film’s story needs to be just a little tighter. We guess the creators agree, because their YouTube channel is full of excerpts focussing on key components of this work. Check out the film here –
Superman Awakens by Antonis Fylladitis
Our final project for this week is a Superman tale, created by a VFX company called Floating House. The film, released on 13 February 2023, is inspired by the Kingdom Come Superman and Alex Ross, and is a very interesting mix of comic styling and narrative film, with great voice acting from Daniel Zbel. It’s another great illustration of the quality of UE assets in the hands of talented storytellers –
Next week, we take a look at films made with other engines.
This week we review the winner of the 2022 KitBash3D moving image challenge contest, based on their free Mission to Minerva asset pack. The film is Secret Moon by Orencloud, and what a visually stunning and ethereal representation of Minerva it is, with a clear trajectory between this piece and Orencloud’s portfolio. We discuss some of the ways in which the film works, and works less well, for us, and note that at least one of us missed the ending!
YouTube Version of this Episode
Show Notes and Links
Secret Moon, by Orencloud, released 1 December 2022, film link –
This week, we highlight some time-saving examples for generating 3D models using – you guessed it – AIs, and we also take a look at some recent developments in motion tracking for creators.
3D Modelling
All these examples highlight that generating a 3D model isn’t the end of the process: once it’s in Blender, or another animation toolset, there’s definitely more work to do. These add-ons are intended to help you reach your end result more quickly, cutting out some of the more tedious aspects of the creative process using AIs.
Blender is one of those amazing animation tools that has a very active community of users and, of course, a whole heap of folks looking for quick ways to solve challenges in their creative pipeline. We found folks who have integrated OpenAI’s ChatGPT into the toolset by developing add-ons. Check out this illustration by Olav3D, whose comment about using ChatGPT to write Python scripts sums it up nicely: “better than search alone” –
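For a flavour of what that looks like in practice, here’s a hedged example of the kind of simple bpy script ChatGPT tends to produce when asked; the prompt and values are our own illustration (not Olav3D’s), and it needs to be run from Blender’s Scripting workspace, since the bpy module ships inside Blender:

```python
# Illustrative only: the sort of script ChatGPT might return for the prompt
# "scatter ten cubes of random sizes across the ground plane in Blender".
import random
import bpy

for _ in range(10):
    size = random.uniform(0.2, 1.0)
    x = random.uniform(-5.0, 5.0)
    y = random.uniform(-5.0, 5.0)
    # Sit each cube on the ground plane (the origin is the cube's centre)
    bpy.ops.mesh.primitive_cube_add(size=size, location=(x, y, size / 2))
```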
Dreamtextures by Carson Katri is a Blender add-on that uses Stable Diffusion and is so clever it even projects textures onto 3D models (with our thanks to Krad Productions for sharing this one). In this video, Default Cube talks about how to get results with as few glitches as possible –
and this short video by Vertex Rage shows how to integrate Dreamtextures into Blender –
To check out Dreamtextures for yourself, you can find Katri’s application on GitHub here and, should you wish to support his work, you can subscribe to his Patreon here too.
OpenAI also launched its Point-E 3D model generator this month, which can then be imported into Blender. But, as CGMatter has highlighted, using the published APIs means a long time sitting in queues waiting for the downloads, whereas downloading the code to your own machine and running it locally is easy – and once you have it, you can create point-cloud models in seconds. In fact, he runs the code from Google’s Colab, which means you can run it in the cloud rather than on your own hardware. Here’s his tutorial on how to use Point-E without the wait, giving you access to your own version of the code (on GitHub) in Colab –
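For orientation, here’s a minimal text-to-point-cloud sketch modelled on the example notebooks in the openai/point-e repository; the module paths, config names and prompt follow those examples as they stood at the time of writing, so treat them as assumptions and check the repo for the current API:

```python
# Minimal text-to-point-cloud sketch following the point-e example notebooks.
# Assumes: pip install git+https://github.com/openai/point-e (names may change).
import torch
from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Text-conditioned base model plus an upsampler, as in the repo's text2pointcloud example
base_name = 'base40M-textvec'
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint(base_name, device))

upsampler = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler.eval()
upsampler.load_state_dict(load_checkpoint('upsample', device))

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler],
    diffusions=[diffusion_from_config(DIFFUSION_CONFIGS[base_name]),
                diffusion_from_config(DIFFUSION_CONFIGS['upsample'])],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # do not text-condition the upsampler
)

# Generate a coloured point cloud from a text prompt
samples = None
for x in sampler.sample_batch_progressive(batch_size=1,
                                          model_kwargs=dict(texts=['a red motorcycle'])):
    samples = x
point_cloud = sampler.output_to_point_clouds(samples)[0]
```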
We also found another very interesting Blender add-on, this one letting you import models from Google Maps into the toolset. The video is a little old, but the latest update of the add-on on GitHub, version 0.6.0 (for RenderDoc 1.25 and Blender 3.4), created by Elie Michel, has just been released –
We were also interested to see NVIDIA’s update at CES (in January). It announced a release of the Omniverse Launcher that supports 3D animation in Blender, with generative AIs that enhance characters’ movement and gestures, a future update to Canvas that includes 360 surround images for panoramic environments, and an AI ToyBox that enables you to create 3D meshes from 2D inputs. Ostensibly, these tools are for creators developing work for the metaverse and web3 applications, but we already know NVIDIA’s USD-based tools are incredibly powerful for supporting collaborative workflows, including machinima and virtual production. Check out the update here, and this is a nice little promo video that sums up the integrated collaborative capabilities –
Tracking
As fast as the 3D modelling scene is developing, so is motion tracking. Move.ai, which launched late last year, announced its pricing this month: $365 for 12 months of unlimited processing of recordings – this is markerless mocap at its very best, although not so much if you want to do live mocap (no pricing announced for that yet). Move.ai (our feature image for this article) lets you record content using mobile phones (even a couple of old iPhones). You can find out more on its new website here, and here’s a fun taster called Gorillas in the Mist, with ballet and four iPhones, released in December by the Move.ai team –
Another app, although not 3D, is Face 2D Live, released by Dayream Studios – Blueprints in January. This tool lets you live-link the Face app on your iPhone or iPad to make cartoons out of just about anything, including together with friends who are also using the iPhone app. It costs just $14.99 and is available on the Unreal Marketplace here. Here’s a short video example to whet your appetite – we can see a lot of silliness ensuing with this for sure!
Not necessarily machinima, but for those interested in more serious facial mocap, Weta has been talking about how it developed its facial mocap processes for Avatar, using something called an ‘anatomically plausible facial system’. This is an animator-centric system that captures muscle movement, rather than ‘facial action coding’ which focusses on identifying emotions. Weta states its approach leads to a wider set of facial movements being integrated into the mocapped output – we’ll no doubt see more in due course. Here’s an article on the FX Guide website which discusses the approach being taken, and for a wider-ranging discussion of the types of performance tracking used by the Weta team, Corridor Crew have bagged a great interview with the Avatar VFX supervisor, Eric Saindon, here –
Everything with AI has grown exponentially this year, and this week we show you AI for animation using different techniques, as well as AR, VR and voice cloning. It is astonishing that some of these tools are already part of our creative toolset, as illustrated in our highlighted projects by GUNSHIP and Fabien Stelzer. Of course, any new toolset comes with its discontents, and so we cover some of those we’ve picked up on this past month too. It is certainly fair to say there are many challenges with this emergent creative practice, but it appears these are being thought through alongside the developing applications by those using them… although, of course, legislation is still a long way off.
Animation
Stability AI, the company behind text-to-image generator Stable Diffusion, raised $100M in October this year and is about to release its animation API. On 15 November it released DreamStudio, the first API on its web platform of future AI-based apps, and on 24 November it released Stable Diffusion 2.0. The animation API, DreamStudio Pro, will be a node-based animation suite enabling anyone to create videos, including with music, quickly and easily. It includes storyboarding and is compatible with a whole range of creative toolsets such as Blender, potentially making it a new part of the filmmaking workflow, bringing imagination closer to reality without the pain – or so it claims; we’ll see about that shortly, no doubt. And by the way, 2.0 has higher-resolution upscaling options, more filters on adult content, increased depth information that can be more easily transformed into 3D, and text-guided in-painting which helps to switch out parts of an image more quickly. You can catch up with the announcements on Robert Scoble’s YouTube channel here –
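To give a concrete sense of the existing image API (as opposed to the forthcoming DreamStudio Pro animation suite, which isn’t out yet), here’s a hedged sketch of text-to-image generation with the stability-sdk Python client, modelled on its published examples; the API key, prompt and filenames are our own placeholders and parameter names may have changed since we wrote this:

```python
# Hedged sketch: text-to-image via the stability-sdk gRPC client (pip install stability-sdk).
# Assumes STABILITY_KEY holds a DreamStudio API key; defaults follow the SDK examples.
import io
import os

from PIL import Image
from stability_sdk import client
import stability_sdk.interfaces.gooseai.generation.generation_pb2 as generation

stability_api = client.StabilityInference(
    key=os.environ["STABILITY_KEY"],  # API key from your DreamStudio account
    verbose=True,
)

answers = stability_api.generate(prompt="a lighthouse on a cliff at dusk, concept art")

for resp in answers:
    for artifact in resp.artifacts:
        if artifact.finish_reason == generation.FILTER:
            print("Safety filter triggered; try rewording the prompt.")
        elif artifact.type == generation.ARTIFACT_IMAGE:
            # Save the returned PNG bytes to disk
            Image.open(io.BytesIO(artifact.binary)).save("lighthouse.png")
```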
As if that isn’t amazing enough, Google is creating another method for animating using photographs – think image-to-video – called Google AI FLY. Its approach makes use of pre-existing methods of in-painting, out-painting and super-resolution of images to animate a single photo, creating a similar effect to NeRF (photogrammetry) but without the requirement for many images. Check out this ‘how it’s done’ review by Károly Zsolnai-Fehér on the Two Minute Papers channel –
For more information, this article on Petapixel.com is worth a read too.
And finally this week, Ebsynth by Secret Weapon is an interesting approach that uses a video and a painted keyframe to create a new video resembling the aesthetic style of the painted frame. It is a type of generative style transfer, with an animated output that could previously only really be achieved in post-production, but this is so much simpler to do and looks pretty impressive. There is a review of the technique on 80.lv’s website here and an overview by its creators on their YouTube channel here –
We’d love to see anyone’s examples of outputs with these different animation tools, so get in touch if you’d like to share them!
AR & VR
For those of you into AR, AI enthusiast Bjorn Karmann also demonstrated how Stable Diffusion’s in-painting feature can be used to create new experiences – check this out on his Twitter feed here –
For those of you into 360 and VR, Stephen Coorlas has used MidJourney to create some neat spherical images. Here is his tutorial on the approach –
Also Ran?
Almost late to the AI generator party (mmm…), China’s Baidu has released ERNIE-ViLG 2.0, a Chinese text-to-image AI which Alan Thompson claims is even better than DALL-E and Stable Diffusion, albeit using a much smaller model. Check out his review, which certainly looks impressive –
Voice
NVIDIA has done it again – its amazing Riva AI clones a voice using just 30 minutes of voice samples. The anticipated application of this is conversational virtual assistants, including multilingual assistants, and it has already been touted as a frontrunner alongside Alexa, Meta and Google – but in terms of virtual production and creative content, it is also possible it could be used to replace actors when, say, they are double-booked or unwell. So, make sure you get that covered in your voice-acting contract in future too.
Projects
We found a couple of beautiful projects that push the boundaries this month. Firstly, GUNSHIP’s music video is a great example of how this technology can be applied to enhance creative work. Their video focusses on the aesthetics of cybernetics (and is our headline image for this article). Nice!
Secondly, there’s an audience-participation film by Fabien Stelzer, which is being released on Twitter. The project uses AI generators for image, voice and scriptwriting. After each episode is released, viewers vote on what should happen next, and the creator integrates the result into the subsequent episode of the story. The series is called Salt and its aesthetic style is intended to be 1970s sci-fi. You can read about his approach on the CNN Business website and be a part of the project here –
Emerging Issues
Last month we considered the disruption that AI generators are causing in the art world, and this month it’s the film industry’s turn. Just maybe we are seeing an end to Hollywood’s fetish for Marvellizing everything, or perhaps AI generators will result in extended stories with the same old visual aesthetic, out-painted and stylized… which is highly likely, since AI has to be trained on pre-existing images, text and audio. In this article, Pinar Seyhan Demirdag gives us some thoughts about what might happen, but our experience with the emergence of machinima and its transmogrification into virtual production (and vice versa) teaches us that anything which cuts a few corners will ultimately become part of the process. In this case, AI can be used to supplement everything from concept development, to storyboarding, to animation and visual effects. If that results in new ideas, then all well and good.
When those new ideas get integrated into the workflow using AI generators, however, there is clearly potential for some to be less happy. This is illustrated by Greg Rutkowski, a Polish digital artist whose aesthetic style of ethereal fantasy landscapes is a popular inclusion in text-to-image generators. According to this article in MIT Technology Review, Rutkowski’s name has appeared on more than 10M images and been used as a prompt more than 93,000 times in Stable Diffusion alone – and it appears that this is because the data on which the AI has been trained includes ArtStation, one of the main platforms used by concept artists to share their portfolios. Needless to say, the work is being scraped without attribution – as we have previously discussed.
What’s interesting here is the emerging groundswell of people and companies calling for legislative action. An industry initiative called the Content Authenticity Initiative (CAI), spearheaded by Adobe in partnership with Twitter and The New York Times, has formed and is evolving rapidly. CAI aims to authenticate content and provides a publishing platform – check out their blog here, and note you can become a member for free. To date, it doesn’t appear that the popular AI generators we have reviewed are part of the initiative, but it is highly likely they will be at some point, so watch this space. In the meantime, Stability AI, creator of Stable Diffusion, is putting effort into listening to its community to address at least some of these issues.
Of course, much game-based machinima will immediately fall foul of such initiatives, especially if content is commercialized in some way – and that’s a whole other dimension to explore as we track the emerging issues… What of the roles of platforms owned by Amazon, Meta and Google, when so much of their content is fan-generated work? And what of those game devs and publishers who have made much hay from the distribution of creative endeavour by their fans? We’ll have to wait and see, but so far there’s been no real kick-back from the game publishers that we’ve seen. The anime community in South Korea and Japan has, however, collectively taken action against a former French game developer, 5you. The company used the work of a favoured artist, Jung Gi, to create an homage to his practice and aesthetic style after he died, but the community didn’t agree with the use of an AI generator to do that. You can read the article on Rest of World’s website here. Community action is of course very powerful, and voting with feet is something that invokes fear in the hearts of all industries.
In this episode, Phil, Ricky, Damien and Tracy discuss a range of films that riff off Guardians of the Galaxy – well, apart from Phil, whose pick is an astonishing map-size-comparison review! The discussion explores experimental filmmaking, reviewing a machinima made in World of Warcraft; the possibilities of machinima as a pre-market concept-testing tool for TV series; and the influence of fans generally.
32:26 Star Trek Pike – Fan Made Opening | Made in Star Trek Online by ZEFilms Productions released 1 July 2019 and the possibilities for using machinima as a pre-market concept testing tool
41:36 It just a virtual kiss by Juan Poyuan (World of Warcraft) released 19 Nov 2021 (log into Vimeo to watch)
53:00 Discussion: What is experimental machinima and why do it?