Not strictly machinima, but something we’ve named mAIchinima! This week, we discuss two films using generative AI to create narrative works. In both cases, the techniques employed emphasize the sound – music or voice acting – but whilst one is intentional, the other is not. We share our thoughts on these works and discuss some of the current limitations and benefits observed, which leads us into a timely discussion about style in filmmaking. We also discuss some recent developments in AI for creatives, such as the role of Glaze masking and Nightshade corrupting tools.
YouTube Version of this Episode
Show Notes & Links
Our films this week –
Nina | Denvery Pluto | Episode 2 by Dean Corrigan, released 13 Sept 2023
Prelude to Dust by Dark Machine Audio, released 5 Sept 2023
The generative AI tools we mention in our preliminary discussion are Glaze and Nightshade; information about both can be found here. You can also find out more about Glaze and how it works here –
Not machinima but some great projects to share with you this week.
This has to be SFX rather than cinematic… right? From what I can ascertain, this new game release trailer/taster, called Off the Grid by none other than the infamous Neill Blomkamp (director of District 9), was captured with Technoprops and edited with Dynamixyz Performer –
The short is called SWITCHER, and was released on 3 May. The game will apparently be launched later in 2023 so we can check out the stunning cinematics in more detail then, and hopefully see more shorts from this world in due course.
Our next film this week is a stop-mo samurai spectacular. It’s called Hidari and is based on the work of the wooden sculptor Jingoro Hidari. It is presented in the style of a ‘Japanimation’ and is promoted as a pilot for a long-form feature film, although it’s unclear whether or when that release will happen. Its creators are attempting to devise new visual effects that make use of the wooden materials, showing texture and joints and, for example, using gushing sawdust instead of blood when the characters are attacked. Here’s the short, released on 8 March –
From one horror to another, this creator has re-imagined Alien as a Pixar movie using Midjourney, ElevenLabs and ChatGPT tools – yep, you read that correctly! The short is by Yellow Medusa and was released on 27 March. It’s not animation, but it is an interesting visualization nonetheless – maybe all horror movies should be transformed in this way, for those with a more sensitive palate? Here’s the link –
Finally this week, Tenacious D’s hilarious music video about video games is a must watch – and apparently already has been for more than 18M viewers. It’s called Tenacious D – Video Games (our feature image for this post) and was a collaboration with OneyPlays, released on 11 May. Enjoy –
This week, we highlight three excellent Unreal storytelling projects, and some other interesting storymaking development projects we think you’ll find just as intriguing.
Brave Creatures, released on 2 March, is one of the most inventive and magical stories made using Unreal Engine we’ve seen – and it’s not set on an alien planet full of freakish monsters and travellers in space suits. The creative team, Studio Pallanza (led by none other than Academy Award-winning VFX artist Adam Valdez), was awarded a MegaGrant to bring this project to life, and it has done a truly outstanding job of it. Surely it will be the basis of a new children’s series? Here’s the link –
and if you want to hear Adam discuss the work, check out Jae Salina’s interview with him here –
Promise with Dr. (English version), released on 17 Feb by TT Studio, is another magical story, albeit with a completely different aesthetic. Great editing and storytelling – do check this out too –
Miika is an award-winning film by Ugandan director, Nsiimenta Shevon, released on 27 Feb. This is powerful and disturbing, as only tales of African conflict can be. Beautifully animated by Solomon Jagwe, here’s the link –
Storymaking in Other Ways
This is not a film or an animation, but a fascinating insight into the storymaking possibilities of interactive chatbots and animated robots. In this ‘show and tell’ presentation at SXSW 2023 by Josh D’Amaro, chairman of Disney Parks, Experiences and Products, Tinker Bell (Peter Pan’s sidekick) is shown as an animated chatbot in a box, and a roller-skating child-like robot is emoted using mocap. These are Disney’s ‘greeters’ of the future, embedded with storytelling capabilities through the design process. What is particularly interesting is that, at least for me, the usual uncanny valley effect has somehow disappeared – what do you think?
In our next selection, MidJourney has been used to conflate two very different yet seemingly complementary storyworlds into a series of bizarre images, one being Star Wars and the other being the paintings of Hieronymus Bosch. This work, called Star Wars by MidJourney, by AI Visionary Art, was published on 18 Feb and somehow converts the grotesque and nonsensical creatures into a familiar canon (for some, Damien) –
And finally this week, we share an overview of an InWorld AI driven adventure game called Origins (our feature image for this post), animated using Unreal’s Metahuman characters and presented in the style of film noir (or rather, neo-noir). This is vaguely reminiscent of some of those very early games that inspired a lot of machinima creators back in the earliest days – Max Payne, for those with long memories. InWorld AI has described its approach as the future of NPCs, but it’s their DNA too. The chatbot and naturalistic-style interface is a really interesting development for storymaking and storytelling, and we’re definitely looking forward to seeing what creators do with this kind of creative platform in future. Check this out –
That’s it for this post, thanks for reading and do share with us anything you spot that you think we should be reviewing on the podcast.
March was another astonishing month in the world of AI genies, with the release of exponentially powerful updates (GPT4 released 14 March; Baidu released Ernie Bot on 16 March), new services and APIs. It is not surprising that by the end of the month, Musk-oil is being poured over the ‘troubling waters’ – will it work now the genie is out of the bottle? It’s anyone’s guess, and certainly it seems a bit of trickery is the only way to get it back into the bottle at this stage.
More importantly, and with immediate effect, the US Copyright Office issued a statement on 16 March in relation to the IP issues that have been hot on many lips for several months now: copyright registration concerns the processes of human creativity, and under current legal registration guidance the role of generative AI is simply seen as a toolset. Thus, for example, in the case of Zarya of the Dawn (see our comments in the Feb 2023 Tech Update), whilst the graphic novel contains original concepts that are attributable to the author, the images generated by AI (in Zarya’s case, MidJourney) are not copyrightable. The statement also makes it clear that each copyright registration case will be judged on its own merits, which is surely going to create a growing backlog of cases in the coming months. Each case will require a detailed account of how generative AI was used by the human creator, to aid the evaluation process.
The statement also highlights that an inquiry into copyright and generative AIs will be undertaken across agencies later in 2023, where it will seek general public and legal input to evaluate how the law should apply to the use of copyrighted works in “AI training and the resulting treatment of outputs”. Read the full statement here. So, for now at least, the main legal framework in the US remains one of human copyright, where it will be important to keep detailed notes about how creators generated (engineered) content from AIs, as well as adapted and used the outputs, irrespective of the tools used. This will no doubt be a very interesting debate to follow, quite possibly leading to new ways of classifying content generated by AIs… and through which some suggest AIs as autonomous entities with rights could become recognized. It is clear in the statement, for example, that the US Copyright Office recognizes that machines can create (and hallucinate).
The complex issues of dataset creation and AI training processes will underpin much of the legal stances taken, and a paper released at the beginning of Feb 2023 could become one of the defining pieces of research that undermines it all. The researchers extracted near-exact copies of copyrighted images of identifiable people from a diffusion model, suggesting that such models can lead to privacy violations. See a review here, and for the full paper go here.
In the meantime, more platforms used to showcase creative work are introducing tagging systems to help identify AI generated content – #NoAI, #CreatedWithAI. Sketchfab joined the list at the end of Feb with its update here, with changes relating to its own re-use of such content through its licensing system coming into effect on 23 March.
Nvidia’s progressive march with AI genies needs an AI to keep up with it! Here’s my attempt to review the last month of releases relevant to the world of machinima and virtual production.
In February, we highlighted ControlNet as a means to focus on specific aspects of image generation; this month, on 8 March, Nvidia released Prismer, which works in the opposite direction, taking the outline of an image and infilling it. You can find the description and code on the NVlabs GitHub page here.
Alongside the portfolio of generative AI tools Nvidia has launched in recent months, with the advent of OpenAI’s GPT4 in March, Nvidia is expanding its tools for creating 3D content –
It is also providing an advanced means to search its already massive database of unclassified 3D objects, integrating with its previously launched Omniverse DeepSearch AI librarian –
It released its cloud-based Picasso generative AI service at GTC23 on 23 March, a means to create copyright-cleared images, videos and 3D applications. A cloud service is of course a really great idea, because who can afford to keep up with graphics card prices? The focus is enterprise level, however, which no doubt means it’s not targeting indies at this stage – but then again, does it need to, when indies are already using DALL-E, Stable Diffusion, MidJourney, etc.? Here’s a link to the launch video and here is a link to the wait list –
A procedural content generator for creating alleyways has been released by Difffuse Studios in the Blender Marketplace, link here and see the video demo here –
We spotted a useful social thread by Nick St Pierre that highlights how to create consistent characters in Midjourney using seeds –
and you can see the result of the approach in his example of an aging girl here –
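The mechanics behind the seed trick can be illustrated in miniature: diffusion tools derive their initial noise from a seed, so fixing the seed fixes the starting point while the prompt steers the result. Here is a minimal, purely illustrative Python sketch of that idea (a stand-in toy function, not Midjourney’s actual implementation):

```python
import random

def generate(prompt: str, seed: int) -> list[float]:
    # Stand-in for a diffusion model: the seed fixes the initial
    # "noise" the image is built from; the prompt then nudges that
    # noise toward a final output.
    rng = random.Random(seed)
    base_noise = [rng.random() for _ in range(4)]
    # Prompt influence is faked here as a simple numeric offset.
    offset = sum(ord(c) for c in prompt) % 100 / 100
    return [round(n + offset, 3) for n in base_noise]

# Same seed, different prompts: the shared base noise keeps the
# underlying "character" consistent across generations.
young = generate("girl, age 8", seed=42)
old = generate("girl, age 80", seed=42)

# A different seed gives a completely different starting point.
other = generate("girl, age 8", seed=7)
```

This is why St Pierre’s aging-girl sequence holds together: each image starts from the same fixed noise, and only the prompt changes between frames.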
JSFilmz created an interesting character animation using Midjourney v5 (released on 17 March), with its advanced character detail features. This really shows its potential alongside animation toolsets such as Character Creator and Metahumans –
Runway’s Gen-2 text-to-video platform launched on 20 March, with higher fidelity and consistency in the outputs than its previous version (which was actually video-to-video output). Here’s a link to the sign-up and website, which includes an outline of the workflow. Here’s the demo –
Gen-2 is also our feature image for this blog post, illustrating the stylization process stage which looks great.
Wonder Dynamics launched on 9 March as a new tool for automating CG animations from characters that you upload to its cloud service, giving creators the ability to tell stories without all the technical paraphernalia (mmm?). The toolset is being heralded as a means to democratize VFX, and it is impressive to see that Aaron Sims Creative is providing some free assets to use with it – and even more so to see none other than Steven Spielberg on the Advisory Board. Here’s the demo reel, although so far we’ve not found anyone who has given it a full trial (it’s in closed beta at the moment) and shared their overview –
Finally for this month, we close this post with Disney’s Aaron Blaise and his video response to Corridor Crew’s use of generative AI to create a ‘new’ anime workflow, which we commented on last month here. We love his open-minded response to their approach. Check out the video here –
Genies are everywhere now. In this post, I’ll focus on some of the more interesting areas relating to the virtual production pipeline, which interestingly is becoming clearer day by day. Check out this mandala of the skills identified for virtual production by StoryFutures in the UK (published 2 March) but note that skills for using genies within the pipeline are not there (yet)!
Future of Filmmaking
Virtual Producer online magazine published an interesting article by Noah Kadner (22 Feb) about the range of genie tools available for the film production pipeline, covering the key stages of pre-production, production and post-production. Alongside it, he gives an overview of some of the ethical considerations we’ve been highlighting too. It’s nice to see the structured analysis of the tools although, of course, what AIs do is change or emphasize aspects of processes, conflate some parts and obviate the need for others. Many of the tools identified are ones we’ve already discussed in our blogs on this topic, but it’s fascinating to see the order being put on their use. I think the key thing all of us involved in the world of machinima have learned over the years, however, is that it’s often the indie creators who take things and do stuff no one thought of before, so I for one will be interested to see how these neat categories evolve!
Bits and Pieces
It was never going to take long to showcase the ingenuity of genie users: last month, whilst Futurism was reporting on the dilemma of ethical behaviour among users who have ‘jailbroken’ the ChatGPT safeguards, MidJourney was busy invoking even more governance over its use. MidJourney says its approach, which now bans the use of words about human reproductive systems, is to ‘temporarily prevent people from creating shocking or gory images’. All this very much reminds me of an AI experiment carried out by Microsoft on 24 March 2016 – almost seven years to the day before we release this post – and of the artist Zach Blas’ 2017 interpretation of that work, called ‘Im here to learn so :))))))‘.
For those without long(ish) memories, Blas’ work was a video art installation visualizing Tay, which Microsoft had designed as a 19-year-old American female chatbot. As an AI, it lived for just one day on its social media platform, where it was subjected to a tyranny of misogynistic, abusive, hate-filled diatribe. Needless to say, corporate nervousness about its creative representation of the verbiage it generated from its learning processes resulted in it being terminated before it really got going. Blas’ interpretation of Tay, ironically animated as an ‘undead AI’ using Reallusion’s CrazyTalk, is a useful reminder of how algorithms work and of the nature of humanbeans. The link under the image below takes you to where you can watch the video of Tay reflecting on its experience and deepdreams. Salutary.
Speaking of dreams, Dreamix is a creative tool that uses an input video with a text prompt to create a new output video. In effect, it takes the user through the pre-production, production and post-production processes in one sweep. Here’s a video explainer –
In a not dissimilar vein, ControlNet takes an image generated in Stable Diffusion and applies a controller to inpaint the image in any style you’d like to see. Here’s an explainer by Software Engineering Courses –
and here’s the idea taken to a whole new level by Corridor Crew in their development of an anime film. The explainer takes you through the process they created from scratch, including training an AI –
They describe the process they’ve gone through really well, and it’s surely not going to be long before this becomes automated with an app you can pick up in a virtual store near you.
Surprise, surprise, here is RunwayML’s Gen-1: not quite the automated app, but pretty close. Runway has created an AI that takes an input video and an image with a style you’d like to apply, and with a little genie magic, the output video has that style transferred to it. What makes this super interesting, however, is that Runway Studios is now a thing too – the entertainment and production division of Runway, which aims to partner with ‘next gen’ storytellers. It has launched two initiatives worth following. The first is an annual AI Film Festival, which just closed its first call for entries; here’s a link to the panel discussion that took place in New York on 1 Mar, with Paul Trillo, Souki Mehdaoui, Cleo Abram and Darren Aronofsky –
The second initiative is its creative grants for ‘aspiring filmmakers from various backgrounds who are in need of production support’. On its Google form, it states that grants take various shapes, including advanced access to the latest AI Magic Tools, funding allocations, and educational resources. Definitely worth bearing in mind for your next step in devising machine-cinema stories.
Whilst we sit back and wait for the AI generated films to bubble to the top of our algorithmically controlled YouTube channels – or at least the ones where Google tools have been part of the process – we bring you a new-old classic. Welcome to FrAIsier 3000. This is described as a parody show that combines surreal humor, philosophical musings and heartfelt moments from an alternate dimension, where a hallucinogenic FrAIsier reflects on the mysteries of existence and the human condition. Wonderful stuff, as ever. Here’s a link to episode 1, but do check out episode 2, waxing lyrical on ‘coq au vin’ as a perfect example of the balance between discipline and carefreeness (and our feature image for this post) –
If you find inspiring examples of AI generated films, or yet more examples of genies pushing at the boundaries of our virtual production world, do get in touch or share them in the comments.