The Dog Days is an unusual machinima that composites the glitchiness of AI-generated content with a Second Life avatar. Whilst not realtime, the performative aspect of this work is evident, and we reflect on how well the current state of generative AI fits with the Fluxus art movement. We discuss what The Dog Days means, metaphorically and literally, and how we interpret this film, concluding that it is not a happy outcome for the focal character. The Dog Days evokes memories of a long-ago childhood spent with a much-loved pet, but the nonsense words and images point to something far more sinister, contextualized by the current media coverage of wars in Eastern Europe, the Middle East and elsewhere. We also discuss our interpretations of the fragility represented in the avatar, including the role of generative AI in becoming cyborg.
YouTube Version of this Episode
Show Notes and Links
The Dog Days by Fau Ferdinand and Elle Thorkveld, released 17 Feb 2024
This week we compare and contrast, in a way extending that ongoing debate about whose IP is the daddy… but the ep is about much more than that too. We discuss @solofilmmaking4857's use of #chatgpt to generate a film's story, using the plots of 60 of his portfolio works on Batman and mixing styles and genres along the way. We don't see it as wholly successful and we discuss why. We contrast that with an unusual hobbyist technique for creating 'analogue machinima', inspired by Spider-Man toy figurines digitised and animated by @MakeItMoveMedia. In the end, our conclusion is that it's all Marvellous!
YouTube Version of This Episode
Show Notes and Links
Vengeance: Batman Fan Made Cinematic | Unreal Engine 5.3 | DLSS 3.5 by Solo Filmmaking
‘Making of’ film, link –
Spider-Man Action Figure Animations Episodes 1-8 by Make It Move Media
What do AI, 48 hours and Second Life have in common? Not a lot, beyond some stunningly creative pieces that we found for you this week!
AI films are now beginning to come through and we have two very interesting ones for you to take a look at. The first, created/prompted by Matt Mayle (our feature image for this post), was made using ElevenLabs (voice), RunwayML's Gen-2 (animation) and ChatGPT-4 (text concept), and is described as an 'AI-assisted short'. It's called The Mass (released 26 April) –
The second is called The Frost, by Waymark Creative Labs (released 5 June). It has a distinct aesthetic, encapsulates a curious message and overall reflects the state of AI animation at this stage, but it's nonetheless a gripping piece. It's been created with DALL-E 2 and D-ID –
Our next pick was made in 48 hours (well, with a bit of tweaking on top) in Unreal Engine: Dude Where's My Ship by Megasteakman. To be frank, the speed of its creation does show in the final quality of the film, but it's nonetheless an interesting development, especially given that I was a regular judge on the 48 Hour Filmmaking Contest a few years ago. The machinima version of that contest was managed and supported by Chantal Harvey for several years, and it's astonishing to think that this is the next generation of that process –
Finally this week, a film made in Second Life, which lends itself to flash production, drawing on content from a plethora of creators. The film is called The Doll Maker (released 27 May) and has been made by FeorieFrimon using various models and Paragon Dance Animations movements, set to a Beats Antique music composition called Flip –
Not machinima but some great projects to share with you this week.
This has to be SFX rather than cinematic… right? From what I can ascertain, this trailer/taster for a new game release, Off the Grid, comes from none other than the infamous Neill Blomkamp (District 9 director), and was captured with Technoprops and edited with Dynamixyz Performer –
The short is called SWITCHER, and was released on 3 May. The game will apparently be launched later in 2023 so we can check out the stunning cinematics in more detail then, and hopefully see more shorts from this world in due course.
Our next film this week is a stop-mo Samurai spectacular. It's called Hidari and is based on the work of the legendary wood sculptor Jingoro Hidari. It is presented in the style of a 'Japanimation' and is promoted as a pilot for a long-form feature film, although it's unclear whether or when that release will happen. Its creators are attempting to devise new visual effects that make use of the wooden materials to show texture and joints and, for example, to have sawdust gush out instead of blood when the characters are attacked. Here's the short, released on 8 March –
From one horror to another, this creator has re-imagined Alien as a Pixar movie using Midjourney, ElevenLabs and ChatGPT tools – yep, you read that correctly! The short is by Yellow Medusa and was released on 27 March. It's not animation, but it is an interesting visualization nonetheless – maybe all horror movies should be transformed in this way, for those with a more sensitive palate? Here's the link –
Finally this week, Tenacious D's hilarious music vid about video games is a must-watch, and apparently more than 18M viewers agree. It's called Tenacious D – Video Games (our feature image for this post) and was a collaboration with OneyPlays, released on 11 May. Enjoy –
Genies are everywhere now. In this post, I’ll focus on some of the more interesting areas relating to the virtual production pipeline, which interestingly is becoming clearer day by day. Check out this mandala of the skills identified for virtual production by StoryFutures in the UK (published 2 March) but note that skills for using genies within the pipeline are not there (yet)!
Future of Filmmaking
Virtual Producer online magazine published an interesting article by Noah Kadner (22 Feb) about the range of genie tools available for the film production pipeline, covering the key stages of pre-production, production and post-production. Alongside it, he gives an overview of some of the ethical considerations we've been highlighting too. It's nice to see a structured analysis of the tools although, of course, what AIs do is change or emphasize aspects of processes, conflate some parts and obviate the need for others. Many of the tools identified are ones we've already discussed in our blogs on this topic, but it's fascinating to see the order being put on their use. I think the key thing all of us involved in the world of machinima have learned over the years, however, is that it's often the indie creators who take things and do stuff no one thought of before, so I for one will be interested to see how these neat categories evolve!
Bits and Pieces
It was never going to take long to showcase the ingenuity among users of genies: last month, whilst Futurism was reporting on the dilemma of ethical behaviour among users who have 'jailbroken' the ChatGPT safeguards, MidJourney was busy invoking even more governance over its use. Its approach, which now bans the use of words about human reproductive systems, is intended to 'temporarily prevent people from creating shocking or gory images'. All this very much reminds me of an AI experiment carried out by Microsoft almost seven years ago to the day as we release this post (24 March 2016), and of the artist Zach Blas' interpretation of that work, showcased in 2017, called 'Im here to learn so :))))))'.
For those without long(ish) memories, Blas' work was a video art installation visualizing Tay, which had been designed by Microsoft as a 19-year-old American female chatbot. As an AI, it lived for just one day on its social media platform, where it was subjected to a tyranny of misogynistic, abusive, hate-filled diatribe. Needless to say, corporate nervousness about its creative representation of the verbiage it generated from its learning processes resulted in it being terminated before it really got going. Blas' interpretation of Tay, ironically using Reallusion's CrazyTalk to animate it as an 'undead AI', is a useful reminder of how algorithms work and of the nature of humanbeans. The link under the image below takes you to where you can watch the video of Tay reflecting on its experience and deepdreams. Salutary.
Speaking of dreams, Dreamix is a creative tool that takes an input video plus a text prompt and generates a new, edited video as output. In effect, it takes the user through the pre-production, production and post-production process in just one sweep. Here's a video explainer –
In a not dissimilar vein, ControlNet takes an image generated in Stable Diffusion and applies a conditioning control (edges, pose, depth and the like) to re-render the image in any style you'd like to see – there's a minimal code sketch below for the curious. Here's an explainer by Software Engineering Courses –
and here’s the idea taken to a whole new level by Corridor Crew in their development of an anime film. The explainer takes you through the process they created from scratch, including training an AI –
They describe the process they've gone through really well, and it's surely not going to be too long before this becomes automated with an app you can pick up in a virtual store near you.
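For the technically curious, here's a minimal sketch of the ControlNet step described above, using Hugging Face's open-source diffusers library. It conditions Stable Diffusion on a Canny edge map so the composition of the source frame is preserved while the text prompt restyles it. The model IDs are real public checkpoints, but the file names and prompt are placeholders of our own, not anything from the videos linked here.

```python
# Minimal sketch: ControlNet-guided restyling with Hugging Face's diffusers.
# Model IDs are public checkpoints; "edges.png" and the prompt are placeholders.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# A ControlNet trained on Canny edge maps, attached to Stable Diffusion 1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The edge map pins down the composition; the prompt supplies the new style.
control_image = load_image("edges.png")  # placeholder: Canny edge map of your frame
result = pipe(
    "an anime-style portrait, vibrant colours, clean line art",
    image=control_image,
    num_inference_steps=30,
).images[0]
result.save("styled_frame.png")
```

Run something like this over a video one frame at a time and you have, in very rough outline, the kind of pipeline Corridor Crew assembled by hand (they went further, training their own model for consistency).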
Surprise, surprise, here is RunwayML's Gen-1: not quite the automated app, but pretty close. Runway has created an AI that takes an input video and an image in the style you'd like to apply, and, with a little genie magic, transfers that style to the output video. What makes this super interesting, however, is that Runway Studios is now a thing too – it is the entertainment and production division of Runway and aims to partner with 'next gen' storytellers. It has launched two initiatives worth following. The first is an annual AI Film Festival, which just closed its first call for entries. Here's a link to the panel discussion that took place in New York on 1 Mar, with Paul Trillo, Souki Mehdaoui, Cleo Abram and Darren Aronofsky –
The second initiative is its creative grants for 'aspiring filmmakers from various backgrounds who are in need of production support'. On its Google formlet, it states that grants take various shapes, including advanced access to the latest AI Magic Tools, funding allocations and educational resources. Definitely worth bearing in mind for your next step in devising machine-cinema stories.
Genious?
Whilst we sit back and wait for AI-generated films to bubble to the top of our algorithmically controlled YouTube channel – or at least the ones where Google tools have been part of the process – we bring you a new-old classic. Welcome to FrAIsier 3000. This is described as a parody show that combines surreal humor, philosophical musings and heartfelt moments from an alternate dimension, where a hallucinogenic FrAIsier reflects on the mysteries of existence and the human condition. Wonderful stuff, as ever. Here's a link to episode 1, but do check out episode 2, waxing lyrical on 'coq au vin' as a perfect example of the balance between discipline and carefreeness (and our feature image for this post) –
If you find inspiring examples of AI-generated films, or yet more examples of genies that push at the boundaries of our virtual production world, do get in touch or share in the comments.