Following on from Tracy’s visit to the Oberhausen International Short Film Festival, which took place between 26 April and 1 May, Ricky leads a discussion with Tracy about film selection at the event, different distribution strategies, topics raised at the panel, and the future of avant-garde machinima films and festival filmmaking generally.
YouTube Version of this Episode
Show Notes and Links
Tracy’s CM blog post on her visit to the Oberhausen International Short Film Festival
Hypatia Pickens (aka Sarah Higley) CM episode – poetry in machinima!
Tracy’s paper on a manifesto for machinima, published in Journal of Visual Culture (2011) – a preprint free download on ResearchGate can be found here
You can also follow this episode with Tracy’s interview with Vladimir Nadein and Dmitry Frolov, curators of the Oberhausen International Short Film Festival, on our channel here –
In comparison to the previous six months, the past month has not exactly been a damp squib, but it has certainly brought a few rather underwhelming releases and updates, notwithstanding Adobe’s Firefly release. We also share some great tutorials and explainers, as well as some interesting content we’ve found.
Next Level?
Nvidia and Getty have announced a collaboration that will see visuals created with fully licensed content, using Nvidia’s Picasso model. The content generation process will also enable original IP owners to receive royalties. Here’s the link to the post on Nvidia’s blog.
Microsoft has released its AI image generator, based on OpenAI’s DALL-E, in its Bing chatbot and Edge browser. Ricky has tried the tool and comments that whilst the images are good, they’re nowhere near the quality of Midjourney at the moment. Here’s an explainer on Microsoft’s YouTube channel –
Stability AI (Stable Diffusion) released its SDK for animation creatives on 11 May. This advances its text-to-image generator, although of course we’ve previously talked about similar tools, including ones that extend this to 3D processes. Here’s an explainer from the Stable Foundation –
RunwayML has released its Gen 1 version for the iPhone. Here’s the link to download the app. The app lets you take a video from your camera roll and apply a text prompt, a reference image or a preset to create something entirely new. The benefit, of course, is that you can then share the result on your social channels directly from the phone. It’s worth noting that, at the time of writing, we and many others are still waiting for access to Gen 2 for desktop!
Most notable this month is Adobe’s release of Firefly for Adobe Video. The tool lets generative AI select and create enhancements to images, music and sound effects, generate animated fonts and graphics, and produce b-roll content – all, Adobe claims, without copyright infringement. Ricky has, however, come across critics who dispute Adobe’s claim that its database is clean: works created in Midjourney have been uploaded to Adobe Stock and remain part of the underpinning database, meaning that a small percentage of works in the Adobe Firefly database ARE taken from online artists’ works. Here’s the toolset explainer –
Luma AI has released a plug-in for NeRFs (neural radiance fields, a technique for capturing realistic 3D content) in Unreal Engine. Here’s a link to the documentation and how-tos. In this video, Corridor Crew wax lyrical about the method –
Tuts and Explainers
Jae Solina aka JSFilmz has created a first-impressions video about Kaiber AI. It’s quite cheap at $5/month for 300 credits (it seems a short video equates to approximately 35 credits). In this explainer, you can see Jae’s aged self as well as a cyberpunk version, and the super-quick process this new toolset has to offer –
If you’re sick to the back teeth of video explainers (I’m not really), then Kris Kashtanova has taken the time to create a whole series of graphic-novel-style explainers (you may recall the debate around her Zarya of the Dawn Midjourney copyright registration case a couple of months back). These are excellent and somehow very digestible! Here’s the link. Of course, Kris also has a video channel for her tutorials too; the latest one, here, looks at Adobe Firefly’s generative fill function –
In this explainer, Solomon Jagwe discusses his beta test of Wonder Studio’s AI mocap for body and finger capture, although unfortunately it’s not real-time. It is nonetheless impressive, and another tool we can’t wait to try out once its developer sends a link to everyone who has signed up –
Content
There has been a heap of hype about an advert created by Coca-Cola using AI generators (we don’t know which exactly), but it’s certainly a lot of fun –
In this short by Curious Refuge, Midjourney has been used to re-imagine Lord of the Rings… in the style of Wes Anderson, with much humor and Benicio del Toro as Gimli (forever typecast and our feature image for this post). Enjoy –
We also found a trailer for an upcoming show, Not A Normal Podcast: a digital broadcast in which, it seems, AIs will interview humans in some alternative universe. It’s not quite clear what this will be, but it looks intriguing –
It probably has a way to go, though, to compete with the subtle humor of FrAIsier 3000, which we’ve covered previously. Here is episode 4, released 21 March –
A well-trodden path in alien encounters, this week’s short is Genesis by Edy Recendez, filmed in Unreal Engine. From the very beginning the film is such fun, with great editing and sound design – right up to and including the ending ‘rap slash dance’ credits. Ricky, Phil and Damien also talk about the upcoming Diablo 4 release.
From AI to sci-fi to dystopian world stories, this week’s selection demonstrates creative tools and processes being used to realize these shorts.
Our first selection this week is The High Seas, a beautifully rendered morphing AI film by Drew Medina, made in 4K at 60fps (released 9 Apr 2023) – one of the few we’ve seen so far at that quality. Embedding has been disabled, but please do follow the link here.
Constelar is by Oskar Alvardo (score by Lee Daish), released 4 Feb 2023. Made using Blender, it takes an interesting approach to storytelling, with an almost 1970s noir feel to it –
The next film is a cinematic tribute to the makers of StarCraft, called Judgment Cinematic, by Nakma, released 23 Mar 2023. The music (which we note is uncredited) adds much to the storytelling, but you also need some understanding of the StarCraft world to fully appreciate the nuances of the plot, which is vaguely Star Wars-ish. Nonetheless, a great effort, especially since this machinima took just three months to make – there are some great shots and the editing is well done –
The dystopian world of Valve’s Half-Life, made using Source Filmmaker, features in our next two film selections. The first, called Combined, draws on the lore of the game. It is quite violent but does well to ‘humanise’ the characters. The animation looks surprisingly old-style, even though it dates from 2021 – a reflection of just how quickly the cinematic aesthetic has changed in such a short period of time. Perimeter (our feature image for this post), which also portrays the Combine, has quite a different aesthetic finish. What’s interesting about this film is its inspiration: concept art by Vyacheslav Gluhov. Both films are great examples of how a game inspires creators to take one aspect – in this case the Combine from Half-Life – and extend the narrative in new and interesting directions.
This week, Ricky, Phil and Damien discuss Torn Seas, a feature made with Unreal Engine by Richard Boisvert. We don’t often review feature-length films on this show, not least because few creators are brave enough to make them, and we talk about that aspect a bit too. Overall, we were impressed by this film, but we also have a few suggestions to consider.
YouTube Version of this Episode
Show Notes & Links
Torn Seas – a feature made with Unreal Engine, by Richard Boisvert, released 26 March 2023 –
Dynamo Dream – Salad Mug by Ian Hubert, Episode 1 –