This week Ricky goes all out Halloween for us. His selection is an old-style machinima (think Chaplin or Keaton) by one of the most prolific Warcraft movie makers from back in the day… actually 2008 for this one. The tale by Olibith was made in World of Warcraft and has shades of Lovecraft, the grimmest of Grimms’ fairy tales and The Flintstones! We also discuss a bonus Lovecraftian film for all you Halloween buffs, It Lives Within the Sea by Orange Squadron (dir. Dominic Edwards), made in RDR2.
YouTube Version of this Episode
Links and Notes
Little White Poney Inn by Olibith, released 22 September 2008
Olibith’s Warcraft movies page with tutorials on how to make machinima (from 2010)
Have a look at Olibith’s other work (Vimeo channel here), such as Le Terroriste
After we had recorded this show, we went looking for Olibith on social media and were dismayed to find that he passed away earlier this year. We were all very saddened by this news and of course extend our deepest condolences to his family. We dedicate this episode to his memory.
It Lives Within the Sea by Squadron Orange (Dir. Dominic Edwards), released 28 August 2021
In this month’s special report, we take a look at some of the key challenges in using creative AI generators such as DALL-E, MidJourney, Stable Diffusion and others. Whilst we think they have FANTASTIC potential for creators, not least because they cut down the time in finding some of the creative ideas you want to use, there are some things that are emerging that need to be considered when using them.
Firstly, IP is a massive issue. As noted in this article on Kotaku (Luke Plunkett), the recent rise of AI-created art has brought to the fore some of the moral and legal problems in using it. On the moral side, some fear a future where entry-level art positions are taken over by AI, while others see AI-created art as simply a reflection of what already occurs between artists – the influence of style and content – an argument that first came to the fore when computers were used by artists back in the 1960s. Quite frankly, we are now seeing some of the most creative work in a generation come to fruition that just would not have happened without computational assistance. Take a look at the Lumen Prize annual entries, for example, to see the state of the art in the creative possibilities of AI and other tech. Tracy even directs an Art AI Festival, which aims to showcase some of the latest AIs in creative applications, working in collaboration with one of the world’s leading creative AI curators, Luba Elliott.
As to the legal issues, these are only just emerging, and in a very disjointed and piecemeal way. It was interesting to note that Getty Images notified its contributors in an email (21 Sept 2022) that “Effective immediately, Getty Images will cease to accept all submissions created using AI generative models (e.g., Stable Diffusion, Dall‑E 2, MidJourney, etc.) and prior submissions utilizing such models will be removed.” It went on to state: “There are open questions with respect to the copyright of outputs from these models and there are unaddressed rights issues with respect to the underlying imagery and metadata used to train these models. These changes do not prevent the submission of 3D renders and do not impact the use of digital editing tools (e.g., Photoshop, Illustrator, etc.) with respect to modifying and creating imagery.” This comes hot on the heels of a number of developments earlier in the year: in February 2022, the US Copyright Office refused to acknowledge that an AI could hold copyright in its creative endeavour (article here), and by September 2022, an artwork created with MidJourney by Jason Allen that won the Colorado State Fair contest had caused a major stir across the art world as to what constitutes art, as outlined in this article (Smithsonian Magazine) and this short news report here –
Of course, the real dilemma is what happens to artists, particularly those at the lower end of the food chain. By way of another example, consider the UK actors’ union Equity’s response to the Government’s recent proposal to include a data-mining exemption for audio-visual content in its proposed new AI regulation. That matters because a number of organizations that would otherwise employ these artists, say as graphic designers or concept artists, are already rapidly replacing them with AI-generated images – Cosmopolitan ran its ‘first AI generated cover’ in June 2022 and advertising agencies the world over are doing likewise (AdAge article). Some image users have even stated that in future they will ONLY use these tools as image sources, effectively cutting out the middleman – and indeed the originators of the contributory works. So of course Getty is not going to be happy about this… and neither are the many contributors to its platform.
And so here is the nub of the problem: in the rush that will now follow Getty’s stance (with others of similar influence probably to follow), how will the use of AI generators be policed? This has serious implications for all content, including on YouTube and in festivals and contests around the world – how would creative works like The Crow be judged (see our blog post here too)? It certainly places emphasis on the role of metadata and statements of authorship (sketched below), and it is as good an argument as we can think of for using blockchain too! The Crow, for example, briefly mentions the AI generator tool it used, which is freely available on Google Colab here, but it doesn’t show the sources of the underlying training data set.
AI code source is the PyTTI Colab notebook (sportsracer48)
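To make the “statement of authorship” idea a little more concrete, here is a minimal sketch of ours (not any existing standard – all field names, file names and the example prompt are made up for illustration) of a Python script that writes a provenance sidecar alongside a rendered file, recording the AI tool used, the prompts, and whatever is known about the training data:

```python
import json
from pathlib import Path


def write_provenance_sidecar(video_path: str, tool: str, tool_url: str,
                             prompts: list[str], training_data_note: str) -> Path:
    """Write a .provenance.json sidecar next to the rendered file.

    The field names are purely illustrative -- there is no agreed
    standard yet, which is exactly the problem discussed above.
    """
    sidecar = Path(video_path).with_suffix(".provenance.json")
    record = {
        "work": Path(video_path).name,
        "ai_tools": [{"name": tool, "source": tool_url}],
        "prompts": prompts,
        "training_data_disclosure": training_data_note,  # often simply unknown today
    }
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar


# Example: a hypothetical statement of authorship for a test render
write_provenance_sidecar(
    "test_render.mp4",
    tool="PyTTI (Colab notebook)",
    tool_url="https://colab.research.google.com/...",  # placeholder link
    prompts=["a crow in a snowfield, oil painting"],   # illustrative prompt only
    training_data_note="not disclosed by the tool",
)
```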
We contend that the only way to police the use of AI-generated content is actually by using AI, say by analysing pixel-level detail… and that’s because one of Getty’s points is no doubt going to be how its own stock images, even those with copyright claims over them, have been used in training data sets. AI simply cuts out the stuff it doesn’t want and voilà, something useful emerges! So unless the creators of AI generators treat transparency and disclosure AS A PRIORITY – where images have been scraped from and how they have been used – there is going to be a major problem for all types of content creators, including machinima and virtual production creators who use these tools to infuse new ideas into their projects, especially as the ability to turn a 2D image into a 3D object becomes accessible to a wider range of creators. Watch this space!
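To give a flavour of what “analysing pixel-level detail” could mean at its very simplest, here is a rough sketch (again ours, purely illustrative – real attribution of training data would need far more sophisticated techniques) of a perceptual “average hash” comparison between a stock image and a generated frame, using only the Pillow library; the file names are hypothetical:

```python
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Tiny perceptual 'average hash': shrink, greyscale, threshold at the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits -- lower means the images look more alike."""
    return bin(a ^ b).count("1")


# Example: flag a generated frame that looks suspiciously close to a stock image.
if __name__ == "__main__":
    d = hamming_distance(average_hash("stock_image.jpg"),
                         average_hash("ai_generated_frame.png"))
    print(f"hamming distance: {d} (0-64; small values suggest near-duplicates)")
```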
In the meantime, next month we’ll be doing a podcast on the Completely Machinima YouTube channel about some of the best creative ideas we’ve seen, so do look out for that too.
We’d love to hear your views on this topic, so do drop them into the comments.
btw, our featured image was created in MidJourney using the prompt: ‘Diary of a Camper made in Quake engine’, by @tgharwood
This week, Damien has picked a very interesting Eastern-made alien tale. It’s been beautifully shot and rendered using Omniverse, and it inspired him to try some of the techniques shown. Ricky is a little more critical of the nostalgic trope. Tracy reflects on the journey of the storytelling, and the question of what it is to be human that lies at the heart of the story. Phil brings Solaris into the discussion, as only Phil can. Overall, we reflect on the different styles of animation used and how influential they were. And, finally, how on earth did the producer achieve that tendril effect?!
It’s no surprise to hear that Martin Bell’s Prazinburk Ridge has won its first award, Best Animation – and very fitting that it should be at the North of England’s Wigan and Leigh Film Festival, just a stone’s throw from Huddersfield, where the main character in the story hailed from. Many congratulations, Martin!
You can also see us review the film on our YouTube channel here –
UE: Creep it Real
Possibly a bit late notifying you, but a nice little Unreal contest launched earlier this month – Unreal Challenge: Creep It Real! Here’s the link – the deadline is 29 October. There are some great prizes for video content of LESS THAN 1 MINUTE created with the contest assets, so late as we are posting this, there’s still no excuse for not participating! There were 450 entries to the Better Light Than Never contest held earlier in the year, so we’re looking forward to seeing the sizzle reel from entries to this one in due course.
Unreal Challenge: Creep It Real
MacInnes Studios’ Dance Challenge
Another contest has launched, hosted by John MacInnes aka MacInnes Studios, and it’s hot on the heels of his Mood Scene contest, the results of which we look forward to seeing soon. The new contest is all about dance moves – check out the details here – the start date is 1st October and it runs for 30 days.
MacInnes Studios Dance Challenge – Oct 2022
And if you want to hear John talk more about his use of avatars and ‘the future of digital humans’, here’s a great webinar you can catch up on too, hosted by Faceware (one of the Dance Challenge sponsors).
Open Calls
There are numerous experimental film festivals that are currently calling for entries – check them out on ExpCinema.org – we liked the look of Underneath the Floorboards!
In this episode, we review Tracy’s pick for the month: ‘The Eye: Calanthek’ by Aaron Sims, made in Unreal Engine 5 using Metahuman tech, as an early exemplar of the capabilities of the engine (released 2021).
YouTube Version of this Episode
Show Notes and Links
We discuss the eyes, the monster, the surprise and camera shots.
Time stamps
1:06 Tracy introduces ‘The Eye: Calanthek’ by Aaron Sims, released 4 November 2021
5:56 What makes it so realistic? The eyes!
11:02 Things that break the storytelling
15:02 Does knowing the craft of filmmaking restrict creative approaches to filmmaking?