Our monthly roundup of all things we enjoy about this crazy world of machinima.
YouTube Version of This Episode
Show Notes & Links
Projects
AFK’s Season 3 Star Wars Parody ‘supercut’, created in UE5 –
Second Life’s Fantasy Faire, Pryda Parx Studio’s run-through covers 18 regions in total – as ever, great work by everyone being showcased –
and here is the Fantasy Faire Second Life YouTube channel for videos submitted to the Film Festival Machinima Competition, hosted by Chantal Harvey
The new GTA6 trailer release… check out Brian in the first few seconds!
Mans1ay3r’s Skyrim short, called Beware the Daughter of the Troll – a Bard’s Tale, a really stunning song, beautifully produced –
and ToCoSo’s song called If We All Haul Together, celebrating the community in Elite Dangerous –
Drunk Physicist’s Mission on a Tiny Planet – available on Instagram here
genAI updates
AI Animation Contest winners hosted by Curious Refuge, supported by Promise Studios and Luma AI – the showreel is worth a run-through –
Films made with AI will be eligible for the Oscars, although decision-making will of course weigh AI’s role against that of the human creators – article here
Flawless’ Watch the Skies AI dubbing technology makes this Swedish scifi accessible to those who don’t speak the language – impressive stuff –
Runway’s Gen-4 is also worth taking a look at, producing multiple styles for a single character –
Sora AI’s Red Dead Redemption 2 1970s-style Western Movie –
Phil’s AI copyright discussion link for Suno AI –
Tools & Resources
Reallusion’s CC5 teaser
and Solomon Jagwe’s thoughts –
A new Star Wars game afoot?
How to create smart hair –
April Phil
Minecraft Speedrunner vs 3 Hunters –
and don’t forget to check out Phil’s latest machinima series, Ralph & Chuck –
One of the most extraordinary picks we’ve made, probably ever, on the show! This week, we review an emergent absurdist talk show focussing on the life of the Glurons – a post-human race of critters set in a future world – created most inventively with generative AI tools. Its outstanding quality, however, is the writing. Check out our review and pick up the links mentioned in our show notes below.
YouTube Version of This Episode
Show Notes & Links
Unanswered Oddities by Neural Viz released on 9 Dec 2024, made using MidJourney, Runway Act One and ElevenLabs, with Premiere for editing
This week we celebrate Nvidia’s investment in machinima, discuss the latest AIs and give you the heads up on some great projects we want to highlight – there are just too many these days for us to fully review everything we want to share with you! Do check them out, we’d love to hear your thoughts too – links and notes below.
YouTube Version of This Episode
Show Notes & Links
Homage to Nvidia’s Omniverse Machinima app by Pekka Varis, released 14 April 2024 –
Endgame by Peaches Chrenko and Dark Machine Audio, soundtrack –
JP Ferre’s DCS: Spitfires – Cinematic, a real tribute to those brave souls in WWII –
TheDavedood’s (Scratby Films) animation for the 50th anniversary celebration of Pink Floyd’s Dark Side of the Moon – this one is for the single, Time –
Anomidae’s latest episode of the Half-Life supernatural series Interloper –
Fallout inspired videos: JT Music’s Fallout rap, All in With the Fallout –
and Fallout – Dream on (tribute) by Couch Patrol –
Fables of the Foolish by Dreeko –
AI is Genie-us Init?
ElevenLabs has released a dubbing toolset tutorial, making videos even more accessible in different languages – link here
Google DeepMind has announced a new video generation model called Veo, which produces 1080p videos over a minute long in a whole range of cinematic styles – link here
Stability AI has launched Stable Artisan to a wider user group on Discord – it’s a tool for media generation and editing – link here
Showrunner by The Simulation, text to episode generator – link here
The winner of Runway’s 2nd AI Film Festival is Get Me Out by Daniel Antebi –
Luc Shurgers’ Skibidi Sam, video only on LinkedIn, created using Replikant – link here
Sony backs out of AI training with its catalogue – article here
Not strictly machinima, but something we’ve named mAIchinima! This week, we discuss two films using generative AI to create narrative works. In both cases, the techniques employed emphasize the sound – music or voice acting – but whilst one is intentional, the other is not. We share our thoughts on these works and discuss some of the current limitations and benefits observed, which leads us into a timely discussion about style in filmmaking. We also discuss some recent developments in AI for creatives, such as the role of Glaze masking and Nightshade corrupting tools.
YouTube Version of This Episode
Show Notes & Links
Our films this week –
Nina | Denvery Pluto | Episode 2 by Dean Corrigan, released 13 Sept 2023
Prelude to Dust by Dark Machine Audio, released 5 Sept 2023
The generative AI tools we mention in our preliminary discussion are Glaze and Nightshade; information about both can be found here. You can also find out more about Glaze and how it works here –
March was another astonishing month in the world of AI genies, with the release of ever more powerful updates (GPT-4 released 14 March; Baidu released Ernie Bot on 16 March), new services and APIs. It is not surprising that by the end of the month, Musk-oil was being poured over the ‘troubled waters’ – will it work now the genie is out of the bottle? It’s anyone’s guess, and certainly it seems a bit of trickery is the only way to get it back into the bottle at this stage.
Rights
More importantly, and with immediate effect, the US Copyright Office issued a statement on 16 March in relation to the IP issues that have been hot on many lips for several months now: copyright registration pertains to the processes of human creativity, with generative AI simply treated as a toolset under current registration guidance. Thus, for example, in the case of Zarya of the Dawn (see our comments in the Feb 2023 Tech Update), whilst the graphic novel contains original concepts attributable to the author, the images generated by AI (in Zarya’s case, MidJourney) are not copyrightable. The statement also makes it clear that each registration will be considered on its own merits, which is surely going to create a growing backlog of cases in the coming months: each case will require a detailed account of how generative AI was used by the human creator, to aid the evaluation process.
The statement also highlights that an inquiry into copyright and generative AI will be undertaken across agencies later in 2023, seeking public and legal input on how the law should apply to the use of copyrighted works in “AI training and the resulting treatment of outputs”. Read the full statement here. So, for now at least, the main legal framework in the US remains one of human copyright, and it will be important to keep detailed notes about how creators generated (engineered) content from AIs, as well as how they adapted and used the outputs, irrespective of the tools used. This will no doubt be a very interesting debate to follow, quite possibly leading to new ways of classifying AI-generated content… through which, some suggest, AIs could come to be recognized as autonomous entities with rights. It is clear in the statement, for example, that the US Copyright Office recognizes that machines can create (and hallucinate).
The complex issues of dataset creation and AI training processes will underpin much of the legal argument, and a paper released at the beginning of Feb 2023 could become one of the defining pieces of research that undermines it all: the researchers extracted near-exact copies of copyrighted images of identifiable people from a diffusion model, suggesting such models can lead to privacy violations. See a review here and for the full paper go here.
In the meantime, more platforms used to showcase creative work are introducing tagging systems to help identify AI-generated content – #NoAI, #CreatedWithAI. Sketchfab joined the list at the end of Feb with its update here, with changes to its re-use of such content through its licensing system coming into effect on 23 March.
NVisionary
Nvidia’s progressive march with AI genies needs an AI to keep up with it! Here’s my attempt to review the last month of releases relevant to the world of machinima and virtual production.
In February, we highlighted ControlNet as a means to focus on specific aspects of image generation; this month, on 8 March, Nvidia released the opposite: Prismer, which takes the outline of an image and infills it. You can find the description and code on the NVlabs GitHub page here.
Alongside the portfolio of generative AI tools it has launched in recent months, and with the advent of OpenAI’s GPT-4 in March, Nvidia is expanding its tools for creating 3D content –
It is also providing an advanced means to search its already massive database of unclassified 3D objects, integrating with its previously launched Omniverse DeepSearch AI librarian –
It released its cloud-based Picasso generative AI service at GTC23 on 23 March, a means to create copyright-cleared images, videos and 3D applications. A cloud service is of course a really great idea – who can afford to keep up with graphics card prices? The focus, however, is enterprise level, which no doubt means it’s not targeting indies at this stage – but then again, does it need to, when indies are already using DALL-E, Stable Diffusion, MidJourney, etc.? Here’s a link to the launch video and here is a link to the wait list –
Pro-seed-ural
A procedural content generator for creating alleyways has been released by Difffuse Studios in the Blender Marketplace, link here and see the video demo here –
We spotted a useful social thread that highlights how to create consistent characters in Midjourney, by Nick St Pierre, using seeds –
🌱 –seeds in Midjourney
What they are, why they're useful, where to find them, & when/how to use them 👇
and you can see the result of the approach in his example of an aging girl here –
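For anyone new to seeds, the underlying idea is plain deterministic pseudo-randomness: feed a generator the same seed and it reproduces the same output, so fixing the seed while varying the prompt keeps the ‘random’ part of a render constant between images. Here’s a minimal Python analogy of that behaviour (the generate function is purely illustrative – it is not a Midjourney API):

```python
import random

def generate(prompt: str, seed: int) -> list[float]:
    # Stand-in for an image generator: the "image" is just a short
    # pseudo-random sequence derived from the prompt and the seed.
    rng = random.Random(f"{prompt}|{seed}")
    return [round(rng.random(), 3) for _ in range(4)]

# Same prompt + same seed -> identical output (a reproducible render)
assert generate("young girl, portrait", seed=42) == generate("young girl, portrait", seed=42)

# Change the prompt but keep the seed: the output shifts with the
# prompt while the underlying randomness stays pinned down, which is
# the trick behind keeping a character consistent across renders.
assert generate("young girl, portrait", seed=42) != generate("older woman, portrait", seed=42)
```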
Animation
JSFilmz created an interesting character animation using MidJourney v5 (released on 17 March), with its advanced character detail features. This really shows its potential alongside animation toolsets such as Character Creator and Metahumans –
Runway’s Gen-2 text-to-video platform launched on 20 March, with higher fidelity and consistency in its outputs than the previous version (which was actually video-to-video). Here’s a link to the sign-up and website, which includes an outline of the workflow. Here’s the demo –
Gen-2 also provides the feature image for this blog post, illustrating the stylization stage, which looks great.
Wonder Dynamics launched on 9 March, a new tool that automates CG animation for characters you upload to its cloud service, giving creators the ability to tell stories without all the technical paraphernalia (mmm?). The toolset is being heralded as a means to democratize VFX, and it is impressive to see Aaron Sims Creative providing some free assets to use with it – even more so to see none other than Steven Spielberg on the Advisory Board. Here’s the demo reel, although so far we’ve not found anyone that’s given it a full trial (it’s in closed beta at the moment) and shared their overview –
Finally for this month, we close this post with Disney’s Aaron Blaise and his video response to Corridor Crew’s use of generative AI to create a ‘new’ anime workflow, which we commented on last month here. We love his open-minded response to their approach. Check out the video here –