
S4 E124 Machinima News Omnibus (Apr 2024)

Tracy Harwood Podcast Episodes April 10, 2024

A packed review this month, with some sad 🙁, better 😐 and happy 🙂 news



YouTube Version of this Episode

Show Notes and Links

Endings

The long-anticipated demise of Rooster Teeth has been announced – as has the much-anticipated final season of Red vs Blue, in this trailer –

If you are a die-hard RTer, here’s a link to the Rooster Teeth team’s response to Warner’s announcement, it’s ‘Not a Final Goodbye’ –

Tracy and Ben’s Pioneers in Machinima book chapter is here too, Chapter 4: Rooster Teeth Bites.

Here’s the link to the 2007 special episode, Going Global, released at the First European Machinima Festival (in Leicester, UK) –

Articles: Variety, Deadline and IGN.

Another ending: Draxtor Despres’ Second Life series supported by Linden Lab, Drax Files: World Makers. Here’s his final report, and here’s a link to a livestream he did to celebrate the achievements of the show –

Accolades

Sam Crane’s Hamlet in GTA5 won a Jury prize for best documentary at SXSW 2024 – link to the news coverage here, and to the distribution deal it was awarded here.

Electro League and Weta’s War is Over has become the first UE5 film to win an Oscar, Best Animated Short –

Aye AI AI…

YouTube has updated its T&Cs; there’s a nice article here

ConvAI has announced a partnership with Unity for NPCs, and has also teamed up with Second Life. Here’s a link to Wagner James Au’s New World Notes blog – and here are a couple of useful links –

Stability AI has introduced a 3D video generator; information here

RunwayML has partnered with Musixmatch to generate videos from lyrics; link to information here

and it has also announced a new lipsync feature for generative audio, currently in early access for members of its creators programme

Hume has a demo of a voice-to-voice generator, described as an Empathic Voice Interface – link to sign up for early access here

and if you need a little overview of how all things generative AI are developing, here’s a nice video summary by Henrik Kniberg

Inspiration

To celebrate the phenomenal success of Godzilla Minus One at the 2024 Oscars, Nelson Escobar, the VFX creator of the Godzilla model, has released it as a free download for Blender. Link here –

Blockbuster Inc has announced its release date: 6 June 2024. Here’s the official gameplay trailer –

and here’s the link to Prologue on Steam

Finally, here’s the link to the interview with Denis Villeneuve, discussing his approach to making the latest Dune films –

Post Script

We are super excited to see our Podcast make the Top 3 listing of indie filmmaking podcasts on Goodpods in March 2024 – how cool is that?!

Projects Update 2 (August 2023)

Tracy Harwood Blog August 28, 2023

What do AI, 48 hours and Second Life have in common? Not a lot, beyond some stunningly creative pieces that we found for you this week!

AI films are now beginning to come through and we have two very interesting ones for you to take a look at. The first, created/prompted by Matt Mayle (our feature image for this post), has been made using ElevenLabs (voice), RunwayML’s Gen-2 (animation) and ChatGPT-4 (text concept), and is described as an ‘AI-assisted short’, called The Mass (released 26 April) –

The second is called The Frost, by Waymark Creative Labs (released 5 June). This has a distinct aesthetic to it, encapsulates a curious message and overall reflects the state of AI animation at this stage, but it’s nonetheless a gripping piece. It’s been created with DALL-E 2 and D-ID –

Our next pick was made in 48 hours (well, with a bit of tweaking on top) in Unreal Engine: Dude Where’s My Ship by Megasteakman. To be frank, the speed of its creation does show in the final quality of the film, but it’s nonetheless an interesting development, especially given that I was a regular judge on the 48 Hour Filmmaking Contest a few years ago. The machinima version of that contest was managed and supported by Chantal Harvey for several years, and it’s astonishing to think that this is the next generation of that process –

Finally this week, a film made in Second Life, which lends itself to flash production, drawing on content from a plethora of creators. The film is called The Doll Maker (released 27 May) and has been made by FeorieFrimon using various models and Paragon Dance Animations movements, set to a Beats Antique music composition called Flip –

Tech Update: AI (June 2023)

Tracy Harwood Blog June 5, 2023

In comparison to the previous six months, the past month has not exactly been a damp squib, but it has certainly revealed a few rather underwhelming releases and updates, notwithstanding Adobe’s Firefly release. We also share some great tutorials and explainers, as well as some interesting content we’ve found.

Next Level?

Nvidia and Getty have announced a collaboration that will see visuals created with fully licensed content, using Nvidia’s Picasso model. The content generation process will also enable original IP owners to receive royalties. Here’s the link to the post on Nvidia’s blog.

Microsoft has released its Edge AI image generator, based on OpenAI’s DALL-E, into its Bing chatbot. Ricky has tried the tool and comments that whilst the images are good, they’re nowhere near the quality of Midjourney at the moment. Here’s an explainer on Microsoft’s YouTube channel –

Stability AI (Stable Diffusion) has released its SDK for animation creatives (11 May). This is an advancement on the text-to-image generator, although of course we’ve previously talked about similar tools, plus ones that advance this to include 3D processes. Here’s an explainer from the Stable Foundation –

RunwayML has released its Gen-1 version for the iPhone. Here’s the link to download the app. The app lets you take a video from your camera roll and apply a text prompt, a reference image or a preset to create something entirely new. Of course, the benefit is that, from within the phone’s existing apps, you can then share on social channels at will. It’s worth noting that at the time of writing we, and many others, are still waiting for access to Gen-2 for desktop!

Most notable of the month is Adobe’s release of Firefly for Adobe video tools. The tool enables generative AI to be used to select and create enhancements to images, music and sound effects, creating animated fonts, graphics and b-roll content – and all that, Adobe claims, without copyright infringement. Ricky has, however, come across some critics who say that Adobe’s claim that its database is clean is not correct. Works created in Midjourney have been uploaded to Adobe Stock and are still part of its underpinning database, meaning that a certain (small) percentage of works in the Adobe Firefly database ARE taken from online artists’ works. Here’s the toolset explainer –

Luma AI has released a plug-in for NeRFs in Unreal Engine, a technique for capturing realistic content. Here’s a link to the documentation and how-tos. In this video, Corridor Crew wax lyrical about the method –

Tuts and Explainers

Jae Solina aka JSFilmz has created a first-impressions video about Kaiber AI. This is quite cheap at $5/month for 300 credits (it seems content equates to approx. 35 credits per short vid). In this explainer, you can see Jae’s aged self as well as a cyberpunk version, and the super-quick process this new toolset has to offer –

If you’re sick to the back teeth of video explainers (I’m not really), then Kris Kashtanova has taken the time to generate a whole series of graphic novel style explainers (you may recall the debate around her Zarya of the Dawn Midjourney copyright registration case a couple of months back) – these are excellent and somehow very digestible! Here’s the link. Of course, Kris also has a video channel for her tutorials too; the latest one, here, looks at Adobe Firefly’s generative fill function –

In this explainer, Solomon Jagwe discusses his beta test of Wonder Studio’s AI mocap for body and finger capture, although it’s not real-time, unfortunately. It is, however, impressive, and another tool we can’t wait to try out once its developer gets a link out to all those who have signed up –

Content

There has been a heap of hype about an advert created by Coca-Cola using AI generators (we don’t know which exactly), but it’s certainly a lot of fun –

In this short by Curious Refuge, Midjourney has been used to re-imagine Lord of the Rings… in the style of Wes Anderson, with much humor and Benicio del Toro as Gimli (forever typecast and our feature image for this post). Enjoy –

We also found a trailer for an upcoming show, Not A Normal Podcast, but a digital broadcast where it seems AIs will interview humans in some alternative universe. It’s not quite clear what this will be, but it looks intriguing –

although it probably has a way to go to compete with the subtle humor of FrAIsier 3000, which we’ve covered previously. Here is episode 4, released 21 March –

Tech Update 1: AI Generators (Mar 2023)

Tracy Harwood Blog March 6, 2023

Genies are everywhere now. In this post, I’ll focus on some of the more interesting areas relating to the virtual production pipeline, which, interestingly, is becoming clearer day by day. Check out this mandala of the skills identified for virtual production by StoryFutures in the UK (published 2 March), but note that skills for using genies within the pipeline are not there (yet)!

Future of Filmmaking

Virtual Producer online magazine published an interesting article by Noah Kadner (22 Feb) about the range of genie tools available for the film production pipeline, covering the key stages of pre-production, production and post-production. Alongside it, he gives an overview of some of the ethical considerations we’ve been highlighting too. It’s nice to see the structured analysis of the tools although, of course, what AIs do is change or emphasize aspects of processes, conflate parts and obviate the need for others. Many of the tools identified are ones we’ve already discussed in our blogs on this topic, but it’s fascinating to see the order being put on their use. I think the key thing all of us involved in the world of machinima have learned over the years, however, is that it’s often the indie creators that take things and do stuff no one thought about before, so I for one will be interested to see how these neat categories evolve!

Bits and Pieces

It was never going to take long to showcase the ingenuity among users of genies: last month, whilst Futurism was reporting on the dilemma of ethical behaviour among users who have ‘jailbroken’ the ChatGPT safeguards, MidJourney was busy invoking even more governance over its use. MidJourney says its approach, which now bans the use of words about human reproductive systems, is to ‘temporarily prevent people from creating shocking or gory images’. All this very much reminds me of an AI experiment carried out by Microsoft almost seven years ago as we release this post, on 24 March 2016, and of the artist Zach Blas’ interpretation of that work showcased in 2017, called ‘Im here to learn so :))))))‘.

For those without long(ish) memories, Blas’ work was a video art installation visualizing Tay, which had been designed by Microsoft as a 19-year-old American female chatbot. As an AI, it lived for just one day on its social media platform, where it was subjected to a tyranny of misogynistic, abusive, hate-filled diatribe. Needless to say, corporate nervousness over its creative representation of the verbiage it generated from its learning processes resulted in it being terminated before it really got going. Blas’ interpretation of Tay, ironically using Reallusion’s CrazyTalk to animate it as an ‘undead AI’, is a useful reminder of how algorithms work and the nature of humanbeans. The link under the image below takes you to where you can watch the video of Tay reflecting on its experience and deepdreams. Salutary.

source: Zach Blas’ website

Speaking of dreams, Dreamix is a creative tool that uses an input video with a text prompt to create some other video output. In effect, it takes the user through the pre-production, production and post-production process in just one sweep. Here’s a video explainer –

In a not dissimilar vein, ControlNet takes an image generated in Stable Diffusion and applies a controller to inpaint the image in any style you’d like to see. Here’s an explainer by Software Engineering Courses –

and here’s the idea taken to a whole new level by Corridor Crew in their development of an anime film. The explainer takes you through the process they created from scratch, including training an AI –

They describe the process they’ve gone through really well, and it’s surely not going to be too long before this becomes automated with an app you can pick up in a virtual store near you.

Surprise, surprise: here is RunwayML’s Gen-1 – not quite the automated app actually, but pretty close. Runway has created an AI that takes an input video and an image with a style you would like to apply, and with a little bit of genie magic, the output video has the style transferred to it. What makes this super interesting, however, is that Runway Studios is now a thing too – it is the entertainment and production division of Runway and aims to partner with ‘next gen’ storytellers. It has launched two initiatives worth following. The first is an annual AI Film Festival, which just closed its first call for entries. Here’s a link to the panel discussion that took place in New York on 1 Mar, with Paul Trillo, Souki Mehdaoui, Cleo Abram and Darren Aronofsky –

The second initiative is its creative grants for ‘aspiring filmmakers from various backgrounds who are in need of production support’. On its Google form, it states that grants take various shapes, including advanced access to the latest AI Magic Tools, funding allocations, as well as educational resources. Definitely worth bearing in mind for your next step in devising machine-cinema stories.

Genious?

Whilst we sit back and wait for the AI-generated films to bubble to the top of our algorithmically controlled YouTube channel, or at least the ones where Google tools have been part of the process, we bring you a new-old classic. Welcome to FrAIsier 3000. This is described as a parody show that combines surreal humor, philosophical musings and heartfelt moments from an alternate dimension, where an hallucinogenic FrAIsier reflects on the mysteries of existence and the human condition. Wonderful stuff, as ever. Here’s a link to episode 1, but do check out episode 2, waxing lyrical on ‘coq au vin’ as a perfect example of the balance between discipline and carefreeness (and our feature image for this post) –

If you find inspiring examples of AI-generated films, or yet more examples of genies that push at the boundaries of our virtual production world, do get in touch or share in the comments.