Still Here… a chilling dystopian tale of a world devastated by climate crisis and wealth inequality. This week's review is of a film by Guido Ekker, featured on the Film Shortage Channel (YouTube) – one of the few shorts we've seen that attempts to make a point beyond pure entertainment. We identify some of the stereotypical tropes used and discuss the creative potential of machinima and virtual production for this kind of political messaging – all with our usual critical bonhomie.
YouTube Version of this Episode
Show Notes & Links
Film, released 17 April on the Film Shortage Channel, YouTube –
From AI to sci-fi to dystopian world stories, this week’s selection demonstrates creative tools and processes being used to realize these shorts.
Our first selection this week is a beautifully rendered morphing AI film called The High Seas, by Drew Medina, rendered at 4K/60fps (released 9 Apr 2023) – one of the few we've seen at that spec so far. Embedding has been disabled, but please do follow the link here.
Constelar is by Oskar Alvardo (score by Lee Daish), released 4 Feb 2023. It was made in Blender with an interesting approach to storytelling and an almost 1970s noir feel –
The next film is a cinematic tribute to the makers of StarCraft, called Judgment Cinematic, by Nakma, released 23 Mar 2023. The music (which we note is uncredited) adds much to the storytelling, but fully appreciating the nuances of the vaguely Star Wars-ish plot requires some understanding of the StarCraft world. Nonetheless, a great effort, especially since it took just three months to make: there are some great shots and the editing is well done –
The dystopian world of Valve's Half-Life, made using Source Filmmaker, features in our next two film selections. The first, called Combined, draws on the lore of the game. It is quite violent but does well to 'humanise' the characters. The animation looks surprisingly old-style, even though it dates from only 2021 – a reflection of just how quickly the cinematic aesthetic has changed in such a short period of time. Perimeter (our feature image for this post), which also portrays the Combine, has quite a different aesthetic finish. What's interesting about this film is its inspiration: concept art by Vyacheslav Gluhov. Both films are great examples of how a game inspires creators to take one aspect, in this case Half-Life's Combine, and extend the narrative in new and interesting directions.
This week we share with you a couple of notable RTs (of the UnTwitter kind) and a Dynamo Dream or two. Enjoy!
Who can believe it, but Rooster Teeth is now 20 years old. It's come a very long way from its RVB days, not all of it good, but it's still rolling. Indeed, RT is now also in the same stable as the final remains of Machinima.com (RIP). Ben Grussi and I dedicated a chapter to the RT story in our book Pioneers in Machinima (2021), and one thing we noted was its resilience to change over the years, so here's wishing them all the best for the next turn on their roundabout too –
Another long-time favorite on our podcast is the RT Music (formerly RT Machinima) team. This month, I'd like to share their Elden Ring Rap with you (released 12 Mar 2022). It's definitely worth watching the video: not only are these guys great at writing toe-tappers, but they also do a pretty good job of showing off their machinima skills –
Finally this week, Ian Hubert has released two episodes of his Dynamo Dream live action/VFX series (our feature image for this post). We covered the first episode of this stunning series on the podcast back in August 2021 (audio only), and what's quite incredible is the speed at which he's been able to release Eps 2 and 3 in such quick succession… and of course they're very good, if ever so slightly absurdist.
A Single Point in Space – Dynamo Dream, Ep 2 (released 23 Mar 2023) –
A Pete Episode – Dynamo Dream, Ep 3 (released 6 Apr 2023) –
Next week, we have some more selections to share with you, but if you find something you'd like us to review in full on the podcast, do share it.
This week, our review is a roundup of new releases, some tools and tuts that add realism to productions, and some interesting announcements for moviemakers everywhere, irrespective of creative engine preference.
Releases
Blender has released version 3.5, with an astonishing hair toolset. See the overview here –
UE5 editor for Fortnite has been released – UEFN is a PC application for designing, developing, and publishing games and experiences directly into Fortnite. You can see the release launch at GDC here –
Reallusion has released an astonishing range of 3D motions and characters for Actorcore, called Run For Your Life. It's not cheap, but then again it may well be the only action set you ever need. Here's a demo reel –
Facegood’s Avatary (made in China) has released a desktop facial mocap system with some basic functionality for free. Here’s a nice little overview of what this version of it can do –
Realism
The quality of modelling continues to astound. I'm still blown away by Unreal's Substrate materials system, although no doubt you'll need an epic system to render it –
However, there are a few other releases that we’ll share with you this month too. Firstly, the UE Crashes course – not just any ole course, of course, but one where you can see how to animate ‘epic’ car crashes in UE5 (is that too many puns… sure it is) –
Secondly, Taichi Kobayashi has developed a stunning Cliffwood Village – a large-scale and beautifully detailed 3D model for UE5 –
Finally, William Faucher’s use of Reality Captures’ tech to create an arctic environment for UE5 is also something stunning to see. Check out his overview of the creative process here –
Movie-makers
An interesting development is the release of what's being badged as The Movies mark II, called Blockbuster Inc, in which "You will take total control of your very own movie studio. You will be able to construct all the facilities, hire and manage all sorts of employees and stars with the aim to produce the most prolific films and TV" (Super Sly Fox, developer). It's not yet been released, but you can find the holding page on Steam here.
Big news of the month is that Moviestorm's long-awaited previsualisation software, FirstStage (although they need a new intro vid on their YouTube channel asap), is finally out of beta with ver 32 (our cover image for this post) –
This will surely be a useful tool for all those major creative projects, whatever the final engine may be, spanning film, TV and video as well as 3D environment engine-based work, and it is very reasonably priced at $10/month per user (non-commercial). For those with short memories, Moviestorm (its creator channel is here, fyi) launched in 2007 at the First European Machinima Festival, as I recall, and became a platform that many used to create content long before the likes of Reallusion's iClone and Source Filmmaker got a wider foothold. One of my all-time favourites made in Moviestorm was IceAxe's (aka Iain Friar) Clockwork (2008), a retelling of Anthony Burgess's classic tale –
What will be interesting, however, is how it will compete with the in-engine toolsets being developed along similar lines, for example, Matt Workman’s UE Cine Tracer which delivers a similar experience. Of course, there are also individual tools, such as this camera crane by Cinematography Database for UE5 –
March was another astonishing month in the world of AI genies, with the release of increasingly powerful updates (GPT4 released 14 March; Baidu's Ernie Bot on 16 March), new services and APIs. It is not surprising that by the end of the month, Musk-oil was being poured over the 'troubled waters'. Will it work now the genie is out of the bottle? It's anyone's guess, and it certainly seems a bit of trickery is the only way to get it back in at this stage.
Rights
More importantly, and with immediate effect, the US Copyright Office issued a statement on 16 March on the IP issues that have been hot on many lips for several months now: copyright registration is about the processes of human creativity, and under current registration guidance generative AI is treated simply as a toolset. Thus, for example, in the case of Zarya of the Dawn (see our comments in the Feb 2023 Tech Update), whilst the graphic novel contains original concepts attributable to the author, the images generated by AI (in Zarya's case, MidJourney) are not copyrightable. The statement also makes clear that each registration case will be judged on its own merits, which is surely going to create a growing backlog in the coming months, since each case will require detailed clarification of how generative AI was used by the human creator to aid the evaluation process.
The statement also highlights that an inquiry into copyright and generative AI will be undertaken across agencies later in 2023, seeking public and legal input on how the law should apply to the use of copyrighted works in "AI training and the resulting treatment of outputs". Read the full statement here. So, for now at least, the main legal framework in the US remains one of human copyright, and it will be important for creators to keep detailed notes on how they generated (engineered) content from AIs, and how they adapted and used the outputs, irrespective of the tools used. This will no doubt be a very interesting debate to follow, quite possibly leading to new ways of classifying AI-generated content… through which, some suggest, AIs could even come to be recognized as autonomous entities with rights. The statement makes clear, for example, that the US Copyright Office recognizes that machines can create (and hallucinate).
The complex issues of dataset creation and AI training processes will underpin much of the legal debate, and a paper released at the beginning of Feb 2023 could become one of the defining pieces of research that undermines it all. The researchers extracted near-exact copies of copyrighted images of identifiable people from a diffusion model, suggesting that such models can lead to privacy violations. See a review here, and for the full paper go here.
In the meantime, more platforms used to showcase creative work are introducing tagging systems (#NoAI, #CreatedWithAI) to help identify AI-generated content. Sketchfab joined the list at the end of Feb with its update here, with changes relating to the re-use of such content through its licensing system coming into effect on 23 March.
NVisionary
Nvidia’s progressive march with AI genies needs an AI to keep up with it! Here’s my attempt to review the last month of releases relevant to the world of machinima and virtual production.
In February, we highlighted ControlNet as a means to focus on specific aspects of image generation; this month, on 8 March, Nvidia released Prismer, which works in the opposite direction, taking the outline of an image and infilling it. You can find the description and code on its NVlabs GitHub page here.
Alongside the portfolio of generative AI tools it has launched in recent months, and with the advent of OpenAI's GPT4 in March, Nvidia is expanding its tools for creating 3D content –
It is also providing an advanced means to search its already massive database of unclassified 3D objects, integrating with its previously launched Omniverse DeepSearch AI librarian –
At GTC23 on 23 March, Nvidia released Picasso, its cloud-based generative AI service for creating copyright-cleared images, videos and 3D applications. A cloud service is of course a really great idea, because who can afford to keep up with graphics card prices? The focus is enterprise level, however, which no doubt means it's not targeting indies at this stage, but then again, does it need to when indies are already using DALL-E, Stable Diffusion, MidJourney, etc.? Here's a link to the launch video and here is a link to the wait list –
Pro-seed-ural
A procedural content generator for creating alleyways has been released by Difffuse Studios in the Blender Marketplace, link here and see the video demo here –
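Generators like this typically work by seeding a random number generator and then placing modular assets segment by segment along a layout. A toy, engine-agnostic sketch of that idea in plain Python (all names here are illustrative, not Difffuse Studios' actual toolset):

```python
import random

# Illustrative modular assets a procedural alley kit might scatter.
PROPS = ["dumpster", "crate", "fire_escape", "puddle", "neon_sign"]

def generate_alley(length, seed):
    """Build an alley as a list of segments, fully determined by the seed."""
    rng = random.Random(seed)  # fixed seed => reproducible layout
    segments = []
    for i in range(length):
        segments.append({
            "index": i,
            "wall_height": round(rng.uniform(4.0, 9.0), 2),   # metres
            "props": rng.sample(PROPS, k=rng.randint(0, 2)),  # 0-2 props
        })
    return segments

# The same seed always yields the same alley, so a layout can be
# regenerated (or varied by changing the seed) rather than stored.
alley = generate_alley(length=5, seed=42)
```

The appeal of the approach is that a handful of modular assets plus a seed stands in for hand-placing every object, which is why these kits scale so well to large scenes.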
We spotted a useful social thread by Nick St Pierre that highlights how to create consistent characters in Midjourney using seeds –
and you can see the result of the approach in his example of an aging girl here –
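The trick works because fixing the seed (via Midjourney's --seed parameter) fixes the starting noise the sampler works from, so repeated prompts vary only where the prompt text varies. The same principle in miniature, in plain Python (a loose analogy, not Midjourney's actual pipeline):

```python
import random

def starting_noise(seed, n=4):
    # Stand-in for the latent noise a diffusion sampler starts from:
    # the seed fully determines it.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Same seed -> identical starting point, hence consistent outputs...
assert starting_noise(1234) == starting_noise(1234)
# ...while a different seed gives a different starting point.
assert starting_noise(1234) != starting_noise(5678)
```

Hold the seed constant and change only part of the prompt ("a girl, aged 5", "a girl, aged 50") and the shared starting point keeps the character recognisable across generations.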
Animation
JSFilmz created an interesting character animation using Midjourney v5 (released 17 March), with its advanced character detail features. This really shows its potential alongside animation toolsets such as Character Creator and Metahumans –
Runway’s Gen-2 text-to-video platform launched on 20 March, with higher fidelity and consistency in the outputs than its previous version (which was actually video-to-video output). Here’s a link to the sign-up and website, which includes an outline of the workflow. Here’s the demo –
Gen-2 is also our feature image for this blog post, illustrating the stylization process stage which looks great.
Wonder Dynamics launched on 9 March as a new tool for automating CG animations from characters that you can upload to its cloud service, giving creators the ability to tell stories without all the technical paraphernalia (mmm?). The toolset is being heralded as a means to democratize VFX, and it is impressive to see that Aaron Sims Creative are providing some free assets to use with it, and even more so to see none other than Steven Spielberg on the Advisory Board. Here's the demo reel, although so far we've not found anyone that's given it a full trial (it's in closed beta at the moment) and shared their overview –
Finally for this month, we close this post with Disney’s Aaron Blaise and his video response to Corridor Crew’s use of generative AI to create a ‘new’ anime workflow, which we commented on last month here. We love his open-minded response to their approach. Check out the video here –