This week we celebrate Nvidia’s investment in machinima, discuss the latest AI releases and give you a heads-up on some great projects we want to highlight – there are just too many these days for us to fully review everything we’d like to share! Do check them out, we’d love to hear your thoughts too – links and notes below.
Youtube Version of This Episode
Show Notes & Links
Homage to Nvidia’s Omniverse Machinima app by Pekka Varis, released 14 April 2024 –
Endgame by Peaches Chrenko and Dark Machine Audio, soundtrack –
JP Ferre’s DCS: Spitfires – Cinematic, a real tribute to those brave souls in WWII –
TheDavedood (Scratby Films) animation for Pink Floyd’s Dark Side of the Moon 50th anniversary celebration – this one is for the single, Time –
Anomidae’s latest episode of the Half-Life supernatural series Interloper –
Fallout inspired videos: JT Music’s Fallout rap, All in With the Fallout –
and Fallout – Dream on (tribute) by Couch Patrol –
Fables of the Foolish by Dreeko –
AI is Genie-us Init?
ElevenLabs has released a dubbing toolset tutorial, making different languages for videos even more accessible – link here
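For the API-inclined, the dubbing service can also be driven programmatically. Below is a rough Python sketch of what a request might look like; the endpoint path, field names and response shape are our assumptions based on ElevenLabs’ REST conventions, so do check the official API reference before relying on it.

```python
# Hypothetical sketch: submitting a video for dubbing via ElevenLabs' REST API.
# Endpoint and field names are assumptions -- verify against the official docs.
import requests

API_KEY = "your-elevenlabs-api-key"  # placeholder

with open("clip.mp4", "rb") as f:
    response = requests.post(
        "https://api.elevenlabs.io/v1/dubbing",
        headers={"xi-api-key": API_KEY},
        data={"source_lang": "en", "target_lang": "es"},  # dub English into Spanish
        files={"file": ("clip.mp4", f, "video/mp4")},
    )

response.raise_for_status()
print(response.json())  # expected to include a dubbing job id you can poll
```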
Google DeepMind has announced a new video generation model called Veo, which produces 1080p videos over a minute long in a whole range of cinematic styles – link here
Stability AI has launched Stable Artisan to a wider user group on Discord – it’s a tool for media generation and editing – link here
Showrunner by The Simulation, text to episode generator – link here
The winner of Runway’s 2nd AI Film Festival is Get Me Out by Daniel Antebi –
Luc Shurgers’ Skibidi Sam, video only on LinkedIn, created using Replikant – link here
Sony backs out of AI training with its catalogue – article here
This week’s update is all about the virtual production pipeline and digital cultural history.
VP Pipeline
DaVinci Resolve 18.5 (and its .1 fixes) has finally been released, and Blackmagic Design has a comprehensive support centre you can make use of here (for pro-version license holders only). This version includes a bunch of new features for integrating AI-generated content and collaboration. Here’s an overview, courtesy of MrAlexTech –
Unreal Engine has an ever-expanding and truly talented community. In this tut, Jonathan Winbush (our feature image this week) shares his approach to creating procedurally generated towns using PCG and blueprints inside UE and Cargo (Kitbash3D). Winbush has a wealth of material on his channel, all free, for anyone to pick up and work with, so there’s really no excuse not to learn Unreal Engine –
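Winbush builds everything visually in the PCG graph and Blueprints, so the snippet below is not his method – just a loose Python sketch of the same scatter-a-city idea using Unreal’s editor scripting API, run from inside the editor. The Cargo asset path is hypothetical.

```python
# Loose illustration: scattering a building mesh on a jittered grid with
# Unreal's editor Python API. Not Winbush's workflow (he uses PCG/Blueprints).
import random
import unreal

ASSET_PATH = "/Game/Cargo/Buildings/SM_Building_01"  # hypothetical Cargo import
building = unreal.EditorAssetLibrary.load_asset(ASSET_PATH)

for row in range(10):
    for col in range(10):
        # Jitter each lot slightly so the grid reads as an organic street plan.
        location = unreal.Vector(
            row * 1500 + random.uniform(-200, 200),
            col * 1500 + random.uniform(-200, 200),
            0.0,
        )
        # Rotator takes (roll, pitch, yaw); snap yaw to a random right angle.
        rotation = unreal.Rotator(0.0, 0.0, random.choice([0, 90, 180, 270]))
        unreal.EditorLevelLibrary.spawn_actor_from_object(building, location, rotation)
```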
Boundless Entertainment has released a course for filmmaking, pre-viz and VFX. It’s designed to take beginners to a more professional level in 10 days… mmm, let’s see! It’s not free, unlike many of the YouTube tutorials, but at $180 it will undoubtedly suit some learning styles.
Finally, if you want to share your VP process and also learn from others, Nvidia has a new #StartToFinish challenge running until the end of August. It’s focused on those working with the Omniverse platform, with a chance to be showcased on their social media channels. You can find out more about it on their Discord server.
Digital Culture History
We were interested to see a post on the BBC’s website that reported on NoClip’s Danny O’Dwyer rescuing hundreds of hours’ worth of video content of gaming history from landfill. The collection mostly pre-dates YouTube and comprises footage and media clips that were cut from TV shows or websites. You can see Danny talk about his gold strike here –
We look forward to seeing what Danny digs up as he goes through the material over the next 10 years or so.
Back to the Future, that classic 1980s trilogy we all love for a whole range of reasons, is BACK again. This time, it’s as a musical at the Adelphi Theatre in London’s West End and the Winter Garden Theatre on Broadway, with a North American tour in 2024. It’s also fascinating to hear the rejection story of Bob Gale and Robert Zemeckis’ original film script – rejected over 40 times before finally being signed. There are certainly many lessons here for creatives today, not least in the process of adapting film FX to theatre, for which Move AI and Disguise are being employed for mocap and virtual production techniques –
A first in the UK: a 5G screen test of a dual-location virtual production method for real-time performance capture –
We’ve been following the debate on copyright, fair use and transformative use of IP for what seems like 30 years in the world of machinima (see some of our posts here, here and here) – oh, actually it’s 27 years…! On 18 May, the world was exercised a little further on the issue of transformative use when the US Supreme Court reached its decision on Andy Warhol’s use of a photograph of Prince in a magazine – a case that had been running since 2016, following Prince’s death. Many suggested this decision is the beginning of the end of transformative use – or at least ‘narrows the fair use doctrine’ – and will have massive detrimental impacts on all things created, such as machinima made with game engines… however, with the particular scenario fully outlined, this was probably the right outcome for this case.

The scenario relates to an unattributed use of an image from a private collection of works (created and held by Warhol and his foundation), where other works in the collection involving the same creatives had previously been attributed, and the photographer recompensed, when used in magazines – and where both Warhol and the photographer (Lynn Goldsmith) made money from selling images individually. So this decision is about the context of use involving the individuals as much as it is about ‘fair use’ per se. Justice Sotomayor stated that the important factor in the fair-use analysis – “the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes” – pushed the decision in favour of the photographer, arguing that “licenses, for photographs or derivatives of them, are how photographers like Goldsmith make a living. They provide an economic incentive to create original works, which is the goal of copyright.” You can read the ruling in full here – or use your favourite search tool to find any one of the numerous news articles covering the case.
So, until the principle applied in this case is tested in a creator context – where income is rarely a goal of productions beyond individual recognition and perhaps the meagre YouTube revenue share for the eyeballs a work receives, and where the transformation generally goes well beyond anything originally intended by, say, a game dev – it feels like there’s nothing to see here.
Meta
On 23 June, Second Life turns 20 years old! There will be virtual parties, exhibitions, product sales and more – for 20 days, of course – and you can find out more on the community website here. Happy Birthday to all the Lindens – the first open-world environment to truly embrace metaversal themes.
If you want to catch up on some light reading, it’s also worth noting that Wagner James Au’s new book, Making a Metaverse that Matters, releases on 27 June. Au also regularly writes great updates for what has to be one of the longest-running metaverse blogs, New World Notes, which he founded in 2006; he was the first metaverse journalist and marketer for SL back in 2003. Links to the book here –
Excited to tease the cover for my book, "Making a Metaverse That Matters"!
In stores next month. Please consider pre-ordering.
Nvidia is releasing a monthly update on its blog covering all things Omniverse, including the latest advancements to the OpenUSD framework that has so quickly become the gold standard for integrating a wide range of creator tools in a 3D workflow. Here’s the link to the first part of the ‘Into the Omniverse’ series (our feature image for this post), which includes an overview of an update to the connector for Adobe Substance 3D Painter; Substance 3D releases its latest version in mid June. This series is a must-follow for all content creators, whether or not you own an RTX!
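If you’re wondering what makes OpenUSD such an effective interchange layer: a scene is just layered, composable scene description that any USD-aware tool can open. Here’s a minimal sketch using Pixar’s pxr Python bindings (the file and prim names are arbitrary):

```python
# Minimal OpenUSD sketch: author a tiny stage that any USD-aware tool
# (Omniverse, the Substance 3D connector, Blender, Houdini...) can open.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("set_dressing.usda")   # arbitrary file name
root = UsdGeom.Xform.Define(stage, "/World")       # a transform to parent under
sphere = UsdGeom.Sphere.Define(stage, "/World/HeroProp")
sphere.GetRadiusAttr().Set(50.0)                   # authored attribute value
stage.SetDefaultPrim(root.GetPrim())
stage.GetRootLayer().Save()                        # writes human-readable .usda
```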
-Versal
For those seeking advice on devising a virtual production pipeline, Unreal Engine has helpfully released a visualisation guide here and a nice vid here –
Unreal Engine released version 5.2 on 11 May, with some fab new features, including a preview of its still-in-development Procedural Content Generation (PCG) framework, enabling creators to populate large scenes more efficiently, and Substrate, which supports a greater range of surface appearances, such as the opalescent finish showcased in this vid –
an enhanced set of virtual production tools for realtime filmmaking; an enhanced VCam system for multi-camera control; and extended nDisplay support, which sets the scene for the next version, 5.3. A link to the release notes is here.
We also spotted a useful tool in the UE Marketplace, albeit pricey at $249 for indies: MetaShoot, released by VINZI – Code Plugins. It includes lighting and render presets to help you create sophisticated lighting setups in your VP studio – link here.
Also super helpful is Kitbash3D’s new Cargo asset browser, which includes some 10,000 searchable assets. The basic account, which is free, allows you to 1-click import content into your project and manage the assets you have, but for a fee of $65/month the pro version will let you search and access the full model and media library. It’s another layer of cost, so do check out the small print.
In comparison to the previous six months, the past month has not exactly been a damp squib, but it has certainly seen a few rather underwhelming releases and updates, notwithstanding Adobe’s Firefly release. We also share some great tutorials and explainers, as well as some interesting content we’ve found.
Next Level?
Nvidia and Getty have announced a collaboration that will see visuals created with fully licensed content, using Nvidia’s Picasso model. The content generation process will also enable original IP owners to receive royalties. Here’s the link to the post on Nvidia’s blog.
Microsoft has released its AI image generator, based on OpenAI’s DALL-E, into the Bing chatbot in Edge. Ricky has tried the tool and comments that whilst the images are good, they’re nowhere near the quality of Midjourney at the moment. Here’s an explainer on Microsoft’s YouTube channel –
Stability AI (Stable Diffusion) released its SDK for animation creatives on 11 May. This is an advancement on the text-to-image generator, although of course we’ve previously talked about similar tools, plus ones that extend this to 3D processes. Here’s an explainer from the Stable Foundation –
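The animation SDK builds on the same gRPC client as Stability’s text-to-image API. As a point of reference, here’s a minimal text-to-image sketch in Python using the stability-sdk package – the engine id is an assumption, and the animation module layers keyframed parameters on top of this kind of call, so treat it as orientation rather than gospel:

```python
# Minimal Stability text-to-image sketch with the stability-sdk package.
# Engine id is an assumption; check Stability's docs for current engines.
import os
from stability_sdk import client
import stability_sdk.interfaces.gooseai.generation.generation_pb2 as generation

stability_api = client.StabilityInference(
    key=os.environ["STABILITY_KEY"],     # your API key
    engine="stable-diffusion-v1-5",      # assumed engine id
)

answers = stability_api.generate(
    prompt="a neon-lit machinima film set, cinematic lighting",
    seed=42,     # fix the seed for reproducible frames
    steps=30,
)

# Responses stream back; save any image artifacts to disk.
for resp in answers:
    for artifact in resp.artifacts:
        if artifact.type == generation.ARTIFACT_IMAGE:
            with open("frame.png", "wb") as f:
                f.write(artifact.binary)
```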
RunwayML has released its Gen 1 version for the iPhone. Here’s the link to download the app. The app lets you take a video from your camera roll and apply a text prompt, a reference image or a preset to create something entirely new. Of course, the benefit is that from within the phone’s existing apps you can then share on social channels at will. It’s worth noting that, at the time of writing, we and many others are still waiting for access to Gen 2 for desktop!
Most notable this month is Adobe’s release of Firefly for video. The tool enables generative AI to be used to select and create enhancements to images, music and sound effects, and to create animated fonts, graphics and b-roll content – and all that, Adobe claims, without copyright infringement. Ricky has, however, come across some critics who say Adobe’s claim that its database is clean is not correct: works created in Midjourney have been uploaded to Adobe Stock and are still part of its underpinning database, meaning a small percentage of works in the Adobe Firefly database ARE taken from online artists’ work. Here’s the toolset explainer –
Luma AI has released a plug-in for NeRFs (neural radiance fields) in Unreal Engine, a technique for capturing realistic content. Here’s a link to the documentation and how-tos. In this video, the Corridor Crew wax lyrical about the method –
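For the curious, the core idea behind a NeRF is compact: a network maps a 3D position and view direction to a density and colour, and a pixel is rendered by accumulating those values along the camera ray. The standard discrete approximation from the original paper (Mildenhall et al., 2020) looks like this:

```latex
% Discrete volume rendering along a ray r sampled at N points:
% \sigma_i = density, c_i = colour, \delta_i = distance between samples,
% T_i = transmittance (how much light survives to reach sample i).
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i,
\qquad
T_i = \exp\!\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right)
```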
Tuts and Explainers
Jae Solina, aka JSFilmz, has created a first-impressions video about Kaiber AI. It’s quite cheap at $5/month for 300 credits (a short vid seems to equate to approximately 35 credits, so roughly eight per month). In this explainer, you can see Jae’s aged self as well as a cyberpunk version, and the super-quick process this new toolset offers –
If you’re sick to the back teeth of video explainers (I’m not, really), Kris Kashtanova has taken the time to generate a whole series of graphic-novel-style explainers (you may recall the debate around her Zarya of the Dawn Midjourney copyright registration case a couple of months back) – these are excellent and somehow very digestible! Here’s the link. Of course, Kris also has a video channel for her tutorials too; the latest one, here, looks at Adobe Firefly’s generative fill function –
In this explainer, Solomon Jagwe discusses his beta test of Wonder Studio’s AI mocap for body and finger capture – although it’s not realtime, unfortunately. It is nonetheless impressive, and another tool we can’t wait to try out once its developer gets a link out to all those who have signed up –
Content
There has been a heap of hype about an advert created by Coca-Cola using AI generators (we don’t know which exactly), but it’s certainly a lot of fun –
In this short by Curious Refuge, Midjourney has been used to re-imagine Lord of the Rings… in the style of Wes Anderson, with much humor and Benicio del Toro as Gimli (forever typecast and our feature image for this post). Enjoy –
We also found a trailer for an upcoming show, Not A Normal Podcast, but a digital broadcast in which it seems AIs will interview humans in some alternative universe. It’s not quite clear what this will be, but it looks intriguing –
although it probably has a way to go to compete with the subtle humor of FrAIsier 3000, which we’ve covered previously. Here is episode 4, released 21 March –
This week, tech updates cover Epic’s new tools for self-publishing, Omniverse’s USD rebrands, thoughts about the nascent metaverse and some throwbacks to good-old-fashioned machinima creative techniques.
Epic’s Games Store
In a move that will surely make rival Steam squirm, Epic announced on 9 March that it has launched new tools for self-publishing on the Epic Games Store, off the back of its 68M monthly active users. Publishers will receive 88% of sales revenue (compared to 70% on Steam). There are some interesting points raised in the T&Cs, such as the need for cross-playability (across all PC stores), achievement tracking for games, age-rating requirements, and an affiliate creator programme that enables publishers to share their takings with others – check out the T&Cs in their announcement here. The announcement hints at much bigger things to come, relating to metaverse propositions, but it’s an interesting development for now. Here’s a walkthrough of the tools from their livestream –
Omniverse USD
Nvidia’s Omniverse Create and Omniverse View are rebranding, announced on 3 March. These will now be called, respectively, Omniverse USD Composer and Omniverse USD Presenter. The omnipresence of USD (Universal Scene Description) has become a driving force for 3D creative development in a very short space of time – just last August, Nvidia summarized its vision of embedding USD as the foundation of the metaverse for creatives (and also industrial teams, smart-services providers and such), where content could be pushed across a vast array of different platforms. Less than a year later, workflows everywhere have evolved with it, and USD is now a ubiquitous technology, much as the internet is the driving force of the web. What’s a little intriguing is why Nvidia is drawing attention to it at this juncture, and what the point is of editing archival videos to include the new names, like this one – recognition, reinforcement, repositioning, or something new coming down the pipeline?
Blended
Beyond the hype, and clearly the practices we’ve highlighted above, the metaverse is taking shape in interesting ways. An article published in VentureBeat on 4 March highlights the lengths that media and entertainment companies such as Sony are going to in creating virtual worlds that transcend film, games and experiences, including VR and theme parks. These are more than alignments of creative talent: they allude to the potential of vast new ecosystems for collaborators and partners. What’s notable, of course, is that the entry point into such ecosystems can be any creative medium (game, film or artwork, presumably), with outputs that are going to be more visceral and consequently more immersive. Since toolsets such as USD facilitate the creation of these ecosystems, it will be interesting to see how indies get in on the action too – we’re already seeing a number of start-ups pushing the boundaries, but there’s also scope for small studios to join in. The question is: where are they now?
Cinematics (the Old Way)
No Man’s Sky has been a machinima creators’ go-to for some time, and this short by EvilDr.Porkchop (also our blog post feature image) gives a great overview of how to create cinematics in the environment –
Eve Online is another such environment – and now, of course, a [very] old one – but here’s a nice ‘how to’ for making epic-looking machinimas, by WINGSPAN TT –