This month we have another packed episode: an iClone 8 update, Baldur’s Gate 3 modding tools, a Starfield expansion, projects, projects, projects, Sketchfab, Backrooms, YouTube AI disclosure and more. Check out the ep and be sure to comment too.
March was another astonishing month in the world of AI genies, with the release of exponentially more powerful updates (GPT-4 released 14 March; Baidu released Ernie Bot on 16 March), new services and APIs. It is not surprising that by the end of the month, Musk-oil was being poured over the ‘troubling waters’ – will it work now the genie is out of the bottle? It’s anyone’s guess, and it certainly seems a bit of trickery is the only way to get it back into the bottle at this stage.
Rights
More importantly, and with immediate effect, the US Copyright Office issued a statement on 16 March in relation to the IP issues that have been hot on many lips for several months now: copyright registration is about the processes of human creativity, and under current registration guidance the role of generative AI is simply seen as a toolset. Thus, for example, in the case of Zarya of the Dawn (refer to our comments in the Feb 2023 Tech Update), whilst the graphic novel contains original concepts attributable to the author, the images generated by AI (in Zarya’s case, MidJourney) are not copyrightable. The statement also makes it clear that each copyright registration case will be viewed on its own merits, which is surely going to make for a growing backlog of cases in the coming months. Each case will require a detailed clarification of how generative AI was used by the human creator, to help with the evaluation process.
The statement also highlights that an inquiry into copyright and generative AIs will be undertaken across agencies later in 2023, seeking general public and legal input to evaluate how the law should apply to the use of copyrighted works in “AI training and the resulting treatment of outputs”. Read the full statement here. So, for now at least, the main legal framework in the US remains one of human copyright, where it will be important to keep detailed notes about how creators generated (engineered) content from AIs, as well as how they adapted and used the outputs, irrespective of the tools used. This will no doubt be a very interesting debate to follow, quite possibly leading to new ways of classifying content generated by AIs… and, some suggest, eventually to the recognition of AIs as autonomous entities with rights. It is clear in the statement, for example, that the US Copyright Office recognizes that machines can create (and hallucinate).
The complex issues of dataset creation and AI training processes will underpin much of the legal stance-taking, and a paper released at the beginning of Feb 2023 could become one of the defining pieces of research that undermines it all. The researchers extracted near-exact copies of copyrighted images of identifiable people from a diffusion model, suggesting that such models can lead to privacy violations. See a review here and for the full paper go here.
In the meantime, more of the platforms used to showcase creative work are introducing tagging systems to help identify AI-generated content – #NoAI, #CreatedWithAI. Sketchfab joined the list at the end of Feb with its update here, with changes relating to its own re-use of such content through its licensing system coming into effect on 23 March.
NVisionary
Nvidia’s progressive march with AI genies needs an AI to keep up with it! Here’s my attempt to review the last month of releases relevant to the world of machinima and virtual production.
In February, we highlighted ControlNet as a means to focus on specific aspects of image generation; this month, on 8 March, Nvidia released something of a counterpart called Prismer, which takes the outline of an image and infills it. You can find the description and code on the NVlabs GitHub page here.
Alongside the portfolio of generative AI tools it has launched in recent months, and with the advent of OpenAI’s GPT-4 in March, Nvidia is expanding its tools for creating 3D content –
It is also providing an advanced means to search its already massive database of unclassified 3D objects, integrating with its previously launched Omniverse DeepSearch AI librarian –
It released its cloud-based Picasso generative AI service at GTC23 on 23 March, a means to create copyright-cleared images, videos and 3D applications. A cloud service is of course a really great idea, because who can afford to keep up with graphics card prices? The focus for this is enterprise level, however, which no doubt means it’s not targeting indies at this stage – but then again, does it need to, when indies are already using DALL-E, Stable Diffusion, MidJourney, etc.? Here’s a link to the launch video and here is a link to the wait list –
Pro-seed-ural
A procedural content generator for creating alleyways has been released by Difffuse Studios in the Blender Marketplace, link here and see the video demo here –
We spotted a useful social thread by Nick St Pierre that highlights how to create consistent characters in Midjourney using seeds –
and you can see the result of the approach in his example of an aging girl here –
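For a flavour of the idea, here is a simplified, hypothetical pair of prompts using Midjourney’s --seed parameter; the exact prompts and workflow are in Nick’s thread, so treat this as illustrative only:

```
/imagine prompt: studio portrait of a young girl, short brown hair, soft natural lighting --seed 1234
/imagine prompt: studio portrait of the same woman aged 70, short brown hair, soft natural lighting --seed 1234
```

Reusing the same seed (and keeping the prompt structure constant) nudges Midjourney towards similar compositions, which is what helps an aging sequence like this hang together.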
Animation
JSFilmz created an interesting character animation using MidJourney v5 (released on 17 March), which has advanced character detail features. This really shows its potential alongside animation toolsets such as Character Creator and Metahumans –
Runway’s Gen-2 text-to-video platform launched on 20 March, with higher fidelity and consistency in the outputs than its previous version (which was actually video-to-video rather than text-to-video). Here’s a link to the sign-up and website, which includes an outline of the workflow. Here’s the demo –
Gen-2 also provides our feature image for this blog post, illustrating the stylization stage of the process, which looks great.
Wonder Dynamics launched on 9 March, a new tool that automates CG character animation from characters you upload to its cloud service, giving creators the ability to tell stories without all the technical paraphernalia (mmm?). The toolset is being heralded as a means to democratize VFX, and it is impressive to see that Aaron Sims Creative are providing some free assets to use with it – and even more so to see none other than Steven Spielberg on the Advisory Board. Here’s the demo reel, although so far we’ve not found anyone that’s given it a full trial (it’s in closed beta at the moment) and shared their overview –
Finally for this month, we close this post with Disney’s Aaron Blaise and his video response to Corridor Crew’s use of generative AI to create a ‘new’ anime workflow, which we commented on last month here. We love his open-minded response to their approach. Check out the video here –
To kick-start 2023 with a virtual BANG, we are highlighting some projects we’ve seen that are great examples of machinima and virtual production, demonstrating a breadth of techniques, a range of technologies and good ole’ short-form storytelling. We also really enjoyed Steve Cutts’ tale of man… let’s hope for a peaceful and happy year. Enjoy!
Force of Unreal
We were massively impressed throughout last year with the scope of creative work being produced in Unreal Engine. So, we have a few more to tell you about!
RIFT by HaZimation is a sci-fi anime-style film with characters created in Reallusion’s Character Creator. The film debuted at the Spark Computer Graphics Society’s Spark Animation Festival last October. We love the stylized effects used here, which Haz Dulull, director/producer, describes as a combination of 2D and 3D in this article (scroll to below half way). We are also impressed that the same 3D assets and environments used in the film-making process have been integrated into an FPS game, currently available free on Steam in early access here. This is another great example of creators using virtual assets in multiple ways – and it builds very much on the model that Epic envisaged when they first released the City sample last year, hot on the heels of the release of The Matrix Resurrections film and The Matrix Awakens: UE5 Experience, for which the city was created. We also love HaZimation’s strategy of co-creating the new RIFT game experience with players – “We at HaZimation believe that a great game is only possible with direct feedback from the audience as early as possible” (Steam). We fully expect to see more creative works using the RIFT content in future too. Congrats to everyone involved.
As any of you that have been following the podcast will have gathered, we love a good alien film too, and we have found another made in UE5 that we really enjoyed. This one is called The Lab, by Haylox (released 14 Sept 2022). The director/producer builds the suspense well, although, of course, it’s the same Alien trope we’ve seen many times over. Nonetheless, it has nice effects and a well-balanced soundscape.
We also love a good music video. The next project is a dance video made by Guru Pradeep using the music ‘Urvashi’ – Kaadhalan (A R Rahman), released 2 Aug 2022. It’s a little rough around the edges, having seemingly been cobbled together with Megascans, Sketchfab and items grabbed from the UE Marketplace, but the mocap (we don’t know what was used) is done particularly well, as is the editing. We look forward to seeing more from this creator in future.
Aspiring Assets
We want to highlight the amazing content that’s being developed for use in UE with Reality Capture. In this video, which is more of a ‘show and tell’ than a tutorial, William Faucher reveals how he created a Lofoten-inspired cabin environment from the 1800s. It’s impressive stuff if you have an eye for photogrammetry and the challenges of asset creation, and there are lots of tips and hints in here, with more detailed tutorials on his channel.
We have also been impressed with the range of fabulous assets created and used in the Kitbash 3D Mission to Minerva challenge (closed 2 Dec 2022), the outcome of which will be a new galaxy combining the concept artworks and in-motion content submitted. There are some really nice videos, which you can find using #kb3dchallenge on YouTube, that are definitely worth a looksee. We liked this one by Mike Seto, which has a nice touch of disaster about it.
With an impressive field of judges that included talent acquisition representatives from NASA Concept Labs, Netflix, Riot Games and ILM, winners were announced on 20 Dec.
And Finally?
Let’s hope for a more progressive year in 2023 than the hate-filled traps that befell so many across a whole plethora of virtual platforms and IRL… and maybe reflect on the message contained within this great fun short, created in Clip Studio Paint with Cinema 4D and After Effects. The film is by Steve Cutts, called A Brief Disagreement, released 30 Sept 2022. Steve is not a n00b in the world of machinima (and the earlier days of Reallusion’s CrazyTalk) – his classic comedy about the fate of Roger and Jessica Rabbit, as well as every other iconic cartoon character you can think of, is still a good laugh even 8 years after its release for those of a certain age (and it’s the featured image for this article, in case you were wondering)!
This week’s Tech Update picks for machinima, virtual production and 3D content producers:
Nvidia RTX 4080
Nvidia is launching two RTX 4080 graphics cards in November… you know what they say, you wait ages for a bus and then two come at once: the RTX 4080 12GB and the RTX 4080 16GB. Here’s the story on PC Gamer‘s website. You can also catch up on all of Nvidia’s latest announcements, made in CEO Jensen Huang’s keynote at GTC in September, in this video and on their blog here.
Ricky comments: Of course, it was only a matter of time before Nvidia announced the 40-series of RTX graphics cards. Two models have been announced so far, the 4080 and the 4090, with the 30-series sticking around for the lower price range – my guess is that this lets them focus their resources on producing more of just two high-end cards instead of a whole range. Although, given the prices of these new cards ($800+), I think I’ll be sticking with my 3070 for the time being.
UE 5.1.0
Unreal Engine has teased the new features coming in v5.1.0 – see the features documentation on their website here. Onsetfacilities.com has produced a nice overview – link here – and there’s a nice explainer by JSFilmz here –
Cine Tracer
Check out the new Actor Animation system in Cine Tracer v0.7.6. This update gives the Actors a set of talking animations that can be used as an alternative to the Posing system.
Follow the socials on Instagram and download Cine Tracer on Steam.
Sketchfab
Sketchfab is doing a weekly listing of top cultural heritage and history models – these are actually pretty amazing and of course downloadable (for a fee)!
DALL-E
DALL-E, one of the creative AI generators that is all the buzz at the moment, has developed a new feature called Outpainting, which helps users extend an image beyond its original borders by adding visual elements in the same style, or take a story in new directions. This could be great for background shots in virtual productions.
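Outpainting itself lives in the DALL-E web editor, but a similar ‘extend the frame’ effect can be sketched with OpenAI’s image edit API, which paints into the transparent regions of a PNG. Below is a minimal sketch, assuming the openai Python package of the time and an API key; the filenames, canvas size and prompt are our own placeholders:

```python
import openai
from PIL import Image

openai.api_key = "YOUR_API_KEY"  # assumption: you have API access

# Put the original frame on a larger transparent canvas; the transparent
# border is the area DALL-E will paint into (the original must fit inside
# the 1024x1024 square the edit endpoint expects).
original = Image.open("set_background.png").convert("RGBA")
canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
canvas.paste(original, ((1024 - original.width) // 2, (1024 - original.height) // 2))
canvas.save("padded.png")

# Ask the edit endpoint to fill the transparent area in the same style.
response = openai.Image.create_edit(
    image=open("padded.png", "rb"),
    prompt="extend the scenery in the same style and lighting, wide establishing shot",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # URL of the extended image
```

Used iteratively (pad, fill, re-pad), this is essentially how a wide background plate could be grown outwards from a single generated frame.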
Second Life
Second Life has launched a puppetry project for its avatars. As Wagner James Au reports in his regular blog on all things metaverse, and Second Life in particular, it uses a webcam and mocap. Check out Au’s review of it here, go directly to Second Life here to read their post about it, and follow their channel on YouTube for the latest updates and how-tos here.
Eleven Labs
Eleven Labs has launched Voice Conversion, which lets you transform one person’s voice into another’s. It uses a process called voice cloning to encode the target voice – i.e., the voice we convert to – and generates the same message spoken in a way that matches the target speaker’s identity but preserves the original intonation. What’s interesting about this is the filmmaking potential, and it clearly has machinima applications, but there are obvious IP interests to be considered here – beware the guidelines on using it. Importantly, note that it is primarily going to be used as part of an identity-preserving automatic dubbing tool which Eleven is launching in 2023. More here on this and the guidelines on using Voice Conversion.
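To give a feel for how such a tool could slot into a pipeline, here is a hypothetical sketch of a voice-conversion request; the endpoint path, field names and voice ID below are assumptions about how Eleven Labs’ API might expose the feature, not documented usage:

```python
import requests

API_KEY = "YOUR_ELEVENLABS_KEY"            # assumption: an account with API access
TARGET_VOICE_ID = "your_cloned_voice_id"   # hypothetical ID of the cloned target voice

# Hypothetical speech-to-speech request: send a recording of the source actor
# and receive the same performance rendered in the target speaker's voice.
url = f"https://api.elevenlabs.io/v1/speech-to-speech/{TARGET_VOICE_ID}"

with open("source_performance.wav", "rb") as audio_file:
    response = requests.post(
        url,
        headers={"xi-api-key": API_KEY},
        files={"audio": audio_file},
    )

response.raise_for_status()
with open("converted_performance.mp3", "wb") as out:
    out.write(response.content)  # converted audio, original intonation preserved
```

However the feature is eventually exposed, the key point for machinima makers is that the source performance (timing, intonation) drives the output, so directing the voice actor still matters.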