VFX

Tech Update 2 (Feb 2023)

Tracy Harwood Blog February 13, 2023

This week, we highlight some time-saving examples for generating 3D models using – you guessed it – AIs, and we also take a look at some recent developments in motion tracking for creators.

3D Modelling

All the examples this week highlight that generating a 3D model isn't the end of the process: once it's in Blender, or another animation toolset, there's definitely more work to do. These add-ons are intended to help you reach your end result more quickly, using AIs to cut out some of the more tedious aspects of the creative process.

Blender is one of those amazing animation tools that has a very active community of users and, of course, a whole heap of folks looking for quick ways to solve challenges in their creative pipeline. We found folks who have integrated OpenAI's ChatGPT into the toolset by developing add-ons. Check out this illustration by Olav3D, whose comment about using ChatGPT to write Python scripts sums it up nicely: "better than search alone" –

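To give you a feel for what Olav3D is doing, here's the kind of snippet ChatGPT typically drafts when you ask it for a Blender script – a minimal sketch using Blender's Python API (bpy), with the grid size and height range as our own illustrative parameters:

```python
# A minimal example of the sort of script ChatGPT can draft for Blender.
# Paste it into Blender's Scripting tab and press Run: it scatters a small
# grid of cubes and gives each one a random height.
import random

import bpy

GRID = 5        # cubes per side (illustrative)
SPACING = 2.0   # distance between cube centres

for x in range(GRID):
    for y in range(GRID):
        bpy.ops.mesh.primitive_cube_add(
            size=1.0,
            location=(x * SPACING, y * SPACING, 0.0),
        )
        cube = bpy.context.active_object
        cube.scale.z = random.uniform(0.5, 3.0)  # vary the height per cube
```

The point isn't the script itself – it's that describing what you want in plain English and pasting ChatGPT's answer into Blender is often faster than trawling forum posts, which is exactly Olav3D's "better than search alone" observation.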
Dreamtextures by Carson Katri is a Blender add-on built on Stable Diffusion, which is so clever it can even project generated textures onto 3D models (with our thanks to Krad Productions for sharing this one). In this video, Default Cube talks about how to get results with as few glitches as possible –

and this short video by Vertex Rage shows you how to integrate Dreamtextures into Blender –

To check out Dreamtextures for yourself, you can find Katri's add-on on GitHub here, and should you wish to support his work, you can subscribe to his Patreon here too.

OpenAI also launched its Point-E 3D model generator this month, whose output can then be imported into Blender. But, as CGMatter has highlighted, using the published APIs means a very long time sitting in queues waiting for downloads, whereas grabbing the code and running it yourself is easy – and once you have it, you can create point-cloud models in seconds. In his case, he runs the code from Google Colab, which means it executes in the cloud rather than on his own machine. Here's his tutorial on how to use Point-E without the wait, giving you access to your own version of the code (on GitHub) in Colab –

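If you'd rather skip straight to the gist, running Point-E yourself looks something like the sketch below. It's adapted from the text-to-point-cloud example notebook in OpenAI's point-e repo as it stood at the time of writing – the model names and sampler parameters come from that notebook and may change as the repo evolves:

```python
# Text-to-point-cloud with Point-E, adapted from the example notebook in
# OpenAI's point-e repo (pip install git+https://github.com/openai/point-e).
import torch
from tqdm.auto import tqdm

from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# The base model turns the text prompt into a coarse 1,024-point cloud;
# the upsampler then refines it to 4,096 points.
base_name = 'base40M-textvec'
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint(base_name, device))

upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler_model.eval()
upsampler_model.load_state_dict(load_checkpoint('upsample', device))

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[
        diffusion_from_config(DIFFUSION_CONFIGS[base_name]),
        diffusion_from_config(DIFFUSION_CONFIGS['upsample']),
    ],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # only the base model sees the prompt
)

# Generate a point cloud from a text prompt.
samples = None
for x in tqdm(sampler.sample_batch_progressive(
        batch_size=1, model_kwargs=dict(texts=['a red motorcycle']))):
    samples = x

pc = sampler.output_to_point_clouds(samples)[0]  # coordinates plus RGB channels
pc.save('motorcycle.npz')  # repo's own npz format; convert and import into Blender from here
```

Run this in a Colab GPU runtime, as CGMatter does, and the sampling loop takes seconds rather than the queue times of the hosted demo.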
We also found another very interesting Blender add-on, created by Elie Michel, which lets you import models from Google Maps into the toolset. The video is a little old, but the latest update on GitHub, version 0.6.0 (for RenderDoc 1.25 and Blender 3.4), has just been released –

We were also interested to see NVIDIA's update at CES (in January). It announced a release of the Omniverse Launcher that supports 3D animation in Blender, with generative AIs that enhance characters' movement and gestures; a future update to Canvas that includes 360-degree surround images for panoramic environments; and an AI ToyBox that enables you to create 3D meshes from 2D inputs. Ostensibly, these tools are for creators developing work for metaverse and web3 applications, but we already know NVIDIA's USD-based tools are incredibly powerful for supporting collaborative workflows, including machinima and virtual production. Check out the update here, and this is a nice little promo video that sums up the integrated collaborative capabilities –

Tracking

As fast as the 3D modelling scene is developing, so is motion tracking. Move.ai, which launched late last year, announced its pricing this month: $365 for 12 months of unlimited processing of recordings. This is markerless mocap at its very best, although less so if you want to do live mocap (no pricing announced for that yet). Move.ai (our feature image for this article) lets you record content using mobile phones (even a couple of old iPhones). You can find out more on its new website here, and here's a fun taster called Gorillas in the Mist, made with ballet and 4 iPhones, released in December by the Move.ai team –

And another app, although not 3D, is Face 2D Live, released by Dayream Studios – Blueprints in January. This tool allows you to live-link a Face app on your iPhone or iPad and turn just about anything into a cartoon, including with friends who are also using the iPhone app. It costs just $14.99 and is available on the Unreal Marketplace here. Here's a short video example to whet your appetite – we can see a lot of silliness ensuing with this one for sure!

Not necessarily machinima, but for those interested in more serious facial mocap, Weta has been talking about how it developed its facial mocap processes for Avatar, using something called an 'anatomically plausible facial system'. This is an animator-centric system that captures muscle movement, rather than 'facial action coding', which focusses on identifying emotions. Weta states its approach leads to a wider set of facial movements being integrated into the mocapped output – we'll no doubt see more in due course. Here's an article on the FX Guide website which discusses the approach being taken, and for a wider-ranging discussion of the types of performance tracking used by the Weta team, Corridor Crew have bagged a great interview with the Avatar VFX supervisor, Eric Saindon, here –

Completely Machinima Interview: John Gaeta, The Matrix

Tracy Harwood Podcast Episodes February 17, 2022

In this episode, Tracy talks to John Gaeta about his interests in machinima and real time filmmaking, The Matrix Awakens Experience, the influence of the bullet time shot, building the metaverse, future of storytelling in immersive environments, the potential of NFTs and his advice for indie creators.

YouTube Version of this Episode

Show Notes and Links

John Gaeta on IMDb

screencap: Matrix Resurrections (l-r Donald Mustard, Kim Libreri, John Gaeta, Keanu Reeves)

Pioneers in Machinima, Prologue by Kim Libreri, CTO Epic Games

SIGGRAPH 2000

The Matrix Awakens Experience video (UE5)

bullet time

UE5 Nanite

News coverage of Disney’s AR patent application

3D Game-Based Filmmaking: Art of Machinima (book published 2004) by Paul Marino

Time Stamps and Themes

3.32 Backstory to Matrix Awakens Experience

10.01 Game vs Experience

12.00 World building The Matrix

18.11 Why PS5 and Xbox X/S?

21.36 Bullet time – again!

26.04 Awakens as a time capsule – remaking the virtual Neo

29.03 Multi-modal storytelling is the future

33.06 Storytelling and AI

37.00 Real time vs cinema – a cross-over moment

39.39 Scalable metaverses, AR and AI – reflecting on Disney’s recent announcements

49.27 The space between bones and spaceship in 2001: A Space Odyssey – vis-à-vis Paul Marino's 2004 Art of Machinima book

52.10 The role of Nvidia’s Omniverse platform

54.46 NFTs and open markets

1.03.00 Advice to machinima and real-time creators: passion and knowledge!

1.08.00 Red or Blue?

Credits

Producer/Editor: Ricky Grove

Music credits: frankum's Nebula Techno House, via freesound.org (Creative Commons)

Percy has nothing on this guy!

Tracy Harwood Blog March 29, 2021

Released in November 2020, this viral short by Birchpunk is a lot of fun. It uses VFX to present a vision of how we might colonize Mars and the future of cyber-farming, so our beloved Percy (NASA's Perseverance) has a way to go to catch up with the vibe… eat your heart out, Mr Musk, the Russians have beaten you to it. "We have love…"! Enjoy. You can follow Birchpunk's channel on YouTube.