Move.ai

Tech Update 2 (Feb 2023)

Tracy Harwood Blog February 13, 2023

This week, we highlight some time-saving examples for generating 3D models using – you guessed it – AIs, and we also take a look at some recent developments in motion tracking for creators.

3D Modelling

All these examples highlight that generating a 3D model isn't the end of the process: once it's in Blender, or another animation toolset, there's definitely more work to do. These add-ons are intended to help you reach your end result more quickly, cutting out some of the more tedious aspects of the creative process using AIs.

Blender is one of those amazing animation tools that has a very active community of users, and of course, a whole heap of folks looking for quick ways to solve challenges in their creative pipeline. We found folks who have integrated OpenAI's ChatGPT into the toolset by developing add-ons. Check out this illustration by Olav3D, whose comments about using ChatGPT to write Python scripts sum it up nicely, "better than search alone" –

Dreamtextures by Carson Katri is a Blender add-on using Stable Diffusion which is so clever that it even projects textures onto 3D models (with our thanks to Krad Productions for sharing this one). In this video, Default Cube talks about how to get results with as few glitches as possible –

and this short tells you how to integrate Dreamtextures into Blender, by Vertex Rage –

To check out Dreamtextures for yourself, you can find Katri's application on GitHub here, and should you wish to support his work, subscribe to his Patreon channel here too.
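Under the hood, projecting a generated texture onto a model from the camera's viewpoint boils down to a camera projection: each vertex is mapped through the camera to a UV coordinate in the generated image. As a minimal illustrative sketch (not Dreamtextures' actual code), a simple pinhole projection looks like this:

```python
# Minimal pinhole-camera projection: map a 3D vertex to a 2D UV
# coordinate in an image – the basic idea behind projecting a
# generated texture onto a model from the camera's viewpoint.
def project_to_uv(vertex, focal=1.0):
    x, y, z = vertex
    if z <= 0:
        raise ValueError("vertex is behind the camera")
    # Perspective divide, then shift from [-0.5, 0.5] into [0, 1] UV space
    u = focal * x / z + 0.5
    v = focal * y / z + 0.5
    return u, v

# A vertex straight ahead of the camera lands in the centre of the texture
print(project_to_uv((0.0, 0.0, 2.0)))  # (0.5, 0.5)
```

Vertices the camera can't see (facing away, or occluded) get no valid projection, which is why tools like this still need some manual clean-up on the hidden parts of the model.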

OpenAI also launched its Point-E 3D model generator this month, the output of which can be imported into Blender. But, as CGMatter has highlighted, using the published APIs means a very long time sitting in queues waiting for access, whilst downloading the code and running it yourself is easy – and once you have it, you can create point-cloud models in seconds. He runs the code from Google Colab, which means you can run it in the cloud. Here's his tutorial on how to use Point-E without the wait, giving you access to your own version of the code (on GitHub) in Colab –
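Point-E's output is a point cloud – essentially a list of XYZ coordinates – and Blender can import point data from standard formats such as PLY. As a purely illustrative sketch (not Point-E's actual API), here's how a set of points could be written to an ASCII PLY file ready for import:

```python
# Write a list of (x, y, z) points to an ASCII PLY file,
# a format Blender can import via File > Import > Stanford (.ply).
def write_ply(points, path):
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Example: a tiny synthetic "point cloud" (the corners of a unit cube)
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
write_ply(cube, "cloud.ply")
```

From there it's the usual story: the imported points still need meshing and materials before they're usable in a render, which is where the real work in Blender begins.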

We also found another very interesting Blender add-on: this one lets you import models from Google Maps into the toolset. The video is a little old, but the latest update of the mod on GitHub, version 0.6.0 (for RenderDoc 1.25 and Blender 3.4), has just been released, created by Elie Michel –

We were also interested to see NVIDIA's update at CES (in January). It announced a release for the Omniverse Launcher that supports 3D animation in Blender, with generative AIs that enhance characters' movement and gestures; a future update to Canvas that includes 360-degree surround images for panoramic environments; and an AI ToyBox that enables you to create 3D meshes from 2D inputs. Ostensibly, these tools are for creators to develop work for the metaverse and web3 applications, but we already know NVIDIA's USD-based tools are incredibly powerful for supporting collaborative workflows, including machinima and virtual production. Check out the update here, and here's a nice little promo video that sums up the integrated collaborative capabilities –

Tracking

As fast as the 3D modelling scene is developing, so is motion tracking. Move.ai, which launched late last year, announced its pricing strategy this month: $365 for 12 months of unlimited processing of recordings. This is markerless mocap at its very best, although not so much if you want to do live mocap (no pricing announced for that yet). Move.ai (our feature image for this article) lets you record content using mobile phones (a couple of old iPhones will do). You can find out more on its new website here, and here's a fun taster, called Gorillas in the mist, made with ballet dancers and four iPhones, released in December by the Move.ai team –

Another app, although not 3D, is Face 2D Live, released by Dayream Studios – Blueprints in January. This tool allows you to live-link the Face app on your iPhone or iPad to make cartoons out of just about anything, including with friends who are also using the iPhone app. It costs just $14.99 and is available on the Unreal Marketplace here. Here's a short video example to whet your appetite – we can see a lot of silliness ensuing with this for sure!

Not necessarily machinima, but for those interested in more serious facial mocap, Weta has been talking about how it developed its facial mocap processes for Avatar, using something called an 'anatomically plausible facial system'. This is an animator-centric system that captures muscle movement, rather than 'facial action coding', which focuses on identifying emotions. Weta states its approach leads to a wider set of facial movements being integrated into the mocapped output – we'll no doubt see more in due course. Here's an article on the FX Guide website which discusses the approach being taken, and for a wider-ranging discussion of the types of performance tracking used by the Weta team, Corridor Crew have bagged a great interview with Avatar VFX supervisor Eric Saindon here –

Tech Update 2 (Jan 2023)

Tracy Harwood Blog January 16, 2023

This week, we highlight some character development tools, NeRFs, NFTs and environments for machinima and virtual production.

Characters

Beginning with the awe-inspiring toolset of Unreal Engine's MetaHumans, the organization has released a FREE three-hour-long online course for beginners on real-time animation with the Faceware Analyzer and Retargeter tools. Here's a taster of what you can expect –

A creator we've featured a number of times (his tutorials are awesome), JSFILMZ (our feature image) has posted a taster of MetaHuman's Live Drive from Facegood, which launched in December. The demo shows a straight-from-camera-to-Unreal workflow, but what's amazing is the price of the head-mounted hardware: under $500! This obviously isn't free, but it's good value compared to some of the other facial tracking hardware on the market, and Jae compares those to give you an overview of what you get for the money. The Facegood software itself, Avatary, is free though, and it produces some impressive animations. Check out Jae's introductory overview below, and then pick up his tutorials on each of the components he discusses on his channel –

Move.ai has launched its iPhone beta application for free markerless mocap (it requires two phones). Ultimately, this isn't going to be free to use, so make the most of the beta sign-up opportunity – the official launch takes place in March 2023, and their main target in the first instance is professional studios, which will put this out of reach for many indies. This article (by 80.lv) gives you a quick overview, and this short video explainer introduces their store –

And finally, on characters this month, we highlight Inworld AI. This organization is creating interactive conversational characters that can be exported and shared across various platforms, either as avatars or as the underlying chatbot (think smart NPCs). Some of you may recall John Gaeta mentioned this in our interview with him last year, and since then, Inworld has become part of the Walt Disney Company's Accelerator Programme, been awarded an Epic MegaGrant and raised a pot of money from investors. The application of the software is vast – everything from games to marketing, as well as machinima and virtual productions too… and that's because of how the characters can be moulded. Inworld states: 'When crafting your character's brain, you are able to use the Studio to tailor many elements of cognition and behavior, such as goals and motivations, manners of speech, memories and knowledge, and voice'. Inworld released a nice tutorial in December, link below. It's definitely one to try out –
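To give a flavour of what 'crafting a character's brain' means in practice – and this is an entirely hypothetical sketch, not Inworld's actual API – a smart NPC boils down to a persona (goals, manners of speech, memories) that conditions every response:

```python
# Purely illustrative sketch of a configurable NPC "brain":
# a persona whose goals, speech style and memories shape its replies.
# This is NOT Inworld's API – just the general idea of the approach.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    goal: str            # motivation that colours every reply
    catchphrase: str     # manner of speech
    memories: list = field(default_factory=list)

    def respond(self, player_line: str) -> str:
        self.memories.append(player_line)  # remember what the player said
        return f"{self.catchphrase} But what I really want is to {self.goal}."

guard = Persona("Gate Guard", "find my lost brother", "Halt, traveller!")
print(guard.respond("Can I pass?"))
```

In a real system, of course, a language model would generate the reply, conditioned on those persona fields rather than a fixed template – but the configurable-persona structure is the interesting bit for character authors.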

NeRFing Around

We found a nice short on Neural Radiance Fields (aka NeRFs) by Corridor Crew, using the Luma AI app, which is truly stunning for recreating realistic anything. They highlight some of the key challenges, and present a very interesting test with a chrome ball – surely it is never going to be possible to create this kind of object, with dynamic reflections and all…? Check it out here –

As Corridor Crew states, this is clearly one of the next big tech things in image capture for CGI.

NFTs

The fluid waters of NFTs continue to muddy. This article (by NFT Now) highlights some of the recent class-action lawsuits being brought against creator platforms, suggesting that the markets are being artificially inflated by celebrity endorsers – although this is surely true for many other products too? The issue seems to be more about the nature of the endorsement process, and the stake the endorser has in the investment. One of the main challenges here is the fundamental role of community in NFTs, which means there is always going to be a very fine line around 'insider trading'. It's also interesting to note that IP owners are now becoming more actively involved in this nascent space. Once again, whenever the legals get involved, everyday creatives are the losers, so whilst some of the actions highlighted are less directly relevant, the outcomes of the legal disputes ultimately will be, so we'll keep tracking this.

Environments

Finally, we want to highlight a couple of environments for you.

Firstly, Half-Life: Alyx has a new mod, courtesy of Corey Laddo! Corey has created a mod that allows you to view the game in the role of Alyx Vance. It's free of charge for owners of the game, and provides a 4-5 hour experience for 'average players' – great if you want to shoot content from a first-person perspective. You can support Corey on his Patreon account, should you want to give him something for his effort. Download the mod from Steam here. Meantime, here's a taster for you –

Secondly, Damien shared a new sandbox environment that will be launching soon (well, we think it will, since it's apparently been in development since 2012): Outerra World by Microprose. This looks amazing, and will allow you to create any kind of realistic 1:1-scale terrain simulation, which you can share and navigate using any asset that the community creates and shares too. Here's the link to the Steam page (to add your details to the waitlist).

If you have comments or thoughts on any of the techs this week, do go ahead and comment.