Tech Update 1 (Nov 2022)

Tracy Harwood Blog October 30, 2022

Hot on the heels of our discussion of AI generators last week, we are interested to see tools already emerging that turn text prompts into 3D objects and film content, alongside a tool for making music too. We have no fewer than five interesting updates to share here – plus a potentially very useful tool for rigging the character assets you create!

Another area of rapid technological advancement is mocap, especially markerless capture, which, let’s face it, is really the only way to create naturalistic movement-based content. We share two interesting updates this week.

AI Generators

Nvidia has launched an AI tool that generates 3D objects (see video). Called GET3D (derived from ‘Generate Explicit Textured 3D meshes’), the tool can generate characters and other 3D objects, as explained by Isha Salian on Nvidia’s blog (23 Sept). The code is currently available on GitHub, with instructions on how to use it here.

Google Research, together with researchers at the University of California, Berkeley, is also working on similar tools (reported in Gigazine on 30 Sept). DreamFusion uses NeRF tech to create 3D models that can be exported into 3D renderers and modeling software. You can find the tool on GitHub here.

DreamFusion

Meta has developed a text-to-video generator called Make-A-Video. It can animate a single image, or fill in between two images, to create motion, and currently generates five-second videos – perfect for background shots in your film. Check out the details on Meta’s website here (and sign up for updates too). Let us know how you get on with this one!

Make-A-Video

Runway has released a Stable Diffusion-based tool, called Erase and Replace, that allows creators to swap out bits of images they do not like and replace them with things they do (reported in 80.lv on 19 Oct). There are some introductory videos on Runway’s YouTube channel (see below for the introduction to the tool).

And finally, also available on GitHub, is Mubert, a text-to-music generator built on a Deforum Stable Diffusion Colab. Its creator describes the tech as proprietary and provides a custom license: anything created with it cannot be released on DSPs as your own. It can be used for free to sync with images and videos, with attribution – mentioning @mubertapp and the hashtag #mubert – and there is an option to contact them directly if a commercial license is needed.

Character Rigging

Reallusion’s Character Creator 4.1 has launched with built-in AccuRIG tech – this turns any static model into an animation-ready character and also comes with cross-platform support. No doubt very useful for those assets you might want to import from any AI generators you use!

Motion Capture Developments

That ever-ready multi-tool, the digital equivalent of the Swiss army knife, has come to the rescue once again: the iPhone can now be used for full-body mocap in Unreal Engine 5.1, as illustrated by Jae Solina, aka JSFilmz, in his video (below). Jae used move.ai, which is rapidly becoming the gold standard in markerless mocap tech, and a growing number of demo vids on YouTube show just how detailed the captured movement can be. You can find move.ai tutorials on Vimeo here, and for details about which smartphone models you can use, go to their website here – it’s very impressive.

Another form of capture concerns the detail of the image itself. Reality Capture has launched a tool you can use to scan yourself (or anyone else for that matter, including your best doggo buddy) and import the resulting mesh into Unreal’s MetaHuman. Even more impressive, Reality Capture is free – download details here.

We’d love to hear how you get on with any of the tools we’ve covered this week – hit the ‘talk’ button on the menu bar up top and let us know.

Completely Machinima S2 Ep 40 News (July 2022)

Tracy Harwood Podcast Episodes July 7, 2022

Despite being encouraged by one of our followers to create an episode of just 15 minutes’ duration, the team have extended their coverage this month – hear Tracy, Damien and Phil discuss vtubing, Ricky’s Duke Henry the Red character in the game Evil Dead, the FTC’s proposed updates to social media guidelines, Unreal’s review of the Matrix Awakens Experience, John Gaeta’s latest exploits, metaquette, Reallusion’s iClone 8 and CC4, and a number of other exciting developments relevant to the world of real-time filmmaking and machinima. Thankfully, you can use the timestamps to jump to the bits you’re most interested in!



YouTube Version of This Episode

Time stamps, links and show notes

1:34 Feedback from our followers: 3DChick, Al Scotch, Spentaneous, Mike Clements, Circu Virtu, Notagamer3d

8:14 Vtubing and FaceRig app (Steam), VTube Studio – real-time puppeteering using Faceware

17:41 Interactive video on Vimeo, branching narrative storylines

20:23 Ukrainian films of the recent war, showcased on the Milan Machinima Film Festival website

24:17 Evil Dead and the character Duke Henry the Red played by Ricky Grove and IP generally, Ricky Grove on IMDb

Ricky vs Ricky

39:29 FTC updating its ‘Disclosures 101 for Social Media Influencers’ guide, a discussion of the relationship between brands, platforms and influencers; see also [Company] Rulez! (Phil Rice aka zsOverman & Evan Ryan aka Krad Productions). Here are PC Gamer’s comments on the proposed updates to the guidelines

54:55 Competition updates: Nvidia Omniverse Machinima promo video ‘Top Goose’ | A NVIDIA Omniverse Machinima Short #MadeInMachinima, released 9 June 2022, and Unreal competition.  Here is Ben Tuttle’s The Amazing Comet (Unreal Engine/iClone) (4413 Media), released 9 June 2022.  And here’s a link to William Faucher’s YouTube channel.

Screencap: Ben Tuttle’s The Amazing Comet

58:03 Making of Unreal’s Matrix Awakens Experience, Behind the Scenes on The Matrix Awakens: An Unreal Engine 5 Experience, released 6 June 2022, and Tracy’s interview with John Gaeta, VFX, The Matrix films, released 17 February 2022.  Living Cities is a new metaverse mirror world, website link and Visual Effects Giant John Gaeta joins Inworld AI as Chief Creative Officer, 1 June 2022 and Inworld AI teaser video, released 28 April 2022

Screencap: Behind the Scenes of The Matrix Awakens Experience

1:00:16 Pooky Amsterdam’s blog on metaverse etiquette, called Metaquette website link

1:00:39 Reallusion’s iClone 8 character animation processes including Character Creator 4 Launch by Reallusion (released 26 May 2022) and iClone 8 Demo Video by Reallusion (released 26 May 2022)

1:13:08 Mesh to MetaHuman in Unreal Engine by Unreal Engine, released 9 June 2022

1:13:33 Meta’s new model allows creating photoreal avatars with an iPhone, 80.lv, 14 June 2022 and full research publication

1:14:02 Mocap with the MoCats: Livestreaming with Multiple Actors (Faceware & Movella/Xsens) by Faceware, released 7 June 2022

1:14:29 Love, Death & Robots, Jibaro character creation Love, Death + Robots | Inside the Animation: Jibaro, Netflix, released 9 June 2022 and Art Dump: stunning projects made for Love, Death + Robots, 80.LV website link

1:16:48 Jonathon Nimmons’ WriteSeen.com, launched June 2022, a website connecting creative writers with industry professionals (upload written content, attach a video pitch, audio clips, video clips and a link to a prototype if required)

1:19:08 A word of thanks to our sponsors