
Tech Update 2 (Nov 2022)

Tracy Harwood Blog November 7, 2022

This week, we take a look at some potentially useful tech platforms, starting with an inspired new service from Nvidia, then a new service and mod hub for The Sims 4, followed by some interesting distribution options linked to blockchain tech and another for festivals and events.

Cloud Services for Artists

With the ongoing challenges of access to kit for using many of the new render tools we’ve reviewed on the show over the months we’ve been running, it’s interesting to see that Nvidia are now launching Omniverse Cloud services. Ostensibly, the service is aimed at powering future ‘metaverse’ applications and those working on digital twin-type projects, but clearly it’s a very good way for content creators to finally be able to access contemporary tools without the hassle of continually updating their hardware – or indeed ever worrying about acquiring the latest desirable RTX card! You can find out more about the services here – and we’d love to hear from anyone using them about their experiences.

Nvidia Omniverse Cloud Nucleus

Anyone for Sims?

The Sims 4 is now FREE to play (announced 18 Oct 2022), although we note that specific content packs will still be paid-for. No doubt Phil will be peeved, since we all advised him to go for Unreal as a creative option when he switched his attention from RDR2 last year! Their glitzy Summit vid is clearly pitching against the Fortnite user base, but with an entirely different heritage and a more adult trajectory. They are even partnering on a new content creator curation platform, a mod hub hosted by Overwolf (coming soon).

Distribution Options

With rapid progression towards Web3, and the growing demand for 3D content to fill the platforms and sites people create, Josephyine If has usefully created a spreadsheet that you can access here. The XLS file lists platforms where film and video content can be shared, including their creators and website addresses (at the time of writing, some 18 different platforms such as Hyphenova, MContent – see video below – and Eluv.io). The main point of the platforms, at least at this stage, is to manage the IP of content, so the emphasis is on how to share blockchain-marked film. It’s probably one of the most interesting aspects and benefits Web3 has for content creators: the ability to sell, track and manage content over time. This is something that’s been a major flaw of the YouTube platform over the years since it evolved into an ad revenue-driven distribution model. If you find any of the platforms particularly useful (or not), or others not mentioned on the list, do drop us a line and let us know.
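
For the technically curious, the ‘blockchain-marking’ these platforms do essentially boils down to registering a cryptographic fingerprint of the film file, so ownership and provenance can be verified even as copies circulate. Here’s a minimal illustrative sketch of the idea (our own example, not any particular platform’s actual API):

```python
import hashlib

def content_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest: a compact, tamper-evident ID for a film file.

    A platform records an ID like this on-chain; any change to the file
    produces a completely different digest, so provenance can be checked
    against the registered original.
    """
    return hashlib.sha256(data).hexdigest()

# Example: fingerprint the raw bytes of a (tiny, stand-in) "film"
film_bytes = b"opening frame data..."
print(content_fingerprint(film_bytes))  # prints a 64-character hex string
```

In practice a platform would hash the full video file and store the digest (plus creator details) in a transaction, which is what makes later sale and tracking of the content possible.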

We also found a potentially interesting distribution platform, primarily for festivals and events, called VisualContainerTV. The platform, launched in 2009, makes content available for free, which puts it in direct competition with the likes of YouTube (a fight it frankly can’t easily win), but more importantly it can also make content accessible behind a paywall. This means artists, creators and curators can receive payment for ticketed content shown over the platform via the internet, and have that content branded and associated with particular curated events. At this stage of its development, it appears to be primarily targeting college students and courses based in Europe (the platform has been developed in Italy), but it is certainly something that looks interesting for small-scale user groups. There are some very interesting arts projects on the site, so if nothing else, add it to your streaming platforms folder to check out periodically for interesting new works.

VisualContainerTV

Tech Update 1 (Nov 2022)

Tracy Harwood Blog October 30, 2022

Hot on the heels of our discussion on AI generators last week, we are interested to see tools already emerging that turn text prompts into 3D objects and also film content, alongside a tool for making music too. We have no less than five interesting updates to share here – plus a potentially very useful tool for rigging the character assets you create!

Another area of rapid technological advancement is mo-cap, especially markerless mo-cap, which, let’s face it, is really the only way to think about creating naturalistic movement-based content. We share two interesting updates this week.

AI Generators

Nvidia has launched an AI tool that will generate 3D objects (see video). Called GET3D (derived from ‘Generate Explicit Textured 3D meshes’), the tool can generate characters and other 3D objects, as explained by Isha Salian on their blog (23 Sept). The code for the tool is currently available on GitHub, with instructions on how to use it here.

Google Research, with researchers at the University of California, Berkeley, is also working on similar tools (reported in Gigazine on 30 Sept). DreamFusion uses NeRF tech to create 3D models which can be exported into 3D renderers and modeling software. You can find the tool on GitHub here.

DreamFusion

Meta has developed a text-to-video generator, called Make-A-Video. The tool can also work from a single image, or fill in between two images to create motion. It currently generates five-second videos, which could be perfect for background shots in your film. Check out the details on their website here (and sign up for their updates too). Let us know how you get on with this one too!

Make-A-Video

Runway has released a Stable Diffusion-based tool that allows creators to switch out bits of images they do not like and replace them with things they do like (reported in 80.lv on 19 Oct), called Erase and Replace. There are some introductory videos available on Runway’s YouTube channel (see below for the Introduction to the tool).

And finally, also available on GitHub, is Mubert, a text-to-music generator, which uses a Deforum Stable Diffusion colab. The tech is described as proprietary: its creator provides a custom license, which states that anything created with it cannot be released on DSPs as your own. It can be used for free, to sync with images and videos, with attribution (mentioning @mubertapp and hashtag #mubert), with an option to contact them directly if a commercial license is needed.

Character Rigging

Reallusion‘s Character Creator 4.1 has launched with built-in AccuRIG tech – this turns any static model into an animation-ready character and also comes with cross-platform support. No doubt very useful for those assets you might want to import from any AI generators you use!

Motion Capture Developments

That ever-ready multi-tool, the digital equivalent of the Swiss army knife, has come to the rescue once again: the iPhone can now be used for full-body mocap in Unreal Engine 5.1, as illustrated by Jae Solina, aka JSFilmz, in his video (below). Jae has used move.ai, which is rapidly becoming the gold standard in markerless mocap tech, and for which you can find a growing number of demo vids on YouTube showing how detailed movement can be captured. You can find move.ai tutorials on Vimeo here, and for more details about which smartphones you can use, go to their website here – it’s very impressive.

Another form of capture focuses on the detail of the image itself: photogrammetry. Reality Capture has launched a tool that you can use to capture yourself (or anyone else for that matter, including your best doggo buddy) and import the resulting mesh into Unreal’s MetaHuman. Even more impressive is that Reality Capture is free – download details here.

We’d love to hear how you get on with any of the tools we’ve covered this week – hit the ‘talk’ button on the menu bar up top and let us know.

S3 E49 Film Review: ‘Most Precious Gift’ by Shangyu Wang (Oct 2022)

Tracy Harwood Podcast Episodes October 19, 2022

This week, Damien has picked a very interesting Eastern-made alien tale. It’s been beautifully shot and rendered using Omniverse, and inspired him to try some of the techniques shown. Ricky is a little more critical of the nostalgic trope. Tracy reflects on the journey of the storytelling, and the nature of what it is to be human that is at the heart of the story. Phil brings Solaris into the discussion, as only Phil can. Overall, we reflect on the different styles of animation used and how influential they were. And, finally, how on earth did the producer achieve that tendril effect?!



YouTube Version of this Episode

Link to Film

Tech Update (Oct 2022)

Tracy Harwood Blog October 3, 2022

This week’s Tech Update picks for machinima, virtual production and 3D content producers:

Nvidia RTX4080

Nvidia is launching two RTX 4080 graphics cards in November… you know what they say, you wait ages for a bus and then two come at once: the RTX 4080 12GB and the RTX 4080 16GB. Here’s the story on PC Gamer‘s website. You can also catch up on all of Nvidia’s latest announcements, made in Jensen Huang’s (CEO) keynote at GTC in September, in this video and on their blog here.

Ricky comments: Of course it was only a matter of time before Nvidia announced the 40x series of RTX graphics cards. Two models have been announced so far, the 4080 and the 4090, with the 30x series sticking around for the lower price range. My guess is that this lets them focus their resources on producing more of just two high-end cards instead of a whole range. Although given the prices of these new cards ($800+), I think I’ll be sticking with my 3070 for the time being.

UE 5.1.0

Unreal Engine have teased the new features coming in v5.1 – see the feature documentation on their website here. Onsetfacilities.com has produced a nice overview (link here), and there’s a helpful explainer by JSFilmz here –

Cine Tracer

Check out the new Actor Animation system in Cine Tracer v0.7.6. This update gives the Actors a set of talking animations that can be used as an alternative to the Posing system.

Follow the socials on Instagram and download Cine Tracer on Steam.

Sketchfab

Sketchfab is doing a weekly listing of top cultural heritage and history models – these are actually pretty amazing and of course downloadable (for a fee)!

source: Sketchfab – cultural heritage and history top 10

DALL-E

DALL-E, one of the creative AI generators generating all the buzz at the moment, has developed a new feature called Outpainting, which helps users extend an image beyond its original borders by adding visual elements in the same style, or taking a story in new directions. This could be great for background shots in virtual productions.

Source: DALL-E, original is Girl with a Pearl Earring by Johannes Vermeer, Outpainting by August Kamp
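
Under the hood, Outpainting is an image-edit request: you place the original picture on a larger canvas whose transparent region tells the model where to paint, and supply a text prompt. As a hedged sketch, here is how the text fields of such a request to OpenAI’s images edit endpoint might be assembled – the endpoint URL and size values are the documented DALL-E 2 ones, but `build_outpaint_fields` is our own illustrative helper, and a real call would also attach the image and mask PNG files plus an API key:

```python
from typing import Dict

# Documented DALL-E 2 image-edit endpoint (as of late 2022)
OPENAI_IMAGES_EDIT_URL = "https://api.openai.com/v1/images/edits"

# Output sizes the DALL-E 2 API accepts
VALID_SIZES = {"256x256", "512x512", "1024x1024"}

def build_outpaint_fields(prompt: str, n: int = 1, size: str = "1024x1024") -> Dict[str, str]:
    """Assemble the text form fields for an outpainting-style edit request.

    A real request would POST these as multipart form data together with
    the `image` and `mask` files (transparent pixels mark the area for the
    model to fill) and an Authorization header carrying the API key.
    """
    if size not in VALID_SIZES:
        raise ValueError(f"size must be one of {sorted(VALID_SIZES)}")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    return {"prompt": prompt, "n": str(n), "size": size}

fields = build_outpaint_fields("extend the portrait into a wide riverside scene")
print(fields["size"])  # 1024x1024
```

For virtual production use, you would repeat the request with the canvas shifted each time, stitching the generated tiles into an ever-wider backdrop.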

Second Life

Second Life have launched a puppetry project for their avatars which, as Wagner James Au reports in his regular blog on all things metaverse and Second Life in particular, uses a webcam for mocap. Check out Au’s review of it here, go directly to Second Life here to read their post about it, and follow their YouTube channel for the latest updates and how-tos here.

Eleven Labs

Eleven Labs have launched Voice Conversion, which lets you transform one person’s voice into another’s. It uses a process called voice cloning to encode the target voice – i.e., the voice we convert to – and generate the same message spoken in a way which matches the target speaker’s identity but preserves the original intonation. What’s interesting about this is the filmmaking potential, but of course there are very clear IP interests that have to be considered here – it has potential for machinima application, but beware the guidelines on using it. Importantly, note that it is primarily going to be used as part of an identity-preserving automatic dubbing tool which Eleven is launching in 2023. More here on this and the guidelines on using Voice Conversion.

Completely Machinima S2 Ep 43 Films (August 2022)

Tracy Harwood Podcast Episodes August 11, 2022

In this episode, Damien, Ricky and Tracy discuss four very different films. Damien reviews an interesting explainer on witches in The Folklore of Phasmophobia game, Ricky presents us with another of Jae Solina’s tutorials, this time on path tracing in Omniverse, and Tracy selects Tiny Elden Ring – yep, it’s tiny! And Phil, absent due to sickness, ironically picked a satirical Zombie fest, which mixed Walking Dead ‘live action’ with machinima! The team then discuss that approach to creating films, highlighting some of the key challenges, with some more fab examples of films that have used the techniques well.



YouTube Version of this Episode

Show Notes and Links

0:57 The Folklore of Phasmophobia | Modern Mythology, by The Digital Dream Club (released 9 January 2021)

The Folklore of Phasmophobia

9:51 NVIDIA Omniverse Machinima Path Tracing Test, by JSFilmz (21 June 2022), and a nice little article on the difference between rasterization, ray tracing and path tracing that folks might find interesting: ‘Nvidia says real-time path tracing is on the horizon, but what is it?’ by Eric Frederiksen, Gamespot.com, 1 May 2022

NVIDIA Omniverse Machinima Path Tracing Test

17:33 Tiny Elden Ring | Tilt Shift, by Flurdeh (11 April 2022); here’s Flurdeh’s list of filmmaking tools https://github.com/Flurdeh/Youtube-Resources and a post-production tutorial on the tilt-shift effect, How to create Tilt-Shift / Miniature World Time-lapses, by Science Filmmaking Tips (24 Jan 2017)

Tiny Elden Ring

27:27 What a typical project Zomboid Run looks like, by Pathoze (26 Jan 2022)

What a typical project Zomboid Run looks like

31:45 Discussion: using live action with machinima footage in films, what are the challenges?

Examples mentioned –

39:11 Damien’s The Great Bug War on Machinima Expo (8 December 2014)

Damien and Kim Genly

46:12 Ricky’s reference to a 2D/3D combo – Carson Mell’s TARANTULA A-1 : Nightmares (5 August 2012), shot in Los Angeles

TARANTULA A-1: Nightmares

48:30 Phil Tippett’s stop-mo film Mad God, including live action with animation (now available on Shudder TV)

Mad God

51:20 Tutsy Navarathna’s film, A Journey into the Metaverse and an interview we did with him on the podcast in Season 1

A Journey into the Metaverse