S4 E120 Machinima News Omnibus (Mar 2024)

Tracy Harwood Podcast Episodes March 14, 2024

This week’s podcast episode is our curated news omnibus for this month. We cover a lot, and enjoy reflecting on the significance of the stories we highlight for the world of machinima and virtual production.

btw, the international internet pipes failed and microwaves fried our apps during this recording session: video corrupted and monsters ran loose among us. So enjoy the voice data we’ve managed to resurrect and the video clips we’ve added – we’ll be back next week in full glory!



YouTube Version of This Episode

Show Notes & Links

Fan communities under pressure?

Valve’s Steam policy and recent takedowns of Team Fortress 2 and Portal fan projects – article on GamesRadar here

Half-Life 3 aka Entropy Zero (and 2) projects – Fandom.com overview here and video –

New Moviemaking Toolsets

Blockbuster Inc, Prologue version now on Steam here

Demo by Orbital Potato here –

Replikant now in free beta on the Unreal Marketplace.

Replikant has a YouTube channel with plenty of tutorials on it already, and there’s a great demo which gives you a sense of the animation quality it produces –

And for those wanting a quick and dirty tutorial on UE5, check this out –

Steamboatin’ Along!

Minecraft Steamboat Willie, a fun take on the classic, made by Red and Blue –

Fewture Studios’ trailer for The Return of Steamboat Willie, made in Unreal Engine –

Screams

The origins and impact of the Wilhelm Scream – BBC shorts here

More about Sheb Wooley here

And another video about it here –

https://www.youtube.com/watch?v=HNvZYzg7o68

AI Genies Rising

Community action against deepfakes of Taylor Swift – link to story on BBC site here

Jae Solina has done a nice overview of OpenAI’s Sora here –

and this is a link to RunwayML’s video controller toolset (and our feature image for this post) –

The Suno model for music composition – here is a fun example on X –

S3 E52 Film Review: Metaverse Music Video by JSFilmz (Nov 2022)

Tracy Harwood Podcast Episodes November 9, 2022

This week’s pick is a 360 music video – a ‘metaverse’ video – by a creator we’ve been following all year, Jae Solina aka JSFilmz. The film has been created in UE5 and includes some nifty mocap, great dance moves and some interesting lighting effects. Hear what the team have to say about the film and format and let us have your comments too!



YouTube Version of this Episode

Show Notes and Links

Metaverse Music Video, released 10 Sept 2022 (note, the video can be viewed as a VR experience or a 360 video) – where is the Batman Easter Egg?!!!

Our discussion on Friedrich Kirschner’s immersive machinima, person2184, in THIS episode

Nightmare Puppeteer allows 360 filmmaking – check out the engine on Steam HERE

Key question: what new language might be needed for machinima story vs experience creators to get the most out of VR/360 formats?

Credits –

Speakers: Ricky Grove, Damien Valentine, Tracy Harwood (MIA Phil Rice, courtesy of Hurricane Ian)
Producer/Editor: Damien Valentine
Music: Scott Buckley – www.scottbuckley.com.au CC 00

Tech Update 1 (Nov 2022)

Tracy Harwood Blog October 30, 2022

Hot on the heels of our discussion on AI generators last week, we are interested to see tools already emerging that turn text prompts into 3D objects and film content, alongside a tool for making music too. We have no fewer than five interesting updates to share here – plus a potentially very useful tool for rigging the character assets you create!

Another area of rapidly developing technological advancement is mocap, especially markerless capture, which, let’s face it, is really the only way to think about creating naturalistic movement-based content. We share two interesting updates this week.

AI Generators

Nvidia has launched an AI tool that will generate 3D objects (see video). Called GET3D (derived from ‘Generate Explicit Textured 3D meshes’), the tool can generate characters and other 3D objects, as explained by Isha Salian on Nvidia’s blog (23 Sept). The code for the tool is currently available on GitHub, with instructions on how to use it here.
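If you’re curious what that pipeline looks like in practice, here is a minimal toy sketch of the ‘latent code in, textured mesh out’ idea behind GET3D – note the generator function is a hypothetical stub, not Nvidia’s actual API (their GitHub repo has the real instructions), so treat this purely as an illustration of the export step:

```python
import torch

def generate_textured_mesh(latent):
    # Hypothetical stub standing in for GET3D's trained generator,
    # which maps a latent code to an explicit textured mesh.
    verts = torch.rand(100, 3)               # xyz vertex positions
    faces = torch.randint(0, 100, (50, 3))   # triangle vertex indices
    return verts, faces

latent = torch.randn(512)                    # sample a random latent code
verts, faces = generate_textured_mesh(latent)

# Export as a Wavefront OBJ so any DCC tool or game engine can import it.
with open("generated_asset.obj", "w") as f:
    for x, y, z in verts.tolist():
        f.write(f"v {x:.4f} {y:.4f} {z:.4f}\n")
    for a, b, c in faces.tolist():
        f.write(f"f {a + 1} {b + 1} {c + 1}\n")  # OBJ indices are 1-based
```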

Google Research, together with researchers at the University of California, Berkeley, is also working on similar tools (reported in Gigazine on 30 Sept). DreamFusion uses NeRF tech to create 3D models which can be exported into 3D renderers and modeling software. You can find the tool on GitHub here.

DreamFusion
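Under the hood, DreamFusion’s trick is ‘score distillation’: a NeRF is optimised so its rendered views look plausible to a pretrained text-to-image diffusion model. Here’s a toy PyTorch sketch of just that optimisation idea – everything is a simplified stand-in, not Google’s code: a learnable image takes the place of a full NeRF, and a stub takes the place of the real diffusion model’s noise predictor:

```python
import torch
import torch.nn.functional as F

# Stand-in "scene": a tiny learnable image optimised instead of a real
# NeRF, so the sketch stays self-contained and runnable.
render = torch.rand(1, 3, 64, 64, requires_grad=True)

def predict_noise(noisy, t):
    # Hypothetical stub for a pretrained text-conditioned diffusion
    # model's noise prediction (Imagen, in Google's paper).
    return noisy * 0.1

opt = torch.optim.Adam([render], lr=1e-2)
for step in range(200):
    t = torch.rand(1)                        # toy timestep in [0, 1)
    alpha = 1.0 - t.item()                   # toy noise schedule
    noise = torch.randn_like(render)
    noisy = alpha**0.5 * render + (1.0 - alpha)**0.5 * noise
    # Score distillation: the gap between predicted and true noise is
    # injected directly as a gradient on the render via a surrogate loss.
    grad = predict_noise(noisy, t) - noise
    target = (render - grad).detach()
    loss = 0.5 * F.mse_loss(render, target, reduction="sum")
    opt.zero_grad()
    loss.backward()
    opt.step()
```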

Meta has developed a text-to-video generator called Make-A-Video. The tool can also work from a single image, or fill in between two images to create motion. It currently generates five-second videos, which are perfect for background shots in your film. Check out the details on Meta’s website here (and sign up to their updates too). Let us know how you get on with this one too!

Make-A-Video

Runway has released a Stable Diffusion-based tool, called Erase and Replace, that lets creators switch out the bits of an image they do not like and replace them with things they do (reported in 80.lv on 19 Oct). There are some introductory videos available on Runway’s YouTube channel (see below for the Introduction to the tool).
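Runway’s hosted tool is point-and-click, but the same erase-and-replace idea can be sketched with the open-source diffusers library and the Stable Diffusion inpainting checkpoint Runway published – a minimal example, assuming a CUDA GPU and placeholder file names and prompt:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load the Stable Diffusion inpainting weights Runway released.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("scene.png").convert("RGB")  # placeholder source image
mask = Image.open("mask.png").convert("RGB")    # white areas get replaced

result = pipe(
    prompt="a vintage steamboat on a calm river",  # what to paint in
    image=image,
    mask_image=mask,
).images[0]
result.save("scene_replaced.png")
```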

And finally, also available on GitHub, is Mubert, a text-to-music generator built around a Deforum Stable Diffusion colab. Described as proprietary tech, its creator provides a custom license, but anything created with it cannot be released on DSPs (digital streaming platforms) as your own work. It can be used for free to sync with images and videos, provided you credit @mubertapp and use the hashtag #mubert, with an option to contact them directly if a commercial license is needed.

Character Rigging

Reallusion’s Character Creator 4.1 has launched with built-in AccuRIG tech – this turns any static model into an animation-ready character and also comes with cross-platform support. No doubt very useful for those assets you might want to import from any AI generators you use!

Motion Capture Developments

That ever-ready multi-tool, the digital equivalent of the Swiss army knife, has come to the rescue once again: the iPhone can now be used for full-body mocap in Unreal Engine 5.1, as illustrated by Jae Solina, aka JSFilmz, in his video (below). Jae has used move.ai, which is rapidly becoming the gold standard in markerless mocap tech, and for which you can find a growing number of demo vids on YouTube showing just how detailed the captured movement can be. You can find move.ai tutorials on Vimeo here, and for more details about which smartphones you can use, go to their website here – it’s very impressive.

Another form of capture focuses on the detail of the image itself. Reality Capture has launched a tool that you can use to capture yourself (or anyone else for that matter, including your best doggo buddy) and import the resulting mesh into Unreal’s MetaHuman. Even more impressive is that Reality Capture is free – download details here.

We’d love to hear how you get on with any of the tools we’ve covered this week – hit the ‘talk’ button on the menu bar up top and let us know.