Reality Capture

Projects Update (Jan 2023)

Tracy Harwood Blog January 2, 2023 Leave a reply

To kick-start 2023 with a virtual BANG, we are highlighting some projects we’ve seen that are great examples of machinima and virtual production, demonstrating a breadth of techniques, a range of technologies, and some good ole’ short-form storytelling. We also really enjoyed Steve Cutts’ tale of man… let’s hope for a peaceful and happy year. Enjoy!

Force of Unreal

We were massively impressed throughout last year with the scope of creative work being produced in Unreal Engine. So, we have a few more to tell you about!

RIFT by HaZimation is a sci-fi anime-style film with characters created in Reallusion’s Character Creator. The film debuted at the Spark Computer Graphics Society’s Spark Animation Festival last October. We love the stylized effects used here, which Haz Dulull, director/producer, describes as a combination of 2D and 3D in this article (scroll down past the halfway point). We are also impressed that the same 3D assets and environment used in the filmmaking process have been integrated into an FPS game. The game is currently available free on Steam in early access here. This is another great example of creators using virtual assets in multiple ways – and it builds very much on the model that Epic envisaged when they first released the City sample last year, hot on the heels of the release of The Matrix Resurrections film and The Matrix Awakens: UE5 Experience, for which the city was created. We also love HaZimation’s strategy of co-creating the new RIFT game experience with players – “We at HaZimation believe that a great game is only possible with direct feedback from the audience as early as possible” (Steam). We fully expect to see more creative works using the RIFT content in future too. Congrats to everyone involved.

As any of you who have been following the podcast will have gathered, we love a good alien film too, and we have found another made in UE5 that we really enjoyed. This one is called The Lab, by Haylox (released 14 Sept 2022). The director/producer builds the suspense well although, of course, it’s the same Alien trope we’ve seen many times over. Nonetheless, it has nice effects and a well-balanced soundscape.

We also love a good music video. The next project is a dance video made by Guru Pradeep using the music ‘Urvashi’ – Kaadhalan (A R Rahman), released 2 Aug 2022. It’s a little rough around the edges, having seemingly been cobbled together from Megascans, Sketchfab and items grabbed from the UE Marketplace, but the mocap (although we don’t know what was used to capture it) is particularly well done, as is the editing. We look forward to seeing more from this creator in future.

Aspiring Assets

We want to highlight the amazing content that’s being developed for use in UE with Reality Capture. In this video, which is not a film but more a ‘show and tell’ than a tutorial, William Faucher reveals how he created a Lofoten-inspired cabin environment from the 1800s. It’s impressive stuff if you have an eye for photogrammetry, and it covers some of the challenges of asset creation too; there are lots of tips and hints in here, with more detailed tutorials on his channel.

We have also been impressed with the range of fabulous assets being created and used in the KitBash3D Mission to Minerva challenge (closed 2 Dec 2022), the outcome of which will be a new galaxy combining the concept artworks and in-motion content submitted. There are some really nice videos, which you can find using #kb3dchallenge on YouTube, that are definitely worth a looksee. We liked this one by Mike Seto, which has a nice touch of disaster about it.

With an impressive field of judges that included talent acquisition representatives from NASA Concept Labs, Netflix, Riot Games and ILM, winners were announced on 20 Dec.

And Finally?

Let’s hope for a more progressive year in 2023 than the hate-filled traps that befell so many across a whole plethora of virtual platforms and IRL… and maybe reflect on the message contained within this great fun short, created in Clip Studio Paint with Cinema 4D and After Effects. The film, by Steve Cutts, is called A Brief Disagreement, released 30 Sept 2022. Steve is no n00b in the world of machinima (going back to the earlier days of Reallusion’s CrazyTalk) – his classic comedy about the fate of Roger and Jessica Rabbit, as well as every other iconic cartoon character you can think of, is still a good laugh even 8 years after its release for those of a certain age (and it’s the featured image for this article, in case you were wondering)!

Tech Update 1 (Nov 2022)

Tracy Harwood Blog October 30, 2022 Leave a reply

Hot on the heels of our discussion of AI generators last week, we are interested to see tools already emerging that turn text prompts into 3D objects and film content, alongside a tool for making music too. We have no fewer than five interesting updates to share here – plus a potentially very useful tool for rigging the character assets you create!

Another area of rapidly developing technological advancement is mocap, especially markerless capture, which, let’s face it, is really the only way to think about creating naturalistic movement-based content. We share two interesting updates this week.

AI Generators

Nvidia has launched an AI tool that will generate 3D objects (see video). Called GET3D (derived from ‘Generate Explicit Textured 3D meshes’), the tool can generate characters and other 3D objects, as explained by Isha Salian on the Nvidia blog (23 Sept). The code for the tool is currently available on GitHub, with instructions on how to use it here.

Google Research, together with researchers at the University of California, Berkeley, is also working on similar tools (reported in Gigazine on 30 Sept). DreamFusion uses NeRF tech to create 3D models which can be exported into 3D renderers and modeling software. You can find the tool on GitHub here.

DreamFusion
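For the curious, the NeRF tech underpinning DreamFusion renders a scene by compositing colour samples along each camera ray, weighting each sample by its density and by how much light survives to reach it. Here is a minimal sketch of that compositing step in NumPy – our own toy illustration, not DreamFusion’s actual code (all function and variable names are ours):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Composite colour samples along one camera ray, NeRF-style.

    sigmas: (N,) volume densities at each sample point
    colors: (N, 3) RGB colour at each sample point
    deltas: (N,) distances between consecutive samples
    """
    # Opacity contributed by each sample
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: fraction of light surviving to reach each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas              # per-sample blend weights
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# A dense red sample near the camera should dominate the rendered colour
sigmas = np.array([5.0, 0.1, 0.1])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
deltas = np.full(3, 0.5)
rgb, weights = composite_ray(sigmas, colors, deltas)
```

DreamFusion’s trick is to optimise the densities and colours so that renders like this match what a text-to-image model expects to see from the prompt.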

Meta has developed a text-to-video generator called Make-A-Video. The tool can animate a single image, or fill in between two images to create motion. It currently generates five-second videos, which are perfect for background shots in your film. Check out the details on their website here (and sign up for their updates too). Let us know how you get on with this one too!

Make-A-Video
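To give a feel for what “filling in between two images” means at its very simplest: the crudest in-betweening is a linear cross-fade, where each intermediate frame is a weighted blend of the two endpoints. Make-A-Video uses a learned model that does far more than this – the sketch below is just our own toy NumPy illustration of the idea, with all names invented by us:

```python
import numpy as np

def crossfade(frame_a, frame_b, n_frames):
    """Generate n_frames linearly blended frames between two images.

    A crude stand-in for learned in-betweening: each output frame
    is a weighted average of the two endpoint frames.
    """
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)            # blend factor, 0.0 -> 1.0
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# Two tiny 2x2 "images": black fading to white over five frames
a = np.zeros((2, 2))
b = np.ones((2, 2))
frames = crossfade(a, b, 5)
```

A learned model replaces the fixed blend with predicted motion, so objects move between the frames rather than ghosting through each other.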

Runway has released a Stable Diffusion-based tool called Erase and Replace, which allows creators to swap out the bits of images they don’t like and replace them with things they do (reported in 80.lv on 19 Oct). There are some introductory videos available on Runway’s YouTube channel (see below for the introduction to the tool).

And finally, also available on GitHub, is Mubert, a text-to-music generator. The tool uses a Deforum Stable Diffusion colab. Described as proprietary tech, its creator provides a custom license but says anything created with it cannot be released on digital streaming platforms (DSPs) as your own. It can be used for free, with attribution, to sync with images and videos – mentioning @mubertapp and hashtag #mubert – with an option to contact them directly if a commercial license is needed.

Character Rigging

Reallusion‘s Character Creator 4.1 has launched with built-in AccuRIG tech – this turns any static model into an animation-ready character and comes with cross-platform support. No doubt very useful for those assets you might want to import from any AI generators you use!

Motion Capture Developments

That ever-ready multi-tool, the digital equivalent of the Swiss Army knife, has come to the rescue once again: the iPhone can now be used for full-body mocap in Unreal Engine 5.1, as illustrated by Jae Solina, aka JSFilmz, in his video (below). Jae used move.ai, which is rapidly becoming the gold standard in markerless mocap tech, and you can find a growing number of demo vids on YouTube showing how detailed the captured movement can be. You can find move.ai tutorials on Vimeo here, and for more details about which versions of which smartphones you can use, go to their website here – it’s very impressive.

Another form of capture focuses on the detail of the image itself. Reality Capture has launched a tool that you can use to capture yourself (or anyone else for that matter, including your best doggo buddy) and import the resulting mesh into Unreal’s MetaHuman. Even more impressive is that Reality Capture is free – download details here.

We’d love to hear how you get on with any of the tools we’ve covered this week – hit the ‘talk’ button on the menu bar up top and let us know.