
Projects Update 1 (Mar 2023)

Tracy Harwood | Blog | March 20, 2023

This month, we have two weeks of projects to share with you. This week, we focus on the Unreal film projects we found. The breadth of work folks are creating with this toolset is astounding – all these films highlight a range of talent, development of workflows and the accessibility of the tools being used. The films also demonstrate what great creative storytelling talent there is among the indie creator communities across the world. Exciting times!

NOPE by Red Render

Alessio Marciello’s (aka Red Render) film NOPE uses UE5, Blender and iClone 8 to create a Jordan Peele-inspired film, released on 11 December 2022. The pace and soundscape are impressive, the lucid dream of a bored schoolboy is an interesting creative choice, and we love the hint of Enterprise at the end! Check it out here –

The Perilous Wager by Ethan Nester

Ethan Nester’s The Perilous Wager, released 28 November 2022, uses UE’s Metahumans and is our next short project pick. It is reminiscent of film noir and crime drama, mixed with a twist of lime. It’s a well-managed story with some hidden depths, only really evidenced in the buzzing of flies. It ends a little abruptly but, as its creator says, it’s about ideas for larger projects. It demonstrates great voice acting, and we also love that Ethan voiced all the characters himself, using Altered.AI, he says, to create vocal deepfakes. He highlights how going through the voice acting process helped him improve his acting skills too – impressive work! We look forward to seeing how these ideas develop in due course. Here’s the link –

Gloom by Bloom

Another dark and moody project (it’s also our feature image for this post), Gloom was created for the Australia and New Zealand short film challenge 2022, supported by Screen NSW and Epic. The film is by Bloom, released 17 December 2022, and was created in eight weeks. The sci-fi concept is great, the voice acting impressive, and the story is well told, with some fab jump scares in it too. The sound design is worth taking note of, but we recommend you wear a headset to get the full sense of the expansive soundscape the team have devised. Overall, a great project, and we look forward to seeing more work from Bloom too –

Adarnia by Adarnia Studio

Our next project is one that turns UE characters into ancient ones – a slightly longer-format project, this has elements of Star Wars, Blade Runner and just a touch of Jason and the Argonauts mixed together, with an expansive cityscape to boot. Adarnia is a sci-fi fantasy created by Clemhyn Escosora and released 19 March 2021. There’s an impressive vehicle chase which perhaps goes on just a little too long, but there’s an interesting use of assets, replicated in different ways across the various scenes, that is brought together nicely towards the end of the film. The birdsong is a little distracting in places – one of those ‘nuisance scores’ we highlighted in last week’s blog post (Tech Update 2). There’s clearly a lot of work that’s gone into this, and perhaps there’s scope for a game to be made with the expansiveness demonstrated in the project, but the film’s story needs to be just a little tighter. We guess the creators agree, because their YouTube channel is full of excerpts focussing on key components of this work. Check out the film here –

Superman Awakens by Antonis Fylladitis

Our final project for this week is a Superman tale, created by a VFX company called Floating House. The film, released on 13 February 2023, is inspired by Alex Ross’ Kingdom Come Superman and is a very interesting mix of comic styling and narrative film, with great voice acting from Daniel Zbel. It’s another great illustration of the quality of the UE assets for talented storytellers –

Next week, we take a look at films made with other engines.

Tech Update 2 (Mar 2023)

Tracy Harwood | Blog | March 13, 2023

We’ve seen a number of tech developments in recent weeks that we’ll share in this post. Everything from free tools, great content packs, wrinkles (for those of a certain age, of course), mocap for newbies, nuisance scores, a heads-up on a lightweight headset, and more!

Lights, Camera, Action

A member of Chantal Harvey’s popular Machinima Mondays Facebook group posted a video recommendation by Kevin Stratvert of five free screen recording tools that all machinima and virtual production folks should have in their applications folder. He usefully goes through the process of using each of them in his tutorial here –

We highlight just a few of the exciting things we’ve seen in the last few weeks for Unreal Engine. A show-and-tell tutorial on making ragdoll puppets, reported in 80.lv and featuring 3D artist and animator Peter Javidpour, gives a great breakdown of the process, including how to rig the virtual camera. The Blueprints-based process was used in his recent short release, My Breakfast with Barf, link here –

Also using Blueprints, Machina-Infinitum.com released a content pack for making procedural fractals. They look really beautiful – and perfect for that next cyberpunk-cum-inceptionist film. The pack isn’t free at $99, but it looks like a good investment, available on the Unreal store here. Here’s a link to their YouTube channel and tutorials for using the assets –

Also not free (£170.70) is another excellent content pack, the British City Pack by Polyspherestudio.com, which contains realistic building assets from what looks like the Whitechapel area of London. Here’s an overview on their YouTube channel –

Reallusion released a much-awaited update to its Character Creator, introducing a dynamic wrinkle system. The plasticity of facial animations using CC4 is something we’ve often found ourselves commenting on in our film reviews, and this is a very interesting development. Check out the overview here –

Plask’s mocap app has been upgraded. This is an app we’ve mentioned before, which allows you to record, edit and animate projects in your browser. For pros, there’s a monthly fee, but for newbies, its freemium model looks like a great way to get started in mocap. Here’s an overview from their YouTube channel, which also contains tutorials on how to integrate the content with platforms like Blender, Unreal Engine and others –

With interoperability at its heart, ReadyPlayerMe is going from strength to strength. Its recent blog post sets out its ambition, and this highlights what great potential its avatars have to be cross-platform virtual storytellers, although as yet we’ve not seen much of that emerging.

For sound design tips, you can do no better than take a look at REAPER. Anne-Sophie Mongeau has written a great two-part article on Asoundeffect.com, which is definitely worth checking out, and whilst you’re there, you can check out the massive curated collection of sound effects on the website too.

For those exploring immersive experiences, we found another great article on Asoundeffect.com; this one discusses the impact of ‘nuisance scores‘ on the listener – we certainly have some experience of that in films we see too.

And for those seeking an alternative to the wearying headsets for virtual reality immersive experiences, Bigscreenvr.com‘s new system looks very impressive. It’s just 127 grams and has great resolution – most headsets weigh in at around 450-650 grams, roughly a bag of sugar for those home chefs in the know – so it will surely be much more usable than current techs. Bigscreen just released an overview of the new set, shipping begins in Q3 2023, and I’m more than tempted to get my order in early on this one…

You’re Welcome!

Finally this week, the Second Life endowment for the arts process is changing. For years, Second Life has been a massive advocate for its community of content creators, and the changes, which give creators more time to develop their builds, are another example of its fantastic support (notwithstanding the truly, err, colourful gif on its announcement page, our feature image this week). Here’s a link to its grant page.

Tech Update 1: AI Generators (Mar 2023)

Tracy Harwood | Blog | March 6, 2023

Genies are everywhere now. In this post, I’ll focus on some of the more interesting areas relating to the virtual production pipeline, which interestingly is becoming clearer day by day. Check out this mandala of the skills identified for virtual production by StoryFutures in the UK (published 2 March) but note that skills for using genies within the pipeline are not there (yet)!

Future of Filmmaking

Virtual Producer online magazine published an interesting article, by Noah Kadner (22 Feb), about the range of genie tools available for the film production pipeline, covering the key stages of pre-production, production and post-production. Alongside it, he gives an overview of some of the ethical considerations we’ve been highlighting too. It’s nice to see the structured analysis of the tools although, of course, what AIs do is change or emphasize aspects of processes, conflate parts and obviate the need for others. Many of the tools identified are ones we’ve already discussed in our blogs on this topic, but it’s fascinating to see the order being put on their use. I think the key thing all of us involved in the world of machinima have learned over the years, however, is that it’s often the indie creators that take things and do stuff no one thought about before, so I for one will be interested to see how these neat categories evolve!

Bits and Pieces

It was never going to take long for users of genies to showcase their ingenuity: last month, whilst Futurism was reporting on the dilemma of ethical behaviour among users who have ‘jailbroken’ the ChatGPT safeguards, MidJourney was busy invoking even more governance over its use. MidJourney says its approach, which now bans the use of words about human reproductive systems, is to ‘temporarily prevent people from creating shocking or gory images’. All this very much reminds me of an AI experiment carried out by Microsoft almost seven years ago to the day as we release this post, on 24 March 2016, and of the artist Zach Blas’ interpretation of that work, showcased in 2017, called ‘Im here to learn so :))))))‘.

For those without long(ish) memories, Blas’ work was a video art installation visualizing Tay, which had been designed by Microsoft as a 19-year-old American female chatbot. As an AI, it lived for just one day on its social media platform, where it was subjected to a tyranny of misogynistic, abusive, hate-filled diatribe. Needless to say, corporate nervousness about its creative representation of the verbiage it generated from its learning processes resulted in it being terminated before it really got going. Blas’ interpretation of Tay, ironically using Reallusion’s CrazyTalk to animate it as an ‘undead AI’, is a useful reminder of how algorithms work and the nature of humanbeans. The link under the image below takes you to where you can watch the video of Tay reflecting on its experience and deepdreams. Salutary.

source: Zach Blas’ website

Speaking of dreams, Dreamix is a creative tool that uses an input video and a text prompt to generate a new output video. In effect, it takes the user through the pre-production, production and post-production process in just one sweep. Here’s a video explainer –

In a not dissimilar vein, ControlNet takes an image generated in Stable Diffusion and applies a controller to inpaint the image in any style you’d like to see. Here’s an explainer by Software Engineering Courses –
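For the technically curious, this is also something you can try at home with the open-source diffusers library. Here’s a minimal sketch of the idea (our own illustration, not from the video – model names and arguments are as we understand them at the time of writing, so do check the diffusers docs): a Canny edge map is pulled from your source image and used to steer the re-render, so the composition stays put while the prompt changes the style.

```python
# Minimal ControlNet sketch using Hugging Face diffusers:
# extract edges from a source image, then re-render it in a new style.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import (ControlNetModel, StableDiffusionControlNetPipeline,
                       UniPCMultistepScheduler)

# 1. Turn the source image into a Canny edge map (the "controller")
source = np.array(Image.open("source.png").convert("RGB"))
edges = cv2.Canny(source, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# 2. Load Stable Diffusion with the Canny ControlNet attached
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()  # keeps VRAM use modest

# 3. Re-render: the edge map keeps the composition, the prompt sets the style
result = pipe("a watercolor painting, soft light",
              image=control_image, num_inference_steps=20).images[0]
result.save("styled.png")
```

Swap the Canny model for one of the depth or pose checkpoints and you get the other ControlNet flavours.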

And here’s the idea taken to a whole new level by Corridor Crew, in their development of an anime film. The explainer takes you through the process they created from scratch, including training an AI –

They describe the process they’ve gone through really well, and it’s surely not going to be too long before this becomes automated with an app you can pick up in a virtual store near you.

Surprise, surprise, here is RunwayML’s Gen-1: not quite the automated app, but pretty close. Runway has created an AI that takes an input video and an image with a style you would like to apply, and with a little bit of genie magic, the output video has the style transferred to it. What makes this super interesting, however, is that Runway Studios is now a thing too – it is the entertainment and production division of Runway and aims to partner with ‘next gen’ storytellers. It has launched two initiatives worth following. The first is an annual AI Film Festival, which just closed its first call for entries; here’s a link to the panel discussion that took place in New York on 1 Mar, with Paul Trillo, Souki Mehdaoui, Cleo Abram and Darren Aronofsky –

The second initiative is its creative grants for ‘aspiring filmmakers from various backgrounds who are in need of production support’. On its Google form, it states that grants take various shapes, including advanced access to the latest AI Magic Tools, funding allocations and educational resources. Definitely worth bearing in mind for your next step in devising machine-cinema stories.

Genious?

Whilst we sit back and wait for the AI-generated films to bubble to the top of our algorithmically controlled YouTube channel – or at least, the ones where Google tools have been part of the process – we bring you a new-old classic. Welcome to FrAIsier 3000. This is described as a parody show that combines surreal humor, philosophical musings and heartfelt moments from an alternate dimension, where a hallucinogenic FrAIsier reflects on the mysteries of existence and the human condition. Wonderful stuff, as ever. Here’s a link to episode 1, but do check out episode 2, waxing lyrical on ‘coq au vin’ as a perfect example of the dichotomy of discipline and carefreeness (and our feature image for this post) –

If you find inspiring examples of AI-generated films, or yet more examples of genies that push at the boundaries of our virtual production world, do get in touch or share in the comments.

Projects Update: Feb 2023

Tracy Harwood | Blog | February 20, 2023

This week’s #MondayMotivation post has some more great examples of machinima and virtual production projects. We have a selection of shorts made using Unreal Engine and another entirely made in Blender, plus a couple of ‘making ofs…’ and a ‘role of…’ also worth checking out.

Projects

An artist we’ve talked about before who has created extensive work over many years in Second Life is Bryn Oh, and now she has created a nostalgic experience called Lobby Cam, which is available on Steam and made using UE5. The experience is a walking tour of an extensive environment, with a story told through the pages of a ripped-up diary. The project has been reviewed by Wagner James Au on his blog, New World Notes, here. It is described as part of a larger narrative, and here’s a video sampler of the tour produced as part of that too… an interesting approach to virtual storytelling –

Off planet, another project which contains amazing detail of other worlds is by Melody Sheep, called The Sights of Space: A Voyage to Spectacular Alien Worlds (released 29 Nov 2022). This is a 30-minute film of speculative depictions of space scenes based on ‘current scientific understanding’ of the Milky Way, albeit with extensive creative license. If you ever wanted to get into a new type of documentary, this is probably the one to have on your watchlist –

We were also thrilled to see what promises to be a very interesting new series by Melody Sheep launching later this year, called The Human Future – check out the trailer on the channel here.

Our next project pick is JOYCE by GTshortStories (released 14 Dec 2022), which uses UE5 and every available tool with it to create an interesting space story. It mixes live action with some well-done animation, and the integration is done really well, so it’s a great example to check out. Joyce is a backchatting robot exploring a facility along with Sergeant Terry Brown – there are many references to popular sci-fi tropes, so do check this one out! GTshortStories is also putting out other creative content, so check out the channel too.

Our final space project for this week is Countdown, by Andrew Klimov on the CGChannel. This is a fast paced story of a crash landing onto an alien planet, all about the crash itself, and it certainly makes you feel it. The crash is the beginning of a new series and you can find out more about that on his website here. There’s also an interesting breakdown of the filmmaking process on his Vimeo channel here.

Our next project pick goes back to the 11th century, inspired by an Umbrian folk tale in the novel ‘E poi si fece buio’ by Matteo Bebi. It is about a dream by Imiltrude, who lived in a hidden village and was sentenced to death for having caused a fire that destroyed a city. The film, HIMIL, is by Tiziano Fioriti and Andrea Brunetti, made using UE5, and is a fascinating first-person perspective piece with a very well done soundscape –

Our next project pick is a Blender-made movie and another example of great storytelling, this time in a cyberpunk environment with a really nice twist in the tale. Not sure it would be Ricky’s cup of tea, to his point about emotional representation, but I certainly loved it! The story has been created by the Blender HQ team, so it’s by no means an indie endeavour – there’s a team of folks behind the processes employed – but it’s definitely worth watching; check out the pace of the action and the sound design in particular. The film is called Charge – Blender Open Movie (released 15 Dec 2022), and you can access the production files and making-of videos for the film here.

Making of…

We always love a good homage to Star Wars, and this week we have a feature from the Reallusion Magazine, which describes how iClone and the Vicon mocap system have been used to recreate that iconic ‘I am your father’ scene from The Empire Strikes Back. The short has been made by Luis Cepeda from Quitasueño Studios, based in the Dominican Republic, and he provides a great step-by-step guide to how the short was made, with a video overview here –

Ever wondered how to use a MIDI controller with UE5, letting you drive all sorts of effects in real time just from the keyboard? Well, here’s a fantastic video tutorial for you by Taiyaki Studios featuring Cory Williams –

Role of…

And finally this week, we share Loralee Sundra’s video on the Internet Archive about the value of public domain films from her perspective as a Frontline Fellow at the Documentary Film Legal Clinic at UCLA School of Law. Her talk was part of the Internet Archive’s Public Domain Day 2023 celebration, held on 25 Jan 2023.



Tech Update 2 (Feb 2023)

Tracy Harwood | Blog | February 13, 2023

This week, we highlight some time-saving examples for generating 3D models using – you guessed it – AIs, and we also take a look at some recent developments in motion tracking for creators.

3D Modelling

All these examples highlight that generating a 3D model isn’t the end of the process; once it’s in Blender, or another animation toolset, there’s definitely more work to do. These add-ons are intended to help you reach your end result more quickly, cutting out some of the more tedious aspects of the creative process using AIs.

Blender is one of those amazing animation tools that has a very active community of users and, of course, a whole heap of folks looking for quick ways to solve challenges in their creative pipeline. We found folks who have integrated OpenAI’s ChatGPT into the toolset by developing add-ons. Check out this illustration by Olav3D, whose comment about using ChatGPT for attempting to write Python scripts sums it up nicely: “better than search alone” –
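To give you a flavour, here’s the kind of simple script ChatGPT will happily draft when you ask it for Blender Python. This one is our own illustrative sketch, not Olav3D’s – it just scatters some randomly sized cubes around the scene – and can be pasted straight into Blender’s Scripting workspace:

```python
# Illustrative example of a ChatGPT-style Blender script:
# scatter randomly sized cubes across the scene.
# Paste into Blender's Scripting workspace and press Run Script.
import bpy
import random

# Remove any existing mesh objects so the script can be re-run cleanly
for obj in list(bpy.data.objects):
    if obj.type == 'MESH':
        bpy.data.objects.remove(obj, do_unlink=True)

# Add 20 cubes at random positions and sizes
for _ in range(20):
    bpy.ops.mesh.primitive_cube_add(
        size=random.uniform(0.2, 1.0),
        location=(
            random.uniform(-5.0, 5.0),
            random.uniform(-5.0, 5.0),
            random.uniform(0.0, 3.0),
        ),
    )
```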

Dreamtextures by Carson Katri is a Blender add-on using Stable Diffusion which is so clever that it even projects textures onto 3D models (with our thanks to Krad Productions for sharing this one). In this video, Default Cube talks about how to get results with as few glitches as possible –

and this short video by Vertex Rage tells you how to integrate Dreamtextures into Blender –

To check out Dreamtextures for yourself, you can find Katri’s application on GitHub here, and should you wish to support his work, you can subscribe to his Patreon channel here too.

OpenAI also launched its Point-E 3D model generator this month, which can then be imported into Blender. But, as CGMatter has highlighted, using the published APIs means a very long time sitting in queues for the downloads, whilst downloading the code to run on your own machine is easy – and once you have it, you can create point-cloud models in seconds. He runs the code from Google’s Colab, which means you can run it in the cloud. Here’s his tutorial on how to use Point-E without the wait, giving you access to your own version of the code (on GitHub) in Colab –
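For those who’d rather read than watch, the gist of running Point-E yourself looks something like the sketch below, condensed from the example notebook in the openai/point-e GitHub repo (checkpoint and function names are as we found them at the time of writing; check the repo in case they’ve since moved):

```python
# Condensed from point-e's text-to-point-cloud example notebook:
# generate a coloured point cloud from a text prompt.
import torch
from tqdm.auto import tqdm
from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Base text-to-point-cloud model, plus an upsampler for extra points
base_model = model_from_config(MODEL_CONFIGS['base40M-textvec'], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint('base40M-textvec', device))
upsampler = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler.eval()
upsampler.load_state_dict(load_checkpoint('upsample', device))

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler],
    diffusions=[diffusion_from_config(DIFFUSION_CONFIGS['base40M-textvec']),
                diffusion_from_config(DIFFUSION_CONFIGS['upsample'])],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # don't text-condition the upsampler
)

# Sample a point cloud conditioned on a text prompt
samples = None
for x in tqdm(sampler.sample_batch_progressive(
        batch_size=1, model_kwargs=dict(texts=['a red motorcycle']))):
    samples = x
point_cloud = sampler.output_to_point_clouds(samples)[0]  # ready for export
```

From there, the point cloud can be meshed and brought into Blender.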

We also found another very interesting Blender add-on; this one lets you import models from Google Maps into the toolset. The video is a little old, but the latest update of the mod on GitHub, version 0.6.0 (for RenderDoc 1.25 and Blender 3.4), has just been released, created by Elie Michel –

We were also interested to see NVIDIA’s update at CES (in January). It announced a release for the Omniverse Launcher that supports 3D animation in Blender, with generative AIs that enhance characters’ movement and gestures, a future update to Canvas that includes 360° surround images for panoramic environments, and an AI ToyBox that enables you to create 3D meshes from 2D inputs. Ostensibly, these tools are for creators developing work for metaverse and web3 applications, but we already know NVIDIA’s USD-based tools are incredibly powerful for supporting collaborative workflows, including machinima and virtual production. Check out the update here, and this is a nice little promo video that sums up the integrated collaborative capabilities –

Tracking

As fast as the 3D modelling scene is developing, so is motion tracking. Move.ai, which launched late last year, announced its pricing strategy this month: $365 for 12 months of unlimited processing of recordings. This is markerless mocap at its very best, although not so much if you want to do live mocap (no pricing strategy announced for that yet). Move.ai (our feature image for this article) lets you record content using mobile phones (a couple of old iPhones will do). You can find out more on its new website here, and here’s a fun taster, called Gorillas in the Mist, with ballet and four iPhones, released in December by the Move.ai team –

And another app, although not 3D, is Face 2D Live, released by Dayream Studios – Blueprints in January. This tool allows you to live link the Face app on your iPhone or iPad to make cartoons out of just about anything, including with your friends also using the iPhone app. It costs just $14.99 and is available on the Unreal Marketplace here. Here’s a short video example to whet your appetite – we can see a lot of silliness ensuing with this one for sure!

Not necessarily machinima, but for those interested in more serious facial mocap, Weta has been talking about how it developed its facial mocap processes for Avatar, using something called an ‘anatomically plausible facial system’. This is an animator-centric system that captures muscle movement, rather than ‘facial action coding’, which focusses on identifying emotions. Weta states its approach leads to a wider set of facial movements being integrated into the mocapped output – we’ll no doubt see more in due course. Here’s an article on the FX Guide website which discusses the approach being taken, and for a wider-ranging discussion of the types of performance tracking used by the Weta team, Corridor Crew have bagged a great interview with the Avatar VFX supervisor, Eric Saindon, here –