Tech Update 1 (Nov 2022)

Tracy Harwood Blog October 30, 2022

Hot on the heels of our discussion on AI generators last week, we are interested to see tools already emerging that turn text prompts into 3D objects and even film content, alongside a tool for making music too. We have no fewer than five interesting updates to share here – plus a potentially very useful tool for rigging the character assets you create!

Another area of rapidly developing technological advancement is mo-cap, especially in the domain of markerless, which, let's face it, is really the only way to think about creating naturalistic movement-based content. We share two interesting updates this week.

AI Generators

Nvidia has launched an AI tool that will generate 3D objects (see video). Called GET3D (derived from 'Generate Explicit Textured 3D meshes'), the tool can generate characters and other 3D objects, as explained by Isha Salian on Nvidia's blog (23 Sept). The code for the tool is currently available on GitHub, with instructions on how to use it here.

Google Research, together with researchers at the University of California, Berkeley, is also working on similar tools (reported in Gigazine on 30 Sept). DreamFusion uses NeRF tech to create 3D models which can be exported into 3D renderers and modeling software. You can find the tool on GitHub here.

DreamFusion

Meta has developed a text-to-video generator, called Make-A-Video. The tool uses a single image or can fill in between two images to create some motion. The tool currently generates five-second videos, which are perfect for background shots in your film. Check out the details on their website here (and sign up to their updates too). Let us know how you get on with this one too!

Make-A-Video

Runway has released a Stable Diffusion-based tool that allows creators to switch out bits of images they do not like and replace them with things they do like (reported in 80.lv on 19 Oct), called Erase and Replace. There are some introductory videos available on Runway’s YouTube channel (see below for the Introduction to the tool).

And finally, also available on GitHub, is Mubert, a text-to-music generator. This tool uses a Deforum Stable Diffusion Colab. Described as proprietary tech, its creator provides a custom license but says anything created with it cannot be released on DSPs as your own. It can be used for free with attribution to sync with images and videos, mentioning @mubertapp and hashtag #mubert, with an option to contact them directly if a commercial license is needed.

Character Rigging

Reallusion's Character Creator 4.1 has launched with built-in AccuRIG tech – this turns any static model into an animation-ready character and also comes with cross-platform support. No doubt very useful for those assets you might want to import from any AI generators you use!

Motion Capture Developments

That ever-ready multi-tool, the digital equivalent of the Swiss army knife, has come to the rescue once again: the iPhone can now be used for full-body mocap in Unreal Engine 5.1, as illustrated by Jae Solina, aka JSFilmz, in his video (below). Jae has used move.ai, which is rapidly becoming the gold standard in markerless mocap tech and for which you can find a growing number of demo vids on YouTube showing how detailed movement can be captured. You can find move.ai tutorials on Vimeo here and, for more details about which versions of which smartphones you can use, go to their website here – it's very impressive.

Another form of capture concerns the detail of the image itself. Reality Capture has launched a tool that you can use to capture yourself (or anyone else for that matter, including your best doggo buddy) and use the resulting mesh to import into Unreal's MetaHuman. Even more impressive is that Reality Capture is free – download details from here.

We’d love to hear how you get on with any of the tools we’ve covered this week – hit the ‘talk’ button on the menu bar up top and let us know.

S3 E50 Little White Poney Inn by Olibith (Oct 2022)

Tracy Harwood Podcast Episodes October 26, 2022

This week Ricky goes all out Halloween for us.  His selection is an old-style machinima (think Chaplin or Keaton) by one of the most prolific Warcraft movie makers from back in the day… actually 2008 for this one.  The tale by Olibith has been made in World of Warcraft and has shades of Lovecraft, the grimmest of Grimms’ fairy tales and The Flintstones!  We also discuss a bonus Lovecraftian film for all you Halloween buffs, It Lives Within the Sea by Orange Squadron (dir. Dominic Edwards) made in RDR2.



YouTube Version of this Episode

Links and Notes

Little White Poney Inn by Olibith, rel 22 September 2008

Olibith’s Warcraft movies page with tutorials on how to make machinima (from 2010)

Have a look at Olibith’s other work (Vimeo channel here), such as Le Terroriste

After we had recorded this show, we went looking for Olibith on social media and were dismayed to find that he passed away earlier this year. We were all very saddened by this news and of course extend our deepest condolences to his family.  We dedicate this episode to his memory.

It Lives Within the Sea by Squadron Orange (Dir. Dominic Edwards), rel 28 Aug 2021

Report: Creative AI Generators (Oct 2022)

Tracy Harwood Blog October 23, 2022

In this month’s special report, we take a look at some of the key challenges in using creative AI generators such as DALL-E, MidJourney, Stable Diffusion and others. Whilst we think they have FANTASTIC potential for creators, not least because they cut down the time in finding some of the creative ideas you want to use, there are some things that are emerging that need to be considered when using them.

Firstly, IP is a massive issue. As noted in this article on Kotaku (Luke Plunkett), the recent rise of AI-created art has brought to the fore some of the moral and legal problems in using it. In terms of the moral issues, some are afraid of a future where entry-level art positions are taken over by AI, while others see AI-created art as a reflection of what's already occurring between artists – the influence of style and content… but this is an argument that came to the fore when computers were first used by artists back in the 1960s. Quite frankly, we are now seeing some of the most creative work in a generation come to fruition that just would not have happened without computational assistance. Take a look at the Lumen Prize annual entries, for example, to see what the state of the art is with creative possibilities of AI and other tech. Tracy even directs an Art AI Festival, aiming to showcase some of the latest AIs in creative applications, working in collaboration with one of the world's leading creative AI curators, Luba Elliott.

As to the legal issues, these are really only just emerging, and in a very disjointed and piecemeal way. It was interesting to note that Getty Images notified its contributors in an email (21 Sept 2022) that "Effective immediately, Getty Images will cease to accept all submissions created using AI generative models (e.g., Stable Diffusion, Dall‑E 2, MidJourney, etc.) and prior submissions utilizing such models will be removed." It went on to state: "There are open questions with respect to the copyright of outputs from these models and there are unaddressed rights issues with respect to the underlying imagery and metadata used to train these models. These changes do not prevent the submission of 3D renders and do not impact the use of digital editing tools (e.g., Photoshop, Illustrator, etc.) with respect to modifying and creating imagery." This is hot on the heels of a number of developments earlier in the year: in February 2022, the US Copyright Office refused to acknowledge that an AI could hold copyright of its creative endeavour (article here). By September 2022, an artwork created with MidJourney by Jason Allen that won the Colorado State Fair contest was causing a major stir across the art world as to what constitutes art, as outlined in this article (Smithsonian Magazine) and this short news report here –

Of course, the real dilemma is what happens to artists, particularly those at the lower end of the food chain. By way of another example, consider the UK actors' union Equity's response to recent proposals by the Government to include a data mining exemption for audio-visual content in its proposed new AI regulation. That's interesting because a number of organizations that would otherwise employ these artists, say as graphic designers or concept artists, are already rapidly replacing them with AI-generated images – Cosmopolitan used its 'first AI generated cover' in June 2022 and advertising agencies the world over are doing likewise (Adage article). Some image users have even stated that in future they will ONLY use these tools as image sources, effectively cutting out the middle man, and indeed the originator of the contributory works. So, of course, Getty is not going to be happy about this… and neither are the many contributors to their platforms.

And so here is the nub of the problem: in the rush that is now going to follow Getty's stance (with others of similar influence probably to follow), how will the use of AI generators be policed? This has pretty serious consequences because it has implications for all content, including on YouTube and in festivals and contests around the world – how would creative works like The Crow be judged (see our blog post here too)? It certainly places emphasis on the role of metadata and statements of authorship, but it is also as good an argument as we can think of for using blockchain too! The Crow, for example, briefly mentions the AI generator tool it used, which is freely available on Google Colab here, but it doesn't show the sources of the underlying training data set used.

AI code source is PyTTI Colab Notebook (sportsracer48)
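The statements of authorship we argue for above could start very simply. As a toy sketch (in Python, with field names entirely of our own invention rather than any real standard), a creator could publish a hash-chained provenance record listing the tools and source assets behind a film – tamper-evidence being the property people usually reach for blockchains to get:

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Content fingerprint for an asset (image, video frame, model weights)."""
    return hashlib.sha256(data).hexdigest()

def provenance_record(title, author, tools, assets, prev_hash="0" * 64):
    """Build a provenance entry; chaining prev_hash makes later edits
    tamper-evident, since changing any earlier record changes its hash."""
    record = {
        "title": title,
        "author": author,
        "tools": tools,  # e.g. the AI generators used
        "asset_hashes": [sha256_hex(a) for a in assets],
        "prev": prev_hash,
    }
    # Hash a canonical (sorted-key) serialization of the record itself
    record["hash"] = sha256_hex(json.dumps(record, sort_keys=True).encode())
    return record

rec = provenance_record(
    title="The Crow",
    author="example creator",
    tools=["PyTTI (Colab)"],
    assets=[b"frame-0001-bytes", b"frame-0002-bytes"],
)
print(rec["hash"])  # 64-char hex digest; changing any field changes it
```

A festival or platform could then verify the record against the submitted files without needing any central registry.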

We contend that the only way to police the use of AI-generated content is actually by using AI, say by analysing pixel-level detail… and that's because one of Getty's points is no doubt going to be how their own stock images, even with copyright claims over them, have been used in training data sets. AI simply cuts out the stuff it doesn't want and voilà, something useful emerges! So, unless there is greater transparency and disclosure among the creators of AI generators AS A PRIORITY on where images have been scraped from and how they have been used, there is going to be a major problem for all types of content creators – including machinima and virtual production creators using these tools as a way to infuse new ideas into their creative projects – and all the more so as the ability to turn a 2D image into a 3D object becomes accessible to a wider range of creators. Watch this space!
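To give a flavour of what pixel-level analysis means, here is a deliberately tiny 'perceptual hash' sketch in pure Python – a toy stand-in for the far more robust techniques a real detection system would use. Each image is reduced to one bit per pixel (brighter or darker than the image's mean), and a small Hamming distance between two hashes suggests one image may reuse the other:

```python
def average_hash(pixels):
    """Tiny perceptual hash: 1 bit per pixel, set if the pixel is brighter
    than the image's mean brightness. Visually similar images give
    similar bit patterns, even after small edits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count of differing bits; a small distance suggests visual reuse."""
    return sum(a != b for a, b in zip(h1, h2))

# A 4x4 'stock image' (grayscale values), a lightly edited copy,
# and an unrelated checkerboard image:
stock = [[10, 10, 200, 200]] * 4
edited = [row[:] for row in stock]
edited[0][0] = 30  # small local edit
unrelated = [[200, 10, 200, 10],
             [10, 200, 10, 200],
             [200, 10, 200, 10],
             [10, 200, 10, 200]]

print(hamming(average_hash(stock), average_hash(edited)))     # small
print(hamming(average_hash(stock), average_hash(unrelated)))  # large
```

Production systems scale this idea up to millions of images with larger hashes and learned features, but the principle – fingerprint, then compare – is the same.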

In the meantime, we'll be doing a podcast on the Completely Machinima YouTube channel next month covering some of the best creative ideas we've seen, so do look out for that too.

We’d love to hear your views on this topic, so do drop them into the comments.

btw, our featured image was created in MidJourney using the prompt: ‘Diary of a Camper made in Quake engine’, by @tgharwood

S3 E49 Film Review: ‘Most Precious Gift’ by Shangyu Wang (Oct 2022)

Tracy Harwood Podcast Episodes October 19, 2022

This week, Damien has picked a very interesting Eastern-made alien tale. It's been beautifully shot and rendered using Omniverse, and inspired him to try some of the techniques shown. Ricky is a little more critical of the nostalgic trope. Tracy reflects on the journey of the storytelling, and the nature of what it is to be human that lies at the heart of the story. Phil brings Solaris into the discussion, as only Phil can. Overall, we reflect on the different styles of animation used and how influential they were. And, finally, how on earth did the producer achieve that tendril effect?!



YouTube Version of this Episode

Link to Film

Fests & Contests Update (Oct 2022)

Tracy Harwood Blog October 17, 2022

Prazinburk Ridge

It's no surprise to hear that Martin Bell's Prazinburk Ridge has won its first award, Best Animation – and very fitting that it should be at the North of England's Wigan and Leigh Film Festival, just a stone's throw from Huddersfield, where the main character in the story hailed from. Many congratulations, Martin!

You can see us review the film also on our YouTube channel here –

UE: Creep it Real

Possibly a bit late notifying you, but a nice little Unreal contest launched earlier this month – Unreal Challenge: Creep It Real! Here's the link – the deadline is 29 October. There are some great prizes for video content created with the assets you use, which must be LESS THAN 1 MINUTE long, so late as we are posting this, there's still no excuse for not participating! There were 450 entries to their Better Light Than Never contest, held earlier in the year, so we're looking forward to seeing the sizzle reel from entries to this one in due course.

Unreal Challenge: Creep It Real

MacInnes Studios’ Dance Challenge

Another contest has launched, hosted by John MacInnes aka MacInnes Studios, and it's hot on the heels of his Mood Scene contest, the results of which we look forward to seeing soon. The new contest is all about dance moves – check out the details here – the start date is 1st October and it runs for 30 days.

MacInnes Studios Dance Challenge – Oct 2022

And if you want to hear John talk more about his use of avatars and 'the future of digital humans', here's a great webinar you can catch up on too, hosted by Faceware (one of the Dance Challenge sponsors).

Open Calls

There are numerous experimental film festivals that are currently calling for entries – check them out on ExpCinema.org – we liked the look of Underneath the Floorboards!