Starting Feb 2023 off with a meme, Tracy selects a machinima released as part of a massively co-created story, The Backrooms, which has involved numerous creators since it kicked off in mid 2019. This short is called The Backrooms – Reunion by @kanepixels, released 8 Dec 2022. Is this really machinima? Yep, it certainly is – but we’re all blown away by the realism of the actors’ performance in this, and of course, the whole Backrooms Creepypasta phenomenon is just something else to behold too. We give you a bit of the background, just in case you’ve not come across it in your travels across the internet.
YouTube Version of this Episode
Show Notes & Links
The Backrooms – Reunion by Kane Pixels, released 8 December 2022
The Backrooms (Found Footage), released 7 Jan 2022
This week’s #MondayMotivation gives you a selection of more projects to take a look at, drawn from recent contests and challenges that have been taking place across different platforms.
Unreal Short Film Challenge: Australia & NZ
This is an annual contest that provides two weeks of training on Unreal, followed by eight weeks in which to create a film. We reviewed some of the films from last year’s challenge, and this year’s contest has produced some equally stunning work. Here’s the highlights reel –
But do check out the films too. Two we particularly loved are narrated, a method we don’t see used all that often in shorts these days. This one is Revolver and Heckler’s Black Wing –
and this one of a solo dancer is beautifully done, by Adam Walker Film, called vQsv –
This one, which mixes 3D and 2D and was mentored by Spectre Studios (whose 2020 film Roborovski we shared a couple of months ago), is also very well done – Robo Ramen, by UTS Animal Logic Academy –
There are numerous others to check out on Unreal’s channel too, link here.
KitBash3D: Mission to Minerva
Another time-delimited contest: KitBash3D launched a free asset pack, its Mission to Minerva, and 40 days later 32,000 entries from 174 countries had answered the call to ‘create a new Galaxy’. Films were made using Blender and Unreal for the ‘in-motion’ category, while a second category, for concept artwork, required only stills. What an astonishing feat to go through all those entries and select just a few winners! Here’s the sizzle reel –
In-motion winners compilation –
and here’s the winner, Secret Moon by Orencloud. This is stunning to say the least and we’ll be reviewing this as part of our February podcast film review too –
In the meantime, KitBash3D’s Mission to Minerva world kit is still available as a free download; you can access it here.
Second Life Showcase
Not a contest as such, but we wanted to share a site that’s produced by a group of SL Video Creators, aimed at inspiring residents to create. Each month, they select the best films and share them on their website – check it out in the link here
MacInnes Studios Dance Challenge
Following hot on the heels of the outcome of the Mood Scene challenge, John MacInnes launched a TikTok challenge to create an avatar dancer. TikTok is an interesting choice of video-sharing platform for machinima, and it’s one we’ll be commenting on more over the coming weeks. This contest was won by a virtual Freddie Mercury, created by Jean Campos (feature image) –
Runners up were Pooky Amsterdam, Bruschi Bruschmann, Alex Sura and Sergey Vereschagin.
This week, we have some more interesting project updates to share with you.
Recognition
We were thrilled to hear that Sam Crane’s GTA Online version of Shakespeare’s Hamlet has been recognized, being shortlisted for an innovation award at The Stage Awards 2023 – we love that they spelled Auto incorrectly too (as ‘Audio’)! Awards are announced on 30 January, and hopefully Sam (aka Rustic Mascara) will keep us posted on his Twitter feed during the event. All the best, Sam! You can still find the performance on YouTube, link here –
Unreal
This short by The Blender Bender team (Thomas Thielemann and Alexander Korabelnikov), released 26 Sept 2022, is another example of how beautiful Unreal Engine’s toolset is. It was inspired by David Attenborough’s Our Planet series, and carries an even starker message than the original –
Also using Unreal Engine, we were interested to see that Sava Zivkovic, whose film Irradiation we reviewed back in October 2021, is working on a new project called Beckoning. He’s also just been awarded an Epic MegaGrant to support development of the project – well deserved for sure. Here’s the link to the trailer for the new project – must say, we’re very much looking forward to seeing the finished work.
Not Unreal but seriously unreal, this is an ‘insane battle’ scene demonstrating the astonishing simulation capability of the Epic Battle Simulator 2 engine. This one, SPECIAL FORCES ARE LANDING ON THE ISLAND OCCUPIED BY SAURON, was released 5 November 2022, by the Battle Simulator Center team –
Virtual Production
A short film called Goliath by DonBittersil (a screencap is our featured image for this post) showcases virtual production tools using Unreal Engine, having been shot at LA’s Orbital Virtual Studios. For those advancing from purely screen-based production techniques, this is an interesting insight into scaled-up processes – check out the film and the ‘making of’ videos here –
You may remember we shared Jackson Wang’s beautifully choreographed music video Cruel a few weeks back; well, this is another one from his Magic Man album, using virtual production techniques. It’s also a stunning example of his creative work and of the usefulness of the VP process –
Avant Garde?
An interesting article appeared on MUBI’s website about a cutscene collective called Total Refusal. This team of gamers does what machinima creators have done for 25+ years, that is, use a game for some other creative purpose. It’s nice to see that MUBI is keeping up with the times, of course, though they would certainly do well to follow our friends at the Milan Machinima Film Festival to keep up to date with this particular ‘avant garde’ scene. In the meantime, this is an example of Total Refusal’s creative work, a trailer for Hardly Working (RDR2) –
This week, we highlight some character development tools, NeRFs, NFTs and environments for machinima and virtual production.
Characters
Beginning with the awe-inspiring toolset of Unreal Engine’s MetaHumans: Epic has released a FREE three-hour online course for beginners on real-time animating with Faceware Analyzer and Retargeter tools. Here’s a taster of what you can expect –
A creator we’ve featured a number of times (his tutorials are awesome), JSFILMZ (our feature image) has posted a taster of MetaHuman’s Live Drive from Facegood, which launched in December. The demo goes straight from camera to Unreal, but what’s amazing is the price of the head-mounted hardware: under $500! This obviously isn’t free, but it’s good value compared to some of the other facial tracking hardware on the market, and Jae compares those to give you an overview of what you get for the money. The Facegood software itself, Avatary, which produces some impressive animations, is free though. Check out Jae’s introductory overview below, and then pick up his tutorials on each of the components he discusses on his channel –
Move.ai has launched its iPhone beta application for free markerless mocap (requires two phones). Ultimately, this isn’t going to be free to use, so make the most of the beta sign-up opportunity – the official launch takes place in March 2023, and their main target in the first instance is professional studios, which will put this out of reach for many indies. This article (by 80.lv) gives you a quick overview, and this short video explainer introduces their store –
And finally, on characters this month, we highlight Inworld AI. This organization is creating interactive conversational characters that can be exported and shared across various platforms, either as avatars or as the underlying chatbot (think smart NPCs). Some of you may recall John Gaeta mentioned this in our interview with him last year; since then, Inworld has become part of the Walt Disney Company’s Accelerator Programme, been awarded an Epic MegaGrant and raised a pot of money from investors. The application of the software is vast – everything from games to marketing, as well as machinima and virtual production – and that’s because of how the characters can be moulded. Inworld states: ‘When crafting your character’s brain, you are able to use the Studio to tailor many elements of cognition and behavior, such as goals and motivations, manners of speech, memories and knowledge, and voice‘. Inworld released a nice tutorial in December, link below. It’s definitely one to try out –
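To make that ‘moulding’ idea a little more concrete, here’s a minimal sketch – our own illustration in plain Python, not Inworld’s actual Studio schema or API, and every field name here is hypothetical – of the kinds of elements such a character ‘brain’ bundles together:

```python
# Hypothetical sketch of a conversational character definition.
# Field names are our own illustration, NOT Inworld's actual schema.
character = {
    "name": "Archivist NPC",
    "goals_and_motivations": ["guard the library", "test visitors' curiosity"],
    "manner_of_speech": "formal, slightly archaic",
    "memories_and_knowledge": ["the fire of 1911", "the restricted wing's location"],
    "voice": {"style": "low, measured", "language": "en"},
}

def describe(c: dict) -> str:
    """Summarize a character definition for a quick sanity check."""
    return f"{c['name']} speaks in a {c['manner_of_speech']} manner."

print(describe(character))
# → Archivist NPC speaks in a formal, slightly archaic manner.
```

The point is simply that the ‘brain’ is authored data, not animation: the same definition can drive an avatar in one platform and a text chatbot in another.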
NeRFing Around
We found a nice short on Neural Radiance Fields (aka NeRFs) by Corridor Crew, using the Luma AI app, which is truly stunning for realistically recreating, well, anything. They highlight some of the key challenges and present a very interesting test with a chrome ball – surely it is never going to be possible to capture this kind of object, dynamic reflections and all…? Check it out here –
As Corridor Crew states, this is clearly one of the next big tech things in image capture for CGI.
NFTs
The fluid waters of NFTs continue to muddy. This article (by NFT Now) highlights some of the recent class action lawsuits being brought against creator platforms, suggesting that the markets are being artificially inflated by celebrity endorsers – although this is surely true of many other products too? The issue, it seems, is more about the nature of the endorsement process and the stake the endorser holds in the investment. One of the main challenges here is the fundamental role of community in NFTs, which means there is always going to be a very fine line around ‘insider trading’. It’s also interesting to note that IP owners are now becoming more actively involved in this nascent space. Once again, whenever the legals get involved, everyday creatives are the losers; so whilst some of the actions highlighted are less directly relevant, the outcomes of the legal disputes ultimately will be, and we’ll keep tracking this.
Environments
Finally, we want to highlight a couple of environments for you.
Firstly, Half-Life: Alyx has a new mod, courtesy of Corey Laddo! Corey has created a mod that allows you to experience the game in the role of Alyx Vance. It’s free of charge for owners of the game, and provides a 4-5 hour experience for ‘average players’ – great if you want to shoot content from a first-person perspective. You can support Corey on his Patreon account, should you want to give him something for his effort. Download the mod from Steam here. Meantime, here’s a taster for you –
Secondly, Damien shared a new sandbox environment that will be launching soon (well, we think it will, since it’s apparently been in dev since 2012), called Outerra World by Microprose. This looks amazing, and will allow you to create any kind of realistic 1:1 scale terrain simulation, which you can then share and navigate using any asset the community creates and shares too. Here’s the link to the Steam page (to add your details to the waitlist).
If you have comments or thoughts on any of the techs this week, do go ahead and comment.
A month is apparently a VERY long time in the world of artificial intelligence… since our last post on this topic, released at the beginning of December, we’ve seen even more amazing techs launch.
ChatGPT
This is another creative AI from the OpenAI stable (also creator of DALL-E 2) that produces text in a conversational way. ChatGPT is capable of answering pretty much any question or request you care to throw at it, and the answers are actually quite robust, if a little fuzzy around the edges at times (don’t rely on it for anything critical, in the words of Sam Altman, OpenAI CEO). Here’s an example generated for this post, using the prompt: ‘In the style of H P Lovecraft, write a description of a character for someone who hunts the Chluthu’
Response:
Professor Alistair Dunsmore was a man obsessed. For years, he had devoted his life to studying the mysteries of the cosmos, and in particular, the myth of the Chluthu.
A brilliant but troubled man, Dunsmore was driven by a need to uncover the truth about the terrifying creature that lurked beyond the veil of our known universe. He had spent years pouring over ancient texts and forbidden tomes, piecing together the scattered clues that hinted at the Chluthu’s existence.
And then, one fateful day, he discovered something that would change his life forever. A fragment of an ancient artifact, imbued with otherworldly powers, that seemed to hold the key to finding the Chluthu.
Dunsmore knew that he had to have it, no matter the cost. And so he set out on a quest to track down the artifact and unlock its secrets. He traveled to the far corners of the earth, braving dangers and braving the unknown, all in the pursuit of his ultimate goal.
Some say that Dunsmore was mad, driven to the brink by his obsession. But to those who knew him, he was a hero, a brave and brilliant man who dared to stare into the darkness and face the horrors that lay within.
That’s impressive – and it took just seconds to generate. It has great potential as a tool for scriptwriting, story and character development in machinima and virtual productions, and for the marketing assets you might use to promote your creative work.
And as if that isn’t useful enough, some bright folks have already used it to write a game and even create a virtual world. Note the detail in the prompts being used – this one from Jon Radoff’s article (4 Dec 2022) for an adventure game concept: ‘I want you to act as if you are a classic text adventure game and we are playing. I don’t want you to ever break out of your character, and you must not refer to yourself in any way. If I want to give you instructions outside the context of the game, I will use curly brackets {like this} but otherwise you are to stick to being the text adventure program. In this game, the setting is a fantasy adventure world. Each room should have at least 3 sentence descriptions. Start by displaying the first room at the beginning of the game, and wait for my to give you my first command’.
The detail is obviously key, and no doubt we’ll all get better at writing prompts as we learn how the tools respond to our requests. It’s interesting that some are also suggesting there may be a new role on the horizon… a ‘prompt engineer’ (check out this article in the UK’s Financial Times). Yup, that and a ‘script prompter’, or any other possible prompter-writer role you can think of… but can it tell jokes too?
Give it a go – we’d love to hear your thoughts on the ideas it generates. Of course, those of you with even more flAIre can then use the scripts to generate images, characters, videos, music and soundscapes. There’s no excuse for not giving these new tools for producing machine cinema a go, surely.
The link requires registration to use the tool (it is currently free), and note that it now also keeps all of your previous chats, which enables you to build on themes as you go: ChatGPT
Image Generators
Building on ChatGPT, D-ID enables you to create photorealistic speaking avatars from text. You can even upload your own image to create a speaking avatar, which of course raises a few IP issues, as we’ve just seen from the LENSA debacle (see this article on FastCompany’s website), but JSFILMZ has highlighted some of the potential of the tech for machinima and virtual production creators here –
An AI we’ve mentioned previously, Stable Diffusion, released version 2.1 on 7 December 2022. This is an image-generating AI; its creative tool is called Dream Studio (and the Pro version will create video). In this latest version of the algorithm, developers have improved the filter that removes adult content while still enabling beautiful, realistic-looking images of characters to be created (now with better-defined anatomy and hands), as well as stunning architectural concepts, natural scenery, etc., in a wider range of aesthetic styles than previous versions. It also enables you to produce images with non-standard aspect ratios, such as panoramics. As with ChatGPT, a lot depends on how the prompt is written in generating a quality image. This image and prompt example is taken from the Stability.ai website –
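A practical note on those non-standard aspect ratios: Stable Diffusion front ends generally expect the width and height in multiples of 64 pixels (the exact constraint varies by version and tool, so treat that as an assumption and check your own setup). A tiny helper like the one below – our own illustration, not part of Dream Studio – snaps a requested panoramic size to valid dimensions:

```python
def snap_dimensions(width: int, height: int, multiple: int = 64) -> tuple:
    """Round each requested dimension down to the nearest multiple,
    never going below one full multiple."""
    snap = lambda value: max(multiple, (value // multiple) * multiple)
    return snap(width), snap(height)

# A roughly 3:1 panoramic request of 1550x520 becomes 1536x512.
print(snap_dimensions(1550, 520))
# → (1536, 512)
```

Rounding down rather than up keeps the result within whatever resolution budget you asked for, at the cost of a slightly tighter crop.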
So, just to show you how useful this can be, I took some text from the ChatGPT narrative for our imaginary character, Professor Alistair Dunsmore, and used a prompt to generate images of what he might look like and where he might be doing his research. The feature images for this post are some of the images it generated – and I guess I shouldn’t have been so surprised that the character looks vaguely reminiscent of Lovecraft himself. The prompt also produced some other images (below) and all you need to do is select the image you like best. Again, these are impressive outputs from a couple of minutes of playing around with the prompt.
For next month, we might even see if we can create a video for you, but in the meantime, here’s an explainer of a similar approach that Martin Nebelong has taken, using MidJourney instead to retell some classic stories –
Supporting the great potential for creative endeavour, ArtStation has taken a stance in favour of the use of AI in generating images on its portfolio website (which, btw, was bought by Epic Games in 2021). This is in spite of thousands of its users demanding that it remove AI-generated work and prevent content being scraped, a demand predicated on the lack of transparency from AI developers about how training datasets are assembled. Instead, ArtStation has removed from its homepage those portfolios displaying the Ghostbusters-style ‘no to AI generated images’ logo, and issued a statement about how creatives using the platform can protect their work. The text of an email received on 16 December 2022 stated:
‘Our goal at ArtStation is to empower artists with tools to showcase their work. We have updated our Terms of Service to reflect new features added to ArtStation as it relates to the use of AI software in the creation of artwork posted on the platform.
First, we have introduced a “NoAI” tag. When you tag your projects using the “NoAI” tag, the project will automatically be assigned an HTML “NoAI” meta tag. This will mark the project so that AI systems know you explicitly disallow the use of the project and its contained content by AI systems.
We have also updated the Terms of Service to reflect that it is prohibited to collect, aggregate, mine, scrape, or otherwise use any content uploaded to ArtStation for the purposes of testing, inputting, or integrating such content with AI or other algorithmic methods where any content has been tagged, labeled, or otherwise marked “NoAI”.
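For the curious, a page-level ‘NoAI’ meta tag of the kind described in that email would presumably look something along these lines – an illustrative sketch only, as we haven’t inspected ArtStation’s actual generated markup:

```html
<!-- Illustrative only: a page-level signal asking compliant crawlers
     not to use this page's content for AI training -->
<meta name="robots" content="noai">
```

Note that, like robots.txt, a tag of this sort is a request rather than an enforcement mechanism: it only works if scrapers choose to honour it.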
You can also read an interesting article following the debate on The Verge’s website here, published 23 December 2022.
We’ve said it before, but AI is one of the tools that the digital arts community has commented on FOR YEARS. Its best use is as a means to support creatives in developing new pathways in their work. It cuts corners, but it also pushes people to think differently. I direct the UK’s Art AI Festival, and the festival YouTube channel contains a number of videos of live-streamed discussions we’ve had with numerous international artists, such as Ernest Edmonds, a founder of the digital arts movement in the 1960s; Victoria and Albert Museum (London) digital arts curator Melanie Lenz; the first creative AI Lumen Prize winner, Cecilie Waagner Falkenstrom; and Eva Jäger, artist, researcher and assistant curator at Serpentine Galleries (London), among others. All discuss the role of AI in the development of their creative and curatorial practice, and AI is often described as a contemporary form of paintbrush and canvas.

As I’ve illustrated above with the H P Lovecraft character development process, it’s a means to generate ideas through which to select and explore new directions that might otherwise take weeks to find. It is unfortunate that some have narrowed their view of its use rather than engaging more actively in discussion of how it might add to the creative processes employed by artists, but we also understand the concerns some have about the blatant exploitation of copyrighted material used without any real form of attribution. Surely AI can be part of the solution to that problem too, although I have to admit that so far I’ve seen very little effort put into this part of the challenge – maybe you have?
In other developments, a new ‘globe’ plug-in for Unreal Engine has been developed by Blackshark. This is a fascinating world view, giving users access to synthetic 3D (#SYNTH3D) terrain data, including ground textures, buildings, infrastructure and vegetation of the entire Earth, based on satellite data. It contains some stunning sample sets and, according to Blackshark’s CEO Michael Putz, is the beginning of a new era of visualizing large scale models combined with georeferenced data. I’m sure we can all think of a few good stories that this one will be useful for too. Check out the video explainer here –
And Next…?
Who knows, but we’re looking forward to seeing how this fast action tech set evolves and we’ll be aiming to bring you more updates next month.
Don’t forget to drop us a line or add comments to continue the conversation with us on this.