In comparison to the previous six months, the past month has not exactly been a damp squib, but it has certainly included a few rather underwhelming releases and updates, notwithstanding Adobe’s Firefly release. We also share some great tutorials and explainers, as well as some interesting content we’ve found.
Next Level?
Nvidia and Getty have announced a collaboration that will see visuals created with fully licensed content, using Nvidia’s Picasso model. The content generation process will also enable original IP owners to receive royalties. Here’s the link to the post on Nvidia’s blog.
Microsoft has brought its AI image generator, based on OpenAI’s DALL-E, to the Edge browser and Bing chatbot. Ricky has tried the tool and comments that whilst the images are good, they’re nowhere near the quality of Midjourney at the moment. Here’s an explainer on Microsoft’s YouTube channel –
Stability AI (Stable Diffusion) released its SDK for animation creatives on 11 May. This builds on its text-to-image generator, although of course we’ve previously covered similar tools, including ones that extend the process to 3D. Here’s an explainer from the Stable Foundation –
RunwayML has released its Gen 1 version for the iPhone. Here’s the link to download the app. The app lets you take a video from your camera roll and apply a text prompt, a reference image or a preset to create something entirely new. Of course, the benefit is that, from within the phone’s existing apps, you can then share on social channels at will. It’s worth noting that at the time of writing we, and many others, are still waiting for access to Gen 2 for desktop!
Most notable this month is Adobe’s release of Firefly for Adobe Video. The tool enables generative AI to be used to select and create enhancements to images, music and sound effects, creating animated fonts, graphics and b-roll content – and all that, Adobe claims, without copyright infringement. Ricky has, however, come across some critics who say that Adobe’s claim that its database is clean is not correct. Works created in Midjourney have been uploaded to Adobe Stock and are still part of its underpinning database, meaning that a small percentage of works in the Adobe Firefly database ARE taken from online artists’ works. Here’s the toolset explainer –
Luma AI has released an Unreal Engine plug-in for NeRFs (neural radiance fields), a technique for capturing realistic content. Here’s a link to the documentation and how-tos. In this video, Corridor Crew wax lyrical about the method –
Tuts and Explainers
Jae Solina aka JSFilmz has created a first-impressions video about Kaiber AI. This is quite cheap at $5/month for 300 credits (a short video seems to equate to approximately 35 credits). In this explainer, you can see Jae’s aged self as well as a cyberpunk version, and the super-quick process this new toolset has to offer –
If you’re sick to the back teeth of video explainers (I’m not really), then Kris Kashtanova has taken the time to generate a whole series of graphic novel style explainers (you may recall the debate around her Zarya of the Dawn Midjourney copyright registration case a couple of months back) – these are excellent and somehow very digestible! Here’s the link. Of course, Kris also has a video channel for her tutorials too, latest one here looks at Adobe’s Firefly generative fill function –
In this explainer, Solomon Jagwe discusses his beta test of Wonder Studio’s AI mocap for body and finger capture, although it’s not real-time unfortunately. It is nonetheless impressive, and another tool that we can’t wait to try out once its developer sends access out to all those who have signed up –
Content
There has been a heap of hype about an advert created by Coca-Cola using AI generators (we don’t know which exactly), but it’s certainly a lot of fun –
In this short by Curious Refuge, Midjourney has been used to re-imagine Lord of the Rings… in the style of Wes Anderson, with much humor and Benicio del Toro as Gimli (forever typecast and our feature image for this post). Enjoy –
We also found a trailer for an upcoming show, Not A Normal Podcast, which is not so much a podcast as a digital broadcast where it seems AIs will interview humans in some alternative universe. It’s not quite clear what this will be, but it looks intriguing –
although it probably has a way to go to compete with the subtle humor of FrAIsier 3000, which we’ve covered previously. Here is episode 4, released 21 March –
This week, tech updates cover Epic’s new tools for self-publishing, Omniverse’s USD rebrands, thoughts about the nascent metaverse and some throwbacks to good-old-fashioned machinima creative techniques.
Epic’s Games Store
In a move that will surely make rival Steam squirm, Epic announced on 9 March that it has launched new tools for self-publishing on the Games Store, on the back of its 68M active monthly users. Publishers will receive 88% of the revenue from sales (compared to 70% on Steam). There are some interesting points raised in the T&Cs, such as the need for cross-playability (across all PC stores), achievement tracking for games, age rating requirements and an affiliate creator programme that enables publishers to share their takings with others – check out the T&Cs in their announcement here. The announcement hints at much bigger things to come, relating to metaverse propositions, but it’s an interesting development for now. Here’s a walkthrough of the tools from their livestream about it –
Omniverse USD
Nvidia’s Omniverse Create and Omniverse View are rebranding, it was announced on 3 March. These will now be called, respectively, Omniverse USD Composer and Omniverse USD Presenter. The omnipresence of USD (Universal Scene Description) has made it a driving force for 3D creative development in a very short space of time – just last August, Nvidia summarized its vision with USD embedded as the foundation of the metaverse for creatives (and also industrial teams, smart services providers and such), where content could be pushed across a vast array of different platforms. Less than a year later, workflows everywhere have evolved with it and USD is now a ubiquitous technology, much as the internet is the driving force for the web. What’s a little intriguing is why Nvidia is drawing attention to it at this juncture, and why it is editing archival videos to include the new names, like this one – recognition, reinforcement, repositioning or something new coming down the pipeline?
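For readers who haven’t touched USD directly, here is a flavour of why the format lends itself to interchange: a scene file can reference assets authored in other tools without copying their data. Below is a minimal, hypothetical sketch using Pixar’s Python API (pxr); the file and asset names are made up for illustration.

```python
# Minimal USD interchange sketch (hypothetical file/asset names).
from pxr import Usd, UsdGeom

# Author a new shot file.
stage = Usd.Stage.CreateNew("shot_010.usda")
UsdGeom.Xform.Define(stage, "/World")

# Reference a prop exported from another tool (e.g. Blender or Omniverse)
# without copying its data into this layer.
prop = stage.DefinePrim("/World/HeroProp")
prop.GetReferences().AddReference("./assets/hero_prop.usd")

stage.SetDefaultPrim(stage.GetPrimAtPath("/World"))
stage.GetRootLayer().Save()
```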
Blended
Beyond the hype, and clearly the practices we’ve highlighted above, the metaverse is taking shape in interesting ways. An article published in VentureBeat on 4 March highlights the lengths that media and entertainment companies such as Sony are going to in creating virtual worlds that transcend film, games and experiences, including in VR and theme parks. These are more than alignments of creative talent teams; they allude to the potential of vast new ecosystems for collaborators and partners. What’s interesting, of course, is that the inflection into such ecosystems can come from any creative medium (game, film or artwork presumably), with outputs that are going to be more visceral and consequently more immersive. Since toolsets such as USD facilitate the creation of these ecosystems, it will be interesting to see how indies get in on this action too – we’re already seeing a number of start-up enterprises pushing the boundaries, but there’s also scope for small studios to join in. Question is, where are they now?
Cinematics (the Old Way)
No Man’s Sky has been a machinima creators’ go-to for some time, and this short by EvilDr.Porkchop (also our blog post feature image) gives a great overview of how to create cinematics in the environment –
Eve Online is another such environment, and now of course a [very] old one, but here’s a nice ‘how to’ for making epic-looking machinima, by WINGSPAN TT –
March was another astonishing month in the world of AI genies, with the release of ever more powerful updates (GPT-4 released 14 March; Baidu released Ernie Bot on 16 March), new services and APIs. It is not surprising that by the end of the month, Musk-oil was being poured over the ‘troubling waters’ – will it work now the genie is out of the bottle? It’s anyone’s guess, and certainly it seems a bit of trickery is the only way to get it back into the bottle at this stage.
Rights
More importantly, and with immediate effect, the US Copyright Office issued a statement on 16 March in relation to the IP issues that have been hot on many lips for several months now: copyright registration is about the processes of human creativity, and the role of generative AI is simply seen as a toolset under current legal copyright registration guidance. Thus, for example, in the case of Zarya of the Dawn (refer to our comments in the Feb 2023 Tech Update), whilst the graphic novel contains original concepts that are attributable to the author, the images generated by AI (in the case of Zarya, Midjourney) are not copyrightable. The statement also makes it clear that each copyright registration case will be viewed on its own merit, which is surely going to make for a growing backlog of cases in the coming months. Each case will require detailed clarification of how generative AI was used by the human creator, to help with the evaluation process.
The statement also highlights that an inquiry into copyright and generative AIs will be undertaken across agencies later in 2023, which will seek general public and legal input to evaluate how the law should apply to the use of copyrighted works in “AI training and the resulting treatment of outputs”. Read the full statement here. So, for now at least, the main legal framework in the US remains one of human copyright, where it will be important to keep detailed notes about how creators generated (engineered) content from AIs, as well as adapted and used the outputs, irrespective of the tools used. This will no doubt be a very interesting debate to follow, quite possibly leading to new ways of classifying content generated by AIs… and, some suggest, perhaps even to AIs being recognized as autonomous entities with rights. It is clear in the statement, for example, that the US Copyright Office recognizes that machines can create (and hallucinate).
The complex issues of dataset creation and AI training processes will underpin many of the legal stances taken, and a paper released at the beginning of Feb 2023 could become one of the defining pieces of research that undermines it all. The researchers extracted near-exact copies of copyrighted images of identifiable people from a diffusion model, suggesting that such models can lead to privacy violations. See a review here and for the full paper go here.
In the meantime, more platforms used to showcase creative work are introducing tagging systems to help identify AI-generated content – #NoAI, #CreatedWithAI. Sketchfab joined the list at the end of Feb with its update here, with changes relating to its own re-use of such content through its licensing system coming into effect on 23 March.
NVisionary
Nvidia’s progressive march with AI genies needs an AI to keep up with it! Here’s my attempt to review the last month of releases relevant to the world of machinima and virtual production.
In February, we highlighted ControlNet as a means to focus on specific aspects of image generation; this month, on 8 March, Nvidia released Prismer, which does the opposite, taking the outline of an image and infilling it. You can find the description and code on its NVlabs GitHub page here.
Alongside the portfolio of generative AI tools it has launched in recent months, and with the advent of OpenAI’s GPT-4 in March, Nvidia is expanding its tools for creating 3D content –
It is also providing an advanced means to search its already massive database of unclassified 3D objects, integrating with its previously launched Omniverse DeepSearch AI librarian –
It released its cloud-based Picasso generative AI service at GTC23 on 23 March, a means to create copyright-cleared images, videos and 3D applications. A cloud service is of course a really great idea, because who can afford to keep up with graphics card prices? The focus for this is enterprise level, however, which no doubt means it’s not targeting indies at this stage – but then again, does it need to, when indies are already using DALL-E, Stable Diffusion, Midjourney, etc.? Here’s a link to the launch video and here is a link to the wait list –
Pro-seed-ural
A procedural content generator for creating alleyways has been released by Difffuse Studios in the Blender Marketplace, link here and see the video demo here –
We spotted a useful social thread by Nick St Pierre that highlights how to create consistent characters in Midjourney using seeds –
and you can see the result of the approach in his example of an aging girl here –
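Midjourney itself has no public API (the seed is simply passed as a --seed parameter on the Discord prompt), so as a rough illustration of the principle here’s a hedged sketch using the open-source diffusers library: re-using the same seed while nudging the prompt keeps the composition, and hence the character, broadly consistent. The model name and prompts below are our own assumptions, not Nick’s.

```python
# Sketch of seed-based consistency using diffusers (analogous to Midjourney's
# --seed parameter); model and prompts are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate(prompt: str, seed: int):
    # Re-seeding on every call makes the starting noise, and so the overall
    # composition, repeatable.
    generator = torch.Generator("cuda").manual_seed(seed)
    return pipe(prompt, generator=generator, num_inference_steps=30).images[0]

# Same seed, prompt edited incrementally: the character stays recognisable
# while the described attribute (age) changes.
for age in (8, 18, 40, 70):
    image = generate(f"portrait of a freckled red-haired girl, age {age}", seed=1234)
    image.save(f"girl_age_{age}.png")
```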
Animation
JSFilmz created an interesting character animation using Midjourney v5 (which released on 17 March), with its advanced character detail features. This really shows its potential alongside animation toolsets such as Character Creator and Metahumans –
Runway’s Gen-2 text-to-video platform launched on 20 March, with higher fidelity and consistency in the outputs than its previous version (which was actually video-to-video output). Here’s a link to the sign-up and website, which includes an outline of the workflow. Here’s the demo –
Gen-2 is also our feature image for this blog post, illustrating the stylization process stage which looks great.
Wonder Dynamics launched on 9 March as a new tool for automating CG animations from characters that you can upload to its cloud service, giving creators the ability to tell stories without all the technical paraphernalia (mmm?). The toolset is being heralded as a means to democratize VFX, and it is impressive to see that Aaron Sims Creative is providing some free assets to use with it, and even more so to see none other than Steven Spielberg on the Advisory Board. Here’s the demo reel, although so far we’ve not found anyone who has given it a full trial (it’s in closed beta at the moment) and shared their overview –
Finally for this month, we close this post with Disney’s Aaron Blaise and his video response to Corridor Crew’s use of generative AI to create a ‘new’ anime workflow, which we commented on last month here. We love his open-minded response to their approach. Check out the video here –
This week, we highlight some time-saving examples for generating 3D models using – you guessed it – AIs, and we also take a look at some recent developments in motion tracking for creators.
3D Modelling
All these examples highlight that generating a 3D model isn’t the end of the process: once it’s in Blender, or another animation toolset, there’s definitely more work to do. These add-ons are intended to help you reach your end result more quickly, cutting out some of the more tedious aspects of the creative process using AIs.
Blender is one of those amazing animation tools that has a very active community of users and, of course, a whole heap of folks looking for quick ways to solve challenges in their creative pipeline. We found folks who have integrated OpenAI’s ChatGPT into the toolset by developing add-ons. Check out this illustration by Olav3D, whose comment about using ChatGPT for attempting to write Python scripts sums it up nicely, “better than search alone” –
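To give a sense of what “better than search alone” means in practice, here’s the kind of small utility script you might ask ChatGPT to draft and then paste into Blender’s Scripting tab – a hypothetical example of ours, not one from Olav3D’s video.

```python
# Hypothetical ChatGPT-style Blender utility: scatter some cubes for set dressing.
import bpy
import random

def scatter_cubes(count=20, area=10.0, max_size=1.0):
    """Add `count` cubes at random positions and sizes within a square area."""
    for _ in range(count):
        bpy.ops.mesh.primitive_cube_add(
            size=random.uniform(0.2, max_size),
            location=(
                random.uniform(-area, area),
                random.uniform(-area, area),
                0.0,
            ),
        )

scatter_cubes()
```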
Dreamtextures by Carson Katri is a Blender add-on using Stable Diffusion which is so clever that it even projects textures onto 3D models (with our thanks to Krad Productions for sharing this one). In this video, Default Cube talks about how to get results with as few glitches as possible –
and this short tells you how to integrate Dreamtextures into Blender, by Vertex Rage –
To check out Dreamtextures for yourself, you can find Katri’s application on GitHub here and, should you wish to support his work, subscribe to his Patreon here too.
OpenAI also launched its Point-E 3D model generator this month, whose output can then be imported into Blender. But, as CGMatter has highlighted, using the published APIs means a very long time sitting in queues to access the downloads, whilst downloading the code to your own machine to run it locally is easy – and once you have it, you can create point-cloud models in seconds. He actually runs the code from Google’s Colab, which means you can run it in the cloud. Here’s his tutorial on how to use Point-E without the wait, giving you access to your own version of the code (on GitHub) in Colab –
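For anyone wanting to try the local route, the gist is installing the repo and driving its text-to-point-cloud sampler from Python. The sketch below is adapted from the example notebook in the openai/point-e repository as we understand it – treat it as a guide and check the repo for the current API.

```python
# Local Point-E text-to-point-cloud sketch, adapted from the repo's example
# notebook; verify names against the current openai/point-e code.
import torch
from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Text-conditioned base model plus an upsampler for a denser cloud.
base = model_from_config(MODEL_CONFIGS["base40M-textvec"], device)
base.eval()
base.load_state_dict(load_checkpoint("base40M-textvec", device))

upsampler = model_from_config(MODEL_CONFIGS["upsample"], device)
upsampler.eval()
upsampler.load_state_dict(load_checkpoint("upsample", device))

sampler = PointCloudSampler(
    device=device,
    models=[base, upsampler],
    diffusions=[
        diffusion_from_config(DIFFUSION_CONFIGS["base40M-textvec"]),
        diffusion_from_config(DIFFUSION_CONFIGS["upsample"]),
    ],
    num_points=[1024, 4096 - 1024],
    aux_channels=["R", "G", "B"],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=("texts", ""),  # only the base model sees the prompt
)

samples = None
for x in sampler.sample_batch_progressive(
    batch_size=1, model_kwargs=dict(texts=["a red motorcycle"])
):
    samples = x  # keep the final denoising step

point_cloud = sampler.output_to_point_clouds(samples)[0]
print(point_cloud.coords.shape)  # roughly (4096, 3)
```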
We also found another very interesting Blender add-on; this one lets you import models from Google Maps into the toolset. The video is a little old, but the latest update of the mod on GitHub, version 0.6.0 (for RenderDoc 1.25 and Blender 3.4), has just been released by its creator, Elie Michel –
We were also interested to see NVIDIA’s update at CES (in January). It announced a release for the Omniverse Launcher that supports 3D animation in Blender, with generative AIs that enhance characters’ movement and gestures, a future update to Canvas that includes 360 surround images for panoramic environments, and an AI ToyBox that enables you to create 3D meshes from 2D inputs. Ostensibly, these tools are for creators to develop work for the metaverse and web3 applications, but we already know NVIDIA’s USD-based tools are incredibly powerful for supporting collaborative workflows, including machinima and virtual production. Check out the update here and this is a nice little promo video that sums up the integrated collaborative capabilities –
Tracking
As fast as the 3D modelling scene is developing, so is motion tracking. Move.ai, which launched late last year, announced its pricing this month: $365 for 12 months of unlimited processing of recordings. This is markerless mocap at its very best, although not so much if you want to do live mocap (no pricing announced for that yet). Move.ai (our feature image for this article) lets you record content using mobile phones (a couple of old iPhones will do). You can find out more on its new website here and here’s a fun taster, called Gorillas in the mist, with ballet and 4 iPhones, released in December by the Move.ai team –
Another app, although not a 3D one, is Face 2D Live, released by Dayream Studios – Blueprints in January. This tool allows you to live-link a face app on your iPhone or iPad to make cartoons out of just about anything, including with friends who are also using the iPhone app. It costs just $14.99 and is available on the Unreal Marketplace here. Here’s a short video example to whet your appetite – we can see a lot of silliness ensuing with this for sure!
Not necessarily machinima, but for those interested in more serious facial mocap, Weta has been talking about how it developed its facial mocap processes for Avatar, using something called an ‘anatomically plausible facial system’. This is an animator-centric system that captures muscle movement rather than ‘facial action coding’, which focuses on identifying emotions. Weta states that its approach leads to a wider set of facial movements being integrated into the mocapped output – we’ll no doubt see more in due course. Here’s an article on the FX Guide website which discusses the approach being taken and, for a wider-ranging discussion on the types of performance tracking used by the Weta team, Corridor Crew have bagged a great interview with the Avatar VFX supervisor, Eric Saindon, here –
In this post, we share our thoughts on some of the key trends we’ve seen over the last 12 months in the world of machinima and virtual production. It’s been quite a ride this year, and what’s been fascinating to witness is how we are each trying to keep up with all the things going on. For example, two years ago when we started the Completely Machinima podcast, we weren’t really sure that machinima was still a thing… but, as Ricky says so eloquently: “Machinima is alive and well. Two years ago, when I was asked to be part of this podcast, I said, No machinima is dead. And I am very happy to be proven profoundly wrong.” Yup! So here are this year’s TOP observations.
Indie vs Pro
Machinima has been on the cusp of mainstream filmmaking for 25 years. In Tracy & Ben’s Pioneers in Machinima book, there are frequent mentions of big-budget Hollywood productions having dabbled with real-time techniques, primarily for previz. But not exclusively, as Tracy discovered in her interview in February with John Gaeta (aka Dalt Wisney), founder of ILMxLAB and best known as the creator of The Matrix bullet-time shot. Of course, The Mando stood out as a marker in the sand in the adoption of virtual production and real-time techniques and, ever since COVID, its evolving practices have come to the fore, time and again.
Beyond the large-scale use of virtual production with LED walls and stages, this year we’ve noticed more professionals are playing with real-time, 3D virtual production processes at the desk. These are individuals wanting to explore a concept, tell a story that perhaps they wouldn’t otherwise be able to or, as studios, explore the technologies as part of their pipeline. Many of these folks work in the film industry already in areas such as special effects, post production or some other role. Some great examples we’ve reviewed on the podcast are –
And whilst pros have been dabbling with the ‘new’ toolsets, the indies have stolen a march on them and are producing some truly astonishing high-quality work. It doesn’t always get the attention it deserves, but certainly these are leading the way and some are now breaking into industry too. We review a selection every month, but a few we’ll draw attention to are –
Heroes of Bronze ‘Journeys’ Teaser by Martin Klekner (S2 E35 Apr 2022)
MOVING OUT || Somewhere In Space || A No Man’s Sky Cinematic Series by Geeks Actively Making Entertainment (S2 E39 June 2022)
It’s been fascinating to watch the beginnings of convergence of the pro and indie worlds and we’re excited to see more new projects emerge in 2023, as well as more indies getting the recognition they deserve with at least some chance of generating an income from their creative endeavour. Needless to say, as mainstream catches up, the indies are going to be much in demand, although let’s hope that doesn’t then result in the devastation of creative ideas as it did in 2008-9 when many of the original machinima folks were absorbed by games developers (notably to work on EA’s Mass Effect).
Unreal
In part, the opportunities for indies mentioned are because Epic Games, Unreal Engine’s creator, has had the foresight to devise a generous freemium model. It is free to use for projects netting below $1M, with Epic taking a 5% royalty thereafter. It has a vast marketplace of assets that can be used at low or no cost in productions. In turn, this enables creators to circumvent one of the most wicked problems faced when making game-based machinima: how best to deal with game-related intellectual property rights. Very few machinima or real-time virtual production short projects we’ve seen are ever going to come close to $1M. Most don’t even crack $100. And that is despite many games developers having in the past directly benefitted from the increased visibility of fan-created content and the ideas included in them… but that’s not a discussion point for now. It is the freemium model for UE that will massively grow the skillset of creatives which, in turn, will enable all sorts of applications to emerge as real-time and virtual production proliferates across industries. This is a win-win for creatives and techs alike.
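To put those licensing numbers into perspective, here is a trivial back-of-the-envelope calculation, assuming (as described above) that the 5% only kicks in on revenue above the $1M threshold.

```python
def unreal_royalty(gross_revenue: float, threshold: float = 1_000_000.0, rate: float = 0.05) -> float:
    """Royalty owed under the freemium terms as described above:
    5% of revenue beyond the first $1M, nothing below it."""
    return max(0.0, gross_revenue - threshold) * rate

print(unreal_royalty(1_200_000))  # 10000.0 -- a $1.2M project owes $10k
print(unreal_royalty(100))        # 0.0 -- a typical machinima short owes nothing
```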
Alongside this, a number of other engines which have traditionally been used for machinima, real-time and virtual production have made it progressively more difficult and convoluted to create new works, for a variety of reasons. Furthermore, quite simply, the finished product just does not look as good in comparison to UE. For example, we were disappointed to hear that despite the potentially comparable quality in Warhammer 40K, Erasmus Brosdau’s The Lord Inquisitor never received the publisher’s backing even though many of those involved were associated with the game (we reviewed this in S2 E45 Sept 2022, but the film dated back to 2016). Rockstar hasn’t supported the release of an editor for Red Dead Redemption 2. GTAV and Blizzard’s World of Warcraft are showing their age and, despite a few leaks, nothing concrete about updates has emerged. Roberts Space Industries (Star Citizen) appears to have shot itself in the foot in its attempt to protect actor assets (e.g. Gary Oldman, Mark Hamill, Mark Strong, Gillian Anderson, Andy Serkis and others) in its upcoming Squadron 42. The latter in particular is such a pity because we were very much looking forward to seeing episode 2 of Adrift by Barely Not Monkeys, a highlight of our annual review last year.
Of course, the trade-off in using UE is that creating projects isn’t anywhere near as quick as purely game-based machinima used to be, which could be captured on any game-playing computer with a few tools or mods such as video capture and an editor. During the year, we’ve seen the release of UE5 and 5.1, each a step change on its predecessor. Access is a challenge because the phenomenal compute power that’s needed to run all the different tools to create real-time 3D virtual sets and characters, render films, etc., is growing exponentially. Nonetheless, Epic has given the process impetus. It has put a lot of effort into making the UE toolset easy to use, relative to other platforms such as Unity and iClone. This, coupled with the huge number of tutorials created by a vast and growing community of users, alongside investment in short courses and guides, most of which are free, has positioned it as a medium of choice. As Kim Libreri, Chief Technology Officer at Epic since 2014, is quoted as saying in the Pioneers in Machinima book: “The main challenge is how to tell a great story. Now that the tools aren’t a barrier, it is about understanding human comprehension, understanding the light-depth field, writing, and placing things so that people can understand the story you are trying to tell.”
At the end of last year, we felt that Nvidia’s Omniverse would become the driving platform and were waiting with bated breath for the release of updates to its Machinima toolset, especially following Ricky’s interview with Dane Johnston. So far, we have been disappointed. One challenge with the Nvidia toolset is the lock-in to the company’s hardware, which is required to run Omniverse. With the squeeze on chip access and the high price of kit, along with its astonishingly rapid advancements in all things AI and the consequential new spec releases, it has probably lost ground in the creator filmmaking world – who can afford to replace their kit every 6 months? We are, however, very interested to see Nvidia’s cloud-based subscription model emerging, which is surely going to help by improving access to high-compute tools, at least for as long as folks can afford the sub. Omniverse has amazing potential but all these challenges have resulted in us seeing only one notable film made entirely using the toolset to date, compared to UE5 in which we are seeing many more –
Unreal, along with other amazing toolsets, platforms and hardware developers such as Reallusion, Blender, Nvidia, Autodesk and numerous others, has invested in Universal Scene Description. This is an open-source format originally developed by Pixar that allows for the interchange of assets between toolsets. USD is a game-changer and through its wide adoption, indies and pros alike can align and build their preferred pipelines, which allows them to integrate content using a range of methods according to the skills they have in capture techniques such as photogrammetry, 360, mocap, etc. The tools and platforms, collectively, are touted as being the foundation of future metaverse applications but hitherto it has been UE that has been the backbone of this year’s most exciting creative works, often integrating mocap with Reallusion’s Character Creator. Examples are –
Blu x @Teflon Sega meta-saga!! Ep4 by Xanadu (S2 E37 May 2022)
And check out also the range of other types of projects reviewed in our October blog post, such as SnoopDogg’s web3 Crip Ya Enthusiasm, Rick Pearce’s 2D/3D Roborovski and The Walker by AFK – The Webseries
Mo-Cap
We’ve witnessed the mass adoption of mocap for developing body and facial animation using all sorts of different mocap suits, including markerless. The ability to integrate content easily into the virtual production pipeline has resulted in a plethora of works that attempt to illustrate more realistic movement. This has enabled creators to customize assets by adding detail to characters, which results in greater depth to the process of characterization, building more empathy and better storytelling capability. As technologies have advanced from body to hand, face, mouth and eyes over the year, creators have become less reliant on traditional approaches to storytelling, such as narration and voice acting, and instead used more nuanced human-like behaviour that can be interpreted only subliminally. Examples are –
Of course, capture alone would be pointless without high-quality models (bones and rigs) to which the movement can be mapped. UE’s Metahumans and Reallusion’s Character Creator have therefore rapidly become key tools in this part of the pipeline. Both provide a high-bar output at even the most basic level of mapping and advanced users are leaving contrails in their wake. Check out the mouth movement detail in this –
The Talky Orcs by AFK – The Webseries (S3 E58 Dec 2022)
Challenges vs Festivals
In the tradition of the filmmaking and animation industries, there are many festivals that provide opportunities for creators to showcase their best work, get feedback from reviewers and have a chance to connect and reflect, through which new collaborations may be formed. There are actually very few festivals that celebrate machinima and virtual production these days. This year, however, we’ve noticed a growing number of contests through which creators are able to test and showcase aspects of their skillsets, most of which are incentivised by prizes of the latest kit and some of which may lead to new opportunities, such as John MacInnes’ Mood Scenes challenge. What’s particularly interesting is that last year Tracy said, “We need more contests to promote the new toolsets” whereas this year, she says “We need a different type of promotion than contests”!
Two things occur to us on this. Firstly, is it the case that more virtual production content is finally being accepted into animation festivals? This is something that we’ve often lamented in the past, where machinima was always seen as the poor relation even though the creativity demonstrated has been innovative. In part, this attitude is what led the community to form its own festival series – the Machinima Film Festival, created by the Academy of Machinima Arts & Sciences, ran between 2002 and 2008; an EU version, the European Machinima Film Festival, ran in 2007; and the baton was then taken over by the Machinima Expo, which ran until 2014, including a virtual event held in Second Life. This was hugely popular among the community of creators because it attracted a breadth of talent using a multitude of different toolsets. So, we have been thrilled this year to see Damien’s Star Wars series Heir to the Empire being recognized in a host of festivals that have accepted the work he creates in iClone. Ditto Martin Bell’s Prazinburk Ridge made in UE and various others. Examples are, however, few so far – or maybe we are witnessing a change in the way works are distributed too!
Secondly, is it the case that contests and challenges are an excellent way for tech developers to promote their wares and their uses? This is very evidently the case. This year we have seen contests run purely to generate interest in a toolset. We have seen other contests run by creator channels whose goal appears to be simply to drive up their follower numbers, and these use the same tech-dev toolsets as incentives for participation. The outputs we have seen have generally been creatively poor, albeit well marketed. Without greater emphasis on well-run independent festivals with prizes for creativity, this is unlikely to change – and that’s a pity, because it doesn’t result in the development of good skills but simply drives the creation of content. It is very much a model we saw when Machinima.com incentivized content creation with its partner programme, where eyes equalled a share of the ad revenue. We will be dismayed if this continues. As the year has progressed, however, we have observed a growing range of films being promoted as creative works, including through Reallusion’s Pitch and Produce programme –
We are also mindful that many mainstream festivals will only take content if it hasn’t previously been released to online channels… and that’s fine too, but festivals then need to take greater responsibility for supporting indie creatives in particular to promote their work.
Tutorials vs Creativity
The trends we observed in contests and festivals give us hope that the tide is beginning to turn away from the hundreds and thousands of tutorials we have seen released over the year, covering every minute aspect of any toolset ever developed. There have been so many that Phil led a discussion on the process of taking online tutorials in this episode –
How I learned Unity without following Tutorials by Mark Brown (S3 E47 Oct 2022)
Of course, many tutorials are well produced and immeasurably useful, to a point. They clearly get thousands of views and, one assumes, help hundreds of followers out of holes in their pipeline. But what is the point unless it results in new creative works? And where are those creative works being shown? We just don’t know!
Part of the problem is the way in which the dominant platforms share content, using algorithms that favor numbers of views to serve the most popular work. This mechanism is never going to inspire anyone to create new work – it results in tedious trawls through mountains of trash and consequently very low engagement levels. The only good thing about it is that the golden nuggets we find are treasured, and our podcast is a trove for anyone interested in how machinima and virtual production techniques are evolving. We implore the tech devs to do more to promote creative practice beyond the endless pursuit of the next best tutorial – and we also ask someone, anyone, to figure out a way that folks can find it easily. We explored some potential distribution options in our Nov 2022 blog but this is something we will no doubt revisit in our review of 2023.
AI
This year has witnessed the rise of the AI generator and its use in devising everything from text and images to spoken word, emotion, music, movement and interaction. The speed of advancement is truly awe-inspiring. We followed the developments over the course of the year and by October decided we needed to give emerging projects some space in the podcast too. One example that stood out for us was The Crow by Glenn Marshall Neural Art (reviewed in our Oct 2022 Projects Update blog post). We then posted a more detailed report, with observations about the implications for machinima and virtual production creators, later in the month (Report: Creative AI Generators, Oct 2022) and a follow-up in December which highlights three very different ways in which AI generators can be used to animate content. All this has come about in less than 6 months!
Within games, we have witnessed the various roles of AI over many years, particularly in animating non-player characters and generating environments, which have formed a good portion of content used in machinima and virtual productions. The current raft of tools presents a potentially whole new way of realizing creative works, without the learning curve of understanding the principles of storytelling, filmmaking, animation and acting needed for use in UE and others. This could be liberating, leading to innovative creations, but at this juncture we are concerned that the mountain of work we’ve seen develop will simply be absorbed without due recognition of concept originators. There are some attempts being made to address this, as we discuss in our Dec 2022 AI Generator Tech Update, but authentic creative works where game-based content has been used are clearly not yet on the agenda. This is our one to watch in 2023.
Over to you!
We’d love to hear your thoughts on our year in review – maybe you have other highlights you’d like to share. If so, do please get in touch or post a comment below.
In the meantime, our thanks to you our readers, listeners and watchers for being part of the Completely Machinima podcast this year. Happy Christmas – and is it too early for Happy New Year?!