To kick start 2023 with a virtual BANG, we are highlighting some projects we’ve seen that are great examples of machinima and virtual production, demonstrating a breadth of techniques, a range of technologies and some good ole’ short-form storytelling. We also really enjoyed Steve Cutts’ tale of man… let’s hope for a peaceful and happy year. Enjoy!
Force of Unreal
We were massively impressed throughout last year with the scope of creative work being produced in Unreal Engine. So, we have a few more to tell you about!
RIFT by HaZimation is a sci-fi anime-style film with characters created in Reallusion’s Character Creator. The film debuted at the Spark Computer Graphics Society’s Spark Animation Festival last October. We love the stylized effects used here, which Haz Dulull, director/producer, describes as a combination of 2D and 3D in this article (scroll to just below halfway). We are also impressed that the same 3D assets and environments used in the filmmaking process have been integrated into an FPS game, currently available free on Steam in early access here. This is another great example of creators using virtual assets in multiple ways – and it builds very much on the model that Epic envisaged when it first released the City Sample last year, hot on the heels of the release of The Matrix Resurrections film and The Matrix Awakens: UE5 Experience, for which the city was created. We also love HaZimation’s strategy of co-creating the new RIFT game experience with players – “We at HaZimation believe that a great game is only possible with direct feedback from the audience as early as possible” (Steam). We fully expect to see more creative works using the RIFT content in future too. Congrats to everyone involved.
As any of you following the podcast will have gathered, we love a good alien film too, and we have found another made in UE5 that we really enjoyed. This one is called The Lab, by Haylox (released 14 Sept 2022). The director/producer builds the suspense well although, of course, it’s the same Alien trope we’ve seen many times over. Nonetheless, it has nice effects and a well-balanced soundscape.
We also love a good music video. The next project is a dance video made by Guru Pradeep using the music ‘Urvashi’ – Kaadhalan (A R Rahman), released 2 Aug 2022. It’s a little rough around the edges, having seemingly been cobbled together with Megascans, Sketchfab and items grabbed from the UE Marketplace, but the mocap – although we don’t know what was used to capture it – is particularly well done, as is the editing. We look forward to seeing more from this creator in future.
Aspiring Assets
We want to highlight the amazing content that’s being developed for use in UE with RealityCapture. In this video – more a ‘show and tell’ than a tutorial – William Faucher reveals how he created a Lofoten-inspired cabin environment from the 1800s. It’s impressive stuff if you have an eye for photogrammetry, it covers some of the challenges of asset creation, and there are lots of tips and hints in here, with more detailed tutorials on his channel.
We have also been impressed with the range of fabulous assets being created and used in the KitBash3D Mission to Minerva challenge (closed 2 Dec 2022), the outcome of which will be a new galaxy combining the submitted concept artworks and in-motion content. There are some really nice videos, which you can find using #kb3dchallenge on YouTube, that are definitely worth a looksee. We liked this one by Mike Seto, which has a nice touch of disaster about it.
With an impressive field of judges that included talent acquisition representatives from NASA Concept Labs, Netflix, Riot Games and ILM, winners were announced on 20 Dec.
And Finally?
Let’s hope for a more progressive year in 2023 than the hate-filled traps that befell so many across a whole plethora of virtual platforms and IRL… and maybe reflect on the message contained within this great fun short, created in Clip Studio Paint with Cinema 4D and After Effects. The film is by Steve Cutts, called A Brief Disagreement, released 30 Sept 2022. Steve is not a n00b in the world of machinima (and the earlier days of Reallusion’s CrazyTalk) – his classic comedy about the fate of Roger and Jessica Rabbit, as well as every other iconic cartoon character you can think of, is still a good laugh even 8 years after its release for those of a certain age (and it’s the featured image for this article, in case you were wondering)!
In this post, we share our thoughts on some of the key trends we’ve seen over the last 12 months in the world of machinima and virtual production. It’s been quite a ride this year, and what’s been fascinating to witness is how we are each trying to keep up with all the things going on. For example, two years ago when we started the Completely Machinima podcast, we weren’t really sure that machinima was still a thing… but, as Ricky says so eloquently: “Machinima is alive and well. Two years ago, when I was asked to be part of this podcast, I said, No machinima is dead. And I am very happy to be proven profoundly wrong.” Yup! So here are this year’s TOP observations.
Indie vs Pro
Machinima has been on the cusp of mainstream filmmaking for 25 years. In Tracy & Ben’s Pioneers in Machinima book, there are frequent mentions of big-budget Hollywood productions having dabbled with real-time techniques, primarily for previz. But not exclusively, as Tracy discovered in her interview in February with John Gaeta (aka Dalt Wisney), founder of ILMxLAB and best known as the creator of The Matrix bullet-time shot. Of course, The Mandalorian stood out as a marker in the sand in the adoption of virtual production and real-time techniques and, ever since COVID, its evolving practices have come to the fore, time and again.
Beyond the large-scale use of virtual production with LED walls and stages, this year we’ve noticed more professionals playing with real-time 3D virtual production processes at the desk. These are individuals wanting to explore a concept, tell a story that perhaps they wouldn’t otherwise be able to or, as studios, explore the technologies as part of their pipeline. Many of these folks already work in the film industry in areas such as special effects, post-production or some other role. Some great examples we’ve reviewed on the podcast are –
And whilst the pros have been dabbling with the ‘new’ toolsets, the indies have stolen a march on them and are producing some truly astonishing high-quality work. It doesn’t always get the attention it deserves, but these creators are certainly leading the way and some are now breaking into industry too. We review a selection every month, but a few we’ll draw attention to are –
Heroes of Bronze ‘Journeys’ Teaser by Martin Klekner (S2 E35 Apr 2022)
MOVING OUT || Somewhere In Space || A No Man’s Sky Cinematic Series by Geeks Actively Making Entertainment (S2 E39 June 2022)
It’s been fascinating to watch the beginnings of convergence between the pro and indie worlds, and we’re excited to see more new projects emerge in 2023, as well as more indies getting the recognition they deserve, with at least some chance of generating an income from their creative endeavour. Needless to say, as the mainstream catches up, the indies are going to be much in demand – although let’s hope that doesn’t then result in the devastation of creative ideas as it did in 2008–9, when many of the original machinima folks were absorbed by games developers (notably to work on EA’s Mass Effect).
Unreal
In part, the opportunities for indies mentioned above exist because Epic Games, Unreal Engine’s creator, has had the foresight to devise a generous freemium model: the engine is free to use for projects grossing below $1M, with a 5% royalty on revenue beyond that. It has a vast marketplace of assets that can be used at low or no cost in productions. In turn, this enables creators to circumvent one of the most wicked problems faced when making game-based machinima: how best to deal with game-related intellectual property rights. Very few machinima or real-time virtual production short projects we’ve seen are ever going to come close to $1M. Most don’t even crack $100. And that is despite many games developers having in the past directly benefitted from the increased visibility of fan-created content and the ideas included in them… but that’s not a discussion point for now. It is the freemium model for UE that will massively grow the skillset of creatives which, in turn, will enable all sorts of applications to emerge as real-time and virtual production proliferates across industries. This is a win-win for creatives and techs alike.
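As a back-of-the-envelope illustration of how that freemium model plays out for a typical project (a sketch based on the terms as summarised here, not on Epic’s licence text), the royalty only bites on revenue beyond the threshold:

```python
def ue_royalty(gross_revenue: float,
               threshold: float = 1_000_000,
               rate: float = 0.05) -> float:
    """Royalty owed under the freemium terms as described above:
    nothing up to the threshold, 5% of the revenue beyond it."""
    return max(0.0, gross_revenue - threshold) * rate

# A typical machinima short earning $100 owes nothing:
print(ue_royalty(100))        # 0.0
# A breakout project grossing $1.5M owes 5% of the $500K excess:
print(ue_royalty(1_500_000))  # 25000.0
```

Which is exactly why the model is such a non-issue for indie creators: almost every project sits entirely inside the free tier.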
Alongside this, a number of other engines that have traditionally been used for machinima, real-time and virtual production have made it progressively more difficult and convoluted to create new works, for a variety of reasons. Furthermore, quite simply, the finished product just does not look as good in comparison to UE. For example, we were disappointed to hear that, despite its quality being potentially comparable, Erasmus Brosdau’s Warhammer 40K film The Lord Inquisitor never received the publisher’s backing, even though many of those involved were associated with the game (we reviewed this in S2 E45 Sept 2022, but the film dates back to 2016). Rockstar hasn’t supported the release of an editor for Red Dead Redemption 2. GTAV and Blizzard’s World of Warcraft are showing their age and, despite a few leaks, nothing concrete about updates has emerged. Roberts Space Industries (Star Citizen) appears to have shot itself in the foot in its attempt to protect actor assets (e.g., Gary Oldman, Mark Hamill, Mark Strong, Gillian Anderson, Andy Serkis and others) in its upcoming Squadron 42. The latter in particular is such a pity because we were very much looking forward to seeing episode 2 of Adrift by Barely Not Monkeys, a highlight of our annual review last year.
Of course, the trade-off in using UE is that creating projects is nowhere near as speedy as purely game-based machinima used to be, which could be captured on any game-playing computer with a few tools or mods, such as video capture and an editor. During the year, we’ve seen the release of UE5 and 5.1, each a step change on its predecessor. Access is a challenge because the phenomenal compute power needed to run all the different tools to create real-time 3D virtual sets and characters, render films, etc., keeps growing. Nonetheless, Epic has given the process impetus. It has put a lot of effort into making the UE toolset easy to use, relative to other platforms such as Unity and iClone. This, coupled with the huge number of tutorials created by a vast and growing community of users, alongside investment in short courses and guides, most of which are free, has positioned it as a medium of choice. As Kim Libreri, Chief Technology Officer at Epic since 2014, is quoted as saying in the Pioneers in Machinima book: “The main challenge is how to tell a great story. Now that the tools aren’t a barrier, it is about understanding human comprehension, understanding the light-depth field, writing, and placing things so that people can understand the story you are trying to tell.”
At the end of last year, we felt that Nvidia’s Omniverse would become the driving platform and were waiting with bated breath for the release of updates to its Machinima toolset, especially following Ricky’s interview with Dane Johnston. So far, we have been disappointed. One challenge with the Nvidia toolset is the lock-in to the company’s hardware, which is required to run Omniverse. With the squeeze on chip access and the high price of kit, along with Nvidia’s astonishingly rapid advancements in all things AI and the consequent new spec releases, it has probably lost ground in the creator filmmaking world – who can afford to replace their kit every 6 months? We are, however, very interested to see Nvidia’s cloud-based subscription model emerging, which is surely going to help by improving access to high-compute tools, at least for as long as folks can afford the sub. Omniverse has amazing potential, but all these challenges have resulted in us seeing only one notable film made entirely using the toolset to date, compared to UE5, in which we are seeing many more –
Unreal, along with other amazing toolsets, platforms and hardware developers such as Reallusion, Blender, Nvidia, Autodesk and numerous others, has invested in Universal Scene Description (USD). This is an open-source format, originally developed by Pixar, that allows for the interchange of assets between toolsets. USD is a game-changer: through its wide adoption, indies and pros alike can align and build their preferred pipelines, integrating content using a range of methods according to the skills they have in capture techniques such as photogrammetry, 360, mocap, etc. The tools and platforms, collectively, are touted as being the foundation of future metaverse applications, but hitherto it has been UE that has been the backbone of this year’s most exciting creative works, often integrating mocap with Reallusion’s Character Creator. Examples are –
Blu x @Teflon Sega meta-saga!! Ep4 by Xanadu (S2 E37 May 2022)
And check out also the range of other types of projects reviewed in our October blog post, such as SnoopDogg’s web3 Crip Ya Enthusiasm, Rick Pearce’s 2D/3D Roborovski and The Walker by AFK – The Webseries
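For a sense of why USD makes interchange so straightforward, here is a minimal hand-written .usda file (a hypothetical single-triangle scene of our own, not taken from any of the projects above): the same human-readable text describing transforms and geometry can be opened by USD-aware tools across Blender, UE, Omniverse and iClone pipelines –

```
#usda 1.0
(
    defaultPrim = "Set"
    metersPerUnit = 1
    upAxis = "Y"
)

def Xform "Set" (
    kind = "component"
)
{
    double3 xformOp:translate = (0, 0, -10)
    uniform token[] xformOpOrder = ["xformOp:translate"]

    def Mesh "Prop"
    {
        point3f[] points = [(-1, 0, -1), (1, 0, -1), (0, 1, 0)]
        int[] faceVertexCounts = [3]
        int[] faceVertexIndices = [0, 1, 2]
    }
}
```

Because the format is plain text and composable in layers, assets can be versioned, diffed and mixed from different capture pipelines without lossy conversions – which is precisely what lets indies and studios build their preferred pipelines around it.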
Mo-Cap
We’ve witnessed the mass adoption of mocap for developing body and facial animation using all sorts of different mocap suits, including markerless systems. The ability to integrate content easily into the virtual production pipeline has resulted in a plethora of works that attempt to illustrate more realistic movement. This has enabled creators to customize assets by adding detail to characters, resulting in greater depth to the process of characterization, more empathy and better storytelling capability. As the technologies have advanced from body to hand, face, mouth and eye capture over the year, creators have become less reliant on traditional approaches to storytelling, such as narration and voice acting, and have instead used more nuanced human-like behaviour that can be interpreted only subliminally. Examples are –
Of course, capture alone would be pointless without high-fidelity models (bones and rigs) to which the movement can be mapped. UE’s MetaHumans and Reallusion’s Character Creator have therefore rapidly become key tools in this part of the pipeline. Both provide high-quality output at even the most basic level of mapping, and advanced users are leaving contrails in their wake. Check out the mouth movement detail in this –
The Talky Orcs by AFK – The Webseries (S3 E58 Dec 2022)
Challenges vs Festivals
In the tradition of the filmmaking and animation industries, there are many festivals that provide opportunities for creators to showcase their best work, get feedback from reviewers and have a chance to connect and reflect, through which new collaborations may be formed. There are, however, actually very few festivals that celebrate machinima and virtual production these days. This year we’ve noticed a growing number of contests through which creators are able to test and showcase aspects of their skillsets, most of which are incentivised by prizes of the latest kit and some of which may lead to new opportunities, such as John MacInnes’ Mood Scenes challenge. What’s particularly interesting is that last year Tracy said, “We need more contests to promote the new toolsets”, whereas this year she says, “We need a different type of promotion than contests”!
Two thoughts occur on this. Firstly, is it the case that more virtual production content is finally being accepted into animation festivals? This is something we’ve often lamented in the past, where machinima was always seen as the poor relation even though the creativity demonstrated has been innovative. In part, this attitude is what led the community to form its own festival series: the Machinima Film Festival, created by the Academy of Machinima Arts & Sciences, ran between 2002 and 2008, with an EU version, the European Machinima Film Festival, in 2007; it was then succeeded by the Machinima Expo, which ran until 2014 and included a virtual event held in Second Life. This was hugely popular among the community of creators because it attracted a breadth of talent using a multitude of different toolsets. So, we have been thrilled this year to see Damien’s Star Wars series Heir to the Empire being recognized in a host of festivals that have accepted the work he creates in iClone. Ditto Martin Bell’s Prazinburk Ridge, made in UE, and various others. Examples are, however, few so far – or maybe we are witnessing a change in the way works are distributed too!
Secondly, is it the case that contests and challenges are an excellent way for tech developers to promote their wares and their uses? This is very evidently the case. This year we have seen contests run purely to generate interest in a toolset. We have seen other contests run by creator channels whose goal appears to be simply to drive up their follower numbers, and these use the same tech-dev toolsets as incentives for participation. The outputs we have seen have generally been creatively poor, albeit well marketed. Without greater emphasis on well-run independent festivals with prizes for creativity, this is unlikely to change – and that’s a pity, because it doesn’t result in the development of good skills but simply drives the creation of content. It is very much the model we saw when Machinima.com incentivized content creation with its partner programme, where eyes equalled a share of the ad revenue. We will be dismayed if this continues. As the year has progressed, however, we have observed a growing range of films being promoted as creative works, including through Reallusion’s Pitch and Produce programme –
We are also mindful that many mainstream festivals will only take content if it hasn’t previously been released to online channels… and that’s fine too, but festivals then need to take greater responsibility for supporting indie creatives in particular in promoting their work.
Tutorials vs Creativity
The trends we observed in contests and festivals give us hope that the tide is beginning to turn away from the hundreds and thousands of tutorials released over the year, covering every minute aspect of any toolset ever developed. There have been so many that Phil led a discussion on the process of taking online tutorials in this episode –
How I learned Unity without following Tutorials by Mark Brown (S3 E47 Oct 2022)
Of course, many tutorials are well produced and immeasurably useful, to a point. They clearly get thousands of views and, one assumes, help hundreds of followers out of holes in their pipeline. But what is the point unless it results in new creative works? And where are those creative works being shown? We just don’t know!
Part of the problem is the way the dominant platforms share content using algorithms that favour view counts to serve the most popular work. This mechanism is never going to inspire anyone to create new work – it results in tedious trawls through mountains of trash and consequently very low engagement levels. The only good thing about it is that the golden nuggets we find are treasured, and our podcast is a trove for anyone interested in how machinima and virtual production techniques are evolving. We implore the tech devs to do more to promote creative practice beyond the endless pursuit of the next best tutorial – and we also ask someone, anyone, to figure out a way that folks can find this work easily. We explored some potential distribution options in our Nov 2022 blog, but this is something we will no doubt revisit in our review of 2023.
AI
This year has witnessed the rise of the AI generator and its use in devising everything from text, images and the spoken word to emotion, music, movement and interaction. The speed of the advancements being made is truly awe-inspiring. We followed the developments over the course of the year and by October decided we needed to give emerging projects some space in the podcast too. One notable example that stood out for us was The Crow by Glenn Marshall Neural Art (reviewed in our Oct 2022 Projects Update blog post). We then posted a more detailed report, with observations about the implications for machinima and virtual production creators, later in the month (Report: Creative AI Generators, Oct 2022) and a follow-up in December which highlights three very different ways in which AI generators can be used to animate content. All this has come about in less than 6 months!
Within games, we have witnessed the various roles of AI over many years, particularly in animating non-player characters and generating environments, both of which have formed a good portion of the content used in machinima and virtual productions. The current raft of tools presents a potentially whole new way of realizing creative works, without the learning curve of understanding the principles of storytelling, filmmaking, animation and acting needed for UE and others. This could be liberating, leading to innovative creations, but at this juncture we are concerned that the mountain of work we’ve seen develop will simply be absorbed without due recognition for concept originators. There are some attempts being made to address this, as we discuss in our Dec 2022 AI Generator Tech Update, but recognition for authentic creative works where game-based content has been used is clearly not yet on the agenda. This is our one to watch in 2023.
Over to you!
We’d love to hear your thoughts on our year in review – maybe you have other highlights you’d like to share. If so, do please get in touch or post a comment below.
In the meantime, our thanks to you our readers, listeners and watchers for being part of the Completely Machinima podcast this year. Happy Christmas – and is it too early for Happy New Year?!
Even the everyday gamer knows how much graphics technology has advanced over the last few years. The days of old, pixelated textures on walls and rocks are long gone. So much so that gamers with more advanced skills have gone back to classic games like Doom and re-coded them to include modern graphics technology like high-definition textures and ray-traced rendering.
I’m not going to go into detail about ray tracing in this short article. You can find a complete explanation here. Essentially it has to do with how light is reproduced in a 3D game engine. Ray-traced rendering makes everything in a 3D scene look more realistic and believable. This is why adding ray tracing to games like Doom and now Quake is so exciting. The original blocky look to the game is gone. In its place is a more believable environment that adds so much to the atmosphere of horror in the game. This is perfect for those who want to go back and play the original game: it’s a better experience. It is also great for those first-time gamers who didn’t grow up with Doom or Quake.
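The linked article has the full explanation, but the core idea can be sketched in a few lines: fire a ray from the camera through each pixel and solve for where it first hits the scene geometry. For a sphere, that reduces to a quadratic (a toy illustration only, not how the mod itself is implemented):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest positive t where origin + t*direction meets the sphere
    |p - center| = radius, found by solving the quadratic a*t^2 + b*t + c = 0."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # smaller root = first hit
    return t if t > 0 else None

# A ray fired down the z-axis hits a unit sphere centred 5 units away
# at its near surface, t = 4:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A renderer repeats this per pixel against every object, then traces further rays toward lights and reflective surfaces – which is why doing it in real time has only recently become feasible on consumer GPUs.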
Although certainly not at the same level of realism as modern games like Elden Ring, the new Quake mod is pretty damn good if you ask me. The sultim_t team deserves a standing ovation for their hard work.
You can download the Quake Ray Traced mod by clicking the link. We also have a short trailer that sultim_t put out for the mod. The comments for the video are worth a read as well. Of course, you need to buy Quake from Steam in order to get started. The mod itself is free.
This week, we share updates that will add to your repertoire of tools, tuts and libraries, along with a bit of fighting inspiration for creating machinima and virtual production.
Just the Job!
Unreal Engine has released a FREE animation course. The ‘starter’ course includes contributions from Disney and Reel FX and is an excellent introduction to some of the basics in UE. Thoroughly recommended, even as a refresher for those of you who already have some of the basics.
Alongside the release of UE5.1, a new KitBash3D Cyber District kit has also been released, created by David Baylis. It looks pretty impressive – read about it on their blog here.
Cineshare has released a tutorial on how to create a scene that comprises a pedestrian environment, using Reallusion’s ActorCore, iClone and Nvidia Omniverse. The tutorial has also been featured on Reallusion Magazine’s site here.
Nvidia Omniverse has released Create 2022.3.0 in beta. Check out the updates on its developer forum here and watch the highlights on this video –
Libraries
We came across an amazing 3D scan library, unimaginatively called ScansLibrary, which includes a wide range of 3D and texture assets. It’s not free, but it is relatively low cost: many assets cost a single credit, and a package of 60 credits is $29 per month. Make sure you check out the terms!
We also found a fantastic sound library, Freesound.org. The library includes tens of thousands of audio clips, samples, recordings and bleeps, all released under CC licenses, free to use for non-commercial purposes. Sounds can be browsed by keywords, a ‘sounds like’ query and other methods. The database has been running since 2005 and is supported by its community of users and maintained by the Universitat Pompeu Fabra, Barcelona, Spain.
Not really a library as such, but Altered AI is a tool that lets you change the voices on your recordings, including those you make directly in the platform. It’s a cloud-based service and it’s not free, but it has a reasonably accessible pricing strategy. This is perfect if you’re an indie creator and want a bunch of voices but can’t find the actor you want! (Ricky, please close your ears to this.) The video link is a nice review by Jae Solina, JSFilmz – check it out –
Fighting Inspiration
Sifu, the fighting action game, is being updated to allow for recording and playback, meaning you can essentially create your own martial arts movies. If you’re interested in creating fight scenes, then this might be something to check out.
Everything AI has grown exponentially this year, and this week we show you AI for animation using different techniques, as well as AR, VR and voice cloning. It is astonishing that some of these tools are already part of our creative toolset, as illustrated in our highlighted projects by GUNSHIP and Fabien Stelzer. Of course, any new toolset comes with its discontents, and so we cover some of those we’ve picked up on this past month too. It is certainly fair to say there are many challenges with this emergent creative practice, but it appears these are being thought through alongside the developing applications by those using them… although, of course, legislation is still a long way off.
Animation
Stability AI, developer of the text-to-image generator Stable Diffusion, raised $100M in October this year and is about to release its animation API. On 15 November it released DreamStudio, the first API on its web platform of future AI-based apps, and on 24 November it released Stable Diffusion 2.0. The animation API, DreamStudio Pro, will be a node-based animation suite enabling anyone to create videos, including with music, quickly and easily. It includes storyboarding and is compatible with a whole range of creative toolsets such as Blender, potentially making it a new part of the filmmaking workflow, bringing imagination closer to reality without the pain – or so it claims. We’ll no doubt see about that shortly. And btw, 2.0 has higher-resolution upscaling options, more filters on adult content, increased depth information that can be more easily transformed into 3D, and text-guided in-painting, which helps to switch out parts of an image more quickly. You can catch up with the announcements on Robert Scoble’s YouTube channel here –
As if that isn’t amazing enough, Google is creating another method for animating using photographs – think image-to-video – called Google AI FLY. Its approach will make use of pre-existing methods of in-painting, out-painting and super-resolution of images to animate a single photo, creating an effect similar to a NeRF built from photogrammetry-style capture but without the requirement for many images. Check out this ‘how it’s done’ review by Károly Zsolnai-Fehér on the Two Minute Papers channel –
For more information, this article on Petapixel.com‘s site is worth a read too.
And finally this week, Ebsynth by Secret Weapons is an interesting approach that uses a video and a painted keyframe to create a new video resembling the aesthetic style of the painted frame. It is a type of generative style transfer with an animated output that could previously only really be achieved in post-production, but this is soooo much simpler to do and it looks pretty impressive. There is a review of the technique on 80.lv’s website here and an overview by its creators on their YouTube channel here –
We’d love to see anyone’s examples of outputs with these different animation tools, so get in touch if you’d like to share them!
AR & VR
For those of you into AR, AI enthusiast Bjorn Karmann also demonstrated how Stable Diffusion’s in-painting feature can be used to create new experiences – check this out on his Twitter feed here –
For those of you into 360 and VR, Stephen Coorlas has used MidJourney to create some neat spherical images. Here is his tutorial on the approach –
Also Ran?
Almost late to the AI generator party (mmm…), China’s Baidu has released ERNIE-ViLG 2.0, a Chinese text-to-image AI which Alan Thompson claims is even better than DALL-E and Stable Diffusion, albeit using a much smaller model. Check out his review, which certainly looks impressive –
Voice
Nvidia has done it again – its amazing Riva AI clones a voice using just 30 minutes of voice samples. The anticipated applications are conversational virtual assistants, including multi-lingual ones, and it’s already been touted as a frontrunner alongside Alexa, Meta and Google – but in terms of virtual production and creative content, it is also possible it could be used to replace voice actors when, say, they are double-booked or poorly. So, make sure you get that covered in your voice-acting contract in future too.
Projects
We found a couple of beautiful projects that push the boundaries this month. Firstly, GUNSHIP’s music video is a great example of how this technology can be applied to enhance creative work. The video focusses on the aesthetics of cybernetics (and is our headline image for this article). Nice!
Secondly, an audience participation film by Fabien Stelzer which is being released on Twitter. The project uses AI generators for image and voice and also for scriptwriting. After each episode is released, viewers vote on what should happen next which the creator then integrates into the subsequent episode of the story. The series is called Salt and its aesthetic style is intended to be 1970s sci-fi. You can read about his approach on the CNN Business website and be a part of the project here –
Emerging Issues
Last month we considered the disruption that AI generators are causing in the art world, and this month it’s the film industry’s turn. Just maybe we are seeing an end to Hollywood’s fetish for Marvellizing everything – or perhaps AI generators will result in extended stories with the same old visual aesthetic, out-painted and stylized… which is highly likely, since AI has to be trained on pre-existing images, text and audio. In this article, Pinar Seyhan Demirdag gives us some thoughts about what might happen, but our experience with the emergence of machinima and its transmogrification into virtual production (and vice versa) teaches us that anything which cuts a few corners will ultimately become part of the process. In this case, AI can be used to supplement everything from concept development to storyboarding, animation and visual effects. If that results in new ideas, then all well and good.
When those new ideas get integrated into the workflow using AI generators, however, there is clearly potential for some to be less happy. This is illustrated by Greg Rutkowski, a Polish digital artist whose aesthetic style of ethereal fantasy landscapes is a popular inclusion in text-to-image prompts. According to this article in MIT Technology Review, Rutkowski’s name has appeared on more than 10M images and been used as a prompt more than 93,000 times in Stable Diffusion alone – and it appears that this is because the data on which the AI has been trained includes ArtStation, one of the main platforms used by concept artists to share their portfolios. Needless to say, the work is being scraped without attribution – as we have previously discussed.
What’s interesting here is the emerging groundswell of people and companies calling for legislative action. An industry initiative, spearheaded by Adobe in partnership with Twitter and the New York Times and called the Content Authenticity Initiative, has formed and is evolving rapidly. The CAI aims to authenticate content and has a publishing platform – check out its blog here and note that you can become a member for free. To date, it doesn’t appear that the popular AI generators we have reviewed are part of the initiative, but it is highly likely they will be at some point, so watch this space. In the meantime, Stability AI, creator of Stable Diffusion, is putting effort into listening to its community to address at least some of these issues.
Of course, much game-based machinima will immediately fall foul of such initiatives, especially if content is commercialized in some way – and that’s a whole other dimension to explore as we track the emerging issues… What of the roles of platforms owned by Amazon, Meta and Google, when so much of their content is fan-generated work? And what of those games devs and publishers who have made much hay from the distribution of creative endeavour by their fans? We’ll have to wait and see, but so far there’s been no real kick-back from the game publishers that we’ve seen. The anime community in South Korea and Japan has, however, collectively taken action against a former French game developer, 5you. The company used the work of a favoured artist, Jung Gi, to create an homage to his practice and aesthetic style after he had died, but the community didn’t agree with the use of an AI generator to do that. You can read the article on Rest of World’s website here. Community action is, of course, very powerful, and voting with their feet is something that invokes fear in the hearts of all industries.