
Tech Update 2 (Feb 2023)

Tracy Harwood Blog February 13, 2023

This week, we highlight some time-saving examples for generating 3D models using – you guessed it – AIs, and we also take a look at some recent developments in motion tracking for creators.

3D Modelling

All these examples highlight that generating a 3D model isn’t the end of the process and that once it’s in Blender, or another animation toolset, there’s definitely more work to do. These add-ons are intended to help you reach your end result more quickly, cutting out some of the more tedious aspects of the creative process using AIs.

Blender is one of those amazing animation tools that has a very active community of users and, of course, a whole heap of folks looking for quick ways to solve challenges in their creative pipeline. We found folks who have integrated OpenAI’s ChatGPT into the toolset by developing add-ons. Check out this illustration by Olav3D, whose comment about using ChatGPT to write Python scripts sums it up nicely, “better than search alone” –
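To give a flavour of what Olav3D is describing, here’s a minimal, hand-checked sketch of the kind of boilerplate Python script ChatGPT can draft for Blender’s built-in bpy API – the task (scattering a ring of cubes) and all the numbers are purely illustrative:

```python
# Run from Blender's Scripting workspace: scatter a ring of cubes around the origin,
# the sort of repetitive layout task that's tedious to do by hand.
import math
import bpy

COUNT = 12      # number of cubes in the ring
RADIUS = 5.0    # ring radius in Blender units

for i in range(COUNT):
    angle = 2 * math.pi * i / COUNT
    bpy.ops.mesh.primitive_cube_add(
        size=0.5,
        location=(RADIUS * math.cos(angle), RADIUS * math.sin(angle), 0.0),
    )
    # rotate each new cube so it faces the centre of the ring
    bpy.context.object.rotation_euler[2] = angle
```

The point isn’t the script itself but the turnaround: describing a task like this in plain English and pasting the result into Blender’s script editor is often quicker than trawling forum posts for an equivalent snippet.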

Dream Textures by Carson Katri is a Blender add-on using Stable Diffusion which is so clever that it even projects textures onto 3D models (with our thanks to Krad Productions for sharing this one). In this video, Default Cube talks about how to get results with as few glitches as possible –

and this short video by Vertex Rage shows how to integrate Dream Textures into Blender –

To check out Dream Textures for yourself, you can find Katri’s add-on on GitHub here and, should you wish to support his work, subscribe to his Patreon here too.

OpenAI also launched its Point-E 3D model generator this month, the output of which can then be imported into Blender. As CGMatter has highlighted, using the published APIs means a very long time sitting in queues for the downloads, whereas downloading the code and running it yourself is easy – and once you have it, you can create point-cloud models in seconds. He runs the code from Google Colab, which means you can run it in the cloud. Here’s his tutorial on how to use Point-E without the wait, giving you access to your own version of the code (on GitHub) in Colab –
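If you want to try it yourself, the sketch below condenses the text-to-point-cloud example from OpenAI’s point-e GitHub repo as we understand it – the model names, sampler arguments and prompt all come from that example and may change, so treat this as a starting point and check the repo (or CGMatter’s Colab) before relying on it:

```python
# Condensed from the text-to-point-cloud example notebook in openai/point-e.
# Runs comfortably on a GPU Colab runtime; CPU works but is much slower.
import torch
from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# A text-conditioned base model plus an upsampler that densifies the cloud.
base_model = model_from_config(MODEL_CONFIGS['base40M-textvec'], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint('base40M-textvec', device))

upsampler = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler.eval()
upsampler.load_state_dict(load_checkpoint('upsample', device))

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler],
    diffusions=[
        diffusion_from_config(DIFFUSION_CONFIGS['base40M-textvec']),
        diffusion_from_config(DIFFUSION_CONFIGS['upsample']),
    ],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # only the base model sees the prompt
)

# Sample progressively; the last yielded batch is the finished point cloud.
samples = None
for samples in sampler.sample_batch_progressive(
        batch_size=1, model_kwargs=dict(texts=['a small toy robot'])):
    pass

point_cloud = sampler.output_to_point_clouds(samples)[0]
print(point_cloud.coords.shape)  # roughly 4096 xyz points, with RGB channels alongside
```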

We also found another very interesting Blender add-on, created by Elie Michel, which lets you import models from Google Maps into the toolset. The video is a little old, but the latest update on GitHub, version 0.6.0 (for RenderDoc 1.25 and Blender 3.4), has just been released –

We were also interested to see NVIDIA’s update at CES (in January). It announced a release of the Omniverse Launcher that supports 3D animation in Blender, with generative AIs that enhance characters’ movement and gestures; a future update to Canvas that includes 360-degree surround images for panoramic environments; and an AI ToyBox that enables you to create 3D meshes from 2D inputs. Ostensibly, these tools are for creators developing work for metaverse and web3 applications, but we already know NVIDIA’s USD-based tools are incredibly powerful for supporting collaborative workflows, including machinima and virtual production. Check out the update here, and this is a nice little promo video that sums up the integrated collaborative capabilities –

Tracking

As fast as the 3D modelling scene is developing, so is motion tracking. Move.ai, which launched late last year, announced its pricing this month: $365 for 12 months of unlimited processing of recordings – this is markerless mocap at its very best, although not so useful if you want to do live mocap (no pricing announced for that yet). Move.ai (our feature image for this article) lets you record content using mobile phones (a couple of old iPhones will do). You can find out more on its new website here, and here’s a fun taster called Gorillas in the Mist, shot with ballet dancers and four iPhones, released in December by the Move.ai team –

Another app, although not 3D, is Face 2D Live, released by Dayream Studios – Blueprints in January. This tool allows you to live-link a face-capture app on your iPhone or iPad and make cartoons out of just about anything, including with friends who are also using the iPhone app. It costs just $14.99 and is available on the Unreal Marketplace here. Here’s a short video example to whet your appetite – we can see a lot of silliness ensuing with this for sure!

Not necessarily machinima, but for those interested in more serious facial mocap, Weta has been talking about how it developed its facial mocap processes for Avatar, using something called an ‘anatomically plausible facial system’. This is an animator-centric system that captures muscle movement, rather than using ‘facial action coding’, which focusses on identifying emotions. Weta stated its approach leads to a wider set of facial movements being integrated into the mocapped output – we’ll no doubt see more in due course. Here’s an article on the FX Guide website which discusses the approach being taken and, for a wider-ranging discussion on the types of performance tracking used by the Weta team, Corridor Crew have bagged a great interview with the Avatar VFX supervisor, Eric Saindon, here –

Report: Our Year in Review, 2022

Tracy Harwood Blog December 26, 2022

In this post, we share our thoughts on some of the key trends we’ve seen over the last 12 months in the world of machinima and virtual production.  It’s been quite a ride this year, and what’s been fascinating to witness is how we are each trying to keep up with all the things going on.  For example, two years ago when we started the Completely Machinima podcast, we weren’t really sure that machinima was still a thing… but, as Ricky says so eloquently: “Machinima is alive and well.  Two years ago, when I was asked to be part of this podcast, I said, No machinima is dead. And I am very happy to be proven profoundly wrong.”  Yup!  So here are this year’s TOP observations.

Indie vs Pro

Machinima has been on the cusp of mainstream filmmaking for 25 years. In Tracy & Ben’s Pioneers in Machinima book, there are frequent mentions of big-budget Hollywood productions having dabbled with real-time techniques, primarily for previz.  But not exclusively, as Tracy discovered in her interview in February with John Gaeta (aka Dalt Wisney), founder of ILMxLAB and best known as the creator of The Matrix bullet-time shot.  Of course, The Mandalorian stood out as a marker in the sand for the adoption of virtual production and real-time techniques and, ever since COVID, its evolving practices have come to the fore time and again.

Beyond the large-scale use of virtual production with LED walls and stages, this year we’ve noticed more professionals are playing with the real-time, 3D virtual production processes at the desk.  These are individuals wanting to explore a concept, tell a story that perhaps they wouldn’t otherwise be able to or, as studios, explore the technologies as part of their pipeline.  Many of these folks work in the film industry already in areas such as special effects, post production or some other role.  Some great examples we’ve reviewed on the podcast are –

Alien: The Message by Rene Jacob (CM Interview Apr 2022; review S2 E35 Apr 2022)

Prazinburk Ridge by Martin Bell (S2 E45, Sept 2022)

The Eye: Calanthek by Aaron Sims (S3 E48, Oct 2022)

And whilst pros have been dabbling with the ‘new’ toolsets, the indies have stolen a march on them and are producing some truly astonishing high-quality work.  It doesn’t always get the attention it deserves, but these creators are certainly leading the way and some are now breaking into the industry too.  We review a selection every month, but a few we’ll draw attention to are –

Heroes of Bronze ‘Journeys’ Teaser by Martin Klekner (S2 E35 Apr 2022)

MOVING OUT || Somewhere In Space || A No Man’s Sky Cinematic Series by Geeks Actively Making Entertainment (S2 E39 June 2022)

Tiny Elden Ring | Tilt Shift by Flurdeh (S2 E43 Aug 2022)

It’s been fascinating to watch the beginnings of a convergence of the pro and indie worlds, and we’re excited to see more new projects emerge in 2023, as well as more indies getting the recognition they deserve with at least some chance of generating an income from their creative endeavour.  Needless to say, as the mainstream catches up, the indies are going to be much in demand, although let’s hope that doesn’t then result in the devastation of creative ideas as it did in 2008-9, when many of the original machinima folks were absorbed by games developers (notably to work on EA’s Mass Effect).

Unreal

In part, the opportunities for indies exist because Epic Games, Unreal Engine’s creator, has had the foresight to devise a generous freemium model: the engine is free to use for projects grossing under $1M, with a 5% royalty thereafter. It has a vast marketplace of assets that can be used at low or no cost in productions.  In turn, this enables creators to circumvent one of the most wicked problems faced when making game-based machinima: how best to deal with game-related intellectual property rights.  Very few machinima or real-time virtual production short projects we’ve seen are ever going to come close to $1M.  Most don’t even crack $100.  And that is despite many games developers having in the past directly benefitted from the increased visibility of fan-created content and the ideas included in it… but that’s not a discussion point for now.  It is the freemium model for UE that will massively grow the skillset of creatives which, in turn, will enable all sorts of applications to emerge as real-time and virtual production proliferates across industries.  This is a win-win for creatives and techs alike.

Alongside this, a number of other engines that have traditionally been used for machinima, real-time and virtual production have made it progressively more difficult and convoluted to create new works, for a variety of reasons.  Furthermore, the finished product quite simply does not look as good in comparison to UE.  For example, we were disappointed to hear that, despite the potentially comparable quality in Warhammer 40K, Erasmus Brosdau’s The Lord Inquisitor never received the publisher’s backing even though many of those involved were associated with the game (we reviewed this in S2 E45 Sept 2022, but the film dated back to 2016). Rockstar hasn’t supported the release of an editor for Red Dead Redemption 2.  GTAV and Blizzard’s World of Warcraft are showing their age and, despite a few leaks, nothing concrete about updates has emerged. Roberts Space Industries (Star Citizen) appears to have shot itself in the foot in its attempt to protect actor assets (e.g., Gary Oldman, Mark Hamill, Mark Strong, Gillian Anderson, Andy Serkis and others) in its upcoming Squadron 42.  The latter in particular is such a pity because we were very much looking forward to seeing episode 2 of Adrift by Barely Not Monkeys, a highlight of our annual review last year.

Of course, the trade-off in using UE is that creating projects is nowhere near as speedy as purely game-based machinima used to be, which could be captured on any game-playing computer with a few tools or mods, such as a video capture utility and an editor.  During the year, we’ve seen the release of UE5 and 5.1, each a step change on its predecessor.  Access is a challenge because the phenomenal compute power needed to run all the different tools to create real-time 3D virtual sets and characters, render films, etc., is growing exponentially.  Nonetheless, Epic has given the process impetus.  It has put a lot of effort into making the UE toolset easy to use relative to other platforms such as Unity and iClone. This, coupled with the huge number of tutorials created by a vast and growing community of users, alongside investment in short courses and guides, most of which are free, has positioned it as a medium of choice.  As Kim Libreri, Chief Technology Officer at Epic since 2014, is quoted as saying in the Pioneers in Machinima book: “The main challenge is how to tell a great story.  Now that the tools aren’t a barrier, it is about understanding human comprehension, understanding the light-depth field, writing, and placing things so that people can understand the story you are trying to tell.”

At the end of last year, we felt that Nvidia’s Omniverse would become the driving platform and were waiting with bated breath for the release of updates to its Machinima toolset, especially following Ricky’s interview with Dane Johnston.  So far, we have been disappointed.  One challenge with the Nvidia toolset is the lock-in to their hardware, which is required to run Omniverse.  With the squeeze on chip access and the high price of kit, along with its astonishingly rapid advancements in all things AI and the consequent new spec releases, it has probably lost ground in the creator filmmaking world – who can afford to replace their kit every 6 months?  We are, however, very interested to see Nvidia’s cloud-based subscription model emerging, which is surely going to help by improving access to high-compute tools, at least for as long as folks can afford the sub.  Omniverse has amazing potential, but all these challenges have resulted in us seeing only one notable film made entirely using the toolset to date, compared to UE5 in which we are seeing many more –

Most Precious Gift by Shangyu Wang (S3 E49 Oct 2022)

Platform Connectivity

Unreal, along with other amazing toolsets, platforms and hardware developers such as Reallusion, Blender, Nvidia, Autodesk and numerous others, has invested in Universal Scene Description (USD).  This is an open-source format originally developed by Pixar that allows for the interchange of assets between toolsets.  USD is a game-changer: through its wide adoption, indies and pros alike can build their preferred pipelines and integrate content using a range of methods, according to their skills in capture techniques such as photogrammetry, 360 and mocap (there’s a short USD sketch after the examples below).  The tools and platforms, collectively, are touted as the foundation of future metaverse applications, but so far it has been UE that has been the backbone of this year’s most exciting creative works, often integrating mocap with Reallusion’s Character Creator.  Examples are –

Metaverse Music Video by JSFilmz (S3 E52 Nov 2022)

Blu x @Teflon Sega meta-saga!! Ep4 by Xanadu (S2 E37 May 2022)

And also check out the range of other types of projects reviewed in our October blog post, such as Snoop Dogg’s web3 Crip Ya Enthusiasm, Rick Pearce’s 2D/3D Roborovski and The Walker by AFK – The Webseries.
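Picking up on the USD point above, here’s a minimal sketch of what the interchange looks like in practice, using Pixar’s USD Python API (pxr). The file paths are hypothetical placeholders – the idea is simply that a shot stage can reference assets exported from different tools (say, a character from Character Creator and a set dressed in Blender) without flattening or re-exporting either:

```python
# Compose a shot from assets authored in different tools via USD references.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("shot_010.usda")
UsdGeom.Xform.Define(stage, "/World")

# Each referenced file stays editable in its source application; this stage
# just composes them together for layout, lighting and rendering.
hero = stage.DefinePrim("/World/Hero")
hero.GetReferences().AddReference("assets/hero_character.usd")

street = stage.DefinePrim("/World/Set")
street.GetReferences().AddReference("assets/street_set.usd")

stage.GetRootLayer().Save()
```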

Mo-Cap

We’ve witnessed the mass adoption of mocap for developing body and facial animation using all sorts of different mocap suits, as well as markerless systems.  The ability to integrate captured content easily into the virtual production pipeline has resulted in a plethora of works that attempt to illustrate more realistic movement.  This has enabled creators to customize assets by adding detail to characters, which brings greater depth to characterization, building more empathy and better storytelling capability.  As the technologies have advanced from body to hand, face, mouth and eyes over the year, creators have become less reliant on traditional approaches to storytelling, such as narration and voice acting, and have instead used more nuanced, human-like behaviour that can be read subliminally.  Examples are –

ALONE by Playard Studios (S3 E54 Nov 2022)

The Cloud Racer by Impossible Objects (S2 E45 Sept 2022)

Of course, capture alone would be pointless without high-quality models (bones and rigs) to which the movement can be mapped.  UE’s MetaHumans and Reallusion’s Character Creator have therefore rapidly become key tools in this part of the pipeline.  Both set a high bar for output at even the most basic level of mapping, and advanced users are leaving contrails in their wake.  Check out the mouth-movement detail in this –

The Talky Orcs by AFK – The Webseries (S3 E58 Dec 2022)

Challenges vs Festivals

In the tradition of the filmmaking and animation industries, there are many festivals that provide opportunities for creators to showcase their best work, get feedback from reviewers and have a chance to connect and reflect, through which new collaborations may be formed.  There are actually very few festivals that celebrate machinima and virtual production these days.  This year, however, we’ve noticed a growing number of contests through which creators are able to test and showcase aspects of their skillsets, most of which are incentivised by prizes of the latest kit and some of which may lead to new opportunities, such as John MacInnes’ Mood Scene Challenge.  What’s particularly interesting is that last year Tracy said, “We need more contests to promote the new toolsets” whereas this year, she says “We need a different type of promotion than contests”!

Two things occur to us here.  Firstly, is it the case that more virtual production content is finally being accepted into animation festivals?  This is something that we’ve often lamented in the past, where machinima was always seen as the poor relation even though the creativity demonstrated has been innovative.  In part, this attitude is what led the community to form its own festival series: the Machinima Film Festival, created by the Academy of Machinima Arts & Sciences, ran between 2002 and 2008, with an EU version, the European Machinima Film Festival, in 2007; it was then succeeded by the Machinima Expo, which ran until 2014 and included a virtual event held in Second Life.  This was hugely popular among the community of creators because it attracted a breadth of talent using a multitude of different toolsets.  So, we have been thrilled this year to see Damien’s Star Wars series Heir to the Empire being recognized in a host of festivals that have accepted the work he creates in iClone.  Ditto Martin Bell’s Prazinburk Ridge made in UE, and various others.   Examples are, however, few so far – or maybe we are witnessing a change in the way works are distributed too!

Secondly, is it the case that contests and challenges are an excellent way for tech developers to promote their wares and their uses?  This is very evidently the case.  This year we have seen contests run purely to generate interest in a toolset.  We have seen other contests run by creator channels whose goal appears to be simply to drive up their numbers of followers, and these use the same tech dev toolsets as incentives for participation.  The outputs we have seen have generally been creatively poor, albeit well marketed.  Without greater emphasis on well-run independent festivals with prizes for creativity, this is unlikely to change – and that’s a pity, because it doesn’t result in the development of good skills but simply drives the creation of content.  It is very much the model we saw when Machinima.com incentivized content creation with its partner programme, where eyes equalled a share of the ad revenue.  We will be dismayed if this continues.  As the year has progressed, however, we have observed a growing range of films being promoted as creative works, including through Reallusion’s Pitch and Produce programme –

The Remnants by Stan Petruk (S3 E53 Nov 2022)

We are also mindful that many mainstream festivals will only take content if it hasn’t previously been released to online channels… and that’s fine too, but festivals then need to take greater responsibility for supporting indie creatives in particular to promote their work.

Tutorials vs Creativity

The trends we observed in contests and festivals give us hope that the tide is beginning to turn away from the thousands of tutorials we have seen released over the year, covering every minute aspect of any toolset ever developed.  There have been so many that Phil led a discussion on the process of taking online tutorials in this episode –

How I learned Unity without following Tutorials by Mark Brown (S3 E47 Oct 2022)

Of course, many tutorials are well produced and immeasurably useful, to a point.  They clearly get thousands of views and, one assumes, help hundreds of followers out of holes in their pipeline.  But what is the point unless it results in new creative works?  And where are those creative works being shown?  We just don’t know! 

Part of the problem is the way in which the dominant platforms share content using algorithms that favor view counts in order to serve up the most popular work.  This mechanism is never going to inspire anyone to create new work – it results in tedious trawls through mountains of trash and, consequently, very low engagement levels.  The only good thing about it is that the golden nuggets we find are treasured, and our podcast is a trove for anyone interested in how machinima and virtual production techniques are evolving.  We implore the tech devs to do more to promote creative practice beyond the endless pursuit of the next best tutorial – and we also ask someone, anyone, to figure out a way that folks can find it easily.  We explored some potential distribution options in our Nov 2022 blog, but this is something we will no doubt revisit in our review of 2023.

AI

This year has witnessed the rise of the AI generator and its use in devising everything from text and images to spoken word, emotion, music, movement and interaction.  The speed of advancements being made is truly awe-inspiring.  We followed the developments over the course of the year and by October decided we needed to give emerging projects some space in the podcast too.  One notable example that stood out for us was The Crow by Glenn Marshall Neural Art (reviewed in our Oct 2022 Projects Update blog post).  We then posted a more detailed report, with observations about the implications for machinima and virtual production creators, later in the month (Report: Creative AI Generators, Oct 2022) and a follow-up in December which highlighted three very different ways in which AI generators can be used to animate content.  All this has come about in less than 6 months!

Within games, we have witnessed the various roles of AI over many years, particularly in animating non-player characters and generating environments, which have formed a good portion of the content used in machinima and virtual productions.  The current raft of tools presents a potentially whole new way of realizing creative works, without the learning curve of understanding the principles of storytelling, filmmaking, animation and acting needed for use in UE and others.  This could be liberating, leading to innovative creations, but at this juncture we are concerned that the mountain of work we’ve seen develop will simply be absorbed without due recognition of concept originators.  There are some attempts being made to address this, as we discuss in our Dec 2022 AI Generator Tech Update, but authentic creative works where game-based content has been used are clearly not yet on the agenda.  This is our one to watch in 2023.

Over to you!

We’d love to hear your thoughts on our year in review – maybe you have other highlights you’d like to share.  If so, do please get in touch or post a comment below.

In the meantime, our thanks to you our readers, listeners and watchers for being part of the Completely Machinima podcast this year. Happy Christmas – and is it too early for Happy New Year?!

Tech Update 2 (Dec 2022)

Tracy Harwood Blog December 12, 2022

This week, we share updates that will add to your repertoire of tools, tuts and libraries, along with a bit of fighting inspiration for creating machinima and virtual production.

Just the Job!

Unreal Engine has released a FREE animation course. The ‘starter’ course includes contributions from Disney and Reel FX and is an excellent introduction to some of the basics in UE. Thoroughly recommended, even as a refresher for those of you who already have some of the basics.

Alongside the release of UE5.1, a new KitBash3D Cyber District kit has also been released, created by David Baylis. It looks pretty impressive – read about it on their blog here.

Kitbash3D Cyber District kit

Cineshare has released a tutorial on how to create a scene that comprises a pedestrian environment, using Reallusion’s ActorCore, iClone and Nvidia Omniverse. The tutorial has also been featured on Reallusion Magazine’s site here.

Nvidia Omniverse has released Create 2022.3.0 in beta. Check out the updates on its developer forum here and watch the highlights on this video –

Libraries

We came across this amazing 3D scan library, unimaginatively called ScansLibrary, which includes a wide range of 3D and texture assets. It’s not free but it is relatively low cost: many assets cost a single credit, for example, and a package of 60 credits is $29 per month. Make sure you check out the terms!

example of a flower, ScansLibrary

We also found a fantastic sound library, Freesound.org. The library includes tens of thousands of audio clips, samples, recordings and bleeps, all released under Creative Commons licences, many free to use for non-commercial purposes (check the individual licence on each clip). Sounds can be browsed by keywords, a ‘sounds like’ query and other methods. The database has been running since 2005 and is supported by its community of users and maintained by the Universitat Pompeu Fabra, Barcelona, Spain.

Freesound.org
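For anyone who wants to pull clips into a pipeline rather than browse the website, Freesound also has a public API (APIv2). A rough sketch of a keyword search is below – you need a free API key, the token here is a placeholder, and the exact response fields are worth confirming against the API docs:

```python
# Search Freesound's APIv2 by keyword and list matching sounds with their licences.
import requests

API_KEY = "YOUR_FREESOUND_API_KEY"  # placeholder: request a key from freesound.org

resp = requests.get(
    "https://freesound.org/apiv2/search/text/",
    params={
        "query": "footsteps gravel",            # keyword search, as on the website
        "fields": "id,name,license,previews",   # keep the response small
        "token": API_KEY,
    },
    timeout=30,
)
resp.raise_for_status()

for sound in resp.json()["results"]:
    print(sound["id"], sound["name"], sound["license"])
```

Remember that licences vary per clip (CC0, CC-BY, CC-BY-NC), so check each one before using it in anything you intend to publish.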

Not really a library as such, but Altered AI is a tool that lets you change voices on your recordings, including those you record directly into the platform. It’s a cloud-based service and it’s not free, but it has a reasonably accessible pricing strategy. This is perfect if you’re an indie creator and want a bunch of voices but can’t find the actor you want! (Ricky, please close your ears to this.) The video link is a nice review by Jae Solina, JSFilmz – check it out –

Fighting Inspiration

The fighting action game Sifu is being updated to allow for recording and playback, so you can essentially create your own martial arts movies. If you’re interested in creating fight scenes, then this might be something to check out.

Sifu

Tech Update 1: AI Generators (Dec 2022)

Tracy Harwood Blog December 5, 2022

Everything with AI has grown exponentially this year, and this week we show you AI for animation using different techniques, as well as AR, VR and voice cloning. It is astonishing that some of these tools are already a part of our creative toolset, as illustrated in our highlighted projects by GUNSHIP and Fabien Stelzer. Of course, any new toolset comes with its discontents, and so we cover some of those we’ve picked up on this past month too. It is certainly fair to say there are many challenges with this emergent creative practice, but it appears these are being thought through alongside the developing applications by those using them… although, of course, legislation is a long way off.

Animation

Stability AI, creator of the text-to-image generator Stable Diffusion, raised $100M in October this year and is about to release its animation API. On 15 November it released DreamStudio, the first API on its web platform of future AI-based apps, and on 24 November it released Stable Diffusion 2.0. The animation API, DreamStudio Pro, will be a node-based animation suite enabling anyone to create videos, including with music, quickly and easily. It includes storyboarding and is compatible with a whole range of creative toolsets such as Blender, potentially making it a new part of the filmmaking workflow, bringing imagination closer to reality without the pain – or so it claims. We’ll see about that shortly, no doubt. And by the way, 2.0 has higher-resolution upscaling options, more filters on adult content, increased depth information that can be more easily transformed into 3D, and text-guided in-painting, which helps to switch out parts of an image more quickly. You can catch up with the announcements on Robert Scoble’s YouTube channel here –
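DreamStudio itself is a hosted service, but if you’d rather poke at Stable Diffusion 2.0 on your own machine, the simplest route we know of is Hugging Face’s diffusers library rather than Stability’s own API. A rough sketch, assuming a CUDA GPU with enough VRAM and the stabilityai/stable-diffusion-2 weights from the Hugging Face Hub:

```python
# Generate a single still with Stable Diffusion 2.0 via the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,   # halves VRAM use on CUDA GPUs
).to("cuda")

image = pipe(
    "a matte painting of a rain-soaked neon city street, cinematic lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("still.png")
```

The forthcoming animation tools are built around the same underlying models, so a locally generated still like this is a reasonable way to get a feel for what your prompts will do before committing to a whole animation.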

As if that isn’t amazing enough, Google is creating another method for animating photographs – think image-to-video – called Google AI FLY. Its approach makes use of pre-existing methods of in-painting, out-painting and super-resolution to animate a single photo, creating an effect similar to a NeRF (or photogrammetry) but without the requirement for many images. Check out this ‘how it’s done’ review by Károly Zsolnai-Fehér on the Two Minute Papers channel –

For more information, this article on Petapixel.com‘s site is worth a read too.

And finally this week, Ebsynth by Secret Weapon is an interesting approach that uses a video and a painted keyframe to create a new video resembling the aesthetic style of the painted frame. It is a type of generative style transfer with an animated output – a result that previously could only really be achieved in post-production, but this is so much simpler to do and it looks pretty impressive. There is a review of the technique on 80.lv’s website here and an overview by its creators on their YouTube channel here –

We’d love to see anyone’s examples of outputs with these different animation tools, so get in touch if you’d like to share them!

AR & VR

For those of you into AR, AI enthusiast Bjorn Karmann also demonstrated how Stable Diffusion’s in-painting feature can be used to create new experiences – check this out on his Twitter feed here –

For those of you into 360 and VR, Stephen Coorlas has used MidJourney to create some neat spherical images. Here is his tutorial on the approach –

Also Ran?

Almost late to the AI generator party (mmm….), Baidu has released ERNIE-ViLG 2.0, a Chinese text-to-image AI which Alan Thompson claims is even better than DALL-E and Stable Diffusion, albeit using a much smaller model. Check out his review, which certainly looks impressive –

Voice

Nvidia has done it again – its amazing Riva AI clones a voice using just 30 minutes of voice samples. The anticipated application is conversational virtual assistants, including multilingual ones, and it’s already been touted as a frontrunner alongside Alexa, Meta and Google – but in terms of virtual production and creative content, it could also be used to replace actors when, say, they are double-booked or unwell. So make sure you get that covered in your voice-acting contract in future too.

Projects

We found a couple of beautiful projects that push the boundaries this month. Firstly, GUNSHIP’s music video is a great example of how this technology can be applied to enhance creative work. The video focusses on the aesthetics of cybernetics (and is our headline image for this article). Nice!

Secondly, an audience-participation film by Fabien Stelzer is being released on Twitter. The project uses AI generators for image, voice and scriptwriting. After each episode is released, viewers vote on what should happen next, which the creator then integrates into the subsequent episode of the story. The series is called Salt and its aesthetic is intended to evoke 1970s sci-fi. You can read about his approach on the CNN Business website and be a part of the project here –

Emerging Issues

Last month we considered the disruption that AI generators are causing in the art world, and this month it’s the film industry’s turn. Just maybe we are seeing an end to Hollywood’s fetish with Marvellizing everything – or perhaps AI generators will result in extended stories with the same old visual aesthetic, out-painted and stylized… which is highly likely, since AI has to be trained on pre-existing images, text and audio. In this article, Pinar Seyhan Demirdag gives us some thoughts about what might happen, but our experience with the emergence of machinima and its transmogrification into virtual production (and vice versa) teaches us that anything which cuts a few corners will ultimately become part of the process. In this case, AI can be used to supplement everything from concept development to storyboarding, animation and visual effects. If that results in new ideas, then all well and good.

When those new ideas get integrated into the workflow using AI generators, however, there is clearly potential for some to be less happy. This is illustrated by Greg Rutkowski, a Polish digital artist whose aesthetic style of ethereal fantasy landscapes is a popular inclusion in text-to-image generators. According to this article in MIT Technology Review, Rutkowski’s name has appeared on more than 10M images and has been used as a prompt more than 93,000 times in Stable Diffusion alone – and it appears that this is because the data on which the AI has been trained includes ArtStation, one of the main platforms used by concept artists to share their portfolios. Needless to say, the work is being scraped without attribution – as we have previously discussed.

What’s interesting here is the emerging groundswell of people and companies calling for legislative action. An industry initiative has formed and is evolving rapidly, spearheaded by Adobe in partnership with Twitter and the New York Times, called the Content Authenticity Initiative. The CAI aims to authenticate content published through its platform – check out their blog here and note that you can become a member for free. To date, it doesn’t appear that the popular AI generators we have reviewed are part of the initiative, but it is highly likely they will be at some point, so watch this space. In the meantime, Stability AI, creator of Stable Diffusion, is putting effort into listening to its community to address at least some of these issues.

Of course, much game-based machinima will immediately fall foul of such initiatives, especially if content is commercialized in some way – and that’s a whole other dimension to explore as we track the emerging issues… What of the roles of platforms owned by Amazon, Meta and Google, when so much of their content is fan-generated work? And what of those games devs and publishers who have made much hay from the distribution of creative endeavour by their fans? We’ll have to wait and see, but so far there’s been no real kick-back from the game publishers that we’ve seen. The anime community in South Korea and Japan has, however, collectively taken action against a former French game developer, 5you. The company used the work of a revered artist, Kim Jung Gi, to create an homage to his practice and aesthetic style after he died, but the community didn’t agree with the use of an AI generator to do so. You can read the article on Rest of World’s website here. Community action is of course very powerful, and voting with feet is something that strikes fear into the hearts of all industries.

Fests & Contests Update (Nov 2022)

Tracy Harwood Blog November 21, 2022

There are a growing number of ‘challenges’ that we’ve been finding over the last few months – many are opportunities to learn new tools or use assets created by studios such as MacInnes Studios. They are also incentivised with some great prizes, generally involving something offered by the contest organizer, such as the one by Kitbash3D that we link to in this post. This week we were, however, light on actual live contests to call out, but we have found someone who is always in the know: Winbush!

Mission to Minerva (deadline 2 Dec 2022)

Kitbash3D’s challenge is for you to contribute to the development of a new galaxy! On their website, they state: ‘Your mission, should you choose to accept, is to build a settlement on a planet within the galaxy. What will yours look like?’ Their ultimate aim is to outsource all the creative work to their community, combining the artwork that contest participants submit. There are online tutorials to assist, where they show you how to use Kitbash3D in Blender and Unreal Engine 5, and your work can be either concept art or animation. Entry into the contest couldn’t be simpler: you just need to share on social media (Twitter, FB, IG, ArtStation) and use the hashtag #KB3Dchallenge. Winners will be announced on 20 December and there are some great prizes, sponsored by the likes of Unreal, Nvidia, CG Spectrum, WACOM, The Gnomon Workshop, The Rookies and ArtStation (platforms). Entry details and more info here.

Pug Forest Challenge

This contest has already wrapped – but there are now a few of this type of thing emerging: challenges which give you an asset to play with for a period of time, a submission guideline process and some fabulous prizes, all geared towards incentivising you to learn a new toolset, this one being UE5! So if you need an incentive to motivate you, it’s definitely worth looking out for these. Jonathan Winbush is one of those folks whose tutorials are legendary in the UE5 community, so even if you don’t want to enter, he is someone to follow.

MacInnes Studios’ Mood Scene Challenge

John MacInnes recently announced the winners of his Mood Scene Challenge that we reported on back in August – we must say, the winners have certainly delivered some amazing moods. Check out the showreel here –