
S4 E97 Elden Ring: Monty Python (Oct 2023)

Tracy Harwood Podcast Episodes October 4, 2023

Kicking off Season 4 of the podcast, we review a Monty Python-inspired film that mashes up one of their greatest movies, Holy Grail, with Elden Ring.  The film was made by The Escapist in collaboration with eli_handle_b.wav and is a brilliantly edited and composited mashup.  It is also a very appropriate pick for this episode, since Monty Python were the inspiration for this podcast in the first place. We reflect in the show that we’ve now been working on this podcast for longer than the original Star Trek series ran, on top of another 20 years of collaborating before that too!

We also discuss news items: the launch of Starfield; Nexus Mods; Unity’s faux pas with its community of creators; Ricky’s attempt to install an AMD Radeon RX 7800 XT graphics card; and the Sims Machinima & Animation Convention.



YouTube Version of This Episode

Show Notes and Links

Monty Python & the Elden Ring | Multiverse by The Escapist, released 8 August 2023

The Escapist is a website by gamers, for gamers, about gamers, releasing new videos every day: http://www.escapistmagazine.com

A ‘how to’ by Guy Parsons on using AI with largely free tools, including Adobe Express, to remove the background from images and composite AI-generated content back in – https://twitter.com/GuyP/status/1704886297649631324
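For anyone who’d rather script this workflow than click through Adobe Express, here’s a minimal sketch of the same idea using the open-source rembg and Pillow libraries instead – an assumed stand-in for the tools in Guy’s thread, not a transcription of it, and the filenames are hypothetical:

```python
# A scriptable stand-in for the background-removal-and-composite workflow
# (pip install rembg pillow). Filenames are hypothetical.
from PIL import Image
from rembg import remove

# Cut the subject out of the original photo; the result keeps an alpha channel.
subject = remove(Image.open("portrait.jpg")).convert("RGBA")

# Paste the subject over an AI-generated background of the same size.
background = Image.open("ai_background.png").convert("RGBA").resize(subject.size)
composite = Image.alpha_composite(background, subject)
composite.save("composited.png")
```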

Starfield promotion –

Unity blog posts –

Original news post

Retraction post

Interview by Jason Weimann with Marc Whitten, Unity’s general manager –

Sims Machinima & Animation Convention 2023 website here and event recordings here –

Tech Update 2 (June 2023)

Tracy Harwood Blog June 12, 2023

It’s a week of mono|meta|omni-versal updates!

Mono

We’ve been following the debate on copyright, fair use and transformative use of IP for what seems like 30 years in the world of machinima (see some of our posts here, here and here) – oh, actually it’s 27 years…! On 18 May, the world was exercised a little further on the issue of transformative use when the US Supreme Court reached its decision on Andy Warhol’s use of a photograph of Prince in a magazine – a case that had been running since 2016, following Prince’s death. Many suggested this decision is the beginning of the end for transformative use – or at least that it ‘narrows the fair use doctrine’ – and will have massive detrimental impacts on all things created, such as machinima made with game engines… however, with the particular scenario fully outlined, this was probably the right outcome for this case. The scenario relates to an unattributed use of an image from a private collection of works (created and held by Warhol and his foundation), where other works in the collection involving the same creatives had previously been attributed, with the photographer recompensed when they were used in magazines, and where both Warhol and the photographer (Lynn Goldsmith) made money from selling images individually. So, this decision is about the context of use involving these individuals as much as it is about ‘fair use’ per se. Justice Sotomayor stated that the important factor in the fair-use analysis – “the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes” – pushed the decision in favour of the photographer, arguing that “licenses, for photographs or derivatives of them, are how photographers like Goldsmith make a living. They provide an economic incentive to create original works, which is the goal of copyright.” You can read the ruling in full here – or use your favorite search tool for a link to any one of the numerous news articles covering the case.

So, until the principle applied in this case is actually tested in a creator context – where income is rarely a goal of a production beyond individual recognition and perhaps the meagre YouTube revenue share for the eyeballs it receives, and where the transformation generally goes well beyond anything originally intended by, say, a game dev – it feels like there’s nothing to see here.

Meta

On 23 June, Second Life turns 20 years old! There will be virtual parties, exhibitions, product sales and more – for 20 days, of course – and you can find out more on the community website here. Happy Birthday to all the Lindens – SL was the first open-world environment to truly embrace metaversal themes.

If you want to catch up on some light reading, then it’s also worth noting that Wagner James Au’s new book, Making a Metaverse that Matters, releases a week later on 27 June. Au also regularly writes great updates for what has to be one of the longest-running metaverse blogs, New World Notes, which he founded in 2006; he was the first metaverse journalist and a marketer for SL back in 2003. Links to the book here –


Omni

Nvidia is releasing a monthly update on its blog of all things Omniverse, including the latest advancements for the OpenUSD framework that has so quickly become the gold standard for integrating a wide range of creator tools in a 3D workflow. Here’s the link to the first part of the ‘Into the Omniverse’ series (our feature image for this post), which includes an overview of an update to the connector for Adobe Substance 3D Painter; Substance 3D Painter releases its latest version in mid-June. This series is a must-follow for all content creators, whether or not you own an RTX!
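If you’re curious what OpenUSD actually looks like from a creator-tool point of view, here’s a minimal sketch using the pxr Python bindings (pip install usd-core); the prim paths and filename are our own invented examples. The .usda file it writes is exactly the kind of interchange asset Omniverse connectors pass between tools:

```python
# Minimal OpenUSD example: build a tiny scene and save it as .usda.
# Requires the usd-core package (pip install usd-core); names are illustrative.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("prop_test.usda")
UsdGeom.Xform.Define(stage, "/World")              # a root transform
cube = UsdGeom.Cube.Define(stage, "/World/Crate")  # a placeholder prop
cube.GetSizeAttr().Set(2.0)                        # edge length in scene units
UsdGeom.XformCommonAPI(cube).SetTranslate((0.0, 1.0, 0.0))
stage.GetRootLayer().Save()                        # writes prop_test.usda
```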

-Versal

For those seeking advice on devising a virtual production pipeline, Unreal Engine has helpfully released a visualisation guide here and a nice vid here –

Unreal Engine released version 5.2 on 11 May, which includes some fab new features: a preview of its still-in-development Procedural Content Generation framework, enabling creators to populate large scenes more efficiently; Substrate, which supports a greater range of surface appearances, such as the opalescent finish showcased in this vid –

an enhanced set of virtual production tools for real-time filmmaking; an enhanced VCam system for multi-camera control; and extended nDisplay support, which sets the scene for the next version, 5.3. A link to the release notes is here.

We also spotted a useful tool in the UE Marketplace, albeit pricey at $249 for indies: MetaShoot by VINZI – Code Plugins. It includes lighting and render presets to help you create sophisticated lighting setups in your VP studio; link here.

Also super helpful is Kitbash3D’s new Cargo asset browser, which includes some 10,000 searchable assets. The basic account, which is free, allows you to one-click upload content to your project and manage the assets you have, but for a fee of $65/month the pro version lets you search and access the full model and media library. It’s another layer of cost, so do check out the small print.

Tech Update 1: AI Generators (Dec 2022)

Tracy Harwood Blog December 5, 2022

Everything with AI has grown exponentially this year, and this week we show you AI for animation using different techniques, as well as AR, VR and voice cloning. It is astonishing that some of these tools are already part of our creative toolset, as illustrated in our highlighted projects by GUNSHIP and Fabien Stelzer. Of course, any new toolset comes with its discontents, so we also cover some of those we’ve picked up on this past month. It is certainly fair to say there are many challenges with this emergent creative practice, but these appear to be being thought through alongside the developing applications by those using them… although, of course, legislation is still a long way off.

Animation

Stability AI, the company behind text-to-image generator Stable Diffusion, raised $100M in October this year and is about to release its animation API. On 15 November it released DreamStudio, the first API on its web platform of future AI-based apps, and on 24 November it released Stable Diffusion 2.0. The animation API, DreamStudio Pro, will be a node-based animation suite enabling anyone to create videos, complete with music, quickly and easily. It includes storyboarding and is compatible with a whole range of creative toolsets such as Blender, potentially making it a new part of the filmmaking workflow, bringing imagination closer to reality without the pain – or so it claims. We’ll see about that shortly, no doubt. And btw, 2.0 has higher-resolution upscaling options, more filters on adult content, increased depth information that can be more easily transformed into 3D, and text-guided in-painting, which helps to switch out parts of an image more quickly. You can catch up with the announcements on Robert Scoble’s YouTube channel here –
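DreamStudio Pro wasn’t public at the time of writing, but the text-to-image API underneath it follows a simple request/response pattern. Below is a minimal sketch against Stability’s v1 REST endpoint; the engine id, prompt and filenames are our assumptions for illustration, and you’d need your own API key in the STABILITY_KEY environment variable:

```python
# Sketch of a text-to-image request to Stability AI's v1 REST API.
# The engine id and prompt are assumptions for illustration.
import base64
import os

import requests

resp = requests.post(
    "https://api.stability.ai/v1/generation/stable-diffusion-512-v2-0/text-to-image",
    headers={
        "Authorization": f"Bearer {os.environ['STABILITY_KEY']}",
        "Accept": "application/json",
    },
    json={
        "text_prompts": [{"text": "a ruined castle at dawn, concept art"}],
        "width": 512,
        "height": 512,
        "samples": 1,
    },
    timeout=120,
)
resp.raise_for_status()

# Each returned artifact is a base64-encoded PNG.
for i, artifact in enumerate(resp.json()["artifacts"]):
    with open(f"gen_{i}.png", "wb") as out:
        out.write(base64.b64decode(artifact["base64"]))
```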

As if that isn’t amazing enough, Google is creating another method for animating photographs – think image-to-video – called Google AI FLY. Its approach makes use of pre-existing methods of in-painting, out-painting and super-resolution to animate a single photo, creating an effect similar to a NeRF (neural radiance field) but without the requirement for many images. Check out this ‘how it’s done’ review by Károly Zsolnai-Fehér on the Two Minute Papers channel –

For more information, this article on PetaPixel’s site is worth a read too.

And finally this week, EbSynth by Secret Weapons is an interesting approach that uses a video and a painted keyframe to create a new video resembling the aesthetic style of the painted frame. It is a type of generative style transfer with an animated output that could previously only really be achieved in post-production, but this is soooo much simpler to do and looks pretty impressive. There is a review of the technique on 80.lv’s website here and an overview by its creators on their YouTube channel here –

We’d love to see anyone’s examples of outputs with these different animation tools, so get in touch if you’d like to share them!

AR & VR

For those of you into AR, AI enthusiast Bjorn Karmann also demonstrated how Stable Diffusion’s in-painting feature can be used to create new experiences – check this out on his Twitter feed here –

For those of you into 360 and VR, Stephen Coorlas has used MidJourney to create some neat spherical images. Here is his tutorial on the approach –

Also Ran?

Almost late to the AI generator party (mmm….), Baidu has released ERNIE-ViLG 2.0, a Chinese text-to-image AI which Alan Thompson claims is even better than DALL-E and Stable Diffusion, albeit using a much smaller model. Check out his review, which certainly looks impressive –

Voice

Nvidia has done it again – its amazing Riva AI clones a voice using just 30 minutes of voice samples. The anticipated application is conversational virtual assistants, including multi-lingual ones, and it’s already been touted as a frontrunner alongside Alexa, Meta and Google – but in terms of virtual production and creative content, it could also be used to stand in for actors when, say, they are double-booked or poorly. So, make sure you get that covered in your voice-acting contract in future too.
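For a sense of what that looks like in a pipeline, here’s a minimal sketch of requesting speech from a Riva server using Nvidia’s nvidia-riva-client Python package; the server address and voice name are placeholders (a cloned voice would be one you’ve trained and deployed yourself):

```python
# Sketch of text-to-speech against a Riva server (pip install nvidia-riva-client).
# Assumes a Riva server at localhost:50051; the voice name is a placeholder.
import wave

import riva.client

auth = riva.client.Auth(uri="localhost:50051")
tts = riva.client.SpeechSynthesisService(auth)

resp = tts.synthesize(
    text="Stand by, we go again from the top of the scene.",
    voice_name="English-US.Female-1",   # placeholder; a cloned voice would go here
    language_code="en-US",
    sample_rate_hz=44100,
)

# resp.audio is raw 16-bit mono PCM; wrap it in a WAV container.
with wave.open("line_001.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(44100)
    f.writeframes(resp.audio)
```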

Projects

We found a couple of beautiful projects that push the boundaries this month. Firstly, GUNSHIP’s music video is a great example of how this technology can be applied to enhance creative work. Their video focusses on the aesthetics of cybernetics (and is our headline image for this article). Nice!

Secondly, an audience-participation film by Fabien Stelzer is being released on Twitter. The project uses AI generators for image, voice and scriptwriting. After each episode is released, viewers vote on what should happen next, which the creator then integrates into the subsequent episode of the story. The series is called Salt, and its aesthetic style is intended to evoke 1970s sci-fi. You can read about his approach on the CNN Business website and be a part of the project here –

Emerging Issues

Last month we considered the disruption that AI generators are causing in the art world, and this month it’s the film industry’s turn. Just maybe we are seeing an end to Hollywood’s fetish for Marvellizing everything – or perhaps AI generators will result in extended stories with the same old visual aesthetic, out-painted and stylized… which is highly likely, since AI has to be trained on pre-existing images, text and audio. In this article, Pinar Seyhan Demirdag offers some thoughts on what might happen, but our experience with the emergence of machinima and its transmogrification into virtual production (and vice versa) teaches us that anything which cuts a few corners will ultimately become part of the process. In this case, AI can be used to supplement everything from concept development to storyboarding, animation and visual effects. If that results in new ideas, then all well and good.

When those new ideas get integrated into the workflow using AI generators, however, there is clearly potential for some to be less happy. This is illustrated by Greg Rutkowski, a Polish digital artist whose aesthetic style of ethereal fantasy landscapes is a popular inclusion in text-to-image prompts. According to this article in MIT Technology Review, Rutkowski’s name has appeared on more than 10M images and has been used as a prompt more than 93,000 times in Stable Diffusion alone – and it appears that this is because the data on which the AI has been trained includes ArtStation, one of the main platforms used by concept artists to share their portfolios. Needless to say, the work is being scraped without attribution – as we have previously discussed.

What’s interesting here is the emerging groundswell of people and companies calling for legislative action. An industry initiative called the Content Authenticity Initiative (CAI), spearheaded by Adobe in partnership with Twitter and the New York Times, has formed and is evolving rapidly. CAI aims to authenticate content and operates a publishing platform – check out their blog here, and note you can become a member for free. To date, the popular AI generators we have reviewed don’t appear to be part of the initiative, but it is highly likely they will be at some point, so watch this space. In the meantime, Stability AI, creator of Stable Diffusion, is putting effort into listening to its community to address at least some of these issues.

Of course, much game-based machinima will immediately fall foul of such initiatives, especially if content is commercialized in some way – and that’s a whole other dimension to explore as we track the emerging issues… What of the roles of platforms owned by Amazon, Meta and Google, when so much of their content is fan-generated work? And what of those game devs and publishers who have made much hay from the distribution of creative endeavour by their fans? We’ll have to wait and see, but so far there’s been no real kick-back from the game publishers that we’ve seen. The anime community in South Korea and Japan has, however, collectively taken action against a former French game developer, 5you, who used the work of a favoured artist, Kim Jung Gi, to create an homage to his practice and aesthetic style after he had died; the community didn’t agree with the use of an AI generator to do that. You can read the article on Rest of World’s website here. Community action is of course very powerful, and voting with feet is something that invokes fear in the hearts of all industries.

Completely Machinima S2 Ep 30 News & Discussion (February 2022)

Tracy Harwood Podcast Episodes February 3, 2022

In this episode, Tracy, Ricky, Phil and Damien cover the relevance of Nvidia’s special address at CES for machinima creators, Adobe’s Project Shasta, Kerbal Space Program, the uptake in VR kit over the Christmas period, growth in machinima, NFTs, Philip Rosedale’s return to the Second Life fold, the final nail in Rooster Teeth’s RVB saga, Minecraft’s and Rockstar’s astonishing achievements, and Ben Grussi’s history episodes, and discuss two great questions posed by our followers: what’s the difference between machinima and animation, and what’s our advice for adapting prose to visual media formats?



YouTube version of podcast

Show Links

1:10 Nvidia’s special address at CES – points relevant for machinima creators, e.g. Omniverse, AI

12:55 RDR2 images in the news!

13:40 Austin Film Festival

14:24 Adobe Project Shasta for audio recording

14:56 Kerbal Space Program 2’s impending launch

Kerbal Space Program 2 screencap

16:25 Machinima growth observations

17:28 VR growth observations

19:46 NFTs observations – Peter Molyneux and John Gaeta

23:57 Philip Rosedale and the future of Second Life for creators

40:30 Halo Xbox 360 multiplayer servers close – the end of the story for Rooster Teeth’s RVB series?

42:50 Ben Grussi’s history of machinima episodes of the Completely Machinima podcast

44:34 Matthew Loris/Zeke: discussion of the differences between machinima and animation; Completely Machinima interview with Mr Anymation, Tom Jantol

1:00:48 Rockstar’s lawsuit against a modding group

1:02:21 Minecraft’s astonishing video reach

1:03:43 Pandora’s 3D Films: preliminary comments on adapting prose to visual media formats