
Tech Update 2 (Dec 2022)

Tracy Harwood Blog December 12, 2022

This week, we share updates that will add to your repertoire of tools, tuts and libraries, along with a bit of fighting inspiration for creating machinima and virtual production.

Just the Job!

Unreal Engine has released a FREE animation course. The ‘starter’ course includes contributions from Disney and Reel FX and is an excellent introduction to some of the basics of UE. Thoroughly recommended, even as a refresher for those of you who already know some of the basics.

Alongside the release of UE5.1, a new KitBash3D Cyber District kit has also been released, created by David Baylis. It looks pretty impressive – read about it on their blog here.

Kitbash3D Cyber District kit

Cineshare has released a tutorial on how to create a scene that comprises a pedestrian environment, using Reallusion’s ActorCore, iClone and Nvidia Omniverse. The tutorial has also been featured on Reallusion Magazine’s site here.

Nvidia Omniverse has released Create 2022.3.0 in beta. Check out the updates on its developer forum here and watch the highlights on this video –

Libraries

We came across this amazing 3D scan library, unimaginatively called ScansLibrary, which includes a wide range of 3D and texture assets. It’s not free but relatively low cost: many assets cost a single credit, and a 60-credit package is $29 per month. Make sure you check out the terms!
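To put that pricing in perspective, here is a quick back-of-the-envelope calculation. This is just a sketch using the figures quoted above (60 credits for $29/month, many assets costing a single credit); check ScansLibrary’s own pricing page for the real terms.

```python
def cost_per_asset(monthly_price: float, credits: int, credits_per_asset: int = 1) -> float:
    """Effective dollar cost of one asset on a credit subscription."""
    return monthly_price / credits * credits_per_asset

# 60 credits for $29/month, with most assets costing a single credit:
print(round(cost_per_asset(29.0, 60), 2))  # about $0.48 per asset
```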

example of a flower, ScansLibrary

We also found a fantastic sound library, Freesound.org. The library includes tens of thousands of audio clips, samples, recordings and bleeps, all released under CC licenses and free to use for non-commercial purposes. Sounds can be browsed by keywords, a ‘sounds like’ query and other methods. The database has been running since 2005 and is supported by its community of users and maintained by the Universitat Pompeu Fabra, Barcelona, Spain.
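Freesound also exposes a public REST API, so you can script keyword searches rather than browsing by hand. The sketch below builds a text-search URL; the endpoint and parameter names reflect Freesound’s documented apiv2 text search, but treat the details as assumptions and check the current API docs (`YOUR_API_KEY` is a placeholder for a key you request from the site).

```python
from urllib.parse import urlencode

def freesound_search_url(query: str, token: str = "YOUR_API_KEY") -> str:
    """Build a Freesound apiv2 text-search URL for the given keywords."""
    params = urlencode({"query": query, "token": token})
    return f"https://freesound.org/apiv2/search/text/?{params}"

print(freesound_search_url("rain ambience"))
```

Fetching that URL (with a real key) returns a JSON page of matching sounds, each with its CC license noted.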

Freesound.org

Not really a library as such, but Altered AI is a tool that lets you change voices on your recordings, including those you record directly into the platform. It’s a cloud-based service and it’s not free, but it has a reasonably accessible pricing strategy. This is perfect if you’re an indie creator and want a bunch of voices but can’t find the actor you want! (Ricky, please close your ears to this.) The video link is a nice review by Jae Solina, JSFilmz – check it out –

Fighting Inspiration

Sifu, the fighting action game, is being updated to allow for recording and playback, so you can essentially create your own martial arts movies. If you’re interested in creating fight scenes, this might be something to check out.

Sifu

Tech Update 1: AI Generators (Dec 2022)

Tracy Harwood Blog December 5, 2022

Everything AI has grown exponentially this year, and this week we show you AI for animation using different techniques, as well as AR, VR and voice cloning. It is astonishing that some of these tools are already part of our creative toolset, as illustrated in our highlighted projects by GUNSHIP and Fabien Stelzer. Of course, any new toolset comes with its discontents, and so we also cover some of those we’ve picked up on this past month. It is certainly fair to say there are many challenges with this emergent creative practice, but they appear to be being thought through alongside the developing applications by those using them… although, of course, legislation is still a long way off.

Animation

Text-to-image generator Stable Diffusion raised $100M in October this year and is about to release its animation API. On 15 November it released DreamStudio, the first API on its web platform of future AI-based apps, and on 24 November it released Stable Diffusion 2.0. The animation API, DreamStudio Pro, will be a node-based animation suite enabling anyone to create videos, including with music, quickly and easily. It includes storyboarding and is compatible with a whole range of creative toolsets such as Blender, potentially making it a new part of the filmmaking workflow, bringing imagination closer to reality without the pain – or so it claims. We’ll see about that shortly, no doubt. And btw, 2.0 has higher-resolution upscaling options, more filters on adult content, increased depth information that can be more easily transformed into 3D, and text-guided in-painting that helps to switch out parts of an image more quickly. You can catch up with the announcements on Robert Scoble’s YouTube channel here –

As if that isn’t amazing enough, Google is creating another method for animating photographs (think image-to-video), called Google AI FLY. Its approach makes use of pre-existing methods of in-painting, out-painting and super-resolution to animate a single photo, creating a similar effect to NeRF (photogrammetry) but without the requirement for many images. Check out this ‘how it’s done’ review by Károly Zsolnai-Fehér on the Two Minute Papers channel –

For more information, this article on PetaPixel.com is worth a read too.

And finally this week, Ebsynth by Secret Weapons is an interesting approach that uses a video and a painted keyframe to create a new video resembling the aesthetic style of the painted frame. It is a type of generative style transfer with an animated output that could previously only really be achieved in post-production, but this is so much simpler to do and it looks pretty impressive. There is a review of the technique on 80.lv’s website here and an overview by its creators on their YouTube channel here –

We’d love to see anyone’s examples of outputs with these different animation tools, so get in touch if you’d like to share them!

AR & VR

For those of you into AR, AI enthusiast Bjorn Karmann also demonstrated how Stable Diffusion’s in-painting feature can be used to create new experiences – check this out on his Twitter feed here –

For those of you into 360 and VR, Stephen Coorlas has used MidJourney to create some neat spherical images. Here is his tutorial on the approach –

Also Ran?

Almost late to the AI generator party (mmm…), Baidu has released ERNIE-ViLG 2.0, a Chinese text-to-image AI which Alan Thompson claims is even better than DALL-E and Stable Diffusion, albeit using a much smaller model. Check out his review, which certainly looks impressive –

Voice

Nvidia has done it again – their amazing Riva AI clones a voice using just 30 minutes of voice samples. The anticipated application is conversational virtual assistants, including multi-lingual ones, and it’s already been touted as a frontrunner alongside Alexa, Meta and Google. In terms of virtual production and creative content, though, it could also be used to replace voice actors when, say, they are double-booked or unwell. So make sure you get that covered in your voice-acting contract in future too.

Projects

We found a couple of beautiful projects that push the boundaries this month. First, GUNSHIP’s music video is a great example of how this technology can be applied to enhance creative work. The video focuses on the aesthetics of cybernetics (and is our headline image for this article). Nice!

Second, an audience-participation film by Fabien Stelzer is being released on Twitter. The project uses AI generators for image, voice and scriptwriting. After each episode is released, viewers vote on what should happen next, which the creator then integrates into the subsequent episode of the story. The series is called Salt and its aesthetic style is intended to evoke 1970s sci-fi. You can read about his approach on the CNN Business website and be a part of the project here –

Emerging Issues

Last month we considered the disruption that AI generators are causing in the art world, and this month it’s the film industry’s turn. Just maybe we are seeing an end to Hollywood’s fetish for Marvellizing everything, or perhaps AI generators will simply result in extended stories with the same old visual aesthetic, out-painted and stylized… which is highly likely, since AI has to be trained on pre-existing images, text and audio. In this article, Pinar Seyhan Demirdag gives us some thoughts about what might happen, but our experience with the emergence of machinima and its transmogrification into virtual production (and vice versa) teaches us that anything which cuts a few corners will ultimately become part of the process. In this case, AI can be used to supplement everything from concept development to storyboarding, animation and visual effects. If that results in new ideas, then all well and good.

When those new ideas get integrated into the workflow using AI generators, however, there is clearly potential for some to be less happy. This is illustrated by Greg Rutkowski, a Polish digital artist whose aesthetic style of ethereal fantasy landscapes is a popular inclusion in text-to-image prompts. According to this article in MIT Technology Review, Rutkowski’s name has appeared on more than 10M images and been used as a prompt more than 93,000 times in Stable Diffusion alone – and it appears this is because the data on which the AI has been trained includes ArtStation, one of the main platforms used by concept artists to share their portfolios. Needless to say, the work is being scraped without attribution, as we have previously discussed.

What’s interesting here is the emerging groundswell of people and companies calling for legislative action. An industry initiative, the Content Authenticity Initiative, has formed and is evolving rapidly, spearheaded by Adobe in partnership with Twitter and the New York Times. The CAI is a publishing platform that aims to authenticate content – check out their blog here, and note you can become a member for free. To date, the popular AI generators we have reviewed do not appear to be part of the initiative, but it is highly likely they will be at some point, so watch this space. In the meantime, Stability AI, creator of Stable Diffusion, is putting effort into listening to its community to address at least some of these issues.

Of course, much game-based machinima will immediately fall foul of such initiatives, especially if content is commercialized in some way – and that’s a whole other dimension to explore as we track the emerging issues… What of the roles of platforms owned by Amazon, Meta and Google, when so much of their content is fan-generated work? And what of those games devs and publishers who have made much hay from the distribution of creative endeavour by their fans? We’ll have to wait and see, but so far there’s been no real kick-back from the game publishers that we’ve seen. The anime community in South Korea and Japan has, however, collectively taken action against a former French game developer, 5you. The company used the work of the artist Kim Jung Gi to create an homage to his practice and aesthetic style after his death, but the community didn’t agree with the use of an AI generator to do it. You can read the article on Rest of World’s website here. Community action is of course very powerful, and voting with feet is something that invokes fear in the hearts of all industries.

Fests & Contests Update (Nov 2022)

Tracy Harwood Blog November 21, 2022

There are a growing number of ‘challenges’ that we’ve been finding over the last few months – many are opportunities to learn new tools or use assets created by studios such as MacInnes Studios. They are also incentivised with some great prizes, generally involving something offered by the contest organizer, such as the Kitbash3D challenge we link to in this post. This week we were light on actual live contests to call out, but we have found someone who is always in the know: Winbush!

Mission to Minerva (deadline 2 Dec 2022)

Kitbash3D’s challenge is for you to contribute to the development of a new galaxy! On their website, they state: ‘Your mission, should you choose to accept, is to build a settlement on a planet within the galaxy. What will yours look like?’ Their ultimate aim is to outsource the creative work to their community, combining the artworks contest participants submit. There are online tutorials to assist, showing you how to use Kitbash3D assets in Blender and Unreal Engine 5, and your work can be either concept art or animation. Entry couldn’t be simpler: just share your work on social media (Twitter, FB, IG, ArtStation) and use the hashtag #KB3Dchallenge. Winners will be announced on 20 December and there are some great prizes, sponsored by the likes of Unreal, Nvidia, CG Spectrum, WACOM, The Gnomon Workshop, The Rookies and ArtStation. Entry details and more info here.

Pug Forest Challenge

This contest has already wrapped – but a few challenges of this type are now emerging: they give you an asset to play with for a period of time, a submission guideline process, and some fabulous prizes, all geared towards incentivising you to learn a new toolset, in this case UE5! So if you need the motivation, it’s definitely worth looking out for these. Jonathan Winbush is also one of those folks whose tutorials are legendary in the UE5 community, so even if you don’t want to enter, this is someone to follow.

MacInnes Studios’ Mood Scene Challenge

John MacInnes recently announced the winners of the Mood Scene challenge contest that we reported on back in August – we must say, the winners have certainly delivered some amazing moods. Check out the show reel here –

Tech Update 2 (Nov 2022)

Tracy Harwood Blog November 7, 2022

This week, we take a look at some potentially useful tech platforms, starting with an inspired new service from Nvidia, then a new service and mod hub for The Sims 4, followed by some interesting distribution options linked to blockchain tech and another for festivals and events.

Cloud Services for Artists

With the ongoing challenge of accessing the kit needed to run many of the new render tools we’ve reviewed on the show over the months we’ve been running, it’s interesting to see that Nvidia is now launching Omniverse Cloud services. Ostensibly, the service is aimed at powering future ‘metaverse’ applications and digital twin-type projects, but clearly it’s also a very good way for content creators to finally access contemporary tools without the hassle of continually updating their hardware – or indeed ever worrying about acquiring the latest desirable RTX card! You can find out more about the services here – and we’d love to hear from anyone using them about their experiences.

Nvidia Omniverse Cloud Nucleus

Anyone for Sims?

The Sims 4 is now FREE to play (announced 18 Oct 2022), although we note that specific content packs will remain paid-for. No doubt Phil will be peeved, since we all advised him to go for Unreal as a creative option when he switched his attention from RDR2 last year! Their glitzy Summit vid is clearly pitching against the Fortnite user, but with an entirely different heritage and a more adult trajectory. They are even partnering with a new content creator curation platform, a mod hub hosted by Overwolf (coming soon).

Distribution Options

With rapid progression towards Web3, and the growing demand for 3D content to fill the platforms and sites people create, Josephyine If has usefully created a spreadsheet that you can access here. The XLS file lists platforms where film and video content can be shared, including their creators and website addresses (at the time of writing, some 18 different platforms such as Hyphenova, MContent – see video below – and Eluv.io). The main point of these platforms, at least at this stage, is to manage the IP of content, so the emphasis is on how to share blockchain-marked film. This is probably one of the most interesting benefits Web3 offers content creators: the ability to sell, track and manage content over time, something that’s been a major flaw of the YouTube platform ever since it evolved into an ad-revenue-driven distribution model. If you find any of the platforms particularly useful (or not), or know of others not on the list, do drop us a line and let us know.

We also found a potentially interesting distribution platform aimed primarily at festivals and events, called VisualContainerTV. The platform, launched in 2009, makes content available for free and therefore competes directly with the likes of YouTube (which frankly it can’t easily do), but more importantly, it can make content accessible behind a paywall. This means artists, creators and curators can receive payment for ticketed content shown over the platform and have that content branded and associated with particular curated events. At this stage of its development, it appears to be primarily targeting college students and courses based in Europe (the platform was developed in Italy), but it is certainly something that looks interesting for small-scale user groups. There are some very interesting arts projects on the site, so if nothing else, add it to your streaming platforms folder to check periodically for interesting new works.

VisualContainerTV

Tech Update 1 (Nov 2022)

Tracy Harwood Blog October 30, 2022

Hot on the heels of our discussion of AI generators last week, we are interested to see tools already emerging that turn text prompts into 3D objects and even film content, alongside a tool for making music too. We have no fewer than five interesting updates to share here – plus a potentially very useful tool for rigging the character assets you create!

Another area of rapidly developing technological advancement is mo-cap, especially markerless capture, which, let’s face it, is really the only way to think about creating naturalistic movement-based content. We share two interesting updates this week.

AI Generators

Nvidia has launched an AI tool that generates 3D objects (see video). Called GET3D (derived from ‘Generate Explicit Textured 3D meshes’), the tool can generate characters and other 3D objects, as explained by Isha Salian on their blog (23 Sept). The code for the tool is currently available on GitHub, with instructions on how to use it here.

Google Research, together with researchers at the University of California, Berkeley, is also working on similar tools (reported in Gigazine on 30 Sept). DreamFusion uses NeRF tech to create 3D models which can be exported into 3D renderers and modeling software. You can find the tool on GitHub here.

DreamFusion

Meta has developed a text-to-video generator called Make-A-Video. The tool animates a single image, or can fill in between two images to create motion. It currently generates five-second videos, which are perfect for background shots in your film. Check out the details on their website here (and sign up for their updates too). Let us know how you get on with this one too!

Make-A-Video

Runway has released a Stable Diffusion-based tool called Erase and Replace that allows creators to switch out the bits of an image they don’t like and replace them with things they do (reported in 80.lv on 19 Oct). There are some introductory videos available on Runway’s YouTube channel (see below for the introduction to the tool).
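Under the hood, this kind of erase-and-replace workflow is mask-based inpainting: the region you ‘erase’ becomes a binary mask, and the model regenerates only those pixels, guided by your text prompt. Here is a toy, model-free sketch of building such a mask; the function name and the white-means-replace convention are illustrative, not Runway’s actual API.

```python
def make_mask(width: int, height: int, box: tuple) -> list:
    """Build a binary inpainting mask: 1 marks pixels to regenerate
    (the 'erased' region), 0 marks pixels to keep untouched."""
    x0, y0, x1, y1 = box
    return [
        [1 if x0 <= x < x1 and y0 <= y < y1 else 0 for x in range(width)]
        for y in range(height)
    ]

# Flag a 4x4 region of an 8x8 image for replacement:
mask = make_mask(8, 8, (2, 2, 6, 6))
print(sum(sum(row) for row in mask))  # 16 pixels flagged
```

An inpainting model then receives the original image plus this mask and synthesizes new content only where the mask is set.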

And finally, also available on GitHub, is Mubert, a text-to-music generator. The tool uses a Deforum Stable Diffusion colab. Described as proprietary tech, its creator provides a custom license but says anything created with it cannot be released on DSPs as your own. It can be used for free, with attribution, to sync with images and videos, mentioning @mubertapp and the hashtag #mubert, with an option to contact them directly if a commercial license is needed.
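Since the free tier hinges on that attribution, it is worth baking the credit line into your publishing workflow so it never gets forgotten. A minimal sketch (the helper and caption wording are our own; only the @mubertapp mention and #mubert hashtag come from Mubert’s stated terms):

```python
def mubert_credit(description: str) -> str:
    """Append the attribution Mubert's free tier asks for to a video caption."""
    return f"{description} | Music generated via @mubertapp #mubert"

print(mubert_credit("Night drive b-roll"))
```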

Character Rigging

Reallusion‘s Character Creator 4.1 has launched with built-in AccuRIG tech – this turns any static model into an animation-ready character and comes with cross-platform support. No doubt very useful for those assets you might want to import from any AI generators you use!

Motion Capture Developments

That ever-ready multi-tool, the digital equivalent of the Swiss army knife, has come to the rescue once again: the iPhone can now be used for full-body mocap in Unreal Engine 5.1, as illustrated by Jae Solina, aka JSFilmz, in his video (below). Jae used move.ai, which is rapidly becoming the gold standard in markerless mocap tech, and you can find a growing number of demo vids on YouTube showing just how detailed the captured movement can be. You can find move.ai tutorials on Vimeo here, and for details about which versions of which smartphones you can use, go to their website here – it’s very impressive.

Another form of capture focuses on the detail of the image itself. Reality Capture has launched a tool that you can use to capture yourself (or anyone else for that matter, including your best doggo buddy) and import the resulting mesh into Unreal’s MetaHuman. Even more impressive is that Reality Capture is free – download details here.

We’d love to hear how you get on with any of the tools we’ve covered this week – hit the ‘talk’ button on the menu bar up top and let us know.