JSFilmz

Tech Update 1: AI Generators (Apr 2023)

Tracy Harwood Blog April 3, 2023

March was another astonishing month in the world of AI genies, with the release of increasingly powerful updates (GPT-4 released 14 March; Baidu released Ernie Bot on 16 March), new services and APIs. It is not surprising that by the end of the month Musk-oil was being poured over the ‘troubled waters’ – will it work now the genie is out of the bottle? It’s anyone’s guess, and certainly it seems a bit of trickery is the only way to get it back into the bottle at this stage.

Rights

More importantly, and with immediate effect, the US Copyright Office issued a statement on 16 March in relation to the IP issues that have been hot on many lips for several months now: copyright registration is about the processes of human creativity, with generative AI simply seen as a toolset under current registration guidance. Thus, for example, in the case of Zarya of the Dawn (refer to our comments in the Feb 2023 Tech Update), whilst the graphic novel contains original concepts that are attributable to the author, the images generated by AI (in Zarya’s case, MidJourney) are not copyrightable. The statement also makes it clear that each registration case will be viewed on its own merits, which is surely going to make for a growing backlog of cases in the coming months, since each case will require a detailed clarification of how generative AI was used by the human creator to help with the evaluation process.

The statement also highlights that an inquiry into copyright and generative AIs will be undertaken across agencies later in 2023, seeking public and legal input to evaluate how the law should apply to the use of copyrighted works in “AI training and the resulting treatment of outputs”. Read the full statement here. So, for now at least, the main legal framework in the US remains one of human copyright, where it will be important to keep detailed notes about how creators generated (engineered) content from AIs, as well as adapted and used the outputs, irrespective of the tools used. This will no doubt be a very interesting debate to follow, quite possibly leading to new ways of classifying AI-generated content… and, some suggest, even to the recognition of AIs as autonomous entities with rights. It is clear in the statement, for example, that the US Copyright Office recognizes that machines can create (and hallucinate).
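Keeping those notes need not be onerous. As a purely illustrative sketch (the field names here are our own invention, not anything the Copyright Office prescribes), a creator’s generation log could be as simple as a file of timestamped records:

```python
import json
from datetime import datetime, timezone

def log_generation(tool, prompt, output_file, human_edits,
                   log_path="provenance_log.jsonl"):
    """Append one record describing how a piece of AI-assisted content was made."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                # e.g. the generator and version used
        "prompt": prompt,            # the exact prompt/settings supplied
        "output_file": output_file,  # which asset this run produced
        "human_edits": human_edits,  # what the human creator changed afterwards
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

record = log_generation(
    tool="MidJourney v5",
    prompt="graphic novel panel, dawn light, heroine in doorway",
    output_file="panel_03.png",
    human_edits="cropped, recoloured, composited into page layout",
)
```

A running log like this gives you exactly the kind of per-work evidence of human creative input that a registration case would need.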

The complex issues of dataset creation and AI training processes will underpin much of the legal stances taken, and a paper released at the beginning of Feb 2023 could become one of the defining pieces of research that undermines it all. The researchers extracted near-exact copies of copyrighted images of identifiable people from a diffusion model, suggesting that such models can lead to privacy violations. See a review here and for the full paper go here.

In the meantime, more platforms used to showcase creative work are introducing tagging systems to help identify AI-generated content – #NoAI, #CreatedWithAI. Sketchfab joined the list at the end of Feb with its update here, with changes relating to its own re-use of such content through its licensing system coming into effect on 23 March.

NVisionary

Nvidia’s progressive march with AI genies needs an AI to keep up with it! Here’s my attempt to review the last month of releases relevant to the world of machinima and virtual production.

In February, we highlighted ControlNet as a means to focus on specific aspects of image generation. This month, on 8 March, Nvidia released something of an inverse, called Prismer, which takes the outline of an image and infills it. You can find the description and code on its NVlabs GitHub page here.

Alongside the portfolio of generative AI tools it has launched in recent months, and with the advent of OpenAI’s GPT-4 in March, Nvidia is expanding its tools for creating 3D content –

It is also providing an advanced means to search its already massive database of unclassified 3D objects, integrating with its previously launched Omniverse DeepSearch AI librarian –

It released its cloud-based Picasso generative AI service at GTC23 on 23 March, a means to create copyright-cleared images, videos and 3D applications. A cloud service is of course a really great idea because who can afford to keep up with graphics card prices? The focus for this is enterprise level, however, which no doubt means it’s not targeting indies at this stage – but then again, does it need to, when indies are already using DALL-E, Stable Diffusion, MidJourney, etc.? Here’s a link to the launch video and here is a link to the wait list –

Pro-seed-ural

A procedural content generator for creating alleyways has been released by Difffuse Studios in the Blender Marketplace, link here and see the video demo here –

We spotted a useful social thread that highlights how to create consistent characters in Midjourney, by Nick St Pierre, using seeds –

and you can see the result of the approach in his example of an aging girl here –
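For anyone who hasn’t tried the technique, the gist (as an illustrative sketch – the seed value and wording here are our own, not Nick’s) is to fix Midjourney’s `--seed` parameter so that repeated prompts start from the same noise, varying only the detail you want to change:

```
/imagine portrait of a red-haired girl, age 8, studio lighting --seed 1234
/imagine portrait of a red-haired girl, age 18, studio lighting --seed 1234
/imagine portrait of a red-haired girl, age 80, studio lighting --seed 1234
```

Holding the seed and the rest of the prompt constant is what keeps the character recognizably the same across images.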

Animation

JSFilmz created an interesting character animation using MidJourney v5 (which released on 17 March) with its advanced character detail features. This really shows its potential alongside animation toolsets such as Character Creator and MetaHumans –

Runway’s Gen-2 text-to-video platform launched on 20 March, with higher fidelity and consistency in the outputs than its previous version (which was actually video-to-video output). Here’s a link to the sign-up and website, which includes an outline of the workflow. Here’s the demo –

Gen-2 is also our feature image for this blog post, illustrating the stylization stage of the process, which looks great.

Wonder Dynamics launched on 9 March as a new tool for automating CG animations from characters that you upload to its cloud service, giving creators the ability to tell stories without all the technical paraphernalia (mmm?). The toolset is being heralded as a means to democratize VFX, and it is impressive to see that Aaron Sims Creative is providing some free assets to use with it – and even more so to see none other than Steven Spielberg on the Advisory Board. Here’s the demo reel, although so far we’ve not found anyone that’s given it a full trial (it’s in closed beta at the moment) and shared their overview –

Finally for this month, we close this post with Disney’s Aaron Blaise and his video response to Corridor Crew’s use of generative AI to create a ‘new’ anime workflow, which we commented on last month here. We love his open-minded response to their approach. Check out the video here –

Tech Update 2 (Jan 2023)

Tracy Harwood Blog January 16, 2023

This week, we highlight some character development tools, NeRFs, NFTs and environments for machinima and virtual production.

Characters

Beginning with the awe-inspiring toolset of Unreal Engine’s MetaHumans, the organization has released a FREE three-hour online course for beginners on real-time animating with Faceware Analyzer and Retargeter tools. Here’s a taster of what you can expect –

A creator we’ve featured a number of times (his tutorials are awesome), JSFILMZ (our feature image) has posted a taster of MetaHuman’s Live Drive from Facegood, which launched in December. The demo shows the feed going straight from camera to Unreal, but what’s amazing is the price for the head-mounted hardware: under $500! This obviously isn’t free, but it’s good value compared to some of the other facial tracking hardware on the market, and Jae compares those to give you an overview of what you get for the money. The Facegood software itself, Avatary, is free though, and it produces some impressive animations. Check out Jae’s introductory overview below, and then pick up his tutorials on each of the components he discusses on his channel –

Move.ai has launched its iPhone beta application for free markerless mocap (requires two phones). Ultimately, this isn’t going to be free to use, so make the most of the beta sign-up opportunity – the official launch takes place in March 2023, and their main target in the first instance is professional studios, which will put this out of reach for many indies. This article gives you a quick overview (by 80.lv), and this short video explainer introduces their store –

And finally, on characters this month, we highlight Inworld AI. This organization is creating interactive conversational characters that can be exported and shared across various platforms, either as avatars or as the underlying chatbot (think smart NPCs). Some of you may recall John Gaeta mentioned this in our interview with him last year; since then, Inworld has become part of the Walt Disney Company’s Accelerator Programme, been awarded an Epic MegaGrant and raised a pot of money from investors. The application of the software is vast – everything from games to marketing, as well as machinima and virtual productions too… and that’s because of how the characters can be moulded. Inworld states: ‘When crafting your character’s brain, you are able to use the Studio to tailor many elements of cognition and behavior, such as goals and motivations, manners of speech, memories and knowledge, and voice‘. Inworld released a nice tutorial in December, link below. It’s definitely one to try out –

NeRFing Around

We found a nice short on Neural Radiance Fields (aka NeRFs) by Corridor Crew, using the Luma AI app, which is truly stunning for recreating realistic anything. They highlight some of the key challenges, and present a very interesting test with a chrome ball – surely it is never going to be possible to create this kind of object with dynamic reflections and all…? Check it out here –

As Corridor Crew states, this is clearly one of the next big tech things in image capture for CGI.
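For the technically curious, the reason a NeRF can even attempt view-dependent effects like the reflections on that chrome ball is built into its formulation (this is the standard setup from the original NeRF work, not anything specific to Luma AI):

```latex
% A NeRF is a learned function F_\Theta : (\mathbf{x}, \mathbf{d}) \mapsto (\mathbf{c}, \sigma),
% where colour c depends on both the 3D position x and the viewing direction d,
% while density \sigma depends on position alone. A pixel's colour is the
% volume-rendering integral along its camera ray r(t) = o + t d:
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, \mathbf{c}(\mathbf{r}(t), \mathbf{d})\, dt,
\qquad T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds\Big)
```

Because the colour c takes the viewing direction d as an input, a captured surface can legitimately return different colours from different angles – which is why reflective objects are hard, but not impossible in principle.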

NFTs

The fluid waters of NFTs continue to muddy. This article (by NFT Now) highlights some of the recent class action lawsuits being brought against creator platforms, suggesting that the markets are being artificially inflated by celebrity endorsers – although this is surely true for so many other products too? It’s more an argument about the nature of the endorsement process and the stake the endorser has in the investment that’s seemingly the issue here. One of the main challenges is the fundamental role of community in NFTs, which is always going to mean there is a very fine line on ‘insider trading’. It’s also interesting to note that IP owners are now becoming more actively involved in this nascent space. Once again, whenever the legals get involved, everyday creatives are the losers, so whilst some of the actions highlighted are less directly relevant, the outcomes of the legal disputes ultimately will be, so we’ll keep tracking this.

Environments

Finally, we want to highlight a couple of environments for you.

Firstly, Half-Life: Alyx has a new mod, courtesy of Corey Laddo! Corey has created a mod that allows you to view the game in the role of Alyx Vance. It’s free of charge for owners of the game, and provides a 4-5 hour experience for ‘average players’. Great if you want to shoot content from a first-person perspective. You can support Corey on his Patreon account, should you want to give him something for his effort. Download the mod from Steam here. Meantime, here’s a taster for you –

Secondly, Damien shared a new sandbox environment that will be launching soon (well, we think it will, since it’s apparently been in dev since 2012), called Outerra World by Microprose. This looks amazing, and will allow you to create any kind of realistic 1:1 scale terrain simulation, which you can share and navigate using any asset that the community creates and shares too. Here’s the link to the Steam page (to add your details to the waitlist).

If you have comments or thoughts on any of the techs this week, do go ahead and comment.

Tech Update 2 (Dec 2022)

Tracy Harwood Blog December 12, 2022

This week, we share updates that will add to your repertoire of tools, tuts and libraries, along with a bit of fighting inspiration for creating machinima and virtual production.

Just the Job!

Unreal Engine has released a FREE animation course. Their ‘starter’ course includes contributions from Disney and Reel FX and is an excellent introduction to some of the basics in UE. Thoroughly recommended, even as a refresher for those of you that already have some of the basics.

Alongside the release of UE5.1, a new KitBash3D Cyber District kit has also been released, created by David Baylis. It looks pretty impressive – read about it on their blog here.

Kitbash3D Cyber District kit

Cineshare has released a tutorial on how to create a scene comprising a pedestrian environment, using Reallusion’s ActorCore, iClone and Nvidia Omniverse. The tutorial has also been featured on Reallusion Magazine’s site here.

Nvidia Omniverse has released Create 2022.3.0 in beta. Check out the updates on its developer forum here and watch the highlights on this video –

Libraries

We came across this amazing 3D scan library, unimaginatively called ScansLibrary, which includes a wide range of 3D and texture assets. It’s not free but it is relatively low cost: for example, many assets cost a single credit, and a package of 60 credits is $29 per month. Make sure you check out the terms!

example of a flower, ScansLibrary

We also found a fantastic sound library, Freesound.org. The library includes tens of thousands of audio clips, samples, recordings and bleeps, all released under CC licenses, free to use for non-commercial purposes. Sounds can be browsed by keywords, a ‘sounds like’ query and other methods. The database has been running since 2005 and is supported by its community of users and maintained by the Universitat Pompeu Fabra, Barcelona, Spain.

Freesound.org
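Freesound also exposes a web API for searching the library programmatically (you’ll need a free API token from its developer pages). As a minimal sketch, assuming the v2 text-search endpoint and a placeholder token, a query URL can be built like this:

```python
from urllib.parse import urlencode

FREESOUND_API = "https://freesound.org/apiv2/search/text/"

def build_search_url(query, token, page_size=15):
    """Build a Freesound text-search URL; `token` is your personal API key."""
    params = {"query": query, "page_size": page_size, "token": token}
    return FREESOUND_API + "?" + urlencode(params)

url = build_search_url("rain on window", token="YOUR_API_KEY")
# Fetching this URL with any HTTP client returns JSON listing matching
# sounds, along with their licenses and preview URLs.
```

Handy if you want to pull candidate ambiences into a project folder rather than browsing by hand – just check each clip’s CC license before use.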

Not really a library as such, but Altered AI is a tool that lets you change voices on your recordings, including those you make directly into the platform. It’s a cloud-based service and it’s not free, but it has a reasonably accessible pricing strategy. This is perfect if you’re an indie creator and want a bunch of voices but can’t find the actor you want! (Ricky, please close your ears to this.) The video link is a nice review by Jae Solina, JSFilmz – check it out –

Fighting Inspiration

Sifu is updating its fighting action game to allow for recording and playback, so you can essentially create your own martial arts movies. If you’re interested in creating fight scenes then this might be something to check out.

Sifu

S3 E52 Film Review: Metaverse Music Video by JSFilmz (Nov 2022)

Tracy Harwood Podcast Episodes November 9, 2022

This week’s pick is a 360 music video – a ‘metaverse’ video – by a creator we’ve been following all year, Jae Solina aka JSFilmz. The film has been created in UE5 and includes some nifty mocap, great dance moves and some interesting lighting effects. Hear what the team have to say about the film and format and let us have your comments too!



YouTube Version of this Episode

Show Notes and Links

Metaverse Music Video, released 10 Sept 2022 (note, the video can be viewed as a VR experience or a 360 video) – where is the Batman Easter Egg?!!!

Our discussion on Friedrich Kirschner’s immersive machinima, person2184, in THIS episode

Nightmare Puppeteer allows 360 filmmaking – check out the engine on Steam HERE

Key question: what new language might be needed for machinima story vs experience creators to get the most out of VR/360 formats?

Credits –

Speakers: Ricky Grove, Damien Valentine, Tracy Harwood (MIA Phil Rice, courtesy of Hurricane Ian)
Producer/Editor: Damien Valentine
Music: Scott Buckley – www.scottbuckley.com.au CC 00

Tech Update 1 (Nov 2022)

Tracy Harwood Blog October 30, 2022

Hot on the heels of our discussion on AI generators last week, we are interested to see tools already emerging that turn text prompts into 3D objects and even film content, alongside a tool for making music too. We have no fewer than five interesting updates to share here – plus a potentially very useful tool for rigging the character assets you create!

Another area of rapidly developing technological advancement is mocap, especially markerless mocap, which – let’s face it – is really the only way to think about creating naturalistic movement-based content. We share two interesting updates this week.

AI Generators

Nvidia has launched an AI tool that will generate 3D objects (see video). Called GET3D (which is derived from ‘Generate Explicit Textured 3D meshes’), the tool can generate characters and other 3D objects, as explained by Isha Salian on their blog (23 Sept). The code for the tool is currently available on Github, with instructions on how to use it here.

Google Research, together with researchers at the University of California, Berkeley, is also working on similar tools (reported in Gigazine on 30 Sept). DreamFusion uses NeRF tech to create 3D models which can be exported into 3D renderers and modeling software. You can find the tool on Github here.

DreamFusion

Meta has developed a text-to-video generator, called Make-A-Video. The tool works from a single image, or can fill in between two images to create motion. It currently generates five-second videos, which are perfect for background shots in your film. Check out the details on their website here (and sign up to their updates too). Let us know how you get on with this one!

Make-A-Video

Runway has released a Stable Diffusion-based tool that allows creators to switch out bits of images they do not like and replace them with things they do like (reported in 80.lv on 19 Oct), called Erase and Replace. There are some introductory videos available on Runway’s YouTube channel (see below for the Introduction to the tool).

And finally, also available on Github, is Mubert, a text-to-music generator. The tool uses a Deforum Stable Diffusion colab. Described as proprietary tech, its creator provides a custom license but says anything created with it cannot be released on DSPs as your own. It can be used for free, with attribution (mentioning @mubertapp and hashtag #mubert), to sync with images and videos, with an option to contact them directly if a commercial license is needed.

Character Rigging

Reallusion‘s Character Creator 4.1 has launched with built-in AccuRIG tech – this turns any static model into an animation-ready character and also comes with cross-platform support. No doubt very useful for those assets you might want to import from any AI generators you use!

Motion Capture Developments

That ever-ready multi-tool, the digital equivalent of the Swiss army knife, has come to the rescue once again: the iPhone can now be used for full-body mocap in Unreal Engine 5.1, as illustrated by Jae Solina, aka JSFilmz, in his video (below). Jae has used move.ai, which is rapidly becoming the gold standard in markerless mocap tech, and for which you can find a growing number of demo vids on YouTube showing how detailed movement can be captured. You can find move.ai tutorials on Vimeo here, and for more details about which versions of which smartphones you can use, go to their website here – it’s very impressive.

Another form of mocap is capturing the detail of the image itself. Reality Capture has launched a tool that you can use to capture yourself (or anyone else for that matter, including your best doggo buddy) and use the resulting mesh to import into Unreal’s MetaHuman. Even more impressive is that Reality Capture is free – download details from here.

We’d love to hear how you get on with any of the tools we’ve covered this week – hit the ‘talk’ button on the menu bar up top and let us know.