
Tech Update 2 (Feb 2023)

Tracy Harwood Blog February 13, 2023

This week, we highlight some time-saving examples for generating 3D models using – you guessed it – AIs, and we also take a look at some recent developments in motion tracking for creators.

3D Modelling

All these examples highlight that generating a 3D model isn’t the end of the process and that once it’s in Blender, or another animation toolset, there’s definitely more work to do. These add-ons are intended to help you reach your end result more quickly, cutting out some of the more tedious aspects of the creative process using AIs.

Blender is one of those amazing animation tools that has a very active community of users, and of course, a whole heap of folks looking for quick ways to solve challenges in their creative pipeline. We found folks who have integrated OpenAI’s ChatGPT into the toolset by developing add-ons. Check out this illustration by Olav3D, whose comment about using ChatGPT to write Python scripts sums it up nicely, “better than search alone” –
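To give a flavour of the kind of script ChatGPT typically produces for this workflow, here’s a sketch of our own (an invented grid-of-cubes task, not Olav3D’s example): the layout maths is plain Python, and the Blender `bpy` calls only run when the script is executed inside Blender –

```python
# Sketch of a ChatGPT-style Blender script: scatter cubes on a grid.
# The grid maths is plain Python; bpy calls only run inside Blender.

def grid_positions(rows, cols, spacing=2.0):
    """Return (x, y, z) centres for a rows x cols grid of cubes."""
    return [(c * spacing, r * spacing, 0.0)
            for r in range(rows) for c in range(cols)]

def build_grid(rows=3, cols=3, spacing=2.0):
    positions = grid_positions(rows, cols, spacing)
    try:
        import bpy  # only available inside Blender
    except ImportError:
        return positions  # dry run outside Blender
    for pos in positions:
        bpy.ops.mesh.primitive_cube_add(size=1.0, location=pos)
    return positions
```

Paste something like this into Blender’s Scripting workspace and run it; outside Blender it simply returns the computed positions, which makes the logic easy to check before committing to a scene.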

Dreamtextures by Carson Katri is a Blender add-on using Stable Diffusion which is so clever that it even projects textures onto 3D models (with our thanks to Krad Productions for sharing this one). In this video, Default Cube talks about how to get results with as few glitches as possible –

and this short tells you how to integrate Dreamtextures into Blender, by Vertex Rage –

To check out Dreamtextures for yourself, you can find Katri’s application on Github here and, should you wish to support his work, subscribe to his Patreon channel here too.

OpenAI also launched its Point-E 3D model generator this month, the output of which can be imported into Blender. As CGMatter has highlighted, using the published APIs means a very long time sitting in queues waiting for access, whilst downloading the code and running it yourself is easy – and once you have it, you can create point-cloud models in seconds. He runs the code from Google’s Colab, which means you can run it in the cloud rather than on your own machine. Here’s his tutorial on how to use Point-E without the wait, giving you access to your own version of the code (on Github) in Colab –
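Once you have a point cloud out of Point-E, getting it into Blender is mostly a file-format exercise. Here’s a hedged sketch (our own helper, not part of Point-E) that writes an ASCII PLY file, which Blender can read via File > Import > Stanford (.ply) –

```python
# Write a coloured point cloud to ASCII PLY, importable by Blender.
# points: list of (x, y, z) floats; colours: list of (r, g, b) 0-255 ints.

def write_ply(path, points, colours):
    assert len(points) == len(colours)
    header = "\n".join([
        "ply", "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for (x, y, z), (r, g, b) in zip(points, colours):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
```

Point-E’s point clouds carry per-point colour, so a simple exporter like this preserves them for material setup in Blender.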

We also found another very interesting Blender add-on – this one lets you import models from Google Maps into the toolset. The video is a little old, but the latest update of the mod on Github, version 0.6.0 (for RenderDoc 1.25 and Blender 3.4), has just been released by its creator, Elie Michel –

We were also interested to see NVIDIA’s update at CES (in January). It announced a release of the Omniverse Launcher that supports 3D animation in Blender, with generative AIs that enhance characters’ movement and gestures, a future update to Canvas that includes 360° surround images for panoramic environments, and also an AI ToyBox that enables you to create 3D meshes from 2D inputs. Ostensibly, these tools are for creators developing work for metaverse and web3 applications, but we already know NVIDIA’s USD-based tools are incredibly powerful for supporting collaborative workflows, including machinima and virtual production. Check out the update here, and this is a nice little promo video that sums up the integrated collaborative capabilities –

Tracking

As fast as the 3D modelling scene is developing, so is motion tracking. Move.ai, which launched late last year, announced its pricing this month: $365 for 12 months of unlimited processing of recordings. This is markerless mocap at its very best, although not so much if you want to do live mocap (no pricing announced for that yet). Move.ai (our feature image for this article) lets you record content using mobile phones (a couple of old iPhones will do). You can find out more on its new website here, and here’s a fun taster called Gorillas in the Mist, made with ballet dancers and 4 iPhones, released in December by the Move.ai team –
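Under the hood, markerless systems like Move.ai solve a triangulation problem: each phone sees a joint along a ray from its camera, and the 3D position is recovered where the rays from different phones (nearly) intersect. As a toy illustration of the idea (ours, not Move.ai’s actual algorithm), here’s the 2D version with two cameras –

```python
# Toy 2D triangulation: each camera at `origin` sees a tracked point
# along `direction`; intersecting the two rays recovers its position.

def intersect_rays(o1, d1, o2, d2):
    """Solve o1 + t*d1 == o2 + s*d2 for the 2D intersection point."""
    # 2x2 linear system [d1 -d2] [t s]^T = o2 - o1, solved by Cramer's rule
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; point cannot be triangulated")
    bx, by = o2[0] - o1[0], o2[1] - o1[1]
    t = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (o1[0] + t * d1[0], o1[1] + t * d1[1])
```

Real systems do this in 3D for dozens of joints per frame, with calibrated camera positions and a least-squares solve across all views, which is why adding a phone or two noticeably improves the capture.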

And another app, although not 3D, is Face 2D Live, released by Dayream Studios – Blueprints in January. This tool allows you to live-link a face app on your iPhone or iPad to make cartoons out of just about anything, including with friends also using the iPhone app. It costs just $14.99 and is available on the Unreal Marketplace here. Here’s a short video example to whet your appetite – we can see a lot of silliness ensuing with this for sure!

Not necessarily machinima, but for those interested in more serious facial mocap, Weta has been talking about how it developed its facial mocap processes for Avatar, using something called an ‘anatomically plausible facial system’. This is an animator-centric system that captures muscle movement, rather than ‘facial action coding’, which focusses on identifying emotions. Weta states its approach leads to a wider set of facial movements being integrated into the mocapped output – we’ll no doubt see more in due course. Here’s an article on the FX Guide website which discusses the approach being taken, and for a wider-ranging discussion of the types of performance tracking used by the Weta team, Corridor Crew have bagged a great interview with the Avatar VFX supervisor, Eric Saindon, here –

Tech Update: AI Generators (Feb 2023)

Tracy Harwood Blog February 5, 2023

Now the AI genie is in full flight, we’ve been anticipating the exponential growth in interest in creative applications – and also in the ethical and moral questions being asked. This month, we have not been disappointed! We start our review with some of the tools we’ve seen emerge and finish with a review of the legal situation that’s been taking shape over the last month, since our January update on the topic.

It takes Text-to-???

It seems most of the online world has bought into the hype around ChatGPT, and who can blame folks for wanting in on the action – it reached a million users faster than any platform in the history of the internet, i.e., five days. Whilst it appears that Google and others have for once been caught sleeping on the job, Microsoft has stolen a march and helped OpenAI monetize its premium chat service for a mere $20/month (just a week after the extended partnership was announced, and only if you are US-based), from which each partner will no doubt benefit massively. In the meantime, a huge number of Chrome browser extensions based on ChatGPT have launched, for everything from voice-command search, article summaries, writing Excel formulae, email assistance and LinkedIn comments management to SEO optimization and a heap of other useful-ish applications. Go to the Chrome web store and search for the ones that will help with your creative pipeline – I’m sure someone somewhere will have thought of it before you.

I found a few uses for the YouTube summary assistants, of which there are a couple of options, this being one (by Glasp) –

After adding the extension, it took a couple of seconds in total to transcribe the video, copy the text into my ChatGPT account and summarize an hour-long interview I did with John Gaeta last year. This is the summary of that interview, which is a pretty good overview of what was discussed, albeit the first part is almost verbatim from the intro –

The video is an interview with John Gaeta, who is known for creating the famous bullet time shot in the “Matrix” films. He won the Best Visual Effects Oscar for his work on the Matrix and co-founded Lucasfilm’s immersive entertainment division called ILMxLAB, where he acts as the Executive Creative Director. In the interview, he talks about his experience in creating a demo for the Sony PlayStation super computer, which was shown at Siggraph in 2000. He also mentions his interest in building big and complicated projects while also making entertainment products. Gaeta explains how the bullet time shot was a result of a philosophy they had during the Matrix trilogy of creating methods that might be used if one was making virtual reality. He also touches on how the rise of the internet and gaming helped audiences comprehend the shot better and how it carried on the underlying premise of the Matrix itself. (ChatGPT)

and here’s the full interview –

If, like me, you’re after nuggets and detail (my day job is as a researcher), then this won’t really help you. But if you just want to get a sense of what’s being discussed, are reviewing lots of material from various channels, or want a quick summary for promo, then it’s really a great way to generate an overview.
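The pattern behind these summary assistants is worth sketching: split the transcript into chunks that fit the model’s context window, summarize each chunk, then summarize the summaries. A minimal map-reduce sketch – the `summarize` callable here is a placeholder for whatever model wrapper you use, not a real API –

```python
# Map-reduce summarization: chunk a transcript, summarize each chunk,
# then summarize the concatenated chunk summaries.

def chunk_text(text, max_words=800):
    """Split text into chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize_transcript(text, summarize, max_words=800):
    """`summarize` is any callable str -> str (e.g. a model API wrapper)."""
    partials = [summarize(c) for c in chunk_text(text, max_words)]
    return summarize(" ".join(partials)) if len(partials) > 1 else partials[0]
```

The second-pass summary is why long interviews come back with the intro over-represented: the opening chunk is usually the densest in names and context, so it dominates the final pass.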

Creatively, though, we’re far more interested in the potential for text-to-otherstuff, such as 3D assets, video and 3D environments. Towards that end, although targeting game asset development, Scenario.gg is a proposition (launched in 2019) that closed a round of significant seedcorn investment in January. With its creators’ backgrounds in gaming, AI and 3D technology, Scenario’s generative AI creates game assets using both image and text prompts, albeit current outputs are 2D images (see below). Its aim is to support the creation of high-fidelity assets, 3D models, sounds and music, animations, environments and more, based on users’ uploads of their own content (image and text description). The ownership model on generated content is interesting: it pushes the IP issue back to users, since only images you have the right to use can be uploaded.

Scenario believes its product will cut creative production time for game artists (those who choose to work with AI). It surpassed 0.5M created images on 21 January, so is clearly gaining momentum. This is an interesting development, given comments by Aura Triolo (an Independent Games Festival 2023 judge and animation lead at Ivy Road) in an article covering AI devs for the metaverse and games here (by Wagner James Au on his New World Notes blog, published in mid-December). Triolo makes the point that time savings probably won’t be worth the effort, given how much additional work is required to refine the 3D models that AI generates, particularly in AAAs (as in the use of AI for procedural generation). That may well be true in a context where automation tools have been used for some time, but this type of toolset will surely benefit thousands of indies, and not just in gaming but also machinima and virtual production. Time will tell.

source: Scenario in Techcrunch

Meta AI has published a paper that discusses taking its text-to-video (MAV) generator one step further to 4D (dynamic NeRFs, or neural radiance fields), referring to it as MAV3D (Make-A-Video-3D). It optimizes scene appearance, density and motion consistency from text-to-video, generates a view from any camera location and angle, and can be composited into any 3D environment. MAV3D does not require any 3D or 4D data. It is not yet available as a tool to use, but here’s the paper to read. We look forward to hearing more about this in due course.

Text-to-fashion? Well, maybe not just yet. However, ReadyPlayerMe, a cross-platform avatar creator, has a new feature on its recently launched ‘labs’ web platform. Currently in beta and free, it allows you to customise avatar outfits using DALL-E’s generative AI art platform via text prompts. After faffing for a few minutes, I created this (but the hair is still a mess!) –


Text-to-music is an interesting area. There are no doubt going to be lots of training issues emerging with this type of AI; however, what fascinates me with Google‘s MusicLM is the ability to generate music from rich captions, using a ‘story mode’ (with a sequencer) and even from descriptions of paintings, places and epochs. I don’t think I’ve ever heard anything quite like the piece it generated for Munch’s The Scream, using a description by Iain Zaczek – not exactly melodic, but certainly evocative of the artwork. It will also let you hum something and then apply a specific instrument to hear it played back, apparently. There is currently no API through which you can test your own ideas, but go to the Github page here and check out the samples reported in the paper. Google, it seems, is only making the MusicCaps dataset available – 5.5k music-text pairs, including rich text descriptions provided by human experts – and has obviously decided to let someone else create the API and take the rap for it. It will no doubt be one of many in due course, but there are some great ideas presented in the paper worth checking out.

Curation and Discovery

Curating content is one of the perennial problems of the internet – and it’s a problem that is getting more challenging, because even with so much effort being put into creator toolsets, no one is really paying much attention to how creators’ work can be discovered (unless of course there is advertising embedded in it, which is a whole different agenda). One can only hope that when advanced AIs are embedded within search engines, new opportunities for content discovery will emerge – sadly, however, I suspect this will result in an even deeper quagmire, leaving it to the key platforms to find a way through. Relatedly, Artstation has now improved its AI search and browsing filters – it can hide artwork generated with AI in search and marketplaces and thereby ‘make it easier to discover and connect with creators most relevant to you’, but strangely it doesn’t offer the converse, promoting only work created with AI in search.

On the matter of curation, a website for AI generators has launched, called All Things AI – AI developers can submit their tool to the site for potential inclusion. The site has been developed by Rick Waalders, and whilst there are numerous AI tools and services on the site, there’s not much information yet about its creator, or indeed reviews of the AIs themselves. If the site takes off, it might just be the place to find the apps you want – time will tell. Until then, blogs such as Pinar Seyhan-Demirdag‘s Medium post, dated 11 January, are great sources of curated content. In this post, Pinar lists more than a dozen 3D asset and scene generation models – a very useful summary, thanks. Now, what we really need is a Fandom wiki for AIs…!

The Legals are Circling

What a busy month it has been in lawyerland.

On 17 January, in San Francisco, a class action suit was filed on behalf of three artists. It claims that Stability AI’s Stable Diffusion and DreamStudio, MidJourney and DeviantArt have colluded in the use of an AI trained on scraped content that infringes the rights of copyright holders (the training data having been compiled by an organization called LAION, which has connections to Stability AI), and that the results of its application by users have a detrimental impact on artists’ ability to profit from their own work. One of the legal team has written a detailed blog about the action here, and here is the link to the action, should you want a quick scan through its 46 pages. The following day, 18 January, Getty Images stated that it had commenced proceedings against Stability AI in the High Court of Justice in London –

… Stability AI infringed intellectual property rights including copyright in content owned or represented by Getty Images. It is Getty Images’ position that Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI’s commercial interests and to the detriment of the content creators.

Getty Images believes artificial intelligence has the potential to stimulate creative endeavors. Accordingly, Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights. Stability AI did not seek any such license from Getty Images and instead, we believe, chose to ignore viable licensing options and long‑standing legal protections in pursuit of their stand‑alone commercial interests.

No more specifics are available on the Getty case. The latter comments are particularly interesting, however, given Getty’s stance with creative contributors, whom it has banned from uploading AI-generated content – something we highlighted in our December blog post.

On the US class action, notwithstanding the technicalities of its description of how the AI works (which some have already questioned as being incorrect), the action is primarily about two aspects of copyright infringement. One relates to a company ‘licensing images’ for use in training the AIs (which a couple of the image-generating companies are using); the other is the specific use of an artist’s name to generate an image ‘in the style of…’, which suggests that person’s specific work, tagged presumably with their name, has been used without their permission to train the AI. Those using the images ‘in the style of x’ are referred to as ‘imposters’, whom it is argued are contributing to the fake economy (which various governments are currently trying to control). The suit is not against the imposters but against those who allow imposters to profit.

The action holds that the companies ‘scraping’ the images (a metaphor for how images are actually used in the diffusion process) could provide a means to seek permission from the artists whose work is used, but have not done so because it is ultimately expensive and time-consuming. The action seeks compensation for lost revenue and damage to artists’ brand identity. The premise is that the companies being sued are generating huge amounts of money, little of which is finding its way to the contributors whose work is used in the training processes. The money flows are therefore where the ‘fair use doctrine’ is being brought to bear.

The substantive legal issue, however, seems to centre on transformative works (as distinct from derivative works). Corridor Crew has produced a nice summary in a video with California State attorney Jake Watson, explaining how AI probably DOES transform the work sufficiently for its use of copyrighted images to constitute FAIR USE. So it still comes down to what fairness means in terms of money flows, ultimately. Here’s the video –

And for a line-by-line commentary on another perspective on the class action blog post by one of the artists’ legal team, a response has been created by a group of ‘tech enthusiasts uninvolved in the case, and not lawyers, for the purpose of fighting misinformation’. I hesitated to include this link, especially since the authors are anonymous and the contact link is to an ancient Simpsons video sniping at profiteering lawyers, but it makes some interesting points and is well referenced.

It generally sounds like there is a fix to this problem, and it’s one we’ve highlighted in previous posts on this topic: AI generator platforms paying artists to use their work… oh, wait, isn’t Shutterstock already doing that, working with OpenAI’s DALL-E? Yes, and here is its generator – it pays the artists (I couldn’t find how much) and users pay for the images downloaded ($19/month for 10 images or 1 video). The wicked problem here, Techcrunch argues, is whether folks will be willing to pay for AI-generated artwork – suggesting yes, they will, if the generator service has the best selection, pricing, discovery and overall experience for the user and the artist. And DreamStudio Pro is already a paid-for service that folks are using.

Or you can opt out of your art being included in Stable Diffusion’s training data, using the third-party HaveIBeenTrained web service by an artist-based startup called Spawning. It looks as though there have been a few problems using this service, according to the comments – such as being able to tag anyone’s imagery – but even more surprising is how few seem to have viewed the support video, in comparison to all the hoo-haa in the media on this topic. Check out this video –

Alternatively, you can just head over to LAION’s website and opt out there (scroll to the bottom of the page), following the long-winded GDPR processes under EU regulation.

In the meantime, Artstation has updated its T&Cs to make it clear that scraping, reselling or redistributing content is not permitted, and furthermore it has committed to not licensing content to AI generator platforms for training purposes. Epic’s stance is always interesting to note, but its business model is not tied to just this one type of offer, as the other platforms’ are.

And finally on the legals this month, we were intrigued to note that the US Copyright Office appeared to have cancelled the registration of the first AI-generated graphic novel, Zarya of the Dawn (our feature image for this article, by Kristina Kashtanova), claiming it had ‘made an error’ in the registration process… it turns out that they are still ‘working on a response’, stating their portal is in beta. This was not before the artist had made an extensive response to the apparent cancellation through her lawyer, Van Lindberg. It’s worth taking a few minutes to read the claims for originality, using MidJourney to support her creative processes, here. In sum, the response states –

the use of that tool does not diminish the human mind that conceived, created, selected, refined, cropped, positioned, framed, and arranged all the different elements of the Work into a story that reflects Kashtanova’s personal experience and artistic vision. As such, the Work is the result of human authorship

So this is yet another situation where the outcome is awaited. More next month for sure!

Tech Update 1: AI Generators (Jan 2023)

Tracy Harwood Blog January 8, 2023

A month is apparently a VERY long time in the world of artificial intelligence… since our last post on this topic, released at the beginning of December, we’ve seen even more amazing techs launch.

ChatGPT

This is another creative AI from the OpenAI stable (also the creator of DALL-E 2) that produces text in a conversational way. ChatGPT is capable of answering pretty much any question or request you care to throw at it, and the answers are actually quite robust, if a little fuzzy around the edges at times (don’t rely on it for anything critical, in the words of Sam Altman, OpenAI CEO). Here’s an example I asked it for this post, using the prompt: ‘In the style of H P Lovecraft, write a description of a character for someone who hunts the Chluthu’

Response:

Professor Alistair Dunsmore was a man obsessed. For years, he had devoted his life to studying the mysteries of the cosmos, and in particular, the myth of the Chluthu.

A brilliant but troubled man, Dunsmore was driven by a need to uncover the truth about the terrifying creature that lurked beyond the veil of our known universe. He had spent years pouring over ancient texts and forbidden tomes, piecing together the scattered clues that hinted at the Chluthu’s existence.

And then, one fateful day, he discovered something that would change his life forever. A fragment of an ancient artifact, imbued with otherworldly powers, that seemed to hold the key to finding the Chluthu.

Dunsmore knew that he had to have it, no matter the cost. And so he set out on a quest to track down the artifact and unlock its secrets. He traveled to the far corners of the earth, braving dangers and braving the unknown, all in the pursuit of his ultimate goal.

Some say that Dunsmore was mad, driven to the brink by his obsession. But to those who knew him, he was a hero, a brave and brilliant man who dared to stare into the darkness and face the horrors that lay within.

That’s impressive – and it took just seconds to generate. It has great potential as a useful tool for scriptwriting, story and character development in machinima and virtual productions, and also for the marketing assets you might use to promote your creative works!

And as if that isn’t useful enough, some bright folks have already used it to write a game and even create a virtual world. Note the detail in the prompts being used – this one from Jon Radoff’s article (4 Dec 2022) for an adventure game concept: ‘I want you to act as if you are a classic text adventure game and we are playing. I don’t want you to ever break out of your character, and you must not refer to yourself in any way. If I want to give you instructions outside the context of the game, I will use curly brackets {like this} but otherwise you are to stick to being the text adventure program. In this game, the setting is a fantasy adventure world. Each room should have at least 3 sentence descriptions. Start by displaying the first room at the beginning of the game, and wait for my to give you my first command’.

The detail is obviously the key, and no doubt we’ll all get better at writing prompts as we learn how the tools respond to our requests. It is interesting that some are also suggesting there may be a new role on the horizon… a ‘prompt engineer’ (check out this article in the UK’s Financial Times). Yup, that and a ‘script prompter’, or any other possible prompter-writer role you can think of… but can it tell jokes too?
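The mechanics behind Radoff’s text-adventure prompt are worth sketching: the whole trick is keeping the running conversation as context so each turn is answered in character, with curly-brace input routed as out-of-game instructions. A hedged sketch with a stand-in model callable (no real API calls, and the message format is our own illustration) –

```python
# Minimal text-adventure loop in the spirit of Radoff's ChatGPT prompt:
# keep the full conversation as context; treat {curly-brace} input as
# out-of-character instructions rather than game commands.

SYSTEM_PROMPT = ("You are a classic text adventure game. Never break "
                 "character. Curly-brace input is an out-of-game instruction.")

def make_turn(history, player_input, model):
    """`model` is any callable(list_of_messages) -> str; stub it for tests."""
    role = "instruction" if player_input.startswith("{") else "command"
    history.append({"role": role, "text": player_input})
    reply = model(history)  # the model sees the whole history each turn
    history.append({"role": "game", "text": reply})
    return reply
```

Usage would start with `history = [{"role": "system", "text": SYSTEM_PROMPT}]` and then call `make_turn(history, "go north", my_api_wrapper)` each turn – the growing history is what lets the game stay consistent about rooms it has already described.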

Give it a go – we’d love to hear your thoughts on the ideas it generates. Of course, those of you with even more flAIre can then use the scripts to generate images, characters, videos, music and soundscapes. There’s no excuse for not giving these new tools for producing machine cinema a go, surely.

The link requires registration to use (it is currently free), and note the tool now also keeps all of your previous chats, which enables you to build on themes as you go: ChatGPT

Image Generators

Building on ChatGPT, D-ID enables you to create photorealistic speaking avatars from text. You can even upload your own image to create a speaking avatar, which of course raises a few IP issues, as we’ve just seen from the LENSA debacle (see this article on FastCompany’s website), but JSFILMZ has highlighted some of the potential of the tech for machinima and virtual production creators here –

An AI we’ve mentioned previously, Stable Diffusion, released version 2.1 on 7 December 2022. This is an image-generating AI; its creative tool is called DreamStudio (and the Pro version will create video). In this latest version of the algorithm, developers have improved the filter which removes adult content while still enabling beautiful, realistic-looking images of characters to be created (now with better-defined anatomy and hands), as well as stunning architectural concepts, natural scenery, etc., in a wider range of aesthetic styles than previous versions. It also enables you to produce images with non-standard aspect ratios, such as panoramics. As with ChatGPT, a lot depends on the prompt written in generating a quality image. This image and prompt example is taken from the Stability.ai website –
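If you want to try those non-standard aspect ratios yourself, note that Stable Diffusion expects width and height in multiples of 8, and version 2.1 was trained around a 768px base resolution. A hedged sketch: a small helper that snaps a target ratio to valid dimensions, with the actual generation call (via Hugging Face’s `diffusers` library) guarded, since it needs the model weights and ideally a GPU –

```python
# Snap a desired aspect ratio to Stable Diffusion-friendly dimensions
# (both sides must be multiples of 8), then optionally generate.

def sd_dimensions(aspect_w, aspect_h, base=768):
    """Return (width, height) with `base` pixels on the short side,
    both rounded down to multiples of 8."""
    if aspect_w >= aspect_h:
        w, h = base * aspect_w / aspect_h, base
    else:
        w, h = base, base * aspect_h / aspect_w
    return (int(w) // 8 * 8, int(h) // 8 * 8)

def generate(prompt, aspect=(21, 9)):
    w, h = sd_dimensions(*aspect)
    try:  # heavyweight: needs the diffusers package and model weights
        from diffusers import StableDiffusionPipeline
        pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1")
        return pipe(prompt, width=w, height=h).images[0]
    except ImportError:
        return (w, h)  # dry run: just report the snapped dimensions
```

So a 21:9 panoramic comes out at 1792×768 – which is exactly the kind of framing the new version handles that older releases struggled with.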

source: Stability.ai

So, just to show you how useful this can be, I took some text from the ChatGPT narrative for our imaginary character, Professor Alistair Dunsmore, and used a prompt to generate images of what he might look like and where he might do his research. The feature images for this post are some of the images it generated – and I guess I shouldn’t have been so surprised that the character looks vaguely reminiscent of Lovecraft himself. The prompt also produced some other images (below), and all you need to do is select the image you like best. Again, these are impressive outputs from a couple of minutes of playing around with the prompt.

images of Professor Alistair Dunsmore, in his study, searching for the Chluthu, by Tracy & Stable Diffusion

For next month, we might even see if we can create a video for you, but in the meantime, here’s an explainer of a similar approach that Martin Nebelong has taken, using MidJourney instead to retell some classic stories –

Supporting the great potential for creative endeavour, ArtStation (which, btw, was bought by Epic Games in 2021) has taken a stance in favour of the use of AI-generated images on its portfolio website. This is in spite of thousands of its users demanding that it remove AI-generated work and prevent content being scraped – a request predicated on the lack of transparency in how AI developers train their models and assemble datasets. Instead, ArtStation has removed those using the Ghostbusters-style logo (‘no to AI generated images’) on their portfolios from its homepage and issued a statement about how creatives using the platform can protect their work. The text of an email received on 16 December 2022 stated:

Our goal at ArtStation is to empower artists with tools to showcase their work. We have updated our Terms of Service to reflect new features added to ArtStation as it relates to the use of AI software in the creation of artwork posted on the platform.

First, we have introduced a “NoAI” tag. When you tag your projects using the “NoAI” tag, the project will automatically be assigned an HTML “NoAI” meta tag. This will mark the project so that AI systems know you explicitly disallow the use of the project and its contained content by AI systems.

We have also updated the Terms of Service to reflect that it is prohibited to collect, aggregate, mine, scrape, or otherwise use any content uploaded to ArtStation for the purposes of testing, inputting, or integrating such content with AI or other algorithmic methods where any content has been tagged, labeled, or otherwise marked “NoAI”.

For more information, visit our Help Center FAQ and check out the updated Terms of Service.

You can also read an interesting article following the debate on The Verge’s website here, published 23 December 2022.
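Of course, the “NoAI” meta tag described above is only useful if scrapers actually check for it. Here’s a hedged sketch of what a well-behaved crawler could do (our own illustration; the tag name follows ArtStation’s email, but exact attribute conventions may vary by site), using only Python’s standard library –

```python
# Check an HTML page for a "NoAI" meta tag before using its content
# for dataset building, using only the standard library.
from html.parser import HTMLParser

class NoAIDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.no_ai = False

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; value may be None
        if tag == "meta":
            values = {v.lower() for _, v in attrs if v}
            if "noai" in values:
                self.no_ai = True

def allows_ai_use(html):
    """Return False if the page opts out of AI training via a NoAI tag."""
    parser = NoAIDetector()
    parser.feed(html)
    return not parser.no_ai
```

Like robots.txt, this is an honour system: the tag marks intent in the page markup, but nothing technically prevents a crawler from ignoring it – which is exactly the enforcement gap the legal actions above are circling.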

example of a logo used by creators on ArtStation portfolios

We’ve said it before, but AI is one of the tools that the digital arts community has commented on FOR YEARS. Its best use is as a means to support creatives in developing new pathways in their work. It does cut corners, but it pushes people to think differently. I direct the UK’s Art AI Festival, and the festival YouTube channel contains a number of videos of live-streamed discussions we’ve had with numerous international artists, such as Ernest Edmonds, a founder of the digital arts movement in the 1960s; Victoria and Albert Museum (London) digital arts curator Melanie Lenz; the first creative AI Lumen Prize winner, Cecilie Waagner Falkenstrom; and Eva Jäger, artist, researcher and assistant curator at Serpentine Galleries (London), among others. All discuss the role of AI in the development of their creative and curatorial practice, and AI is often described as a contemporary form of paintbrush and canvas.

As I’ve illustrated above with the H P Lovecraft character development process, it’s a means to generate ideas through which it is possible to select and explore new directions that might otherwise take weeks to find. It is unfortunate that some have narrowed their view of its use rather than engaging more actively in discussion of how it might add to the creative processes employed by artists, but we also understand the concerns some have about the blatant exploitation of copyrighted material used without any real form of attribution. Surely AI can be part of the solution to that problem too, although I have to admit so far I’ve seen very little effort being put into this part of the challenge – maybe you have?

In other developments, a new ‘globe’ plug-in for Unreal Engine has been developed by Blackshark. This is a fascinating world view, giving users access to synthetic 3D (#SYNTH3D) terrain data, including ground textures, buildings, infrastructure and vegetation of the entire Earth, based on satellite data. It contains some stunning sample sets and, according to Blackshark’s CEO Michael Putz, is the beginning of a new era of visualizing large-scale models combined with georeferenced data. I’m sure we can all think of a few good stories that this one will be useful for too. Check out the video explainer here –
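To give a flavour of the georeferencing that sits under tools like this, here’s the standard ‘slippy map’ conversion from latitude/longitude to satellite imagery tile indices at a given zoom level – this is the widely used Web Mercator tiling scheme, offered as general background rather than Blackshark’s actual pipeline –

```python
# Convert WGS84 lat/lon to Web Mercator tile indices at a zoom level,
# the tiling scheme most satellite imagery pipelines are built on.
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    n = 2 ** zoom  # n x n tiles cover the world at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y
```

Each zoom level quadruples the tile count, which is why streaming the whole Earth at ground-texture detail is such an engineering feat.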

And Next…?

Who knows, but we’re looking forward to seeing how this fast action tech set evolves and we’ll be aiming to bring you more updates next month.

Don’t forget to drop us a line or add comments to continue the conversation with us on this.