Blog

Fests & Contests Update (Nov 2022)

Tracy Harwood Blog November 21, 2022 Leave a reply

There are a growing number of ‘challenges’ that we’ve been finding over the last few months – many are opportunities to learn new tools or use assets created by studios such as MacInnes Studios. They are also incentivised with some great prizes, generally offered by the contest organizer, such as the Kitbash3D contest we link to in this post. This week, however, we were light on actual live contests to call out, but we have found someone who is always in the know: Winbush!

Mission to Minerva (deadline 2 Dec 2022)

Kitbash3D’s challenge is for you to contribute to the development of a new galaxy! On their website, they state: ‘Your mission, should you choose to accept, is to build a settlement on a planet within the galaxy. What will yours look like?’ Their ultimate aim is to outsource all the creative work to their community, combining the artworks that contest participants submit. There are online tutorials to assist, showing you how to use Kitbash3D in Blender and Unreal Engine 5, and your work can be either concept art or animation. Entry into the contest couldn’t be simpler: you just need to share your work on social media (Twitter, FB, IG, ArtStation) using the hashtag #KB3Dchallenge. Winners will be announced on 20 December and there are some great prizes, sponsored by the likes of Unreal, Nvidia, CG Spectrum, WACOM, The Gnomon Workshop, The Rookies and ArtStation (platforms). Entry details and more info here.

Pug Forest Challenge

This contest has already wrapped – but a few challenges of this type are now emerging: they give you an asset to play with for a period of time, a submission guideline process, and some fabulous prizes – all geared towards incentivising you to learn a new toolset, in this case UE5! So if you need the incentive to motivate you, it’s definitely worth looking out for these. Jonathan Winbush is also one of those folks whose tutorials are legendary in the UE5 community, so even if you don’t want to enter, this is someone to follow.

MacInnes Studios’ Mood Scene Challenge

John MacInnes recently announced the winners of his Mood Scene Challenge contest that we reported on back in August – we must say, the winners have certainly delivered some amazing moods. Check out the showreel here –

Projects Update (Nov 2022)

Tracy Harwood Blog November 14, 2022 1 Comment

This week, we take a look at some interesting projects we’ve found in our monthly search of the inter-web for all things machinima / virtual production / real-time. We bring you projects using Web3, made with a HUGE cast, mixing virtual and real, and blending 2D and 3D animation styles.

Crip Ya Enthusiasm by Snoop Dogg (rel 16 Oct 2022)

Apart from the typical Snoop Dogg lingo, which you either love or loathe, this is an interesting short made in Unreal Engine 5. It is interesting not so much because it is a music video by a self-confessed creative tech lover with a novel storytelling approach to putting his content out, but because it is being distributed through Snoop’s new Web3 platform, Astro Project, as a gamified experience or, to use his term, a ‘metaverse music video’. The characters used in the video have been made available on the platform’s marketplace as NFTs, and other creators are being encouraged to create and share content through the platform to unlock exclusive content and hosted events. Anyone buying the characters can do anything they like with them – include them in their own creative works, for example – using the blockchain tech embedded in their creation and distribution. So, whether you like the content or not, it’s the platform process used that makes this project particularly interesting.

As with all things NFT, it is worth noting that success is really only as good as the marketing effort through which you can reach decent audiences and so move the market. Obviously Snoop Dogg has the upper hand on this.

SAPIENS by Lukas Klosel (rel 7 July 2022)

This is a cinematic short about the impact of man on our planet. It’s a very provocative film, which does include some disturbing scenes (so if you’re sensitive, you may want to skip this one). We’re not exactly sure what creative tools were used, and the description doesn’t say, but there’s certainly a fair amount of post-production as well as mixing of real and virtual content, so there’s bound to have been some use of virtual production tools. We include it because of the way it mixes virtual and real scenes, how it portrays its focal story through visual concepts (and lens focus), and its clever use of sound design.

Sandstorm by Wailander (rel 13 May 2022)

A more traditional machinima made in Star Citizen, this has some great dynamics, played out by the 97 players involved in shooting the scenes included in the finished video. It’s an incredibly complex set of scenes, with many participants involved in portraying the details of a rather loosely defined plot. Its creative goal, however, wasn’t so much to tell a story as to bring together as many different players as it could. We certainly think it delivered on that, drawing in organizations from five different countries (France, USA, Switzerland, Belgium, Germany) and portraying as accurately as it could how fighting unfolds in this expansive engine. The story is held together with a front-end briefing against which periodic updates are given. The credits section alone is worth a look. The final scene intimates a continuing saga, and we look forward to seeing that – and perhaps more of a story integrated into the fighting action too.

Roborovski by Rick Pearce (rel 2020)

Too sentimental for Ricky perhaps, but certainly not one for Nemo-loving children, this is a short that mixes 2D and 3D animation styles very effectively. Made in Unreal Engine 4.21, primarily to test the creative pipeline in the engine, the film won Flickerfest’s best animation award in 2020. It was made by Pearce’s Spectre Studio and funded by Screen Australia, so it is by no means a naive creative endeavour. The video is linked in the title above, but here’s a behind-the-scenes look at the making of the film, which is particularly interesting too.

The Walker by AFK – The Webseries (Rel 5 Aug 2022)

Finally, this month, a revisit to a little bit of old-style fun made in Unreal Engine 5, invoking all those great memories of RVB Series 1 (Rooster Teeth, for those in the know). This short features some incredibly well done Star Wars comedy voice acting, told through the suits of the Empire’s Snowtroopers deep in the bowels of an AT-AT. Enjoy!

Tech Update 2 (Nov 2022)

Tracy Harwood Blog November 7, 2022 Leave a reply

This week, we take a look at some potentially useful tech platforms, starting with an inspired new service from Nvidia, then a new service and mod hub for The Sims 4, followed by some interesting distribution options linked to blockchain tech and another for festivals and events.

Cloud Services for Artists

With the ongoing challenge of access to the kit needed for many of the new render tools we’ve reviewed over the months we’ve been running the show, it’s interesting to see that Nvidia is now launching Omniverse Cloud services. Ostensibly, the service is aimed at powering future ‘metaverse’ applications and digital twin-type projects, but clearly it’s also a very good way for content creators to finally access contemporary tools without the hassle of continually updating their hardware – or indeed ever worrying about acquiring the latest desirable RTX card! You can find out more about the services here – and we’d love to hear from anyone using them about their experiences.

Nvidia Omniverse Cloud Nucleus

Anyone for Sims?

The Sims 4 is now FREE to play (announced 18 Oct 2022), although we note that specific content packs will still be paid-for. No doubt Phil will be peeved, since we all advised him to go for Unreal as a creative option when he switched his attention from RDR2 last year! The glitzy Summit vid is clearly pitched against the Fortnite user, but with an entirely different heritage and a more adult trajectory. They are even partnering on a new content creator curation platform, a mod hub hosted by Overwolf (coming soon).

Distribution Options

With rapid progression towards Web3, and the growing demand for 3D content to fill the platforms and sites people create, Josephyine If has usefully created a spreadsheet that you can access here. The XLS file lists platforms and their creators, including the website addresses through which film and video content can be shared (at the time of writing, some 18 different platforms such as Hyphenova, MContent – see video below – and Eluv.io). The main point of the platforms, at least at this stage, is to manage the IP of content, so the emphasis is on how to share blockchain-marked film. It’s probably one of the most interesting aspects and benefits that Web3 has for content creators: the ability to sell, track and manage content over time. This is something that’s been a major flaw of the YouTube platform process over the years since it evolved into an ad-revenue-driven distribution model. If you find any of the platforms particularly useful (or not), or others not mentioned on the list, do drop us a line and let us know.

We also found a potentially interesting distribution platform aimed primarily at festivals and events, called VisualContainerTV. The platform, launched in 2009, makes content available for free – and therefore competes directly with the likes of YouTube (which, frankly, it can’t easily do) – but, more importantly, it can make content accessible behind a paywall. This means artists, creators and curators can receive payment for ticketed content shown over the platform via the internet, and also have that content branded and associated with particular curated events. At this stage of its development, it appears to be primarily targeting college students and courses based in Europe (the platform was developed in Italy), but it is certainly something that looks interesting for small-scale user groups. There are some very interesting arts projects on the site, so if nothing else, add it to your streaming platforms folder and check back periodically for interesting new works.

VisualContainerTV

Tech Update 1 (Nov 2022)

Tracy Harwood Blog October 30, 2022 Leave a reply

Hot on the heels of our discussion on AI generators last week, we are interested to see tools already emerging that turn text prompts into 3D objects and even film content, alongside a tool for making music too. We have no fewer than five interesting updates to share here – plus a potentially very useful tool for rigging the character assets you create!

Another area of rapid technological advancement is mo-cap, especially markerless mo-cap, which – let’s face it – is really the only way to think about creating naturalistic movement-based content. We share two interesting updates this week.

AI Generators

Nvidia has launched an AI tool that will generate 3D objects (see video). Called GET3D (which is derived from ‘Generate Explicit Textured 3D meshes’), the tool can generate characters and other 3D objects, as explained by Isha Salian on their blog (23 Sept). The code for the tool is currently available on Github, with instructions on how to use it here.

Google Research, together with researchers at the University of California, Berkeley, is also working on similar tools (reported in Gigazine on 30 Sept). DreamFusion uses NeRF tech to create 3D models which can be exported into 3D renderers and modeling software. You can find the tool on Github here.

DreamFusion

Meta has developed a text-to-video generator, called Make-A-Video. The tool uses a single image or can fill in between two images to create some motion. The tool currently generates five second videos which are perfect for background shots in your film. Check out the details on their website here (and sign up to their updates too). Let us know how you get on with this one too!

Make-A-Video

Runway has released a Stable Diffusion-based tool that allows creators to switch out bits of images they do not like and replace them with things they do like (reported in 80.lv on 19 Oct), called Erase and Replace. There are some introductory videos available on Runway’s YouTube channel (see below for the Introduction to the tool).

And finally, also available on Github, is Mubert, a text-to-music generator. The tool uses a Deforum Stable Diffusion colab. Described as proprietary tech, its creators provide a custom license which says that anything created with it cannot be released on DSPs as your own work. It can be used for free, with attribution, to sync with images and videos – mentioning @mubertapp and the hashtag #mubert – with an option to contact them directly if a commercial license is needed.

Character Rigging

Reallusion’s Character Creator 4.1 has launched with built-in AccuRIG tech – this turns any static model into an animation-ready character and also comes with cross-platform support. No doubt very useful for those assets you might want to import from any AI generators you use!

Motion Capture Developments

That ever-ready multi-tool, the digital equivalent of the Swiss army knife, has come to the rescue once again: the iPhone can now be used for full-body mocap in Unreal Engine 5.1, as illustrated by Jae Solina, aka JSFilmz, in his video (below). Jae has used move.ai, which is rapidly becoming the gold standard in markerless mocap tech, and for which you can find a growing number of demo vids on YouTube showing how detailed movement can be captured. You can find move.ai tutorials on Vimeo here, and for more details about which versions of which smartphones you can use, go to their website here – it’s very impressive.

Another form of capture works from the detail of the image itself. Reality Capture has launched a tool that you can use to capture yourself (or anyone else for that matter, including your best doggo buddy) and use the resulting mesh to import into Unreal’s MetaHuman. Even more impressive is that Reality Capture is free – download details here.

We’d love to hear how you get on with any of the tools we’ve covered this week – hit the ‘talk’ button on the menu bar up top and let us know.

Report: Creative AI Generators (Oct 2022)

Tracy Harwood Blog October 23, 2022 2 Comments

In this month’s special report, we take a look at some of the key challenges in using creative AI generators such as DALL-E, MidJourney, Stable Diffusion and others. Whilst we think they have FANTASTIC potential for creators, not least because they cut down the time in finding some of the creative ideas you want to use, there are some things that are emerging that need to be considered when using them.

Firstly, IP is a massive issue. As noted in this article on Kotaku (Luke Plunkett), the recent rise of AI-created art has brought to the fore some of the moral and legal problems in using it. In terms of the moral issues, some are afraid of a future where entry-level art positions are taken over by AI, and others see AI-created art as a reflection of what’s already occurring between artists – the influence of style and content… but this is an argument that first came to the fore when computers were used by artists back in the 1960s. Quite frankly, we are now seeing some of the most creative work in a generation come to fruition that just would not have happened without computational assistance. Take a look at the Lumen Prize annual entries, for example, to see what the state of the art is with creative possibilities of AI and other tech. Tracy even directs an Art AI Festival, aiming to showcase some of the latest AIs in creative applications, working in collaboration with one of the world’s leading creative AI curators, Luba Elliott.

As to the legal issues, these are really only just emerging, and in a very disjointed and piecemeal way. It was interesting to note that Getty Images notified its contributors in an email (21 Sept 2022) that “Effective immediately, Getty Images will cease to accept all submissions created using AI generative models (e.g., Stable Diffusion, Dall‑E 2, MidJourney, etc.) and prior submissions utilizing such models will be removed.” It went on to state: “There are open questions with respect to the copyright of outputs from these models and there are unaddressed rights issues with respect to the underlying imagery and metadata used to train these models. These changes do not prevent the submission of 3D renders and do not impact the use of digital editing tools (e.g., Photoshop, Illustrator, etc.) with respect to modifying and creating imagery.” This is hot on the heels of a number of developments earlier in the year: in February 2022, the US Copyright Office refused to acknowledge that an AI could hold copyright over its creative endeavour (article here). By September 2022, an artwork created with MidJourney by Jason Allen, which won the Colorado State Fair contest, was causing a major stir across the art world as to what constitutes art, as outlined in this article (Smithsonian Magazine) and this short news report here –

Of course, the real dilemma is what happens to artists, particularly those at the lower end of the food chain. By way of another example, consider the UK actors’ union Equity’s response to recent proposals by the Government to include a data mining exemption for audio-visual content in its proposed new AI regulation. Why that’s interesting is because a number of organizations that would otherwise employ these artists, say as graphic designers or concept artists, are already rapidly replacing them with AI-generated images – Cosmopolitan ran its ‘first AI generated cover’ in June 2022 and advertising agencies the world over are doing likewise (AdAge article). Some image users have even stated that in future they will ONLY use these tools as image sources, effectively cutting out the middle man – and indeed the originator of the contributory works. So, of course, Getty is not going to be happy about this… and neither are the many contributors to its platforms.

And so here is the nub of the problem: in the rush that is now going to follow Getty’s stance (with others of similar influence probably to follow), how will the use of AI generators be policed? This has pretty serious consequences because it has implications for all content, including on YouTube and in festivals and contests around the world – how would creative works like The Crow be judged (see our blog post here too)? It certainly places emphasis on the role of metadata and statements of authorship, but it is also as good an argument as we can think of for using blockchain too! The Crow, for example, briefly mentions the AI generator tool it used, which is freely available on Google CoLab here, but it doesn’t show the sources of the underlying training data set.

AI code source is the Pytti Colab Notebook (sportsracer48)

We contend that the only way to police the use of AI-generated content is actually by using AI, say by analysing pixel-level detail… and that’s because one of Getty’s points is no doubt going to be how its own stock images, even with copyright claims over them, have been used in training data sets. AI simply cuts out the stuff it doesn’t want and voila, something useful emerges! So, unless there is greater transparency and disclosure among the creators of AI generators AS A PRIORITY on where images have been scraped from and how they have been used, there is going to be a major problem for all types of content creators – including machinima and virtual production creators using these tools to infuse new ideas into their projects – especially as the ability to turn a 2D image into a 3D object becomes accessible to a wider range of creators. Watch this space!

In the meantime, we’ll be doing a podcast on the Completely Machinima YouTube channel next month about some of the best creative ideas we’ve seen, so do look out for that too.

We’d love to hear your views on this topic, so do drop them into the comments.

btw, our featured image was created in MidJourney using the prompt: ‘Diary of a Camper made in Quake engine’, by @tgharwood