Do you have experience optimizing custom neural network models for fast inference using the latest TensorRT plugins and cuDNN? Then we’d love to talk to you. Please head over to our careers page and get in touch!

Our latest AI: Gen3 

 

Today, we are launching Gen3, the latest generation of our AI for our community of creators and developers. 

 

Gen3 has been a labor of love, skill and persistence. We’ve been working on it for more than a year, because we knew that the Gen3 architecture would lay the foundation for a revolution in 3D motion tracking. It is difficult to overstate how excited we are. Not only does Gen3 produce far better output, it does so at significantly higher throughput, so much so that it can now run in real time.

 

With all those improvements now available, we’ll be releasing a range of new products, both in the cloud and for local (on-premises) use.

 

RADiCAL is a small team of 3D graphics and AI enthusiasts. We hope you enjoy the fruits of our labor as much as we do. Below, we’ve summarized some of the highlights we want you to know about.

 

Key features: 

 

RADiCAL is optimized for content creators, with the following priorities guiding everything we do: 

 

  1. Human aesthetics: because of our holistic approach to motion and deep learning, we’ve massively enhanced the human, organically expressive look and feel of our output, with smooth results that substantially reduce jitter and snapping; 
  2. Fidelity: Gen3 was designed to tease out much more detail in human motion than previous versions; 
  3. Speed: we want to ensure that our technology is capable of running in real time across most hardware and software environments. 

 

Going forward, Gen3 will support both CORE (our cloud-based motion capture technology) and new real-time products (including an SDK) that we will announce and release shortly.

 

While Gen3 represents a massive leap toward realizing those priorities, we also know that we have more work to do. More about that below.

 

About our science:

 

There’s a lot of secret sauce in our science. But here’s what we can say: we’ve developed our AI to understand human motion holistically. Rather than stitching together a sequence of independent poses to create the impression of motion, we interpret the actor’s input through an understanding of human motion and biomechanics in three-dimensional space over time. In other words, our technology thinks in four dimensions: x, y, z and time.
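
To make that concrete, here is a minimal sketch (our own illustration, not RADiCAL’s actual code; the joint count and the smoothing step are assumptions) contrasting per-frame pose estimation with a spatiotemporal view of the same data:

    import numpy as np

    # Per-frame approach: every frame is solved independently, so noise in
    # one frame shows up directly as jitter in the animation.
    # poses has shape (T, J, 3): T frames, J joints, xyz coordinates.
    T, J = 120, 24                    # 24 joints is an assumption for illustration
    poses = np.random.rand(T, J, 3)   # stand-in for per-frame estimates

    # Spatiotemporal view: treat the whole clip as one signal over (x, y, z, t)
    # and reason across time. Even a simple moving average couples neighbouring
    # frames and suppresses frame-to-frame jitter.
    kernel = np.ones(5) / 5.0
    smoothed = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"),
        axis=0,
        arr=poses.reshape(T, -1),
    ).reshape(T, J, 3)

A learned model does far more than a moving average, of course, but the shape of the problem is the same: time is part of the input, not an afterthought.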

 

We have more work to do:

 

As proud as we are of our progress, we want to do better in a few areas. One of our top priorities for the next few weeks and months is to better anchor our animations to the floor and reduce certain oscillations. 

 

We expect to roll out a first set of improvements within weeks, which should take us much closer to where we want to be in terms of reducing foot sliding and oscillations.

 

But we expect more work to be necessary after that. Those additional improvements will come with the next major release, version 3.1 or 3.2. We’ve already started work on them, and we’re genuinely excited about making the results of our research public soon.

 

In the meantime, you can substantially mitigate these effects by following the guidance below.

 

How to get the best results:

 

To get the most out of our technology, you should: 

 

  • Static, stable camera: place your camera on a flat, stable surface (or a tripod, of course). Don’t adjust the zoom while recording, and don’t cut between different camera angles;
  • Single actor: record a single person at a time;
  • T-pose calibration: ensure the actor strikes a T-pose within the first five seconds, with the entire body clearly visible and facing the camera; and
  • Aspect ratio: record and upload videos with aspect ratios no wider than 4:3, because our AI only processes video in a 4:3 ratio. You can upload wider videos and we’ll crop them down automatically, but keep your actor inside the 4:3 frame so they don’t get cropped out (see the sketch after this list).
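
To illustrate what that crop means for a typical 16:9 recording, here is a minimal sketch (our own example, not RADiCAL’s actual pipeline; the horizontally centered crop is an assumption on our part) of reducing a frame to 4:3:

    def crop_to_4_3(width: int, height: int) -> tuple[int, int, int]:
        """Return (crop_width, x_offset, crop_height) for a 4:3 crop.

        A horizontally centered crop is an assumption on our part,
        which is one more reason to keep your actor near the middle.
        """
        target_width = (height * 4) // 3
        if target_width >= width:      # already 4:3 or narrower: nothing to crop
            return width, 0, height
        x_offset = (width - target_width) // 2
        return target_width, x_offset, height

    # A 1920x1080 (16:9) recording keeps only the central 1440 pixels:
    print(crop_to_4_3(1920, 1080))     # -> (1440, 240, 1080)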

 

Play nicely, and you’ll get the best results!

 

*     *     *

 

As ever, we’re grateful for the support of the RADiCAL community. We’re excited about feedback, good and bad. We’re even more excited about constructive criticism and assistance.

– Team RADiCAL 

As we prepare for the transition to Gen3, we have to make a few changes. The core of our platform will continue to run, but until Gen3 is out we’re limiting free-account access to Gen2 in some of our apps.

 

  • Website: Our website will continue to operate as usual, with access to your completed scenes (Projects) and community scenes (Explore). You can register as a new user (including for a free account), download FBX files and manage your account. If you hold a paid subscription, you can also upload new videos via our custom upload page for processing through Gen2.
  • Windows app: Our Windows app will continue to be available on Steam. It will operate as usual, and we will continue to maintain it.
  • Mobile apps / macOS app: To prepare for Gen3, we are suspending downloads of our iOS, Android and macOS apps from the app stores.

    If you’ve already installed these apps, you can continue to use them by accessing your completed scenes and community scenes. However, you will no longer be able to upload new videos into our cloud from the mobile apps. The mobile apps won’t prevent you from recording videos or initiating uploads, but the upload will not reach our servers (and you will receive an email to confirm this). We will no longer update or maintain these legacy apps.

    If you’re a paying subscriber, you can continue to upload new videos for processing through Gen2 via our custom upload page on our website.

 

We hope you’ll bear with us. The transition to Gen3 is a major undertaking. We’re excited about it, and we hope you are, too.

 

– Team RADiCAL  

Over the last nine months, countless users have asked us for a way to upload videos into our cloud-powered AI that they recorded independently, i.e. videos that were not recorded through the RADiCAL mobile apps.

We’ve heard you loud and clear.

Starting today, you’ll be able to upload your own videos through our custom uploader on our website.

You can get to the custom uploader in two ways:

 

  • Web: from the members area of our website, hit the UPLOAD button, then hit “Custom Uploader” in the pop-up dialogue
  • Desktop apps: from our desktop apps, hit the NEW SCENE button, hit “USE EXISTING VIDEO” in the dialogue, then hit NEXT (this opens your browser)

 

You can use the custom uploader with any Creator, Producer or Professional subscription, monthly or annual.

This is a work in progress, and we’ll continually improve on what we do. Please get in touch if you have any questions: support@getrad.co

 

*     *     *

 

Team RADiCAL

The response from our users has been absolutely overwhelming, and we want to thank all of you who showed interest.

This also means that we will not be able to accommodate as many people as we would have liked for the beta.

We have started assigning seats to users; please bear with us as we work through all of the sign-ups.

We will be allocating seats in batches of 20-30 users.

*              *               *

Please stay tuned!

Team RADiCAL

 

We’re excited to announce that we are opening up an early version of our latest AI, Gen3, for private beta testing. 

This will be available to users on a Windows PC with an NVIDIA GPU. Please click here to be taken to the private beta registration page. 

The purpose of this beta is to help benchmark the AI’s performance across devices and GPUs, collect feedback on the UI/UX choices we have made, and to give a glimpse of gameplay possibilities. 

Please make sure you use the same email you used to register for a RADiCAL account, or you will not be able to access Gen3.

 

Hello!

For all of our Blender users, we have created a standard T-pose that allows easy retargeting of your animation to your character.

Our Gen2 T-pose is available on our download page, and when Gen3 releases, we will update the skeleton.
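
If you script your Blender workflow, the first step of retargeting is getting the RADiCAL animation and its bone names into the scene. Here is a minimal sketch using Blender’s Python API (the file path is a placeholder, and this is our own illustration rather than an official RADiCAL script):

    import bpy

    # Import the FBX exported from RADiCAL (the path is a placeholder).
    bpy.ops.import_scene.fbx(filepath="/path/to/radical_scene.fbx")

    # Newly imported objects are selected; find the armature and list its
    # bones, which is the mapping information you need for retargeting.
    armature = next(o for o in bpy.context.selected_objects if o.type == 'ARMATURE')
    for bone in armature.data.bones:
        print(bone.name)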

Please also refer to the tutorial video by CG Geek here for step-by-step instructions on how to animate characters in Blender using RADiCAL.

*         *         *

Enjoy!

– Team RADiCAL

We’re excited to announce that the very talented CG Geek has released an easy-to-follow tutorial on how to use RADiCAL to animate characters in Blender.

In this video, Steve takes you through the entire process of animating characters in Blender using RADiCAL FBX output, from start to finish. He starts by filming the scene, then walks through mapping the character, retargeting the animation and rendering the result.

Steve’s video (and many more great ones) can be seen here on his YouTube channel.

Steve’s video comes on the heels of massive improvements in our product. Our impending Gen3 release will include a much-improved skeleton with plugins for Unity, Unreal and Blender.

For our Blender users, we are now also releasing a standard T-pose rig that will make the retargeting process easier. You can currently download the T-pose file here, and it will soon be available through our download page as well. Steve describes how to use the T-pose rig in his video.

Also, as a reminder, if you’d like to share your animated characters with us, we’ll post and promote your content on our social media channels.  Be in touch!

 

*          *          *

Enjoy!

– Team RADiCAL

We’ve heard your feedback loud and clear.  We have rolled out several changes designed to help make your life easier!

  • In version 2.5.10, we’re bringing to your iPhones and iPads some of the critical features you’re used to seeing in our desktop apps and the website:

    – Scene management: you can now edit the name, adjust privacy settings and add a description for your own scenes. You can also delete those scenes you don’t like.

    – New character meshes: we’ve added more models, improved some of the existing ones with better shading, and introduced in-scene credits for the creators of those characters.

  • We’ve also released a bunch of bug fixes and stability improvements.

The iOS update is available on the App Store.

Get the latest iOS app here.

If there’s something you wanted to see on the list we didn’t get to, feel free to email us any suggestions. We’re all ears.

Matteo Giuberti has joined our AI team after five years as a senior research engineer at Xsens, one of the world’s leading providers of motion capture solutions. Matteo brings to our AI team strong domain expertise around human motion and biomechanics. Matteo holds a PhD from Università degli Studi di Parma.

Roberto Capobianco started working with our AI team in 2018 and has now transitioned to looking after the overall implementation of Gen3 across all hardware and software environments. Roberto holds a PhD in robotics and AI from Sapienza Università di Roma, where he also continues to teach as a lecturer.

Francesco Riccio has joined our AI team to strengthen our development and engineering efforts along the entire deep learning stack. Francesco holds a PhD from Sapienza Università di Roma, where he is also a postdoctoral researcher focusing on robotics and deep learning.

Anand Ravipati has joined the team as a product manager looking after user engagement, analytics, customer inquiries, marketing and industry relations. Anand is an experienced angel investor and has advised many startups.  Anand holds a BA from Penn State University and an MD from Ross University.

The new team! (Clockwise from top left) Matteo Giuberti, Roberto Capobianco, Anand Ravipati, Francesco Riccio