Our latest AI: Gen3
Today, we are launching Gen3, the latest generation of our AI for our community of creators and developers.
Gen3 has been a labor of love, skill and persistence. We’ve been working on it for more than a year, because we knew the Gen3 architecture would lay the foundation for a revolution in 3D motion tracking. It is difficult to overstate how excited we are. Not only does Gen3 deliver far better output, it does so at significantly higher throughput. So much so that it can now run in real time.
With all those improvements now available, we’ll be releasing a range of new products, both in the cloud and for local (on-prem) use.
RADiCAL is a small team of 3D graphics and AI enthusiasts. We hope you enjoy the fruits of our labor as much as we do. Below, we’ve summarized just some of the highlights we want you to know about.
RADiCAL is optimized for content creators, with the following priorities guiding everything we do:
- Human aesthetics: thanks to our holistic, deep-learning-based approach to motion, we’ve massively enhanced the organic, expressive look and feel of our output, with smooth results that substantially reduce jitter and snapping;
- Fidelity: Gen3 was designed to tease out much more detail in human motion than previous versions;
- Speed: we want to ensure that our technology is capable of running in real time across most hardware and software environments.
Going forward, Gen3 will support both CORE (our cloud-based motion capture technology) and new real-time products (including an SDK) that we will announce and release shortly.
While Gen3 has moved in massive leaps toward realizing those priorities, we also know that we have more work to do. More about that below.
About our science:
There’s a lot of secret sauce in our science. But here’s what we can say: we’ve developed our AI to understand human motion holistically. Rather than stringing together a sequence of poses to give the impression of motion, we interpret the actor’s input through an understanding of human motion and biomechanics in three-dimensional space over time. In other words, our technology thinks in four dimensions: x, y, z and time.
We have more work to do:
As proud as we are of our progress, we want to do better in a few areas. One of our top priorities for the next few weeks and months is to better anchor our animations to the floor and reduce certain oscillations.
We expect to roll out a first set of improvements within weeks, which should take us much closer to where we want to be in terms of reducing foot sliding and oscillations.
But we expect more work to be necessary after that. Those additional improvements will come with the next large release, in version 3.1 or 3.2. We’ve already started to work on those improvements and we’re genuinely excited about making the results of our research public soon.
In the meantime, you can substantially mitigate these effects by following the guidance below.
How to get the best results:
To get the most out of our technology, follow these guidelines:
- Static, stable camera: place your camera on a flat, stable surface (or a tripod, of course). Don’t adjust the zoom while recording. Don’t cut between different camera angles.
- Single actor: record a single person at a time;
- T-pose calibration: ensure the actor strikes a T-pose within the first five seconds, with the entire body clearly visible and facing the camera; and
- Aspect ratio: record, use or upload videos with aspect ratios no wider than 4:3. That’s because our AI only processes video in a 4:3 ratio. While you can upload wider videos (we’ll crop them to 4:3 automatically), keep your actor inside the central 4:3 region so they don’t get cropped out.
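To see how much of a wider frame survives the automatic 4:3 crop, you can compute the centered crop region yourself. This is a minimal sketch under our own assumptions (a centered crop; the function name is ours, not part of any RADiCAL API):

```python
def crop_to_4_3(width, height):
    """Return (x, y, w, h) of a centered 4:3 crop inside a frame.

    If the frame is wider than 4:3, the sides are trimmed;
    if it is taller, the top and bottom are trimmed.
    """
    if width * 3 >= height * 4:      # frame is 4:3 or wider
        w = height * 4 // 3          # keep full height, trim sides
        h = height
    else:                            # frame is taller than 4:3
        w = width                    # keep full width, trim top/bottom
        h = width * 3 // 4
    x = (width - w) // 2
    y = (height - h) // 2
    return x, y, w, h

# A 1920x1080 (16:9) recording keeps its full height but loses
# 240 pixels on each side:
print(crop_to_4_3(1920, 1080))  # (240, 0, 1440, 1080)
```

In other words, on a 16:9 camera your actor should stay within the middle three-quarters of the frame.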
Play nicely, and you’ll get the best results!
* * *
As ever, we’re grateful for the support of the RADiCAL community. We welcome feedback, good and bad, and we’re even more excited about constructive criticism and assistance.
– Team RADiCAL