We’ve released plugins to make the retargeting process faster and easier for Blender and Unreal users.

After months of innovation and tireless work by our team, the RADiCAL Studio is finally here. It’s available through Steam here (you’ll need to sign up for a Studio product to log in). Before we get into the details, here are some quick pointers to materials we’re covering elsewhere:

 

  • Early bird pricing: We’re offering early bird pricing (>50% off) for a short time: details here.
  • Free trial: Studio comes with a free trial (no credit card required): details here.
  • Known issues: This release comes with a few known constraints and issues: details here.

 

Studio brings our AI to your machine: 

 

With Studio, you are untethered from the cloud and have access to unlimited motion capture in your own home, studio or event space. At its heart, the RADiCAL Studio brings our AI to your local machine. We call it step-by-step (SBS) processing. SBS processing mimics the way we sequentially process your videos through the cloud: record the video first, run the AI later.

 

The big difference? Since this is your own workstation, we don’t have to meter your usage.  Your usage is only limited by your own time.  🙂  

 

Real-time results (beta):

 

With Studio, we are also revealing our real-time functionality. The real-time feature is the product of multi-disciplinary efforts, spanning not just deep learning but also GPU optimization and a lot of great software engineering. It remains in beta because we still need to test and stabilize the AI’s output across a wider range of hardware configurations.

Our final objective, even with real time processing, is to achieve the fidelity, smoothness and range of motion Gen3 is capable of through the cloud.

At this time, we recommend running the real-time feature on Windows machines with a strong NVIDIA graphics card (at least a 1080, 1080 Ti, 2060, 2070, 2080, or 2080 Ti), although we’ve seen it do reasonably well even on smaller cards (the 1060 and 1070).

 

Live stream into Unreal Engine (Live Link):

 

With the right subscription, you can also use our real-time feature to stream your motion directly into a scene in Unreal Engine 4 (Unity, iClone, and Blender coming soon). For more about using UE4 LiveLink, go here (Change Log) and here (FAQs). 

Tip: if you’re a student or an indie, you may qualify for discounted pricing on LiveLink access. Please get in touch.

 

Exporting your animation data:

 

Whether you use step-by-step (SBS) or real-time processing, you can export your animation data in FBX format through our website here. You can read more about how that works here (FAQs).

Tip: if you’re using the UE4 Live Link for real time streaming, your animation data can also be saved directly in UE4. 

 

Gen3.1 – new animation rig:

 

Studio also comes with Gen3.1, which features a new and improved animation rig. The 3.1 skeleton conforms more closely to industry standards, making it much easier to use across modern and legacy workflows in Unity, Unreal, Blender, iClone and others. See the details in this change log post.

 

 

*     *     *

 

As always, thanks for being a part of our community, and don’t hesitate to reach out; we are always looking for constructive criticism and feedback.

– Team RADiCAL

Yesterday, September 4, 2020, we released the latest update to our AI: Gen3.1. With the release of Gen3, we signaled that all areas of our product were going to improve, including our FBX output. As the versioning suggests, while 3.1 is an upgrade to 3.0, it doesn’t imply fundamental, visible changes in our output. Rather, it’s the structure of our animation data that has improved.

 

Key benefits of Gen3.1: 

Gen3.1 features an updated skeleton with more joints and a new naming convention that conforms more closely to industry standards. As a consequence, 3.1 improves the ingestion of our animation data across software environments, whether in FBX format or as raw animation data.

In short order, we will also be releasing plugins that make the retargeting process for Blender, Unreal, and Unity even easier.

 

Transitioning from 3.0 to 3.1: 

We understand many of our users have developed pipelines in reliance on the RADiCAL skeleton having a particular structure.

To help ease the transition, we’ve made a new 3.1 T-pose available in the download section. You can also see a diagram of the new skeleton and the naming convention below. As you align your pipelines with 3.1, you can count on this: we don’t expect to make structural changes to our skeleton going forward. 3.1 will be our standard for years to come.

 

New RADiCAL Samples: 

New RADiCAL Samples with free FBX downloads can be found here.

 

How to export legacy animation scenes:

If you need to export FBX animation data for legacy Gen2 or Gen3.0 results, you can do so through our website for one month, i.e. from today through early October 2020. The user experience is the same: simply hit the FBX download button on the completed scene page. After that transition period, from October 2020 onwards, exporting Gen2 or Gen3.0 results to FBX will require help from the RADiCAL support team, so you should expect it to take more time. We therefore recommend you start exporting now.

 

Special note for Blender users: 

For Blender users: please select Automatic Bone Orientation under the armature settings when you import the FBX.

Some users have reached out because they’re experiencing retargeting issues with our FBX in Blender; specifically, they’re seeing some abnormal rotations in the skeleton.

We’re aware of the problem and have identified the root cause.  We’re now working on a temporary solution.  Please bear with us for the next few days while we generate a short video tutorial for the temporary fix.

You should also know that, hopefully within just a few weeks, we will release an add-on that will make retargeting a drag-and-drop process in Blender.

If you have specific thoughts on Blender, or these specific issues, feel free to drop us an email or book a meeting with our team.

As always, thank you!

Team RADiCAL

 

*   *   *

 

As always, feel free to drop us an email or book a meeting with our team.

Many of you have asked for the ability to process longer videos. We’ve heard you loud and clear.

We’ve increased the maximum duration, from 30 seconds to 15 minutes, for any one video you upload. We’re looking into raising that limit further, but we have more work to do on that topic. To support longer videos, we’re also making two related, and important, changes:

  • Playtime add-ons are now automatic: if you go over your remaining playtime credits, we’ll automatically add enough playtime to your account to get the job done and your account will be charged for the playtime add-on. This will be completely seamless. To make sure you have complete visibility into your charges, we’ve added your up-to-date playtime budget summary to the upload page, so you always know what to expect, given your workloads.
  • FBX on demand: converting raw animation data to the FBX format requires processing power. Rather than delaying your visual results, we’ll deliver your visual results without the FBX. You can then decide to request the FBX for the results you like. This means that you’ll get your visual results faster, but you’ll wait a bit longer for your FBX files.
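The automatic top-up logic above can be sketched in a few lines. This is a hypothetical illustration only: the function name, the assumption that add-ons are billed in whole minutes, and the minute-based units are ours, not RADiCAL’s actual billing API.

```python
# Hypothetical sketch of the automatic playtime top-up described above.
# Whole-minute rounding and the minute units are assumptions for illustration.
import math

def playtime_topup(remaining_minutes: float, video_minutes: float) -> float:
    """Return the add-on playtime (in minutes) needed to process a video.

    If the video fits within the remaining credits, nothing extra is charged;
    otherwise just enough playtime is added, rounded up to a whole minute.
    """
    shortfall = video_minutes - remaining_minutes
    if shortfall <= 0:
        return 0.0
    return float(math.ceil(shortfall))

# A 12-minute upload against 5 minutes of remaining credit needs a 7-minute add-on.
print(playtime_topup(5, 12))   # 7.0
print(playtime_topup(30, 12))  # 0.0
```

The point of the sketch is the “seamless” part: the top-up is computed from the shortfall alone, so the upload never blocks on a manual purchase.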

 

Always consult our terms and conditions for details. If you’re unsure about anything, drop us an email or book a video call with our team.

As always, thanks for being part of our community!

– Team RADiCAL

We detected and fixed an issue today that, between June 13 and June 15, caused our web visualizer and notification systems to malfunction, such that many users were not told when their results were ready (even though they were).

We’re sorry if this has affected you. It has now been fixed, all of your scenes have been processed, and results can be viewed. If you were affected by these issues, please drop us a note at support@getrad.co so we can try to make up for it.

Thanks for your patience with us, as we roll out Gen3 across the entire platform.

Best –

Team RADiCAL

Our latest AI: Gen3 

 

Today, we are launching Gen3, the latest generation of our AI for our community of creators and developers. 

 

Gen3 has been a labor of love, skill and persistence. We’ve been working on it for more than a year, because we knew that the Gen3 architecture would lay the foundation for a revolution in 3D motion tracking science. It is difficult to overstate how excited we are. Not only does Gen3 provide far better output, it does so at significantly higher throughput. So much so that it’s now capable of running in real time.

 

With all those improvements now available, we’ll be releasing a range of new products, both in the cloud and for local (on-prem) use.

 

RADiCAL consists of a small team of 3D graphics and AI enthusiasts.  We hope you enjoy the fruit of our labor as much as we do.  Below we have summarized just some of the highlights we want you to know about. 

 

Key features: 

 

RADiCAL is optimized for content creators, with the following priorities guiding everything we do: 

 

  1. Human aesthetics: because of our holistic approach to motion and deep learning, we’ve massively enhanced the human, organically expressive look and feel of our output, with smooth results that substantially reduce jitter and snapping; 
  2. Fidelity: Gen3 was designed to tease out much more detail in human motion than previous versions; 
  3. Speed: we want to ensure that our technology is capable of running in real time across most hardware and software environments. 

 

Going forward, Gen3 will support both CORE (our cloud-based motion capture technology) and new real time products (including an SDK) that we will announce and release shortly.  

 

While Gen3 has moved in massive leaps toward realizing those priorities, we also know that we have more work to do. More about that below. 

 

About our science:

 

There’s a lot of secret sauce in our science. But here’s what we can say: we’ve developed our AI to understand human motion holistically. Rather than creating a sequence of poses to create the impression of motion, we interpret the actor’s input through an understanding of human motion and biomechanics in three-dimensional space over time. In other words, our technology thinks in four dimensions: x, y, z and time.

 

We have more work to do:

 

As proud as we are of our progress, we want to do better in a few areas. One of our top priorities for the next few weeks and months is to better anchor our animations to the floor and reduce certain oscillations. 

 

We expect to roll out a first set of improvements within weeks, which should take us much closer to where we want to be in terms of reducing foot sliding and oscillations.

 

But we expect more work to be necessary after that. Those additional improvements will come with the next large release, in version 3.1 or 3.2.  We’ve already started to work on those improvements and we’re genuinely excited about making the results of our research public soon.

 

In the meantime, you can substantially mitigate these effects by following the guidance below.

 

How to get the best results:

 

To get the most out of our technology, you should: 

 

  • Static, stable camera: place your camera on a flat, stable surface (or a tripod, of course). Don’t adjust the zoom while recording. Don’t cut between different camera angles. 
  • Single actor: record a single person at a time; 
  • T-pose calibration: ensure the actor strikes a T-pose within the first five seconds, with the entire body clearly visible at a frontal angle to the camera; and 
  • Aspect ratio: record, use or upload videos with aspect ratios no wider than 4:3. That’s because our AI only processes videos in a 4:3 ratio. While you can upload videos with wider ratios (we’ll crop them automatically), you should keep your actor inside the 4:3 ratio to ensure they don’t get cropped out.
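The aspect-ratio guidance above is simple arithmetic. The sketch below assumes a centered crop, which the post doesn’t actually specify, so treat the function and its margins as an illustration rather than a description of RADiCAL’s pipeline.

```python
# Illustration of cropping a wider video down to 4:3.
# A centered crop is an assumption; the post only says wider videos get cropped.

def crop_to_4_3(width: int, height: int) -> tuple:
    """Return (crop_width, left_margin, right_margin) for a 4:3 center crop.

    Pixels outside the returned window would be discarded, so the actor
    should stay inside it while recording.
    """
    target_width = height * 4 // 3
    if target_width >= width:          # already 4:3 or narrower: nothing cropped
        return (width, 0, 0)
    margin = (width - target_width) // 2
    return (target_width, margin, width - target_width - margin)

# A 1920x1080 (16:9) recording keeps a 1440-pixel-wide center window,
# trimming 240 pixels from each side.
print(crop_to_4_3(1920, 1080))  # (1440, 240, 240)
```

In other words, on a 16:9 camera a quarter of the frame width is lost, so framing the actor well inside the center of the shot is the safe choice.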

 

Play nicely, and you’ll get the best results!

 

*     *     *

 

As ever, we’re forever grateful for the support of the RADiCAL community.  We’re excited about feedback, good and bad.  We’re even more excited about constructive criticism and assistance.  

– Team RADiCAL 

As we prepare for the transition to Gen3, we have to make a few changes. The core of our platform will continue to run, but until Gen3 is out we’re limiting free-account access to Gen2 in some of our apps.

 

  • Website: Our website will continue to operate as usual, with access to your completed scenes (Projects) and community scenes (Explore). You can register as a new user (including for a free account), download FBX files, and manage your account. If you hold a paid subscription, you can also upload new videos via our custom upload page for processing through Gen2.
  • Windows app: Our windows app will continue to be available on Steam. It will operate as usual, and we will continue to maintain it.
  • Mobile apps / MacOS app: To prepare for Gen3, we are suspending downloads of our iOS, Android and MacOS apps from the app stores.

    If you’ve already installed these apps, you can continue to use them by accessing your completed scenes and community scenes. However, you will no longer be able to upload new videos into our cloud from the mobile apps. The mobile apps won’t prevent you from recording videos or initiating uploads, but the upload will not reach our servers (and you will receive an email to confirm this). We will no longer update or maintain these legacy apps.

    If you’re a paying subscriber, you can continue to upload new videos for processing through Gen2 via our custom upload page on our website.

 

We hope you bear with us.  The transition to Gen3 is a momentous task.  We’re excited about it, and we hope you are, too. 

 

– Team RADiCAL  

Over the last 9 months, countless users have asked us to release a feature that would allow them to upload videos into our cloud-powered AI that were recorded independently by them, i.e. videos that were not recorded through the RADiCAL mobile apps.

We’ve heard you loud and clear.

Starting today, you’ll be able to upload your own videos through our custom uploader on our website.

You can get to the custom uploader in two ways:

 

  • Web: from the members area of our website, hit the UPLOAD button, then hit “Custom Uploader” in the pop-up dialogue.
  • Desktop apps: from our desktop apps, hit the NEW SCENE button, hit “USE EXISTING VIDEO” in the dialogue, then hit NEXT (opens up the browser).

 

You can use the custom uploader with any Creator, Producer or Professional subscription, monthly or annual.

This is a work in progress, and we’ll continually improve on what we do. Please get in touch if you have any questions: support@getrad.co

 

*     *     *

 

Team RADiCAL

The response from our users has been absolutely overwhelming and we wanted to thank all of you who showed interest.

This also means that we will not be able to accommodate as many people as we would have liked for the beta.

We have started assigning seats to users; please bear with us as we work through all of the sign-ups.

We will be allocating seats in batches of 20-30 users.

*     *     *

Please stay tuned!

Team RADiCAL