Do you have experience optimizing custom neural network models for fast inference using the latest TensorRT plugins and cuDNN? Then we'd love to talk to you – please head over to our careers page and get in touch!