The Jetson Orin NX 16 GB module is unmatched in performance and efficiency for small form-factor, low-power robots and autonomous machines, making it ideal for products like drones and handheld devices. The module supports advanced applications in manufacturing, logistics, retail, agriculture, healthcare, and life sciences—all in a truly compact, power-efficient package.
It is the smallest Jetson form factor, delivering up to 100 TOPS of AI performance with power configurable between 10 W and 25 W. It gives developers 3x the performance of the NVIDIA Jetson AGX Xavier and 5x that of the NVIDIA Jetson Xavier NX.
The system-on-module supports multiple concurrent AI application pipelines with an NVIDIA Ampere architecture GPU, next-generation deep learning and vision accelerators, high-speed I/O, and high memory bandwidth. You will be able to develop solutions using your largest and most complex AI models in natural language understanding, 3D perception, and multi-sensor fusion.
Showcasing this leap in performance, NVIDIA ran several computer vision benchmarks using NVIDIA JetPack 5.1. Testing covered dense INT8 and FP16 pretrained models from NGC. For comparison, the same models were also run on Jetson Xavier NX.
Following is the complete list of benchmarks:
- NVIDIA PeopleNet v2.5 for the highest accuracy in people detection.
- NVIDIA ActionRecognitionNet for 2D and 3D models.
- NVIDIA LPRNet for license plate recognition.
- NVIDIA DashCamNet for object detection and labeling.
- NVIDIA BodyPoseNet for multi-person human pose estimation.
Taking the geomean of these benchmarks, Jetson Orin NX shows a 2.1x performance increase over Jetson Xavier NX. With future software optimizations, this is expected to approach 3.1x for dense benchmarks. Other Jetson devices have improved performance by 1.5x since their first supporting software release, and a similar gain is anticipated for the Jetson Orin NX 16 GB.
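The article does not publish the per-model numbers, but the geomean (geometric mean) summary used above is straightforward to reproduce. A minimal sketch in Python; the per-model speedup values below are hypothetical placeholders for illustration, not the measured benchmark results:

```python
import math

def geomean(values):
    """Geometric mean: the n-th root of the product of n values."""
    if not values:
        raise ValueError("geomean requires at least one value")
    return math.prod(values) ** (1.0 / len(values))

# Hypothetical per-model speedups of Jetson Orin NX over Jetson Xavier NX
# (placeholders only; the source does not list the individual results).
speedups = {
    "PeopleNet v2.5": 2.3,
    "ActionRecognitionNet": 1.9,
    "LPRNet": 2.2,
    "DashCamNet": 2.0,
    "BodyPoseNet": 2.1,
}

overall = geomean(list(speedups.values()))
print(f"Geomean speedup: {overall:.2f}x")
```

The geomean is preferred over the arithmetic mean for summarizing speedup ratios because it is symmetric under inversion: a 2x gain and a 0.5x loss average out to 1x, as they should.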
Jetson Orin NX also brings support for sparsity, which will enable even greater performance. With sparsity, you can take advantage of the fine-grained structured sparsity in deep learning networks to increase the throughput for Tensor Core operations.
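The fine-grained structured sparsity accelerated by Ampere Tensor Cores is the 2:4 pattern: at most two of every four consecutive weights are non-zero. A minimal sketch of magnitude-based 2:4 pruning in plain Python (a simplified illustration of the pattern, not NVIDIA's actual pruning or TensorRT tooling):

```python
def prune_2_4(weights):
    """Apply 2:4 structured sparsity: in every group of 4 consecutive
    weights, keep the 2 with the largest magnitude and zero the rest."""
    if len(weights) % 4 != 0:
        raise ValueError("weight count must be a multiple of 4")
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # Indices of the two largest-magnitude weights in this group
        keep = sorted(range(4), key=lambda j: abs(group[j]), reverse=True)[:2]
        pruned.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return pruned

print(prune_2_4([3.0, -1.0, 0.5, 2.0]))  # -> [3.0, 0.0, 0.0, 2.0]
```

In practice, networks are pruned and then fine-tuned to recover accuracy, and the runtime exploits the regular 2:4 pattern so Tensor Cores can skip the zeroed weights; this sketch only shows the pruning pattern itself.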
All Jetson Orin modules run the world-standard NVIDIA AI software stack and are backed by an ecosystem of services and products, so your road to market has never been faster. The Orin NX is now available on the Blox platform.
Source: https://developer.nvidia.com/