Nvidia Jetson Nano is a developer kit consisting of an SoM (System on Module) and a reference carrier board. It is primarily aimed at embedded systems that need high processing power for machine learning, machine vision and video processing applications. Nvidia is not a new player on the embedded computing market – the company has previously released other boards for machine learning applications, namely the Jetson TK1, TX1, TX2 and one monster of embedded computing, the Jetson AGX Xavier.
A few things are distinctly different about the Jetson Nano though. First of all, there is the impressive price tag: $99. Previously, if you wanted to buy a Jetson series board, you needed to shell out at least $600 for a TX2. Secondly, Nvidia is being extra user-friendly this time, with tutorials and user manuals tailored for beginners and people with little knowledge of machine learning and embedded systems. This is especially evident to me, as someone who used the Jetson TX1 in 2017 to develop a companion robot. At that time, it was almost impossible to develop anything significant on that board without
1) the help of an embedded systems engineer
2) spending hours on Stack Overflow and NVIDIA forums, trying to narrow down the source of a problem.
It seems that things are going to be different this time; at least Nvidia is making an effort. This board signifies the company’s push to bring AI-on-the-edge to the masses, the same way Raspberry Pi did with embedded computing.
The system is built around a 128 CUDA core Maxwell GPU and a quad-core ARM Cortex-A57 CPU running at 1.43 GHz, coupled with 4 GB of LPDDR4 memory. While the Cortex-A57 CPU is quite fast and 4 GB of memory sounds generous for a $99 board, the cornerstone of the board’s machine learning acceleration capabilities is its GPU. GPUs, with their highly parallel architecture and high memory bandwidth, are much better suited for training and inference than CPUs. If the phrase “128 CUDA core Maxwell GPU” doesn’t mean much to you, you can think of CUDA cores the same way you think of CPU cores: they allow more operations to be performed simultaneously.
Nvidia claims the Jetson Nano GPU has a processing power of 472 GFLOPS (Giga Floating Point Operations per Second). While it is an interesting number, it is unfortunately more useful for comparing the Nano to other Nvidia products than to its direct competitors, since multiple other factors are involved in determining actual inference speed. A better benchmark is a direct comparison of the boards running popular machine learning models. Here’s one from NVIDIA’s website.
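The 472 GFLOPS figure can be reproduced from the spec sheet with simple arithmetic. The sketch below is my own back-of-the-envelope check, assuming the commonly cited 921.6 MHz maximum GPU clock and FP16 fused multiply-add throughput (neither number appears in this article):

```python
# Back-of-the-envelope check of NVIDIA's 472 GFLOPS claim.
# Assumptions: the Nano's Maxwell GPU boosts to 921.6 MHz, each
# CUDA core retires one fused multiply-add (2 FLOPs) per cycle,
# and FP16 packing doubles that throughput.
cuda_cores = 128
clock_hz = 921.6e6                 # assumed maximum GPU clock
flops_per_core_per_cycle = 2 * 2   # FMA (2 FLOPs) x 2 for FP16

peak_flops = cuda_cores * clock_hz * flops_per_core_per_cycle
print(f"{peak_flops / 1e9:.1f} GFLOPS")  # ~471.9 GFLOPS
```

Note that this is the FP16 peak; the FP32 figure would be half of that, which is why it matters which precision a benchmark uses.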
Jetson Nano is the blue bar, Raspberry Pi 3 Model B is the orange bar, Raspberry Pi 3 + Intel Neural Compute Stick 2 is the red bar and the Google Edge TPU Coral board is the yellow bar.
As we can see, the Jetson Nano clearly has the edge over the Raspberry Pi 3 + Intel Neural Compute Stick 2. As for the comparison with the Coral board, it’s too early to say; many benchmarks for it are not yet available.
Jetson Nano is NVIDIA’s take on an “AI embedded platform for the masses”. It has all the characteristics and the price tag necessary to achieve success. Time will tell how it performs against its direct competitors.
Hardware overview follow-up
There are a couple of things that need to be emphasized:
- Although no fan is included, you really want to consider buying one separately, since the board gets noticeably hot during operation, especially under load.
- Support for WiFi dongles is currently lacking. I tested three different ones, and two of them didn’t have drivers for the Jetson Nano’s architecture.
- Officially, HDMI-to-VGA adapters are not supported, although some models do work. Since there’s no list of adapters known to work, it is advisable to use a screen that supports HDMI input.
- There are four ways to power the board: Micro USB (5V, 2A), DC barrel jack (5V, 4A), GPIO header and PoE (Power over Ethernet). There are jumpers on the board to switch it from the default Micro USB power mode to the DC barrel jack or PoE.
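Related to the power options above: on a flashed board, JetPack ships the `nvpmodel` tool for selecting between the 5 W and 10 W power profiles. A sketch of the usual commands follows; the mode numbers are my assumption based on the default JetPack configuration, so verify them with the query command on your own unit:

```shell
# Query the current power mode
sudo nvpmodel -q

# Switch to the full 10 W profile (mode 0, a.k.a. MAXN) --
# best paired with the DC barrel jack supply
sudo nvpmodel -m 0

# Switch to the 5 W profile (mode 1) when running from Micro USB
sudo nvpmodel -m 1

# Pin clocks to their maximum for the selected mode
sudo jetson_clocks
```

Running the 10 W profile from a 2 A Micro USB supply risks brownouts under load, which is one more reason to prefer the barrel jack for heavy workloads.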