# NVIDIA Jetson Orin Nano Student Guide
Author: Dr. Kaikai Liu, Ph.D.
Position: Associate Professor, Computer Engineering
Institution: San Jose State University
Contact: kaikai.liu@sjsu.edu
## Overview
This guide introduces the NVIDIA Jetson Orin Nano, explains how to install and use our custom Jetson utility script `sjsujetsontool`, and provides step-by-step instructions for development tasks such as launching servers, running AI models, setting up Jupyter, and managing devices.
## What Is the NVIDIA Jetson Orin Nano?
The Jetson Orin Nano is a powerful, energy-efficient AI edge computing board from NVIDIA. Key features:

- 6-core ARM Cortex CPU
- Ampere-architecture GPU with up to 1024 CUDA cores
- Ideal for robotics, vision, AI model serving, and cyber experiments
- Supports the JetPack SDK with Ubuntu, CUDA, cuDNN, and TensorRT
## Connecting to Jetson via the `.local` Hostname

Jetson devices with mDNS enabled can be accessed using the `.local` hostname from macOS or Linux:

```bash
ssh username@jetson-hostname.local
```

For example:

```bash
ssh sjsujetson@sjsujetson-01.local
```

If this doesn't work, make sure `avahi-daemon` is running on the Jetson (e.g., check with `systemctl status avahi-daemon`) and that your network supports mDNS.

If you want to enable X11 forwarding, use:

```bash
% ssh -X sjsujetson@sjsujetson-01.local
sjsujetson@sjsujetson-01:~$ xclock  # test X11
```
## Mesh VPN Connection

All Jetson devices are connected through an overlay Layer 3 (L3) mesh VPN network, allowing them to communicate with each other using static IP addresses. To access another Jetson device in the mesh, simply use its assigned IP address. The address format is `192.168.100.(10 + <number>)`, so device 4, for example, is reached with:

```bash
ssh [username]@192.168.100.14
```
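The addressing scheme above can be sketched in a few lines of Python (a minimal illustration; the valid device-number range is an assumption):

```python
def mesh_ip(device_number: int) -> str:
    """Map a Jetson device number to its mesh VPN address: 192.168.100.(10 + number)."""
    if not 0 <= device_number <= 245:  # keep the final octet within 10-255
        raise ValueError("device number out of range")
    return f"192.168.100.{10 + device_number}"

print(mesh_ip(4))  # device 4 -> 192.168.100.14
```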
## Installing `sjsujetsontool`

A command-line tool for Jetson-based workflows: container management, model serving, AI apps, and more.
### One-line install (no sudo required)

```bash
curl -fsSL https://raw.githubusercontent.com/lkk688/edgeAI/main/jetson/install_sjsujetsontool.sh | bash
```

After the script is installed, run `sjsujetsontool update` to update the local container and script. The container update takes a long time.
```
sjsujetson@sjsujetson-01:~$ curl -fsSL https://raw.githubusercontent.com/lkk688/edgeAI/main/jetson/install_sjsujetsontool.sh | bash
Downloading sjsujetsontool from GitHub...
Downloaded script.
Installing to /home/sjsujetson/.local/bin/sjsujetsontool
Installed successfully. You can now run: sjsujetsontool
sjsujetson@sjsujetson-01:~$ sjsujetsontool update
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
The 'update' command has been split into two separate commands:
  - 'update-container': Updates only the Docker container
  - 'update-script': Updates only this script

Running both updates sequentially...

Running container update...
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
Checking Docker image update...
Pulling latest image (this may take a while)...
latest: Pulling from cmpelkk/jetson-llm
....
Pull complete.
New version detected. Updating local image...
Local container updated from Docker Hub.

Running script update...
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
Updating sjsujetsontool script...
Downloading latest script...
#################################################################################################### 100.0%
Script downloaded. Replacing current script...
Script updated. Please rerun your command.
```
Another option is simply to run the update command twice; an older script may fail with a syntax error after replacing itself, and the second run uses the new script:

```
student@sjsujetson-02:~$ hostname
sjsujetson-02
student@sjsujetson-02:~$ sjsujetsontool update
Updating sjsujetsontool from GitHub...
Backing up current script to /home/student/.local/bin/sjsujetsontool.bak
Update complete. Backup saved at /home/student/.local/bin/sjsujetsontool.bak
/home/student/.local/bin/sjsujetsontool: line 228: syntax error near unexpected token `('
/home/student/.local/bin/sjsujetsontool: line 228: ` echo "$name not running (port $port closed)"'
student@sjsujetson-02:~$ sjsujetsontool update
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
The 'update' command has been split into two separate commands:
  - 'update-container': Updates only the Docker container
  - 'update-script': Updates only this script

Running both updates sequentially...

Running container update...
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
Checking Docker image update...
Pulling latest image (this may take a while)...
latest: Pulling from cmpelkk/jetson-llm
Digest: sha256:8021643930669290377d9fc19741cd8c012dbfb7d5f25c7189651ec875b03a78
Status: Image is up to date for cmpelkk/jetson-llm:latest
docker.io/cmpelkk/jetson-llm:latest
Pull complete.
Local container is already up-to-date.

Running script update...
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
Updating sjsujetsontool script...
Downloading latest script...
#################################################################################################### 100.0%
Script downloaded. Replacing current script...
Script updated. Please rerun your command.
```
Verify:

```bash
sjsujetsontool list
```
You can check the script version:

```
sjsujetson@sjsujetson-01:/Developer/edgeAI$ sjsujetsontool version
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
sjsujetsontool Script Version: v0.9.0
Docker Image: jetson-llm:v1
Image ID: sha256:9868985d80e4d1d43309d72ba85b700f3ac064233fcbf58c8ec22555d85f8c2f
```
`sjsujetsontool` wraps Python apps running in a container and makes running code inside the container easy to use. `docker` without `sudo` is already set up on the Jetson device. Check the existing container images available on the Jetson:

```
sjsujetson@sjsujetson-01:~$ docker images
REPOSITORY               TAG              IMAGE ID       CREATED         SIZE
jetson-llm-v1            latest           8236678f7ef1   6 days ago      9.89GB
jetson-pytorch-v1        latest           da28af1b9eed   9 days ago      9.71GB
hello-world              latest           f1f77a0f96b7   5 months ago    5.2kB
nvcr.io/nvidia/pytorch   24.12-py3-igpu   ee796da7f569   6 months ago    9.63GB
nvcr.io/nvidia/l4t-base  r36.2.0          46b8e6a6a6a7   19 months ago   750MB
sjsujetson@sjsujetson-01:~$ sjsujetsontool shell  # enter the container
root@sjsujetson-01:/workspace#
```

If the Docker daemon is not running, start it and check its status:

```bash
sudo systemctl start docker
sudo systemctl status docker
```

The `/Developer` and `/Developer/models` folders on the Jetson host are mounted into the container at `/Developer` and `/models`, respectively.
### Hostname changes (sudo required)

```
sjsujetson@sjsujetson-01:~$ hostname
sjsujetson-01
sjsujetson@sjsujetson-01:~$ sjsujetsontool set-hostname sjsujetson-02
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
Setting hostname to: sjsujetson-02
[sudo] password for sjsujetson:
Updating /etc/hosts...
Resetting machine-id...
Writing device ID to /etc/device-id
Please reboot for changes to fully apply.
sjsujetson@sjsujetson-01:~$ sudo reboot
% ssh -X sjsujetson@sjsujetson-02.local
sjsujetson@sjsujetson-02:~$ hostname
sjsujetson-02
```
For TAs, run these additional steps:

```bash
sudo chfn -f "Student" student
sudo passwd student
```

```
sjsujetson@sjsujetson-02:/Developer/edgeAI$ sjsujetsontool force_git_pull
```

To change your own password, run `passwd`. You'll be prompted to enter your current password, then the new password twice.
### Enter the Container Shell

Run the `sjsujetsontool shell` command to enter the shell of the container:

```
sjsujetson@sjsujetson-01:/Developer/edgeAI$ sjsujetsontool shell
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
root@sjsujetson-01:/workspace# pip install transformers==4.37.0  # install the transformers package
```

Exit the container via `exit`; the container keeps running:

```
root@sjsujetson-01:/workspace# exit
exit
sjsujetson@sjsujetson-01:/Developer/edgeAI$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED      STATUS      PORTS     NAMES
c4010b14e9c0   8236678f7ef1   "/opt/nvidia/nvidia_…"   4 days ago   Up 4 days             jetson-dev
```
If you want to stop the container, use `sjsujetsontool stop`:

```
sjsujetson@sjsujetson-01:/Developer/edgeAI$ sjsujetsontool stop
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
Stopping container...
jetson-dev
sjsujetson@sjsujetson-01:/Developer/edgeAI$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
```
## Common Usage Examples

### `sjsujetsontool update`

Downloads the latest version of `sjsujetsontool` from GitHub and replaces the local version, keeping a backup.

### `sjsujetsontool list`

Displays all available commands with usage examples.

### `sjsujetsontool jupyter`

Launches JupyterLab on port 8888 from inside the Jetson's Docker container. It allows interactive Python notebooks for AI model testing, data exploration, and debugging.
```
sjsujetson@sjsujetson-01:~$ sjsujetsontool jupyter
....
To access the server, open this file in a browser:
    file:///root/.local/share/jupyter/runtime/jpserver-1-open.html
Or copy and paste one of these URLs:
    http://hostname:8888/lab?token=3bbbf2fbea22e917bdbace45cb414bbaeb52f1251163adcf
    http://127.0.0.1:8888/lab?token=3bbbf2fbea22e917bdbace45cb414bbaeb52f1251163adcf
```
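Once JupyterLab is open, a quick sanity check in a new notebook cell confirms that the GPU stack is visible from inside the container. This is a minimal sketch: it assumes the container's PyTorch build and degrades gracefully if `torch` is missing.

```python
def cuda_summary() -> str:
    # Report whether PyTorch can see the Orin Nano's GPU from inside the container.
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        return f"CUDA ok: {torch.cuda.get_device_name(0)}"
    return "CUDA not available"

print(cuda_summary())
```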
### `sjsujetsontool run <script.py>`

Runs any Python script inside the preconfigured container, ensuring all ML/AI libraries and GPU drivers are properly set up. The path of `script.py` must be accessible from the container, for example under the `/Developer` path:
```
sjsujetson@sjsujetson-01:/Developer/models$ sjsujetsontool run /Developer/edgeAI/jetson/test.py
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
Running Python script: /Developer/edgeAI/jetson/test.py
Python: 3.12.3 (main, Nov  6 2024, 18:32:19) [GCC 13.2.0]
Torch: 2.6.0a0+df5bbc09d1.nv24.12
CUDA available: True
CUDA version: Cuda compilation tools, release 12.6, V12.6.85
Transformers: 4.37.0
HuggingFace hub: Version: 0.33.2
Platform: Linux-5.15.148-tegra-aarch64-with-glibc2.39
Ollama installed: ollama version is 0.9.2
```
### `sjsujetsontool ollama`

This section introduces the integrated `ollama` command group, which allows you to manage, run, and query large language models inside a Docker container on your Jetson. `sjsujetsontool ollama <subcommand>` enables local management of and interaction with Ollama models from inside a persistent Jetson container.
Supported subcommands:

| Subcommand | Description |
|---|---|
| `serve` | Start the Ollama REST API server (port 11434) |
| `run <model>` | Run the specified model in an interactive CLI |
| `list` | List all installed Ollama models |
| `pull <model>` | Download a new model |
| `delete <model>` | Remove a model from disk |
| `status` | Check if the Ollama server is running |
| `ask` | Ask the model a prompt via the REST API |
### Commands and Usage

**1. Start the Ollama server**

```bash
sjsujetsontool ollama serve
```

Starts the Ollama REST server inside the container, listening on http://localhost:11434.

**2. Run a model in CLI mode**

Launches interactive terminal mode using the mistral model; type `/bye` (see `/?` for help) to exit.

```
$ sjsujetsontool ollama run mistral
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
Launching model 'mistral' in CLI...
pulling manifest
pulling ff82381e2bea: 100% ████████████████████ 4.1 GB
pulling 43070e2d4e53: 100% ████████████████████  11 KB
pulling 1ff5b64b61b9: 100% ████████████████████ 799 B
pulling ed11eda7790d: 100% ████████████████████  30 B
pulling 42347cd80dc8: 100% ████████████████████ 485 B
verifying sha256 digest
writing manifest
success
>>> Send a message (/? for help)
```

**3. List installed models**

Shows a table of downloaded models and their sizes.

```
$ sjsujetsontool ollama list
Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
CUDA Version: 12.6
Installed models:
NAME              ID              SIZE      MODIFIED
mistral:latest    3944fe81ec14    4.1 GB    About a minute ago
llama3.2:latest   a80c4f17acd5    2.0 GB    2 hours ago
qwen2:latest      dd314f039b9d    4.4 GB    9 days ago
llama3.2:3b       a80c4f17acd5    2.0 GB    9 days ago
```

**4. Download a new model**

Pulls the specified model into the container. Examples include phi3, mistral, llama3, and qwen:7b.

```bash
sjsujetsontool ollama pull llama3
```

**5. Delete a model**

Frees up disk space by removing the model.

```bash
sjsujetsontool ollama delete mistral
```

**6. Check server status**

Checks if the REST API is running on port 11434.

```bash
sjsujetsontool ollama status
```

**7. Ask a prompt (with auto-pull and caching)**

Uses the last used model, or you can specify one. The tool automatically pulls the model if it is not available and remembers the last used model in `.last_ollama_model` under `workspace/`.

```bash
sjsujetsontool ollama ask "What is nvidia jetson orin?"
sjsujetsontool ollama ask --model mistral "Explain transformers in simple terms."
```
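Under the hood, `ask` talks to the Ollama REST API. A minimal sketch of the kind of request body it issues (`/api/generate` on port 11434 is Ollama's documented endpoint; the model name is just an example, and the helper function is mine):

```python
import json

def build_ollama_request(model: str, prompt: str) -> str:
    # Ollama's /api/generate endpoint accepts this JSON body;
    # "stream": False requests a single JSON response instead of a stream.
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_ollama_request("mistral", "What is nvidia jetson orin?")
print(body)
# Send it with: curl http://localhost:11434/api/generate -d "$body"
```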
### Example: Simple Chat Session

Pull and run the mistral model:

```bash
sjsujetsontool ollama pull mistral
sjsujetsontool ollama run mistral
sjsujetsontool ollama ask --model mistral "Give me a Jetson-themed poem."
```
### Troubleshooting

- Port already in use: run `sudo lsof -i :11434` and kill the process if needed.
- Model not found: use `sjsujetsontool ollama pull <model>`.
### `sjsujetsontool llama`

Starts the `llama.cpp` server (a C++ GGUF LLM inference engine) on port 8000. Loads a `.gguf` model and serves an HTTP API for tokenized prompt completion.

After entering the container, you can run a locally downloaded model (the `build_cuda` folder is the CUDA build):

```
root@sjsujetson-01:/Developer/llama.cpp# llama-cli -m /models/mistral.gguf -p "Explain what is Nvidia jetson"
....
llama_perf_sampler_print: sampling time =      34.98 ms /   532 runs (    0.07 ms per token, 15210.86 tokens per second)
llama_perf_context_print:     load time =    3498.72 ms
llama_perf_context_print: prompt eval time =  2193.93 ms /    17 tokens (  129.05 ms per token,     7.75 tokens per second)
llama_perf_context_print:     eval time =   84805.65 ms /   514 runs (  164.99 ms per token,     6.06 tokens per second)
llama_perf_context_print:    total time =   92930.78 ms /   531 tokens
```

`llama-server` is a lightweight, OpenAI-API-compatible HTTP server for serving LLMs. Start a local HTTP server with the default configuration on port 8080 with `llama-server -m model.gguf --port 8080`. A basic web UI can be accessed in a browser at http://localhost:8080, and the chat completion endpoint is http://localhost:8080/v1/chat/completions.

```
root@sjsujetson-01:/Developer/llama.cpp# llama-server -m /models/mistral.gguf --port 8080
```

Send a request via curl in another terminal (on the host machine or in the container):

```bash
sjsujetson@sjsujetson-01:~$ curl http://localhost:8080/completion -d '{
  "prompt": "Explain what is Nvidia jetson?",
  "n_predict": 100
}'
```
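Since `llama-server` exposes an OpenAI-compatible API, you can also hit the chat endpoint programmatically. A sketch that builds such a request with the standard library (the URL matches the server started above; the helper name and prompt are illustrative, and the `model` field is omitted because the server serves whichever `.gguf` it was started with):

```python
import json
import urllib.request

def chat_request(prompt: str,
                 url: str = "http://localhost:8080/v1/chat/completions") -> urllib.request.Request:
    # OpenAI-style chat payload for llama-server's /v1/chat/completions endpoint.
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 100,
    }).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

req = chat_request("Explain what is Nvidia jetson?")
# resp = urllib.request.urlopen(req)  # uncomment with llama-server running
```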
### `sjsujetsontool status`

Displays:

- Docker container state
- GPU stats from `tegrastats`
- Port listening status for key services
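The port-listening check is easy to reproduce yourself. Below is a sketch of the kind of TCP probe such a status check performs (the function name is mine; the ports are the ones used elsewhere in this guide):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    # Attempt a TCP connection; success means a service is listening on that port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. Jupyter on 8888, Ollama on 11434, llama-server on 8080
for name, port in [("jupyter", 8888), ("ollama", 11434), ("llama-server", 8080)]:
    print(f"{name}: {'open' if port_open('127.0.0.1', port) else 'closed'}")
```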
### `sjsujetsontool set-hostname <name>`

Changes the device hostname, regenerates the system identity, and writes `/etc/device-id`.
### `sjsujetsontool stop`

Stops the running Docker container started by previous commands.
## Safety Guidelines

- Power supply: use a 5A USB-C adapter or the official barrel jack for stability.
- SSD cloning: change the hostname and machine-id after cloning to prevent network conflicts.
- SSH security: only install SSH keys from trusted GitHub accounts.
- Disk cleanup: remove caches and large datasets before creating system images.
- Containers: always stop containers with `sjsujetsontool stop` before unplugging.
## Ready to Learn and Build

You're now equipped to:

- Run AI models (LLaMA, Mistral, DeepSeek, etc.)
- Build and test LLM applications
- Access the Jetson remotely with SSH or VS Code
- Run real-time cyber/AI experiments on the edge!

Made by Kaikai Liu | GitHub Repo