
🧠 NVIDIA Jetson Orin Nano Student Guide

Author: Dr. Kaikai Liu, Ph.D.
Position: Associate Professor, Computer Engineering
Institution: San Jose State University
Contact: kaikai.liu@sjsu.edu


πŸ“Œ Overview

This guide introduces the NVIDIA Jetson Orin Nano, explains how to install and use our custom Jetson utility script sjsujetsontool, and provides step-by-step instructions for development tasks such as launching servers, running AI models, setting up Jupyter, and managing devices.


🧠 What Is NVIDIA Jetson Orin Nano?

The Jetson Orin Nano is a powerful, energy-efficient AI edge computing board by NVIDIA. Key features:

  • βœ… 6-core ARM Cortex CPU
  • βœ… Ampere GPU with up to 1024 CUDA cores
  • βœ… Ideal for robotics, vision, AI model serving, and cyber experiments
  • βœ… Supports JetPack SDK with Ubuntu, CUDA, cuDNN, TensorRT

🌐 Connecting to Jetson via .local Hostname

Jetson devices with mDNS enabled can be accessed using the .local hostname from macOS or Linux:

ssh username@jetson-hostname.local

For example:

ssh sjsujetson@sjsujetson-01.local

If this doesn't work, make sure avahi-daemon is running on Jetson and that your network supports mDNS.

If you want to enable X11 forwarding, use:

% ssh -X sjsujetson@sjsujetson-01.local
sjsujetson@sjsujetson-01:~$ xclock #test x11

🌐 Mesh VPN Connection

All Jetson devices are connected through an overlay Layer 3 (L3) mesh VPN network, allowing them to communicate with each other using static IP addresses. To access another Jetson device in the mesh, simply use its assigned IP address. The IP address format is: 192.168.100.(10 + <number>)

Here, <number> corresponds to the numeric suffix xx of the device’s hostname (sjsujetson-xx). For example, to SSH into the device sjsujetson-04, use:

ssh [username]@192.168.100.14

Replace [username] with your actual username.
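The suffix-to-IP mapping can be computed mechanically. Here is a small Python sketch (the mesh_ip helper is ours for illustration, not part of sjsujetsontool):

```python
def mesh_ip(hostname: str) -> str:
    """Map a hostname like 'sjsujetson-04' to its mesh VPN address.

    The mesh uses the scheme 192.168.100.(10 + <numeric suffix>).
    """
    suffix = int(hostname.rsplit("-", 1)[1])  # "sjsujetson-04" -> 4
    return f"192.168.100.{10 + suffix}"

print(mesh_ip("sjsujetson-04"))  # 192.168.100.14
```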


βš™οΈ Installing sjsujetsontool

A command-line tool for Jetson-based workflows: container management, model serving, AI apps, and more.

βœ… One-line install (no sudo required)

curl -fsSL https://raw.githubusercontent.com/lkk688/edgeAI/main/jetson/install_sjsujetsontool.sh | bash

After the script installation, run sjsujetsontool update to update the local container and script. The container update can take a long time.

sjsujetson@sjsujetson-01:~$ curl -fsSL https://raw.githubusercontent.com/lkk688/edgeAI/main/jetson/install_sjsujetsontool.sh | bash
⬇️ Downloading sjsujetsontool from GitHub...
βœ… Downloaded script.
πŸ“¦ Installing to /home/sjsujetson/.local/bin/sjsujetsontool
βœ… Installed successfully. You can now run: sjsujetsontool
sjsujetson@sjsujetson-01:~$ sjsujetsontool update
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
ℹ️ The 'update' command has been split into two separate commands:
  - 'update-container': Updates only the Docker container
  - 'update-script': Updates only this script
\nRunning both updates sequentially...
\nπŸ”„ Running container update...
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
πŸ” Checking Docker image update...
⬇️ Pulling latest image (this may take a while)...
latest: Pulling from cmpelkk/jetson-llm
....
βœ“ Pull complete.
πŸ“¦ New version detected. Updating local image...
βœ… Local container updated from Docker Hub.
\nπŸ”„ Running script update...
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
⬇️ Updating sjsujetsontool script...
⬇️ Downloading latest script...
#################################################################################################### 100.0%
βœ… Script downloaded. Replacing current script...
βœ… Script updated. Please rerun your command.

Another option is to simply run the update command twice: the first run replaces the old script (and may end with a syntax error from the old version), and the second run completes the update:

student@sjsujetson-02:~$ hostname
sjsujetson-02
student@sjsujetson-02:~$ sjsujetsontool update
⬇️  Updating sjsujetsontool from GitHub...
πŸ” Backing up current script to /home/student/.local/bin/sjsujetsontool.bak
βœ… Update complete. Backup saved at /home/student/.local/bin/sjsujetsontool.bak
/home/student/.local/bin/sjsujetsontool: line 228: syntax error near unexpected token `('
/home/student/.local/bin/sjsujetsontool: line 228: `    echo "❌ $name not running (port $port closed)"'
student@sjsujetson-02:~$ sjsujetsontool update
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
ℹ️ The 'update' command has been split into two separate commands:
  - 'update-container': Updates only the Docker container
  - 'update-script': Updates only this script
\nRunning both updates sequentially...
\nπŸ”„ Running container update...
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
πŸ” Checking Docker image update...
⬇️ Pulling latest image (this may take a while)...
latest: Pulling from cmpelkk/jetson-llm
Digest: sha256:8021643930669290377d9fc19741cd8c012dbfb7d5f25c7189651ec875b03a78
Status: Image is up to date for cmpelkk/jetson-llm:latest
docker.io/cmpelkk/jetson-llm:latest
βœ“ Pull complete.
βœ… Local container is already up-to-date.
\nπŸ”„ Running script update...
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
⬇️ Updating sjsujetsontool script...
⬇️ Downloading latest script...
#################################################################################################### 100.0%
βœ… Script downloaded. Replacing current script...
βœ… Script updated. Please rerun your command.

Verify:

sjsujetsontool list

You can check the script versions:

sjsujetson@sjsujetson-01:/Developer/edgeAI$ sjsujetsontool version
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
🧾 sjsujetsontool Script Version: v0.9.0
🧊 Docker Image: jetson-llm:v1
πŸ” Image ID: sha256:9868985d80e4d1d43309d72ba85b700f3ac064233fcbf58c8ec22555d85f8c2f

The sjsujetsontool script wraps Python apps running in a container and makes running code inside the container easy. Docker without sudo is already set up on the Jetson device. Check the existing container images available on the Jetson:

sjsujetson@sjsujetson-01:~$ docker images
REPOSITORY                TAG              IMAGE ID       CREATED         SIZE
jetson-llm-v1             latest           8236678f7ef1   6 days ago      9.89GB
jetson-pytorch-v1         latest           da28af1b9eed   9 days ago      9.71GB
hello-world               latest           f1f77a0f96b7   5 months ago    5.2kB
nvcr.io/nvidia/pytorch    24.12-py3-igpu   ee796da7f569   6 months ago    9.63GB
nvcr.io/nvidia/l4t-base   r36.2.0          46b8e6a6a6a7   19 months ago   750MB
sjsujetson@sjsujetson-01:~$ sjsujetsontool shell #enter into the container
root@sjsujetson-01:/workspace#

If you see an error like "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?", restart Docker:
sudo systemctl start docker
sudo systemctl status docker

The /Developer and /Developer/models folders on the Jetson host are mounted into the container at /Developer and /models, respectively.

βœ… Hostname changes (sudo required)

sjsujetson@sjsujetson-01:~$ hostname
sjsujetson-01
sjsujetson@sjsujetson-01:~$ sjsujetsontool set-hostname sjsujetson-02
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
πŸ”§ Setting hostname to: sjsujetson-02
[sudo] password for sjsujetson: 
πŸ“ Updating /etc/hosts...
πŸ”„ Resetting machine-id...
πŸ†” Writing device ID to /etc/device-id
πŸ” Please reboot for changes to fully apply.
sjsujetson@sjsujetson-01:~$ sudo reboot
You will need to use the new hostname to SSH into the device:
% ssh -X sjsujetson@sjsujetson-02.local
sjsujetson@sjsujetson-02:~$ hostname
sjsujetson-02

For the TA, run these additional steps:

sudo chfn -f "Student" student
sudo passwd student
sjsujetson@sjsujetson-02:/Developer/edgeAI$ sjsujetsontool force_git_pull
If you’re logged in as student and want to change your own password, run passwd. You’ll be prompted to enter your current password, then the new password twice.

βœ… Exter the Container Shell

Run the sjsujetsontool shell command to enter the container's shell:

sjsujetson@sjsujetson-01:/Developer/edgeAI$ sjsujetsontool shell
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
root@sjsujetson-01:/workspace# pip install transformers==4.37.0 #install transformer package

Exit the container via exit; the container keeps running:

root@sjsujetson-01:/workspace# exit
exit
sjsujetson@sjsujetson-01:/Developer/edgeAI$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED      STATUS      PORTS     NAMES
c4010b14e9c0   8236678f7ef1   "/opt/nvidia/nvidia_…"   4 days ago   Up 4 days             jetson-dev

If you want to stop the container, you can use sjsujetsontool stop

sjsujetson@sjsujetson-01:/Developer/edgeAI$ sjsujetsontool stop
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
πŸ›‘ Stopping container...
jetson-dev
sjsujetson@sjsujetson-01:/Developer/edgeAI$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES


πŸ§ͺ Common Usage Examples

🧾 sjsujetsontool update

Downloads the latest version of sjsujetsontool from GitHub and replaces the local version, keeping a backup.

πŸ“‹ sjsujetsontool list

Displays all available commands with usage examples.

🟒 sjsujetsontool jupyter

Launches JupyterLab on port 8888 from inside the Jetson's Docker container. It allows interactive Python notebooks for AI model testing, data exploration, and debugging.

sjsujetson@sjsujetson-01:~$ sjsujetsontool jupyter
....
    To access the server, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/jpserver-1-open.html
    Or copy and paste one of these URLs:
        http://hostname:8888/lab?token=3bbbf2fbea22e917bdbace45cb414bbaeb52f1251163adcf
        http://127.0.0.1:8888/lab?token=3bbbf2fbea22e917bdbace45cb414bbaeb52f1251163adcf
You can access the Jupyter server via the provided URL. If you want to access the Jupyter server remotely from another computer, replace the hostname with the IP address of the device.
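The rewrite for remote access is just a host swap that preserves the port and token. A stdlib-only Python sketch (remote_jupyter_url is a hypothetical helper, not part of sjsujetsontool):

```python
from urllib.parse import urlsplit, urlunsplit

def remote_jupyter_url(local_url: str, device_ip: str) -> str:
    """Rewrite a Jupyter URL printed on the Jetson so it is reachable
    from another machine: swap the host, keep the port and token."""
    parts = urlsplit(local_url)
    netloc = f"{device_ip}:{parts.port or 8888}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

url = "http://127.0.0.1:8888/lab?token=3bbbf2fbea22e917bdbace45cb414bbaeb52f1251163adcf"
print(remote_jupyter_url(url, "192.168.100.11"))
```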

🐍 sjsujetsontool run <script.py>

Runs any Python script inside the preconfigured container, ensuring all ML/AI libraries and GPU drivers are properly set up. The path of script.py must be accessible from the container, for example under the /Developer path:

sjsujetson@sjsujetson-01:/Developer/models$ sjsujetsontool run /Developer/edgeAI/jetson/test.py 
🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
βš™οΈ  CUDA Version: 12.6
🐍 Running Python script: /Developer/edgeAI/jetson/test.py
πŸ“¦ Python: 3.12.3 (main, Nov  6 2024, 18:32:19) [GCC 13.2.0]
🧠 Torch: 2.6.0a0+df5bbc09d1.nv24.12
βš™οΈ  CUDA available: True
πŸ–₯️  CUDA version: Cuda compilation tools, release 12.6, V12.6.85
πŸ“š Transformers: 4.37.0
🧬 HuggingFace hub: Version: 0.33.2
πŸ’‘ Platform: Linux-5.15.148-tegra-aarch64-with-glibc2.39
πŸ” Ollama: βœ… Ollama installed: ollama version is 0.9.2

🧠 sjsujetsontool ollama

This section introduces the tool's integrated ollama command group, which allows you to manage, run, and query large language models inside a Docker container on your Jetson.

sjsujetsontool ollama <subcommand> enables local management and interaction with Ollama models from inside a persistent Jetson container.

Supported subcommands:

| Subcommand | Description |
| --- | --- |
| serve | Start the Ollama REST API server (port 11434) |
| run <model> | Run the specified model in an interactive CLI |
| list | List all installed Ollama models |
| pull <model> | Download a new model |
| delete <model> | Remove a model from disk |
| status | Check if the Ollama server is running |
| ask | Ask the model a prompt via the REST API |

πŸš€ Commands and Usage

  1. Start the Ollama Server
    sjsujetsontool ollama serve
    

Starts the Ollama REST server inside the container, listening on http://localhost:11434.

  2. Run a Model in CLI Mode

    $ sjsujetsontool ollama run mistral
    🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
    βš™οΈ  CUDA Version: 12.6
    πŸ’¬ Launching model 'mistral' in CLI...
    pulling manifest 
    pulling ff82381e2bea: 100% β–•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– 4.1 GB                         
    pulling 43070e2d4e53: 100% β–•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–  11 KB                         
    pulling 1ff5b64b61b9: 100% β–•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–  799 B                         
    pulling ed11eda7790d: 100% β–•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–   30 B                         
    pulling 42347cd80dc8: 100% β–•β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–  485 B                         
    verifying sha256 digest 
    writing manifest 
    success 
    >>> Send a message (/? for help)
    
    Launches interactive terminal mode using the mistral model. Type /bye to exit.

  3. List Installed Models

    $ sjsujetsontool ollama list
    🧠 Detected Jetson Model: NVIDIA Jetson Orin Nano Engineering Reference Developer Kit Super
    βš™οΈ  CUDA Version: 12.6
    πŸ“ƒ Installed models:
    NAME               ID              SIZE      MODIFIED           
    mistral:latest     3944fe81ec14    4.1 GB    About a minute ago    
    llama3.2:latest    a80c4f17acd5    2.0 GB    2 hours ago           
    qwen2:latest       dd314f039b9d    4.4 GB    9 days ago            
    llama3.2:3b        a80c4f17acd5    2.0 GB    9 days ago 
    
    Shows a table of downloaded models and their sizes.

  4. Download a New Model

    sjsujetsontool ollama pull llama3
    
    Pulls the specified model into the container. Examples include phi3, mistral, llama3, and qwen:7b.

  5. Delete a Model

    sjsujetsontool ollama delete mistral
    
    Frees up disk space by removing the model.

  6. Check Server Status

    sjsujetsontool ollama status
    
    Checks if the REST API is running on port 11434.

  7. Ask a Prompt (with auto-pull + caching)

    sjsujetsontool ollama ask "What is nvidia jetson orin?"
    
    Uses the last used model, or you can specify one:
    sjsujetsontool ollama ask --model mistral "Explain transformers in simple terms."
    
    β€’ Automatically pulls model if not available β€’ Remembers last used model in .last_ollama_model under workspace/

πŸ§ͺ Example: Simple Chat Session

Pull and run the mistral model:

sjsujetsontool ollama pull mistral
sjsujetsontool ollama run mistral

Ask directly via REST:

sjsujetsontool ollama ask --model mistral "Give me a Jetson-themed poem."

βΈ»

🧰 Troubleshooting

  β€’ Port already in use: Run sudo lsof -i :11434 and kill the process if needed.
  β€’ Model not found: Use sjsujetsontool ollama pull manually before ask or run.
  β€’ Server not running: Start with sjsujetsontool ollama serve before using the REST API.
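The server and port checks above can also be done programmatically with a plain TCP probe. This is a generic sketch, not part of sjsujetsontool:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP service accepts connections on host:port,
    e.g. port_open("localhost", 11434) for the Ollama REST server."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost", 11434))
```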

πŸ”¬ sjsujetsontool llama

Starts the llama.cpp server (C++ GGUF LLM inference engine) on port 8000. Loads a .gguf model and serves an HTTP API for tokenized prompt completion.

After entering the container, you can run a locally downloaded model (the build_cuda folder is the CUDA build):

root@sjsujetson-01:/Developer/llama.cpp# llama-cli -m /models/mistral.gguf -p "Explain what is Nvidia jetson"
....
llama_perf_sampler_print:    sampling time =      34.98 ms /   532 runs   (    0.07 ms per token, 15210.86 tokens per second)
llama_perf_context_print:        load time =    3498.72 ms
llama_perf_context_print: prompt eval time =    2193.93 ms /    17 tokens (  129.05 ms per token,     7.75 tokens per second)
llama_perf_context_print:        eval time =   84805.65 ms /   514 runs   (  164.99 ms per token,     6.06 tokens per second)
llama_perf_context_print:       total time =   92930.78 ms /   531 tokens
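The tokens-per-second figures in these llama_perf lines are simple unit conversions of the reported milliseconds; the eval-phase numbers above can be reproduced like this:

```python
def tokens_per_second(total_ms: float, n_tokens: int) -> float:
    """Convert a llama.cpp timing (total milliseconds over N tokens) to tok/s."""
    return n_tokens / (total_ms / 1000.0)

# Eval phase from the log above: 84805.65 ms over 514 runs
print(f"{tokens_per_second(84805.65, 514):.2f} tokens per second")  # 6.06
```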

llama-server is a lightweight, OpenAI API compatible HTTP server for serving LLMs. Start a local HTTP server with the default configuration on port 8080 with llama-server -m model.gguf --port 8080. A basic web UI can then be accessed in a browser at http://localhost:8080, and the chat completion endpoint is http://localhost:8080/v1/chat/completions:

root@sjsujetson-01:/Developer/llama.cpp# llama-server -m /models/mistral.gguf --port 8080

Send a request via curl in another terminal (on the host machine or in the container):

sjsujetson@sjsujetson-01:~$ curl http://localhost:8080/completion -d '{ "prompt": "Explain what is Nvidia jetson?", "n_predict": 100 }'
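Because llama-server is OpenAI API compatible, the same server can also be called from Python. A stdlib-only sketch assuming the server started above is listening on port 8080 (the helper names are ours):

```python
import json
from urllib import request

def build_chat_request(prompt: str, max_tokens: int = 100) -> dict:
    """OpenAI-style chat.completions request body for llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """POST to llama-server's /v1/chat/completions endpoint."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = request.Request(f"{base_url}/v1/chat/completions", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]

# Requires llama-server to be running (see the command above):
# print(chat("Explain what is Nvidia jetson?"))
```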

πŸ“¦ sjsujetsontool status

Displays:

  • Docker container state
  • GPU stats from tegrastats
  • Port listening status for key services

πŸ”§ sjsujetsontool set-hostname <name>

Changes device hostname, regenerates system identity, writes /etc/device-id.

πŸ›‘ sjsujetsontool stop

Stops the running Docker container started by previous commands.


⚠️ Safety Guidelines

  • πŸ”Œ Power Supply: Use a 5A USB-C adapter or official barrel jack for stability.
  • πŸ’Ύ SSD Cloning: Change the hostname and machine-id after cloning to prevent network conflicts.
  • πŸ” SSH Security: Only install SSH keys from trusted GitHub accounts.
  • 🧼 Disk Cleanup: Remove cache and large datasets before creating system images.
  • πŸ“¦ Containers: Always stop containers with sjsujetsontool stop before unplugging.

🧭 Ready to Learn and Build

You're now equipped to:

  • Run AI models (LLaMA, Mistral, DeepSeek, etc.)
  • Build and test LLM applications
  • Access Jetson remotely with SSH or VS Code
  • Run real-time cyber/AI experiments on the edge!

Made with πŸ’» by Kaikai Liu β€” GitHub Repo