2 docs tagged with "Ollama"

DeepSeek-R1 Local Deployment

Step-by-step guide to deploying the DeepSeek-R1 large language model on the NeoEdge NG4500 using Ollama. Enables offline AI inference on Jetson Orin with strong privacy and low latency.

Ollama

Learn how to install and run Ollama on the NeoEdge NG4500 for local LLM inference with CUDA acceleration. Supports DeepSeek-R1 and other mainstream models.