One doc tagged with "Local LLM"

Ollama

Learn how to install and run Ollama on the NeoEdge NG4500 for local LLM inference with CUDA acceleration. It supports DeepSeek-R1 and other mainstream models.
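As a quick orientation before the full guide, here is a minimal sketch of talking to a locally served model with the official Ollama Python client (`pip install ollama`). It assumes the Ollama server is already installed and running on the device, and that the model has been pulled beforehand (e.g. `ollama pull deepseek-r1`).

```python
# Minimal sketch: chat with a locally served model via the Ollama Python client.
# Assumes the Ollama server is running (default endpoint: http://localhost:11434)
# and the model tag below has already been pulled with `ollama pull deepseek-r1`.
import ollama

response = ollama.chat(
    model="deepseek-r1",  # assumed model tag; any locally pulled model works
    messages=[{"role": "user", "content": "Summarize what CUDA acceleration does."}],
)
print(response["message"]["content"])
```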