How to install a local LLM with Ollama

Why install a local LLM?

Privacy & Security

  • Local data: when you run an LLM locally, all the data you input and the model’s output stay on your computer

  • Reduced risk of data breaches

Cost Saving

  • No subscription fees: open-access LLMs are free to download and run

  • No API usage fees: cloud-based LLM APIs charge per token

  • One-time hardware investment: as long as your computer can handle the installation, no additional hardware is required

Performance & Control

  • Faster response time: since the LLM is running locally, there is no network latency

  • Customization and fine-tuning: you have full control over the model and can customize it (this requires technical expertise)

  • Offline Access: you can use the LLM without an internet connection

Enhanced Experimentation & Learning

  • Hands-on learning: running a local LLM is a great way to practice and learn how models work

  • Research & Development: a local setup offers great flexibility for experimentation and development

How to install?

  1. Select the correct installer for your operating system and download the file from GitHub

  2. Follow the installation instructions

  3. Open Terminal (a quick way to verify the install is shown below)

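If the install succeeded, the ollama command should now be available in Terminal. As a quick sanity check, you can run something like the following (this assumes a standard Ollama install with the background service running):

  # Confirm the CLI is installed and print its version
  ollama --version

  # List the models already downloaded (empty on a fresh install)
  ollama list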

  4. Choose which model you would like to run

  5. In Terminal, type the appropriate command (for example, ollama run gemma3) and press Enter. The model will download and start. Note the size of the model and make sure you have enough disk space and RAM available.

  6. Now, within Terminal, you can use the local LLM just like you would use ChatGPT in your browser; a sample session is sketched below.
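
Putting the last three steps together, a Terminal session might look roughly like the sketch below. The model name (gemma3) and the prompt are only examples, and the model’s reply will of course vary:

  # Download the model (on first run) and start an interactive chat
  ollama run gemma3

  # Type your question at the >>> prompt, just as you would in ChatGPT
  >>> Explain in one sentence what a local LLM is.
  A local LLM is a large language model that runs entirely on your own
  computer rather than on a remote server.

  # Type /bye to end the chat and return to your shell
  >>> /bye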