How to install a local LLM with Ollama
Why install a local LLM?
Privacy & Security
Local Data: when you run an LLM locally, all the data you input and the model’s output stay on your computer
Reduced risk of data breaches
Cost Savings
No monthly subscription fee: openly available LLMs are free to download and use
No API usage fees: cloud-based LLM APIs charge per token
One-time hardware investment: as long as your computer can run the model, no additional hardware is required
Performance & Control
Faster response time: since the LLM runs locally, there is no network latency; overall speed depends only on your hardware
Customization and fine-tuning: you have full control over the model and can customize it (this requires some technical expertise; see the sketch after this list)
Offline Access: once a model has been downloaded, you can use the LLM without an internet connection
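As a small illustration of the customization point above, here is a minimal terminal sketch, assuming Ollama is already installed; the model name gemma3 and the name my-assistant are example placeholders. It writes a short Modelfile that wraps an existing model with a custom system prompt and sampling temperature, then builds and runs it.

    # Minimal customization sketch; "gemma3" and "my-assistant" are example names
    printf '%s\n' \
      'FROM gemma3' \
      'PARAMETER temperature 0.7' \
      'SYSTEM "You are a concise assistant that answers in plain language."' > Modelfile
    ollama create my-assistant -f Modelfile   # build the customized model from the Modelfile
    ollama run my-assistant                   # start an interactive chat with the customized model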
Enhanced Experimentation & Learning
Hands-on learning: running a local LLM is a great way to practice and learn how models work
Research & Development: a local setup offers great flexibility for experimentation and development
How to install?
Select the correct installer for your operating system and download the file from GitHub (or from ollama.com)
Follow the installation instructions for your operating system
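On Linux, the install can also be done entirely from the terminal with Ollama’s install script; the commands below are a sketch of that route (on macOS and Windows you run the downloaded installer instead), and the version check simply confirms the ollama command is available.

    # Linux: install via the official install script (macOS/Windows: run the downloaded installer)
    curl -fsSL https://ollama.com/install.sh | sh
    # Confirm the CLI is installed and on your PATH
    ollama --version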
Open Terminal
Choose which model you would like to run (Ollama’s model library lists the available options):
In Terminal, type the appropriate command, for example ollama run gemma3, and press Enter. The first run downloads the model. Note the download size and make sure you have enough disk space and RAM to run it.
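As a concrete sketch of this step, using gemma3 as the example model: ollama pull downloads the weights ahead of time, ollama list shows installed models and their size on disk, and ollama ps shows which models are currently loaded and how much memory they use.

    ollama pull gemma3   # download the model weights first (a few GB, depending on the model)
    ollama list          # list installed models with their on-disk sizes
    ollama run gemma3    # start an interactive session (downloads the model if not already pulled)
    ollama ps            # from another terminal: show loaded models and their memory use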
Now, within Terminal, you can use the local LLM just as you would use ChatGPT in your browser: type a prompt and press Enter to get a response.
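In the interactive session, you type at the >>> prompt and enter /bye to exit. Ollama also runs a local HTTP server (port 11434 by default), so other tools on your machine can query the model; the curl call below is a sketch of a one-off request, assuming gemma3 has already been downloaded.

    # One-off request against the local Ollama HTTP API (default port 11434)
    curl http://localhost:11434/api/generate -d '{
      "model": "gemma3",
      "prompt": "Explain in one sentence why someone might run an LLM locally.",
      "stream": false
    }'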