Running A Private, Local AI — Ollama

RandomResearchAI
3 min read · May 4, 2024


What is Ollama?

Ollama is an innovative framework designed to simplify deploying and managing AI models on local hardware. Developed with ease of use in mind, Ollama eliminates the complexities often associated with setting up and running machine learning models, allowing users to focus on building and fine-tuning their algorithms.

Why Use Ollama?

Traditionally, running AI models has been resource-intensive, often requiring access to high-performance computing infrastructure or cloud-based services. While these options offer scalability and flexibility, they can incur significant costs and introduce latency issues.

Ollama offers a compelling alternative by enabling users to deploy and execute AI models directly on their local machines. By leveraging the computing power of modern CPUs and GPUs, Ollama empowers developers to achieve impressive performance gains while maintaining complete control over their data and resources.

Getting Started with Ollama

Getting started with Ollama is a straightforward process:

Head over to https://ollama.com/

There are options for macOS, Linux, and a Windows preview. If you are on a Windows machine, follow the steps below to set up WSL (Windows Subsystem for Linux) first. On macOS, simply click download and run the installer. On Linux, the steps are the same as for Windows, minus the WSL installation.

Install WSL

To install WSL, open PowerShell or the Terminal app and type:

wsl --install

That's it! Recent Windows updates make installing WSL incredibly easy.
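
By default, wsl --install sets up Ubuntu. If you would rather pick a different distribution, the WSL command line can list what's available and install a specific one (the distribution name below is just an example):

wsl --list --online             # list the distributions available to install
wsl --install -d Ubuntu-22.04   # install a specific distribution by name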

Afterward, you will be asked for a username and password for your new Linux user. Enter whatever you like.

Make sure to remember them, though; you will need them whenever a command asks for sudo!

Install Ollama (Windows + Linux)

After you open up the Linux terminal or WSL, simply type:

curl -fsSL https://ollama.com/install.sh | sh

This should automatically install Ollama.

If you have an NVIDIA GPU, the installer will detect it and report that GPU support has been installed. This is great; AI loves GPUs. It's like a caffeine boost to them, but exponentially.
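
If you want to confirm that WSL (or your Linux machine) can actually see the card, the standard NVIDIA tool works here too, assuming a reasonably recent NVIDIA driver is installed on the Windows side:

nvidia-smi   # should print your GPU model, driver version, and available VRAM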

Now, to make sure Ollama is up and running, go to http://localhost:11434 in your browser.

If you see the text "Ollama is running", you did everything correctly.
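
You can also verify things from the terminal instead of the browser; on a default install, the server listens on port 11434:

ollama --version                # prints the installed Ollama version
curl http://localhost:11434     # prints "Ollama is running" if the server is up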

Choosing Models To Run

Now, we have to figure out which models to run. We can find a list of all the models at https://ollama.com/library

For this, we are going ahead with Llama 3. Gosh, there are a lot of llamas today.

As you can see, the 8B parameter model is only 4.7 GB. That's amazing. Go ahead and copy the run command from the model page into your terminal.
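
On the Llama 3 page, that command is the one-liner below; it downloads the model on first run and then starts a chat session:

ollama run llama3   # pulls the ~4.7 GB 8B model the first time, then opens an interactive prompt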

The download happens in several chunks, and some commands may need sudo (add sudo in front if any of them don't work); after that, it should be ready.

Once the download finishes, that same ollama run llama3 command drops us straight into a chat with Llama 3.
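
A few other commands are handy once you have a model or two installed; these are all part of the standard Ollama CLI:

ollama list          # show the models downloaded to this machine
ollama pull llama3   # download or update a model without starting a chat
ollama rm llama3     # delete a model to free up disk space

Inside the chat itself, typing /bye exits back to the shell.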

You can ask this AI anything. And, best of all, it is all local. You can even turn off the Wi-Fi! Try that.
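
If you would rather call the model from a script than chat interactively, Ollama also exposes a local REST API on the same port. Here is a minimal sketch using curl; the prompt is just an example:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
# the reply is a JSON object whose "response" field holds the model's answer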

That's all, we are done!

Thanks for taking the time to read this. I will be coming out with more blogs on tech-related topics soon!
