Discovering Ollama: Running Open Source LLMs Locally
I recently discovered Ollama (ollama.com), a tool that lets you run open-source Large Language Models (LLMs) directly on your local machine. As someone with an M1 Pro MacBook Pro, I was intrigued by the idea of using my own hardware to run advanced AI models privately. After exploring Ollama and experimenting with open-source models like Llama, Mistral, and Phi, I've been impressed by the speed and quality these models deliver, especially considering they're running entirely locally.