Week of February 26, 2024
Ollama: running Large Language Models locally • Ollama is a tool for running Large Language Models locally, with no need for a cloud service. Its usage is similar to Docker's, but it is designed specifically for LLMs. You can use it as an interactive shell, through its REST API, or via its Python library. See also: Ollama on Hacker News • (Andrea Grandi) / March 1
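A minimal sketch of the two programmatic routes mentioned above, assuming the Ollama server is running locally on its default port (11434) and that a model such as "llama2" has already been pulled (e.g. with `ollama pull llama2`); the prompt text and model name here are just illustrative:

```python
import ollama    # the ollama Python library (pip install ollama)
import requests

# Route 1: the Python library's chat call.
# Assumes "llama2" has already been pulled locally.
response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])

# Route 2: the same question through the local REST API.
# stream=False asks for a single JSON reply instead of a token stream.
r = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
)
print(r.json()["response"])
```

The interactive-shell route is simpler still: `ollama run llama2` starts a chat session in the terminal against the same locally stored model.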