Week of February 26, 2024

Ollama: running Large Language Models locally
Ollama is a tool for running Large Language Models locally, without the need for a cloud service. Its usage is similar to Docker's, but it is specifically designed for LLMs. You can use it as an interactive shell, through its REST API, or from a Python library. See also: Ollama on Hacker News. (Andrea Grandi) / March 1
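As a minimal sketch of the REST API usage mentioned above, the snippet below sends a prompt to Ollama's `/api/generate` endpoint. It assumes a local Ollama server is running on the default port 11434 and that the named model (here `llama2`, an illustrative choice) has already been pulled.

```python
import json
from urllib import request

# Default local endpoint for a running Ollama server (assumption: default port)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks for one complete JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Requires a local Ollama server with the model already pulled
    data = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("llama2", "Why is the sky blue?"))
```

The same call can be made from the `ollama` Python library or the interactive shell (`ollama run llama2`); the REST payload above is what those front ends send under the hood.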
