Álvaro Ramírez

21 November 2024 chatgpt-shell goes offline

Since chatgpt-shell went multi-model, it was only a matter of time until we added support for local/offline models. As of version 2.0.6, chatgpt-shell has a basic Ollama implementation (llama3.2 for now).

chatgpt-shell is more than a shell. Check out the demos in the previous post.

[demo: who-offline.gif]

For anyone keen on keeping all their LLM interactions offline, Ollama seems like a great option. I'm an Ollama newbie myself and am already looking forward to getting better acquainted. Having an offline LLM at my Emacs fingertips will be super convenient.

For those more familiar with Ollama, please give the chatgpt-shell integration a try (it's on MELPA).
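If you're new to the package, here's a minimal sketch to get going. It assumes MELPA is already in your package-archives and that Ollama itself is installed:

;; Install chatgpt-shell from MELPA.
(use-package chatgpt-shell
  :ensure t)

;; On the Ollama side, fetch the model from a terminal first:
;;
;;   ollama pull llama3.2
;;
;; Ollama serves requests at http://localhost:11434 by default.

With that in place, M-x chatgpt-shell starts a session as usual.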

v2.0.6 ships a basic, initial Ollama implementation. Please give it a good run and report any bugs.

You can swap models via:

M-x chatgpt-shell-swap-model
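
If you find yourself swapping between a cloud model and llama3.2 often, the command can also be bound to a key; the binding below is just an example:

(global-set-key (kbd "C-c m") #'chatgpt-shell-swap-model)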

Help make this project sustainable. Sponsor the work.