AI coding tools like Copilot and Cody are becoming very relevant and helpful. The problem is that they are not available offline, which became an issue when I wanted to travel without internet access. So I remembered Ollama and started looking for a way to use it for my local development.

Ollama

Ollama is a tool that allows us to run LLMs locally.

You can install it simply with:

brew install ollama

Then you start the Ollama server locally with:

ollama serve
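
Before wiring up the editor, it is worth checking that the server is actually listening. Ollama serves an HTTP API on port 11434 by default, so a quick sanity check (assuming the default port) is:

curl http://localhost:11434/api/tags

This returns a JSON list of the models you have pulled locally, which will be empty at first.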

Next, pull the model you want to use:

ollama pull gemma
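
You can confirm the download finished by listing the models Ollama knows about:

ollama list

gemma should appear in the output along with its size.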

We can try the model interactively with:

ollama run gemma
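
ollama run drops you into a chat session in the terminal. The editor plugin talks to the same server over HTTP instead, so you can also sanity-check the API it will use (the prompt here is just an example):

curl http://localhost:11434/api/generate -d '{
  "model": "gemma",
  "prompt": "Write a hello world in Python",
  "stream": false
}'

The generated text comes back in the response field of the returned JSON.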

Once the model is available locally, we can use it with the Continue plugin for VS Code or IntelliJ.

Continue Plugin

Continue is a plugin that gives us AI coding assistance, similar to Copilot or Cody, backed by our local models.

  1. Install the plugin in VS Code.

  2. Configure it to use the Ollama model (docs).

Add this to ~/.continue/config.json:

{
  "models": [
    {
      "title": "Gemma Local",
      "model": "gemma",
      "completionOptions": {},
      "apiBase": "http://localhost:11434",
      "provider": "ollama"
    }
  ]
}
  • Alternatively, you can add the local model by clicking the + icon, selecting Ollama, and then choosing the model you want to use; there is also an auto-detection option for locally available models.
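
If you also want inline tab completions from the local model, Continue's config supports a separate tab-autocomplete entry alongside "models" in the same file. This is a minimal sketch based on my reading of the Continue docs, so double-check the exact keys against the plugin version you have installed:

"tabAutocompleteModel": {
  "title": "Gemma Autocomplete",
  "provider": "ollama",
  "model": "gemma",
  "apiBase": "http://localhost:11434"
}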

Conclusion

With this setup, I was able to use AI coding tools locally, which was very helpful while traveling without internet access.

The results are not as polished as GitHub Copilot's, but it is still very helpful for quick searches or for getting suggestions on debugging and optimizing code.