Programming has undergone a revolution in recent years with the introduction of artificial intelligence tools that help developers write code more efficiently and accurately. One of the most promising tools in this space is Llama Coder, a copilot that uses the power of Ollama to extend the capabilities of the Visual Studio Code (VS Code) editor. In this article, we will learn how to set it up and use it through a simple practical example.
What is a copilot?
The term “copilot” comes from aviation, where it refers to the second pilot of an airplane, responsible for assisting the main pilot in flight operations. Similarly, a copilot in the context of programming is an intelligent assistant that helps developers write, optimize, and debug code. This assistant does not replace the developer, but works alongside them to improve efficiency and reduce errors.
Copilots leverage artificial intelligence technologies to analyze code in real time. They use advanced language models and are able to understand the context of the code being written and provide relevant suggestions.
Why use Llama Coder with Ollama?
Llama Coder offers two significant advantages over other copilots:
- Free and without usage costs: Llama Coder is a completely free plugin that lets you run the CodeLlama family of models locally, at no additional cost.
- Privacy and security: Thanks to its integration with Ollama, Llama Coder operates entirely locally, ensuring that the processed code is never sent to external cloud services. This is especially important for those who need to guarantee high levels of security and data protection.
The plugin is also designed to integrate seamlessly with Visual Studio Code, one of the most popular and powerful code editors. Compatibility with VS Code means that developers can easily incorporate Llama Coder into their daily workflow without having to change tools or habits.
Installation and configuration
Before starting with the installation of Llama Coder you need to install Ollama. In this regard, you can refer to our guide: Ollama – Guide to running LLM models locally.
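After installing Ollama, you can optionally check that it is up and reachable before moving on to the plugin. By default Ollama exposes a local REST API on http://localhost:11434; the small Python sketch below (it assumes you have the requests library installed) simply lists the models already available on your machine:

```python
import requests

# Ollama exposes a local REST API on port 11434 by default.
OLLAMA_URL = "http://localhost:11434"

def list_local_models():
    """Return the names of the models already downloaded by Ollama."""
    response = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    response.raise_for_status()
    return [model["name"] for model in response.json().get("models", [])]

if __name__ == "__main__":
    try:
        print("Ollama is running. Local models:", list_local_models())
    except requests.ConnectionError:
        print("Ollama does not seem to be running on", OLLAMA_URL)
```

If the script prints a list of models (even an empty one), Ollama is ready and you can proceed.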
Then you can install the Llama Coder plugin by searching for it directly from the VS Code marketplace:
Once installed, you can open the plugin settings by clicking the gear icon and then selecting the “Extension Settings” option:
Among the various configurable options, the most important is certainly the AI model. By default it is set to stable-code:3b-code-q4_0, which is very light and requires just 3 GB of RAM to run. If you have more capable hardware, you can try the more advanced models to obtain better results.
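Since the right choice depends heavily on your hardware, a quick way to judge whether a heavier model is usable is to time a short completion directly through Ollama's API. The following Python sketch is only an example (the model name is the default one mentioned above, and the timing fields are those reported in the Ollama API response): it prints a rough tokens-per-second figure, which gives you an idea of how responsive the inline suggestions will feel.

```python
import requests

OLLAMA_URL = "http://localhost:11434"

def benchmark(model: str, prompt: str = "def fibonacci(n):"):
    """Ask Ollama for a short completion and report a rough tokens/second figure."""
    response = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    response.raise_for_status()
    data = response.json()
    # eval_count / eval_duration are returned by Ollama (duration is in nanoseconds).
    tokens_per_second = data["eval_count"] / data["eval_duration"] * 1e9
    print(f"{model}: {tokens_per_second:.1f} tokens/s")

if __name__ == "__main__":
    # Example: test the default model (it must already be downloaded locally).
    benchmark("stable-code:3b-code-q4_0")
```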
Once you have selected the model, if it has not already been downloaded locally, a notification will appear at the bottom right when you open any file containing code, asking you to proceed with the download:
Clicking “Yes” starts the download, which may take some time depending on your internet connection. An icon indicating that the download is in progress will appear in the Visual Studio Code status bar at the bottom right.
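If you prefer not to wait for the in-editor notification, you can also download the model ahead of time, for example with ollama pull stable-code:3b-code-q4_0 from a terminal, or through the API as in this minimal Python sketch:

```python
import json
import requests

OLLAMA_URL = "http://localhost:11434"
MODEL = "stable-code:3b-code-q4_0"  # the default model used by Llama Coder

# /api/pull streams progress updates as JSON lines while the model downloads.
with requests.post(
    f"{OLLAMA_URL}/api/pull",
    json={"name": MODEL},
    stream=True,
) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if line:
            print(json.loads(line).get("status", ""))
```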
Using Llama Coder
Once the download is complete, you can test the model by writing some code in the editor. If the configuration was successful, gray suggestions should appear, like those in the following image:
To accept the suggestion and apply the change, press the TAB key.
For the test I used the default model which, as you can see from the example, is already quite effective and can save you a lot of time once you learn to use it properly. The results are clearly not on par with paid enterprise copilots (such as GitHub Copilot), but with sufficiently powerful hardware you can get very interesting results.
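To give a concrete idea of the workflow, here is a hypothetical example (not the exact code from the screenshot): after typing only the function signature and the comment, the model can propose the remaining lines as a gray suggestion, which you then accept with TAB:

```python
def is_palindrome(text: str) -> bool:
    # check whether a string reads the same forwards and backwards
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())  # <- suggested completion (illustrative)
    return cleaned == cleaned[::-1]                               # <- suggested completion (illustrative)

print(is_palindrome("Never odd or even"))  # True
```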
In summary, Llama Coder represents a significant step forward in the field of AI-based development tools. Its integration with VS Code offers developers a copilot with good potential that can improve productivity. Since it is free, I invite you to give it a try!