Introduction
It's a new year, and with that, I'm excited to announce TermAI, a Command Line Interface (CLI) that makes it simple to get answers from an LLM right from your terminal.
Why was it built?
While popular LLMs offer powerful capabilities, interacting with them often means switching to their websites or dedicated applications. This can be disruptive, especially when you need quick answers while working in your IDE or terminal. TermAI eliminates this friction by bringing the power of LLMs directly to your terminal, letting you get answers without ever leaving your current workflow. For example, instead of switching to a browser to look up the command for removing a folder from the terminal, or the steps to set up a Python virtual environment, you can simply ask TermAI and get the information you need instantly.
How does it differ from Ollama?
This is a great question. With Ollama, there are two steps: first you download the Ollama application, and then you download the models themselves. While this gives you offline capabilities, it's a heavy process, since the models are large and can take up multiple gigabytes of storage. With TermAI, you interact with the LLM directly over its REST API. This makes it extremely lightweight, but you lose the offline capability.
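To give a rough idea of what "interacting with the LLM via the REST API" means here, below is a minimal sketch, not TermAI's actual implementation. It assumes an OpenAI-compatible chat completions endpoint, a `gpt-4o-mini` model name, and an API key in the `OPENAI_API_KEY` environment variable; TermAI's real provider, endpoint, and model may differ.

```python
import os
import requests

def ask_llm(question: str) -> str:
    """Send a one-off question to an OpenAI-compatible chat endpoint."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            # Assumption for this sketch: any chat-capable model works here.
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": question}],
        },
        timeout=30,
    )
    response.raise_for_status()
    # The reply text lives in the first choice of the response payload.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_llm("What is the terminal command to remove a folder?"))
```

Notice that nothing is downloaded beyond the script's dependencies: no model weights live on your machine, which is exactly why the REST approach stays lightweight compared to Ollama.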
Getting the Application
If you want to try out TermAI for yourself, the application is open-sourced and available on my GitHub at the following URL: https://github.com/srivats22/termai. Try it out and let me know what you think. I also plan on releasing TermAI as a Python-based CLI application, and maybe a Node.js CLI as well; let's see.