# Tell-Me
A CLI application that provides AI-powered search and information retrieval using local LLM models. Similar to Perplexity, but running locally in your terminal.
## Features
- 🔍 Web Search: Powered by SearXNG for comprehensive internet searches
- 📄 URL Fetching: Automatically fetches and converts web pages to clean Markdown
- 🤖 Local LLM Support: Works with any OpenAI-compatible API (Ollama, LM Studio, etc.)
- 💻 Simple CLI: Clean terminal interface for easy interaction
- ⚙️ Configurable: Easy INI-based configuration
- 🔒 Privacy-Focused: All processing happens locally
## Prerequisites
Before using Tell-Me, you need:
- Go 1.21 or higher - Download Go
- A running SearXNG instance - SearXNG Setup Guide
  - Quick Docker setup:

    ```sh
    docker run -d -p 8080:8080 searxng/searxng
    ```

- An OpenAI-compatible LLM API, such as Ollama or LM Studio
## Installation

1. **Clone the repository**

   ```sh
   git clone https://git.netra.pivpav.com/public/tell-me
   cd tell-me
   ```

2. **Download dependencies**

   ```sh
   go mod download
   ```

3. **Build the application**

   ```sh
   go build -o tell-me
   ```

4. **Set up configuration**

   Create the config directory and copy the example configuration:

   ```sh
   mkdir -p ~/.config
   cp tell-me.ini.example ~/.config/tell-me.ini
   ```

5. **Edit your configuration**

   Open `~/.config/tell-me.ini` in your favorite editor and configure:

   ```ini
   [llm]
   # Your LLM API endpoint (e.g., Ollama, LM Studio)
   api_url = http://localhost:11434/v1
   # Model name to use
   model = llama3.2
   # Context window size
   context_size = 16000
   # API key (leave empty if not required)
   api_key =

   [searxng]
   # Your SearXNG instance URL
   url = http://localhost:8080
   ```
Example configurations:

**For Ollama:**

```ini
[llm]
api_url = http://localhost:11434/v1
model = llama3.2
context_size = 16000
api_key =
```

**For LM Studio:**

```ini
[llm]
api_url = http://localhost:1234/v1
model = your-model-name
context_size = 16000
api_key =
```
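The INI settings above could be read with a few lines of standard-library Go. The sketch below is purely illustrative (the actual `config/config.go` may well use a dedicated INI library, and `parseINI` is an assumed name, not the project's API):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseINI is a minimal, illustrative INI reader: it returns a map of
// "section.key" -> value, skipping blank lines and '#' comments.
func parseINI(src string) map[string]string {
	out := make(map[string]string)
	section := ""
	sc := bufio.NewScanner(strings.NewReader(src))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "" || strings.HasPrefix(line, "#"):
			// skip blanks and comments
		case strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]"):
			section = strings.Trim(line, "[]")
		default:
			if k, v, ok := strings.Cut(line, "="); ok {
				out[section+"."+strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
	}
	return out
}

func main() {
	cfg := parseINI("[llm]\nmodel = llama3.2\n\n[searxng]\nurl = http://localhost:8080\n")
	fmt.Println(cfg["llm.model"], cfg["searxng.url"])
}
```

Keys are namespaced by section (`llm.model`, `searxng.url`), which mirrors how the two `[llm]` and `[searxng]` sections stay distinct in the file.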
## Usage
Simply run the application:

```sh
./tell-me
```
You'll see a welcome screen, then you can start asking questions:

```
╔════════════════════════════════════════════════════════════════╗
║                          Tell-Me CLI                           ║
║             AI-powered search with local LLM support           ║
╚════════════════════════════════════════════════════════════════╝

Using model: llama3.2
SearXNG: http://localhost:8080

Type your questions below. Type 'exit' or 'quit' to exit.
────────────────────────────────────────────────────────────────

❯ What are the latest developments in AI?
```
The AI will:
- Automatically search the web for current information
- Fetch relevant URLs if needed
- Synthesize the information into a comprehensive answer
- Cite sources with URLs
Type `exit` or `quit` to exit the application, or press `Ctrl-C`.
## How It Works
1. **User asks a question** - You type your query in the terminal
2. **AI searches first** - The system prompt enforces web search before answering
3. **Information gathering** - Uses SearXNG to find relevant sources
4. **Content fetching** - Optionally fetches full content from specific URLs
5. **Answer synthesis** - AI combines information and provides a comprehensive answer with citations
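The search step above boils down to an HTTP GET against a SearXNG instance's JSON API. A minimal sketch of how such a query URL might be assembled (`buildSearchURL` is an illustrative name, not the actual `tools/search.go` API):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildSearchURL assembles a SearXNG JSON-format search query for the
// given base URL. url.Values handles percent-encoding of the query.
func buildSearchURL(base, query string) string {
	v := url.Values{}
	v.Set("q", query)
	v.Set("format", "json") // ask SearXNG for machine-readable results
	return base + "/search?" + v.Encode()
}

func main() {
	fmt.Println(buildSearchURL("http://localhost:8080", "latest AI news"))
	// → http://localhost:8080/search?format=json&q=latest+AI+news
}
```

Note that `url.Values.Encode` emits parameters in sorted key order, so `format` precedes `q` in the final URL.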
## Project Structure
```
tell-me/
├── main.go               # Main application entry point
├── config/
│   └── config.go         # Configuration loading from INI file
├── llm/
│   └── client.go         # OpenAI-compatible API client with tool calling
├── tools/
│   ├── search.go         # SearXNG web search implementation
│   └── fetch.go          # URL fetching and HTML-to-Markdown conversion
├── go.mod                # Go module dependencies
├── tell-me.ini.example   # Example configuration file
└── README.md             # This file
```
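To give a feel for the HTML-to-Markdown step handled by `tools/fetch.go`: the toy sketch below rewrites HTML anchor tags as Markdown links. It is only one tiny piece of a real conversion, and the actual implementation likely relies on a dedicated library rather than a regular expression:

```go
package main

import (
	"fmt"
	"regexp"
)

// anchorRe matches simple <a href="...">text</a> tags; it is a toy
// pattern and does not cover nested markup or unquoted attributes.
var anchorRe = regexp.MustCompile(`<a\s+href="([^"]+)"[^>]*>([^<]*)</a>`)

// htmlLinksToMarkdown rewrites anchor tags as Markdown [text](url) links.
func htmlLinksToMarkdown(html string) string {
	return anchorRe.ReplaceAllString(html, "[$2]($1)")
}

func main() {
	fmt.Println(htmlLinksToMarkdown(`See <a href="https://example.com">this page</a>.`))
	// → See [this page](https://example.com).
}
```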
## License
MIT