# Tell-Me

A CLI application that provides AI-powered search and information retrieval using local LLM models. Similar to Perplexity, but running locally in your terminal.
## Features

- 🔍 **Web Search**: Powered by SearXNG for comprehensive internet searches
- 📄 **URL Fetching**: Automatically fetches and converts web pages to clean Markdown
- 🤖 **Local LLM Support**: Works with any OpenAI-compatible API (Ollama, LM Studio, etc.)
- 💻 **Simple CLI**: Clean terminal interface for easy interaction
- ⚙️ **Configurable**: Easy INI-based configuration
- 🔒 **Privacy-Focused**: All processing happens locally
## Prerequisites

Before using Tell-Me, you need:

- **Go 1.21 or higher** (Download Go)
- **A running SearXNG instance** (SearXNG Setup Guide)

  Quick Docker setup:

  ```shell
  docker run -d -p 8080:8080 searxng/searxng
  ```

- **An OpenAI-compatible LLM API**, such as Ollama or LM Studio
## Installation

1. **Clone or download this repository**

   ```shell
   git clone <repository-url>
   cd tell-me
   ```

2. **Download dependencies**

   ```shell
   go mod download
   ```

3. **Build the application**

   ```shell
   go build -o tell-me
   ```

4. **Set up configuration**

   Create the config directory and copy the example configuration:

   ```shell
   mkdir -p ~/.config
   cp tell-me.ini.example ~/.config/tell-me.ini
   ```
5. **Edit your configuration**

   Open `~/.config/tell-me.ini` in your favorite editor and configure:

   ```ini
   [llm]
   # Your LLM API endpoint (e.g., Ollama, LM Studio)
   api_url = http://localhost:11434/v1
   # Model name to use
   model = llama3.2
   # Context window size
   context_size = 16000
   # API key (leave empty if not required)
   api_key =

   [searxng]
   # Your SearXNG instance URL
   url = http://localhost:8080
   ```
Example configurations:

**For Ollama:**

```ini
[llm]
api_url = http://localhost:11434/v1
model = llama3.2
context_size = 16000
api_key =
```

**For LM Studio:**

```ini
[llm]
api_url = http://localhost:1234/v1
model = your-model-name
context_size = 16000
api_key =
```
## Usage

Simply run the application:

```shell
./tell-me
```
You'll see a welcome screen, then you can start asking questions:

```
╔════════════════════════════════════════════════════════════════╗
║                          Tell-Me CLI                           ║
║             AI-powered search with local LLM support           ║
╚════════════════════════════════════════════════════════════════╝

Using model: llama3.2
SearXNG: http://localhost:8080

Type your questions below. Type 'exit' or 'quit' to exit.
────────────────────────────────────────────────────────────────

You: What are the latest developments in AI?
```
The AI will:

- Automatically search the web for current information
- Fetch relevant URLs if needed
- Synthesize the information into a comprehensive answer
- Cite sources with URLs

Type `exit` or `quit` to exit the application.
## How It Works

1. **User asks a question**: You type your query in the terminal
2. **AI searches first**: The system prompt enforces web search before answering
3. **Information gathering**: Uses SearXNG to find relevant sources
4. **Content fetching**: Optionally fetches full content from specific URLs
5. **Answer synthesis**: The AI combines information and provides a comprehensive answer with citations
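
The wire format behind this loop is the OpenAI chat-completions API, which Ollama and LM Studio both implement. As a rough illustration of what the client sends, here is a minimal sketch of building such a request with a hypothetical `web_search` tool; the struct and field names follow the public OpenAI wire format, not necessarily the actual types in `llm/client.go`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal shapes for an OpenAI-compatible /v1/chat/completions request.
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type Function struct {
	Name        string          `json:"name"`
	Description string          `json:"description"`
	Parameters  json.RawMessage `json:"parameters"`
}

type Tool struct {
	Type     string   `json:"type"`
	Function Function `json:"function"`
}

type ChatRequest struct {
	Model    string    `json:"model"`
	Messages []Message `json:"messages"`
	Tools    []Tool    `json:"tools,omitempty"`
}

// buildRequest assembles a request that forces search-first behavior via the
// system prompt and advertises one tool. "web_search" is a hypothetical name.
func buildRequest(model, question string) ChatRequest {
	params := json.RawMessage(`{
		"type": "object",
		"properties": {"query": {"type": "string"}},
		"required": ["query"]
	}`)
	return ChatRequest{
		Model: model,
		Messages: []Message{
			{Role: "system", Content: "Always search the web before answering."},
			{Role: "user", Content: question},
		},
		Tools: []Tool{{
			Type: "function",
			Function: Function{
				Name:        "web_search", // hypothetical tool name
				Description: "Search the web via SearXNG",
				Parameters:  params,
			},
		}},
	}
}

func main() {
	body, _ := json.Marshal(buildRequest("llama3.2", "What are the latest developments in AI?"))
	fmt.Println(string(body))
}
```

When the model responds with a tool call instead of text, the client runs the corresponding tool and appends the result to the message history before asking the model again.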
## Project Structure

```
tell-me/
├── main.go              # Main application entry point
├── config/
│   └── config.go        # Configuration loading from INI file
├── llm/
│   └── client.go        # OpenAI-compatible API client with tool calling
├── tools/
│   ├── search.go        # SearXNG web search implementation
│   └── fetch.go         # URL fetching and HTML-to-Markdown conversion
├── go.mod               # Go module dependencies
├── tell-me.ini.example  # Example configuration file
└── README.md            # This file
```
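
For a sense of what `tools/search.go` does: SearXNG exposes a JSON API when `format=json` is passed to its `/search` endpoint. A minimal sketch of building the query URL and decoding the results might look like this (the `Result` type here is illustrative, not the project's actual type):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/url"
)

// Result mirrors the fields of interest in a SearXNG JSON search result.
type Result struct {
	Title   string `json:"title"`
	URL     string `json:"url"`
	Content string `json:"content"`
}

type searchResponse struct {
	Results []Result `json:"results"`
}

// searchURL builds a SearXNG JSON-API query URL, e.g.
// http://localhost:8080/search?format=json&q=golang
func searchURL(base, query string) string {
	v := url.Values{}
	v.Set("q", query)
	v.Set("format", "json")
	return base + "/search?" + v.Encode()
}

// parseResults decodes the body of a SearXNG JSON response.
func parseResults(body []byte) ([]Result, error) {
	var resp searchResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, err
	}
	return resp.Results, nil
}

func main() {
	fmt.Println(searchURL("http://localhost:8080", "latest AI news"))
	sample := []byte(`{"results":[{"title":"Example","url":"https://example.com","content":"snippet"}]}`)
	results, _ := parseResults(sample)
	fmt.Println(results[0].Title)
}
```

Note that some SearXNG instances disable the JSON format by default; it can be enabled in the instance's `settings.yml`.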
## Configuration Reference

### LLM Section

- `api_url`: The base URL for your OpenAI-compatible API endpoint
- `model`: The model name/identifier to use
- `context_size`: Maximum context window size (default: 16000)
- `api_key`: API key, if required (leave empty for local APIs like Ollama)
### SearXNG Section

- `url`: The URL of your SearXNG instance
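
The actual `config/config.go` may use an INI library, but parsing these two sections needs nothing beyond the standard library. A minimal stdlib-only sketch, with an illustrative `Config` type and the documented default for `context_size`:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// Config holds the settings from ~/.config/tell-me.ini.
// Field names here are illustrative, not the project's actual type.
type Config struct {
	APIURL      string
	Model       string
	ContextSize int
	APIKey      string
	SearxngURL  string
}

// parseINI reads "key = value" pairs under [llm] and [searxng],
// skipping comments and blank lines.
func parseINI(text string) Config {
	cfg := Config{ContextSize: 16000} // documented default context window
	section := ""
	sc := bufio.NewScanner(strings.NewReader(text))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]") {
			section = strings.Trim(line, "[]")
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		key, value = strings.TrimSpace(key), strings.TrimSpace(value)
		switch section + "." + key {
		case "llm.api_url":
			cfg.APIURL = value
		case "llm.model":
			cfg.Model = value
		case "llm.context_size":
			if n, err := strconv.Atoi(value); err == nil {
				cfg.ContextSize = n
			}
		case "llm.api_key":
			cfg.APIKey = value
		case "searxng.url":
			cfg.SearxngURL = value
		}
	}
	return cfg
}

func main() {
	cfg := parseINI("[llm]\napi_url = http://localhost:11434/v1\nmodel = llama3.2\n[searxng]\nurl = http://localhost:8080\n")
	fmt.Println(cfg.Model, cfg.SearxngURL)
}
```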
## Troubleshooting

"Config file not found"
Make sure you've created ~/.config/tell-me.ini from the example file.
"Search request failed"
Check that your SearXNG instance is running and accessible at the configured URL.
"Chat completion failed"
Verify that:
- Your LLM API is running
- The API URL is correct
- The model name is correct
- The model supports tool/function calling
Connection refused errors
Ensure both SearXNG and your LLM API are running before starting Tell-Me.
## Tips

- Use specific questions for better results
- The AI will automatically search before answering
- Sources are cited with URLs for verification
- You can ask follow-up questions in the same session
- The conversation history is maintained throughout the session
## License

MIT