# Tell-Me

A CLI application that provides AI-powered search and information retrieval using local LLM models. Similar to Perplexity, but running locally in your terminal.

## Features

- 🔍 **Web Search**: Powered by SearXNG for comprehensive internet searches
- 📄 **URL Fetching**: Automatically fetches and converts web pages to clean Markdown
- 🤖 **Local LLM Support**: Works with any OpenAI-compatible API (Ollama, LM Studio, etc.)
- 🔌 **MCP Support**: Extend capabilities with Model Context Protocol servers
- 💻 **Simple CLI**: Clean terminal interface for easy interaction
- ⚙️ **Configurable**: Easy YAML-based configuration with customizable prompts
- 🔒 **Privacy-Focused**: All processing happens locally
## Prerequisites

Before using Tell-Me, you need:

1. **Go 1.21 or higher** - [Download Go](https://go.dev/dl/)
2. **A running SearXNG instance** - [SearXNG Setup Guide](https://docs.searxng.org/admin/installation.html)
   - Quick Docker setup: `docker run -d -p 8080:8080 searxng/searxng`
3. **An OpenAI-compatible LLM API**, such as:
   - [Ollama](https://ollama.ai/) (recommended for local use)
   - [LM Studio](https://lmstudio.ai/)
   - Any other OpenAI-compatible endpoint
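If you want the SearXNG instance to survive reboots, the quick `docker run` above can also be expressed as a small Compose file. This is an illustrative sketch, not a file shipped with this repository; the service name and restart policy are arbitrary choices:

```yaml
# docker-compose.yml — minimal SearXNG service (illustrative)
services:
  searxng:
    image: searxng/searxng
    ports:
      - "8080:8080"   # matches the default searxng_url used below
    restart: unless-stopped
```

Run it with `docker compose up -d` from the directory containing the file.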
## Installation

### 1. Clone the repository

```bash
git clone https://git.netra.pivpav.com/public/tell-me
cd tell-me
```

### 2. Download dependencies

```bash
go mod download
```

### 3. Build the application

```bash
go build -o tell-me
```

### 4. Set up configuration

Create the config directory and copy the example configuration:

```bash
mkdir -p ~/.config
cp tell-me.yaml.example ~/.config/tell-me.yaml
```
### 5. Edit your configuration

Open `~/.config/tell-me.yaml` in your favorite editor and configure:

```yaml
# Your LLM API endpoint (e.g., Ollama, LM Studio)
api_url: http://localhost:11434/v1

# Model name to use
model: llama3.2

# Context window size
context_size: 16000

# API key (leave empty if not required)
api_key: ""

# Your SearXNG instance URL
searxng_url: http://localhost:8080

# System prompt (customize the AI's behavior)
prompt: |
  You are a helpful AI research assistant...
  (see tell-me.yaml.example for full prompt)

# MCP Server Configuration (optional)
# Add MCP servers to extend functionality
# See the MCP section below for examples
mcp_servers: {}
```

**Example configurations:**

**For Ollama:**

```yaml
api_url: http://localhost:11434/v1
model: llama3.2
context_size: 16000
api_key: ""
searxng_url: http://localhost:8080
```

**For LM Studio:**

```yaml
api_url: http://localhost:1234/v1
model: your-model-name
context_size: 16000
api_key: ""
searxng_url: http://localhost:8080
```
## Usage

Simply run the application:

```bash
./tell-me
```

You'll see a welcome screen, then you can start asking questions:

```
╔════════════════════════════════════════════════════════════════╗
║                          Tell-Me CLI                           ║
║            AI-powered search with local LLM support            ║
╚════════════════════════════════════════════════════════════════╝

Using model: llama3.2
SearXNG: http://localhost:8080

Type your questions below. Type 'exit' or 'quit' to exit.
────────────────────────────────────────────────────────────────

❯ What are the latest developments in AI?
```

The AI will:

1. Automatically search the web for current information
2. Fetch relevant URLs if needed
3. Synthesize the information into a comprehensive answer
4. Cite sources with URLs

Type `exit` or `quit` to exit the application, or press Ctrl-C.
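The web-search step talks to SearXNG's JSON search API (`/search?q=…&format=json`; note that the `json` output format must be enabled in the SearXNG instance's settings). The sketch below only shows how such a request URL can be built; `buildSearchURL` is an illustrative helper, not the function used in `tools/search.go`:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildSearchURL constructs a SearXNG JSON-API query URL from the
// configured base URL (searxng_url) and the user's query string.
func buildSearchURL(base, query string) string {
	u, err := url.Parse(base)
	if err != nil {
		return ""
	}
	u.Path = "/search"
	q := u.Query()
	q.Set("q", query)
	q.Set("format", "json") // ask SearXNG for machine-readable results
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(buildSearchURL("http://localhost:8080", "latest AI developments"))
	// → http://localhost:8080/search?format=json&q=latest+AI+developments
}
```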
## MCP (Model Context Protocol) Support

Tell-Me supports the [Model Context Protocol](https://modelcontextprotocol.io/), allowing you to extend the AI assistant's capabilities with additional tools from MCP servers.

### Supported MCP Servers

Tell-Me supports **stdio-based MCP servers** (local command execution). Remote SSE-based servers are not supported for security reasons.

### Configuration

Add MCP servers to your `~/.config/tell-me.yaml`:

```yaml
mcp_servers:
  # Example: Filesystem access
  filesystem:
    command: /usr/local/bin/mcp-server-filesystem
    args:
      - --root
      - /path/to/allowed/directory
    env:
      LOG_LEVEL: info

  # Example: Weather information
  weather:
    command: /usr/local/bin/mcp-server-weather
    args: []
    env:
      API_KEY: your-weather-api-key
```
## How It Works

1. **User asks a question** - You type your query in the terminal
2. **AI searches first** - The system prompt enforces a web search before answering
3. **Information gathering** - Uses SearXNG to find relevant sources
4. **Content fetching** - Optionally fetches full content from specific URLs
5. **Answer synthesis** - The AI combines the information and provides a comprehensive answer with citations
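The steps above amount to a tool-calling loop: on each turn the model either returns a final answer or requests a tool, whose result is fed back into the conversation. The schematic below uses stubbed tools whose names mirror the features of this project; the types and the `dispatch` helper are illustrative, not the actual implementation in `llm/client.go`:

```go
package main

import "fmt"

// toolFunc is a stub for a callable tool; the real tools query SearXNG
// or fetch and convert a URL.
type toolFunc func(arg string) string

// registry maps tool names the model may request to their implementations.
var registry = map[string]toolFunc{
	"web_search": func(q string) string { return "search results for: " + q },
	"fetch_url":  func(u string) string { return "markdown content of: " + u },
}

// dispatch runs one requested tool call and returns its result, which
// would be appended to the conversation before asking the model again.
func dispatch(name, arg string) string {
	fn, ok := registry[name]
	if !ok {
		return "unknown tool: " + name
	}
	return fn(arg)
}

func main() {
	// One iteration of the loop: the model "asked" for a web search.
	fmt.Println(dispatch("web_search", "latest developments in AI"))
	// → search results for: latest developments in AI
}
```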
## Project Structure

```
tell-me/
├── main.go               # Main application entry point
├── config/
│   └── config.go         # Configuration loading from YAML file
├── llm/
│   └── client.go         # OpenAI-compatible API client with tool calling
├── mcp/
│   └── manager.go        # MCP server connection and tool management
├── tools/
│   ├── search.go         # SearXNG web search implementation
│   └── fetch.go          # URL fetching and HTML-to-Markdown conversion
├── go.mod                # Go module dependencies
├── tell-me.yaml.example  # Example YAML configuration file
└── README.md             # This file
```
## License

MIT