# Tell-Me

A CLI application that provides AI-powered search and information retrieval using local LLM models. Similar to Perplexity, but running locally in your terminal.

## Features

- 🔍 **Web Search**: Powered by SearXNG for comprehensive internet searches
- 📄 **URL Fetching**: Automatically fetches and converts web pages to clean Markdown
- 🤖 **Local LLM Support**: Works with any OpenAI-compatible API (Ollama, LM Studio, etc.)
- 💻 **Simple CLI**: Clean terminal interface for easy interaction
- ⚙️ **Configurable**: Easy INI-based configuration
- 🔒 **Privacy-Focused**: All processing happens locally

## Prerequisites

Before using Tell-Me, you need:

1. **Go 1.21 or higher** - [Download Go](https://go.dev/dl/)
2. **A running SearXNG instance** - [SearXNG Setup Guide](https://docs.searxng.org/admin/installation.html)
   - Quick Docker setup: `docker run -d -p 8080:8080 searxng/searxng`
3. **An OpenAI-compatible LLM API** such as:
   - [Ollama](https://ollama.ai/) (recommended for local use)
   - [LM Studio](https://lmstudio.ai/)
   - Any other OpenAI-compatible endpoint

## Installation

### 1. Clone or download this repository

```bash
git clone <repository-url>
cd tell-me
```

### 2. Download dependencies

```bash
go mod download
```

### 3. Build the application

```bash
go build -o tell-me
```

### 4. Set up configuration

Create the config directory and copy the example configuration:

```bash
mkdir -p ~/.config
cp tell-me.ini.example ~/.config/tell-me.ini
```

### 5. Edit your configuration

Open `~/.config/tell-me.ini` in your favorite editor and configure:

```ini
[llm]
# Your LLM API endpoint (e.g., Ollama, LM Studio)
api_url = http://localhost:11434/v1

# Model name to use
model = llama3.2

# Context window size
context_size = 16000

# API key (leave empty if not required)
api_key =

[searxng]
# Your SearXNG instance URL
url = http://localhost:8080
```

**Example configurations:**

**For Ollama:**

```ini
[llm]
api_url = http://localhost:11434/v1
model = llama3.2
context_size = 16000
api_key =
```

**For LM Studio:**

```ini
[llm]
api_url = http://localhost:1234/v1
model = your-model-name
context_size = 16000
api_key =
```

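For illustration, the config file is plain INI, so its structure is easy to see in code. Below is a minimal, hand-rolled sketch of parsing such a file in Go; the app's actual `config/config.go` presumably uses a proper INI library, so treat this only as a picture of the file format, not the real loader.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseINI reads a minimal INI document into map[section]map[key]value.
// It handles [section] headers, key = value pairs, blank lines, and
// '#' comments -- exactly the constructs used in tell-me.ini.example.
func parseINI(src string) map[string]map[string]string {
	out := map[string]map[string]string{}
	section := ""
	sc := bufio.NewScanner(strings.NewReader(src))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and comments
		}
		if strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]") {
			section = strings.Trim(line, "[]")
			out[section] = map[string]string{}
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			if _, exists := out[section]; !exists {
				out[section] = map[string]string{}
			}
			out[section][strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return out
}

func main() {
	cfg := parseINI("[llm]\nmodel = llama3.2\n\n[searxng]\nurl = http://localhost:8080\n")
	fmt.Println(cfg["llm"]["model"])   // llama3.2
	fmt.Println(cfg["searxng"]["url"]) // http://localhost:8080
}
```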
## Usage

Simply run the application:

```bash
./tell-me
```

You'll see a welcome screen, then you can start asking questions:

```
╔════════════════════════════════════════════════════════════════╗
║                          Tell-Me CLI                           ║
║            AI-powered search with local LLM support            ║
╚════════════════════════════════════════════════════════════════╝

Using model: llama3.2
SearXNG: http://localhost:8080

Type your questions below. Type 'exit' or 'quit' to exit.
────────────────────────────────────────────────────────────────

You: What are the latest developments in AI?
```

The AI will:

1. Automatically search the web for current information
2. Fetch relevant URLs if needed
3. Synthesize the information into a comprehensive answer
4. Cite sources with URLs

Type `exit` or `quit` to exit the application.

## How It Works

1. **User asks a question** - You type your query in the terminal
2. **AI searches first** - The system prompt enforces a web search before answering
3. **Information gathering** - Uses SearXNG to find relevant sources
4. **Content fetching** - Optionally fetches full content from specific URLs
5. **Answer synthesis** - The AI combines the information and provides a comprehensive answer with citations

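The search-then-answer loop above can be sketched in Go. This is a simplified illustration, not the app's actual `llm/client.go`: real tool calls carry structured JSON arguments, while here a single string stands in for a tool request, and `callModel` stands in for the OpenAI-compatible chat-completion request.

```go
package main

import "fmt"

// Message mirrors the OpenAI chat format in simplified form:
// a role ("user", "assistant", "tool") plus plain-text content.
type Message struct {
	Role    string
	Content string
}

// runTurn runs one user turn: the model may request a tool call
// (signalled here by a non-empty toolQuery), the tool result is
// appended to the history as a "tool" message, and the model is
// called again until it returns a final answer.
func runTurn(
	history []Message,
	callModel func([]Message) (answer, toolQuery string),
	runTool func(query string) string,
) ([]Message, string) {
	for {
		answer, toolQuery := callModel(history)
		if toolQuery == "" {
			history = append(history, Message{Role: "assistant", Content: answer})
			return history, answer
		}
		// Feed the tool output back so the next model call can use it.
		history = append(history, Message{Role: "tool", Content: runTool(toolQuery)})
	}
}

func main() {
	calls := 0
	// Fake model: first requests a search, then answers from the result.
	model := func(h []Message) (string, string) {
		calls++
		if calls == 1 {
			return "", "latest AI developments"
		}
		return "Summary of: " + h[len(h)-1].Content, ""
	}
	search := func(q string) string { return "results for " + q }

	history := []Message{{Role: "user", Content: "What are the latest developments in AI?"}}
	_, answer := runTurn(history, model, search)
	fmt.Println(answer) // Summary of: results for latest AI developments
}
```

The key design point the sketch captures: conversation history accumulates across tool calls, which is also why follow-up questions in the same session have context.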
## Project Structure

```
tell-me/
├── main.go              # Main application entry point
├── config/
│   └── config.go        # Configuration loading from INI file
├── llm/
│   └── client.go        # OpenAI-compatible API client with tool calling
├── tools/
│   ├── search.go        # SearXNG web search implementation
│   └── fetch.go         # URL fetching and HTML-to-Markdown conversion
├── go.mod               # Go module dependencies
├── tell-me.ini.example  # Example configuration file
└── README.md            # This file
```

## Configuration Reference

### LLM Section

- `api_url`: The base URL for your OpenAI-compatible API endpoint
- `model`: The model name/identifier to use
- `context_size`: Maximum context window size (default: 16000)
- `api_key`: API key if required (leave empty for local APIs like Ollama)

### SearXNG Section

- `url`: The URL of your SearXNG instance

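The `url` setting is all the search tool needs to build queries. As a sketch of what `tools/search.go` might do, the snippet below constructs a query URL against the standard SearXNG JSON API (`/search` with `q` and `format=json`); note that the exact parameters the app sends are an assumption, and the JSON format must be enabled in your SearXNG settings.

```go
package main

import (
	"fmt"
	"net/url"
)

// searchURL builds a SearXNG query URL from the configured base URL.
// It relies on net/url for proper escaping of the query string.
func searchURL(base, query string) (string, error) {
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	u.Path = "/search"
	q := u.Query()
	q.Set("q", query)
	q.Set("format", "json") // request machine-readable results
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	s, err := searchURL("http://localhost:8080", "latest AI news")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // http://localhost:8080/search?format=json&q=latest+AI+news
}
```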
## Troubleshooting

### "Config file not found"

Make sure you've created `~/.config/tell-me.ini` from the example file.

### "Search request failed"

Check that your SearXNG instance is running and accessible at the configured URL.

### "Chat completion failed"

Verify that:

- Your LLM API is running
- The API URL is correct
- The model name is correct
- The model supports tool/function calling

### Connection refused errors

Ensure both SearXNG and your LLM API are running before starting Tell-Me.

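A quick way to confirm both services are reachable is a plain TCP probe. The sketch below uses the default ports from the example config (yours may differ); pointing `curl` at the same URLs works just as well.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// checkEndpoint attempts a short TCP connection to addr and reports
// failure, which surfaces "connection refused" problems immediately.
func checkEndpoint(name, addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Errorf("%s not reachable at %s: %w", name, addr, err)
	}
	conn.Close()
	return nil
}

func main() {
	// Default ports from tell-me.ini.example; adjust for your setup.
	endpoints := map[string]string{
		"SearXNG": "localhost:8080",
		"LLM API": "localhost:11434",
	}
	for name, addr := range endpoints {
		if err := checkEndpoint(name, addr); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println(name, "OK")
		}
	}
}
```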
## Tips

- Use specific questions for better results
- The AI will automatically search before answering
- Sources are cited with URLs for verification
- You can ask follow-up questions in the same session
- The conversation history is maintained throughout the session

## License
MIT