Quick Start¶
Get WikiMind running locally in 5 minutes.
Prerequisites¶
- Python 3.11+ (3.12 recommended)
- Node.js 20+ (for the React frontend, optional)
- An LLM API key (any one of: Anthropic, OpenAI, Google, or a local Ollama instance)
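To confirm the tool versions before you start:
python3 --version   # should report 3.11 or newer
node --version      # should report v20 or newer (only needed if you plan to run the React UI)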
1. Clone and set up¶
git clone https://github.com/manavgup/wikimind.git
cd wikimind
# Create virtual environment and install dependencies
make venv
make install-dev
# Verify everything is installed correctly
make check-env
2. Configure an LLM provider¶
Copy the example environment file and add at least one API key:
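cp .env.example .env   # assumes the example file is named .env.example; adjust if the repo uses a different name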
Edit .env and set one of:
# Pick ONE -- the provider auto-enables when a key is detected
ANTHROPIC_API_KEY=sk-ant-...
# or
OPENAI_API_KEY=sk-...
# or
GOOGLE_API_KEY=...
That is all you need: WikiMind detects which providers have a key configured and enables them automatically.
Using Ollama (no API key needed)
If you have Ollama running locally, enable it explicitly:
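# The variable names below are assumptions for illustration; see the Configuration reference for the exact settings
WIKIMIND_ENABLE_OLLAMA=true
OLLAMA_BASE_URL=http://localhost:11434   # Ollama's default local endpoint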
3. Start the gateway¶
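A typical way to launch it, assuming the Makefile exposes a run target (the target name is an assumption; check the Makefile for the exact command):
make run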
The FastAPI server starts on http://localhost:7842. You can verify it is running:
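# FastAPI serves interactive API docs at /docs by default; this assumes WikiMind keeps that route enabled
curl -I http://localhost:7842/docs   # should return HTTP 200; you can also open this URL in a browser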
4. (Optional) Start the React UI¶
In a separate terminal:
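# Assumes the UI lives in a frontend/ directory and uses Vite (port 5173 is the Vite default); adjust the path if your checkout differs
cd frontend
npm install
npm run dev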
5. Ingest your first source¶
# Ingest a web article
curl -X POST http://localhost:7842/ingest/url \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com/interesting-article"}'
# Ingest a PDF
curl -X POST http://localhost:7842/ingest/pdf \
-F "file=@paper.pdf"
# Ingest raw text
curl -X POST http://localhost:7842/ingest/text \
-H "Content-Type: application/json" \
-d '{"content": "Your text here...", "title": "My Notes"}'
Alternatively, if you started the React UI, open http://localhost:5173 and use the Inbox view to paste URLs, upload PDFs, or enter text directly.
6. Ask a question¶
Once your source has been compiled (watch the terminal for "Article saved"):
curl -X POST http://localhost:7842/query \
-H "Content-Type: application/json" \
-d '{"question": "What are the key claims from the article I just ingested?"}'
The response includes the answer, confidence level, cited sources, and follow-up questions.
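A sketch of the shape to expect, with illustrative field names (the actual keys may differ from these assumptions):
{
  "answer": "The article's key claims are ...",
  "confidence": "high",
  "sources": [{"title": "Interesting Article", "url": "https://example.com/interesting-article"}],
  "follow_up_questions": ["..."]
}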
Next steps¶
- Ingesting Sources -- All source types and options
- Configuration -- Full settings reference
- Docker deployment -- Run with Docker Compose
- Architecture overview -- How it all fits together