# LocalRAG
A local retrieval-augmented generation (RAG) system built on open-source LLMs and vector search.
## Highlights
- Fully local pipeline: no external API calls; runs on consumer hardware.
- Document ingestion with chunking, embedding, and ChromaDB storage.
- Supports multiple local LLM backends via Ollama (Mistral, Llama3, Phi3).
- Context window management with source citation in responses.
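The ingestion and context-management steps above can be sketched in plain Python. This is an illustrative sketch, not the project's actual code: the function names (`chunk_text`, `build_prompt`), the character-based chunking, and the character budget are all assumptions standing in for the real token-based implementation.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks via a sliding window.

    Hypothetical helper: character-based for simplicity; a real pipeline
    would typically chunk by tokens before embedding into ChromaDB.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last window already covers the tail
    return chunks


def build_prompt(question: str, retrieved: list[tuple[str, str]],
                 budget_chars: int = 2000) -> str:
    """Pack retrieved (source, chunk) pairs into a context budget,
    tagging each chunk with its source so the LLM can cite it.

    Hypothetical helper: uses a character budget as a stand-in for
    token-level context window management.
    """
    parts, used = [], 0
    for source, chunk in retrieved:
        entry = f"[{source}] {chunk}"
        if used + len(entry) > budget_chars:
            break  # stop before overflowing the context budget
        parts.append(entry)
        used += len(entry)
    context = "\n".join(parts)
    return (
        "Answer using only the context below; cite sources in brackets.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

The prompt produced this way would then be sent to a local model through an Ollama backend, with the bracketed source tags enabling citations in the response.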