
RAG Knowledge System & Library Assistant

RAG NLP Elasticsearch LLMs

A retrieval-augmented generation system that turns scattered company knowledge into a searchable, conversational library. Ask it a question, get an answer with sources.

The problem

The company's knowledge lived in the worst possible database: people's heads. Some of it was in documents buried across shared drives. Some in Slack threads from two years ago. Most of it existed only as tribal knowledge: you had to know who to ask, that person had to be available, and they had to remember. New hires spent weeks just figuring out who knew what.

The approach

I built a RAG pipeline that ingests company documents, indexes them intelligently, and serves them through a natural-language interface. Not a chatbot that guesses, but a library assistant that retrieves first, then reasons over what it found.
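The retrieve-then-reason flow can be sketched in a few lines. This is a minimal, illustrative stand-in, not the production system: the keyword-overlap scorer substitutes for Elasticsearch's BM25 or vector search, and all names (`Doc`, `retrieve`, `build_prompt`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    text: str

def score(query: str, doc: Doc) -> int:
    # Naive keyword overlap as a stand-in for BM25 / embedding similarity.
    q = set(query.lower().split())
    return len(q & set(doc.text.lower().split()))

def retrieve(query: str, docs: list[Doc], k: int = 2) -> list[Doc]:
    # Return the top-k most relevant documents for the query.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, hits: list[Doc]) -> str:
    # Ground the LLM in retrieved sources so answers carry citations.
    context = "\n\n".join(f"[{d.title}]\n{d.text}" for d in hits)
    return (
        "Answer using only the sources below. Cite source titles.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
```

The key design point is the last step: the prompt carries the retrieved passages and their titles, which is what lets the answer come back with sources attached.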

The outcome

People stopped asking "who would know about this?" and started just asking the system. Onboarding went from weeks of shadowing senior employees to something closer to self-service: new hires could explore the knowledge base at their own pace, on their own schedule. The experts didn't disappear; they just stopped being the bottleneck for basic questions.

What I learned

Everyone obsesses over which LLM to use. The real leverage is in the indexing: how you chunk documents, what metadata you preserve, how you handle overlapping context windows. That's where retrieval quality lives or dies. Garbage in, garbage out applies to embeddings just as much as it applies to everything else. The model can only reason over what you give it to work with.