This article covers the technical components of AI-powered knowledge retrieval platforms without tackling critical topics such as ethical AI, data privacy, governance and explainable AI. Stay tuned for more on that. In the meantime, you can read up on NetHope’s work with UKHIH for the humanitarian sector and Luiza’s recent article on regulation, oversight and explainability.
Update (29 Nov): Also check out Karin Maasel’s reflections on “use case” and AI capability versus user needs.
The evolution of AI-powered knowledge retrieval systems has primarily focused on technical capabilities—better embedding models, more sophisticated retrieval mechanisms, agentic multi-step flows and, of course, enhanced (large) language models. While these advances are crucial, they sometimes overshadow the fundamental purpose of these systems: serving human users effectively.
Knowledge retrieval systems exist not as mere technical achievements but as tools that enable human decision-making and action. This shift in perspective—from technical metrics to human utility—requires us to reassess how we design and evaluate these systems.
The equation is straightforward: Utility = Usability + Decision-Enabling Output. A system may retrieve perfectly accurate information, but if users cannot readily apply it to their workflow, its practical value diminishes significantly.
The way information is presented significantly impacts its utility. Users process information differently based on their context and preferences. Looking at the short but telling evolution of AI-powered systems, they still default to standardized outputs: multi-turn chat responses with a dynamic but limited side-window artifact (Anthropic) or canvas (OpenAI). Bespoke systems go too far the other way, with hyperstructured static outputs (ImpactAI, Gannet). More effective retrieval systems must support multiple presentation formats:
The key isn't offering all formats simultaneously but providing the right format for the specific use case and user context. More on this Generative UI paradigm coming soon.
System response time isn't merely a technical metric—it's a crucial factor in maintaining human cognitive flow. When users must wait for retrievals, they risk losing context and mental momentum. Think of pretty much any Microsoft Azure webpage, Salesforce configuration pages, those old school search pages, and most government internal sites. Better implementation strategies include:
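As one illustration (not a strategy prescribed here), repeated or predictable queries can be served from a cache so users aren't stalled by redundant round trips; a minimal Python sketch, with `retrieve` standing in for a hypothetical, expensive retrieval backend:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=1024)
def retrieve(query: str) -> str:
    """Stand-in for an expensive retrieval call (embedding + search)."""
    time.sleep(0.1)  # simulated backend latency
    return f"results for {query!r}"

retrieve("cash voucher programmes")  # slow: hits the backend
retrieve("cash voucher programmes")  # near-instant: served from the cache
```

The second identical call returns from memory, preserving the user's cognitive flow instead of forcing another wait.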
Different users approach information differently. A senior analyst might need detailed technical data, while an executive requires high-level insights. The same individual might want the same knowledge and information in 5 different ways for 5 different use cases. We must adapt to these varying needs through:
The combination of vector-based semantic search with traditional keyword matching isn't just technically sound—it mirrors how humans think about information. Users often have both conceptual ("similar to...") and specific ("containing...") search needs, sometimes simultaneously.
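To make that combination concrete, one common way (illustrative, not the only option) to merge a semantic ranking with a keyword ranking is reciprocal rank fusion; a self-contained sketch with hypothetical document IDs:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse multiple ranked lists: each document scores 1/(k + rank + 1)
    per list it appears in; higher combined score ranks first."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

semantic_hits = ["doc_b", "doc_a", "doc_c"]  # "similar to..." ranking
keyword_hits = ["doc_a", "doc_c", "doc_b"]   # "containing..." ranking
fused = reciprocal_rank_fusion([semantic_hits, keyword_hits])
# doc_a ranks first: it places well in both lists
```

Documents that satisfy both the conceptual and the specific need rise to the top, mirroring how a user would weigh the two kinds of evidence.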
Natural language processing serves as a bridge between human intent and system capability. Rather than forcing users to learn system syntax, modern retrieval systems should:
Query expansion should reflect human associative thinking. When users search for humanitarian initiatives using cash vouchers, they might also be interested in direct in-kind assistance comparisons or other livelihood support.
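The cash-voucher example can be sketched as a simple association-driven expansion (the association map and function name here are hypothetical, for illustration only):

```python
# Hypothetical association map mirroring human associative thinking.
ASSOCIATIONS = {
    "cash vouchers": ["in-kind assistance", "livelihood support"],
}

def expand_query(query: str) -> list[str]:
    """Return the original query plus any associated terms it triggers."""
    expanded = [query]
    for term, related in ASSOCIATIONS.items():
        if term in query.lower():
            expanded.extend(related)
    return expanded

expand_query("humanitarian initiatives using cash vouchers")
# original query plus its in-kind assistance / livelihood associations
```

In a real system the map would be learned or curated per domain rather than hard-coded, but the shape of the expansion is the same.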
This expansion must be:
For narrative outputs (i.e. a classic GPT response), role and style matter significantly. Do you want:
It's not just about the prompt; it's the retrieval system in combination with the UX of that information that makes any of the above styles work well.
Consider a system managing international development documents and reports. Traditional approaches might present a list of documents with (original) static summaries. A human-centric approach instead offers:
This approach acknowledges that users need different levels of detail at different times and supports natural workflow progression.
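That progressive disclosure can be sketched as a tiered document view that reveals more detail on demand (the class and field names are illustrative assumptions, not from any particular system):

```python
from dataclasses import dataclass

@dataclass
class DocumentView:
    """Tiered representation of a retrieved document: users start at the
    headline and drill down only when their workflow needs more."""
    headline: str   # one-line takeaway
    summary: str    # query-aware abstract
    full_text: str  # original source

    def render(self, depth: int) -> str:
        """Return progressively more detail as depth increases (1..3)."""
        levels = [self.headline, self.summary, self.full_text]
        return "\n\n".join(levels[: max(1, min(depth, len(levels)))])

doc = DocumentView("Vouchers cut delivery costs", "Summary...", "Full report...")
doc.render(1)  # headline only
doc.render(3)  # everything, for the deep-dive moment
```

The same retrieval result serves both the executive skimming headlines and the analyst reading the full report.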
OK. So why do most AI-powered knowledge retrieval systems leave something to be desired?
Current AI-powered knowledge retrieval implementations often reveal a fundamental disconnect: sophisticated retrieval mechanisms paired with suboptimal user experiences. We see systems with meticulously fine-tuned embedding models and domain-specific rerankers delivering results through interfaces that fail to serve human workflows effectively.
This disconnect stems from a critical skills-integration challenge. On the one hand, organizations might excel at building powerful retrieval engines but often lack practitioners who deeply understand human information-interaction patterns; on the other, some organizations are brilliant at user-centred principles but have zero experience or knowledge in data science and AI.
I’d argue that in the non-profit and government international development and humanitarian sector the weakness lies in practical UX.
The solution requires more than traditional or theoretical UX expertise (there are way too many theoretical UX “experts” out there). We need doers, experimentalists, practitioners who can bridge the gap between technical capabilities and human utility, understanding both the possibilities and limitations of modern knowledge retrieval systems. This isn't about surface-level interface design—it's about deeply understanding how humans process and use information in real-world contexts.
Success in human-centric knowledge retrieval requires the marriage of two distinct skill sets:
The path forward requires organizations to recognize that effective knowledge retrieval demands equal investment in both technical excellence and human experience design. Only through this integration can we create systems that truly serve human needs while leveraging the full potential of advanced retrieval technologies.
In my experience, if you can find that knowledge-domain specialist with some tech and UI experience, never let them go. Take away their other burdens and pair them with an AI developer.
The future of knowledge retrieval lies in aligning with human cognitive patterns and workflow needs. Key developments will likely include:
The technical sophistication of knowledge retrieval systems must be balanced with human usability considerations. As we continue to develop these systems, the focus should shift from raw retrieval capabilities to enabling effective human decision-making and action.
Successful implementation requires:
The future of knowledge retrieval lies not in pushing technical boundaries alone but in creating systems that make the user say “damn! that was useful”. And they keep coming back.