Baobab Tech Solutions, Language Models  |  October 8, 2024

Rapid Research: using a simple AI agent

At Baobab Tech, we frequently find ourselves needing to conduct quick yet comprehensive scans of various topics to support our work. This need for efficient information gathering and synthesis led us to develop what we call "Rapid Research" – a process that leverages artificial intelligence to dramatically accelerate our ability to gather insights and make informed decisions.

The challenge of information gathering

There is an enormous amount of information out there, and we often need to dive deep into new topics or stay updated on rapidly evolving fields. However, traditional research methods can be time-consuming and often impractical for our needs. This is where our "Rapid Research" process comes into play.

Our "Rapid Research" approach

Our "Rapid Research" process is not formal academic research per se, but rather a structured method for efficiently scanning and synthesizing information on various topics. This approach allows us to gain a comprehensive understanding of complex subjects in a fraction of the time it would take using conventional methods. Here's how we do it:

1. Define the core research objective

We begin by clearly articulating the main goal of our research. This could be understanding a new technology, exploring potential solutions, or investigating a specific perspective in an industry or sub-industry. This overarching objective guides our entire process.

2. Identify key sub-themes or perspectives

We break down the main research topic into smaller, more manageable sub-themes. For instance, if we're exploring the potential of AI in agriculture for a development project, our sub-themes might include:

  • AI applications in crop management
  • Challenges of implementing AI in rural farming communities
  • Environmental impact of AI-driven agriculture
  • Case studies of successful AI integration in developing countries' agriculture

We might perform some quick research to figure out what these themes are.

3. Curate relevant resources

For each sub-theme, we compile a list of 5-20 high-quality resources. These typically include:

  • Industry reports and white papers
  • Academic publications
  • Reputable blogs and news sites
  • Expert interviews or talks
  • Case studies and real-world examples

We focus on recent, authoritative sources that provide valuable insights into our specific sub-themes, and we often use simple web searches or Perplexity.ai to find them.
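
For illustration, here's a minimal sketch of how we might organize the sub-themes and their curated sources in a script. The theme names and URLs below are placeholders, not an actual source list:

# A minimal sketch: each sub-theme pairs a research focus with its curated URLs.
# The themes and URLs are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class SubTheme:
    name: str
    focus: str                       # the perspective we want the analysis to take
    urls: list[str] = field(default_factory=list)

sub_themes = [
    SubTheme(
        name="crop-management",
        focus="AI applications with considerations, risks etc, focused on crop management",
        urls=[
            "https://example.org/report-on-ai-crop-management",
            "https://example.org/case-study-precision-agriculture",
        ],
    ),
    SubTheme(
        name="rural-implementation",
        focus="Challenges of implementing AI in rural farming communities",
        urls=["https://example.org/rural-connectivity-whitepaper"],
    ),
]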

4. AI-assisted analysis and synthesis

This is where we leverage the power of large language models (LLMs) to dramatically accelerate our research process. For each sub-theme:

a) We provide context to the AI, explaining the sub-theme and the specific perspective we want to analyze it from. This is a simple file with the context/focus and the URLs:

<CONTEXT>
AI applications with considerations, risks etc, focused on crop management
</CONTEXT>
<URLS>
...
url1,
url2,
....
</URLS>

b) We feed the AI our curated resources.

c) In that context file, we give clear instructions on the type of analysis and synthesis we're looking for, such as identifying key trends, extracting main arguments, or highlighting potential challenges and opportunities.

d) The AI processes all the information and produces a concise yet comprehensive summary of the sub-theme, aligned with our specified perspective and complete with citations/references.
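
As a rough sketch of what this step can look like in code, the following assumes the Anthropic Python SDK, fetches each URL naively with requests, and uses an illustrative prompt and model ID rather than our exact template:

# Sketch of the per-sub-theme analysis step.
# Assumes the `anthropic` and `requests` packages and an ANTHROPIC_API_KEY in the environment.
# Fetching raw page text with requests is a simplification; a real run would extract
# readable text from the HTML and handle failures per URL.
import anthropic
import requests

client = anthropic.Anthropic()

def analyze_sub_theme(focus: str, urls: list[str]) -> str:
    sources = []
    for i, url in enumerate(urls, start=1):
        body = requests.get(url, timeout=30).text
        sources.append(f"<SOURCE id={i} url={url}>\n{body}\n</SOURCE>")

    prompt = (
        f"<CONTEXT>\n{focus}\n</CONTEXT>\n\n"
        + "\n\n".join(sources)
        + "\n\nSynthesize the sources above into a concise but comprehensive summary "
          "aligned with the CONTEXT. Identify key trends, main arguments, and potential "
          "challenges and opportunities. Cite sources by id and url."
    )

    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model id
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text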

5. Overarching synthesis

Once all sub-themes have been analyzed, we move to the final synthesis stage:

a) We provide the main research context to the AI, clearly articulating our overarching research goal.

b) Instead of feeding in all the original sources again, we provide the AI with the synthesized summaries from each sub-theme.

c) We instruct the AI to produce a cohesive analysis that addresses our main research question, drawing insights from across all sub-themes.

<FOCUS>
AI in agriculture for a development project
</FOCUS>
<PERSPECTIVES>
sub-research summary #1 with citations
sub-research summary #2 with citations
...
</PERSPECTIVES>
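
A similarly hedged sketch of the final synthesis call, again assuming the Anthropic SDK; the prompt wording and model ID are illustrative, not our exact template:

# Sketch of the overarching synthesis step: the inputs are the sub-theme summaries
# from step 4, not the original sources. Assumes the `anthropic` package.
import anthropic

client = anthropic.Anthropic()

def synthesize(focus: str, sub_summaries: list[str]) -> str:
    perspectives = "\n\n".join(
        f"<SUMMARY id={i}>\n{summary}\n</SUMMARY>"
        for i, summary in enumerate(sub_summaries, start=1)
    )
    prompt = (
        f"<FOCUS>\n{focus}\n</FOCUS>\n\n"
        f"<PERSPECTIVES>\n{perspectives}\n</PERSPECTIVES>\n\n"
        "Produce a cohesive analysis that addresses the FOCUS, drawing insights from "
        "across all PERSPECTIVES and preserving their citations."
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model id
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text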

The technology behind our "Rapid Research"

At Baobab Tech, we've implemented this process as a simple script using the following tech stack:

  • Primary LLM: Anthropic's Claude 3.5 Sonnet, OpenAI's GPT-4o, or Meta's Llama 3.1 405B
  • Supporting models: For specific tasks like long document summarization, we sometimes employ smaller, specialized models such as Llama 3.1 70B
  • Orchestration: A basic linear agentic flow to manage the process

The key requirement for our primary model is a large context window (at least 128K tokens) to handle the volume of information being processed.
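
Putting it together, the "basic linear agentic flow" is little more than a loop. The sketch below assumes the SubTheme, analyze_sub_theme, and synthesize helpers from the earlier sketches; when a sub-theme's sources won't fit the context window, we condense the longest documents with a smaller model before the main analysis pass:

# Sketch of the linear agentic flow: analyze each sub-theme in turn, then synthesize.
# Relies on the SubTheme, analyze_sub_theme and synthesize helpers sketched above.
# If a sub-theme's combined sources exceed the primary model's 128K-token window,
# we first summarize the longest documents with a smaller model (e.g. Llama 3.1 70B).

def rapid_research(focus: str, sub_themes: list[SubTheme]) -> str:
    summaries = [analyze_sub_theme(t.focus, t.urls) for t in sub_themes]
    return synthesize(focus, summaries)

final_report = rapid_research(
    focus="AI in agriculture for a development project",
    sub_themes=sub_themes,
)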

Benefits of our "Rapid Research" approach

  1. Time efficiency: By leveraging AI for initial reading and synthesis, we can conduct comprehensive topic scans in hours rather than days or weeks.
  2. Consistency: Our structured approach ensures that each sub-theme is analyzed with equal rigour and from the desired perspective.
  3. Scalability: This method can be applied to both quick, targeted inquiries and more extensive research projects.
  4. Agility: In the fast-moving tech world, our ability to quickly understand and synthesize information on new topics keeps us adaptable and innovative.
  5. Enhanced decision making: The comprehensive yet quick nature of our research process supports faster, more informed decision-making for both our team and our clients.

Limitations and ethical considerations

While our "Rapid Research" process offers significant benefits, we at Baobab Tech are always mindful of its limitations:

  1. AI bias: We recognize that LLMs can inherit biases from their training data. Human oversight is crucial in our process to identify and mitigate these biases.
  2. Depth vs. speed: While our process is excellent for rapid scans and synthesis, it's not a replacement for deep, specialized research when that's what's required.
  3. Source reliability: The quality of our output is heavily dependent on the quality of input sources. Careful curation of resources remains a critical human task in our process.
  4. Contextual understanding: While AI can process and synthesize information quickly, it doesn't possess the contextual understanding that our human experts do, particularly in complex fields like international development.
  5. Ethical use: We're committed to using AI responsibly and transparently. We always verify key findings independently and are clear with our clients about our research methodologies.

Conclusion

At Baobab Tech, our AI-enhanced "Rapid Research" process represents a powerful synergy between human expertise and artificial intelligence. By structuring our approach and leveraging AI's ability to rapidly process and synthesize information, we can tackle complex research tasks with unprecedented efficiency.

This method allows us to stay at the forefront of technological developments and provide cutting-edge solutions to our clients in the development and humanitarian sectors. However, we always remember that this process is a tool to augment human intelligence, not replace it.

As we continue to refine our "Rapid Research" process and as AI technology evolves, we anticipate even more sophisticated research workflows emerging. At Baobab Tech, we're committed to finding the right balance between AI capabilities and human insight, always keeping in mind our ultimate goal: to expand human potential and enable organizations to have greater impact in their crucial work.