AI Personal Network, 2022


Integral allowed users to ask questions about their network by synthesizing information across their email, Slack, and contacts management software. It was built by Linus Lee, Gytis Daujotas, and me as a week-long project to learn about retrieval and synthesis applications of large language models (LLMs). The app consisted of a desktop search bar that generated summarized answers with embedded sources.

Integral was designed to help users:

  • Save time by providing summarized answers rather than lists of links

  • Facilitate trust by providing links to sources

  • Support multi-tasking by letting users explore sources while persisting their search result

Below is Integral’s high-level architecture, as well as the evolution of one of its core prompts:
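At a high level, the app retrieved candidate sources by embedding similarity, filtered them with a relevancy prompt, and synthesized the survivors into one answer. A rough sketch of that flow is below; the helper names and the toy word-overlap scoring are assumptions for illustration, not Integral's actual code:

```python
# Hypothetical sketch of a retrieve-filter-synthesize pipeline.
# score() is a toy stand-in for embedding similarity; is_relevant and
# synthesize stand in for LLM calls using the relevancy and synthesis prompts.

def score(question, source):
    """Stand-in for embedding similarity: fraction of shared words."""
    q = set(question.lower().split())
    s = set(source["update"].lower().split())
    return len(q & s) / max(len(q), 1)

def answer(question, sources, is_relevant, synthesize, top_k=3):
    # 1. Retrieve: rank sources by similarity to the question.
    candidates = sorted(sources, key=lambda s: score(question, s), reverse=True)[:top_k]
    # 2. Filter: keep only sources the relevancy prompt judges relevant.
    relevant = [s for s in candidates if is_relevant(question, s)]
    # 3. Synthesize: summarize the relevant sources into an answer with citations.
    return synthesize(question, relevant), relevant
```

In practice the two callback stages would each be a model call; separating filtering from synthesis keeps irrelevant sources out of the final summarization context.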

Relevancy v12

Designed for OpenAI text-davinci-003

// Provide agent persona
You are an AI assistant who determines whether a source of information is relevant to answering a question. You are thoughtful, thorough, and inquisitive.

// Provide an explicit reasoning process when appropriate. This gives the model as much time as possible to reason before arriving at a decision
You use the following steps to determine if a source is relevant to a question:
1. Identify the key terms in the question.
2. Reason about what the question is asking.
3. Identify the key concepts in the source.
4. Summarize what information is contained in the source.
// Embed chain-of-thought within the reasoning process
5. Think step-by-step about whether the source is relevant to answering the question. Be thorough. Do not provide an explicit answer in this step yet.
// Provide an unambiguous output when appropriate
6. If the source is relevant to the question, return YES. If the source is not relevant, return NO.

// Provide a one-shot structure using a consistent numbering scheme
Use this structure:
1. Identify the question key terms: 
2. Reason about the question:
3. Identify the source key concepts:
4. Summarize the source:
5. Think step-by-step:
6. Return relevancy:

// User-provided question
Question: [question]
// Provide source from embedding model in a common format like JSON
// Remove extraneous information
Source: {
"author_name": "[name]",
"source_type": "[source name]",
"update": "[update]"
}

// Model output below
1. Identify the question key terms: []
2. Reason about the question: []
3. Identify the source key concepts: []
4. Summarize the source: []
5. Think step-by-step: []
6. Return relevancy: [YES or NO]