LangChain is a framework for developing applications powered by language models. [1]

It allows applications to:

  1. Be data-aware: connect a language model to other sources of data
  2. Be agentic: allow a language model to interact with its environment [1]

It provides:

  1. Components: LangChain provides modular abstractions for the components necessary to work with language models, along with collections of implementations for all of these abstractions.
  2. Use-Case Specific Chains: chains can be thought of as assembling these components in particular ways in order to best accomplish a particular use case. [1]


Schema

  • Text: generic interface around text
  • ChatMessages: text that has content and is associated with a user
    • SystemChatMessage: chat message with instructions to the AI system
    • HumanChatMessage: chat message generated by a human
    • AIChatMessage: chat message generated by the AI system
  • Examples: instances of input/output pairs
  • Document: unstructured data, with content and metadata.
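These schema objects can be sketched in plain Python. The class and field names below are illustrative stand-ins, not LangChain's exact definitions:

```python
from dataclasses import dataclass, field

# Illustrative sketch (not LangChain's actual classes): every chat message
# carries text content plus the role it is associated with.
@dataclass
class ChatMessage:
    role: str      # "system", "human", or "ai"
    content: str

# A Document pairs unstructured content with arbitrary metadata.
@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)  # e.g. source, page number

system = ChatMessage(role="system", content="You are a helpful assistant.")
human = ChatMessage(role="human", content="Hello!")
doc = Document(page_content="LangChain is a framework...", metadata={"source": "docs"})
```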

Models

  • LLMs: Large Language Models. Text in, text out.
  • Chat model: Chat Message in, Chat Message out.
  • Text Embedding model: Text in, list of floats out.

Prompts

  • Prompt Value: input for a model
  • Prompt Template: classes in charge of generating PromptValues.
  • Example Selector: Dynamic selector of examples to include in prompts.
  • Output Parser: instructs the model on how output should be formatted, and parses output into the desired format.
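The relationship between templates, prompt values, and output parsers can be sketched in a few lines of plain Python. This is a toy illustration of the idea, not LangChain's actual API:

```python
# Toy prompt template: fills variables into a template string to produce
# the final prompt value sent to the model.
class PromptTemplate:
    def __init__(self, template: str):
        self.template = template

    def format(self, **variables) -> str:
        return self.template.format(**variables)

# Toy output parser: supplies format instructions for the prompt, and
# parses the model's raw text back into structured data.
class CommaSeparatedParser:
    format_instructions = "Answer as a comma-separated list."

    def parse(self, text: str) -> list[str]:
        return [item.strip() for item in text.split(",")]

prompt = PromptTemplate("List three {thing}. " + CommaSeparatedParser.format_instructions)
value = prompt.format(thing="colors")              # the prompt value
parsed = CommaSeparatedParser().parse("red, green, blue")  # pretend model output
```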

Indexes

Indexes refer to ways to structure documents so that LLMs can best interact with them. [2]

  • Document Loaders: Loading documents from various sources.
  • Text Splitters: Splitting text into smaller chunks.
  • VectorStores: Index relying on embeddings.
  • Retrievers: Interfaces for fetching relevant documents.
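The pipeline above (split, embed, store, retrieve) can be sketched end-to-end. The bag-of-words "embedding" here is a deliberately crude stand-in so the example runs anywhere; a real index would use a learned text embedding model:

```python
import math

# Text splitter: break a document into fixed-size word chunks.
def split_text(text: str, chunk_size: int) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

# Toy "embedding": word counts over a tiny vocabulary (illustrative only).
def embed(text: str, vocab: list[str]) -> list[float]:
    return [float(text.lower().split().count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vocab = ["cat", "dog", "fish"]
chunks = split_text("the cat sat down the dog ran off the fish swam away", 4)
store = [(chunk, embed(chunk, vocab)) for chunk in chunks]  # the "vector store"

# Retriever: fetch the chunk whose embedding is most similar to the query's.
def retrieve(query: str) -> str:
    qv = embed(query, vocab)
    return max(store, key=lambda pair: cosine(qv, pair[1]))[0]
```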

Memory

There are two main types of memory: short term and long term.

Short term memory generally refers to how to pass data within the context of a single conversation (generally previous ChatMessages, or summaries of them).

Long term memory deals with how to fetch and update information between conversations. [3]

  • Chat Message History: remembers all previous chat interactions.
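A chat message history is conceptually just an append-only buffer that gets flattened back into context for the next model call. A minimal sketch (illustrative names, not LangChain's API):

```python
# Toy chat message history: append each turn, then render the accumulated
# buffer as context to prepend to the next prompt.
class ChatMessageHistory:
    def __init__(self):
        self.messages: list[tuple[str, str]] = []  # (role, content) pairs

    def add_user_message(self, content: str) -> None:
        self.messages.append(("human", content))

    def add_ai_message(self, content: str) -> None:
        self.messages.append(("ai", content))

    def buffer(self) -> str:
        # Flatten the whole history into text for the next model call.
        return "\n".join(f"{role}: {content}" for role, content in self.messages)

history = ChatMessageHistory()
history.add_user_message("Hi, I'm Bob.")
history.add_ai_message("Hello Bob!")
```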

Chains

Chains is an incredibly generic concept that refers to a sequence of modular components (or other chains) combined in a particular way to accomplish a common use case. [4]

  • Chain: end-to-end wrapper around multiple components
  • LLMChain: Input variables + PromptTemplate + Model + Optional output parser.
  • Index-related chains
    • Stuffing: Stuff all the related data in the prompt as context
    • Map Reduce: Run an initial prompt on each chunk of data, then use a final prompt to combine all the outputs.
    • Refine: Initial prompt on first chunk of data, pass the output and refine with each other chunk of data.
    • Map-Rerank: try to complete the task on each chunk while also outputting a certainty score, then return the highest-scoring response.
  • Prompt Selector: Choose the right prompt for the right model.
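Of the index-related patterns above, Map Reduce is the easiest to see in code. The sketch below uses a stand-in `fake_llm` in place of a real model call, purely to show the two-phase shape:

```python
# Stand-in for a real model call (illustrative only): echo the prompt upper-cased.
def fake_llm(prompt: str) -> str:
    return prompt.upper()

# Map Reduce: run an initial prompt on each chunk of data (map), then use a
# final prompt to combine all the per-chunk outputs (reduce).
def map_reduce(chunks: list[str]) -> str:
    mapped = [fake_llm(f"summarize: {chunk}") for chunk in chunks]  # map step
    return fake_llm("combine: " + " | ".join(mapped))               # reduce step

result = map_reduce(["chunk one", "chunk two"])
```

The Refine pattern would instead thread a single running output through the chunks sequentially, trading the parallelism of the map step for the ability to build on earlier context.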

Agents

[…] there is an “agent” which has access to a suite of tools. Depending on the user input, the agent can then decide which, if any, of these tools to call. [5]

  • Tools: how models interact with other resources. Text in, text out.
  • Agents: the language models. Text in, “action + action input” out.
  • Toolkits: set of tools to be used together for a task
  • Agent Executor: logic that orchestrates agents and tools
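The executor loop can be sketched as follows. Both the agent policy and the tool here are toy stand-ins (a real agent would be a language model emitting the action and action input as text):

```python
# A tool: text in, text out.
def calculator(expression: str) -> str:
    return str(eval(expression))  # toy only; never eval untrusted input

TOOLS = {"calculator": calculator}

# Stand-in agent policy (a real one would be an LLM): call the calculator
# once, then emit a final answer based on the observation.
def fake_agent(question: str, observations: list[str]) -> tuple[str, str]:
    if not observations:
        return ("calculator", "2 + 3")   # (action, action input)
    return ("final", observations[-1])

# Agent Executor: repeatedly ask the agent for an action, run the matching
# tool, and feed the observation back, until the agent finishes.
def agent_executor(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        action, action_input = fake_agent(question, observations)
        if action == "final":
            return action_input
        observations.append(TOOLS[action](action_input))
    return "gave up"

answer = agent_executor("What is 2 + 3?")
```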

References

  1. LangChain Docs

  2. LangChain Docs: Indexes

  3. LangChain Docs: Memory

  4. LangChain Docs: Chains

  5. LangChain Docs: Agents