Anthropic

All functionality related to Anthropic models.

Anthropic is an AI safety and research company, and is the creator of Claude. This page covers all integrations between Anthropic models and LangChain.

Prompting Overview

Claude is a chat-based model, meaning it is trained on conversation data. However, it exposes a text-based API: it takes in a single string, and it expects that string to be in a particular format. It is therefore up to the user to ensure that this is the case. LangChain provides several utilities and helper functions to make sure prompts that you write - whether formatted as a string or as a list of messages - end up formatted correctly.

Specifically, Claude is trained to fill in text for the Assistant role as part of an ongoing dialogue between a human user (Human:) and an AI assistant (Assistant:). Prompts sent via the API must contain \n\nHuman: and \n\nAssistant: as the signals of who's speaking. The final turn must always be \n\nAssistant: - the input string cannot have \n\nHuman: as the final role.
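For illustration, a correctly formatted raw prompt string would look something like the following (a minimal sketch; the joke request is just a placeholder):

// The raw prompt alternates "\n\nHuman:" and "\n\nAssistant:" turns
// and must end with "\n\nAssistant:".
const rawPrompt = "\n\nHuman: Tell me a joke about bears.\n\nAssistant:";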

Because Claude is chat-based but accepts a string as input, it can be treated as either a LangChain ChatModel or LLM. This means there are two wrappers in LangChain - ChatAnthropic and Anthropic. It is generally recommended to use the ChatAnthropic wrapper and format your prompts as ChatMessages (we will show examples of this below). This keeps your prompt in a general format that you can easily reuse with other models (should you want to). However, if you want more fine-grained control over the prompt, you can use the Anthropic wrapper - we will show an example of this as well. Note that the Anthropic wrapper is deprecated, as all of its functionality can be achieved in a more generic way using ChatAnthropic.

Prompting Best Practices

Anthropic models have several prompting best practices that differ from those of OpenAI models.

System Messages may only be the first message

Anthropic models require that any system message be the first message in your prompt.
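For example, when passing a list of messages to the model directly, the system message must come first (a minimal sketch, assuming a ChatAnthropic model instantiated as shown in the next section):

import { SystemMessage, HumanMessage } from "@langchain/core/messages";

// The system message must be the first message in the list.
const messages = [
  new SystemMessage("You are a helpful chatbot"),
  new HumanMessage("Tell me a joke about bears"),
];

await model.invoke(messages);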

ChatAnthropic

ChatAnthropic is a subclass of LangChain's ChatModel, meaning it works best with ChatPromptTemplate. You can install and import this wrapper with the following code:

npm install @langchain/anthropic

import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({});
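Once the model is instantiated, you can call it directly with a string (a minimal sketch; it assumes the ANTHROPIC_API_KEY environment variable is set so the client can authenticate):

// Assumes ANTHROPIC_API_KEY is set in your environment.
const response = await model.invoke("Tell me a joke about bears");
console.log(response.content);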

When working with ChatModels, it is preferred that you design your prompts as ChatPromptTemplates. Here is an example of doing that:

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful chatbot"],
["human", "Tell me a joke about {topic}"],
]);

You can then use this in a chain as follows:

const chain = prompt.pipe(model);
await chain.invoke({ topic: "bears" });
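The chain returns a chat message object; if you only want the model's reply as plain text, you can additionally pipe the model into a string output parser (a minimal sketch using StringOutputParser from @langchain/core):

import { StringOutputParser } from "@langchain/core/output_parsers";

// Parses the chat message into a plain string.
const stringChain = prompt.pipe(model).pipe(new StringOutputParser());
await stringChain.invoke({ topic: "bears" });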