Langchain
Integrate your Langchain application with Hamming.
Before you begin
Follow the Setting up guide to make sure that you have access to the Hamming dashboard and that you have created a secret key.
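You can keep the secret key out of your source code by reading it from an environment variable. Below is a minimal sketch in TypeScript, assuming the key is exported as HAMMING_API_KEY in your shell (the variable name is just an example):

const { Hamming } = require("@hamming/hamming-sdk");

// HAMMING_API_KEY is an assumed environment variable name; use whatever your
// deployment provides instead of hardcoding the key in source.
const apiKey = process.env.HAMMING_API_KEY;
if (!apiKey) {
  throw new Error("HAMMING_API_KEY is not set");
}

const hamming = new Hamming({ apiKey });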
Quickstart - Node.js
Learn how to monitor your AI application with our Hamming TypeScript SDK.
Create a file named langchain-app.js and add the following code:
const { Hamming, HammingCallbackHandler } = require("@hamming/hamming-sdk");
const { ChatOpenAI } = require("@langchain/openai");
const { ChatPromptTemplate } = require("@langchain/core/prompts");

// Setup OpenAI API Key to be used by the Langchain LLM call
process.env.OPENAI_API_KEY = "<your-openai-key>";

const HAMMING_API_KEY = "<your-secret-key>";
const hamming = new Hamming({ apiKey: HAMMING_API_KEY });

// Setup Hamming callback to be used by the Langchain chain
const cb = new HammingCallbackHandler(hamming);

async function run() {
  const model = new ChatOpenAI({ model: "gpt-4-turbo" });

  const systemTemplate = "Translate the following into {language}:";
  const promptTemplate = ChatPromptTemplate.fromMessages([
    ["system", systemTemplate],
    ["user", "{text}"],
  ]);

  const chain = promptTemplate.pipe(model);

  const result = await chain.invoke(
    { language: "italian", text: "hi" },
    { callbacks: [cb] } // Pass the Hamming callback into the chain
  );

  console.log(result.content);
}

run().catch(console.error);
Install dependencies:
npm install @hamming/hamming-sdk langchain @langchain/openai @langchain/core
Run the script by executing the following command in your terminal:
node langchain-app.js
This will create a monitoring item in Hamming with the corresponding LLM trace. You can view the item on the Monitoring page.
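If you would rather not pass the callback on every invoke, LangChain runnables also support withConfig, which binds a config (including callbacks) to the chain once. A minimal sketch reusing the same setup as above; whether you attach the handler per call or via withConfig is purely a matter of preference:

const { Hamming, HammingCallbackHandler } = require("@hamming/hamming-sdk");
const { ChatOpenAI } = require("@langchain/openai");
const { ChatPromptTemplate } = require("@langchain/core/prompts");

// Assumes OPENAI_API_KEY is already set in your environment.
const hamming = new Hamming({ apiKey: "<your-secret-key>" });
const cb = new HammingCallbackHandler(hamming);

const model = new ChatOpenAI({ model: "gpt-4-turbo" });
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "Translate the following into {language}:"],
  ["user", "{text}"],
]);

// Bind the Hamming callback to the chain once; every subsequent invoke is traced.
const chain = prompt.pipe(model).withConfig({ callbacks: [cb] });

async function main() {
  const result = await chain.invoke({ language: "italian", text: "hi" });
  console.log(result.content);
}

main().catch(console.error);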
Quickstart - Python
Learn how to monitor your AI application with our Hamming Python SDK.
Create a file named langchain-app.py and add the following code:
import os
from operator import itemgetter

from hamming import (
    ClientOptions,
    Hamming,
    LangchainCallbackHandler,
)
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser

HAMMING_API_KEY = "<your-secret-key>"

# Setup OpenAI API Key to be used by the Langchain LLM call
os.environ["OPENAI_API_KEY"] = "<your-openai-key>"

hamming = Hamming(ClientOptions(api_key=HAMMING_API_KEY))

# Setup Hamming callback to be used by the Langchain chain
callback_handler = LangchainCallbackHandler(hamming)


def run():
    prompt1 = ChatPromptTemplate.from_template("what is the city {person} is from?")
    prompt2 = ChatPromptTemplate.from_template(
        "what country is the city {city} in? respond in {language}"
    )
    model = ChatOpenAI()

    chain1 = prompt1 | model | StrOutputParser()
    chain2 = (
        {"city": chain1, "language": itemgetter("language")}
        | prompt2
        | model
        | StrOutputParser()
    )

    chain2.invoke(
        {"person": "trudeau", "language": "french"},
        config={"callbacks": [callback_handler]},  # Pass the Hamming callback into the chain
    )


if __name__ == "__main__":
    run()
Install dependencies:
pip install langchain langchain_openai
Run the script by executing the following command in your terminal:
python langchain-app.py
This will create a monitoring item in Hamming with the corresponding LLM trace. You can view the item in the Monitoring dashboard.