Prompt CMS
Store, version, and deploy prompts without code changes.
Goals
- Centralized Storage: Store all your prompts in a single, organized location.
- Version Control: Keep track of changes to your prompts over time, allowing you to revert to previous versions if needed.
- Collaboration: Enable team members to collaborate on prompt creation and optimization.
- Deployment: Deploy prompt changes without needing to modify your codebase.
- Programmatic Access: Access and manage your prompts programmatically through our API.
How it works
You can store, manage, and access your prompts here.
You can create a prompt from scratch by visiting the Prompts page and clicking “Add Prompt”.
You can also save a work-in-progress prompt in the Playground and store it as a prompt.
The slug is the unique identifier for the prompt. You’ll use this slug to fetch a specific prompt from your codebase.
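Slugs are typically lowercase, URL-safe identifiers derived from the prompt name. As an illustration only (the `slugify` helper below is hypothetical, not part of the Hamming SDK; the dashboard assigns the actual slug when you create a prompt), a name like "Greeting Prompt v2" would map to a slug like "greeting-prompt-v2":

```typescript
// Hypothetical helper illustrating a common slug convention.
// The Hamming dashboard assigns the real slug when a prompt is created.
function slugify(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // runs of non-alphanumerics become hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}

console.log(slugify("Greeting Prompt v2")); // "greeting-prompt-v2"
```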
Modifying an existing prompt and saving will create a new version of the prompt.
By default, the Hamming SDK fetches prompts using the production label.
You can tag any version of a prompt with a production or staging label. This is useful when you want to deploy different versions of a prompt to production and staging.
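One common pattern is to resolve the label from the deployment environment, so staging deploys pull staging-tagged prompt versions and everything else falls back to production. This is a sketch: the `labelForEnv` helper is our own, and it assumes the `prompts.get(slug, label)` signature shown elsewhere on this page.

```typescript
// Map a deployment environment to a prompt label. Anything that is not
// explicitly "staging" falls back to the production label, which matches
// the SDK's default behavior.
function labelForEnv(nodeEnv: string | undefined): "production" | "staging" {
  return nodeEnv === "staging" ? "staging" : "production";
}

// Usage with the SDK (signature as shown in the example on this page):
//   const prompt = await hamming.prompts.get("my-prompt-slug", labelForEnv(process.env.NODE_ENV));
console.log(labelForEnv("staging")); // "staging"
console.log(labelForEnv(undefined)); // "production"
```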
This allows you to update prompts instantly, without waiting for the Apple App Store or Google Play review process.
Create a file named main.ts and add the following code:

```typescript
import { Hamming } from "@hamming/hamming-sdk";

const hamming = new Hamming({
  apiKey: process.env.HAMMING_API_KEY!,
});

async function main() {
  // Get all prompts
  const prompts = await hamming.prompts.list();
  console.log(prompts);

  // Get the latest version of the prompt by slug (default label is production)
  const latestPrompt = await hamming.prompts.get("this-is-the-prompt-slug");
  console.log(latestPrompt);

  // Get the production version of the prompt
  const productionPrompt = await hamming.prompts.get("this-is-the-prompt-slug", "production");
  console.log(productionPrompt);

  // Get a particular version of the prompt
  const versionPrompt = await hamming.prompts.get("this-is-the-prompt-slug", undefined, "75c817a240b1");
  console.log(versionPrompt);
}

main();
```
Create a file named main.py and add the following code:

```python
import os

from dotenv import load_dotenv
from hamming import (
    ClientOptions,
    Hamming,
)

load_dotenv()
HAMMING_API_KEY = os.getenv("HAMMING_API_KEY")

hamming = Hamming(ClientOptions(api_key=HAMMING_API_KEY))
prompts = hamming.prompts


def main():
    # Get all prompts
    prompt_list = prompts.list()
    print(prompt_list)

    # Get a prompt by slug
    slug = "greeting-prompt-v2"
    prompt_by_slug = prompts.get(slug)
    print(prompt_by_slug)

    # Get a prompt by label
    label = "production"
    prompt_by_label = prompts.get(slug, label=label)
    print(prompt_by_label)


if __name__ == "__main__":
    main()
```
This example demonstrates how to run prompts using the Hamming SDK. It includes:
- Fetching a prompt by slug
- Running the prompt locally, injecting variables and processing the result
- Using a streaming API for chat completions
```typescript
import dotenv from "dotenv";
dotenv.config();

import { envsafe, str } from "envsafe";
import { Hamming } from "@hamming/hamming-sdk";

const env = envsafe({
  HAMMING_API_KEY: str(),
  OPENAI_API_KEY: str(),
});

const hamming = new Hamming({
  apiKey: env.HAMMING_API_KEY,
  openaiApiKey: env.OPENAI_API_KEY,
});

async function main() {
  // Run a prompt with variables
  const prompt1 = await hamming.prompts.get("get-weather");
  const result = await hamming.openai.createChatCompletion(prompt1, {
    location: "San Francisco",
  });
  console.log("Role:", result.choices[0].message.role);
  console.log("Content:", result.choices[0].message.content);
  console.log("Tool Calls:", result.choices[0].message.tool_calls);

  // Run a prompt using the streaming API
  const prompt2 = await hamming.prompts.get("basic-prompt");
  const result2 = await hamming.openai.createChatCompletionStream(prompt2);
  let content = "";
  for await (const chunk of result2) {
    // Guard against empty delta chunks so "undefined" is never appended
    content += chunk.choices[0]?.delta?.content ?? "";
  }
  console.log(content);
}

main().catch((e) => {
  console.error(e);
  process.exit(1);
});
```
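The variable injection above (`{ location: "San Francisco" }`) works because the stored prompt template references variables by name. As a rough sketch of the idea (the `{{name}}` double-brace syntax and the `renderTemplate` helper are illustrative assumptions, not necessarily Hamming's exact template format), substitution might look like:

```typescript
// Illustrative variable substitution for a prompt template.
// The {{name}} syntax is an assumption for this sketch; consult the
// Hamming docs for the CMS's actual template format.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? vars[key] : match // leave unknown variables untouched
  );
}

const rendered = renderTemplate(
  "What is the weather in {{location}} today?",
  { location: "San Francisco" }
);
console.log(rendered); // "What is the weather in San Francisco today?"
```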