Getting started

You can access our hosted Web Application or request access to the Prompt Optimizer API.

How it works

Given a task description and a few examples, Prompt Optimizer uses LLMs to generate a set of well-structured candidate prompts, evaluates each candidate's performance, and returns the highest-scoring prompt.
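Conceptually, the loop described above can be sketched as follows. This is a minimal illustration only: the function names (`generate_candidates`, `score`, `optimize`) and the placeholder logic are hypothetical, not the actual Prompt Optimizer implementation.

```python
# Minimal sketch of the candidate-generation and ranking loop.
# All names and logic here are illustrative stand-ins, not the real API.

def generate_candidates(task_description: str, examples: list, n: int = 3) -> list:
    # In the real system, an LLM proposes prompt variants; here we fake it.
    return [f"{task_description} (variant {i})" for i in range(n)]

def score(prompt: str, examples: list) -> float:
    # In the real system, each prompt is evaluated against the examples;
    # this placeholder metric just makes the sketch runnable.
    return len(prompt) % 7

def optimize(task_description: str, examples: list) -> str:
    candidates = generate_candidates(task_description, examples)
    # Rank candidates by score and return the best one.
    return max(candidates, key=lambda p: score(p, examples))
```

The key idea is simply generate-then-rank: many candidates are produced, each is scored, and the argmax is returned.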


Describe task

Write a short, natural-language description of the task you want to solve.


Define task inputs and outputs

Define the input and output format for the task, including the data type, structure, and any constraints.

Input and output definitions can also be extracted automatically from the task description.
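For instance, a sentiment-classification task might define its inputs and outputs as below. The field names and nesting are illustrative; the exact schema the tool expects may differ.

```python
# Illustrative input/output definition for a sentiment-classification task.
# Field names here are examples, not the tool's exact schema.
task_io = {
    "input": {
        "review": {
            "type": "string",
            "description": "Customer review text",
        },
    },
    "output": {
        "sentiment": {
            "type": "string",
            # Constrain the output to a fixed set of labels.
            "constraints": {"enum": ["positive", "negative", "neutral"]},
        },
    },
}
```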


Provide examples

Provide 10-20 examples that include inputs and corresponding outputs for the task. These examples are used as a training set for the LLMs.

If you don't have enough labeled data, we can generate synthetic examples based on the task description and input/output format. Providing 2-3 hand-labeled examples will help us generate more realistic synthetic data.
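Each example is simply an input/output pair. In code, a small hand-labeled set for a sentiment task might look like this (the structure is illustrative; match whatever format the tool expects):

```python
# A few hand-labeled input/output pairs for a sentiment-classification task.
# The field names are illustrative, not a required schema.
examples = [
    {"input": {"review": "Great product, works as advertised."},
     "output": {"sentiment": "positive"}},
    {"input": {"review": "Broke after two days."},
     "output": {"sentiment": "negative"}},
    {"input": {"review": "It's okay, nothing special."},
     "output": {"sentiment": "neutral"}},
]
```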


Generate candidate prompts [Automated]

We use LLMs to generate a set of candidate prompts. This step also selects the best examples to include as few-shot demonstrations.


Rank and select the best prompt [Automated]

Candidate prompts are ranked based on their performance. Behind the scenes, we run a thorough evaluation process, and the prompt with the highest score is returned as the best prompt.
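One common way to score a candidate prompt is accuracy over the labeled examples, as sketched below. `run_llm` is a hypothetical stand-in for a model call, and this simple exact-match metric is an assumption, not necessarily the evaluation the service actually runs.

```python
def run_llm(prompt: str, task_input: dict) -> str:
    # Hypothetical stand-in for calling a model with the candidate prompt.
    # A real implementation would send `prompt` plus `task_input` to an LLM.
    return "positive"

def accuracy(prompt: str, examples: list) -> float:
    # Fraction of examples where the model's output matches the label.
    correct = sum(run_llm(prompt, ex["input"]) == ex["output"] for ex in examples)
    return correct / len(examples)

def select_best(candidates: list, examples: list) -> str:
    # Rank all candidates by score and return the highest-scoring prompt.
    return max(candidates, key=lambda p: accuracy(p, examples))
```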

Programmatic access

We're gradually rolling out programmatic access to the Prompt Optimizer. You can use the following end-to-end example to get started.
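An end-to-end call might look like the sketch below. Note that the endpoint URL, payload fields, and response shape are all placeholders, since the public API contract has not been published; consult the API reference for the real details.

```python
import json

# End-to-end sketch of calling a prompt-optimization API over HTTP.
# The URL, payload fields, and response keys are hypothetical placeholders.
API_URL = "https://example.com/v1/optimize"  # placeholder, not the real endpoint

payload = {
    "task_description": "Classify the sentiment of a customer review.",
    "input_format": {"review": "string"},
    "output_format": {"sentiment": "string"},
    "examples": [
        {"input": {"review": "Great product!"},
         "output": {"sentiment": "positive"}},
        {"input": {"review": "Broke after two days."},
         "output": {"sentiment": "negative"}},
    ],
}

body = json.dumps(payload)

# To actually send the request (requires network access and valid credentials):
# import urllib.request
# req = urllib.request.Request(API_URL, data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     result = json.loads(resp.read())  # e.g. result["best_prompt"]
```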

API Reference

For early access to the API, please contact us.

Coming soon