Prompt Chains

This feature is currently in beta. Please reach out to help@usespryngtime.com for access to this feature!

Easily monitor, debug, and iterate on your LLM "prompt chains".

What is a "prompt chain"?

Imagine you have a workflow that first classifies a customer support ticket and then, based on that classification, calls the right next step. This series of chained LLM calls is a "prompt chain".

For example, you might classify a customer support ticket as "Refund request", "Product support", or "Other".

"Refund Request" classification leads to the "Refund" workflow being triggered, like asking for the customer's email to provide a refund in Stripe/Paypal.

"Product support" classification leads to the "Support" workflow being triggered, like RAG retrieval to answer a customer's question.

How to track & debug prompt chains

It's super easy! All you need to do is add a promptChainId to your API calls.

Here's an example:

const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: query }],
    model: 'gpt-3.5-turbo-1106',
    max_tokens: 100,
    promptChainId: "tracking_id_123", // This is how you add a prompt to a prompt chain
    user:"user_123456",
});
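Presumably, later calls in the same workflow reuse the same promptChainId so they're grouped into one chain. A sketch of a follow-up refund step, reusing the id from above:

const refundStep = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'What email should we send your refund to?' }],
    model: 'gpt-3.5-turbo-1106',
    max_tokens: 100,
    promptChainId: "tracking_id_123", // same id as above, so this call lands in the same chain
    user: "user_123456",
});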

Then, log in to the dashboard at https://accounts.usespryngtime.com/sign-in to view your prompt chains!
