
Building Effective Guardrails for LLM Apps on Scout: Crafting a Node-to-Deno Bot

A practical walkthrough of adding input guardrails to a focused LLM application.

Alex Boquist

When developing production LLM applications that accept user input, it’s essential to consider the variety of inputs users might provide and how the application will process them. For instance, you may need to handle out-of-scope inputs by informing users when their requests exceed the application’s capabilities. Without such precautions, users might cause your application to perform numerous irrelevant functions.

In this blog, we will create a straightforward non-RAG chatbot that helps users convert their Node code into its Deno equivalent. We will build this chatbot as an app on Scout. Given its specific purpose, we need to ensure the user’s input is related to Node and Deno before it’s sent to the final LLM prompt for Deno conversion. If you would prefer to watch a video tutorial, scroll to the end.
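To make the goal concrete, here is the kind of transformation we want the bot to perform. The Node snippet uses CommonJS require and the fs module; the Deno version uses Deno.readTextFile. Both are held in strings purely so this example is self-contained, and the Deno output shown is a hand-written example of what we would hope the bot produces, not actual bot output:

```typescript
// The kind of transformation the bot should perform. Both snippets are
// stored as strings so this file runs anywhere; the Deno version is an
// illustrative example of the output we hope the bot generates.
const nodeInput = `const fs = require("fs");
const data = fs.readFileSync("hello.txt", "utf8");
console.log(data);`;

const expectedDenoOutput = `const data = await Deno.readTextFile("hello.txt");
console.log(data);`;

console.log("Node in:\n" + nodeInput);
console.log("\nDeno out:\n" + expectedDenoOutput);
```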

Let’s begin.


General Overview

Firstly, create an app on Scout. If you don’t have a Scout account, you can create one at https://www.scoutos.com.

Once you’ve created an app, it should by default contain one LLM block and one text-type input named input. This input is passed into the blocks below when the app is executed.

Upon clicking the LLM block, a side panel with configuration options will appear. The prompt field will display

Tell a 4 sentence story about {{inputs.input}}.

The prompt input uses Jinja2 syntax and has the app state in scope. Therefore, if we enter “An Alligator” above and run it, the prompt will resolve to Tell a 4 sentence story about An Alligator. Click “run” to see the LLM output.
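To build intuition for how that resolution works, here is a minimal TypeScript sketch of {{dotted.path}} substitution against an app-state object. Scout’s actual engine is Jinja2, which supports far more than variable lookup; this render function is an illustrative stand-in, not a Scout API:

```typescript
// Illustrative stand-in for Jinja2-style variable substitution:
// replaces {{ dotted.path }} with the value looked up in the app state.
function render(template: string, state: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_match, path: string) => {
    let value: unknown = state;
    for (const key of path.split(".")) {
      value = (value as Record<string, unknown>)[key];
    }
    return String(value);
  });
}

const state = { inputs: { input: "An Alligator" } };
const prompt = render("Tell a 4 sentence story about {{inputs.input}}.", state);
console.log(prompt); // Tell a 4 sentence story about An Alligator.
```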

Creating a System Prompt

Let’s proceed to create a system message that establishes the context or instructions for the application. Click the plus icon and select text. This action will generate a text block, essentially a Jinja template. If the text block appears below the output block, simply drag and drop it to the top. Paste the following text into the side-panel text input and rename the slug to system_prompt:

You are an expert in Deno and Node.js. You are an assistant that helps Deno users convert their Node.js code to Deno.

The purpose of this block will soon become apparent.

Create the Guardrail Block

Next, click the “+” sign and create an LLM block. Ensure this block is positioned between the system_prompt and the output block. You may need to drag it. Let’s name this block qualify.

For the model, use gpt-4-turbo, set the temperature to 0, max tokens to 400, and Response Type to JSON.

Paste the following text into the prompt field:

````jinja
user's question: ```{{inputs.input}}```

{{system_prompt.output}} - Detect if the user's question above meets these requirements:

1. It's a question or plain code.
2. It contains JavaScript code.

If the user's request meets these two requirements, return true. Return JSON with two keys: "meets_requirements"(boolean) and "reason"(str). The reason should state why it does or does not meet the requirements. Place the reason first.
````

Here, we are using two variables: inputs.input and system_prompt.output. The output value of any block can be referenced with {{block_slug.output}}. So if we were to run this app now with the input "const x = 10", this prompt would evaluate to:

````text
user's question: ```const x = 10```

You are an expert in Deno and Node.js. You are an assistant that helps Deno users convert their Node.js code to Deno. - Detect if the user's question above meets these requirements:

1. It's a question or plain code.
2. It contains JavaScript code.

If the user's request meets these two requirements, return true. Return JSON with two keys: "meets_requirements"(boolean) and "reason"(str). The reason should state why it does or does not meet the requirements. Place the reason first.
````

From the prompt above, you can see we are instructing it to return a boolean indicating if the user’s question is within the scope of our application. We also instruct it to provide a reason. We have found that by asking it to provide a reason before deciding true or false, we get more accurate results.

Try running it now. The output of that LLM node should resemble the following:

```json
{
  "reason": "The user's request is plain code and contains JavaScript code.",
  "meets_requirements": true
}
```

Now that we have the LLM deciding if the user’s input is within scope, we can use that boolean to determine what to send as the final request to the LLM. The LLM’s response is then sent back to the user.
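Outside of Scout, the same gate can be expressed in a few lines. This sketch assumes a raw JSON string shaped like the response above; the variable names and routing labels here are illustrative, not part of Scout:

```typescript
// Hypothetical raw response from the qualify block (Response Type: JSON).
const raw = `{
  "reason": "The user's request is plain code and contains JavaScript code.",
  "meets_requirements": true
}`;

interface QualifyResult {
  reason: string;
  meets_requirements: boolean;
}

// Parse the guardrail verdict and route accordingly: only in-scope
// requests are forwarded to the final Deno-conversion prompt.
const verdict: QualifyResult = JSON.parse(raw);
const route = verdict.meets_requirements ? "convert" : "reject";
console.log(route); // convert
```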

Create the Output Block

Now, let’s reopen the output node.

Select gpt-4-turbo, set temperature to 0, and max tokens to 500. Paste the prompt below into the prompt field.

````jinja
{% if qualify.output.meets_requirements %}
{{system_prompt.output}}:

Help the user with their question about converting Node to Deno.

Here is the Node: ```{{inputs.input}}```

Output the equivalent in the Deno environment:

{% else %}

{{system_prompt.output}}:

Inform the user that their request wasn't a question about converting Node to Deno.

{% endif %}
````

Let’s break this down.

```jinja
{% if qualify.output.meets_requirements %}

{% else %}

{% endif %}
```

We have a simple if-else Jinja block here. The condition will evaluate to true if qualify.output.meets_requirements is true. This is the boolean that the LLM above generated. If it evaluates to false, the text after the else will render.

If true,

````jinja
{{system_prompt.output}}:

Help the user with their question about converting Node to Deno.

Here is the Node: ```{{inputs.input}}```

Output the equivalent in the Deno environment:
````

If false:

```text
Inform the user that their request wasn't a question about converting Node to Deno.
```

As you can see, if the user’s question is determined to be out of scope, we won’t send it at all to the final LLM.
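The same branching, written as plain TypeScript instead of a Jinja template, makes the control flow explicit. buildFinalPrompt is an illustrative helper, not a Scout function; it mirrors the template above under the assumption that the guardrail boolean has already been parsed:

```typescript
// Mirrors the if/else Jinja template: the user's input only reaches
// the final prompt when the guardrail marked it as in scope.
function buildFinalPrompt(
  systemPrompt: string,
  userInput: string,
  meetsRequirements: boolean,
): string {
  const fence = "`".repeat(3); // triple backticks around the user's code
  if (meetsRequirements) {
    return (
      systemPrompt + ":\n\n" +
      "Help the user with their question about converting Node to Deno.\n\n" +
      "Here is the Node: " + fence + userInput + fence + "\n\n" +
      "Output the equivalent in the Deno environment:"
    );
  }
  return (
    systemPrompt + ":\n\n" +
    "Inform the user that their request wasn't a question about converting Node to Deno."
  );
}

console.log(buildFinalPrompt("You are an expert in Deno and Node.js.", "const x = 10", true));
```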

Now, try it out with in-scope and out-of-scope inputs to see how it performs!

Conclusion

In this blog, we’ve guided you through building a chatbot capable of converting Node code to Deno using the Scout platform. With the help of guardrails ensuring the user’s input is within scope, we’ve demonstrated how to create a focused and efficient LLM application. Remember, the key to a successful chatbot is understanding your user’s needs and continuously refining your tool to meet these needs.

Also, by clicking the chat icon on the top right, you can chat with the bot you just created :), but it won’t have chat memory. Scout apps have memory built in by default, but it’s beyond the scope of this blog to show how to use it to build prompts.

I hope you’ve found this blog helpful. If there are any other tutorials or concepts you would like us to explore, please let us know!

-Alex

https://www.scoutos.com

