
How to Build an AI-Powered PDF Analyzer


Zach

If you need an efficient way to process and summarize content from PDF documents, building a PDF Analyzer with Scout is a great solution. Who can benefit from this?

  • Lawyers and legal professionals who need to quickly extract and summarize key points from case files, legal briefs or contracts.
  • Project managers who need a quick summary of a new proposal or compliance documents.
  • Scientists and lab technicians who need to review scientific papers or technical protocols.
  • Anyone learning a new topic who needs a walkthrough of the material.

After you have signed in to Scout and created an organization, the first step is to create a Collection where you can store your PDFs. Creating a new Collection is as simple as clicking the “Collections” option in the side menu and then clicking the “New Collection” button to name your Collection.

With the Collection created, we can start on our App. Click the “+” icon in the side menu to create a new App. After naming your App, we can proceed to the App configuration.

On the App Builder dashboard, you can start adding the necessary blocks. For a simple PDF analyzer app, the setup is fairly straightforward, with only two blocks: a Collection block that retrieves content from the PDFs you stored and an LLM (Large Language Model) block to summarize that content.

To add new blocks, click the “+” icon in the center of the page and add a Collection block. To configure the Collection block, it’s advised to rename its Slug, which is just an identifier for the block, to a short, descriptive term. In this walkthrough we’ll use “pdf”, which is how the prompt later in this guide refers to the block’s output ({{pdf.output}}).

Next, you will want to select the Collection where you are storing your PDFs. The “Limit” refers to how many “chunks” of information are returned and can be adjusted to your liking. You may need to increase it, especially for longer documents.

The “Minimum Similarity” allows you to fine-tune how closely the documents returned from the Collection need to match the search query. This can also be adjusted to your liking; however, we’ve found that a value between 0.6 and 0.7 works well for most applications.
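If it helps to picture what “Limit” and “Minimum Similarity” control, here is a minimal, purely illustrative Python sketch of top-k retrieval with a similarity threshold. This is not Scout’s internal implementation; the function name and embedding inputs are hypothetical.

import numpy as np

# Illustration only: rank PDF chunks against a query embedding to show what
# "Limit" (top-k) and "Minimum Similarity" (threshold) mean conceptually.
def top_chunks(query_vec, chunk_vecs, chunks, limit=5, min_similarity=0.65):
    # Cosine similarity between the query and every stored chunk embedding.
    sims = chunk_vecs @ query_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    # Sort by similarity, drop anything below the threshold, keep at most `limit`.
    ranked = sorted(zip(sims, chunks), key=lambda pair: pair[0], reverse=True)
    return [chunk for sim, chunk in ranked if sim >= min_similarity][:limit]

Raising “Limit” hands more chunks to the LLM block; raising “Minimum Similarity” makes retrieval stricter about relevance.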

Lastly, you’ll need to add the input variable into the query field of the collection block:

{{inputs.input}}
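For context, {{inputs.input}} is a template variable: whatever the user types into your app is substituted here and becomes the search query the Collection block runs. As a rough sketch, assuming Jinja-style templating (which the {% ... %} tags in the prompt later in this guide suggest):

from jinja2 import Template

# Sketch of the substitution: the user's message becomes the Collection query.
query = Template("{{ inputs.input }}").render(
    inputs={"input": "Summarize the termination clause in this contract."}
)
print(query)  # -> Summarize the termination clause in this contract.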

With that done, your app should look something like this:

Now, let’s configure the LLM block.

First, select your model; we recommend using OpenAI’s GPT-4o. Next, you can adjust the temperature, which affects the diversity and creativity of the generated text. A lower temperature value makes the model more conservative, while a higher temperature value increases the randomness. For this application, a lower temperature is more appropriate.

Max Tokens caps the length of the response; for longer answers, try increasing it. A value of 400–600 works well for most uses.

And for Response Type we’ll select “text”.
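Scout makes the model call for you, so none of the following is required. But if you want intuition for where these settings land, here is roughly how they map onto a direct call to OpenAI’s chat completions API; a sketch only, with values taken from the suggestions above.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-4o",    # the model selected in the LLM block
    temperature=0.2,   # low temperature: conservative, factual summaries
    max_tokens=500,    # within the 400-600 range suggested above
    messages=[
        {"role": "system", "content": "Summarize the provided document excerpts."},
        {"role": "user", "content": "(document chunks and question go here)"},
    ],
)
print(response.choices[0].message.content)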

The last step is to configure the prompt field. This is where we instruct the model on how to process the input text and generate a summary. You’re welcome to experiment with different prompts, but if you’re looking for a plug-and-play solution, we have found that this prompt works well:

$SYSTEM: You are the [Tech Company] Assistant, a friendly developer relations assistant for [Tech Company].

{% set system_message %}
You provide users with accurate and reliable information based on the 
documentation's opinion, which is delimited by ~. 

If the documentation's opinion or previous assistant messages do not provide a clear answer to the user's inquiry, you don't try to answer the question and you let the user know that you are not sure. You only provide code examples that are found explicitly in the documentation's opinion. Only give information that is explicitly supported by the documentation's opinion. 

If the user is having a technical issue and you don't know the answer, you provide debugging advice, or ask follow-up questions. 

Documentation's opinion states:  ~ {{pdf.output}} ~   

Based on the provided documentation's opinion, please answer the following question and think through the answer, based on the documentation's opinion, step by step. When you refer to the documentation, refer to it simply as 'the documentation' and do not use the word 'opinion'. The documentation is not a part of the conversation; only use it if it helps answer the user's question.
{% endset %}

{% if not chat_history %}
$SYSTEM: {{ system_message }}
{% endif %}
{% for message in chat_history %}
{% if message.role != "system" %}
{% if loop.last %}
$SYSTEM: {{ system_message }}

{% endif %}
${{message.role | upper}}: {{message.content}}
{% endif %}
{% endfor %}

$USER: {{ inputs.input }}
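If you’re curious how the {% ... %} logic above behaves, here’s a small sketch, assuming Jinja-style rendering (which the tag syntax suggests): on the first turn there is no chat history, so the system message leads the prompt; on later turns it is re-inserted just before the most recent message so the instructions stay close to the end of the context.

from jinja2 import Template

# Simplified version of the template above, rendered with a fake chat history.
template = Template(
    "{% if not chat_history %}$SYSTEM: {{ system_message }}\n{% endif %}"
    "{% for m in chat_history %}{% if m.role != 'system' %}"
    "{% if loop.last %}$SYSTEM: {{ system_message }}\n{% endif %}"
    "${{ m.role | upper }}: {{ m.content }}\n"
    "{% endif %}{% endfor %}"
    "$USER: {{ user_input }}"
)

print(template.render(
    system_message="You answer only from the documentation.",
    chat_history=[
        {"role": "user", "content": "What does section 2 cover?"},
        {"role": "assistant", "content": "Section 2 covers data retention."},
    ],
    user_input="Summarize the whole document.",
))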

With the addition of your prompt, your app should look something like this:

You’re now ready to test your new app! Based on the results, you may need to adjust the prompt, temperature, or other configurations to improve the quality of the summary.

The best way to interact with your new AI-powered PDF analyzer is through a chat window. Clicking the “text bubble” icon at the top right of the page will open one for easy interaction with your new app.

And there you have it! With these steps, your PDF Analyzer app should be fully functional and ready to make your document processing tasks much easier.

Start building with Scout and transform the way you handle documents. If you have any questions or need further assistance, drop into our community Slack, where our users and staff can lend a hand.

