OpenAI’s Assistants API: A Beginner’s Guide to Elevate Your App

Let’s level-set before we begin. I’m a beginner, just like you, when it comes to using AI and LLMs (large language models). I got into this space because we’re currently at a junction in the tech industry. I remember the first time I used ChatGPT; it felt like ✨magic.✨ It gave me an answer to an obscure problem I had been trying to solve for weeks.

Armed with a desire to be part of the magic, a hunger for new skills, and one semester of Python, I started playing around with OpenAI’s Assistants API. Not too long after that, I was able to work alongside exceptional engineers to create a beta app, at the heart of which was our use of OpenAI’s APIs.💚 I won’t give away our secret recipe, although you’ll get a sneak peek below.👀 Using OpenAI, we built a fully functional beta in just 6 weeks that replaced an app that took 3 years to build, comprised thousands of lines of code, and behaved less accurately than our beta.

Long story short, am I the most qualified to talk about this subject? No, but do I understand what it feels like to have no idea where to start when trying to integrate AI and LLMs into your app? Yes. Let’s dive into a scenario where I’ll walk you through how to use the Assistants API to elevate your app.

❗️Disclaimer: This is a starting point for you to iterate on, so it’s not perfect or production-ready. I’m sure the code will be out of date as soon as OpenAI releases a new version, since the Assistants API is in beta.

How do you determine what is a good use case for this tech?

The way I think about it: what user action involves natural language but is highly specific to each user? With that question in mind, the problem we’re going to solve below is how to analyze a user’s file and draw conclusions about the data inside it.

For the workspace, I do all my proofs of concept in a Jupyter notebook and the OpenAI UI.

Step 1: Create your OpenAI Client

from openai import OpenAI

# Create the client once and reuse it for every call below.
client = OpenAI(api_key="<insert your API key here>")
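
If you’d rather not hard-code the key (especially in a notebook you might share), here’s a minimal sketch that reads it from an environment variable instead; the client looks for OPENAI_API_KEY by default:

import os
from openai import OpenAI

# Export OPENAI_API_KEY in your shell or notebook environment first;
# OpenAI() also picks it up automatically if you pass no api_key at all.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])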

Step 2: Upload your file

file = client.files.create(
  file=open("File1.csv", "rb"),
  purpose='assistants'
)
print(file)
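
If you want to sanity-check the upload before wiring it into an Assistant, you can pull the file’s metadata back (just the Files API, nothing new):

# Retrieve the uploaded file's metadata to confirm it landed with the right purpose.
uploaded = client.files.retrieve(file.id)
print(uploaded.filename, uploaded.bytes, uploaded.purpose)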

Step 3: Create your assistant

As shown below, there are several tools available for an Assistant. I’ve chosen to use code_interpreter to solve this problem because file search has a known constraint around summarization (ref). So we’re going to try to get the LLM to use pandas and other libraries to do an in-depth analysis of the file.

my_assistant = client.beta.assistants.create(
    instructions="You are an expert accountant, tasked with reading and analyzing General Ledger files", 
    name="Expert File Analyzer Assistant", 
    tools=[{"type": "code_interpreter"}],
    tool_resources={
        "code_interpreter": {
          "file_ids": [file.id]
        }
      },
    model="gpt-4o"
)
print(my_assistant)
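
One thing that saved me time: if you want to tweak the instructions later, you don’t need to recreate the assistant. A minimal sketch using the update endpoint (the new instructions string here is just an example):

# Update the existing assistant in place, e.g. after refining the instructions.
my_assistant = client.beta.assistants.update(
    my_assistant.id,
    instructions="You are an expert accountant. Analyze General Ledger files and report your findings concisely.",
)
print(my_assistant.instructions)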

Step 4: Create your thread

thread = client.beta.threads.create()
print(thread)
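
Side note: the API also lets you create the thread and its first message in one call. A sketch of that shape, though I keep them as separate steps here so each piece is easier to debug:

# Alternative: create a thread that already contains the first user message.
thread_with_message = client.beta.threads.create(
    messages=[
        {
            "role": "user",
            "content": "Analyze the attached General Ledger file.",
            "attachments": [
                {"file_id": file.id, "tools": [{"type": "code_interpreter"}]}
            ],
        }
    ]
)
print(thread_with_message.id)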

Step 5: Create your prompt and message

Prompt Engineering is a challenging and daunting task; it’s a brand-new field and there’s limited documentation available. I’ve provided a starter prompt below that I’ve tested and iterated on. Your prompt is going to be the most important part of solving your specific problem. For some ideas, you can visit: https://cookbook.openai.com/

prompt = f"""
    **Role**: You are an expert accountant analyzing General Ledger files.
    You are provided with a General Ledger file attached to this assistant.
    Each transaction has a date, 2 or more reference fields, and an amount.
    **Task**:
    You are to open the file and analyze the data inside.
    **Instructions**:
    1. Open the attached file and load it into the code interpreter using your library of choice.
    2. Identify the column that contains dates.
    3. Identify the column that contains amounts.
    4. Identify the row where transactional data begins.
    5. Analyze all the columns except the date and amount columns and make a recommendation on which column should be the primary reference field (reference 1).
    6. Analyze all the columns except the date and amount columns and make a recommendation on which column should be the secondary reference field (reference 2).
    Respond with only the output summary using the following template.
    ***Output***
    1. The Date column is found in column
    2. The Amount column is found in column
    3. Transactional Data begins in row
    4. My recommendation for reference 1 is
    5. My recommendation for reference 2 is
"""
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content=prompt,
    attachments=[
        {
            "file_id": file.id,
            "tools": [{"type": "code_interpreter"}]
        }
    ]
)
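
Before kicking off a run, I like to double-check what’s actually sitting on the thread. A quick sketch that lists the messages (newest first by default):

# List the messages currently on the thread to confirm the prompt was added.
messages = client.beta.threads.messages.list(thread_id=thread.id)
for message in messages:
    print(message.role, message.content[0].text.value[:200])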

Step 6: Run and get your response

from typing_extensions import override
from openai import AssistantEventHandler
from openai.types.beta.threads import Text

# Collects the assistant's text output (and any errors) as the run streams back.
class EventHandler(AssistantEventHandler):
    def __init__(self):
        super().__init__()
        self.text_responses = []
        self.error = None

    def get_last_text_response(self):
        return None if len(self.text_responses) == 0 else self.text_responses[-1]

    @override
    def on_text_done(self, text: Text) -> None:
        self.text_responses.append(text.value)

    @override
    def on_exception(self, exception: Exception):
        self.error = str(exception)

    @override
    def on_timeout(self):
        self.error = "Timeout error"

event_handler = EventHandler()

# Stream the run and block until it finishes.
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=my_assistant.id,
    event_handler=event_handler,
) as stream:
    stream.until_done()

if event_handler.error:
    print(event_handler.error)

summary = event_handler.get_last_text_response()
print(summary)
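
If you don’t need streaming, the SDK also has a create_and_poll helper that blocks until the run reaches a terminal status. A minimal sketch of that alternative:

# Create a run and poll until it completes, then read the latest assistant message.
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=my_assistant.id,
)
if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    # Messages come back newest first, so the first one is the assistant's reply.
    print(messages.data[0].content[0].text.value)
else:
    print(f"Run ended with status: {run.status}")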

Step 7: Iterate on your code

Modify your prompt and/or the number of messages sent to the Assistant by repeating Steps 5 and 6 until the output you receive solves your problem consistently. To help me refine my prompt, I use the Threads view in the OpenAI UI. This helps me see how the LLM is thinking and lets me test different follow-up prompts. Something we’ve noticed: sometimes the Assistant gets confused if you give it too much info.
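
For example, here’s a minimal sketch of a follow-up message on the same thread, reusing the EventHandler from Step 6 (the follow-up question is just an illustration):

# Add a follow-up message to the existing thread...
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Explain how you chose the reference 1 column, in two sentences.",
)

# ...then run the assistant again on the same thread.
event_handler = EventHandler()
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=my_assistant.id,
    event_handler=event_handler,
) as stream:
    stream.until_done()
print(event_handler.get_last_text_response())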

While you’re iterating, it’s quite normal to go through 4 stages of prompting (there may be more or fewer, but these four are how I constantly feel while working in this space).

Stage 1: Exploration and Confusion

Stage 2: Frustration

Stage 3: Contemplating the meaning of life

Stage 4: The Eureka moment when you finally get all the pieces right

Step 8: Clean up

client.files.delete(file.id)
client.beta.threads.delete(thread.id)
client.beta.assistants.delete(my_assistant.id)
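
If you’re in a notebook where an earlier cell might have failed, here’s a slightly more defensive version of the same cleanup (purely a convenience sketch):

# Delete each resource independently so one failure doesn't block the others.
for cleanup in (
    lambda: client.files.delete(file.id),
    lambda: client.beta.threads.delete(thread.id),
    lambda: client.beta.assistants.delete(my_assistant.id),
):
    try:
        cleanup()
    except Exception as error:
        print(f"Cleanup skipped: {error}")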

I hope this blog post helps you take the leap into LLMs and the AI space. Even though coding in this space is very difficult, the rewards and enhancements to your app will take you to the next level and help you solve problems faster, and more specifically for each customer, than ever before! Happy Coding!👩‍💻😊

Ewa Oszajec

Ewa is a Software Engineer III at FloQast with a passion for solving challenging problems and developing new skills. When she’s not coding, she enjoys traveling, weightlifting, hiking, and playing with dogs.


