LLM OpenAI

Flows have many use cases, and not all of them require the use of large language models. That’s why this is an optional component. Connecting an LLM component allows you to pick LLM models and control the Generator output. The first and most popular one is the LLM OpenAI which connects you to ChatGPT.

What is the LLM OpenAI component?

The LLM OpenAI component connects ChatGPT to your flow. It works alongside the Generator: while the Generator is where the magic happens, the LLM-type components let you control how it happens. The Generator uses ChatGPT-4o by default. If you wish to change the model or limit its generation capabilities, connect this component.

LLM OpenAI component in FlowHunt

The LLM OpenAI component can be found in the LLMs category of the flows editor. It contains these settings:

Max Tokens

Tokens represent the individual units of text the model processes and generates. Token usage varies across models, and a single token can be anything from a word or subword down to a single character. Models are usually priced per million tokens.
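Per-million-token pricing is simple arithmetic. The sketch below shows the calculation; the dollar rates used are hypothetical placeholders, not actual OpenAI prices:

```python
# Rough cost estimate for per-million-token pricing.
# The rates passed in below are hypothetical, not real OpenAI prices.
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the request cost in dollars, given per-million-token prices."""
    return ((input_tokens / 1_000_000) * input_price_per_m
            + (output_tokens / 1_000_000) * output_price_per_m)

# e.g. 3,000 input tokens and 1,000 output tokens at hypothetical rates:
cost = estimate_cost(3_000, 1_000, input_price_per_m=0.50, output_price_per_m=1.50)
print(f"${cost:.4f}")  # → $0.0030
```

Input and output tokens are priced separately because most providers charge more for generated tokens than for prompt tokens.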

The max tokens setting limits the total number of tokens that can be processed in a single interaction or request, ensuring responses stay within reasonable bounds. The default limit is 4,000 tokens, which is well suited to summarizing documents and combining several sources into an answer.
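Under the hood, this setting corresponds to a token-cap parameter passed with each model request. Here is a minimal sketch of an OpenAI-style chat request payload, assuming the parameter names of the openai Python SDK; it only builds the payload, since actually sending it would require the SDK and an API key:

```python
# Sketch of an OpenAI-style chat request with a token cap (payload only).
def build_chat_request(prompt: str, model: str = "gpt-4o",
                       max_tokens: int = 4000) -> dict:
    """Assemble request parameters; max_tokens bounds the generated response,
    mirroring the component's default limit of 4,000 tokens."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

request = build_chat_request("Summarize this document.")
```

If a response would exceed the cap, the model stops generating at the limit rather than failing, which is why a sensible default matters.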

Model Name

OpenAI offers models of varying capability, each with different pricing. For example, the older, less advanced GPT-3.5 costs less than the newest GPT-4o, but the quality and speed of its output will suffer.

Temperature

Temperature controls the variability of answers, ranging from 0 to 1.

A low temperature of 0.1 will make responses concise and to the point, but potentially repetitive and lacking in detail.

A high temperature of 1 allows for maximum creativity in answers but increases the risk of irrelevant or even hallucinated responses.

For example, the recommended temperature for a customer service bot is between 0.2 and 0.5. This range keeps answers relevant and on-script while allowing a natural level of variation in responses.
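To see why temperature has this effect, it helps to know that it rescales the model's scores before a token is sampled. The following self-contained sketch uses an illustrative softmax over three hypothetical token scores (this is the general mechanism, not FlowHunt internals):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature, then normalize into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
low = softmax_with_temperature(logits, 0.1)   # near-deterministic: top token dominates
high = softmax_with_temperature(logits, 1.0)  # flatter: sampling is more varied
```

At temperature 0.1 the top-scoring token takes almost all of the probability mass (repetitive, to-the-point output); at 1.0 the distribution flattens and lower-scoring tokens get sampled more often (creative but riskier output).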

How to connect the LLM OpenAI component to your flow

You’ll notice that the LLM component only has a handle on the right side. Nothing connects into it; it only connects out to other components, which typically alter the output further or merge it with other sources.

Flow connected to OpenAI

LLMs can connect to these categories:

Generator

The Generator connects the user’s Chat Input with the settings from the LLM component and runs them through the LLM to create a response.

Splitters

These are methods to enhance the accuracy of output. They can use the input to create similar follow-up questions, decompose complex user queries, or expand on simple ones to provide more information.

Frequently Asked Questions

  • What are LLMs?

    Large language models are types of AI trained to process, understand, and generate human-like text. A common example is ChatGPT, which can provide elaborate responses to almost any query.

  • Can I connect an LLM straight to Chat Output?

    No, the LLM component is only a representation of the AI model. It changes the model the Generator will use. The default LLM in the Generator is ChatGPT-4o.

  • What LLMs are available in Flows?

    At the moment, only the OpenAI component is available. We plan to add more in the future.

  • Do I need to add an LLM to my flow?

No, Flows are a versatile feature with many use cases that don’t require an LLM. Add one if you want to build a conversational chatbot that generates text answers freely.

  • Does the LLM OpenAI component generate the answer?

Not really. The component only represents the model and sets the rules for it to follow. It’s the Generator component that connects it to the input and runs the query through the LLM to create the output.
