
Understanding Tokens

One of the unique values of GPT models comes from their ability to "remember" the context of a conversation. However, there is a limit to how much text a model can analyze - both as input and output - at one time. If a conversation goes on too long, for example, the model may begin to "forget" context you discussed earlier.

This context limit is normally measured in tokens, which are words or sub-parts of words. For example, “eating” might be broken into two tokens: “eat” and “ing”. On average, each word costs around 1.3 tokens, so a 750-word document would cost roughly 1,000 tokens to process.
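If you want an exact count for a specific piece of text rather than an estimate, a tokenizer library can give you one. The sketch below assumes the open-source tiktoken package and the cl100k_base encoding; the encoding your model uses may differ.

```python
# A minimal sketch of counting tokens, assuming the open-source
# tiktoken package (pip install tiktoken). The cl100k_base encoding
# is used by several recent GPT models; yours may differ.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

text = "Eating breakfast before a long meeting is a good idea."
tokens = encoding.encode(text)

print(f"{len(text.split())} words -> {len(tokens)} tokens")
```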

As a result, it's important to keep tokens in mind when using tools like Composer to ensure the model doesn't lose its place. In this article we will go over best practices for conserving tokens.

Limit your prompt length

It's important to get to the point when working with a GPT. Your overall token count includes the prompt and any context you provide, as well as the length of the response. This is especially important if you expect a long answer.
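One way to keep this front of mind is to budget tokens before sending a prompt. The sketch below is illustrative only: the 4,096-token context window, the estimate_tokens helper, and the 1.3 tokens-per-word ratio (the rough average mentioned above) are all assumptions you should adjust for the model you actually use.

```python
# Illustrative token budgeting. The 4096-token context window is an
# assumption; check the actual limit for the model you are using.
CONTEXT_WINDOW = 4096

def estimate_tokens(text: str) -> int:
    """Rough estimate using the ~1.3 tokens-per-word average."""
    return int(len(text.split()) * 1.3)

def fits_in_context(prompt: str, expected_response_tokens: int) -> bool:
    """True if the prompt plus the expected reply fit in the window."""
    return estimate_tokens(prompt) + expected_response_tokens <= CONTEXT_WINDOW

prompt = "Summarize the attached meeting notes in three bullet points."
print(fits_in_context(prompt, expected_response_tokens=200))
```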

Control the length of your responses

For longer conversations, you can ask the model to limit the size of its response. For example, append "Be concise." to the end of your prompt, or specify the number of bullet points the response should contain.
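If you are calling the model through an API rather than a chat interface, you can also cap the reply length directly. The sketch below assumes the official openai Python package (v1-style client), an API key in the OPENAI_API_KEY environment variable, and an example model name; max_tokens puts a hard ceiling on the size of the response.

```python
# A minimal sketch, assuming the openai Python package (v1-style client)
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        {"role": "user", "content": "List three ways to conserve tokens. Be concise."},
    ],
    max_tokens=150,  # hard cap on the length of the reply
)

print(response.choices[0].message.content)
```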

Use clear prompting

Being specific helps keep the model focused on the job you have in mind for it, which in turn limits the number of tokens used in your conversation.

You can even create templates for yourself if you frequently ask similar types of questions or reuse similar prompts.
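As a sketch, a plain string template is enough to keep recurring prompts short and consistent; the field names below are purely illustrative.

```python
# A simple reusable prompt template; the field names are illustrative.
from string import Template

SUMMARY_PROMPT = Template(
    "Summarize the following $doc_type in $bullet_count bullet points. "
    "Focus only on $focus.\n\n$text"
)

prompt = SUMMARY_PROMPT.substitute(
    doc_type="meeting notes",
    bullet_count=3,
    focus="action items",
    text="...",  # the document to summarize goes here
)

print(prompt)
```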

Balance complex queries

When working with large amounts of text in longer conversations, it can become easy for a model to "forget" earlier context. Remember, you can ask the model questions about the previous responses it has provided, so asking it to summarize them can help it hold onto the important points longer (a sketch of this pattern follows the example prompts below).

Some example prompts:

  • "Summarize your previous reply while keeping the important points."
  • "Abstract the main arguments in your previous response."