Class MotorLargeRequestText

java.lang.Object
com.chatmotorapi.api.MotorLargeRequestText

public class MotorLargeRequestText
extends Object
Facilitates the creation and execution of large-sized requests for the ChatMotor API using a builder pattern for flexible and reliable configuration. This class is ideal for scenarios requiring the transmission of large volumes of unstructured text (e.g., books, reports). It handles large input by splitting it into smaller chunks, submitting them consecutively to the ChatMotor, and aggregating individual responses into a final comprehensive response.

The model used in requests can be specified with the `.aiModel` setter on the `MotorLargeRequestText` builder. If `.aiModel` is not set, the default model used is determined by the environment variable `MOTOR_CHAT_MODEL`. If this environment variable is not set, the default model is MotorDefaultsModels.MOTOR_CHAT_MODEL.
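As an illustrative sketch (continuing the builder usage described above; the model name "gpt-4-turbo" is an assumption for illustration, not a documented default), pinning the model explicitly might look like:

```java
// Illustrative sketch: pin the model explicitly instead of relying on
// the MOTOR_CHAT_MODEL environment variable or the library default.
// The model name below is an assumption, chosen for illustration only.
MotorLargeRequestText request = MotorLargeRequestText.builder()
    .chatMotor(chatMotor)
    .aiModel("gpt-4-turbo")           // takes precedence over MOTOR_CHAT_MODEL
    .systemMessage(motorSystemMessage)
    .filePath("/path/to/input.txt")
    .build();
```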

Example:

This example processes a large, unstructured text, splitting it on complete sentence boundaries and applying the same query to each portion. Note that the input file is divided into multiple chunks that are submitted consecutively, so context is lost between each request and response because of the model's context-size restrictions.

For instance, GPT-4 Turbo has a context window of 128k tokens but a maximum response size of 4,096 tokens. Because no single response can exceed 4,096 tokens, each response must be dedicated to a specific portion of the text.

You can define a chunk size using the inputChunkSize() method. Setting a large chunk size (e.g., 480 KB) processes requests faster, but you must use a smaller chunk size if the output for a chunk could exceed 4,096 tokens, to avoid losing data.


You can add a ProgressListener to the request to track progress using .progressListener(ProgressListener progressListener). See the User Guide for more info.
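As a hedged sketch, attaching a listener might look like the following. The lambda form assumes ProgressListener is a functional interface with a single percentage callback; the actual interface and its method signature are documented in the User Guide, so treat this shape as an assumption.

```java
// Illustrative sketch only: the callback shape below is an assumption,
// not the documented ProgressListener interface.
ProgressListener progressListener = percent ->
    System.out.println("Progress: " + percent + "%");

MotorLargeRequestText request = MotorLargeRequestText.builder()
    .chatMotor(chatMotor)
    .systemMessage(motorSystemMessage)
    .filePath("/path/to/input.txt")
    .progressListener(progressListener)   // reports chunk-by-chunk progress
    .build();
```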

In this example, we ask GPT-4 to explain the characters of a famous literary text. The size of the input file is limited for clarity; the purpose is to demonstrate the chunking process and its results.


 // Create a ChatMotor instance
 ChatMotor chatMotor = ChatMotor.builder().build();
 
 String systemPrompt = 
     "I will give you a French extract from 'A la Recherche du Temps Perdu.' "
     + "Please explain in English who the characters are, what they are doing, "
     + "and their goals if you can perceive them."
     + "Please add a blank line, then this line: "
     + "--------------------------------------------------------------------------------------------"
     + ", and then a second blank line at the end of your response. "
     + "Do not include the extraction from the book.";
 
 MotorSystemMessage motorSystemMessage = new MotorSystemMessage(systemPrompt);
 
 // Create and configure the request
 MotorLargeRequestText request = MotorLargeRequestText.builder()
     .chatMotor(chatMotor)
     .systemMessage(motorSystemMessage)
     .inputChunkSize(ChunkSize.ofKilobytes(10)) // For clarity of result, we use a small chunk size
     .filePath("/path/to/proust/proust-chapter-1.txt")
     .build();
 
 // Execute the request
 MotorLargeResponse largeResponse = request.execute();
 
 if (largeResponse.isResponseOk()) {
     try (InputStream in = largeResponse.getInputStream()) {
         Files.copy(in, Paths.get("/path/to/proust/proust-chapter-1-response.txt"), StandardCopyOption.REPLACE_EXISTING);
     }
 } else {
     // Check if the response is an OpenAI error and get the details
     OpenAiError openAiError = largeResponse.getOpenAiError();
     OpenAiErrorType errorType = openAiError.getType();
 
     if (errorType != OpenAiErrorType.NO_OPENAI_ERROR) {
         System.out.println("OpenAI returned an error: " + openAiError);
     } else {
         System.out.println("Throwable: " + largeResponse.getThrowable());
     }
 }

Check the input text at: proust-chapter-1.txt

Check the response produced at: proust-chapter-1-response.txt

  • Method Details

    • chatMotor

      public ChatMotor chatMotor()
      Gets the ChatMotor to be used in the request.
      Returns:
      the ChatMotor to be used in the request
    • aiOptions

      public MotorAiOptions aiOptions()
      Gets the AI options to be used in the request.
      Returns:
      the AI options to be used in the request
    • aiModel

      public String aiModel()
      Gets the AI model to be used in the request.
      Returns:
      the AI model to be used in the request
    • systemMessage

      public MotorSystemMessage systemMessage()
      Returns the system prompt.
      Returns:
      the system prompt
    • filePath

      public String filePath()
      Returns the file path.
      Returns:
      the file path
    • inputChunkSize

      public ChunkSize inputChunkSize()
      Returns the input chunk size.
      Returns:
      the input chunk size
    • progressListener

      public ProgressListener progressListener()
      Gets the ProgressListener to be used in the request.
      Returns:
      the ProgressListener instance
    • builder

      public static MotorLargeRequestText.Builder builder()
      Static method to initiate building a MotorLargeRequestText instance.
      Returns:
      a new Builder instance
    • execute

      public MotorLargeResponse execute()
      Executes the large request, treating the content as a text file.
      By default, files are split into chunks of 4k tokens, without splitting on sentence boundaries.
      Returns:
      the aggregated MotorLargeResponse combining all individual responses from the submitted chunks