Package com.chatmotorapi.api.functional
Class MotorLargeTranslationRequest
java.lang.Object
com.chatmotorapi.api.functional.MotorLargeTranslationRequest
public class MotorLargeTranslationRequest extends Object
Handles the creation and execution of translation requests using the ChatMotor API. This class is designed to translate text documents using OpenAI's advanced translation models configured through the ChatMotor.
The model to be used in requests can be specified with the .aiModel setter directly on the MotorLargeTranslationRequest builder. If .aiModel is not called, the default model used is System.getenv("MOTOR_CHAT_MODEL"). If the environment variable is not set, the model used is MotorDefaultsModels.MOTOR_CHAT_MODEL.
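For example, the model can be pinned explicitly on the builder rather than relying on the environment variable. This is a minimal sketch: the model name "gpt-4o" is an assumed example value, and chatMotor and documentPath are assumed to be defined as in the usage example below.

// Explicitly select the AI model instead of relying on MOTOR_CHAT_MODEL.
// "gpt-4o" is an assumed example value; use any model supported by your account.
MotorLargeTranslationRequest request = MotorLargeTranslationRequest.builder()
    .chatMotor(chatMotor)
    .filePath(documentPath)
    .languageCode("fr")
    .aiModel("gpt-4o")
    .build();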
The useInputStreaming(boolean useInputStreaming) option:
When setting up the MotorLargeTranslationRequest, you can choose between two implementations for handling large translation requests:
- Input Streaming: If useInputStreaming is set to true, the streaming implementation using a SAX parser and TagSoup is used. This method is more efficient for large files, as it processes data in a stream, reducing memory consumption. However, it may be less robust in this beta version.
- In-Memory Processing: If useInputStreaming is set to false (the default), the Jsoup implementation is used, which loads files entirely into memory before processing. This approach is simpler and more reliable but can be less efficient for very large files. (Both configurations are sketched just after this list.)
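A minimal sketch contrasting the two configurations; chatMotor, largeDocumentPath and smallDocumentPath are assumed to be defined elsewhere.

// Streaming (SAX parser + TagSoup): processes the input as a stream,
// keeping memory use low for very large files.
MotorLargeTranslationRequest streamingRequest = MotorLargeTranslationRequest.builder()
    .chatMotor(chatMotor)
    .filePath(largeDocumentPath)
    .languageCode("fr")
    .useInputStreaming(true)
    .build();

// In-memory (Jsoup, the default): loads the whole file before processing,
// simpler and more robust for moderate file sizes.
MotorLargeTranslationRequest inMemoryRequest = MotorLargeTranslationRequest.builder()
    .chatMotor(chatMotor)
    .filePath(smallDocumentPath)
    .languageCode("fr")
    .useInputStreaming(false)
    .build();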
You can add a ProgressListener to the request to track progress using .progressListener(ProgressListener progressListener).
See the User Guide for more info.
Usage example:
String documentPath = "/path/to/my_document.html";
String responseFilePath = "/path/to/my_document_translated.html";

// We assume that the env var MOTOR_API_KEY is set.
ChatMotor chatMotor = ChatMotor.builder()
    .build();

MotorLargeTranslationRequest motorLargeTranslationRequest = MotorLargeTranslationRequest.builder()
    .chatMotor(chatMotor)
    .filePath(documentPath)
    .languageCode("fr")
    .useInputStreaming(true)
    .build();

MotorLargeResponse motorLargeResponse = motorLargeTranslationRequest.execute();

if (motorLargeResponse.isResponseOk()) {
    System.out.println("responseFilePath: " + responseFilePath);
    try (InputStream in = motorLargeResponse.getInputStream()) {
        Files.copy(in, Paths.get(responseFilePath), StandardCopyOption.REPLACE_EXISTING);
    }
} else {
    // Treat errors. See the MotorLargeResponse class for more details.
}
This class uses a builder pattern to ensure a flexible and secure way to configure each translation request.
-
Nested Class Summary
static class MotorLargeTranslationRequest.Builder
    Builder class for MotorLargeTranslationRequest.
-
Method Summary
String aiModel()
    Gets the AI model to be used in the request.
MotorAiOptions aiOptions()
    Gets the AI options to be used in the request.
static MotorLargeTranslationRequest.Builder builder()
    Returns a new builder instance for creating a MotorLargeTranslationRequest.
ChatMotor chatMotor()
    Gets the ChatMotor to be used in the request.
MotorLargeResponse execute()
    Executes the process of generating a translation from a specified file using a GPT model.
MotorStreamStatus executeAsStream(MotorResponseListener motorResponseListener)
    Executes the process of generating a translation from a specified file using a GPT model, as a stream.
String filePath()
    Gets the file path to be used in the request.
ChunkSize inputChunkSize()
    Returns the input chunk size.
String languageCode()
    Gets the language code to be used in the request.
ProgressListener progressListener()
    Gets the ProgressListener to be used in the request.
boolean useInputStreaming()
    Returns whether streaming of the input file is enabled for the request.
-
Method Details
-
chatMotor
public ChatMotor chatMotor()
Gets the ChatMotor to be used in the request.
Returns:
    the ChatMotor to be used in the request
-
aiOptions
public MotorAiOptions aiOptions()
Gets the AI options to be used in the request.
Returns:
    the AI options to be used in the request
-
aiModel
public String aiModel()
Gets the AI model to be used in the request.
Returns:
    the AI model to be used in the request
-
filePath
public String filePath()
Gets the file path to be used in the request.
Returns:
    the file path to be used in the request
-
languageCode
public String languageCode()
Gets the language code to be used in the request.
Returns:
    the language code to be used in the request
-
inputChunkSize
public ChunkSize inputChunkSize()
Returns the input chunk size.
Returns:
    the input chunk size
-
useInputStreaming
public boolean useInputStreaming()
Returns whether streaming of the input file is enabled for the request.
Returns:
    true if streaming of the input file is enabled, false otherwise
-
progressListener
public ProgressListener progressListener()
Gets the ProgressListener to be used in the request.
Returns:
    the ProgressListener instance
-
builder
public static MotorLargeTranslationRequest.Builder builder()
Returns a new builder instance for creating a MotorLargeTranslationRequest.
Returns:
    a new MotorLargeTranslationRequest builder
-
execute
public MotorLargeResponse execute()
Executes the process of generating a translation from a specified file using a GPT model. This method initializes a session with the necessary configurations, reads the text file, and creates a translation based on the text content.
Returns:
    the MotorLargeResponse instance
Throws:
    MotorExecutionException - if there is an error reading the file
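As a minimal sketch of handling the documented error case; it assumes chatMotor and documentPath are defined as in the usage example above.

MotorLargeTranslationRequest request = MotorLargeTranslationRequest.builder()
    .chatMotor(chatMotor)
    .filePath(documentPath)
    .languageCode("fr")
    .build();

try {
    MotorLargeResponse response = request.execute();
    if (!response.isResponseOk()) {
        // Treat API-level errors. See the MotorLargeResponse class for more details.
    }
} catch (MotorExecutionException e) {
    // Thrown if there is an error reading the file.
    e.printStackTrace();
}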
-
executeAsStream
public MotorStreamStatus executeAsStream(MotorResponseListener motorResponseListener)
Executes the process of generating a translation from a specified file using a GPT model. The configured request is executed as a stream. This is valid only for requests in a text format with fewer than 4096 tokens, i.e. less than 1 MB.
Parameters:
    motorResponseListener - the listener that will handle the streamed data chunks
Returns:
    an instance of MotorStreamStatus that contains information about the outcome of the streaming operation
-