ChatMotor API v1.2-beta - September 20, 2024
User Guide
Contents:
What's New
Credits and Acknowledgments
Introduction to ChatMotor
Complete Javadoc and Code Examples
Download the Software
Choosing your Edition
Requirements
Installation Details
Example
Android
Purchasing the Software
Getting Started
Creating a ChatMotor Object
Defining OpenAI options
Making a Chat Completion Request
Exception Management
Consistent Request and Response Handling
Unchecked Exceptions
Extracting OpenAI Errors
Example of Extracting OpenAI Errors
Making a Chat Completion Request with Streaming Response
Explaining the MotorResponseListener interface
Advantages of Using a Listener
Example Implementation: ConsoleResponseListener
Keeping the Context in the Conversation
Advanced Topics
File
Supported Text File Extensions (Case-Insensitive)
FileToTextConverter Utility
Exception: Translation APIs support HTML files
Large Content Management: MotorLargeRequestLine
Isomorphic Transformations
Line-Based File Processing
Streaming Processing
Chunk Input Size
Usage Example
Explanation
Large Content Management: MotorLargeRequestText
Input Size
Usage Example
Automatic Retry Mechanism
Usage Example
Timeout Setup
Usage Example
Proxy Configuration
Usage Example
Failover Mechanism
Usage Example
Audio and Image APIs
Transcription
Key Features
The MotorTranscriptionRequest class
The MotorTranscriptFormatterRequest class
Usage Example
Speech
Key Features
Usage Example
Image
Key Features
Usage Example
Vision
Key Features
Usage Example
Usage Example with a local image
Functional APIs
Summarization
Classic Summary
Strategic Summary
Translation
Large Files Translation
Other Experimental Functional APIs
Content Extraction
Usage Example
Content Filtering
Usage Example
Note on Experimental APIs
Pro Supplemental APIs
Usage Example
Entity Extraction
Usage Example
Sentiment Analysis
Usage Example
Notification API
Key Features
NotificationChannel Interface
MotorNotifyResponse
Example Usage: Slack Notification
Other Implementations
Extendable Design
Troubleshooting and Best Practices
Common Issues and Solutions
Be Patient with Large Requests
Appendix
Reference to sample code examples
The ChatMotor API is designed to help you easily integrate ChatGPT capabilities into your Java applications. It abstracts away the complexities of implementation so you can focus on specifying what you want to achieve, without worrying about the details of how to achieve it.
With support for extensive text processing, functional APIs for tasks like translation and summarization, and advanced features like progressive teletext display and automatic retry mechanisms, the ChatMotor API simplifies the development of intelligent, responsive applications.
Check out our Changelog.
ChatMotor is built on top of the open-source Simple-OpenAI Java SDK developed by Sashir Estela and contributors.
We extend our gratitude for their work, which has been fundamental to the development of ChatMotor.
ChatMotor is an advanced Java SDK designed for integrating with the ChatGPT API, providing a comprehensive and robust set of features. Built on the solid foundation provided by Simple-OpenAI, ChatMotor not only enhances several of the base SDK's capabilities but also introduces a range of unique features tailored to extend functionality and improve user experience:
Enhanced Streaming: Simplified management of streaming for progressive teletext display.
Automatic Retry Mechanism: Abstracted HTTP management for reliable API calls.
Customizable Timeouts: Easy setup for timeouts tailored to your needs.
Proxy Support: Seamless integration of proxy information.
Unlimited Input Prompt Size: Automatic sequencing for prompts exceeding 4096 tokens.
Advanced Summarization: Summarization & strategic summarization.
Advanced Translation: No input size limits with preservation of original formatting.
Advanced Functional APIs: Categorization, Entity Recognition, Sentiment Analysis.
Consistent API Calls: Easy to memorize and use, with consistent method names for all operations, such as execute(), executeAsStream(), etc.
Unlimited Audio Transcription: Bypass OpenAI's 25 MB limit.
Text-to-Speech API: Unlimited input size for converting text to audio.
Functional APIs: For summarization, strategic summaries, content extraction, and filtering. Now includes entity recognition, categorization, and sentiment analysis.
Comprehensive Format Management: Support for various input/output formats including txt, RTF, Word, HTML, PDF.
Vision and DALL-E Integration: Generate text from images and images from text.
Large File Processing: Large files are processed in parallel by splitting them into chunks and handling each chunk with separate simultaneous threads for faster processing.
Default Settings for Easy Start and Testing: Pre-configured defaults that facilitate immediate usage and testing without initial setup.
Optional Environment Variable Configuration: Configuration via environment variables reduces code, simplifies complexity, and enhances maintainability in applications developed using ChatMotor.
Failover Key Mechanism: Provides a robust backup strategy, ensuring continuous service availability and reliability by automatically switching to a secondary OpenAI key if the primary account fails (quota limits, expired payment card, etc.).
No Exceptions Thrown: ChatMotor requests don't throw exceptions, making it easy to manage errors without disrupting your application's flow.
Detailed Error Messages: We decode OpenAI error messages, providing comprehensive details for effective troubleshooting.
No Null Values Returned: We ensure that methods never return null values, providing default values to maintain predictable and robust behavior. It's time, in 2024, to put an end to the billion-dollar mistake.
ChatMotor's extensive Javadoc and sample code ensure that you have all the tools necessary for seamless integration. Each class is accompanied by a complete how-to guide embedded within the Javadoc, along with ready-to-work code examples.
This detailed Javadoc ensures that developers can access all necessary information for effective integration and utilization of the SDK, even without the user guide.
Download the installation file from ChatMotor Download.
When you download, you automatically get access to the Pro Edition for 30 days; after that, ChatMotor reverts to the Free Edition.
Feature | Free Edition | Pro Edition |
---|---|---|
Integration with OpenAI API | Yes | Yes |
HTTP Proxy Support | Yes, without authentication | Yes, with authentication |
Failover OpenAI API Key | No | Yes |
Email Support | Best Effort | Priority Support |
Base Functional APIs (Translation, Summary) | Limited (text files up to 50,000 characters and HTML files up to 250 KB) | Unlimited |
Advanced Functional APIs (Categorization, Entity Recognition, Sentiment Analysis) | No | Yes |
Post-Processing Notification API (SMS, Slack, Email) | No | Yes |
Audio Files Transcription | Up to 1 hour duration | Unlimited |
Text to Audio | Up to 50,000 characters | Unlimited |
Token Usage Visibility | Basic (Total tokens only) | Detailed (Prompt, Completion, and Total tokens) |
Access to All Functional APIs for 30-Day Trial | Yes | N/A |
Upgrades and New Versions | No | Yes, with annual maintenance fee |
ChatMotor API requires Java 11 or higher and is compatible with Windows, macOS, Linux, and Android.
Transcribing files in formats other than MP3 requires FFMPEG to be installed on Linux and macOS.
On Android, only MP3 file transcription is supported.
Extract the downloaded chatmotor-api-1.2-beta.zip archive to a directory, such as /path/to/software.
Set the CHATMOTOR_HOME environment variable to the installation directory:
export CHATMOTOR_HOME=/path/to/software/chatmotor-api-1.2-beta
Add the JAR files from the CHATMOTOR_HOME/lib directory to your development environment's CLASSPATH. When deploying in production, include all JAR files from the /lib directory in the CLASSPATH.
For J2EE Web servers: Copy the contents of the CHATMOTOR_HOME/lib directory to the server's lib directory while preserving the installation files.
If the zip file is extracted to /home/user1, the directory structure will be /home/user1/chatmotor-api-1.2-beta. Set CHATMOTOR_HOME in your bash settings file as follows:
export CHATMOTOR_HOME=/home/user1/chatmotor-api-1.2-beta
Since Android does not support environment variables, you will need to explicitly set the ChatMotor home directory using the ChatMotor.setChatMotorHome(String chatMotorHome) method:
// ChatMotor instance creation is explained in the next chapter: Getting Started
ChatMotor chatMotor = ChatMotor.builder().build();
chatMotor.setChatMotorHome("/path/to/software/chatmotor-api-1.2-beta");
Add the following three lines to your AndroidManifest.xml:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
If you decide to purchase the software after the 30-day trial, visit our Pricing Page for detailed information and purchasing options.
After purchasing, you will receive a license file via email. If you have previously used a trial version, there is no need to reinstall the software. Simply place the license file in the following directory:
CHATMOTOR_HOME/license
Where CHATMOTOR_HOME is either the environment variable CHATMOTOR_HOME or the value defined by the ChatMotor.setChatMotorHome(String chatMotorHome) method.
Here is the initial step required to use the services: you must create a ChatMotor instance and pass it your OpenAI API key:
String key = "[your api key]";
ChatMotor chatMotor = ChatMotor.builder()
.apiKey(key)
.build();
In this example, replace "[your api key]" with your actual API key. This code initializes the ChatMotor instance with the API key, enabling you to make authenticated requests.
Alternatively, you may set the OpenAI key using the CHATMOTOR_API_KEY environment variable:
// We assume that the env var CHATMOTOR_API_KEY is set.
ChatMotor chatMotor = ChatMotor.builder()
.build();
For example, this could be your settings:
CHATMOTOR_API_KEY=sk-8q6Ynlz3d4zij9VFaCL993BlbkFJXSfT1Tq2AmrFA13rH8UN
The AI options (temperature, max tokens, etc.) are optionally defined with the MotorAiOptions class:
MotorAiOptions options = MotorAiOptions.builder()
.temperature(0.7) // Controls the randomness of the AI responses
.topP(0.9) // Controls the diversity of the response predictions
.build();
See the MotorAiOptions class Javadoc for all options and their corresponding default values.
After creating the ChatMotor instance, and optionally a MotorAiOptions instance, you can start making requests to the various endpoints provided by the ChatMotor SDK.
Here's an example of how to call the Chat Completion service to ask a question and wait for a complete answer. The response is printed to the console. This is a simple request, functioning exactly as it does when using ChatGPT in your browser. It uses two classes: MotorRequest to create the request and MotorResponse to manage the response.
The OpenAI model in use defaults to the value of the MOTOR_CHAT_MODEL environment variable or, if that is not set, MotorDefaultsModels.MOTOR_CHAT_MODEL.
List<MotorMessage> motorMessages = new ArrayList<MotorMessage>();
motorMessages.add(new MotorSystemMessage("You are an expert in programming."));
motorMessages.add(new MotorUserMessage("Write a simple technical article about Java language"));
// Make a request to the ChatMotor.
MotorRequest motorRequest = MotorRequest.builder()
.chatMotor(chatMotor)
.motorAiOptions(options)
.messages(motorMessages)
.build();
// Execute the request and get a response
MotorResponse motorResponse = motorRequest.execute();
// Check if the response is ok.
if (motorResponse.isResponseOk()) {
String response = motorResponse.getResponse();
System.out.println(response);
} else {
// If the response is not ok, we get the error message and the throwable.
Throwable throwable = motorResponse.getThrowable();
System.out.println("The thrown Exception is: " + throwable);
}
To make the developer's life easier, MotorRequest and all other API requests provided by the ChatMotor SDK are designed to never throw exceptions. Instead, they use a status-checking mechanism that simplifies error handling and improves code readability:
No Need for Try-Catch Blocks:
Developers do not need to wrap their API calls in try-catch blocks, resulting in cleaner and shorter code.
Consistent Error Checking:
By using a consistent method to check for errors, developers can easily manage error handling across different parts of their application.
All our requests and responses are designed to be consistent and never throw exceptions.
Developers can simply check the response status using isResponseOk() to determine if the request was successful and use getThrowable() to retrieve any exceptions. This approach ensures unified and predictable error management, leading to more reliable and maintainable code.
To provide immediate understanding of certain errors, we provide a few unchecked exceptions that can be accessed via MotorResponse.getThrowable(). These exceptions help developers quickly identify specific issues without the need for extensive debugging:
MotorExecutionException:
Indicates that an error occurred during the execution of the request. This exception helps pinpoint issues that arise during the processing phase.
MotorNotImplementedException:
Indicates that a requested feature or function is not implemented. This exception is useful for identifying missing functionality and ensuring that developers are aware of unimplemented features.
MotorTimeoutException:
Indicates that the request timed out. This exception helps developers understand that the operation took too long and was aborted due to a timeout.
By using these unchecked exceptions, developers can quickly understand and address common errors, improving the development experience and reducing the time spent on troubleshooting.
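Since these are plain unchecked Java exceptions, you can branch on the concrete type returned by getThrowable(). The following is a minimal sketch, assuming a motorRequest built as in the previous examples:
// Execute the request and inspect the concrete exception type if it failed.
MotorResponse motorResponse = motorRequest.execute();
if (!motorResponse.isResponseOk()) {
    Throwable throwable = motorResponse.getThrowable();
    if (throwable instanceof MotorTimeoutException) {
        // The request exceeded the configured timeout.
        System.out.println("Request timed out: " + throwable.getMessage());
    } else if (throwable instanceof MotorNotImplementedException) {
        // The requested feature is not available.
        System.out.println("Feature not implemented: " + throwable.getMessage());
    } else if (throwable instanceof MotorExecutionException) {
        // An error occurred while executing the request.
        System.out.println("Execution error: " + throwable.getMessage());
    } else {
        System.out.println("Unexpected error: " + throwable);
    }
}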
Handling OpenAI errors effectively is crucial for robust application development. ChatMotor provides mechanisms to extract and manage OpenAI errors efficiently.
When executing a request, you can check if the response contains an OpenAI error and extract the error details as follows:
MotorRequest motorRequest = MotorRequest.builder()
.chatMotor(chatMotor)
.motorAiOptions(options)
.messages(motorMessages)
.build();
MotorResponse motorResponse = motorRequest.execute();
// Check if the response is an OpenAI error and get the OpenAI error details
OpenAiError openAiError = motorResponse.getOpenAiError();
OpenAiErrorType errorType = openAiError.getType();
if (errorType != OpenAiErrorType.NO_OPENAI_ERROR) {
System.out.println("OpenAI has returned an error : " + openAiError);
// Take specific action depending on the error code:
if (errorType.equals(OpenAiErrorType.SERVER_ERROR)) {
// Handle server error
} else if (errorType.equals(OpenAiErrorType.TOKENS)) {
// Handle token error
} else {
// Handle other types of errors
}
} else {
System.out.println("An Exception was raised during the treatment: "
+ motorResponse.getThrowable());
}
See the OpenAiError Javadoc.
The executeAsStream method allows you to benefit from ChatGPT's streaming capabilities, enabling the response to be processed and displayed in real-time. This method is particularly useful when you want to provide immediate feedback to the user as the response is being generated, rather than waiting for the entire response to be completed.
List<MotorMessage> motorMessages = new ArrayList<MotorMessage>();
motorMessages.add(new MotorSystemMessage("You are an expert in programming."));
motorMessages.add(new MotorUserMessage("Write a simple technical article about Java language"));
// Make a request to the ChatMotor.
MotorRequest motorRequest = MotorRequest.builder()
.chatMotor(chatMotor)
.motorAiOptions(options)
.messages(motorMessages)
.build();
// Execute the request and display immediately the response in the console.
// This uses a listener to display the response in the console.
MotorResponseListener consoleResponseListener = new ConsoleResponseListener();
MotorStreamStatus motorStreamStatus = motorRequest.executeAsStream(consoleResponseListener);
// Check if the response is ok.
if (motorStreamStatus.isResponseOk()) {
// Nothing to do here. Display is done by the listener.
} else {
// If the response is not OK, we get the error message and the throwable.
Throwable throwable = motorStreamStatus.getThrowable();
System.out.println("The thrown Exception is: " + throwable);
}
The MotorResponseListener interface allows you to handle responses from the ChatMotor service in real-time as they are being streamed. Implementing this interface enables you to process each part of the response immediately, which is useful for providing instant feedback to users.
Avoids Async Methods: The listener mechanism eliminates the need for complex asynchronous methods, simplifying the code.
Immediate Processing: Each part of the response is processed as soon as it is received, providing real-time interaction and feedback.
Using the MotorResponseListener, developers can easily handle streaming responses, improving the user experience with real-time updates and reducing the complexity of asynchronous programming.
Here's an example implementation of the MotorResponseListener that prints the response to the console as it is received:
package com.chatmotorapi.api.listener;
import com.chatmotorapi.api.MotorResponse;
/**
* A concrete implementation of {@link MotorResponseListener} that outputs
* response chunks to the console. This listener is designed to handle streamed
* responses from the ChatMotor and print each chunk received directly to the
* standard output.
*/
public class ConsoleResponseListener implements MotorResponseListener {
/**
* Receives a chunk of response from the ChatMotor and prints it to the console.
* This method is called automatically when a new response chunk is available.
*
* @param motorResponse The {@link MotorResponse} containing the chunk of data
* to be printed. The response is expected to be non-null
* and ready to be displayed.
*/
public void onChunkReceived(MotorResponse motorResponse) {
if (motorResponse != null && motorResponse.getResponse() != null) {
System.out.print(motorResponse.getResponse());
}
}
public void onResponseCompleted() {
System.out.println(); // Force a last line feed on the stream
}
public void onError(Throwable throwable) {
throwable.printStackTrace();
}
}
We also provide other standard, ready-to-use implementations: QueueResponseListener, MotorFileResponseListener, and SwingResponseListener, which are described in their Javadoc.
By adding UserMessages to the list in a loop and executing the motorRequest.execute() method, you can maintain the context of the conversation, just as you would in a browser session. Here's an example demonstrating how to do this in the context of building a Java method:
/**
* Builds a Java class with a divide(int a, int b) method.
*/
public static void contextTest() {
System.out.println(new Date() + " Begin...");
// Build a ChatMotor instance.
ChatMotor chatMotor = ChatMotorTest.getChatMotor();
// Make context aware requests to ChatGPT with ChatMotor.
// This simulates a "browser session" with https://chatgpt.com/
// Context is kept between calls.
List<MotorMessage> motorMessages = new ArrayList<MotorMessage>();
motorMessages.add(new MotorSystemMessage(
"You are an expert in Java programming, world class."));
motorMessages.add(new MotorUserMessage(
"Write a simple class with an int divide(int a, int b) method. "
+ "Do not include a main method. "
+ "Do not comment your work."));
buildAndexecuteTheRequest(chatMotor, motorMessages);
motorMessages.add(new MotorUserMessage(
"Now add to the class nice and complete Javadoc."));
buildAndexecuteTheRequest(chatMotor, motorMessages);
motorMessages.add(new MotorUserMessage(
"Now catch the divide by zero Exception and "
+ "print a clean error message if it occurs."));
buildAndexecuteTheRequest(chatMotor, motorMessages);
System.out.println(new Date() + " End.");
}
/**
* Build and execute a request, keeping the context.
*
* @param chatMotor the ChatMotor instance
* @param motorMessages the messages
*/
private static void buildAndexecuteTheRequest(ChatMotor chatMotor,
List<MotorMessage> motorMessages) {
MotorRequest motorRequest = MotorRequest.builder()
.chatMotor(chatMotor)
.messages(motorMessages)
.build();
// Execute the request.
MotorResponseListener consoleResponseListener = new ConsoleResponseListener();
MotorStreamStatus motorStreamStatus =
motorRequest.executeAsStream(consoleResponseListener);
if (motorStreamStatus.isResponseOk()) {
// Nothing to do here. The response is available
// in the consoleResponseListener.
} else {
Throwable throwable = motorStreamStatus.getThrowable();
System.out.println(throwable);
}
}
This will display the conversation flow in the console as the responses are streamed.
The ChatMotor APIs are designed to handle input text files, defined by their extensions (txt, text, csv, or their uppercase equivalents). This means that the .filepath(String filepath) method of all APIs (except MotorLargeTranslationRequest, see below) accepts only text files. This restriction is due to the nature of OpenAI APIs, which take text content as input and return text content without formatting options.
.txt
.text
.csv
For other formatted files, we provide a utility called FileToTextConverter that allows extracting text from the following file types, based on their (case-insensitive) extensions, and dumping the content into a text file:
.html
.csv
.doc
.docx
.rtf
.pdf
Example:
FileToTextConverter.convert("/path/to/mypdf.pdf", "/path/to/mypdf.txt"); // UTF-8 encoding
// Or:
FileToTextConverter.convert("/path/to/mypdf.pdf", "/path/to/mypdf.txt", Charset.forName("ISO-8859-1"));
The MotorLargeTranslationRequest API also accepts HTML input. This API will return the same formatted HTML file with its content translated.
The MotorLargeRequestLine class facilitates creating and executing large-sized requests for the ChatMotor API. It uses a builder pattern for flexible and reliable configuration, handling large input by submitting consecutive standard-sized requests and aggregating responses into a final comprehensive result.
This class is designed for isomorphic tasks like translation and CSV manipulation, where each chunk of user prompts can be processed independently and then combined. It is ideal for scenarios requiring the transmission of huge data volumes efficiently.
The input file for MotorLargeRequestLine must contain lines of text, as it processes the content line by line. This approach ensures smooth handling of large files without exceeding token limits.
MotorLargeRequestLine processes large content using streaming, allowing it to handle very large files efficiently. By processing the file line by line, it submits consecutive requests and aggregates the results, ensuring that even enormous data volumes are managed effectively without hitting token limits.
It is optionally possible to set the chunk input size with MotorLargeRequestLine.inputChunkSizeBytes(int inputChunkSizeBytes). This is not recommended.
Since we are doing isomorphic transformations, the chunk input size must be less than 4096 tokens, corresponding to the OpenAI maximum response size. The default value for inputChunkSizeBytes is retrieved from the current ChatMotor instance using chatMotor.maxChunkInputSize(), which ensures a conservative size to prevent data loss. inputChunkSizeBytes cannot exceed ChatMotor.maxChunkInputSize().
This default size avoids data loss during transformation by ensuring the output size does not exceed the model's maximum token limit.
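If you do override the default, keep each chunk well below the model's 4096-token response limit. Here is a minimal sketch under these assumptions (placeholder paths and system prompt; the builder methods are those used in the usage example that follows):
// Build the ChatMotor instance (MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are assumed set).
ChatMotor chatMotor = ChatMotor.builder().build();
MotorSystemMessage systemMessage = new MotorSystemMessage(
    "Translate each line to English. Keep the exact line structure.");
// Recommended: rely on the default chunk size, chatMotor.maxChunkInputSize().
MotorLargeRequestLines defaultChunks = MotorLargeRequestLines.builder()
    .chatMotor(chatMotor)
    .systemMessage(systemMessage)
    .filePath("/path/to/lines.txt")
    .build();
// Not recommended: set the chunk size explicitly, staying well below the 4096-token output limit.
MotorLargeRequestLines explicitChunks = MotorLargeRequestLines.builder()
    .chatMotor(chatMotor)
    .systemMessage(systemMessage)
    .filePath("/path/to/lines.txt")
    .inputChunkSize(ChunkSize.ofKilobytes(8))
    .build();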
Here's how you can use MotorLargeRequestLine to process a large CSV file:
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set.
ChatMotor chatMotor = ChatMotor.builder()
.build();
MotorAiOptions options = MotorAiOptions.builder()
.temperature(0.0) // No creativity
.maxTokens(4000) // Max tokens
.build();
String filePath = "/path/to/user/Fahrenheit.csv";
MotorSystemMessage motorSystemMessage = new MotorSystemMessage(
"I will provide you with temperature measurements in Fahrenheit in a text file. "
+ "Convert the temperatures to degrees Celsius and add ';true' at the end of the line "
+ "if the temperature is above 20 degrees Celsius. "
+ "Do not add any comments and maintain the exact input format "
+ "without adding any new CR/LF characters."
);
MotorLargeRequestLines request = MotorLargeRequestLines.builder()
.chatMotor(chatMotor)
.aiOptions(options)
.systemMessage(motorSystemMessage)
.filePath(filePath)
// Optional, default value is safe enough
.inputChunkSize(ChunkSize.ofKilobytes(10))
.build();
// Execute the request.
MotorLargeResponse largeResponse = request.execute();
if (largeResponse.isResponseOk()) {
String outFilePath = "/path/to/Celsius.csv";
try (InputStream in = largeResponse.getInputStream()) {
Files.copy(in, Paths.get(outFilePath),
StandardCopyOption.REPLACE_EXISTING);
}
} else {
// Check if the response is an OpenAI error and get
// the OpenAI error details
OpenAiError openAiError = largeResponse.getOpenAiError();
OpenAiErrorType errorType = openAiError.getType();
if (errorType != OpenAiErrorType.NO_OPENAI_ERROR) {
System.out.println("OpenAI has returned an error : " + openAiError);
} else {
System.out.println("Throwable: " + largeResponse.getThrowable());
}
}
Initialization: The MotorLargeRequestLine is built with the ChatMotor instance, AI options, system message, and the path to the input file.
Streaming Processing: Processes the file line by line using streaming, ensuring efficient handling of very large files.
Chunk Input Size: Optionally set the chunk input size, ensuring it is less than 4096 tokens.
Execution: The execute() method processes the large content and aggregates the responses.
Error Handling: Checks if the response contains an OpenAI error and handles it accordingly, or outputs any exceptions.
This approach ensures efficient handling of large content, making it ideal for scenarios requiring the processing of extensive data volumes.
The MotorLargeRequestText class facilitates the creation and execution of large-sized requests for the ChatMotor API using a builder pattern for flexible and reliable configuration. It is ideal for scenarios requiring the transmission of large volumes of unstructured text, such as books and reports. This class handles large input by splitting it into smaller chunks, submitting them consecutively to the ChatMotor, and aggregating individual responses into a final comprehensive result.
This example processes a large, unstructured text by splitting it into complete sentences and applying the same query to each portion. The input file will be divided into multiple chunks submitted to the ChatGPT API, meaning context will be lost between each request and response due to the context size restrictions of the ChatMotor API.
GPT-4 Turbo, for example, has a context window of 128k tokens and a maximum response size of 4096 tokens. Each response is dedicated to a specific portion of the text because we cannot produce a response that exceeds 4096 tokens.
You can define a chunk size using the MotorLargeRequestText.inputChunkSizeBytes method. This is not recommended.
The default value for inputChunkSizeBytes is retrieved from the current ChatMotor instance using chatMotor.maxInputSize(), which is set to 440KB. This size is suitable for GPT-4 models. inputChunkSizeBytes cannot exceed the value defined by ChatMotor.maxInputSize().
// Create a ChatMotor instance
ChatMotor chatMotor = ChatMotor.builder().build();
String systemPrompt =
"I will give you a French extract from 'A la Recherche du Temps Perdu.' "
+ "Please explain in English who the characters are, what they are doing, "
+ "and their goals if you can perceive them. "
+ "Please add a blank line, then this line: "
+ "---------------------------------------------------------------------"
+ ", and then a second blank line at the end of your response. "
+ "Do not include the extraction from the book.";
MotorSystemMessage motorSystemMessage = new MotorSystemMessage(systemPrompt);
// Create and configure the request
MotorLargeRequestText request = MotorLargeRequestText.builder()
.chatMotor(chatMotor)
.systemMessage(motorSystemMessage)
// For clarity of result, we use a small chunk size
.inputChunkSize(ChunkSize.ofKilobytes(10))
.filePath("/path/to/proust/proust-chapter-1.txt")
.build();
// Execute the request
MotorLargeResponse largeResponse = request.execute();
if (largeResponse.isResponseOk()) {
try (InputStream in = largeResponse.getInputStream()) {
Files.copy(in,
Paths.get("/path/to/proust/proust-chapter-1-response.txt"),
StandardCopyOption.REPLACE_EXISTING);
}
} else {
// Check if the response is an OpenAI error and get the details
OpenAiError openAiError = largeResponse.getOpenAiError();
OpenAiErrorType errorType = openAiError.getType();
if (errorType != OpenAiErrorType.NO_OPENAI_ERROR) {
System.out.println("OpenAI returned an error: " + openAiError);
} else {
System.out.println("Throwable: " + largeResponse.getThrowable());
}
}
Check the input text at proust-chapter-1.txt and the response produced at proust-chapter-1-response.txt.
ChatMotor ensures robust and reliable communication by handling HTTP requests automatically and offering a retry functionality with capping and delay management. This reduces the likelihood of errors disrupting operations.
// Define the max number of retries for the API requests
int maxRetries = 4;
// Define the retry interval between API request retries
Duration retryInterval = Duration.ofMillis(4000);
// Build the ChatMotor instance with all the parameters
ChatMotor chatMotor = ChatMotor.builder()
.maxRetries(maxRetries)
.retryInterval(retryInterval)
.build();
// Define Prompts.
// Build and execute any request with the ChatMotor and its parameters.
// ...
ChatMotor allows you to set a timeout for API requests to ensure efficient resource management and prevent indefinite blocking, maintaining the responsiveness of your application.
// Define the timeout for the API requests threads
Duration timeout = Duration.ofSeconds(200);
// Build the ChatMotor instance with all the parameters
ChatMotor chatMotor = ChatMotor.builder()
.timeout(timeout)
.build();
ChatMotor allows you to configure a proxy for API requests, enabling you to manage network traffic effectively and ensure secure and controlled communication.
String myProxyAddress = "proxy.example.com";
int myProxyPort = 80;
// Build a Java 11+ HttpClient instance and set the proxy
HttpClient httpClient = HttpClient.newBuilder()
.proxy(ProxySelector.of(new InetSocketAddress(myProxyAddress,
myProxyPort)))
.build();
// Build the ChatMotor instance with the HttpClient and proxy settings
ChatMotor chatMotor = ChatMotor.builder()
.httpClient(httpClient)
.build();
If your account exceeds its quota or hits the rate limit, ChatMotor ensures continuity with a failover mechanism. A secondary OpenAI key linked to a secondary account will be used automatically if the primary account fails.
// Define a failover API key to be automatically used in
// case the account linked to the primary API key fails
String failoverApiKey = "[the failover api key]";
// Build the ChatMotor instance with the failover API key
ChatMotor chatMotor = ChatMotor.builder()
.failoverApiKey(failoverApiKey)
.build();
ChatMotor handles the creation and execution of transcription requests using the OpenAI Whisper model. The MotorTranscriptionRequest class is designed to transcribe audio files into text with unlimited size, bypassing the 25MB OpenAI Whisper limitation.
Large files are chunked to reduce memory consumption. The default chunk size is 4MB, but this can be overridden.
FFmpeg: Must be installed on Linux and macOS to handle media processing.
Unlimited Size: Transcription size is not bound to the 25MB OpenAI Whisper limitation.
Memory Optimization: Large files are chunked to reduce memory consumption.
Flexible Configuration: Uses a builder pattern for secure and flexible configuration.
Model Specification: The model can be specified with the .aiModel setter on the MotorTranscriptionRequest builder. If not set, it defaults to the model specified in the MOTOR_WHISPER_MODEL environment variable or MotorDefaultsModels.MOTOR_WHISPER_MODEL.
Format Conversion: Large files in formats other than MP3 are converted to MP3 for better chunk quality.
String audioFilePath = "/path/to/my_audiofile.wav";
String transcriptionFilePath = "/path/to/transcription_file.txt";
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set.
ChatMotor chatMotor = ChatMotor.builder()
.build();
MotorTranscriptionRequest request = MotorTranscriptionRequest.builder()
.chatMotor(chatMotor)
.filePath(audioFilePath)
.inputChunkSize(ChunkSize.ofMegabytes(2))
.build();
MotorLargeResponse largeResponse = request.execute();
if (largeResponse.isResponseOk()) {
try (InputStream in = largeResponse.getInputStream()) {
Files.copy(in, Paths.get(transcriptionFilePath),
StandardCopyOption.REPLACE_EXISTING);
}
} else {
// Treat errors.
// See the MotorLargeResponse class for more details.
}
The MotorTranscriptFormatterRequest class allows you to format a transcription file using line breaks and paragraphs. It ensures that the transcribed text is readable and well-structured.
String transcriptionFilePath = "/path/to/transcription_file.txt";
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set.
ChatMotor chatMotor = ChatMotor.builder()
.build();
MotorTranscriptFormatterRequest request = MotorTranscriptFormatterRequest
.builder()
.chatMotor(chatMotor)
.filePath(transcriptionFilePath)
.build();
MotorLargeResponse largeResponse = request.execute();
if (largeResponse.isResponseOk()) {
try (InputStream in = largeResponse.getInputStream()) {
String outFilePath = "/path/to/transcription_file_formatted.txt";
Files.copy(in, Paths.get(outFilePath),
StandardCopyOption.REPLACE_EXISTING);
}
} else {
// Treat errors
// See the MotorLargeResponse class for more details.
}
The MotorSpeechRequest API handles the creation and execution of text-to-speech requests to the ChatMotor using user prompts. It employs the builder pattern to ensure flexibility and enforce mandatory configuration. The model used in requests can be specified with the .aiModel setter on the MotorSpeechRequest builder. If not set, it defaults to the model specified in the MOTOR_SPEECH_MODEL environment variable or MotorDefaultsModels.MOTOR_SPEECH_MODEL.
Model Specification: The model can be set using the .aiModel setter or defaults to the environment variable MOTOR_SPEECH_MODEL.
Input Limits: Standard requests are limited to 500 words for non-MP3 files. There are no limits for MP3 files.
Audio File Conversion: Use MotorAudioFileConverter to convert audio files to MP3 and vice versa.
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set
// Create a ChatMotor instance.
ChatMotor chatMotor = ChatMotor.builder().build();
// The audio file to generate.
File speechFile = new File("/path/to/audio/file.mp3");
// The input text
String prompt = "It was the best of times, it was the worst of times, "
+ "it was the age of wisdom, it was the age of foolishness, "
+ "it was the epoch of belief, it was the epoch of incredulity.";
MotorSpeechRequest request = MotorSpeechRequest.builder()
.chatMotor(chatMotor)
.input(prompt)
.audioFile(speechFile)
.speechResponseFormat(MotorSpeechFormat.MP3)
.speechVoice(MotorSpeechVoice.ALLOY)
.speechSpeed(1.0)
.build();
// Execute the Speech request.
MotorLargeResponse largeResponse = request.execute();
if (largeResponse.isResponseOk()) {
File responseFilePath = new File("/path/to/audio/file.mp3");
System.out.println("audioFile: " + largeResponse.getResponseFile());
try (InputStream in = largeResponse.getInputStream()) {
Files.copy(in, responseFilePath.toPath(),
StandardCopyOption.REPLACE_EXISTING);
}
} else {
// Treat errors
// See the MotorLargeResponse class for more details.
}
The MotorImageRequest API handles the creation and execution of image-related requests for the ChatMotor API, designed to interface with OpenAI's Vision API. This class leverages the builder pattern to provide a flexible and reliable configuration mechanism, ensuring that all necessary parameters are set before execution.
Model Specification: The model can be set using the .aiModel setter or defaults to the environment variable MOTOR_IMAGE_MODEL.
Input Limits: Image requests are limited to 4096 tokens.
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set
// Create a ChatMotor instance.
ChatMotor chatMotor = ChatMotor.builder().build();
String prompt = "hyperealtic view of the Eiffel Tower in " +
"green under a blue clear sky with sunset orange sun.";
// Make a request to the ChatMotor.
MotorImageRequest request = MotorImageRequest.builder()
.chatMotor(chatMotor)
.n(1)
.prompt(prompt)
.motorImageSize(MotorImageSize.X1024)
.responseImageFormat(ResponseImageFormat.URL)
.build();
// Execute the image request.
MotorImageResponse imageResponse = request.execute();
if (imageResponse.isResponseOk()) {
List<MotorImageContainer> imageContainers
= imageResponse.getImageContainer();
for (MotorImageContainer motorImageContainer : imageContainers) {
System.out.println(motorImageContainer.getUrl());
}
} else {
// Treat errors
// See the MotorImageResponse class for more details.
}
The MotorVisionRequest API handles the creation and execution of vision-related requests to the ChatMotor using system and user prompts. This class uses the builder pattern to ensure flexibility and enforce mandatory configuration.
Model Specification: The model can be set using the .aiModel setter or defaults to the environment variable MOTOR_VISION_MODEL.
Streaming Support: The API supports streaming requests for efficient processing.
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set
// Create a ChatMotor instance.
ChatMotor chatMotor = ChatMotor.builder().build();
// Optionally build a MotorAiOptions instance.
MotorAiOptions options = MotorAiOptions.builder()
.temperature(0.7)
.maxTokens(1000)
.build();
String text = "Describe what you see in this image.";
String imageUrl = "https://upload.wikimedia.org/wikipedia/commons/d/d1/"
+ "Mount_Everest_as_seen_from_Drukair2_PLW_edit.jpg";
MotorContentPartText contentPartText = new MotorContentPartText(text);
MotorContentPartImage contentPartImage = new MotorContentPartImage(
imageUrl);
// Build a Vision request
MotorVisionRequest request = MotorVisionRequest.builder()
.chatMotor(chatMotor)
.motorAiOptions(options)
.contentPartText(contentPartText)
.contentPartImage(contentPartImage)
.build();
// Execute the request.
MotorResponse motorResponse = request.execute();
if (motorResponse.isResponseOk()) {
String response = motorResponse.getResponse();
System.out.println(response);
}
else {
// Treat errors
// See the MotorResponse class for more details.
}
MotorVisionRequest also supports using a local image loaded with VisionImageUtil.
// Build ChatMotor instance...
// ...
String text = "Describe what you see in this image. Give details.";
String userHome = System.getProperty("user.home");
File myEverest = new File(userHome + File.separator +
"Mount_Everest_as_seen_from_Drukair2_PLW_edit.jpg");
// Load a local image file as a base64 string.
String base64 = VisionImageUtil.loadImageAsBase64(myEverest);
MotorContentPartText contentPartText = new MotorContentPartText(text);
MotorContentPartImage contentPartImage = new MotorContentPartImage(base64);
// Make a request to the ChatMotor.
MotorVisionRequest request = MotorVisionRequest.builder()
.chatMotor(chatMotor)
.motorAiOptions(options)
.contentPartText(contentPartText)
.contentPartImage(contentPartImage)
.build();
// Execute the request and treat the response
MotorResponse motorResponse = request.execute();
The Functional APIs in ChatMotor provide advanced text processing capabilities tailored to tasks like summarization and translation. These APIs offer higher-level functionality and do not directly wrap the OpenAI or underlying Simple-OpenAI APIs. Instead, they leverage these models to provide specialized features, enabling developers to handle complex text operations more effectively and efficiently.
The Classic Summary API generates a concise overview of the input text, capturing the main points while reducing the text to a shorter form.
The MotorSummaryRequest class handles the creation and execution of summarization requests. It allows configuration of the text file to be summarized and the model to be used. If the model is not specified, it defaults to the value set in the MOTOR_CHAT_MODEL environment variable or MotorDefaultsModels.MOTOR_CHAT_MODEL.
Key Features:
Flexible Configuration: Uses a builder pattern for secure and customizable request configuration.
Model Specification: The model can be set via the .aiModel method or defaults to environment variables.
Language Specification: The language of the text must be specified to ensure accurate summarization, as ChatGPT may mix up languages. In automation flows, the MotorLanguageGuesserRequest class can be used to reliably detect the language of the text.
Text Format Support: Only text formats are supported because OpenAI understands only text formats. This includes file types like .txt, .csv, and .text.
Line Feed Preservation: The API tries to preserve the line feeds of the passed text, except for the streaming "teletype" response of OpenAI (executeAsStream method).
File Conversion: For other formats, the FileToTextConverter class allows converting most current formats (e.g., .html, .docx, .pdf) to text.
In-Memory Summarization: Summaries are treated as text in memory because summarization is not an isomorphic 1-to-1 operation and there is no reliable way to handle large files due to context loss between parts. The summary is thus limited (for GPT-4 models) to a 128k-token input and a 4096-token output.
Usage Example:
String documentPath = "/path/to/my_document.txt";
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set.
ChatMotor chatMotor = ChatMotor.builder().build();
MotorSummaryRequest request = MotorSummaryRequest.builder()
.chatMotor(chatMotor)
.file(new File(documentPath))
.filePath(documentPath)
.languageCode("fr")
.build();
MotorResponse motorResponse = request.execute();
if (motorResponse.isResponseOk()) {
System.out.println(motorResponse.getResponse());
} else {
// Treat errors
// See the MotorResponse class for more details.
}
The MotorStrategicSummaryRequest API offers the same key features as the Classic Summary but focuses on providing a high-level synthesis of the given text, extracting the most critical information and key insights. The output is always in HTML format, enhancing key points for better readability and understanding.
A strategic summary is designed to provide a quick understanding of the main points, trends, and actionable items within the text, making it ideal for decision-making processes and strategic planning.
Key Features:
Concise Overview: Summarizes the text to highlight essential information.
Key Insights: Focuses on the main points, trends, and actionable items.
Decision Support: Aids in strategic planning and decision-making by providing a quick understanding of the content.
The response will be in HTML format to guarantee readability and ease of understanding. There is no executeAsStream method implemented in this class.
Usage Example:
String documentPath = "/path/to/my_document.txt";
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set.
ChatMotor chatMotor = ChatMotor.builder().build();
MotorStrategicSummaryRequest request = MotorStrategicSummaryRequest.builder()
.chatMotor(chatMotor)
.file(new File(documentPath))
.filePath(documentPath)
.languageCode("fr")
.build();
MotorResponse motorResponse = request.execute();
if (motorResponse.isResponseOk()) {
// Better to dump the response HTML to a file, can't be readily displayed on the console.
String responseFilePath = "/path/to/my_document_summary.html";
try (InputStream in = motorResponse.getInputStream()) {
Files.copy(in, Paths.get(responseFilePath), StandardCopyOption.REPLACE_EXISTING);
}
} else {
// Treat errors
// See the MotorResponse class for more details.
}
The MotorLargeTranslationRequest API handles the translation of text files, whether small or extensive. It preserves the format and structure of the original content while providing high-quality translations. This API uses streaming from input to output, preventing high memory consumption and ensuring efficient handling of large inputs. Users don't need to consider the size of their files; the API automatically switches modes to handle the file efficiently.
ChatMotor will automatically handle the buffer size to ensure no loss of data, but you can always fine-tune the default value of the input buffer length.
This class ensures the format is preserved when translating an HTML file. This allows you to convert Word files to HTML, process them for translation, and then reintegrate them back into Word easily by using "Open With" on the HTML file.
Web Services for PDF or Word to HTML Conversion
To facilitate automated processing, we will offer a free web service in the next ChatMotor beta version to convert Word files to HTML and HTML files to Word. This service will leverage the latest market technology for HTML to Word conversion. We are not experts in file conversion and do not intend to become one, but we aim to provide a useful tool within the current technological limits.
Usage Example:
String documentPath = "/path/to/my_document.html";
String responseFilePath = "/path/to/my_document_translated.html";
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set.
ChatMotor chatMotor = ChatMotor.builder().build();
MotorLargeTranslationRequest motorLargeTranslationRequest = MotorLargeTranslationRequest.builder()
.chatMotor(chatMotor)
.filePath(documentPath)
.languageCode("fr")
.build();
MotorLargeResponse motorLargeResponse = motorLargeTranslationRequest.execute();
if (motorLargeResponse.isResponseOk()) {
System.out.println("responseFilePath: " + responseFilePath);
try (InputStream in = motorLargeResponse.getInputStream()) {
Files.copy(in, Paths.get(responseFilePath), StandardCopyOption.REPLACE_EXISTING);
}
} else {
// Treat errors
// See the MotorLargeResponse class for more details.
}
When setting up the MotorLargeTranslationRequest, you can choose between two implementations for handling large translation requests:
Input Streaming: If useInputStreaming is set to true, the streaming implementation using SAX Parser and TagSoup is used. This method is more efficient for large files, as it processes data in a stream, reducing memory consumption. However, it may be less robust in this beta version.
In-Memory Processing: If useInputStreaming is set to false (the default), the Jsoup implementation is used, which loads files entirely into memory before processing. This approach is simpler and more reliable but can be less efficient for very large files.
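For example, to opt in to the streaming implementation for a very large HTML file, a minimal sketch with placeholder paths (the builder methods are those shown in the translation example above and in the troubleshooting section):
// Build the ChatMotor instance (MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are assumed set).
ChatMotor chatMotor = ChatMotor.builder().build();
// Streaming (SAX/TagSoup) implementation: lower memory usage for very large files,
// but possibly less robust in this beta version.
MotorLargeTranslationRequest streamingRequest = MotorLargeTranslationRequest.builder()
    .chatMotor(chatMotor)
    .filePath("/path/to/very_large_document.html")
    .languageCode("fr")
    .useInputStreaming(true) // default is false: in-memory Jsoup implementation
    .build();
MotorLargeResponse largeResponse = streamingRequest.execute();
// Handle largeResponse exactly as in the translation usage example above.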
As part of our continuous innovation, we are introducing experimental APIs designed to enhance text sanitization and customization. Please note that these APIs are in the experimental phase, and response times may vary.
The MotorExtractContentRequest API enables you to filter the input text to remove or alter content related to specified topics or keywords. This API enhances text sanitization and customization, making it easier to tailor content to specific needs.
String documentPath = "/path/to/my_document.html";
String responseFile = "/path/to/response_file.html";
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set.
ChatMotor chatMotor = ChatMotor.builder().build();
List<String> topics = new ArrayList<>();
topics.add("AI Generative");
topics.add("Machine learning");
MotorExtractContentRequest extractContentRequest = MotorExtractContentRequest.builder()
.chatMotor(chatMotor)
.topics(topics)
.filePath(documentPath)
.languageCode("fr")
.build();
MotorResponse motorResponse = extractContentRequest.execute();
if (motorResponse.isResponseOk()) {
System.out.println(motorResponse.getResponse());
} else {
// Treat errors
// See the MotorResponse class for more details.
}
The MotorFilterContentRequest API allows you to filter the input text to remove or alter content related to specified topics or keywords. This API enhances text sanitization and customization, making it easier to ensure that your content adheres to certain standards or guidelines.
String documentPath = "/path/to/my_document.html";
String responseFile = "/path/to/response_file.html";
// We assume that env var MOTOR_LICENSE_FILE_PATH and MOTOR_API_KEY are set.
ChatMotor chatMotor = ChatMotor.builder().build();
List<String> topics = new ArrayList<>();
topics.add(MotorDefaultContentFilters.COMMERCIAL_CONTENT);
topics.add(MotorDefaultContentFilters.PROFANITY);
MotorFilterContentRequest filterContentRequest = MotorFilterContentRequest.builder()
.chatMotor(chatMotor)
.filePath(documentPath)
.topics(topics)
.languageCode("fr")
.build();
MotorResponse motorResponse = filterContentRequest.execute();
if (motorResponse.isResponseOk()) {
System.out.println(motorResponse.getResponse());
} else {
// Treat errors
// See the MotorResponse class for more details.
}
These APIs are currently in an experimental phase, and while they offer powerful capabilities for content extraction and filtering, their response times may not always be optimal. We are actively working to enhance their performance and reliability. Your feedback is valuable to us as we continue to improve these features.
ChatMotor provides advanced supplemental APIs for specialized text processing tasks. These APIs include MotorCategorizationRequest, MotorEntityRequest, and MotorSentimentRequest, each tailored for specific use cases such as categorization, entity extraction, and sentiment analysis.
Categorization (MotorCategorizationRequest)
The MotorCategorizationRequest API is designed for categorizing text into predefined or custom categories. It is useful for tasks like content tagging, spam filtering, or topic detection.
// Define the path to the input file
String filePath = "path/to/categorization.txt";
// Build a ChatMotor instance
ChatMotor chatMotor = ChatMotor.builder()
.build();
// Create and execute the request
MotorCategorizationRequest request = MotorCategorizationRequest.builder()
.chatMotor(chatMotor)
.filePath(filePath)
.build();
MotorCategorizationResponse response = request.execute();
// Handle the response
if (response.isResponseOk()) {
System.out.println("Categorization: " + response.getMotorCategorization());
} else {
System.err.println("Error: " + response.getThrowable());
}
See the MotorCategorizationResponse Javadoc for response details.
The MotorEntityRequest API extracts named entities such as people, locations, and organizations from a text. This is useful for processing articles, reports, or emails to extract structured data.
// Define the path to the input file
String filePath = "path/to/entities.txt";
// Build a ChatMotor instance
ChatMotor chatMotor = ChatMotor.builder()
.build();
// Create and execute the request
MotorEntityRequest request = MotorEntityRequest.builder()
.chatMotor(chatMotor)
.filePath(filePath)
.build();
MotorEntityResponse response = request.execute();
// Handle the response
if (response.isResponseOk()) {
System.out.println("Entities: " + response.getMotorEntity());
} else {
System.err.println("Error: " + response.getThrowable());
}
See the MotorEntityResponse Javadoc for response details.
The MotorSentimentRequest API analyzes the sentiment of a given text, identifying whether the sentiment is positive, negative, or neutral. It is useful for customer feedback analysis, social media monitoring, and more.
// Define the path to the input file
String filePath = "path/to/sentiment.txt";
// Build a ChatMotor instance
ChatMotor chatMotor = ChatMotor.builder()
.build();
// Create and execute the request
MotorSentimentRequest request = MotorSentimentRequest.builder()
.chatMotor(chatMotor)
.filePath(filePath)
.build();
MotorSentimentResponse response = request.execute();
// Handle the response
if (response.isResponseOk()) {
System.out.println("Sentiment: " + response.getMotorSentiment());
} else {
System.err.println("Error: " + response.getThrowable());
}
See the MotorSentimentResponse Javadoc for response details.
The Notification API in ChatMotor allows you to notify users that their document is ready or send the document to them directly after an AI request is processed. This is especially useful for scenarios where users may be waiting for a document to be generated and need to be informed promptly. It supports multiple communication channels, including SMS (via Twilio), Slack, and Email.
The API is designed to be flexible and extendable, allowing new communication channels to be added easily, such as Notion or Airtable, in addition to the built-in implementations.
Notify Users: Inform users as soon as their document is ready for download or review.
Send Documents: Dispatch the processed document to the user through various communication channels.
Extendable: Easily add new notification channels by implementing the NotificationChannel interface.
Synchronous/Asynchronous Execution: Choose between synchronous (SYNC) and asynchronous (ASYNC) execution modes for notifications and document dispatch.
The NotificationChannel interface defines two key methods:
notifyDocumentAvailable(): Notifies the user that a response document is available.
dispatchDocument(): Sends the document to the user.
Both methods accept the document's file path and the execution mode (SYNC or ASYNC).
The MotorNotifyResponse indicates whether the notification and/or dispatch operation was successfully delivered. Note that this response is meaningful only in SYNC mode.
The following example demonstrates how to use the Slack implementation to notify users and send a document.
// Define your Slack credentials and file path
String token = "your-slack-bot-token";
String channel = "your-slack-channel-id";
String filePath = "path/to/your/document.txt";
// Build a ChatMotor instance
ChatMotor chatMotor = ChatMotor.builder().build();
// Create an instance of SlackNotificationChannel
SlackNotificationChannel slackChannel = new SlackNotificationChannel.Builder()
.chatMotor(chatMotor)
.token(token)
.channel(channel)
.build();
// Notify the user that the document is available
System.out.println("Sending a notification...");
MotorNotifyResponse notifyResponse
= slackChannel.notifyDocumentAvailable(filePath, ExecutionMode.SYNC);
System.out.println("Notification Response: " + notifyResponse);
// Dispatch the document to the user
System.out.println("Sending the document...");
MotorNotifyResponse dispatchResponse
= slackChannel.dispatchDocument(filePath, ExecutionMode.SYNC);
System.out.println("Dispatch Response: " + dispatchResponse);
You can also use other communication channels, such as:
SMS (via Twilio): Notify users via SMS using Twilio with the TwilioNotificationChannel implementation.
Email: Send email notifications or dispatch documents directly to a user's inbox using the EmailNotificationChannel implementation.
The Notification API is designed to be flexible and extensible. You can create custom implementations for other platforms by implementing the NotificationChannel interface. This allows you to use any communication service (e.g., Notion, Airtable, or internal company systems) to notify users or send documents.
API Key Issues
Problem: Invalid or missing API key.
Solution: Ensure that your API key is correctly set in the MOTOR_API_KEY environment variable. Verify that the key has the necessary permissions.
export MOTOR_API_KEY=your_openai_api_key
License File Issues
Problem: License file not found or invalid.
Solution: Make sure the MOTOR_LICENSE_FILE_PATH environment variable points to the correct path of your license file. Verify the contents and validity of the license file.
export MOTOR_LICENSE_FILE_PATH=/path/to/chatmotor_license_key.txt
Network Issues
Problem: Network connectivity problems causing request failures.
Solution: Verify your network connection. If using a proxy, ensure it is correctly configured in your ChatMotor setup.
HttpClient httpClient = HttpClient.newBuilder()
.proxy(ProxySelector.of(new InetSocketAddress("proxy.example.com", 80)))
.build();
ChatMotor chatMotor = ChatMotor.builder()
.httpClient(httpClient)
.build();
Performance Issues
Problem: High memory consumption with the MotorLargeTranslationRequest API.
Solution: Optimize your request parameters and use the streaming option for handling large files efficiently.
MotorLargeTranslationRequest request = MotorLargeTranslationRequest.builder()
.chatMotor(chatMotor)
.filePath(documentPath)
.useInputStreaming(true)
.build();
When using MotorLargeRequestText, MotorLargeRequestLine, MotorLargeTranslationRequest, or MotorTranscriptionRequest, it's important to be patient. These operations can take a significant amount of time, especially for very large files.
Note: Even if a translation or processing takes an hour, it is still far faster and more cost-effective than manual human processing. Understand that large text processing, line processing, and translations are intensive operations. The API handles large files by splitting them into smaller chunks and processing each chunk sequentially. This ensures accurate results but can increase the processing time.
For sample code examples and detailed usage instructions, please refer to the ChatMotor Java Samples project. This repository contains comprehensive examples and guides to help you integrate and utilize the ChatMotor API in your Java applications.