OpenAI Client

The OpenAiClient is an implementation of GenAiClient that interacts with the OpenAI API (GPT models). It supports both synchronous chat and asynchronous streaming chat.

Features

  • Synchronous Chat: Sends a prompt to the model and waits for the full response.
  • Streaming Chat: Streams the response from the model token by token, suitable for real-time applications.
  • Non-Blocking I/O: Uses XNIO and Undertow’s asynchronous client to prevent blocking threads during I/O operations.
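
Based on the usage examples further down this page, the GenAiClient contract can be pictured roughly as follows. This is a sketch inferred from those examples, not the exact interface from the source:

import java.util.List;

// Sketch of the GenAiClient contract as implied by the usage examples below.
public interface GenAiClient {

    // Sends the conversation and blocks until the full response is available.
    String chat(List<ChatMessage> messages);

    // Sends the conversation and delivers the response incrementally via the callback.
    void chatStream(List<ChatMessage> messages, StreamCallback callback);
}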

Configuration

The client is configured via openai.yml.

Properties

Property | Description                                    | Default
url      | The OpenAI API URL for chat completions.       | https://api.openai.com/v1/chat/completions
model    | The model to use (e.g., gpt-3.5-turbo, gpt-4). | null
apiKey   | Your OpenAI API key.                           | null

Example openai.yml

url: https://api.openai.com/v1/chat/completions
model: gpt-3.5-turbo
apiKey: your-openai-api-key
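
If you want to inspect these values yourself (for example in a test), any YAML parser will do. The snippet below is a minimal sketch using SnakeYAML and is independent of the client's own configuration mechanism:

import java.io.InputStream;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

public class OpenAiConfigCheck {
    public static void main(String[] args) {
        // Load openai.yml from the classpath and print the documented keys.
        InputStream in = OpenAiConfigCheck.class
                .getClassLoader()
                .getResourceAsStream("openai.yml");
        Map<String, Object> config = new Yaml().load(in);
        System.out.println("url    = " + config.get("url"));
        System.out.println("model  = " + config.get("model"));
        System.out.println("apiKey = " + (config.get("apiKey") != null ? "(set)" : "(missing)"));
    }
}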

Usage

Injection

You can inject the OpenAiClient wherever a GenAiClient is required, or instantiate it directly:

GenAiClient client = new OpenAiClient();
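
If your application wires components through a dependency injection container, the client would typically arrive through a constructor instead of being created with new. The sketch below assumes a JSR-330/Jakarta-style @Inject annotation; the annotation and container in your project may differ:

import jakarta.inject.Inject;
import java.util.List;

public class ChatService {

    private final GenAiClient client;

    // The container supplies an OpenAiClient (or any other GenAiClient binding) here.
    @Inject
    public ChatService(GenAiClient client) {
        this.client = client;
    }

    public String ask(String userMessage) {
        return client.chat(List.of(new ChatMessage("user", userMessage)));
    }
}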

Synchronous Chat

List<ChatMessage> messages = new ArrayList<>();
messages.add(new ChatMessage("user", "Hello, OpenAI!"));
String response = client.chat(messages);
System.out.println(response);
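
The ChatMessage used here pairs a role with the message text. Its exact definition is not shown on this page; a minimal stand-in, inferred from the constructor calls in these examples, could look like this (the real class may carry additional fields):

// Hypothetical stand-in for ChatMessage, inferred from the examples on this page.
public class ChatMessage {

    private final String role;    // e.g. "system", "user", "assistant"
    private final String content; // the message text

    public ChatMessage(String role, String content) {
        this.role = role;
        this.content = content;
    }

    public String getRole()    { return role; }
    public String getContent() { return content; }
}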

Streaming Chat

List<ChatMessage> messages = new ArrayList<>();
messages.add(new ChatMessage("user", "Write a long story."));

client.chatStream(messages, new StreamCallback() {
    @Override
    public void onEvent(String content) {
        System.out.print(content);
    }

    @Override
    public void onComplete() {
        System.out.println("\nDone.");
    }

    @Override
    public void onError(Throwable throwable) {
        throwable.printStackTrace();
    }
});
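
The StreamCallback passed to chatStream exposes the three methods used above. Its contract, as inferred from this example, amounts to the following sketch (the real interface may differ in detail):

// Sketch of the StreamCallback contract as used in the streaming example above.
public interface StreamCallback {

    // Called for each chunk of content as it arrives from the API.
    void onEvent(String content);

    // Called once the stream has completed successfully.
    void onComplete();

    // Called if the request or the stream fails.
    void onError(Throwable throwable);
}

Because the interface declares three methods, it cannot be implemented with a lambda; the anonymous class shown above is the usual pattern.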