mmodzelewski commented on code in PR #2711:
URL: https://github.com/apache/iggy/pull/2711#discussion_r2812417028


##########
examples/java/README.md:
##########
@@ -64,6 +64,112 @@ You can also customize the server using environment variables:
 IGGY_HTTP_ENABLED=true IGGY_TCP_ADDRESS=0.0.0.0:8090 cargo run --bin iggy-server
 ```
 
+## Blocking vs. Async - When to Use Each

Review Comment:
   I think this section belongs further down. We should aim for a logical flow, 
starting with the basics and moving on to the more advanced concepts.



##########
examples/java/README.md:
##########
@@ -64,6 +64,112 @@ You can also customize the server using environment variables:
 IGGY_HTTP_ENABLED=true IGGY_TCP_ADDRESS=0.0.0.0:8090 cargo run --bin iggy-server
 ```
 
+## Blocking vs. Async - When to Use Each
+
+The Iggy Java SDK provides two client types: **blocking (synchronous)** and **async (non-blocking)**. Choose based on your use case:
+
+### Use Blocking Client When
+
+- Message rate < 1000/sec
+- Writing scripts, CLI tools, or simple applications
+- Sequential code is easier to reason about
+- Integration tests
+
+### Use Async Client When
+
+- Need > 5000 msg/sec throughput
+- Application is already async/reactive (Spring WebFlux, Vert.x)
+- Want to pipeline multiple requests over a single connection
+- Building services that handle many concurrent streams
+
+### Performance Characteristics
+
+**Blocking Client:**
+
+- Throughput: ~5,000 msg/sec (varies with batch size, network latency)
+- Thread usage: One thread per operation
+- Latency: Low (one request at a time)
+
+**Async Client:**
+
+- Throughput: ~20,000+ msg/sec (with pipelining)

Review Comment:
   I'd refrain from including a specific message count, at least until we've 
done some proper benchmarking. Secondly, this depends heavily on the message 
size.



##########
examples/java/README.md:
##########
@@ -132,18 +238,53 @@ Building streams with advanced configuration:
 
 Shows how to use the stream builder API to create and configure streams with custom settings.
 
-## Async Client
+## Key Async Patterns
 
-The following example demonstrates how to use the asynchronous client:
+### CompletableFuture Chaining
 
-Async producer example:
-
-```bash
-./gradlew runAsyncProducer
+```java
+client.connect()
+    .thenCompose(v -> client.login())
+    .thenCompose(identity -> client.streams().createStream("my-stream"))
+    .thenAccept(stream -> System.out.println("Created: " + stream.name()))
+    .exceptionally(ex -> {
+        System.err.println("Error: " + ex.getMessage());
+        return null;
+    });
 ```
 
-Async consumer example:
+### Pipelining for Throughput
 
-```bash
-./gradlew runAsyncConsumerExample
+```java
+List<CompletableFuture<Void>> sends = new ArrayList<>();
+for (int i = 0; i < 10; i++) {
+    sends.add(client.messages().sendMessages(...));
+}
+CompletableFuture.allOf(sends.toArray(new CompletableFuture[0])).join();
+```
+
+### Thread Pool Offloading
+
+```java
+// WRONG - blocks Netty event loop
+client.messages().pollMessages(...)
+    .thenAccept(polled -> {
+        saveToDatabase(polled);  // blocking I/O!
+    });
+
+// CORRECT - offloads to processing pool
+var processingPool = Executors.newFixedThreadPool(8);
+client.messages().pollMessages(...)
+    .thenAcceptAsync(polled -> {
+        saveToDatabase(polled);  // runs on processingPool
+    }, processingPool);
 ```
+
+## Next Steps

Review Comment:
   I'd remove this section entirely, as it's a bit redundant and doesn't add 
anything new.



##########
examples/java/src/main/java/org/apache/iggy/examples/blocking/BlockingConsumer.java:
##########
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iggy.examples.blocking;
+
+import org.apache.iggy.Iggy;
+import org.apache.iggy.client.blocking.tcp.IggyTcpClient;
+import org.apache.iggy.consumergroup.Consumer;
+import org.apache.iggy.identifier.StreamId;
+import org.apache.iggy.identifier.TopicId;
+import org.apache.iggy.message.Message;
+import org.apache.iggy.message.PolledMessages;
+import org.apache.iggy.message.PollingStrategy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Optional;
+
+/**
+ * Blocking Consumer Example
+ *
+ * <p>Demonstrates message consumption using the blocking (synchronous) Iggy client.
+ *
+ * <p>This example shows:
+ *
+ * <ul>
+ *   <li>Connection and authentication</li>
+ *   <li>Continuous message polling from a specific partition</li>
+ *   <li>Offset-based consumption</li>
+ *   <li>Graceful shutdown</li>
+ * </ul>
+ *
+ * <p>Run this after running BlockingProducer to see messages flow through.
+ *
+ * <p>Run with: {@code ./gradlew runBlockingConsumer}
+ */
+public final class BlockingConsumer {

Review Comment:
   Other than the documentation, I don't see how this differs from the 
gettingstarted/consumer example.



##########
examples/java/src/main/java/org/apache/iggy/examples/async/AsyncProducer.java:
##########
@@ -29,183 +30,239 @@
 import org.slf4j.LoggerFactory;
 
 import java.math.BigInteger;
+import java.util.ArrayList;
 import java.util.List;
 import java.util.Optional;
-import java.util.UUID;
 import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicInteger;
 
 /**
- * AsyncProducer demonstrates how to use the async client to send messages to Apache Iggy.
- * This producer sends messages asynchronously and handles responses using CompletableFuture.
+ * Async Producer Example - High Throughput
+ *
+ * <p>Demonstrates high-throughput message production using the async (non-blocking) Iggy client.
+ *
+ * <h2>WHEN TO USE THE ASYNC CLIENT:</h2>
+ * <ul>
+ *   <li>You need &gt; 5000 msg/sec throughput</li>
+ *   <li>Your application is already async/reactive (Spring WebFlux, Vert.x, etc.)</li>
+ *   <li>You want to pipeline multiple requests over a single connection</li>
+ *   <li>You're building a service that handles many concurrent streams</li>
+ * </ul>
+ *
+ * <h2>KEY DIFFERENCES FROM BLOCKING CLIENT:</h2>
+ * <ul>
+ *   <li>All methods return CompletableFuture instead of direct values</li>
+ *   <li>Built on Netty's non-blocking I/O (no thread-per-request)</li>
+ *   <li>Can pipeline multiple send operations without waiting for each to complete</li>
+ *   <li>Single connection handles all concurrent requests via event loop multiplexing</li>
+ * </ul>
+ *
+ * <h2>PERFORMANCE CHARACTERISTICS:</h2>
+ * <ul>
+ *   <li>Throughput: Higher (pipelines requests, no thread contention)</li>
+ *   <li>Latency per request: Similar to blocking</li>
+ *   <li>Thread usage: Minimal (Netty event loop threads)</li>
+ *   <li>Code complexity: Higher (futures, callbacks)</li>
+ * </ul>
+ *
+ * <p>This example shows:
+ * <ul>
+ *   <li>Async client setup with CompletableFuture chaining</li>
+ *   <li>Pipelined message sending (fire multiple sends without blocking)</li>
+ *   <li>Error handling with exceptionally()</li>
+ *   <li>Performance measurement</li>
+ *   <li>Proper async shutdown</li>
+ * </ul>
+ *
+ * <p>Run with: {@code ./gradlew runAsyncProducer}
  */
-public class AsyncProducer {
+public final class AsyncProducer {
     private static final Logger log = LoggerFactory.getLogger(AsyncProducer.class);
 
-    private static final String HOST = "127.0.0.1";
-    private static final int PORT = 8090;
+    // Configuration
+    private static final String IGGY_HOST = "localhost";
+    private static final int IGGY_PORT = 8090;
     private static final String USERNAME = "iggy";
     private static final String PASSWORD = "iggy";
+    private static final String STREAM_NAME = "async-example-stream";
+    private static final String TOPIC_NAME = "async-example-topic";
+    private static final int PARTITION_COUNT = 3;
 
-    private static final String STREAM_NAME = "async-test";
-    private static final String TOPIC_NAME = "events";
-    private static final long PARTITION_ID = 0L;
+    // High-throughput configuration
+    private static final int MESSAGE_BATCH_SIZE = 500; // Larger batches for async
+    private static final int TOTAL_BATCHES = 20;
+    private static final int MAX_IN_FLIGHT = 5; // Pipeline up to 5 concurrent sends
 
-    private static final int MESSAGE_COUNT = 100;
-    private static final int MESSAGE_SIZE = 256;
+    private AsyncProducer() {
+        // Utility class
+    }
 
-    private final AsyncIggyTcpClient client;
-    private final AtomicInteger successCount = new AtomicInteger(0);
-    private final AtomicInteger errorCount = new AtomicInteger(0);
+    public static void main(String[] args) {
+        AsyncIggyTcpClient client = null;
 
-    public AsyncProducer() {
-        this.client = new AsyncIggyTcpClient(HOST, PORT);
-    }
+        try {
+            log.info("=== Async Producer Example (High Throughput) ===");
 
-    public CompletableFuture<Void> start() {
-        log.info("Starting AsyncProducer...");
+            // 1. Build, connect, and login - all chained with CompletableFuture
+            log.info("Connecting to Iggy server at {}:{}...", IGGY_HOST, IGGY_PORT);
 
-        return client.connect()
-                .thenCompose(v -> {
-                    log.info("Connected to Iggy server at {}:{}", HOST, PORT);
-                    return client.users().login(USERNAME, PASSWORD);
-                })
-                .thenCompose(v -> {
-                    log.info("Logged in successfully as user: {}", USERNAME);
-                    return setupStreamAndTopic();
-                })
-                .thenCompose(v -> {
-                    log.info("Stream and topic setup complete");
-                    return sendMessages();
-                })
-                .thenRun(() -> {
-                    log.info("All messages sent. Success: {}, Errors: {}", 
successCount.get(), errorCount.get());
-                })
-                .exceptionally(ex -> {
-                    log.error("Error in producer flow", ex);
-                    return null;
-                });
+            // ASYNC PATTERN: Use join() only at the end to block until client is ready.
+            // In a real async app (e.g. Spring WebFlux), you'd chain everything without join().
+            client = Iggy.tcpClientBuilder()
+                    .async()
+                    .host(IGGY_HOST)
+                    .port(IGGY_PORT)
+                    .credentials(USERNAME, PASSWORD)
+                    .buildAndLogin()
+                    .join(); // Block here to wait for connection
+
+            log.info("Connected successfully");
+
+            // 2. Setup stream and topic
+            AsyncIggyTcpClient finalClient = client;
+            setupStreamAndTopic(finalClient).join();
+
+            // 3. Send messages with pipelining
+            sendMessagesAsync(finalClient).join();
+
+            log.info("=== Producer completed successfully ===");
+
+        } catch (RuntimeException e) {
+            log.error("Producer failed", e);
+            System.exit(1);
+        } finally {
+            // Always close the client
+            if (client != null) {
+                try {
+                    client.close().join();
+                    log.info("Client closed");
+                } catch (RuntimeException e) {
+                    log.error("Error closing client", e);
+                }
+            }
+        }
     }
 
-    private CompletableFuture<Void> setupStreamAndTopic() {
-        log.info("Checking stream: {}", STREAM_NAME);
+    private static CompletableFuture<Void> setupStreamAndTopic(AsyncIggyTcpClient client) {
+        // ASYNC CHAINING PATTERN:
+        // Each operation returns CompletableFuture. We chain them with thenCompose().
+        // Errors propagate down the chain and can be handled with exceptionally().
 
         return client.streams()
                 .getStream(StreamId.of(STREAM_NAME))
-                .thenCompose(stream -> {
-                    if (stream.isEmpty()) {
-                        log.info("Creating stream: {}", STREAM_NAME);
-                        return client.streams()
-                                .createStream(STREAM_NAME)
-                                .thenAccept(created -> log.info("Stream created: {}", created.name()));
-                    } else {
-                        log.info("Stream exists: {}", STREAM_NAME);
-                        return CompletableFuture.completedFuture(null);
-                    }
+                .thenApply(stream -> {
+                    log.info("Stream '{}' already exists", STREAM_NAME);

Review Comment:
   This flow does not look right. `getStream` returns an `Optional`, so you should check what's inside it. Later, there's a `creating stream` log message within `exceptionally`, which actually won't be called. Please review and update this flow.
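   For reference, a minimal sketch of the flow I'd expect here, mirroring the removed version in this hunk (return types are assumptions taken from the surrounding diff, not a drop-in fix):

```java
private static CompletableFuture<Void> setupStreamAndTopic(AsyncIggyTcpClient client) {
    // Sketch only: assumes getStream returns CompletableFuture<Optional<...>>
    // as implied by the removed code above.
    return client.streams()
            .getStream(StreamId.of(STREAM_NAME))
            .thenCompose(stream -> {
                if (stream.isPresent()) {
                    log.info("Stream '{}' already exists", STREAM_NAME);
                    return CompletableFuture.completedFuture(null);
                }
                log.info("Creating stream: {}", STREAM_NAME);
                return client.streams()
                        .createStream(STREAM_NAME)
                        .thenAccept(created -> log.info("Stream created: {}", created.name()));
            });
}
```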



##########
examples/java/src/main/java/org/apache/iggy/examples/blocking/BlockingProducer.java:
##########
@@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iggy.examples.blocking;
+
+import org.apache.iggy.Iggy;
+import org.apache.iggy.client.blocking.tcp.IggyTcpClient;
+import org.apache.iggy.identifier.StreamId;
+import org.apache.iggy.identifier.TopicId;
+import org.apache.iggy.message.Message;
+import org.apache.iggy.message.Partitioning;
+import org.apache.iggy.topic.CompressionAlgorithm;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.math.BigInteger;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * Blocking Producer Example
+ *
+ * <p>Demonstrates basic message production using the blocking (synchronous) Iggy client.
+ *
+ * <h2>WHEN TO USE THE BLOCKING CLIENT:</h2>
+ * <ul>
+ *   <li>Simple applications, scripts, CLI tools</li>
+ *   <li>Message rate &lt; 1000/sec</li>
+ *   <li>Sequential code is easier to reason about</li>
+ *   <li>Integration tests</li>
+ * </ul>
+ *
+ * <p>This example shows:
+ * <ul>
+ *   <li>Client connection and authentication</li>
+ *   <li>Stream and topic creation</li>
+ *   <li>Batch message sending (recommended for efficiency)</li>
+ *   <li>Balanced partitioning for throughput</li>
+ *   <li>Proper resource cleanup</li>
+ * </ul>
+ *
+ * <p>Run with: {@code ./gradlew runBlockingProducer}
+ */
+public final class BlockingProducer {

Review Comment:
   This seems like a duplicate of the `gettingstarted/producer` example.



##########
examples/java/src/main/java/org/apache/iggy/examples/async/AsyncConsumer.java:
##########
@@ -0,0 +1,349 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iggy.examples.async;
+
+import org.apache.iggy.Iggy;
+import org.apache.iggy.client.async.tcp.AsyncIggyTcpClient;
+import org.apache.iggy.consumergroup.Consumer;
+import org.apache.iggy.identifier.StreamId;
+import org.apache.iggy.identifier.TopicId;
+import org.apache.iggy.message.Message;
+import org.apache.iggy.message.PolledMessages;
+import org.apache.iggy.message.PollingStrategy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.math.BigInteger;
+import java.util.Optional;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * Async Consumer Example - Backpressure and Error Handling
+ *
+ * <p>Demonstrates advanced async message consumption patterns including:
+ * <ul>
+ *   <li>Non-blocking continuous polling</li>
+ *   <li>Backpressure management (don't poll faster than you can process)</li>
+ *   <li>Error recovery with exponential backoff</li>
+ *   <li>Offloading CPU-intensive work from Netty threads</li>
+ *   <li>Graceful shutdown</li>
+ * </ul>
+ *
+ * <h2>CRITICAL ASYNC PATTERN - Thread Pool Management:</h2>
+ *
+ * <p>The async client uses Netty's event loop threads for I/O operations.
+ * <strong>NEVER</strong> block these threads with:
+ * <ul>
+ *   <li>{@code .join()} or {@code .get()} inside {@code thenApply/thenAccept}</li>
+ *   <li>{@code Thread.sleep()}</li>
+ *   <li>Blocking database calls</li>
+ *   <li>Long-running computations</li>
+ * </ul>
+ *
+ * <p>If your message processing involves blocking operations, offload to a separate
+ * thread pool using {@code thenApplyAsync(fn, executor)}.
+ *
+ * <p>This example shows the correct pattern.
+ *
+ * <p>Run after AsyncProducer to see messages flow.
+ *
+ * <p>Run with: {@code ./gradlew runAsyncConsumer}
+ */
+public final class AsyncConsumer {
+    private static final Logger log = LoggerFactory.getLogger(AsyncConsumer.class);
+
+    // Configuration (must match AsyncProducer)
+    private static final String IGGY_HOST = "localhost";
+    private static final int IGGY_PORT = 8090;
+    private static final String USERNAME = "iggy";
+    private static final String PASSWORD = "iggy";
+    private static final String STREAM_NAME = "async-example-stream";
+    private static final String TOPIC_NAME = "async-example-topic";
+    private static final long PARTITION_ID = 0L;
+    private static final long CONSUMER_ID = 0L;
+
+    // Polling configuration
+    private static final int POLL_BATCH_SIZE = 100;
+    private static final int POLL_INTERVAL_MS = 1000;
+    private static final int BATCHES_LIMIT = 5; // Exit after receiving this many batches
+    private static final int MAX_EMPTY_POLLS = 5; // Exit if no messages after consecutive empty polls
+
+    // Error recovery configuration
+    private static final int MAX_RETRY_ATTEMPTS = 5;
+    private static final int INITIAL_BACKOFF_MS = 100;
+    private static final int MAX_BACKOFF_MS = 5000;
+
+    // Thread pool for message processing (separate from Netty threads)
+    // Size based on workload: CPU-bound = availableProcessors, I/O-bound = 2x or more
+    private static final int PROCESSING_THREADS = Runtime.getRuntime().availableProcessors();
+
+    private static volatile boolean running = true;
+
+    private AsyncConsumer() {
+        // Utility class
+    }
+
+    public static void main(String[] args) {
+        AsyncIggyTcpClient client = null;
+        ExecutorService processingPool = null;
+
+        // Handle Ctrl+C gracefully
+        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
+            log.info("Shutdown signal received, stopping consumer...");
+            running = false;
+        }));
+
+        try {
+            log.info("=== Async Consumer Example (Backpressure + Error 
Handling) ===");
+
+            // Create thread pool for message processing
+            processingPool = Executors.newFixedThreadPool(PROCESSING_THREADS, r -> {
+                Thread t = new Thread(r, "message-processor");
+                t.setDaemon(true);
+                return t;
+            });
+
+            log.info("Created processing thread pool with {} threads", 
PROCESSING_THREADS);
+
+            // 1. Connect and authenticate
+            log.info("Connecting to Iggy server at {}:{}...", IGGY_HOST, 
IGGY_PORT);
+            client = Iggy.tcpClientBuilder()
+                    .async()
+                    .host(IGGY_HOST)
+                    .port(IGGY_PORT)
+                    .credentials(USERNAME, PASSWORD)
+                    .buildAndLogin()
+                    .join();
+
+            log.info("Connected successfully");
+
+            // 2. Poll messages continuously with backpressure
+            AsyncIggyTcpClient finalClient = client;
+            pollMessagesAsync(finalClient, processingPool).join();
+
+            log.info("=== Consumer stopped gracefully ===");
+
+        } catch (RuntimeException e) {
+            log.error("Consumer failed", e);
+            System.exit(1);
+        } finally {
+            // Cleanup
+            if (client != null) {
+                try {
+                    client.close().join();
+                    log.info("Client closed");
+                } catch (RuntimeException e) {
+                    log.error("Error closing client", e);
+                }
+            }
+
+            if (processingPool != null) {
+                processingPool.shutdown();
+                try {
+                    if (!processingPool.awaitTermination(5, TimeUnit.SECONDS)) {
+                        processingPool.shutdownNow();
+                    }
+                    log.info("Processing thread pool shut down");
+                } catch (InterruptedException e) {
+                    processingPool.shutdownNow();
+                }
+            }
+        }
+    }
+
+    private static CompletableFuture<Void> pollMessagesAsync(
+            AsyncIggyTcpClient client, ExecutorService processingPool) {
+        log.info("Starting async polling loop (limit: {} batches)...", 
BATCHES_LIMIT);
+
+        AtomicInteger totalReceived = new AtomicInteger(0);
+        AtomicInteger emptyPolls = new AtomicInteger(0);
+        AtomicInteger consumedBatches = new AtomicInteger(0);
+        AtomicReference<BigInteger> offset = new AtomicReference<>(BigInteger.ZERO);
+
+        // RECURSIVE ASYNC POLLING PATTERN:
+        // Each poll schedules the next poll after processing completes.
+        // This provides natural backpressure - we don't poll for new messages
+        // until we've finished processing the current batch.
+
+        CompletableFuture<Void> pollingLoop = new CompletableFuture<>();
+        pollBatch(client, processingPool, totalReceived, emptyPolls, consumedBatches, offset, 0, pollingLoop);
+        return pollingLoop;
+    }
+
+    private static void pollBatch(
+            AsyncIggyTcpClient client,
+            ExecutorService processingPool,
+            AtomicInteger totalReceived,
+            AtomicInteger emptyPolls,
+            AtomicInteger consumedBatches,
+            AtomicReference<BigInteger> offset,
+            int retryAttempt,
+            CompletableFuture<Void> loopFuture) {
+        if (!running || consumedBatches.get() >= BATCHES_LIMIT) {
+            log.info(
+                    "Finished consuming {} batches. Total messages received: 
{}",
+                    consumedBatches.get(),
+                    totalReceived.get());
+            loopFuture.complete(null);
+            return;
+        }
+
+        StreamId streamId = StreamId.of(STREAM_NAME);
+        TopicId topicId = TopicId.of(TOPIC_NAME);
+        Consumer consumer = Consumer.of(CONSUMER_ID);
+
+        client.messages()
+                .pollMessages(
+                        streamId,
+                        topicId,
+                        Optional.of(PARTITION_ID),
+                        consumer,
+                        PollingStrategy.offset(offset.get()),
+                        (long) POLL_BATCH_SIZE,
+                        false)
+                .thenComposeAsync(
+                        polled -> {
+                            // OFFLOAD TO PROCESSING POOL:
+                            // We use thenComposeAsync with processingPool to move message processing
+                            // off the Netty event loop. This is critical for heavy workloads.
+
+                            int messageCount = polled.messages().size();
+
+                            if (messageCount > 0) {
+                                // Update offset for next poll
+                                offset.updateAndGet(current -> current.add(BigInteger.valueOf(messageCount)));
+                                consumedBatches.incrementAndGet();
+
+                                return processMessages(polled, totalReceived, processingPool)
+                                        .thenRun(() -> emptyPolls.set(0));
+                            } else {
+                                int empty = emptyPolls.incrementAndGet();
+                                if (empty >= MAX_EMPTY_POLLS) {
+                                    log.info("No more messages after {} empty 
polls, finishing.", MAX_EMPTY_POLLS);
+                                    running = false;
+                                    return CompletableFuture.completedFuture(null);
+                                }
+                                log.info("Caught up - no new messages. 
Waiting...");
+                                // Sleep without blocking Netty threads
+                                return CompletableFuture.runAsync(
+                                        () -> {
+                                            try {
+                                                Thread.sleep(POLL_INTERVAL_MS);
+                                            } catch (InterruptedException e) {
+                                                Thread.currentThread().interrupt();
+                                            }
+                                        },
+                                        processingPool);
+                            }
+                        },
+                        processingPool)
+                .thenRun(() -> {
+                    // SUCCESS: Reset retry counter and schedule next poll
+                    pollBatch(
+                            client, processingPool, totalReceived, emptyPolls, consumedBatches, offset, 0, loopFuture);
+                })
+                .exceptionally(e -> {
+                    // ERROR RECOVERY WITH EXPONENTIAL BACKOFF:
+                    // Don't give up on the first error. Retry with increasing delays.
+                    log.error(
+                            "Error polling messages (attempt {}/{}): {}",
+                            retryAttempt + 1,
+                            MAX_RETRY_ATTEMPTS,
+                            e.getMessage());
+
+                    if (retryAttempt < MAX_RETRY_ATTEMPTS) {
+                        int backoffMs = Math.min(INITIAL_BACKOFF_MS * (1 << retryAttempt), MAX_BACKOFF_MS);
+                        log.info("Retrying in {} ms...", backoffMs);
+
+                        // Schedule retry after backoff
+                        CompletableFuture.runAsync(
+                                () -> {
+                                    try {
+                                        Thread.sleep(backoffMs);
+                                    } catch (InterruptedException ie) {
+                                        Thread.currentThread().interrupt();
+                                    }
+                                    pollBatch(
+                                            client,
+                                            processingPool,
+                                            totalReceived,
+                                            emptyPolls,
+                                            consumedBatches,
+                                            offset,
+                                            retryAttempt + 1,
+                                            loopFuture);
+                                },
+                                processingPool);
+                    } else {
+                        log.error("Max retry attempts reached. Stopping 
consumer.");
+                        loopFuture.completeExceptionally(e);
+                    }
+                    return null;
+                });
+    }
+
+    private static CompletableFuture<Void> processMessages(
+            PolledMessages polled, AtomicInteger totalReceived, ExecutorService processingPool) {
+        // Process each message (this runs on processingPool, not Netty threads)
+        return CompletableFuture.runAsync(
+                () -> {
+                    int messageCount = polled.messages().size();
+
+                    for (Message message : polled.messages()) {
+                        String payload = new String(message.payload());
+
+                        // Simulate message processing (in real app: parse, validate, store, etc.)
+                        // This could be CPU-intensive or involve blocking I/O (database, HTTP calls)
+                        processMessage(payload, message.header().offset());
+                    }
+
+                    int total = totalReceived.addAndGet(messageCount);
+                    log.info("Processed {} messages (total: {})", 
messageCount, total);
+                },
+                processingPool);
+    }
+
+    private static void processMessage(String payload, java.math.BigInteger offset) {
+        // In a real application, this would be your business logic:
+        //   - Parse JSON
+        //   - Validate data
+        //   - Call external APIs
+        //   - Update database
+        //   - Send to downstream systems
+
+        // For this example, just log occasionally
+        if (offset.compareTo(java.math.BigInteger.valueOf(5)) < 0

Review Comment:
   Please use imports instead of FQN.
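   For example (sketch only), with `import java.math.BigInteger;` added at the top of the file the FQN usages become:

```java
// Sketch only - same logic as above, just without the java.math.BigInteger FQN.
private static void processMessage(String payload, BigInteger offset) {
    // For this example, just log occasionally
    if (offset.compareTo(BigInteger.valueOf(5)) < 0 /* ...rest of the condition unchanged... */) {
        // ...
    }
}
```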



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
