fresh-borzoni commented on code in PR #404:
URL: https://github.com/apache/fluss-rust/pull/404#discussion_r2900832494
##########
bindings/cpp/include/fluss.hpp:
##########
@@ -998,6 +998,14 @@ struct Configuration {
// Maximum number of records returned in a single call to Poll() for LogScanner
size_t scanner_log_max_poll_records{500};
int64_t writer_batch_timeout_ms{100};
+ // Whether to enable idempotent writes
+ bool writer_enable_idempotence{true};
+ // Maximum number of in-flight requests per bucket for idempotent writes
+ size_t writer_max_inflight_requests_per_bucket{5};
+ // Total memory available for buffering write batches (default 64MB)
+ size_t writer_buffer_memory_size{64 * 1024 * 1024};
+ // Maximum time in milliseconds to block waiting for buffer memory
+ uint64_t writer_buffer_wait_timeout_ms{60000};
Review Comment:
Sure, it's a valid concern.
This timeout only governs how long send() blocks when the buffer memory pool
is full. It's backpressure, not retry durability. During rebalancing,
already-enqueued batches are retried by the sender (with writer_retries =
i32::MAX and metadata refresh on unknown leaders), so they survive leader
changes regardless of this value.
The 60s default here matches Kafka's max.block.ms default (also 60s) and Java
Fluss's LazyMemorySegmentPool timeout for backpressure.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]