fresh-borzoni commented on code in PR #417:
URL: https://github.com/apache/fluss-rust/pull/417#discussion_r2881101170


##########
crates/fluss/src/config.rs:
##########
@@ -95,6 +98,21 @@ pub struct Config {
     #[arg(long, default_value_t = DEFAULT_MAX_POLL_RECORDS)]
     pub scanner_log_max_poll_records: usize,
 
+    /// Maximum bytes per fetch response for LogScanner.
+    /// Default: 16777216 (16MB)
+    #[arg(long, default_value_t = DEFAULT_SCANNER_LOG_FETCH_MAX_BYTES)]

Review Comment:
   New config fields need Python bindings (`FfiConfig` plus the `.pyi` stub), C++ 
bindings, and website docs (`configuration.md` for both languages); they are also 
missing from `api-reference.md`.
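A minimal sketch of what mirroring the new option into the FFI-facing config could look like. The `FfiConfig` shape and the other field shown are illustrative assumptions; the real struct lives in the bindings crate, and the 16 MB default comes from the diff above:

```rust
// Illustrative default, matching the diff's documented 16 MB.
const DEFAULT_SCANNER_LOG_FETCH_MAX_BYTES: usize = 16 * 1024 * 1024;

/// Hypothetical flat, bindings-friendly view of the Rust `Config`,
/// as consumed by the Python/C++ wrappers.
struct FfiConfig {
    /// Existing field (default value here is a placeholder).
    scanner_log_max_poll_records: usize,
    /// New field: must be kept in sync with `Config` and documented in
    /// configuration.md / api-reference.md for both language bindings.
    scanner_log_fetch_max_bytes: usize,
}

impl FfiConfig {
    fn new() -> Self {
        FfiConfig {
            scanner_log_max_poll_records: 500,
            scanner_log_fetch_max_bytes: DEFAULT_SCANNER_LOG_FETCH_MAX_BYTES,
        }
    }
}
```

The `.pyi` stub and docs entries would then restate the same name, type, and default, so forgetting any one of them is exactly the drift this comment is flagging.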



##########
crates/fluss/src/client/table/scanner.rs:
##########
@@ -1479,8 +1483,9 @@ impl LogFetcher {
                             partition_id: bucket.partition_id(),
                             bucket_id: bucket.bucket_id(),
                             fetch_offset: offset,
-                            // 1M
-                            max_fetch_bytes: 1024 * 1024,
+                            max_fetch_bytes: self
+                                .fetch_max_bytes
+                                .min(DEFAULT_BUCKET_MAX_FETCH_BYTES),

Review Comment:
   This couples the total-response limit with the per-bucket cap, which is 
incorrect. Java has a separate config, `client.scanner.log.fetch.max-bytes-for-bucket`, 
for this.
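   A minimal sketch of how the two limits could be decoupled, mirroring Java's 
pairing of `client.scanner.log.fetch.max-bytes` with 
`client.scanner.log.fetch.max-bytes-for-bucket`. Field, constant, and method names 
are illustrative, not the PR's actual identifiers:

```rust
// Illustrative defaults: the PR's total-response default is 16 MB, and the
// old hard-coded per-bucket value was 1 MB.
const DEFAULT_SCANNER_LOG_FETCH_MAX_BYTES: usize = 16 * 1024 * 1024;
const DEFAULT_SCANNER_LOG_FETCH_MAX_BYTES_FOR_BUCKET: usize = 1024 * 1024;

struct ScannerFetchConfig {
    /// Cap on one whole fetch response.
    fetch_max_bytes: usize,
    /// Dedicated cap for a single bucket's slice of the response.
    fetch_max_bytes_for_bucket: usize,
}

impl ScannerFetchConfig {
    fn new() -> Self {
        ScannerFetchConfig {
            fetch_max_bytes: DEFAULT_SCANNER_LOG_FETCH_MAX_BYTES,
            fetch_max_bytes_for_bucket: DEFAULT_SCANNER_LOG_FETCH_MAX_BYTES_FOR_BUCKET,
        }
    }

    /// Bytes to request from a single bucket: the dedicated per-bucket cap,
    /// clamped so it can never exceed the total response limit.
    fn max_fetch_bytes_for_bucket(&self) -> usize {
        self.fetch_max_bytes_for_bucket.min(self.fetch_max_bytes)
    }
}
```

   This keeps `fetch_max_bytes` purely a response-level budget, so tuning one 
knob no longer silently changes the other.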



##########
crates/fluss/src/client/table/scanner.rs:
##########
@@ -657,6 +655,9 @@ impl LogFetcher {
         config: &crate::config::Config,
         projected_fields: Option<Vec<usize>>,
     ) -> Result<Self> {
+        config
+            .validate_scanner_fetch()

Review Comment:
   Maybe we should validate this during connection creation, like we do with the SASL fields?
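   A sketch of the fail-fast variant being suggested, with validation hoisted 
into connection creation instead of `LogFetcher::new`. The `Config` fields, 
`validate_scanner_fetch` body, and `Connection::new` signature here are 
assumptions standing in for the PR's real code:

```rust
/// Hypothetical slice of the client config relevant to the scanner.
struct Config {
    scanner_log_fetch_max_bytes: usize,
    scanner_log_max_poll_records: usize,
}

impl Config {
    /// Reject obviously unusable scanner settings up front.
    fn validate_scanner_fetch(&self) -> Result<(), String> {
        if self.scanner_log_fetch_max_bytes == 0 {
            return Err("scanner_log_fetch_max_bytes must be > 0".to_string());
        }
        if self.scanner_log_max_poll_records == 0 {
            return Err("scanner_log_max_poll_records must be > 0".to_string());
        }
        Ok(())
    }
}

struct Connection;

impl Connection {
    /// Fail at connection creation, alongside the existing SASL field
    /// checks, so a bad config never reaches the fetch path.
    fn new(config: &Config) -> Result<Self, String> {
        config.validate_scanner_fetch()?;
        // ... SASL validation and transport setup would follow here ...
        Ok(Connection)
    }
}
```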



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
