zhaohaidao commented on code in PR #143:
URL: https://github.com/apache/fluss-rust/pull/143#discussion_r2686590858


##########
crates/fluss/src/client/table/log_fetch_buffer.rs:
##########
@@ -360,22 +529,54 @@ impl CompletedFetch for DefaultCompletedFetch {
         &self.table_bucket
     }
 
+    fn api_error(&self) -> Option<&ApiError> {
+        None
+    }
+
+    fn take_error(&mut self) -> Option<Error> {
+        None
+    }
+
     fn fetch_records(&mut self, max_records: usize) -> Result<Vec<ScanRecord>> {
-        // todo: handle corrupt_last_record
+        if self.corrupt_last_record {
+            return Err(self.fetch_error());
+        }
+
         if self.consumed {
             return Ok(Vec::new());
         }
 
         let mut scan_records = Vec::new();
 
         for _ in 0..max_records {
-            if let Some(record) = self.next_fetched_record()? {
-                self.next_fetch_offset = record.offset() + 1;
-                self.records_read += 1;
-                scan_records.push(record);
-            } else {
-                break;
+            if self.cached_record_error.is_none() {

Review Comment:
   From your suggestion, I see there is no logic involving `last_record.take`. `last_record` is used to preserve the "partial success followed by error" behavior: records read so far are returned first, and the error is reported on the next fetch attempt.
   This also keeps the logic consistent with the Java client.
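   To illustrate the behavior being preserved, here is a minimal, hypothetical sketch of the "partial success followed by error" pattern. The names (`Fetch`, `cached_error`, `FetchError`) are illustrative, not the PR's actual types; the point is only that a decode error detected mid-fetch is deferred until already-read records have been handed to the caller:

```rust
#[derive(Debug, PartialEq)]
struct Record(u64);

#[derive(Debug)]
struct FetchError(&'static str);

struct Fetch {
    records: Vec<Record>,
    // Error detected while decoding the batch, reported only after all
    // successfully decoded records have been returned.
    cached_error: Option<FetchError>,
    pos: usize,
}

impl Fetch {
    fn fetch_records(&mut self, max_records: usize) -> Result<Vec<Record>, FetchError> {
        let mut out = Vec::new();
        for _ in 0..max_records {
            if self.pos < self.records.len() {
                out.push(Record(self.records[self.pos].0));
                self.pos += 1;
            } else if self.cached_error.is_some() {
                if !out.is_empty() {
                    // Partial success: hand back what was read in this call;
                    // the error stays cached for the next call.
                    return Ok(out);
                }
                // Nothing read in this call: surface the deferred error.
                return Err(self.cached_error.take().unwrap());
            } else {
                break;
            }
        }
        Ok(out)
    }
}
```

   The first call returns the good records, the second call returns the error, and subsequent calls see an empty (consumed) fetch.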



##########
crates/fluss/src/client/table/scanner.rs:
##########
@@ -46,6 +49,7 @@ const LOG_FETCH_MAX_BYTES: i32 = 16 * 1024 * 1024;
 const LOG_FETCH_MAX_BYTES_FOR_BUCKET: i32 = 1024;
 const LOG_FETCH_MIN_BYTES: i32 = 1;
 const LOG_FETCH_WAIT_MAX_TIME: i32 = 500;
+const METADATA_REFRESH_MIN_INTERVAL: Duration = Duration::from_secs(1);

Review Comment:
   ok
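   For context, a constant like `METADATA_REFRESH_MIN_INTERVAL` is typically consulted by a guard that skips a refresh when one happened too recently. A hypothetical sketch of such a min-interval throttle (the `RefreshThrottle` type and its method are assumptions for illustration, not the PR's code):

```rust
use std::time::{Duration, Instant};

const METADATA_REFRESH_MIN_INTERVAL: Duration = Duration::from_secs(1);

// Hypothetical throttle: a refresh is allowed only if the minimum
// interval has elapsed since the last one that was allowed.
struct RefreshThrottle {
    last_refresh: Option<Instant>,
}

impl RefreshThrottle {
    fn should_refresh(&mut self, now: Instant) -> bool {
        match self.last_refresh {
            // Too soon since the last allowed refresh: skip.
            Some(last) if now.duration_since(last) < METADATA_REFRESH_MIN_INTERVAL => false,
            // First refresh, or interval elapsed: record the time and allow.
            _ => {
                self.last_refresh = Some(now);
                true
            }
        }
    }
}
```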



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to