zhaohaidao opened a new issue, #90: URL: https://github.com/apache/fluss-rust/issues/90
### Search before asking

- [x] I searched in the [issues](https://github.com/apache/fluss-rust/issues) and found nothing similar.

### Please describe the bug 🐞

### Description

When consuming records from Fluss, the Rust client panics with a slice-index-out-of-bounds error if the last batch in the fetched data is incomplete.

### Error Message

```
range end index 1064000 out of range for slice of length 1048576
```

where:

- `self.current_pos + batch_size` = 1064000
- `self.data.len()` = 1048576

### Root Cause

The network layer may return data in which the last batch is truncated/incomplete. This is expected behavior in Kafka-style protocols: the server returns as much data as fits in the response, which may cut off the final batch mid-way. Both Kafka and the Fluss Java client handle this case by checking whether the declared batch size exceeds the remaining bytes and treating it as end-of-stream.

Reference: the Fluss Java client implementation in `MemorySegmentLogInputStream.java`:

```java
public LogRecordBatch nextBatch() {
    Integer batchSize = nextBatchSize();
    // should at-least larger than V0 header size, because V1 header is larger than V0.
    if (batchSize == null
            || remaining < batchSize
            || remaining < V0_RECORD_BATCH_HEADER_SIZE) {
        return null;
    }
    // ...
}
```

### Expected Behavior

When the declared batch size exceeds the remaining bytes in the buffer, the iterator should return `None` (end of stream) instead of attempting to slice out-of-bounds data.

### Solution

In `LogRecordsBatchs::next_batch_size()`, add a bounds check before returning the batch size.

### Are you willing to submit a PR?

- [ ] I'm willing to submit a PR!

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at: [email protected]
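The proposed fix could be sketched in Rust as below. This is a minimal model built only from the names in the report (`LogRecordsBatchs`, `data`, `current_pos`, `next_batch_size`); the 4-byte big-endian size prefix covering the whole batch is an assumption for illustration, and the real fluss-rust batch header is laid out differently.

```rust
// Sketch of the proposed bounds check. The 4-byte size prefix is an
// assumed layout, not the actual fluss-rust wire format.
const SIZE_FIELD_LEN: usize = 4; // assumed width of the batch-size field

struct LogRecordsBatchs<'a> {
    data: &'a [u8],
    current_pos: usize,
}

impl<'a> LogRecordsBatchs<'a> {
    fn new(data: &'a [u8]) -> Self {
        Self { data, current_pos: 0 }
    }

    /// Declared size of the next batch, or None when the remaining bytes
    /// cannot hold a complete batch (i.e. the tail was truncated).
    fn next_batch_size(&self) -> Option<usize> {
        let remaining = self.data.len() - self.current_pos;
        if remaining < SIZE_FIELD_LEN {
            return None; // not even a full size field left
        }
        let raw = &self.data[self.current_pos..self.current_pos + SIZE_FIELD_LEN];
        let batch_size = u32::from_be_bytes(raw.try_into().ok()?) as usize;
        // The fix: a declared size larger than what is left in the buffer
        // means the server cut the batch off mid-way, so treat it as
        // end-of-stream (the Java client additionally enforces a minimum
        // header size) instead of slicing out of bounds.
        if batch_size > remaining {
            return None;
        }
        Some(batch_size)
    }

    /// Next complete batch as a slice, advancing the cursor.
    fn next_batch(&mut self) -> Option<&'a [u8]> {
        let batch_size = self.next_batch_size()?;
        let batch = &self.data[self.current_pos..self.current_pos + batch_size];
        self.current_pos += batch_size;
        Some(batch)
    }
}

fn main() {
    // One complete 8-byte batch followed by a truncated batch that
    // declares 16 bytes while only 6 remain in the buffer.
    let data: Vec<u8> = vec![0, 0, 0, 8, 1, 2, 3, 4, 0, 0, 0, 16, 9, 9];
    let mut batches = LogRecordsBatchs::new(&data);
    assert_eq!(batches.next_batch(), Some(&data[0..8]));
    // The truncated tail now yields None instead of panicking.
    assert_eq!(batches.next_batch(), None);
}
```

With the check in place, the iterator reports end-of-stream on the truncated tail and the consumer re-fetches the incomplete batch in the next request, matching the Java client's behavior.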
