thinkharderdev commented on code in PR #2273:
URL: https://github.com/apache/arrow-datafusion/pull/2273#discussion_r854461112


##########
datafusion/core/src/datasource/file_format/parquet.rs:
##########
@@ -362,19 +368,27 @@ fn fetch_statistics(
 }
 
 /// A wrapper around the object reader to make it implement `ChunkReader`
-pub struct ChunkObjectReader(pub Arc<dyn ObjectReader>);
+pub struct ChunkObjectReader {
+    /// The underlying object reader
+    pub object_reader: Arc<dyn ObjectReader>,
+    /// Optional counter which will track total number of bytes scanned
+    pub bytes_scanned: Option<metrics::Count>,
+}
 
 impl Length for ChunkObjectReader {
     fn len(&self) -> u64 {
-        self.0.length()
+        self.object_reader.length()
     }
 }
 
 impl ChunkReader for ChunkObjectReader {
     type T = Box<dyn Read + Send + Sync>;
 
     fn get_read(&self, start: u64, length: usize) -> ParquetResult<Self::T> {
-        self.0
+        if let Some(m) = self.bytes_scanned.as_ref() {
+            m.add(length)

Review Comment:
   I don't think this interface actually specifies that, so it would really be 
up to the specific `ObjectStore` implementation. I know that is how 
`LocalFileSystem` works, and I believe the AWS S3 API behaves the same way 
with file ranges. 
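
   For context, the pattern in the diff above can be sketched in isolation. 
This is a minimal, self-contained sketch, not DataFusion's actual code: 
`Count` here is a hypothetical stand-in for DataFusion's `metrics::Count` 
(an atomic counter), and the reader is simplified to an in-memory buffer 
rather than an `Arc<dyn ObjectReader>` implementing `ChunkReader`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

/// Simplified stand-in for DataFusion's `metrics::Count`: a shared,
/// cloneable atomic counter.
#[derive(Clone, Default)]
struct Count(Arc<AtomicUsize>);

impl Count {
    fn add(&self, n: usize) {
        self.0.fetch_add(n, Ordering::Relaxed);
    }
    fn value(&self) -> usize {
        self.0.load(Ordering::Relaxed)
    }
}

/// Wrapper that records how many bytes have been requested from the
/// underlying data (stand-in for `ChunkObjectReader`).
struct ChunkObjectReader {
    /// Stand-in for `Arc<dyn ObjectReader>`.
    data: Vec<u8>,
    /// Optional counter tracking total bytes scanned.
    bytes_scanned: Option<Count>,
}

impl ChunkObjectReader {
    /// Return a byte range, bumping the counter first if one is attached.
    fn get_read(&self, start: u64, length: usize) -> &[u8] {
        if let Some(m) = self.bytes_scanned.as_ref() {
            m.add(length);
        }
        let start = start as usize;
        &self.data[start..start + length]
    }
}

fn main() {
    let counter = Count::default();
    let reader = ChunkObjectReader {
        data: vec![0u8; 1024],
        bytes_scanned: Some(counter.clone()),
    };
    let _ = reader.get_read(0, 100);
    let _ = reader.get_read(100, 50);
    // Both reads are reflected in the shared counter.
    println!("{}", counter.value());
}
```

   Because the counter only observes the `length` passed to `get_read`, it 
reports bytes *requested*, which matches actual bytes read only if the 
underlying store honors the range exactly, which is the point the comment 
above is making.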



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
