Robert Gruener created PARQUET-1370:
---------------------------------------

             Summary: Read consecutive column chunks in a single scan
                 Key: PARQUET-1370
                 URL: https://issues.apache.org/jira/browse/PARQUET-1370
             Project: Parquet
          Issue Type: Improvement
          Components: parquet-cpp
            Reporter: Robert Gruener


Currently parquet-cpp issues a separate filesystem read for every single data page; see 
[https://github.com/apache/parquet-cpp/blob/a0d1669cf67b055cd7b724dea04886a0ded53c8f/src/parquet/column_reader.cc#L181]

For remote filesystems this can be very inefficient when reading many small 
columns. The Java implementation already handles this case and reads consecutive 
column chunks (and the resulting pages) in a single scan; see 
[https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java#L786]
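
To illustrate the idea, here is a minimal, self-contained C++ sketch of the kind of range coalescing the Java reader performs: the byte ranges of consecutive column chunks in a row group are merged, so each merged run can be fetched from the remote filesystem with one ReadAt-style call and then sliced back into per-chunk buffers for page decoding. {{ByteRange}}, {{CoalesceRanges}} and the {{max_gap}} knob are hypothetical names used only for this sketch, not part of the existing parquet-cpp API.

{code:cpp}
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical description of one column chunk's byte range in the file.
struct ByteRange {
  int64_t offset;
  int64_t length;
};

// Merge ranges that are contiguous (or within max_gap bytes of each other),
// so a whole run of consecutive column chunks can be fetched with one scan
// instead of one filesystem read per data page. Assumes the input is sorted
// by offset, as column chunks are laid out sequentially within a row group.
std::vector<ByteRange> CoalesceRanges(const std::vector<ByteRange>& ranges,
                                      int64_t max_gap = 0) {
  std::vector<ByteRange> merged;
  if (ranges.empty()) return merged;
  merged.push_back(ranges[0]);
  for (std::size_t i = 1; i < ranges.size(); ++i) {
    ByteRange& last = merged.back();
    int64_t last_end = last.offset + last.length;
    if (ranges[i].offset <= last_end + max_gap) {
      // Extend the previous range to also cover this chunk.
      int64_t new_end = ranges[i].offset + ranges[i].length;
      if (new_end > last_end) last.length = new_end - last.offset;
    } else {
      merged.push_back(ranges[i]);
    }
  }
  return merged;
}

int main() {
  // Two chunks laid out back-to-back collapse into one read; the distant
  // third chunk stays as its own scan.
  std::vector<ByteRange> chunks = {{100, 4096}, {4196, 8192}, {1000000, 2048}};
  for (const ByteRange& r : CoalesceRanges(chunks)) {
    std::printf("read at offset %lld, length %lld\n",
                static_cast<long long>(r.offset),
                static_cast<long long>(r.length));
  }
  return 0;
}
{code}

Each merged range would then map to a single scan against the underlying file, with individual column chunks (and their pages) decoded from slices of the returned buffer, which is essentially what ParquetFileReader does on the Java side.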


This might be a bit difficult to do, as it would require changing a lot of the 
code structure, but it would certainly be valuable for workloads concerned with 
optimal read performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
