[ https://issues.apache.org/jira/browse/PARQUET-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16568403#comment-16568403 ]
Uwe L. Korn commented on PARQUET-1370:
--------------------------------------

I'm doing the same; my code looks as follows:

{code:python}
import io
from pyarrow.parquet import ParquetFile  # assuming pyarrow's ParquetFile

reader = …some file handle…
reader = io.BufferedReader(reader, 512 * 1024)  # 512 KiB read buffer
parquet_file = ParquetFile(reader)
{code}

This was so simple that I thought it might not be relevant for now. Having a general C++ implementation of {{io.BufferedReader}} in Arrow C++ might be a simpler approach to our problem. Using {{io.BufferedReader}} probably involves some additional memory copies and overhead, as we have to switch between Python and C++ often.

(In my case, the file handle comes from [https://github.com/mbr/simplekv] / [https://github.com/blue-yonder/storefact].)

> Read consecutive column chunks in a single scan
> -----------------------------------------------
>
>                 Key: PARQUET-1370
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1370
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-cpp
>            Reporter: Robert Gruener
>            Priority: Major
>
> Currently parquet-cpp issues a filesystem scan for every single data page; see
> [https://github.com/apache/parquet-cpp/blob/a0d1669cf67b055cd7b724dea04886a0ded53c8f/src/parquet/column_reader.cc#L181]
> For remote filesystems this can be very inefficient when reading many small
> columns. The Java implementation already does this and will read consecutive
> column chunks (and the resulting pages) in a single scan; see
> [https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java#L786]
>
> This might be a bit difficult to do, as it would require changing a lot of
> the code structure, but it would certainly be valuable for workloads concerned
> with optimal read performance.
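To make the requested behaviour concrete, here is a minimal sketch of the coalescing idea in Python: merge the byte ranges of consecutive column chunks and issue one seek+read per merged range instead of one per page. The {{chunk_ranges}} input and the plain seekable file handle are assumptions for illustration; parquet-cpp would take the offsets and lengths from the row-group metadata.

{code:python}
def coalesce_ranges(ranges, max_gap=0):
    """Merge (offset, length) ranges whose gap is at most max_gap bytes."""
    merged = []
    for offset, length in sorted(ranges):
        if merged and offset - (merged[-1][0] + merged[-1][1]) <= max_gap:
            start, prev_len = merged[-1]
            merged[-1] = (start, max(start + prev_len, offset + length) - start)
        else:
            merged.append((offset, length))
    return merged

def read_column_chunks(fh, chunk_ranges, max_gap=0):
    """Read all requested chunks with one scan per coalesced range.

    fh: any seekable binary file-like object (hypothetical input).
    chunk_ranges: list of (offset, length) pairs, one per column chunk.
    Returns {(offset, length): bytes} for every requested chunk.
    """
    chunks = {}
    for offset, length in coalesce_ranges(chunk_ranges, max_gap):
        fh.seek(offset)
        buf = fh.read(length)  # a single scan covering several chunks
        for c_off, c_len in chunk_ranges:
            if offset <= c_off and c_off + c_len <= offset + length:
                chunks[(c_off, c_len)] = buf[c_off - offset:c_off - offset + c_len]
    return chunks
{code}

With {{max_gap}} set slightly above zero, small gaps between chunks can be absorbed into a single scan, trading a few extra bytes read for fewer round trips, which is the trade-off that matters most on remote filesystems.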