[ https://issues.apache.org/jira/browse/PARQUET-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16569210#comment-16569210 ]

Wes McKinney commented on PARQUET-1370:
---------------------------------------

I have opened some issues related to buffering / concurrent IO in C++, e.g. 
https://issues.apache.org/jira/browse/ARROW-501

[~rgruener] As of 0.10.0, the pyarrow file handles implement RawIOBase

I don't think it would be too difficult to add a buffered reader with a 
configurable buffer size to the Parquet hot path. We already have a 
{{BufferedInputStream}} which may help
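
To make that concrete, here is a minimal sketch of the idea: a wrapper that 
serves small page-level reads out of one larger buffered scan. The 
{{RandomAccessSource}} interface and {{BufferedReader}} class below are 
simplified stand-ins for illustration, not parquet-cpp's actual 
{{BufferedInputStream}} API:

{code:cpp}
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified stand-in for a random-access file source; the real
// parquet-cpp interface differs.
class RandomAccessSource {
 public:
  virtual ~RandomAccessSource() = default;
  // Reads up to nbytes at pos into out, returning the bytes read.
  virtual int64_t ReadAt(int64_t pos, int64_t nbytes, uint8_t* out) = 0;
};

// Serves small page-level reads from one larger buffered scan, so each
// data page no longer costs its own filesystem round trip.
class BufferedReader {
 public:
  BufferedReader(RandomAccessSource* source, int64_t buffer_size)
      : source_(source), buffer_(buffer_size) {}

  int64_t ReadAt(int64_t pos, int64_t nbytes, uint8_t* out) {
    // Hit: the requested range is already in the buffer.
    if (pos >= buffer_start_ &&
        pos + nbytes <= buffer_start_ + buffer_len_) {
      std::memcpy(out, buffer_.data() + (pos - buffer_start_), nbytes);
      return nbytes;
    }
    // Reads larger than the buffer bypass it entirely.
    if (nbytes >= static_cast<int64_t>(buffer_.size())) {
      return source_->ReadAt(pos, nbytes, out);
    }
    // Miss: refill the buffer with one larger scan, then copy out.
    buffer_start_ = pos;
    buffer_len_ = source_->ReadAt(pos, buffer_.size(), buffer_.data());
    const int64_t n = std::min(nbytes, buffer_len_);
    std::memcpy(out, buffer_.data(), n);
    return n;
  }

 private:
  RandomAccessSource* source_;
  std::vector<uint8_t> buffer_;
  int64_t buffer_start_ = 0;
  int64_t buffer_len_ = 0;
};
{code}

The buffer size could then be exposed as a reader property, letting 
remote-filesystem users trade memory for fewer round trips.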

> Read consecutive column chunks in a single scan
> -----------------------------------------------
>
>                 Key: PARQUET-1370
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1370
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-cpp
>            Reporter: Robert Gruener
>            Priority: Major
>
> Currently parquet-cpp issues a filesystem scan for every single data page; 
> see 
> [https://github.com/apache/parquet-cpp/blob/a0d1669cf67b055cd7b724dea04886a0ded53c8f/src/parquet/column_reader.cc#L181]
> For remote filesystems this can be very inefficient when reading many small 
> columns. The Java implementation already reads consecutive column chunks 
> (and the resulting pages) in a single scan; see 
> [https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java#L786]
>  
> This might be a bit difficult to do, as it would require restructuring a 
> lot of the code, but it would certainly be valuable for workloads concerned 
> with optimal read performance.
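
For illustration, a rough sketch of the coalescing step the Java reader 
performs, as described above: group column chunks whose byte ranges are 
contiguous in the file, so each group needs only one scan. {{ChunkRange}} 
and {{CoalesceConsecutive}} are hypothetical names, not existing parquet-cpp 
API:

{code:cpp}
#include <algorithm>
#include <cstdint>
#include <vector>

// One column chunk's byte range, taken from the row group metadata.
struct ChunkRange {
  int64_t offset;
  int64_t length;
};

// Groups column chunks whose byte ranges are contiguous in the file,
// so that each group can be fetched with a single scan instead of one
// scan per data page.
std::vector<std::vector<ChunkRange>> CoalesceConsecutive(
    std::vector<ChunkRange> chunks) {
  std::sort(chunks.begin(), chunks.end(),
            [](const ChunkRange& a, const ChunkRange& b) {
              return a.offset < b.offset;
            });
  std::vector<std::vector<ChunkRange>> groups;
  for (const ChunkRange& chunk : chunks) {
    if (!groups.empty()) {
      const ChunkRange& last = groups.back().back();
      // Contiguous with the previous chunk: extend the current group.
      if (last.offset + last.length == chunk.offset) {
        groups.back().push_back(chunk);
        continue;
      }
    }
    groups.push_back({chunk});
  }
  return groups;
}
{code}

Each group could then be fetched with a single read and its pages decoded 
from the in-memory buffer.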


