[
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17541250#comment-17541250
]
ASF GitHub Bot commented on PARQUET-2149:
-----------------------------------------
parthchandra commented on PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#issuecomment-1135366352
@steveloughran thank you very much for taking the time to review and provide
feedback!
> 1. whose s3 client was used for testing here - if the s3a one, which hadoop
release?
I was working with s3a:
Spark 3.2.1
Hadoop (hadoop-aws) 3.3.2
AWS SDK 1.11.655
> 2. the azure abfs and gcs connectors do async prefetching of the next
block, but are simply assuming that code will read sequentially; if there is
another seek/readFully to a new location, those prefetches will be abandoned.
there is work in s3a to do prefetching here with caching, so as to reduce the
penalty of backwards seeks. https://issues.apache.org/jira/browse/HADOOP-18028
I haven't worked with abfs or gcs. If those connectors do async pre-fetching,
that would be great: the time the Parquet reader has to block in the file
system API would be reduced substantially. In that case, we could turn the
async reader on and off and rerun the benchmark to compare (a sketch of such a
toggle follows). From past experience with MapR-FS, which had very aggressive
read-ahead in its HDFS client, I would still expect better Parquet speeds.
Turning the prefetch off when a seek occurs is usual behaviour, but it means we
may see no benefit from the connector in that case. So a combination of async
reader and async connector might end up being a great solution (maybe at
slightly greater CPU utilization). We would still have to benchmark to see the
real effect.
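A minimal sketch of such a benchmark toggle, assuming a boolean read option.
The property name `parquet.read.async` is purely illustrative; the actual
option introduced by this PR may be named differently.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.HadoopReadOptions;
import org.apache.parquet.ParquetReadOptions;

public class AsyncToggleExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Run the benchmark twice, flipping only this flag between runs.
    // NOTE: "parquet.read.async" is a hypothetical property name.
    conf.setBoolean("parquet.read.async", true); // second run: false
    ParquetReadOptions options = HadoopReadOptions.builder(conf).build();
    // ... open a ParquetFileReader with `options` and time the scan ...
  }
}
```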
The async version in this PR takes care of the sequential read requirement in
two ways:
a) It opens a new stream for each column and ensures every column is read
sequentially. Footers are read using a separate stream; except for the footer
stream, no stream ever seeks to a new location.
b) The amount of data to be read is predetermined, so no read-ahead is ever
started that might be discarded.
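A minimal sketch of that scheme, using Hadoop's `FileSystem` API.
`ColumnChunkMetaData` and its accessors are real parquet-mr names, but the
surrounding structure is illustrative rather than the PR's actual code:

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;

public class PerColumnStreams {
  static void readColumns(FileSystem fs, Path file,
                          List<ColumnChunkMetaData> columns) throws IOException {
    for (ColumnChunkMetaData column : columns) {
      try (FSDataInputStream in = fs.open(file)) { // one stream per column
        in.seek(column.getStartingPos());          // single seek, then strictly sequential
        byte[] chunk = new byte[(int) column.getTotalSize()]; // size known up front...
        in.readFully(chunk); // ...so no speculative read-ahead is ever discarded
        // ... hand `chunk` to the decompression/decoding stage ...
      }
    }
  }
}
```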
>
> hadoop is adding a vectored IO api intended for libraries like orc and
parquet to be able to use, where the application provides an unordered list of
ranges, a bytebuffer supplier and gets back a list of futures to wait for. the
base implementation simply reads using the readFully API. s3a (and later abfs) will
do full async retrieval itself, using the http connection pool.
https://issues.apache.org/jira/browse/HADOOP-18103
>
> both vectored io and s3a prefetching will ship this summer in hadoop
3.4.0. I don't see this change conflicting with this, though they may obsolete
a lot of it.
Yes, I became aware of this recently. I'm discussing integration of these
efforts in a separate channel. At the moment I see no conflict, but I have yet
to determine how much of this async work would need to change. I suspect we may
be able to eliminate or vastly simplify `AsyncMultiBufferInputStream`.
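For context, a sketch of what consuming the vectored API could look like.
Since the API had not shipped at the time of writing, the names below
(`FileRange`, `readVectored`) follow the HADOOP-18103 design and may differ in
the released version:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;

public class VectoredReadSketch {
  static void readChunks(FSDataInputStream in) throws Exception {
    // An unordered list of ranges, e.g. one per column chunk.
    // Offsets and lengths here are purely illustrative.
    List<FileRange> ranges = Arrays.asList(
        FileRange.createFileRange(4L, 1024),
        FileRange.createFileRange(1_048_576L, 8192));
    // Supply ByteBuffers; s3a can satisfy each range asynchronously.
    in.readVectored(ranges, ByteBuffer::allocate);
    for (FileRange range : ranges) {
      ByteBuffer data = range.getData().get(); // one future per range
      // ... feed `data` to decompression/decoding ...
    }
  }
}
```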
> have you benchmarked this change with abfs or google gcs connectors to see
what difference it makes there?
No, I have not. I would love help from anyone in the community with access to
these; I only have access to S3.
> Implement async IO for Parquet file reader
> ------------------------------------------
>
> Key: PARQUET-2149
> URL: https://issues.apache.org/jira/browse/PARQUET-2149
> Project: Parquet
> Issue Type: Improvement
> Components: parquet-mr
> Reporter: Parth Chandra
> Priority: Major
>
> ParquetFileReader's implementation has the following flow (simplified) -
> - For every column -> Read from storage in 8MB blocks -> Read all
> uncompressed pages into output queue
> - From output queues -> (downstream) decompression + decoding
> This flow is serialized, which means that downstream threads are blocked
> until the data has been read. Because a large part of the time is spent
> waiting for data from storage, threads are idle and CPU utilization is very
> low.
> There is no reason why this cannot be made asynchronous _and_ parallel. So
> for Column _i_ -> read one chunk until end, from storage -> intermediate
> output queue -> read one uncompressed page until end -> output queue ->
> (downstream) decompression + decoding
> Note that this can be made completely self-contained in ParquetFileReader, and
> downstream implementations like Iceberg and Spark will automatically be able
> to take advantage without code changes, as long as the ParquetFileReader APIs
> are not changed.
> In past work with async io ([Drill async page
> reader|https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java]),
> I have seen 2x-3x improvement in reading speed for Parquet files.
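A minimal, illustrative sketch of the per-column pipeline described above: a
producer task reads pages from storage into a bounded queue while a downstream
consumer decompresses and decodes them. All names and the queue capacity here
are assumptions, not parquet-mr code.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ColumnPipelineSketch {
  static class Page {
    final byte[] compressedBytes;
    Page(byte[] compressedBytes) { this.compressedBytes = compressedBytes; }
  }
  static final Page END_OF_COLUMN = new Page(new byte[0]); // poison pill

  public static void main(String[] args) throws Exception {
    BlockingQueue<Page> pages = new ArrayBlockingQueue<>(16);
    ExecutorService pool = Executors.newFixedThreadPool(2);

    // Producer: reads one column's pages sequentially from storage.
    pool.submit(() -> {
      for (int i = 0; i < 8; i++) {
        pages.put(new Page(new byte[1024])); // stand-in for one page read
      }
      pages.put(END_OF_COLUMN);
      return null;
    });

    // Consumer: decompression + decoding overlap with the reads above.
    pool.submit(() -> {
      for (Page p = pages.take(); p != END_OF_COLUMN; p = pages.take()) {
        // ... decompress and decode p.compressedBytes ...
      }
      return null;
    });
    pool.shutdown();
  }
}
```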
--
This message was sent by Atlassian Jira
(v8.20.7#820007)