[
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17557307#comment-17557307
]
ASF GitHub Bot commented on PARQUET-2149:
-----------------------------------------
ggershinsky commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r903403821
##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1796,5 +1882,314 @@ public void readAll(SeekableInputStream f, ChunkListBuilder builder) throws IOEx
public long endPos() {
return offset + length;
}
+
+ @Override
+ public String toString() {
+ return "ConsecutivePartList{" +
+ "offset=" + offset +
+ ", length=" + length +
+ ", chunks=" + chunks +
+ '}';
+ }
}
+
+ /**
+ * Encapsulates the reading of a single page.
+ */
+ public class PageReader implements Closeable {
+ private final Chunk chunk;
+ private final int currentBlock;
+ private final BlockCipher.Decryptor headerBlockDecryptor;
+ private final BlockCipher.Decryptor pageBlockDecryptor;
+ private final byte[] aadPrefix;
+ private final int rowGroupOrdinal;
+ private final int columnOrdinal;
+
+ // state
+ private final LinkedBlockingDeque<Optional<DataPage>> pagesInChunk = new LinkedBlockingDeque<>();
+ private DictionaryPage dictionaryPage = null;
+ private int pageIndex = 0;
+ private long valuesCountReadSoFar = 0;
+ private int dataPageCountReadSoFar = 0;
+
+ // derived
+ private final PrimitiveType type;
+ private final byte[] dataPageAAD;
+ private final byte[] dictionaryPageAAD;
+ private byte[] dataPageHeaderAAD = null;
+
+ private final BytesInputDecompressor decompressor;
+
+ private final ConcurrentLinkedQueue<Future<Void>> readFutures = new ConcurrentLinkedQueue<>();
+
+ private final LongAdder totalTimeReadOnePage = new LongAdder();
+ private final LongAdder totalCountReadOnePage = new LongAdder();
+ private final LongAccumulator maxTimeReadOnePage = new LongAccumulator(Long::max, 0L);
+ private final LongAdder totalTimeBlockedPagesInChunk = new LongAdder();
+ private final LongAdder totalCountBlockedPagesInChunk = new LongAdder();
+ private final LongAccumulator maxTimeBlockedPagesInChunk = new LongAccumulator(Long::max, 0L);
+
+ public PageReader(Chunk chunk, int currentBlock, Decryptor headerBlockDecryptor,
+ Decryptor pageBlockDecryptor, byte[] aadPrefix, int rowGroupOrdinal, int columnOrdinal,
+ BytesInputDecompressor decompressor
+ ) {
+ this.chunk = chunk;
+ this.currentBlock = currentBlock;
+ this.headerBlockDecryptor = headerBlockDecryptor;
+ this.pageBlockDecryptor = pageBlockDecryptor;
+ this.aadPrefix = aadPrefix;
+ this.rowGroupOrdinal = rowGroupOrdinal;
+ this.columnOrdinal = columnOrdinal;
+ this.decompressor = decompressor;
+
+ this.type = getFileMetaData().getSchema()
+ .getType(chunk.descriptor.col.getPath()).asPrimitiveType();
+
+ if (null != headerBlockDecryptor) {
+ dataPageHeaderAAD = AesCipher.createModuleAAD(aadPrefix, ModuleType.DataPageHeader,
+ rowGroupOrdinal,
+ columnOrdinal, chunk.getPageOrdinal(dataPageCountReadSoFar));
+ }
+ if (null != pageBlockDecryptor) {
+ dataPageAAD = AesCipher.createModuleAAD(aadPrefix, ModuleType.DataPage, rowGroupOrdinal,
+ columnOrdinal, 0);
+ dictionaryPageAAD = AesCipher.createModuleAAD(aadPrefix, ModuleType.DictionaryPage,
Review Comment:
sure
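
For context, the metrics fields in the diff above (totalTimeReadOnePage, maxTimeBlockedPagesInChunk, and friends) follow the standard java.util.concurrent counter pattern: LongAdder accumulates totals cheaply under write contention, and LongAccumulator(Long::max, 0L) keeps a running maximum. A minimal, self-contained sketch of that pattern, using hypothetical names rather than the PR's code:

    import java.util.concurrent.atomic.LongAccumulator;
    import java.util.concurrent.atomic.LongAdder;

    // Sketch of the counter pattern used above; class and method names are hypothetical.
    public class ReadMetrics {
      private final LongAdder totalTimeNanos = new LongAdder();   // sum of all samples
      private final LongAdder totalCount = new LongAdder();       // number of samples
      private final LongAccumulator maxTimeNanos =
          new LongAccumulator(Long::max, 0L);                     // largest single sample

      // Record one timed operation; safe to call from many threads without locks.
      public void record(long elapsedNanos) {
        totalTimeNanos.add(elapsedNanos);
        totalCount.increment();
        maxTimeNanos.accumulate(elapsedNanos);
      }

      public long maxNanos() { return maxTimeNanos.get(); }

      public long averageNanos() {
        long n = totalCount.sum();
        return n == 0 ? 0L : totalTimeNanos.sum() / n;
      }
    }

Calling record(elapsed) from each read path keeps totals, counts, and the maximum updated without any synchronized blocks, which matters on the hot read path.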
> Implement async IO for Parquet file reader
> ------------------------------------------
>
> Key: PARQUET-2149
> URL: https://issues.apache.org/jira/browse/PARQUET-2149
> Project: Parquet
> Issue Type: Improvement
> Components: parquet-mr
> Reporter: Parth Chandra
> Priority: Major
>
> ParquetFileReader's implementation has the following flow (simplified):
> - For every column -> read from storage in 8MB blocks -> read all
> uncompressed pages into an output queue
> - From output queues -> (downstream) decompression + decoding
> This flow is serialized, which means that downstream threads are blocked
> until the data has been read. Because a large part of the time is spent
> waiting for data from storage, threads sit idle and CPU utilization is
> low.
> There is no reason why this cannot be made asynchronous _and_ parallel.
> So, for column _i_: read one chunk from storage until the end ->
> intermediate output queue -> read one uncompressed page until the end ->
> output queue -> (downstream) decompression + decoding
> Note that this can be made completely self-contained in ParquetFileReader,
> and downstream implementations like Iceberg and Spark will automatically
> be able to take advantage without code changes, as long as the
> ParquetFileReader APIs are not changed.
> In past work with async I/O ([Drill - async page
> reader|https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java]),
> I have seen a 2x-3x improvement in reading speed for Parquet files.
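
The flow described in the quoted issue is essentially a bounded producer/consumer handoff per column chunk, which matches the LinkedBlockingDeque<Optional<DataPage>> used by PageReader in the diff above. A minimal, self-contained sketch of that shape, with hypothetical names and a byte[] standing in for a page (an illustration of the technique, not the PR's implementation):

    import java.util.Optional;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingDeque;

    public class AsyncColumnPipeline implements AutoCloseable {
      // Optional.empty() is the end-of-chunk sentinel, as in the diff above.
      // Capacity 64 is an arbitrary bound chosen for this sketch.
      private final BlockingQueue<Optional<byte[]>> pages = new LinkedBlockingDeque<>(64);
      private final ExecutorService ioPool = Executors.newSingleThreadExecutor();

      // Producer: pull pages off storage on a dedicated I/O thread.
      public void startReading(Iterable<byte[]> storagePages) {
        ioPool.submit(() -> {
          try {
            for (byte[] page : storagePages) {
              pages.put(Optional.of(page));  // blocks when the queue is full
            }
            pages.put(Optional.empty());     // signal end of the chunk
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        });
      }

      // Consumer: decompression + decoding now overlap with the reads above.
      public void consume() throws InterruptedException {
        Optional<byte[]> next;
        while ((next = pages.take()).isPresent()) {
          decode(next.get());
        }
      }

      private void decode(byte[] compressedPage) {
        // downstream decompression + decoding would happen here
      }

      @Override
      public void close() {
        ioPool.shutdown();
      }
    }

Bounding the queue provides backpressure: when decoding is the bottleneck, the I/O thread blocks on put() instead of buffering without limit. (The pagesInChunk field in the diff itself is constructed with the no-arg, effectively unbounded, constructor.)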
--
This message was sent by Atlassian Jira
(v8.20.7#820007)