[ 
https://issues.apache.org/jira/browse/PARQUET-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17746094#comment-17746094
 ] 

ASF GitHub Bot commented on PARQUET-1629:
-----------------------------------------

mapleFU commented on code in PR #1044:
URL: https://github.com/apache/parquet-mr/pull/1044#discussion_r1271472674


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1627,23 +1627,36 @@ public ColumnChunkPageReader readAllPages(BlockCipher.Decryptor headerBlockDecry
            break;
          case DATA_PAGE_V2:
            DataPageHeaderV2 dataHeaderV2 = pageHeader.getData_page_header_v2();
-            int dataSize = compressedPageSize - dataHeaderV2.getRepetition_levels_byte_length() - dataHeaderV2.getDefinition_levels_byte_length();
-            pagesInChunk.add(
-                new DataPageV2(
-                    dataHeaderV2.getNum_rows(),
-                    dataHeaderV2.getNum_nulls(),
-                    dataHeaderV2.getNum_values(),
-                    this.readAsBytesInput(dataHeaderV2.getRepetition_levels_byte_length()),
-                    this.readAsBytesInput(dataHeaderV2.getDefinition_levels_byte_length()),
-                    converter.getEncoding(dataHeaderV2.getEncoding()),
-                    this.readAsBytesInput(dataSize),
-                    uncompressedPageSize,
-                    converter.fromParquetStatistics(
-                        getFileMetaData().getCreatedBy(),
-                        dataHeaderV2.getStatistics(),
-                        type),
-                    dataHeaderV2.isIs_compressed()
-                    ));
+            int dataSize = compressedPageSize - dataHeaderV2.getRepetition_levels_byte_length() -
+              dataHeaderV2.getDefinition_levels_byte_length();
+            final BytesInput repetitionLevels = this.readAsBytesInput(dataHeaderV2.getRepetition_levels_byte_length());
+            final BytesInput definitionLevels = this.readAsBytesInput(dataHeaderV2.getDefinition_levels_byte_length());
+            final BytesInput values = this.readAsBytesInput(dataSize);
+            if (options.usePageChecksumVerification() && pageHeader.isSetCrc()) {
+              pageBytes = BytesInput.concat(repetitionLevels, definitionLevels, values);

Review Comment:
   Does this concatenate the three parts and then compute the CRC over the result? Could it accumulate the CRC incrementally instead and avoid concatenating these buffers?
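
   For reference, java.util.zip.CRC32 is stream-oriented: feeding the three sections to successive update() calls yields the same value as checksumming their concatenation, so in principle the copy made by BytesInput.concat could be skipped. A minimal sketch of that equivalence (the byte arrays are placeholder data, not actual page contents):

```java
import java.util.zip.CRC32;

public class CrcAccumulationSketch {
  public static void main(String[] args) {
    // Placeholder stand-ins for the three sections of a V2 data page:
    // repetition levels, definition levels, and the (compressed) values.
    byte[] repetitionLevels = {1, 2, 3};
    byte[] definitionLevels = {4, 5};
    byte[] values = {6, 7, 8, 9};

    // Variant 1: concatenate first, then checksum the whole buffer
    // (what BytesInput.concat followed by a single CRC pass amounts to).
    byte[] all = new byte[repetitionLevels.length + definitionLevels.length + values.length];
    System.arraycopy(repetitionLevels, 0, all, 0, repetitionLevels.length);
    System.arraycopy(definitionLevels, 0, all, repetitionLevels.length, definitionLevels.length);
    System.arraycopy(values, 0, all, repetitionLevels.length + definitionLevels.length, values.length);
    CRC32 concatThenCrc = new CRC32();
    concatThenCrc.update(all);

    // Variant 2: accumulate the CRC across the sections, no extra buffer.
    CRC32 accumulated = new CRC32();
    accumulated.update(repetitionLevels);
    accumulated.update(definitionLevels);
    accumulated.update(values);

    // CRC32 is defined over the byte stream, so both agree.
    System.out.println(concatThenCrc.getValue() == accumulated.getValue()); // prints: true
  }
}
```

   Whether accumulating is worthwhile here depends on whether the reader needs the concatenated pageBytes buffer later anyway (e.g. for decompression); if so, the concat-then-checksum approach costs no extra copy.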





> Page-level CRC checksum verification for DataPageV2
> ---------------------------------------------------
>
>                 Key: PARQUET-1629
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1629
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-mr
>            Reporter: Boudewijn Braams
>            Assignee: Gang Wu
>            Priority: Major
>
> In https://jira.apache.org/jira/browse/PARQUET-1580 (GitHub PR:
> https://github.com/apache/parquet-mr/pull/647) we implemented page-level CRC
> checksum verification for DataPageV1. As a follow-up, we should add support
> for DataPageV2, following the spec (see
> https://jira.apache.org/jira/browse/PARQUET-1539).
> What needs to be done:
> * Add writing out checksums for DataPageV2
> * Add checksum verification for DataPageV2
> * Create new test suite
> * Create new benchmarks
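
For orientation, the DataPageV1 work in PARQUET-1580 exposed page checksums through two Hadoop configuration properties, one per side. The sketch below assumes those property names and the HadoopReadOptions builder carry over unchanged to the V2 support; it is an illustration, not confirmed against this PR:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.HadoopReadOptions;
import org.apache.parquet.ParquetReadOptions;

public class ChecksumToggleSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Writer side: emit a CRC32 in each page header
    // (property name introduced for DataPageV1 in PARQUET-1580).
    conf.setBoolean("parquet.page.write-checksum.enabled", true);

    // Reader side: verify the stored CRC while reading pages; with the
    // change discussed above this would cover DataPageV2 as well.
    conf.setBoolean("parquet.page.verify-checksum.enabled", true);

    // The reader consults the flag via ParquetReadOptions, as seen in the
    // diff's options.usePageChecksumVerification() call.
    ParquetReadOptions options = HadoopReadOptions.builder(conf).build();
    System.out.println(options.usePageChecksumVerification()); // prints: true
  }
}
```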



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
