gszadovszky commented on a change in pull request #868:
URL: https://github.com/apache/parquet-mr/pull/868#discussion_r580868682



##########
File path: parquet-hadoop/src/main/java/org/apache/parquet/hadoop/Offsets.java
##########
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.parquet.hadoop;
+
+import java.io.IOException;
+
+import org.apache.parquet.format.PageHeader;
+import org.apache.parquet.format.Util;
+import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
+import org.apache.parquet.io.SeekableInputStream;
+
+/**
+ * Class to help gather/calculate the proper values of the dictionary/first data page offset values in a column chunk.
+ */
+class Offsets {
+
+  /**
+   * Returns the offset values for the column chunk to be written.
+   *
+   * @param input         the source input stream of the column chunk
+   * @param chunk         the column chunk metadata read from the source file
+   * @param newChunkStart the position of the column chunk to be written
+   * @return the offset values
+   * @throws IOException if any I/O error occurs during the reading of the input stream
+   */
+  public static Offsets getOffsets(SeekableInputStream input, ColumnChunkMetaData chunk, long newChunkStart)
+      throws IOException {
+    long firstDataPageOffset;
+    long dictionaryPageOffset;
+    if (chunk.hasDictionaryPage()) {
+      long dictionaryPageSize = chunk.getFirstDataPageOffset() - chunk.getDictionaryPageOffset();
+      if (dictionaryPageSize <= 0) {
+        // The offset values might be invalid so we have to read the size of the dictionary page
+        dictionaryPageSize = readDictionaryPageSize(input, newChunkStart);
+      }
+      firstDataPageOffset = newChunkStart + dictionaryPageSize;
+      dictionaryPageOffset = newChunkStart;
+    } else {
+      firstDataPageOffset = newChunkStart;
+      dictionaryPageOffset = 0;
+    }
+    return new Offsets(firstDataPageOffset, dictionaryPageOffset);
+  }
+
+  private static long readDictionaryPageSize(SeekableInputStream in, long pos) throws IOException {
+    long origPos = -1;
+    try {
+      origPos = in.getPos();
+      // TODO: Do we need to handle encryption here?

Review comment:
       We do not have such a field in the thrift file currently, and even if we added it, it would not solve the issue for already existing files. So, I guess you've thought about adding an additional field to the parquet-mr side object only, to be set manually. To set it we would need to read the dictionary, which we do not do in the tools that currently use this functionality (masking/pruning columns). But the whole thing is not a problem currently, because we do not support encryption/decryption in the tools anyway.
   
   I am not sure whether we want to support encryption in the tools later, or whether it is even feasible, but for now the code is correct as is. I'll remove the TODO line and try to document the situation in the class header.
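   
   For context on the page reading discussed here, below is a minimal sketch of how the truncated readDictionaryPageSize helper could determine the dictionary page size. It assumes the page header is stored unencrypted, so it can be read directly with Util.readPageHeader, and that the size is the serialized header length plus the compressed page size recorded in the header; the imports are the ones already shown in the diff, and the exact body may differ from the actual patch.
   
     private static long readDictionaryPageSize(SeekableInputStream in, long pos) throws IOException {
       long origPos = -1;
       try {
         origPos = in.getPos();
         // Seek to the dictionary page (the first page of the column chunk)
         in.seek(pos);
         long headerStart = in.getPos();
         // Read the thrift-serialized page header of the dictionary page
         PageHeader header = Util.readPageHeader(in);
         long headerSize = in.getPos() - headerStart;
         // Dictionary page size = serialized header bytes + compressed page body
         return headerSize + header.getCompressed_page_size();
       } finally {
         if (origPos != -1) {
           // Restore the original position of the stream
           in.seek(origPos);
         }
       }
     }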




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

