RussellSpitzer commented on a change in pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#discussion_r660208053



##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/VectorizedPageIterator.java
##########
@@ -512,6 +512,9 @@ protected void initDataReader(Encoding dataEncoding, ByteBufferInputStream in, i
         throw new ParquetDecodingException("could not read page in col " + desc, e);
       }
     } else {
+      if (dataEncoding != Encoding.PLAIN) {
+        throw new UnsupportedOperationException("Unsupported encoding: " + dataEncoding);

Review comment:
       I would like to emphasize that a user can fall back to non-vectorized reads to handle this file, so maybe something like:
   
   "Cannot perform a vectorized read of a Parquet V2 file with encoding %s; disable vectorized reading with $param to read this table/file"
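   
   A minimal sketch of what that check could look like, assuming it slots into the existing `else` branch of `initDataReader` ("$param" here is only the placeholder from the message above, not a real config key):
   
   ```java
   if (dataEncoding != Encoding.PLAIN) {
     // "$param" is a placeholder for whichever read option controls vectorization.
     throw new UnsupportedOperationException(String.format(
         "Cannot perform a vectorized read of a Parquet V2 file with encoding %s; "
             + "disable vectorized reading with $param to read this table/file",
         dataEncoding));
   }
   ```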

##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/Parquet.java
##########
@@ -217,6 +222,7 @@ WriteBuilder withWriterVersion(WriterVersion version) {
       String compressionLevel = config.getOrDefault(
           PARQUET_COMPRESSION_LEVEL, PARQUET_COMPRESSION_LEVEL_DEFAULT);
 
+

Review comment:
       nit: added whitespace

##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/Parquet.java
##########
@@ -170,6 +170,11 @@ public WriteBuilder overwrite(boolean enabled) {
       return this;
     }
 
+    public WriteBuilder writerVersion(WriterVersion version) {

Review comment:
       Is this mostly for testing, or is it something we want folks to be using in general? Just wondering if this should be public.
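   
   If it is meant for general use, callers would presumably end up writing something like the sketch below (illustrative only; the schema/output-file setup is just a stand-in for the usual write-builder chain):
   
   ```java
   // Sketch: opting into the Parquet V2 writer through the public builder.
   Parquet.WriteBuilder builder = Parquet.write(outputFile)  // outputFile: an org.apache.iceberg.io.OutputFile
       .schema(schema)                                       // schema: the Iceberg write schema
       .writerVersion(WriterVersion.PARQUET_2_0);            // WriterVersion from Parquet's ParquetProperties
   ```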

##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/BaseVectorizedParquetValuesReader.java
##########
@@ -80,17 +80,23 @@ public BaseVectorizedParquetValuesReader(int maxDefLevel, boolean setValidityVec
     this.setArrowValidityVector = setValidityVector;
   }
 
-  public BaseVectorizedParquetValuesReader(
-      int bitWidth,
-      int maxDefLevel,
-      boolean setValidityVector) {
+  public BaseVectorizedParquetValuesReader(int bitWidth, int maxDefLevel, boolean setValidityVector) {
     this.fixedWidth = true;
     this.readLength = bitWidth != 0;
     this.maxDefLevel = maxDefLevel;
     this.setArrowValidityVector = setValidityVector;
     init(bitWidth);
   }
 
+  public BaseVectorizedParquetValuesReader(int bitWidth, int maxDefLevel, boolean readLength,
+                                           boolean setValidityVector) {
+    this.fixedWidth = true;
+    this.readLength = readLength;

Review comment:
       It seems a little strange to me that we have this constructor, which we only use when readLength is false. Perhaps we should swap the original constructor's code to call this constructor?
   
   ```java
     public BaseVectorizedParquetValuesReader(int bitWidth, int maxDefLevel, boolean setValidityVector) {
       this(bitWidth, maxDefLevel, bitWidth != 0, setValidityVector);
     }
   ```

##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/BasePageIterator.java
##########
@@ -77,7 +77,8 @@ protected void reset() {
   protected abstract void initDefinitionLevelsReader(DataPageV1 dataPageV1, ColumnDescriptor descriptor,
                                                      ByteBufferInputStream in, int count) throws IOException;
 
-  protected abstract void initDefinitionLevelsReader(DataPageV2 dataPageV2, ColumnDescriptor descriptor);
+  protected abstract void initDefinitionLevelsReader(DataPageV2 dataPageV2, ColumnDescriptor descriptor)
+          throws IOException;

Review comment:
       I didn't see where the IOException can get thrown. Is this just to match the V1 reader?

##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/VectorizedPageIterator.java
##########
@@ -512,6 +512,9 @@ protected void initDataReader(Encoding dataEncoding, ByteBufferInputStream in, i
         throw new ParquetDecodingException("could not read page in col " + desc, e);
       }
     } else {
+      if (dataEncoding != Encoding.PLAIN) {
+        throw new UnsupportedOperationException("Unsupported encoding: " + dataEncoding);

Review comment:
       Actually, since we may get users who read some columns successfully but fail on others, we should probably also be specific about which column failed in the error message, just so someone doesn't end up saying:
   "When I do this projection it works, but when I do this projection it doesn't."

##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/VectorizedPageIterator.java
##########
@@ -512,6 +512,9 @@ protected void initDataReader(Encoding dataEncoding, ByteBufferInputStream in, i
         throw new ParquetDecodingException("could not read page in col " + desc, e);
       }
     } else {
+      if (dataEncoding != Encoding.PLAIN) {
+        throw new UnsupportedOperationException("Unsupported encoding: " + dataEncoding);

Review comment:
       Sounds good to me. I do know that most of the time we style errors as "Cannot X", but I think your content suggestion for the error message is solid. I would just add that prefix so it fits with the other messages.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


