samarthjain commented on a change in pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#discussion_r660291702
##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/Parquet.java
##########
@@ -170,6 +170,11 @@ public WriteBuilder overwrite(boolean enabled) {
      return this;
    }
+    public WriteBuilder writerVersion(WriterVersion version) {
Review comment:
This gives users a way to create Parquet files with different format versions. It is currently only used for testing, but I don't see any harm in leaving it public.
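For context, a rough usage sketch (my own illustration, not from this PR; it assumes the generic writer path, and `outputFile`, `schema`, and `records` are placeholders):
```
import org.apache.iceberg.data.Record;
import org.apache.iceberg.data.parquet.GenericParquetWriter;
import org.apache.iceberg.io.FileAppender;
import org.apache.iceberg.parquet.Parquet;
import org.apache.parquet.column.ParquetProperties;

// Illustrative only: write an Iceberg data file with the Parquet v2 writer.
FileAppender<Record> appender = Parquet.write(outputFile)
    .schema(schema)
    .writerVersion(ParquetProperties.WriterVersion.PARQUET_2_0)
    .createWriterFunc(GenericParquetWriter::buildWriter)
    .build();
try {
  appender.addAll(records);
} finally {
  appender.close();
}
```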
##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/BaseVectorizedParquetValuesReader.java
##########
@@ -80,17 +80,23 @@ public BaseVectorizedParquetValuesReader(int maxDefLevel, boolean setValidityVec
    this.setArrowValidityVector = setValidityVector;
  }
-  public BaseVectorizedParquetValuesReader(
-      int bitWidth,
-      int maxDefLevel,
-      boolean setValidityVector) {
+  public BaseVectorizedParquetValuesReader(int bitWidth, int maxDefLevel, boolean setValidityVector) {
    this.fixedWidth = true;
    this.readLength = bitWidth != 0;
    this.maxDefLevel = maxDefLevel;
    this.setArrowValidityVector = setValidityVector;
    init(bitWidth);
  }
+  public BaseVectorizedParquetValuesReader(int bitWidth, int maxDefLevel, boolean readLength,
+                                           boolean setValidityVector) {
+    this.fixedWidth = true;
+    this.readLength = readLength;
Review comment:
Thanks! Good suggestion.
##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/BasePageIterator.java
##########
@@ -77,7 +77,8 @@ protected void reset() {
  protected abstract void initDefinitionLevelsReader(DataPageV1 dataPageV1,
                                                     ColumnDescriptor descriptor,
                                                     ByteBufferInputStream in,
                                                     int count) throws IOException;
-  protected abstract void initDefinitionLevelsReader(DataPageV2 dataPageV2, ColumnDescriptor descriptor);
+  protected abstract void initDefinitionLevelsReader(DataPageV2 dataPageV2, ColumnDescriptor descriptor)
+      throws IOException;
Review comment:
Calling `dataPageV2.getDefinitionLevels().toInputStream()` below throws an `IOException`:
```
@Override
protected void initDefinitionLevelsReader(DataPageV2 dataPageV2, ColumnDescriptor desc) throws IOException {
  int bitWidth = BytesUtils.getWidthFromMaxInt(desc.getMaxDefinitionLevel());
  // do not read the length from the stream. v2 pages handle dividing the page bytes.
  this.vectorizedDefinitionLevelReader = new VectorizedParquetDefinitionLevelReader(bitWidth,
      desc.getMaxDefinitionLevel(), false, setArrowValidityVector);
  this.vectorizedDefinitionLevelReader.initFromPage(dataPageV2.getValueCount(),
      dataPageV2.getDefinitionLevels().toInputStream());
}
```
##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/VectorizedPageIterator.java
##########
@@ -512,6 +512,9 @@ protected void initDataReader(Encoding dataEncoding, ByteBufferInputStream in, i
        throw new ParquetDecodingException("could not read page in col " + desc, e);
      }
    } else {
+      if (dataEncoding != Encoding.PLAIN) {
+        throw new UnsupportedOperationException("Unsupported encoding: " + dataEncoding);
Review comment:
I like the idea of specifying the column name and being more descriptive about why we are failing. However, there are several ways to disable vectorization (table properties, Spark session properties, etc.), so the message doesn't point to a specific one. For now, I am going with something like this:
```
if (dataEncoding != Encoding.PLAIN) {
  throw new UnsupportedOperationException("Vectorized reads are not supported for column " + desc +
      " with encoding " + dataEncoding + ". Disable vectorized reads to read this table/file");
}
```
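As a footnote, here is a rough sketch of the table-property route. The key name `read.parquet.vectorization.enabled` is from memory, so double-check `TableProperties` in your Iceberg version:
```
// Assumed table property key; verify against TableProperties before
// relying on it. `table` is an org.apache.iceberg.Table instance.
table.updateProperties()
    .set("read.parquet.vectorization.enabled", "false")
    .commit();
```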