guiyanakuang commented on code in PR #1244:
URL: https://github.com/apache/orc/pull/1244#discussion_r969189953
##########
java/core/src/java/org/apache/orc/impl/ConvertTreeReaderFactory.java:
##########
@@ -554,6 +554,7 @@ public void nextVector(ColumnVector previousVector,
longColVector = (LongColumnVector) previousVector;
} else {
decimalColVector.ensureSize(batchSize, false);
+ decimalColVector.reset();
Review Comment:
Should you revert the changes to `ConvertTreeReaderFactory.java`?
There is now code that checks and sets `isRepeating`.
After reverting, decimal will be handled the same way as the other types.
##########
java/core/src/java/org/apache/orc/impl/TreeReaderFactory.java:
##########
@@ -1551,17 +1551,25 @@ private void nextVector(DecimalColumnVector result,
HiveDecimalWritable[] vector = result.vector;
HiveDecimalWritable decWritable;
if (result.noNulls) {
- for (int r=0; r < batchSize; ++r) {
+ result.isRepeating = true;
+ for (int r = 0; r < batchSize; ++r) {
decWritable = vector[r];
if (!decWritable.serializationUtilsRead(
valueStream, scratchScaleVector[r],
scratchBytes)) {
result.isNull[r] = true;
result.noNulls = false;
}
+ if (result.isRepeating
Review Comment:
Would it be better to extract this check into a common method?
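For illustration, such a shared helper might look like the standalone sketch below. It uses plain arrays instead of ORC's `DecimalColumnVector`, and the class and method names (`RepeatingCheck`, `isRepeating`) are invented for this sketch, not taken from the PR:

```java
import java.util.Objects;

// Standalone sketch of a shared "check whether the batch is repeating" helper.
// Types are simplified stand-ins for ORC's column-vector classes.
public class RepeatingCheck {

    // Returns true when every entry (null flag and value) matches entry 0,
    // i.e. the batch could be collapsed to a single repeating value.
    static boolean isRepeating(Object[] values, boolean[] isNull, int batchSize) {
        for (int r = 1; r < batchSize; r++) {
            if (isNull[r] != isNull[0]) {
                return false;  // null pattern differs from the first entry
            }
            if (!isNull[r] && !Objects.equals(values[r], values[0])) {
                return false;  // non-null value differs from the first entry
            }
        }
        return true;
    }

    public static void main(String[] args) {
        boolean[] noNulls = {false, false, false};
        System.out.println(isRepeating(new Object[]{"1.5", "1.5", "1.5"}, noNulls, 3)); // true
        System.out.println(isRepeating(new Object[]{"1.5", "2.5", "1.5"}, noNulls, 3)); // false
    }
}
```

A helper like this would let each reader set `result.isRepeating` in one place instead of duplicating the loop-body comparison per type.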
##########
java/core/src/test/org/apache/orc/impl/TestConvertTreeReaderFactory.java:
##########
@@ -639,4 +641,94 @@ private void testConvertToDateIncreasingSize() throws Exception {
private void testConvertToBinaryIncreasingSize() throws Exception {
readORCFileIncreasingBatchSize("binary", BytesColumnVector.class);
}
+
+ @Test
+ public void testDecimalConvertInNullStripe() throws Exception {
+ try {
+ Configuration decimalConf = new Configuration(conf);
+ decimalConf.set(OrcConf.STRIPE_ROW_COUNT.getAttribute(), "1024");
+ decimalConf.set(OrcConf.ROWS_BETWEEN_CHECKS.getAttribute(), "1");
+
+ String typeStr = "decimal(5,1)";
+ TypeDescription schema = TypeDescription.fromString("struct<col1:" + typeStr + ">");
+ Writer w = OrcFile.createWriter(testFilePath, OrcFile.writerOptions(decimalConf).setSchema(schema));
+
+ VectorizedRowBatch b = schema.createRowBatch();
+ DecimalColumnVector f1 = (DecimalColumnVector) b.cols[0];
+ f1.isRepeating = true;
+ f1.set(0, (HiveDecimal) null);
+ b.size = 1024;
+ w.addRowBatch(b);
+ b.reset();
+ for (int i = 0; i < 1024; i++) {
+ f1.set(i, HiveDecimal.create(i + 1));
+ }
+ b.size = 1024;
+ w.addRowBatch(b);
Review Comment:
Can we add a third batch of repeated data, so that we test both the repeated-to-non-repeated and non-repeated-to-repeated transitions?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]