rdblue commented on a change in pull request #1636:
URL: https://github.com/apache/iceberg/pull/1636#discussion_r511099333
##########
File path: flink/src/main/java/org/apache/iceberg/flink/data/FlinkOrcWriters.java
##########
@@ -245,8 +245,19 @@ public void nonNullWrite(int rowId, ArrayData data, ColumnVector output) {
     ListColumnVector cv = (ListColumnVector) output;
     cv.lengths[rowId] = data.size();
     cv.offsets[rowId] = cv.childCount;
-    cv.childCount += cv.lengths[rowId];
+    // cv.childCount is for some reason an int, which generates all of these
+    // NarrowingCompoundAssignment warnings. Although this does nothing to prevent
+    // overflow from adding too many values to cv.childCount because `ListColumnVector`
+    // comes from package `org.apache.orc.storage.ql.exec.vector`, it at least removes
Review comment:
I think the solution of using the int source data makes sense. We probably don't need the long comment, since the code makes it clear that this is correct.
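For reference, the comment-free version might look roughly like this (a sketch only; `numChildren` is an arbitrary local name, and the element-writing loop that follows in the real method is elided):

```java
public void nonNullWrite(int rowId, ArrayData data, ColumnVector output) {
  ListColumnVector cv = (ListColumnVector) output;
  int numChildren = data.size();      // ArrayData.size() returns int
  cv.lengths[rowId] = numChildren;    // long[] field, int widens implicitly
  cv.offsets[rowId] = cv.childCount;  // long[] field, int widens implicitly
  cv.childCount += numChildren;       // int += int, no narrowing warning
  cv.child.ensureSize(cv.childCount, true);
  // ... write the individual child elements (elided) ...
}
```

Since both `lengths` and `offsets` are `long[]` fields, assigning an `int` into them widens safely; only the compound assignment into the `int` field `childCount` triggered the warning in the first place.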
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]