openinx commented on a change in pull request #2935:
URL: https://github.com/apache/iceberg/pull/2935#discussion_r715377980
##########
File path: orc/src/main/java/org/apache/iceberg/data/orc/GenericOrcWriters.java
##########
@@ -531,6 +543,32 @@ public void nonNullWrite(int rowId, Map<K, V> map, ColumnVector output) {
}
}
+ private static class PositionDeleteWriter implements OrcRowWriter<PositionDelete<Record>> {
Review comment:
`PositionDelete<Record>`? We should not hard-code the concrete `Record` data
type here, because when integrating with the Spark or Flink engines, the record
to write could be Spark's `InternalRow` or Flink's `RowData`. If this
`PositionDeleteWriter` is limited to writing the `Record` data type, then
Spark's and Flink's row types would first have to be converted to `Record`.
The correct way is to use a generic data type for `PositionDeleteWriter`, so
that it can accept `Record`, `InternalRow`, `RowData`, etc.
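A minimal sketch of the generic approach the comment suggests. Note that
`RowWriter` and the `PositionDelete` class below are simplified stand-ins for
illustration, not Iceberg's real `OrcRowWriter`/`PositionDelete` APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class Demo {

  // Simplified stand-in for Iceberg's PositionDelete: a file path, a row
  // position, and an optional deleted row of engine-specific type T.
  static class PositionDelete<T> {
    final String path;
    final long pos;
    final T row;

    PositionDelete(String path, long pos, T row) {
      this.path = path;
      this.pos = pos;
      this.row = row;
    }
  }

  // Simplified stand-in for the OrcRowWriter interface.
  interface RowWriter<T> {
    void write(T value);
  }

  // Generic in the row type T, so one writer class can serve Record,
  // Spark's InternalRow, Flink's RowData, etc. without conversion.
  static class PositionDeleteWriter<T> implements RowWriter<PositionDelete<T>> {
    final List<String> written = new ArrayList<>();

    @Override
    public void write(PositionDelete<T> delete) {
      // A real writer would emit ORC column vectors; here we just record
      // the (path, pos) pair to show the generic plumbing.
      written.add(delete.path + ":" + delete.pos);
    }
  }

  public static void main(String[] args) {
    // T is String here purely for the demo; with the real classes it could
    // be GenericRecord, InternalRow, or RowData.
    PositionDeleteWriter<String> writer = new PositionDeleteWriter<>();
    writer.write(new PositionDelete<>("data/file-a.orc", 7L, null));
    System.out.println(writer.written.get(0)); // prints "data/file-a.orc:7"
  }
}
```

Because the type parameter only flows through `PositionDelete<T>`, no cast or
conversion to `Record` is ever needed at the engine integration layer.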
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]