pvary commented on code in PR #3204:
URL: https://github.com/apache/hive/pull/3204#discussion_r865029906
##########
iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergV2.java:
##########
@@ -310,6 +310,39 @@ public void testDeleteStatementWithOtherTable() {
HiveIcebergTestUtils.valueForRow(HiveIcebergStorageHandlerTestUtils.CUSTOMER_SCHEMA,
objects), 0);
}
+ @Test
+ public void testUpdateStatementUnpartitioned() {
+    Assume.assumeFalse("Iceberg UPDATEs are only implemented for non-vectorized mode for now", isVectorized);
Review Comment:
Yeah, the read path is still non-vectorized, as we need to handle the delete
files in a vectorized and efficient way.
IIRC the Spark implementation read the delete files into a
`Roaring64Bitmap` and filtered the deleted rows out afterwards, while the data
files themselves were read in a vectorized way.
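
For illustration, a minimal sketch of that idea (hypothetical names such as
`buildDeleteBitmap` and `filterDeletedRows`; this is not the actual Spark or Hive
code path, just the shape of the approach):

```java
import org.roaringbitmap.longlong.Roaring64Bitmap;

import java.util.ArrayList;
import java.util.List;

/** Sketch: collect deleted row positions into a bitmap, then drop them after a vectorized read. */
public class PositionDeleteFilterSketch {

  /** Collect deleted row positions (e.g. parsed from position-delete files) into a bitmap. */
  static Roaring64Bitmap buildDeleteBitmap(long[] deletedPositions) {
    Roaring64Bitmap deleted = new Roaring64Bitmap();
    for (long pos : deletedPositions) {
      deleted.addLong(pos);
    }
    return deleted;
  }

  /**
   * After a batch of rows has been read in a vectorized way, keep only the rows whose
   * file position is not present in the delete bitmap.
   */
  static List<Object[]> filterDeletedRows(List<Object[]> batch, long firstRowPosition, Roaring64Bitmap deleted) {
    List<Object[]> kept = new ArrayList<>();
    for (int i = 0; i < batch.size(); i++) {
      long rowPosition = firstRowPosition + i;
      if (!deleted.contains(rowPosition)) {
        kept.add(batch.get(i));
      }
    }
    return kept;
  }

  public static void main(String[] args) {
    Roaring64Bitmap deleted = buildDeleteBitmap(new long[] {1L, 3L});
    List<Object[]> batch = new ArrayList<>();
    for (int i = 0; i < 5; i++) {
      batch.add(new Object[] {"row-" + i});
    }
    // Rows at positions 1 and 3 are filtered out, leaving rows 0, 2 and 4.
    System.out.println(filterDeletedRows(batch, 0L, deleted).size()); // prints 3
  }
}
```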