ajantha-bhat commented on issue #4639: URL: https://github.com/apache/iceberg/issues/4639#issuecomment-1110434440
It looks like the Java API performed a row-level delete (a merge-on-read delete from the V2 spec) by writing a new delete file. Spark 3.1 SQL only supports copy-on-write update/delete, so it cannot read those delete files.

**Workaround:**
1. Use a copy-on-write (V1-style) delete from the Java API, i.e. rewrite the affected data files instead of writing delete files, so that Spark 3.1 can read the table.
2. Or use Spark 3.2, which can read merge-on-read deletes once the table is configured with the corresponding table property.
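
For option 2, here is a minimal sketch of setting the delete mode from the Java API. It assumes the table already exists in a catalog, is on format version 2, and that Iceberg 0.13+ is on the classpath; the warehouse path and table identifier below are placeholders, not values from this issue:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;

public class EnableMergeOnReadDeletes {
  public static void main(String[] args) {
    // Placeholder warehouse location and table name -- replace with your own.
    HadoopCatalog catalog = new HadoopCatalog(new Configuration(), "hdfs://nn:8020/warehouse");
    Table table = catalog.loadTable(TableIdentifier.of("db", "tbl"));

    // "write.delete.mode" controls whether row-level deletes rewrite the data files
    // (copy-on-write) or write delete files (merge-on-read). Spark 3.2 can read the
    // resulting delete files; Spark 3.1 cannot.
    table.updateProperties()
        .set("write.delete.mode", "merge-on-read")
        .commit();
  }
}
```

The same property can also be set from Spark 3.2 SQL with `ALTER TABLE ... SET TBLPROPERTIES ('write.delete.mode'='merge-on-read')`.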
