mehtaashish23 opened a new issue #1511: URL: https://github.com/apache/iceberg/issues/1511
Getting the exception below while using UPSERT. The scenario below reproduces a case where reading the table fails because the same delete files are accumulated multiple times when they are referenced by different ScanTasks:

1. Create a table with a schema, say `f1,f2`.
2. Add a row to it.
3. Delete that row.
4. Add the same row again (steps 3 and 4 together form the UPSERT).
5. Delete the row again (this is a plain DELETE).
6. Read the table with `spark.read`; you will get the exception below.

GIST to reproduce: https://gist.github.com/mehtaashish23/47dc5404e6344b760f7bf05214dd5a04

```
Caused by: java.lang.IllegalArgumentException: Multiple entries with same key: /tools/tables/c05f4674-e2c3-4292-9a3a-4c4b378aca61-master/data/delete-file3.parquet=/tools/tables/c05f4674-e2c3-4292-9a3a-4c4b378aca61-master/data/delete-file3.parquet and /tools/tables/c05f4674-e2c3-4292-9a3a-4c4b378aca61-master/data/delete-file3.parquet=/tools/tables/c05f4674-e2c3-4292-9a3a-4c4b378aca61-master/data/delete-file3.parquet
	at org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap.conflictException(ImmutableMap.java:214)
	at org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap.checkNoConflict(ImmutableMap.java:208)
	at org.apache.iceberg.relocated.com.google.common.collect.RegularImmutableMap.checkNoConflictInKeyBucket(RegularImmutableMap.java:146)
	at org.apache.iceberg.relocated.com.google.common.collect.RegularImmutableMap.fromEntryArray(RegularImmutableMap.java:109)
	at org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap$Builder.build(ImmutableMap.java:392)
	at org.apache.iceberg.spark.source.BaseDataReader.<init>(BaseDataReader.java:70)
	at org.apache.iceberg.spark.source.RowDataReader.<init>(RowDataReader.java:75)
	at org.apache.iceberg.spark.source.Reader$RowReader.<init>(Reader.java:509)
	at org.apache.iceberg.spark.source.Reader$InternalRowReaderFactory.create(Reader.java:487)
	at org.apache.iceberg.spark.source.Reader$ReadTask.createPartitionReader(Reader.java:437)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD.compute(DataSourceRDD.scala:42)
```
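For context, here is a minimal standalone sketch of the failure mode, using plain Guava rather than Iceberg's relocated copy. The file path and the collection loop are hypothetical illustrations, not Iceberg's actual code; the point the stack trace suggests is that `ImmutableMap.Builder#build` rejects duplicate keys, so putting the same delete-file path once per ScanTask triggers exactly this `IllegalArgumentException`, and deduplicating before building is one way to avoid it:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import com.google.common.collect.ImmutableMap;

public class DuplicateDeleteFileSketch {
  public static void main(String[] args) {
    // Hypothetical input: the same delete file referenced by two different ScanTasks.
    List<String> deleteFilePaths = Arrays.asList(
        "/warehouse/tbl/data/delete-file3.parquet",
        "/warehouse/tbl/data/delete-file3.parquet");

    // Guava's ImmutableMap.Builder accepts duplicate puts silently but throws
    // IllegalArgumentException("Multiple entries with same key: ...") at build().
    try {
      ImmutableMap.Builder<String, String> builder = ImmutableMap.builder();
      for (String path : deleteFilePaths) {
        builder.put(path, path);
      }
      builder.build(); // throws here, as in BaseDataReader.<init>
    } catch (IllegalArgumentException e) {
      System.out.println("Reproduced: " + e.getMessage());
    }

    // One possible fix sketch: deduplicate by key before building the immutable
    // map, so repeated references to the same file collapse to a single entry.
    Map<String, String> deduped = new LinkedHashMap<>();
    for (String path : deleteFilePaths) {
      deduped.putIfAbsent(path, path);
    }
    Map<String, String> files = ImmutableMap.copyOf(deduped);
    System.out.println("Deduplicated entries: " + files.size()); // prints 1
  }
}
```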
