aokolnychyi commented on a change in pull request #2452:
URL: https://github.com/apache/iceberg/pull/2452#discussion_r614599746
##########
File path: spark/src/main/java/org/apache/iceberg/spark/actions/BaseSparkAction.java
##########
@@ -159,7 +159,7 @@ protected Table newStaticTable(TableMetadata metadata, FileIO io) {
         .repartition(spark.sessionState().conf().numShufflePartitions()) // avoid adaptive execution combining tasks
         .as(Encoders.bean(ManifestFileBean.class));
-    return allManifests.flatMap(new ReadManifest(ioBroadcast), Encoders.STRING()).toDF("file_path");
+    return allManifests.flatMap(new ReadManifest(ioBroadcast), Encoders.STRING()).toDF("file_path").distinct();
Review comment:
Deduplicating in this place makes sense to me. We could consider exposing an
argument to make the dedup step optional (I am not sure it is a good idea,
but I want to think this through together), since the dedup step is going to
trigger a shuffle. Technically, the existing expire snapshots action is
already fine, as `except` performs the dedup there.
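For context on that last point, Spark's `Dataset#except` follows SQL EXCEPT
DISTINCT semantics, so its output is deduplicated even when its inputs contain
duplicates. A minimal, self-contained sketch (the dataset names and values
below are made up for illustration, not the actual variables in the action):

import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ExceptDedupSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().master("local[*]").getOrCreate();

    // Hypothetical stand-ins: paths referenced by all manifests (note the
    // duplicate "a") and paths that are still reachable.
    Dataset<Row> allFiles = spark
        .createDataset(Arrays.asList("a", "a", "b", "c"), Encoders.STRING())
        .toDF("file_path");
    Dataset<Row> validFiles = spark
        .createDataset(Arrays.asList("a", "b"), Encoders.STRING())
        .toDF("file_path");

    // except() applies EXCEPT DISTINCT semantics: the result is a single
    // row "c" even though "a" appeared twice in allFiles.
    allFiles.except(validFiles).show();

    spark.stop();
  }
}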
The question is what performance impact deduplicating here will have. We only
have duplicates if multiple manifests reference the same files. In the
`rewrite_manifests` procedure, we rewrite all manifests, meaning we produce a
new snapshot with new manifests whose entries are old data files. Updates and
deletes may also rewrite manifests. Overall, I think deduplicating here makes
sense in all cases.
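If we did decide to expose an argument, a rough shape could be something like
the following sketch (purely hypothetical; neither the flag nor the helper
exists in this PR, and `ThisT`/`self()` assume the action's builder-style
pattern). Callers that already dedup downstream via `except` could then skip
the extra shuffle:

// Hypothetical sketch inside BaseSparkAction, not part of this PR.
private boolean deduplicateFilePaths = true;

public ThisT deduplicateFilePaths(boolean deduplicate) {
  this.deduplicateFilePaths = deduplicate;
  return self();
}

private Dataset<Row> toFilePathDF(Dataset<ManifestFileBean> allManifests,
                                  Broadcast<FileIO> ioBroadcast) {
  Dataset<Row> filePaths = allManifests
      .flatMap(new ReadManifest(ioBroadcast), Encoders.STRING())
      .toDF("file_path");
  // distinct() triggers a shuffle; skip it when the caller handles dedup.
  return deduplicateFilePaths ? filePaths.distinct() : filePaths;
}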
Thoughts, @rdblue @flyrain @RussellSpitzer @karuppayya?