vaultah commented on code in PR #13720: URL: https://github.com/apache/iceberg/pull/13720#discussion_r2300458579
##########
spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/actions/RewriteTablePathSparkAction.java:
##########
@@ -494,36 +483,60 @@ public RewriteContentFileResult appendDeleteFile(RewriteResult<DeleteFile> r1) {
     }
   }

-  /** Rewrite manifest files in a distributed manner and return rewritten data files path pairs. */
-  private RewriteContentFileResult rewriteManifests(
+  /**
+   * Rewrite manifest files in a distributed manner and return the resulting manifests and content
+   * files selected for rewriting.
+   */
+  private Map<String, RewriteContentFileResult> rewriteManifests(
       Set<Snapshot> deltaSnapshots, TableMetadata tableMetadata, Set<ManifestFile> toRewrite) {
     if (toRewrite.isEmpty()) {
-      return new RewriteContentFileResult();
+      return Maps.newHashMap();
     }

     Encoder<ManifestFile> manifestFileEncoder = Encoders.javaSerialization(ManifestFile.class);
+    Encoder<RewriteContentFileResult> manifestResultEncoder =
+        Encoders.javaSerialization(RewriteContentFileResult.class);
+    Encoder<Tuple2<String, RewriteContentFileResult>> tupleEncoder =
+        Encoders.tuple(Encoders.STRING(), manifestResultEncoder);
+
     Dataset<ManifestFile> manifestDS =
         spark().createDataset(Lists.newArrayList(toRewrite), manifestFileEncoder);

     Set<Long> deltaSnapshotIds =
         deltaSnapshots.stream().map(Snapshot::snapshotId).collect(Collectors.toSet());

-    return manifestDS
-        .repartition(toRewrite.size())
-        .map(
-            toManifests(
-                tableBroadcast(),
-                sparkContext().broadcast(deltaSnapshotIds),
-                stagingDir,
-                tableMetadata.formatVersion(),
-                sourcePrefix,
-                targetPrefix),
-            Encoders.bean(RewriteContentFileResult.class))
-        // duplicates are expected here as the same data file can have different statuses
-        // (e.g. added and deleted)
-        .reduce((ReduceFunction<RewriteContentFileResult>) RewriteContentFileResult::append);
-  }
-
-  private static MapFunction<ManifestFile, RewriteContentFileResult> toManifests(
+    Iterator<Tuple2<String, RewriteContentFileResult>> resultIterator =
+        manifestDS
+            .repartition(toRewrite.size())
+            .map(
+                toManifests(
+                    tableBroadcast(),
+                    sparkContext().broadcast(deltaSnapshotIds),
+                    stagingDir,
+                    tableMetadata.formatVersion(),
+                    sourcePrefix,
+                    targetPrefix),
+                tupleEncoder)
+            .toLocalIterator();

Review Comment:
   That's a great point. I agree that a larger refactoring to create a dedicated, reducible result class would be a much cleaner design in the long run.

   Given that this PR is already quite large and focused on the correctness bug, would you be open to tackling that refactoring in a follow-up PR? I can create a new issue to track that work. Does that sound like a reasonable plan?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
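For context on the pattern under discussion: the diff replaces a distributed `reduce` over `RewriteContentFileResult` with a driver-side iterator of per-manifest `(path, result)` tuples, and the reviewer suggests a dedicated, reducible result class as a cleaner follow-up. A minimal sketch of that reducible-merge idea is below; `Result` and `MergeResults` are hypothetical simplified stand-ins, not the actual Iceberg classes, and the real `RewriteContentFileResult` carries file copy plans rather than a plain count.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for a reducible result class: it only tracks how many
// content files were selected for rewriting, which is enough to show the
// append/merge pattern. Duplicates per key are expected, since the same data
// file can appear with different statuses (e.g. added and deleted).
class Result {
  final int count;

  Result(int count) {
    this.count = count;
  }

  // Combine two partial results; in the real class this would append the
  // rewritten file path pairs from both sides.
  Result append(Result other) {
    return new Result(this.count + other.count);
  }
}

public class MergeResults {
  // Merge per-manifest results keyed by manifest path. Entries sharing a key
  // are combined with append(), mirroring the reduction the distributed
  // rewrite previously performed in a single Spark reduce step.
  static Map<String, Result> merge(Iterable<Map.Entry<String, Result>> entries) {
    Map<String, Result> merged = new HashMap<>();
    for (Map.Entry<String, Result> e : entries) {
      merged.merge(e.getKey(), e.getValue(), Result::append);
    }
    return merged;
  }

  public static void main(String[] args) {
    Map<String, Result> merged =
        merge(
            List.of(
                Map.entry("m1.avro", new Result(2)),
                Map.entry("m1.avro", new Result(3)),
                Map.entry("m2.avro", new Result(1))));
    System.out.println(merged.get("m1.avro").count); // prints 5
    System.out.println(merged.get("m2.avro").count); // prints 1
  }
}
```

Because `append` is associative, the same merge could run either on the driver (as with `toLocalIterator()` above) or inside a distributed reduce, which is what makes a dedicated reducible result class attractive.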