aokolnychyi commented on a change in pull request #2314: URL: https://github.com/apache/iceberg/pull/2314#discussion_r592772420
########## File path: spark/src/main/java/org/apache/iceberg/spark/actions/BaseExpireSnapshotsSparkAction.java ##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.spark.actions;
+
+import java.util.Iterator;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Consumer;
+import org.apache.iceberg.HasTableOperations;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.TableMetadata;
+import org.apache.iceberg.TableOperations;
+import org.apache.iceberg.actions.BaseExpireSnapshotsActionResult;
+import org.apache.iceberg.actions.BaseSparkAction;
+import org.apache.iceberg.actions.ExpireSnapshots;
+import org.apache.iceberg.exceptions.NotFoundException;
+import org.apache.iceberg.exceptions.ValidationException;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.util.PropertyUtil;
+import org.apache.iceberg.util.Tasks;
+import org.apache.spark.sql.Column;
+import org.apache.spark.sql.Dataset;
+import org.apache.spark.sql.Row;
+import org.apache.spark.sql.SparkSession;
+import org.apache.spark.sql.functions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.iceberg.TableProperties.GC_ENABLED;
+import static org.apache.iceberg.TableProperties.GC_ENABLED_DEFAULT;
+
+/**
+ * An action that performs the same operation as {@link org.apache.iceberg.ExpireSnapshots} but uses Spark
+ * to determine the delta in files between the pre and post-expiration table metadata. All of the same
+ * restrictions of {@link org.apache.iceberg.ExpireSnapshots} also apply to this action.
+ * <p>
+ * This action first leverages {@link org.apache.iceberg.ExpireSnapshots} to expire snapshots and then
+ * uses metadata tables to find files that can be safely deleted. This is done by anti-joining two Datasets
+ * that contain all manifest and data files before and after the expiration. The snapshot expiration
+ * will be fully committed before any deletes are issued.
+ * <p>
+ * This operation performs a shuffle so the parallelism can be controlled through 'spark.sql.shuffle.partitions'.
+ * <p>
+ * Deletes are still performed locally after retrieving the results from the Spark executors.
+ */
+@SuppressWarnings("UnnecessaryAnonymousClass")
+public class BaseExpireSnapshotsSparkAction

Review comment:
   I am not sure it is a good idea to build a common hierarchy for different query engines. We tried that in the past and it led to a really awkward situation with our Spark actions. For example, we can no longer use our `BaseSparkAction` in some cases because Java allows only single class inheritance, so some Spark actions extend it and some don't. That forced us to create more static methods in places where we don't need them, and I suspect there will be more problems like this. I agree about sharing code wherever possible, though. I'd prefer to do that using utility classes instead of building a common hierarchy. Refactoring common code into utility classes is beyond the scope of this PR.
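   To make the single-inheritance point concrete, here is a minimal, purely illustrative sketch (none of these class names are Iceberg's actual API): an action forced to extend an engine-agnostic base cannot also extend a `BaseSparkAction`-style superclass, so shared Spark-side logic ends up in a static utility class instead.

```java
// Illustrative only: hypothetical names, not Iceberg's real classes.
public class ActionHierarchySketch {

    // An engine-agnostic base class a common hierarchy might impose.
    static class EngineAgnosticBase {
        String engine() { return "agnostic"; }
    }

    // Utility-class alternative: shared Spark-side logic as static helpers,
    // callable from any action regardless of what superclass it has.
    static final class SparkActionUtil {
        private SparkActionUtil() { }  // not instantiable; holds statics only

        static String describeFileScan(String table) {
            return "scanning files of " + table;
        }
    }

    // Java allows one superclass, so this action cannot also extend a
    // BaseSparkAction-style class; it reuses the static helper instead.
    static class ExpireSnapshotsSketch extends EngineAgnosticBase {
        String plan(String table) {
            return SparkActionUtil.describeFileScan(table);
        }
    }

    public static void main(String[] args) {
        System.out.println(new ExpireSnapshotsSketch().plan("db.tbl"));
    }
}
```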
   I basically tried to create a new action while keeping backward compatibility. Also, designing utility classes seems easier once we know which parts Flink can reuse. This action heavily depends on Spark `Row` and metadata tables, for example.
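   The javadoc above describes finding deletable files by anti-joining the sets of files referenced before and after expiration. A small plain-Java model of that semantics (not the Spark `Dataset` implementation, just the set logic it computes) might look like this:

```java
import java.util.Set;
import java.util.TreeSet;

// Models the anti-join from the javadoc with plain Java sets: files
// referenced before expiration minus files still referenced afterwards.
public class FileDeltaSketch {

    // Returns entries of 'before' with no match in 'after' -- the same
    // result a left anti-join on the file path column would produce.
    static Set<String> filesToDelete(Set<String> before, Set<String> after) {
        Set<String> delta = new TreeSet<>(before);
        delta.removeAll(after);
        return delta;
    }

    public static void main(String[] args) {
        Set<String> before = Set.of("m1.avro", "m2.avro", "f1.parquet", "f2.parquet");
        Set<String> after = Set.of("m2.avro", "f2.parquet");
        // Only files no longer referenced are safe to delete.
        System.out.println(filesToDelete(before, after)); // [f1.parquet, m1.avro]
    }
}
```

   In the real action this difference is computed distributed, via a shuffle, which is why `spark.sql.shuffle.partitions` controls the parallelism.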
