[ 
https://issues.apache.org/jira/browse/HIVE-27019?focusedWorklogId=847738&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-847738
 ]

ASF GitHub Bot logged work on HIVE-27019:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 27/Feb/23 07:55
            Start Date: 27/Feb/23 07:55
    Worklog Time Spent: 10m 
      Work Description: deniskuzZ commented on code in PR #4032:
URL: https://github.com/apache/hive/pull/4032#discussion_r1118387353


##########
ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/FSRemover.java:
##########
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.txn.compactor;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.ReplChangeManager;
+import org.apache.hadoop.hive.metastore.api.Database;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
+import org.apache.hadoop.hive.metastore.utils.FileUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hive.common.util.Ref;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+import static org.apache.hadoop.hive.metastore.HMSHandler.getMSForConf;
+import static org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.getDefaultCatalog;
+
+/**
+ * A runnable class which takes in a cleaningRequestHandler and a cleaning request
+ * and deletes the files according to the cleaning request.
+ */
+public class FSRemover {
+  private static final Logger LOG = LoggerFactory.getLogger(FSRemover.class);
+  private final HiveConf conf;
+  private final ReplChangeManager replChangeManager;
+
+  public FSRemover(HiveConf conf, ReplChangeManager replChangeManager) {
+    this.conf = conf;
+    this.replChangeManager = replChangeManager;
+  }
+
+  public List<Path> clean(CleaningRequest cr) throws MetaException {
+    Ref<List<Path>> removedFiles = Ref.from(new ArrayList<>());
+    try {
+      Callable<List<Path>> cleanUpTask;
+      cleanUpTask = () -> removeFiles(cr);
+
+      if (CompactorUtil.runJobAsSelf(cr.runAs())) {
+        removedFiles.value = cleanUpTask.call();
+      } else {
+        LOG.info("Cleaning as user {} for {}", cr.runAs(), cr.getFullPartitionName());
+        UserGroupInformation ugi = UserGroupInformation.createProxyUser(cr.runAs(),
+                UserGroupInformation.getLoginUser());
+        try {
+          ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
+            removedFiles.value = cleanUpTask.call();
+            return null;
+          });
+        } finally {
+          try {
+            FileSystem.closeAllForUGI(ugi);
+          } catch (IOException exception) {
+            LOG.error("Could not clean up file-system handles for UGI: {} for {}",
+                    ugi, cr.getFullPartitionName(), exception);
+          }
+        }
+      }
+    } catch (Exception ex) {
+      LOG.error("Caught exception when cleaning, unable to complete cleaning of {} due to {}", cr,
+              StringUtils.stringifyException(ex));
+    }
+    return removedFiles.value;
+  }
+
+  /**
+   * @param cr Cleaning request
+   * @return List of deleted files if any files were removed
+   */
+  private List<Path> removeFiles(CleaningRequest cr)
+          throws MetaException, IOException {
+    List<Path> deleted = new ArrayList<>();
+    if (cr.getObsoleteDirs().isEmpty()) {
+      return deleted;
+    }
+    LOG.info("About to remove {} obsolete directories from {}. {}", cr.getObsoleteDirs().size(),
+            cr.getLocation(), CompactorUtil.getDebugInfo(cr.getObsoleteDirs()));
+    boolean needCmRecycle;
+    try {
+      Database db = getMSForConf(conf).getDatabase(getDefaultCatalog(conf), cr.getDbName());

Review Comment:
   If the metadata cache contains the db as well, why do we need to fetch it every time?
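
The reviewer's suggestion could be sketched as memoizing the database lookup so the metastore round-trip happens once per database name rather than on every `clean()` call. This is a hypothetical illustration, not Hive's actual metadata-cache API; `DbCache`, `fetchDatabase`, and the `String` stand-in for `Database` are all assumptions for the sketch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DbCache {
    static final Map<String, String> CACHE = new ConcurrentHashMap<>();
    static int fetchCount = 0;

    // Stand-in for the expensive getMSForConf(conf).getDatabase(...) round-trip.
    static String fetchDatabase(String dbName) {
        fetchCount++;
        return "db:" + dbName;
    }

    // Fetch at most once per database name; later callers hit the cache.
    static String getDatabaseCached(String dbName) {
        return CACHE.computeIfAbsent(dbName, DbCache::fetchDatabase);
    }

    public static void main(String[] args) {
        getDatabaseCached("default");
        getDatabaseCached("default");
        System.out.println(fetchCount); // 1: second call was served from the cache
    }
}
```

A real cache would also need invalidation when a database is altered or dropped, which is presumably why the existing metadata cache is the natural place for this.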

Issue Time Tracking
-------------------

    Worklog Id:     (was: 847738)
    Time Spent: 10h 50m  (was: 10h 40m)

> Split Cleaner into separate manageable modular entities
> -------------------------------------------------------
>
>                 Key: HIVE-27019
>                 URL: https://issues.apache.org/jira/browse/HIVE-27019
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Sourabh Badhya
>            Assignee: Sourabh Badhya
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> As described by the parent task - 
> Cleaner can be divided into separate entities like -
> *1) Handler* - This entity fetches the data from the metastore DB from 
> relevant tables and converts it into a request entity called CleaningRequest. 
> It would also do SQL operations post cleanup (postprocess). Every type of 
> cleaning request is provided by a separate handler.
> *2) Filesystem remover* - This entity fetches the cleaning requests from 
> various handlers and deletes the corresponding files according to each 
> cleaning request.
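
The handler/remover split described above could be sketched roughly as follows. All names except `FSRemover` and `CleaningRequest` are illustrative assumptions, not the actual Hive interfaces:

```java
import java.util.ArrayList;
import java.util.List;

public class CleanerSketch {
    // Request entity produced by a handler.
    static class CleaningRequest {
        final List<String> obsoleteDirs;
        CleaningRequest(List<String> dirs) { this.obsoleteDirs = dirs; }
    }

    // 1) Handler: builds requests from metastore state and runs post-cleanup SQL.
    interface Handler {
        List<CleaningRequest> fetchRequests();
        void postprocess(CleaningRequest done);
    }

    // 2) Filesystem remover: consumes requests and deletes the listed paths.
    static List<String> clean(Handler handler) {
        List<String> deleted = new ArrayList<>();
        for (CleaningRequest cr : handler.fetchRequests()) {
            deleted.addAll(cr.obsoleteDirs); // real code would call FileSystem.delete
            handler.postprocess(cr);          // e.g. mark the compaction as cleaned
        }
        return deleted;
    }

    public static void main(String[] args) {
        Handler h = new Handler() {
            public List<CleaningRequest> fetchRequests() {
                return List.of(new CleaningRequest(List.of("/warehouse/t/base_5")));
            }
            public void postprocess(CleaningRequest done) { /* no-op in the sketch */ }
        };
        System.out.println(clean(h)); // [/warehouse/t/base_5]
    }
}
```

The point of the split is that each source of cleaning work (compaction, aborted txns, ...) only has to implement `Handler`, while path deletion and the proxy-user logic live once in the remover.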



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
