xunliu commented on code in PR #5786:
URL: https://github.com/apache/gravitino/pull/5786#discussion_r1896520466


##########
authorizations/authorization-chain/src/main/java/org/apache/gravitino/authorization/chain/ChainAuthorizationPlugin.java:
##########
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.gravitino.authorization.chain;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Lists;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import org.apache.gravitino.Catalog;
+import org.apache.gravitino.MetadataObject;
+import org.apache.gravitino.authorization.ChainAuthorizationProperties;
+import org.apache.gravitino.authorization.Group;
+import org.apache.gravitino.authorization.MetadataObjectChange;
+import org.apache.gravitino.authorization.Owner;
+import org.apache.gravitino.authorization.RangerAuthorizationProperties;
+import org.apache.gravitino.authorization.Role;
+import org.apache.gravitino.authorization.RoleChange;
+import org.apache.gravitino.authorization.User;
+import org.apache.gravitino.connector.authorization.AuthorizationPlugin;
+import org.apache.gravitino.connector.authorization.BaseAuthorization;
+import org.apache.gravitino.exceptions.AuthorizationPluginException;
+import org.apache.gravitino.utils.IsolatedClassLoader;
+
+/** Chain authorization operations plugin class. <br> */
+public class ChainAuthorizationPlugin implements AuthorizationPlugin {
+  private List<AuthorizationPlugin> plugins = Lists.newArrayList();
+  private final String metalake;
+
+  public ChainAuthorizationPlugin(
+      String metalake, String catalogProvider, Map<String, String> config) {
+    this.metalake = metalake;
+    initPlugins(catalogProvider, config);
+  }
+
+  private void initPlugins(String catalogProvider, Map<String, String> properties) {
+    ChainAuthorizationProperties.validate(properties);
+    // Validate the properties for each plugin
+    ChainAuthorizationProperties.plugins(properties)
+        .forEach(
+            pluginName -> {
+              Map<String, String> pluginProperties =
+                  ChainAuthorizationProperties.fetchAuthPluginProperties(pluginName, properties);
+              String authProvider =
+                  ChainAuthorizationProperties.getPluginProvider(pluginName, properties);
+              if ("ranger".equals(authProvider)) {

Review Comment:
   The community discussed this before: just as `Catalog` shortName uses string literals (`Hive`, `Hadoop`) rather than constants, I kept this consistent.
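To illustrate the per-plugin fan-out that `initPlugins` performs, here is a minimal, self-contained sketch. The prefix scheme (`chain.<pluginName>.`), class name, and method name below are hypothetical stand-ins, not the actual `ChainAuthorizationProperties` API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: collect the properties belonging to one chained plugin
// by stripping an assumed per-plugin prefix, similar in spirit to
// ChainAuthorizationProperties.fetchAuthPluginProperties.
public class ChainPropertiesSketch {
  static Map<String, String> fetchPluginProperties(String pluginName, Map<String, String> all) {
    String prefix = "chain." + pluginName + ".";
    Map<String, String> result = new HashMap<>();
    for (Map.Entry<String, String> e : all.entrySet()) {
      if (e.getKey().startsWith(prefix)) {
        // Keep only this plugin's keys, with the prefix removed.
        result.put(e.getKey().substring(prefix.length()), e.getValue());
      }
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, String> all = new HashMap<>();
    all.put("chain.hive1.provider", "ranger");
    all.put("chain.hdfs1.provider", "ranger");
    System.out.println(fetchPluginProperties("hive1", all)); // prints {provider=ranger}
  }
}
```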



##########
authorizations/authorization-chain/src/main/java/org/apache/gravitino/authorization/chain/ChainAuthorization.java:
##########
@@ -16,24 +16,27 @@
  * specific language governing permissions and limitations
  * under the License.
  */
-package org.apache.gravitino.connector.authorization.mysql;
+package org.apache.gravitino.authorization.chain;
 
 import java.util.Map;
 import org.apache.gravitino.connector.authorization.AuthorizationPlugin;
 import org.apache.gravitino.connector.authorization.BaseAuthorization;
 
-public class TestMySQLAuthorization extends BaseAuthorization<TestMySQLAuthorization> {
-
-  public TestMySQLAuthorization() {}
-
+/** Implementation of a Chain authorization in Gravitino. */
+public class ChainAuthorization extends BaseAuthorization<ChainAuthorization> {
   @Override
   public String shortName() {
-    return "mysql";
+    return "chain";
   }
 
   @Override
   public AuthorizationPlugin newPlugin(
       String metalake, String catalogProvider, Map<String, String> config) {
-    return new TestMySQLAuthorizationPlugin();
+    switch (catalogProvider) {

Review Comment:
   Currently, this PR only supports the Hive catalog, but we can support more types in the future. We need rigorous testing before lifting this limit.
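As a rough illustration of that restriction, a provider guard of the following shape rejects unsupported catalogs. The return value and error message here are placeholders, not the exact code in this PR.

```java
import java.util.Map;

// Hypothetical sketch: only the "hive" catalog provider is accepted for now;
// anything else fails fast until it has been rigorously tested.
public class ProviderGuardSketch {
  static String newPlugin(String catalogProvider, Map<String, String> config) {
    switch (catalogProvider) {
      case "hive":
        return "chain-plugin-for-hive"; // placeholder for the real ChainAuthorizationPlugin
      default:
        throw new IllegalArgumentException(
            "Catalog provider " + catalogProvider + " is not supported by the chain plugin yet");
    }
  }

  public static void main(String[] args) {
    System.out.println(newPlugin("hive", Map.of())); // prints chain-plugin-for-hive
  }
}
```

Failing fast in the `default` branch keeps misconfigured catalogs from silently skipping authorization.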



##########
authorizations/authorization-ranger/src/main/java/org/apache/gravitino/authorization/ranger/RangerHelper.java:
##########
@@ -49,8 +49,10 @@ public class RangerHelper {
   private static final Logger LOG = LoggerFactory.getLogger(RangerHelper.class);
 
   public static final String MANAGED_BY_GRAVITINO = "MANAGED_BY_GRAVITINO";
-  /** The `*` gives access to all resources */
+  /** The `*` gives access to all table resources */
   public static final String RESOURCE_ALL = "*";
+  /** The `/` gives access to all path resources */
+  public static final String RESOURCE_ROOT_PATH = "/test/";

Review Comment:
   rollback this change.



##########
authorizations/authorization-ranger/src/main/java/org/apache/gravitino/authorization/ranger/RangerAuthorizationHDFSPlugin.java:
##########
@@ -137,10 +171,52 @@ public List<AuthorizationSecurableObject> translatePrivilege(SecurableObject sec
                   .forEach(
                       rangerPrivilege ->
                           rangerPrivileges.add(
-                              new RangerPrivileges.RangerHivePrivilegeImpl(
+                              new RangerPrivileges.RangerHDFSPrivilegeImpl(
                                   rangerPrivilege, gravitinoPrivilege.condition())));
-
               switch (gravitinoPrivilege.name()) {
+                case USE_CATALOG:
+                case CREATE_CATALOG:
+                  // When HDFS is used as the Hive storage layer, Hive does not support the
+                  // `USE_CATALOG` and `CREATE_CATALOG` privileges. So, we ignore these
+                  // in the RangerAuthorizationHDFSPlugin.
+                  break;
+                case USE_SCHEMA:
+                  break;
+                case CREATE_SCHEMA:
+                  switch (securableObject.type()) {
+                    case METALAKE:
+                    case CATALOG:
+                      {
+                        String locationPath = getLocationPath(securableObject);
+                        if (locationPath != null && !locationPath.isEmpty()) {
+                          RangerHDFSMetadataObject rangerHDFSMetadataObject =
+                              new RangerHDFSMetadataObject(
+                                  locationPath, RangerHDFSMetadataObject.Type.PATH);
+                          rangerSecurableObjects.add(
+                              generateAuthorizationSecurableObject(
+                                  rangerHDFSMetadataObject.names(),
+                                  RangerHDFSMetadataObject.Type.PATH,
+                                  rangerPrivileges));
+                        }
+                      }
+                      break;
+                    case FILESET:
+                      rangerSecurableObjects.add(
+                          generateAuthorizationSecurableObject(
+                              translateMetadataObject(securableObject).names(),
+                              RangerHDFSMetadataObject.Type.PATH,
+                              rangerPrivileges));
+                      break;
+                    default:
+                      throw new AuthorizationPluginException(
+                          "The privilege %s is not supported for the securable object: %s",
+                          gravitinoPrivilege.name(), securableObject.type());
+                  }
+                  break;
+                case SELECT_TABLE:

Review Comment:
   I will split the `SELECT_TABLE` operations into the next PR.



##########
integration-test-common/src/test/java/org/apache/gravitino/integration/test/util/BaseIT.java:
##########
@@ -422,4 +424,37 @@ protected static void copyBundleJarsToHadoop(String bundleName) {
     String hadoopLibDirs = ITUtils.joinPath(gravitinoHome, "catalogs", "hadoop", "libs");
     copyBundleJarsToDirectory(bundleName, hadoopLibDirs);
   }
+
+  public static void runInEnv(String key, String value, Runnable lambda) {
+    String originalValue = System.getenv(key);
+    try {
+      setEnv(key, value);
+      if (key.equals("HADOOP_USER_NAME") && value != null) {
+        UserGroupInformation.setLoginUser(null);
+        System.setProperty("user.name", value);
+      }
+      lambda.run();
+    } catch (Exception e) {
+      throw new IllegalStateException("Failed to set environment variable", e);
+    } finally {
+      setEnv(key, originalValue);

Review Comment:
   OK, I improved this code.
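The `runInEnv` helper above follows the usual save/swap/restore pattern. Here is a simplified, self-contained sketch of that pattern using a JVM system property as a stand-in, since the real `setEnv` relies on test-only reflection not shown here; the class and method names are hypothetical.

```java
// Hypothetical sketch of the save/swap/restore pattern behind runInEnv:
// capture the original value, swap in the override, run the lambda, and
// restore the original in a finally block even if the lambda throws.
public class RunInEnvSketch {
  static void runWithProperty(String key, String value, Runnable lambda) {
    String original = System.getProperty(key);
    try {
      System.setProperty(key, value);
      lambda.run();
    } finally {
      if (original == null) {
        System.clearProperty(key); // the key was unset before; unset it again
      } else {
        System.setProperty(key, original);
      }
    }
  }

  public static void main(String[] args) {
    System.setProperty("user.role", "default");
    runWithProperty("user.role", "admin",
        () -> System.out.println("inside: " + System.getProperty("user.role")));
    System.out.println("after: " + System.getProperty("user.role")); // prints after: default
  }
}
```

Handling the "originally unset" case separately matters: restoring `null` via `setProperty` would throw, so the helper must clear the key instead.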



##########
authorizations/authorization-ranger/src/main/java/org/apache/gravitino/authorization/ranger/RangerAuthorizationHDFSPlugin.java:
##########
@@ -212,27 +287,77 @@ public AuthorizationMetadataObject translateMetadataObject(MetadataObject metada
     Preconditions.checkArgument(
         nsMetadataObject.size() > 0, "The metadata object must have at least one name.");
 
-    if (metadataObject.type() == MetadataObject.Type.FILESET) {
-      RangerPathBaseMetadataObject rangerHDFSMetadataObject =
-          new RangerPathBaseMetadataObject(
-              getFileSetPath(metadataObject), RangerPathBaseMetadataObject.Type.PATH);
-      rangerHDFSMetadataObject.validateAuthorizationMetadataObject();
-      return rangerHDFSMetadataObject;
-    } else {
-      return new RangerPathBaseMetadataObject("", RangerPathBaseMetadataObject.Type.PATH);
+    RangerHDFSMetadataObject rangerHDFSMetadataObject;
+    switch (metadataObject.type()) {
+      case METALAKE:
+      case CATALOG:
+        rangerHDFSMetadataObject =
+            new RangerHDFSMetadataObject("", RangerHDFSMetadataObject.Type.PATH);
+        break;
+      case SCHEMA:
+        rangerHDFSMetadataObject =
+            new RangerHDFSMetadataObject(
+                metadataObject.fullName(), RangerHDFSMetadataObject.Type.PATH);
+        break;
+      case FILESET:
+        rangerHDFSMetadataObject =
+            new RangerHDFSMetadataObject(
+                getLocationPath(metadataObject), RangerHDFSMetadataObject.Type.PATH);
+        break;
+      default:
+        throw new AuthorizationPluginException(
+            "The metadata object type %s is not supported in the RangerAuthorizationHDFSPlugin",
+            metadataObject.type());
     }
+    rangerHDFSMetadataObject.validateAuthorizationMetadataObject();
+    return rangerHDFSMetadataObject;
   }
 
-  public String getFileSetPath(MetadataObject metadataObject) {
-    FilesetDispatcher filesetDispatcher = GravitinoEnv.getInstance().filesetDispatcher();
-    NameIdentifier identifier =
-        NameIdentifier.parse(String.format("%s.%s", metalake, metadataObject.fullName()));
-    Fileset fileset = filesetDispatcher.loadFileset(identifier);
-    Preconditions.checkArgument(
-        fileset != null, String.format("Fileset %s is not found", identifier));
-    String filesetLocation = fileset.storageLocation();
-    Preconditions.checkArgument(
-        filesetLocation != null, String.format("Fileset %s location is not found", identifier));
-    return pattern.matcher(filesetLocation).replaceAll("");
+  private NameIdentifier getObjectNameIdentifier(MetadataObject metadataObject) {
+    return NameIdentifier.parse(String.format("%s.%s", metalake, metadataObject.fullName()));
+  }
+
+  @VisibleForTesting
+  public String getLocationPath(MetadataObject metadataObject) throws NoSuchEntityException {
+    String locationPath = null;
+    switch (metadataObject.type()) {
+      case METALAKE:
+      case SCHEMA:
+      case TABLE:

Review Comment:
   This PR only supports `CREATE_SCHEMA` on the `Catalog` in the chained `Ranger Hive` and `Ranger HDFS` plugins. I will split the other operations into the next PR.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
