jerryshao commented on code in PR #5190:
URL: https://github.com/apache/gravitino/pull/5190#discussion_r1816303097


##########
authorizations/authorization-ranger/src/test/java/org/apache/gravitino/authorization/ranger/integration/test/RangerHiveE2EIT.java:
##########
@@ -300,21 +606,28 @@ private static void createCatalog() {
     LOG.info("Catalog created: {}", catalog);
   }
 
-  private static void createSchema() {
-    Map<String, String> properties = Maps.newHashMap();
-    properties.put("key1", "val1");
-    properties.put("key2", "val2");
-    properties.put(
-        "location",
-        String.format(
-            "hdfs://%s:%d/user/hive/warehouse/%s.db",
-            containerSuite.getHiveRangerContainer().getContainerIpAddress(),
-            HiveContainer.HDFS_DEFAULTFS_PORT,
-            schemaName.toLowerCase()));
-    String comment = "comment";
+  private static void waitForUpdatingPolicies() throws InterruptedException {
+    // After a Ranger authorization change, we must wait for the Ranger Spark
+    // plugin to refresh its policies. The sleep time must be greater than the
+    // policy update interval (ranger.plugin.spark.policy.pollIntervalMs) in
+    // `resources/ranger-spark-security.xml.template`.
+    Thread.sleep(1000L);
+  }
 
-    catalog.asSchemas().createSchema(schemaName, comment, properties);
-    Schema loadSchema = catalog.asSchemas().loadSchema(schemaName);
-    Assertions.assertEquals(schemaName.toLowerCase(), loadSchema.name());
+  private static void setEnv(String key, String value) {
+    try {
+      Map<String, String> env = System.getenv();
+      Class<?> cl = env.getClass();
+      Field field = cl.getDeclaredField("m");
+      field.setAccessible(true);
+      Map<String, String> writableEnv = (Map<String, String>) field.get(env);
+      if (value == null) {
+        writableEnv.remove(key);
+      } else {
+        writableEnv.put(key, value);
+      }
+    } catch (Exception e) {
+      throw new IllegalStateException("Failed to set environment variable", e);
+    }

Review Comment:
   Can you please also add tests for altering tables, and also verify the ownership mechanism if possible?
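An aside on the `setEnv` reflection hack in the hunk above: it reaches into the JDK-internal map behind `System.getenv()`, which newer JDKs may reject with `InaccessibleObjectException` unless the module is explicitly opened. A safer pattern for tests is an explicit override layer consulted before the real environment. This is a hypothetical sketch, not code from the PR; `HADOOP_USER_NAME` is only an illustrative key:

```java
import java.util.HashMap;
import java.util.Map;

public class EnvOverride {
  // Test-time overrides consulted before the real environment; avoids
  // reflective mutation of System.getenv()'s backing map.
  private static final Map<String, String> OVERRIDES = new HashMap<>();

  static void setEnv(String key, String value) {
    if (value == null) {
      OVERRIDES.remove(key);
    } else {
      OVERRIDES.put(key, value);
    }
  }

  static String getEnv(String key) {
    String v = OVERRIDES.get(key);
    return v != null ? v : System.getenv(key);
  }

  public static void main(String[] args) {
    setEnv("HADOOP_USER_NAME", "gravitino");
    System.out.println(getEnv("HADOOP_USER_NAME")); // prints "gravitino"
  }
}
```

The trade-off is that production code must read through `getEnv` instead of `System.getenv` directly, but the test no longer depends on JDK internals.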



##########
authorizations/authorization-ranger/src/test/java/org/apache/gravitino/authorization/ranger/integration/test/RangerHiveE2EIT.java:
##########
@@ -300,21 +606,28 @@ private static void createCatalog() {
     LOG.info("Catalog created: {}", catalog);
   }
 
-  private static void createSchema() {
-    Map<String, String> properties = Maps.newHashMap();
-    properties.put("key1", "val1");
-    properties.put("key2", "val2");
-    properties.put(
-        "location",
-        String.format(
-            "hdfs://%s:%d/user/hive/warehouse/%s.db",
-            containerSuite.getHiveRangerContainer().getContainerIpAddress(),
-            HiveContainer.HDFS_DEFAULTFS_PORT,
-            schemaName.toLowerCase()));
-    String comment = "comment";
+  private static void waitForUpdatingPolicies() throws InterruptedException {
+    // After a Ranger authorization change, we must wait for the Ranger Spark
+    // plugin to refresh its policies. The sleep time must be greater than the
+    // policy update interval (ranger.plugin.spark.policy.pollIntervalMs) in
+    // `resources/ranger-spark-security.xml.template`.
+    Thread.sleep(1000L);

Review Comment:
   How can you make sure that `1000` is enough?
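One way to address this concern is to poll until the condition actually holds, with a deadline comfortably above the plugin's poll interval, instead of a fixed sleep. A minimal sketch under stated assumptions: the `waitUntil` helper and the sample condition are hypothetical illustrations, not code from the PR, and a real test would plug in a "policy is visible" check:

```java
import java.util.function.BooleanSupplier;

public class PolicyWait {
  // Poll the condition at a fixed interval until it holds or the deadline
  // expires; returns whether the condition was ever observed true.
  static boolean waitUntil(BooleanSupplier condition, long timeoutMs, long pollMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      Thread.sleep(pollMs);
    }
    return condition.getAsBoolean();
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // Stand-in condition that becomes true after ~300 ms, playing the role of
    // "the updated Ranger policy is now enforced by the Spark plugin".
    boolean ok = waitUntil(() -> System.currentTimeMillis() - start > 300, 5000L, 50L);
    System.out.println(ok); // prints "true"
  }
}
```

With this shape, the timeout only bounds the worst case; the test proceeds as soon as the policy refresh is observed, so it is both faster on average and robust when 1000 ms is not enough.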



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
