xunliu commented on code in PR #5733:
URL: https://github.com/apache/gravitino/pull/5733#discussion_r1871031722


##########
authorizations/authorization-ranger/src/main/java/org/apache/gravitino/authorization/ranger/RangerMetadataObject.java:
##########
@@ -112,12 +112,12 @@ public void validateAuthorizationMetadataObject() throws IllegalArgumentException
         type != null, "Cannot create a Ranger metadata object with no type");
 
     Preconditions.checkArgument(
-        names.size() != 1 || type == RangerMetadataObject.Type.SCHEMA,
-        "If the length of names is 1, it must be the SCHEMA type");
+        names.size() != 1 || type == RangerMetadataObject.Type.SCHEMA || type == Type.PATH,
+        "If the length of names is 1, it must be the SCHEMA type or PATH type");
 
     Preconditions.checkArgument(
-        names.size() != 2 || type == RangerMetadataObject.Type.TABLE || type == Type.PATH,
-        "If the length of names is 2, it must be the TABLE type of PATH type");
+        names.size() != 2 || type == RangerMetadataObject.Type.TABLE,
+        "If the length of names is 2, it must be the TABLE type");

Review Comment:
   I think we can split `RangerMetadataObject` into `RangerHadoopSQLMetadataObject` and `RangerHDFSMetadataObject`, and manage `Path` in `RangerHDFSMetadataObject`.
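   A minimal, self-contained sketch of how that split could look (the class shapes and the use of plain `IllegalArgumentException` instead of Guava's `Preconditions` are assumptions for illustration, not the actual implementation):

   ```java
   import java.util.List;

   // Hypothetical sketch: each subclass validates only the name/type
   // combinations that make sense for its backend.
   class RangerHadoopSQLMetadataObject {
     enum Type { SCHEMA, TABLE, COLUMN }

     private final List<String> names;
     private final Type type;

     RangerHadoopSQLMetadataObject(List<String> names, Type type) {
       this.names = names;
       this.type = type;
     }

     void validateAuthorizationMetadataObject() {
       if (type == null) {
         throw new IllegalArgumentException("Cannot create a Ranger metadata object with no type");
       }
       // SQL objects keep the original rules: 1 name -> SCHEMA, 2 names -> TABLE.
       if (names.size() == 1 && type != Type.SCHEMA) {
         throw new IllegalArgumentException("If the length of names is 1, it must be the SCHEMA type");
       }
       if (names.size() == 2 && type != Type.TABLE) {
         throw new IllegalArgumentException("If the length of names is 2, it must be the TABLE type");
       }
     }
   }

   class RangerHDFSMetadataObject {
     private final String path;

     RangerHDFSMetadataObject(String path) {
       this.path = path;
     }

     void validateAuthorizationMetadataObject() {
       // HDFS objects carry only a PATH, so the SQL name-length rules disappear.
       if (path == null || path.isEmpty()) {
         throw new IllegalArgumentException("Cannot create a Ranger HDFS metadata object with no path");
       }
     }
   }
   ```

   With this split, the `PATH` special cases no longer leak into the SQL-side validation.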



##########
authorizations/authorization-ranger/src/test/java/org/apache/gravitino/authorization/ranger/integration/test/RangerAuthorizationHDFSPluginIT.java:
##########


Review Comment:
   hi @theoryxu 
   I think we need to add a Hadoop fileset E2E integration test here.
   You can refer to:
   - (Hadoop fileset env) https://github.com/apache/gravitino/blob/main/catalogs/catalog-hadoop/src/test/java/org/apache/gravitino/catalog/hadoop/integration/test/HadoopCatalogIT.java
   - (authorization grant operation) https://github.com/apache/gravitino/blob/main/authorizations/authorization-ranger/src/test/java/org/apache/gravitino/authorization/ranger/integration/test/RangerBaseE2EIT.java#L216
   
   These ITs are complete Hadoop fileset integration tests and will save you time.
   
   I think we need to add:
   1. Use a Hadoop fileset to create a path in HDFS.
   2. Use a `FileSystem` object to list/read/write this path without any permission; the operations should be denied.
   3. Use Gravitino authorization to grant a role to the user.
   4. Use the `FileSystem` object to list/read/write this path again; with the permission granted, access should succeed. Other operations can be verified the same way.
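   The steps above could be sketched as a JUnit-style skeleton. This is only a hedged outline: the Gravitino-side helpers (`createFileset`, `grantRoleToUser`) are hypothetical placeholders, and the real calls should follow the `RangerBaseE2EIT` pattern linked above.

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.security.AccessControlException;
   import org.junit.jupiter.api.Assertions;
   import org.junit.jupiter.api.Test;

   public class RangerAuthorizationHDFSPluginIT {

     @Test
     void testFilesetPermissions() throws Exception {
       // 1. Create a fileset through the Hadoop catalog; its storage
       //    location materializes a path in HDFS (hypothetical helper).
       Path filesetPath = createFileset("test_fileset");

       FileSystem fs = FileSystem.get(filesetPath.toUri(), new Configuration());

       // 2. Before any grant, list/read/write must be rejected by Ranger.
       Assertions.assertThrows(
           AccessControlException.class, () -> fs.listStatus(filesetPath));

       // 3. Grant a role with read/write privileges via Gravitino
       //    authorization (hypothetical helper).
       grantRoleToUser("fileset_role", "test_user");

       // 4. With the permission granted, the same operations succeed.
       Assertions.assertDoesNotThrow(() -> fs.listStatus(filesetPath));
     }
   }
   ```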



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to