-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72989/
-----------------------------------------------------------
(Updated Oct. 26, 2020, 5:45 p.m.)
Review request for ranger and Ramesh Mani.
Repository: ranger
Description
-------
Currently, RangerHiveAuthorizer has separate logic flows for HDFS and S3/Ozone.
If the fs scheme is part of hivePlugin#getFSScheme[1], then it checks
privileges via the fs.
[1] private static String RANGER_PLUGIN_HIVE_ULRAUTH_FILESYSTEM_SCHEMES_DEFAULT
= "hdfs:,file:";
When the scheme matches, the flow reaches the following code piece:
    if (!isURIAccessAllowed(user, permission, path, fs)) {
        throw new HiveAccessControlException(String.format(
                "Permission denied: user [%s] does not have [%s] privilege on [%s]",
                user, permission.name(), path));
    }
    continue;
But when paths are mounted to another fs, such as Ozone, the current path will
be an hdfs-based path while the actual target is an Ozone fs path; this
resolution happens later, inside the mount fs. At that point, fs#access is
called to check permissions. Currently, the access API is implemented only in
HDFS; once the resolution happens, the call is delegated to OzoneFS, but
OzoneFS does not implement the access API.
So the call falls back to the default abstract FileSystem implementation,
which simply checks whether the file's permission bits match the expected
mode. Here, the expected action mode for createTable is ALL, but Ozone/S3
paths will not have rwx permissions on keys, so the check fails.
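For reference, the default check behaves roughly as follows (paraphrased from
Hadoop's FileSystem#access and its checkAccessPermissions helper; a sketch,
not the exact source):

    // Inside org.apache.hadoop.fs.FileSystem (paraphrased): no authorizer is
    // consulted; the client only compares the file's permission bits with
    // the requested mode.
    public void access(Path path, FsAction mode) throws IOException {
        checkAccessPermissions(getFileStatus(path), mode);
    }

    static void checkAccessPermissions(FileStatus stat, FsAction mode)
            throws IOException {
        FsPermission perm = stat.getPermission();
        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        String user = ugi.getShortUserName();
        if (user.equals(stat.getOwner())) {
            if (perm.getUserAction().implies(mode)) {
                return;
            }
        } else if (Arrays.asList(ugi.getGroupNames()).contains(stat.getGroup())) {
            if (perm.getGroupAction().implies(mode)) {
                return;
            }
        } else if (perm.getOtherAction().implies(mode)) {
            return;
        }
        // Ozone/S3 keys do not carry full rwx bits, so an ALL (rwx) request
        // falls through to this denial.
        throw new AccessControlException(String.format(
                "Permission denied: user=%s, path=\"%s\":%s:%s:%s",
                user, stat.getPath(), stat.getOwner(), stat.getGroup(), perm));
    }

With that check failing, beeline surfaces the HiveAccessControlException: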
0: jdbc:hive2://umag-1.umag.root.xxx.site:218> CREATE EXTERNAL TABLE testtable1 (order_id BIGINT, user_id STRING, item STRING, state STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/test';
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [systest] does not have [ALL] privilege on [hdfs://ns1/test] (state=42000,code=40000)
0: jdbc:hive2://umag-1.umag.root.xxx.site:218>
My mount point on hdfs is configured as follows:
fs.viewfs.mounttable.ns1.link./test --> o3fs://bucket.volume.ozone1/test
hdfs://ns1/test will be resolved to o3fs://bucket.volume.ozone1/test, so
checkPrivileges will fail:
Caused by: org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException: Permission denied: user [systest] does not have [ALL] privilege on [hdfs://ns1/test]
    at org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.checkPrivileges(RangerHiveAuthorizer.java:810) ~[?:?]
    at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizerV2.doAuthorization(CommandAuthorizerV2.java:77) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
    at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizer.doAuthorization(CommandAuthorizer.java:58) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
    at org.apache.hadoop.hive.ql.Compiler.authorize(Compiler.java:406) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
    at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:109) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:188) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
    at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:600) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
    at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:546) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
    at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:540) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:127) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
    at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:199) ~[hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
    ... 15 more
I will add more trace details in the comments.
For more details, please see the RANGER-3058 JIRA.
(https://issues.apache.org/jira/browse/RANGER-3058)
Diffs
-----
hive-agent/src/main/java/org/apache/ranger/authorization/hive/authorizer/RangerHiveAuthorizer.java
1bec50b37
Diff: https://reviews.apache.org/r/72989/diff/1/
Testing
-------
Testing steps were done as follows:
Created a cluster with Ranger enabled.
Copied the sample-sales.csv file to the ozone /test folder.
Created a mount point from hdfs://ns1/test to o3fs://bucket.volume.ozone1/test
(the ozone volume and bucket were created before this step), by adding this to
the core-site.xml file:
fs.viewfs.mounttable.ns1.link./test = o3fs://bucket.volume.ozone1/test
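For reference, the mount resolution can be sanity-checked programmatically; a
minimal sketch, assuming the mount table entry above is available to the
client and the o3fs filesystem implementation is on the classpath (this
cluster resolves the hdfs://ns1 scheme itself through the mount table; the
sketch uses the plain viewfs scheme for simplicity):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ResolveMountCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Equivalent to the core-site.xml entry above.
            conf.set("fs.viewfs.mounttable.ns1.link./test",
                     "o3fs://bucket.volume.ozone1/test");

            Path path = new Path("viewfs://ns1/test");
            FileSystem fs = path.getFileSystem(conf);
            // ViewFileSystem#resolvePath follows the mount table, so this is
            // expected to print: o3fs://bucket.volume.ozone1/test
            System.out.println(fs.resolvePath(path));
        }
    }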
Then created an external table with the following query:
CREATE EXTERNAL TABLE testtable1 (order_id BIGINT, user_id STRING, item
STRING, state STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS
TEXTFILE LOCATION '/test'
Without this patch, creating the table fails; with this patch, it succeeds.
Also verified table creation on a normal hdfs folder path with this patch, to
ensure regular hdfs paths are not impacted; the table was created
successfully.
Thanks,
Uma Maheswara Rao Gangumalla