[ https://issues.apache.org/jira/browse/HIVE-16413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Niklaus Xiao updated HIVE-16413:
--------------------------------
Attachment: HIVE-16413.patch
> Create table as select does not check ownership of the location
> ---------------------------------------------------------------
>
> Key: HIVE-16413
> URL: https://issues.apache.org/jira/browse/HIVE-16413
> Project: Hive
> Issue Type: Bug
> Components: Authorization, SQLStandardAuthorization
> Affects Versions: 1.3.0, 1.2.2, 2.1.1
> Environment: hive-1.2.2, with the following conf:
> hive.security.authorization.enabled: true
> hive.security.authorization.manager: org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
> hive.security.authenticator.manager: org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
> Reporter: Niklaus Xiao
> Assignee: Niklaus Xiao
> Fix For: 2.2.0
>
> Attachments: HIVE-16413.patch
>
>
> 1. The following statement fails because userx does not own the location:
> {code}
> create table foo(id int) location 'hdfs:///tmp/foo';
> Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: Principal [name=userx, type=USER] does not have following privileges for operation CREATETABLE [[OBJECT OWNERSHIP] on Object [type=DFS_URI, name=hdfs://hacluster/tmp/foo]] (state=42000,code=40000)
> {code}
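> For context, the [OBJECT OWNERSHIP] requirement in the error above boils down to comparing the owner of the target directory (or its nearest existing parent, when the directory does not exist yet) with the session user. Below is a minimal sketch of that comparison using only the Hadoop FileSystem API; the class and method names are illustrative and are not the authorizer's actual code, and it assumes the cluster configuration (core-site.xml/hdfs-site.xml) is on the classpath so hdfs:/// resolves to the default filesystem.
> {code}
> import java.io.IOException;
> import java.net.URI;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> // Illustration only: approximates the ownership check the SQL Standard
> // authorizer applies to a DFS_URI target. Not Hive internal code.
> public class LocationOwnershipCheck {
>
>   // Returns true if sessionUser owns the location, falling back to the
>   // nearest existing parent when the path itself does not exist yet.
>   public static boolean ownsLocation(Configuration conf, String location, String sessionUser)
>       throws IOException {
>     FileSystem fs = FileSystem.get(URI.create(location), conf);
>     Path p = new Path(location);
>     while (p != null && !fs.exists(p)) {
>       p = p.getParent();
>     }
>     if (p == null) {
>       return false;
>     }
>     FileStatus status = fs.getFileStatus(p);
>     return sessionUser.equals(status.getOwner());
>   }
>
>   public static void main(String[] args) throws IOException {
>     Configuration conf = new Configuration();
>     // userx does not own /tmp/foo (or /tmp), so this prints false,
>     // matching the CREATETABLE denial above.
>     System.out.println(ownsLocation(conf, "hdfs:///tmp/foo", "userx"));
>   }
> }
> {code}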
> 2. But when using create table as select with the same location, it succeeds:
> {code}
> 0: jdbc:hive2://189.39.151.44:21066/> create table foo location 'hdfs:///tmp/foo' as select * from xxx2;
> INFO : Number of reduce tasks is set to 0 since there's no reduce operator
> INFO : number of splits:1
> INFO : Submitting tokens for job: job_1491449632882_0094
> INFO : Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hacluster
> INFO : The url to track the job: https://189-39-151-44:26001/proxy/application_1491449632882_0094/
> INFO : Starting Job = job_1491449632882_0094, Tracking URL = https://189-39-151-44:26001/proxy/application_1491449632882_0094/
> INFO : Kill Command = /opt/hive-1.3.0/bin/..//../hadoop/bin/hadoop job -kill job_1491449632882_0094
> INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
> INFO : 2017-04-10 09:44:49,185 Stage-1 map = 0%, reduce = 0%
> INFO : 2017-04-10 09:44:57,202 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.98 sec
> INFO : MapReduce Total cumulative CPU time: 1 seconds 980 msec
> INFO : Ended Job = job_1491449632882_0094
> INFO : Stage-3 is selected by condition resolver.
> INFO : Stage-2 is filtered out by condition resolver.
> INFO : Stage-4 is filtered out by condition resolver.
> INFO : Moving data to directory hdfs://hacluster/user/hive/warehouse/.hive-staging_hive_2017-04-10_09-44-32_462_4902211653847168915-1/-ext-10001 from hdfs://hacluster/user/hive/warehouse/.hive-staging_hive_2017-04-10_09-44-32_462_4902211653847168915-1/-ext-10003
> INFO : Moving data to directory hdfs:/tmp/foo from hdfs://hacluster/user/hive/warehouse/.hive-staging_hive_2017-04-10_09-44-32_462_4902211653847168915-1/-ext-10001
> No rows affected (26.969 seconds)
> {code}
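> The gap can also be reproduced end-to-end over JDBC: both statements below should be rejected for a user that does not own hdfs:///tmp/foo, but only the plain create table currently is. This is a minimal sketch assuming a HiveServer2 endpoint with the configuration above and the hive-jdbc driver on the classpath; the JDBC URL, credentials, and the source table xxx2 are placeholders.
> {code}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.SQLException;
> import java.sql.Statement;
>
> // Reproduction sketch: expects both statements to fail with
> // HiveAccessControlException; before this patch the CTAS succeeds.
> public class CtasLocationRepro {
>   public static void main(String[] args) throws Exception {
>     Class.forName("org.apache.hive.jdbc.HiveDriver");
>     String url = "jdbc:hive2://hiveserver2-host:10000/default"; // placeholder endpoint
>     try (Connection conn = DriverManager.getConnection(url, "userx", "");
>          Statement stmt = conn.createStatement()) {
>       try {
>         stmt.execute("create table foo(id int) location 'hdfs:///tmp/foo'");
>         System.out.println("plain CREATE TABLE unexpectedly succeeded");
>       } catch (SQLException e) {
>         System.out.println("plain CREATE TABLE denied as expected: " + e.getMessage());
>       }
>       try {
>         // Should be denied for the same DFS_URI ownership reason, but is not.
>         stmt.execute("create table foo location 'hdfs:///tmp/foo' as select * from xxx2");
>         System.out.println("CTAS unexpectedly succeeded");
>       } catch (SQLException e) {
>         System.out.println("CTAS denied: " + e.getMessage());
>       }
>     }
>   }
> }
> {code}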
> 3. And the table location is hdfs://hacluster/tmp/foo:
> {code}
> 0: jdbc:hive2://189.39.151.44:21066/> desc formatted foo;
> +-------------------------------+-------------------------------------------------------+-------------+--+
> | col_name                      | data_type                                             | comment     |
> +-------------------------------+-------------------------------------------------------+-------------+--+
> | # col_name                    | data_type                                             | comment     |
> |                               | NULL                                                  | NULL        |
> | id                            | int                                                   |             |
> |                               | NULL                                                  | NULL        |
> | # Detailed Table Information  | NULL                                                  | NULL        |
> | Database:                     | default                                               | NULL        |
> | Owner:                        | userx                                                 | NULL        |
> | CreateTime:                   | Mon Apr 10 09:44:59 CST 2017                          | NULL        |
> | LastAccessTime:               | UNKNOWN                                               | NULL        |
> | Protect Mode:                 | None                                                  | NULL        |
> | Retention:                    | 0                                                     | NULL        |
> | Location:                     | hdfs://hacluster/tmp/foo                              | NULL        |
> | Table Type:                   | MANAGED_TABLE                                         | NULL        |
> | Table Parameters:             | NULL                                                  | NULL        |
> |                               | COLUMN_STATS_ACCURATE                                 | false       |
> |                               | numFiles                                              | 1           |
> |                               | numRows                                               | -1          |
> |                               | rawDataSize                                           | -1          |
> |                               | totalSize                                             | 56          |
> |                               | transient_lastDdlTime                                 | 1491788699  |
> |                               | NULL                                                  | NULL        |
> | # Storage Information         | NULL                                                  | NULL        |
> | SerDe Library:                | org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe  | NULL        |
> | InputFormat:                  | org.apache.hadoop.hive.ql.io.RCFileInputFormat        | NULL        |
> | OutputFormat:                 | org.apache.hadoop.hive.ql.io.RCFileOutputFormat       | NULL        |
> | Compressed:                   | No                                                    | NULL        |
> | Num Buckets:                  | -1                                                    | NULL        |
> | Bucket Columns:               | []                                                    | NULL        |
> | Sort Columns:                 | []                                                    | NULL        |
> | Storage Desc Params:          | NULL                                                  | NULL        |
> |                               | serialization.format                                  | 1           |
> +-------------------------------+-------------------------------------------------------+-------------+--+
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)