<name>dfs.permissions</name>
<value>true</value>
..
<name>dfs.permissions.supergroup</name>
<value>hdfs</value>

You mentioned: "I think the thrift server can use the dfs processor." - were 
you suggesting the metastore implementation in HiveMetastore should always do 
a chown user:user in create_table_core(), or selectively look at the conf, 
know it is being run as a thrift server, and chown only in that case?
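
To make the question concrete, I'm imagining something roughly like the sketch
below happening inside create_table_core() - just a sketch; the helper name and
the way the requesting user reaches the server are placeholders, not actual code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ChownSketch {
      // After the table directory has been created (as whatever user the
      // thrift server runs as), hand ownership over to the requesting user.
      // How "requestingUser" gets to the server (e.g. via a set user.name=...
      // style property) is exactly the open question - it is a placeholder here.
      public static void chownTableDir(Configuration conf, Path tableDir,
                                       String requestingUser) throws Exception {
        FileSystem fs = tableDir.getFileSystem(conf);
        // With dfs.permissions=true, setOwner only succeeds if the caller is
        // the HDFS superuser.
        fs.setOwner(tableDir, requestingUser, requestingUser);
      }
    }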

Pradeep
 
-----Original Message-----
From: Edward Capriolo [mailto:[email protected]] 
Sent: Tuesday, July 13, 2010 4:52 PM
To: [email protected]
Subject: Re: Thrift metastore server and dfs file owner

On Tue, Jul 13, 2010 at 6:20 PM, Pradeep Kamath <[email protected]> wrote:
> I tried:
> hive -e "set user.name=$USER;create table foo2 ( name string);"
>
> My warehouse table dir still got created by "root" (the user my thrift server 
> is running as)
> drwxr-xr-x   - root supergroup          0 2010-07-13 15:19 
> /user/pradeepk/hive/warehouse/foo2
>
> -----Original Message-----
> From: Edward Capriolo [mailto:[email protected]]
> Sent: Tuesday, July 13, 2010 2:47 PM
> To: [email protected]
> Subject: Re: Thrift metastore server and dfs file owner
>
> On Tue, Jul 13, 2010 at 5:04 PM, Pradeep Kamath <[email protected]> 
> wrote:
>> Hi,
>>
>>    I suspect this is true but wanted to confirm: If I start a thrift
>> metastore service as user "joe" then all internal tables created will have
>> directories under the warehouse directory owned by "joe" regardless of the
>> actual user running the create table statement - is this correct? There is
>> no way for the thrift server to create the directory as the actual user?
>> However, if the thrift service is not used and the hive client works directly
>> against the metastore database, then the directories are created by the
>> actual user - is this correct?
>>
>>
>>
>> Thanks,
>>
>> Pradeep
>
> The hive web interface does this:
>
>    queries.add("set hadoop.job.ugi=" + auth.getUser() + ","
>        + auth.getGroups()[0]);
>    queries.add("set user.name=" + auth.getUser());
>
> You should be able to accomplish the same thing using set commands
> with the Thrift Server to impersonate.
>
> Regards,
> Edward
>

You are right. That technique may only affect files created during the
map/reduce job. I think the thrift server can use the dfs processor.

hive> dfs -chown user:user /user/hive/warehouse/foo2;

Questions:
Who is your hadoop superuser?
Are you enforcing dfs permissions?

If you are enforcing permissions, only the hadoop superuser (hadoop)
will be able to chown files to other users and groups.
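
Roughly, the combined flow in one hive session would look something like the
following (user, group and table names are just examples, not tested):

hive> set hadoop.job.ugi=pradeepk,users;
hive> set user.name=pradeepk;
hive> create table foo3 (name string);
hive> dfs -chown pradeepk:users /user/pradeepk/hive/warehouse/foo3;

The chown in the last step only works if the session issuing it runs as the
hadoop superuser (or if dfs permissions are not being enforced).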
