Thanks John for an interesting analogy.
Can you please also elaborate on the use case for this feature? It seems that 
once this feature lands, the best practice would be to keep no_drop mode ON at 
all times to avoid accidental drops, even by the creator.

Ankita

________________________________
From: John Sichi [mailto:[email protected]]
Sent: Wednesday, August 04, 2010 11:15 AM
To: <[email protected]>
Subject: Re: how to prevent user from dropping table created by another user

Slight clarification:  the no_drop support Siying is adding will apply to all 
users (even the creator of the table).

The analogy is as follows:  no_drop mode is like the safety catch on a gun (it 
prevents the gun from being fired by anyone, even the person holding it, until 
explicitly taken off), whereas HDFS or other permission is like the key to the 
gun cabinet.
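The distinction between the two protections might be sketched in a toy Python model (purely illustrative, not Hive code; the table, owner, and error messages are made-up):

```python
# Toy model of the two independent protections described above: the no_drop
# "safety catch" blocks everyone, including the owner, until it is explicitly
# disabled, while an ordinary permission check only blocks non-owners.

class Table:
    def __init__(self, owner, no_drop=False):
        self.owner = owner
        self.no_drop = no_drop

def drop_table(table, user):
    # Safety catch: refuse the drop for ANY user while no_drop is enabled.
    if table.no_drop:
        raise PermissionError("table is protected by no_drop")
    # Cabinet key: a permission check that only stops non-owners.
    if user != table.owner:
        raise PermissionError("only the owner may drop this table")
    return "dropped"

t = Table(owner="ankita", no_drop=True)
```

With no_drop enabled, even `drop_table(t, "ankita")` raises; only after explicitly clearing the flag does the owner's drop go through.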

JVS

On Aug 4, 2010, at 10:36 AM, Ning Zhang wrote:


Siying is working on a JIRA (HIVE-1413) addressing exactly this issue 
(non-droppable tables).

On Aug 4, 2010, at 10:25 AM, Bakshi, Ankita wrote:



Thanks Carl for the pointers.
The good news is that a user can recover from the failure with the following 
steps:
1. For tables without partitions, it is as simple as recreating the table 
definition.
2. For partitioned tables, it involves recreating the table definition and then 
recreating the partitions.
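The two recovery paths above could be sketched as a small helper that emits the HiveQL a user would re-run (an illustrative sketch, not a Hive API; the table name, columns, partition key, and partition values are made-up examples):

```python
# Illustrative sketch: generate the HiveQL statements needed to recover after
# the metadata was dropped but the HDFS data survived. Not part of Hive itself.

def recovery_ddl(table, columns, partition_key=None, partition_values=()):
    cols = ", ".join(f"{name} {typ}" for name, typ in columns)
    stmts = []
    if partition_key:
        # Partitioned table: recreate the definition, then each partition.
        stmts.append(f"CREATE TABLE {table} ({cols}) "
                     f"PARTITIONED BY ({partition_key} STRING);")
        for value in partition_values:
            stmts.append(f"ALTER TABLE {table} "
                         f"ADD PARTITION ({partition_key}='{value}');")
    else:
        # Unpartitioned table: recreating the definition is enough.
        stmts.append(f"CREATE TABLE {table} ({cols});")
    return stmts

ddl = recovery_ddl("clicks", [("url", "STRING"), ("hits", "INT")],
                   partition_key="ds",
                   partition_values=["2010-08-01", "2010-08-02"])
```

For the hypothetical `clicks` table this yields one CREATE TABLE followed by one ADD PARTITION per surviving partition directory.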

Thanks,
Ankita

________________________________
From: Carl Steinbach [mailto:[email protected]]
Sent: Tuesday, August 03, 2010 4:17 PM
To: [email protected]<mailto:[email protected]>
Subject: Re: how to prevent user from dropping table created by another user

Hi Ankita,

> We wanted to prevent a user from dropping a table created by another user. By
> changing the HDFS permissions of the table directory, we were able to prevent
> the table from being deleted from HDFS. But unfortunately, Hive deletes the
> metadata related to the table from the MySQL metastore.

The strategy that Hive currently employs for operations like this is to first 
attempt to update the data in the metastore DB, and only if that succeeds does 
it then attempt to make the corresponding changes in HDFS. Eventually we hope 
to build authorization facilities into the metastore (see 
https://issues.apache.org/jira/browse/HIVE-78).
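That ordering explains the behavior Ankita observed: the metastore row is removed first, so an HDFS permission failure still leaves the metadata gone. A toy sketch (not the real HiveMetaStore code; the table name and path are made-up):

```python
# Toy sketch of the metastore-first drop ordering described above. Read-only
# HDFS permissions stop step 2, but step 1 has already deleted the metadata.

class TableStore:
    def __init__(self):
        self.metastore = {"clicks": "schema"}        # metadata rows
        self.hdfs = {"/user/hive/warehouse/clicks"}  # data directories
        self.hdfs_writable = False  # simulate the permission trick on the dir

    def drop_table(self, name, path):
        del self.metastore[name]      # 1. metadata is deleted first
        if not self.hdfs_writable:    # 2. HDFS delete then fails on permissions
            raise PermissionError(path)
        self.hdfs.discard(path)

store = TableStore()
try:
    store.drop_table("clicks", "/user/hive/warehouse/clicks")
except PermissionError:
    pass
# Result: the data directory survived, but the metadata row is already lost.
```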

> I am wondering if anyone has any pointers to this problem.
> It would also help if someone could point me to the Hive code where it does
> the deletion. It seems that we just have to exit if HDFS throws an error, and
> should not delete the metadata in this case.

The code you're looking for is the drop_table_core() method in 
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java.
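The behavior Ankita asks for (abort on an HDFS error before touching the metadata) would amount to reversing the order of the two steps. A toy sketch, again illustrative only and not the actual change made in HIVE-1413:

```python
# Illustrative variant: attempt the HDFS delete first, and only remove the
# metastore entry once it succeeds, so a permission error leaves both the
# data and the metadata intact.

def drop_table_data_first(metastore, hdfs, name, path, hdfs_writable):
    if not hdfs_writable:
        raise PermissionError(path)  # abort before touching metadata
    hdfs.discard(path)
    del metastore[name]

metastore = {"clicks": "schema"}
hdfs = {"/user/hive/warehouse/clicks"}
try:
    drop_table_data_first(metastore, hdfs, "clicks",
                          "/user/hive/warehouse/clicks", hdfs_writable=False)
except PermissionError:
    pass
# Result: both the metadata row and the data directory are still there.
```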

Thanks.

Carl



