[ https://issues.apache.org/jira/browse/HIVE-14688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060102#comment-16060102 ]

Hive QA commented on HIVE-14688:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12844114/HIVE-14688.4.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/5736/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/5736/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-5736/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-06-22 22:15:46.299
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-5736/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-06-22 22:15:46.301
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   7819cd3..b47736f  master     -> origin/master
   3298e7f..f4a8fef  branch-2   -> origin/branch-2
+ git reset --hard HEAD
HEAD is now at 7819cd3 HIVE-16867: Extend shared scan optimizer to reuse computation from other operators (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)
+ git clean -f -d
Removing ql/src/test/queries/clientpositive/llap_smb.q
Removing ql/src/test/results/clientpositive/llap/llap_smb.q.out
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at b47736f HIVE-16930: HoS should verify the value of Kerberos principal and keytab file before adding them to spark-submit command parameters (Yibing Shi via Chaoyu Tang)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-06-22 22:15:52.164
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
error: patch failed: itests/src/test/resources/testconfiguration.properties:710
error: itests/src/test/resources/testconfiguration.properties: patch does not apply
error: patch failed: metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java:1786
error: metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: patch does not apply
error: patch failed: ql/src/test/results/clientpositive/encrypted/encryption_drop_partition.q.out:111
error: ql/src/test/results/clientpositive/encrypted/encryption_drop_partition.q.out: patch does not apply
error: patch failed: ql/src/test/results/clientpositive/encrypted/encryption_drop_table.q.out:67
error: ql/src/test/results/clientpositive/encrypted/encryption_drop_table.q.out: patch does not apply
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12844114 - PreCommit-HIVE-Build

> Hive drop call fails in presence of TDE
> ---------------------------------------
>
>                 Key: HIVE-14688
>                 URL: https://issues.apache.org/jira/browse/HIVE-14688
>             Project: Hive
>          Issue Type: Bug
>          Components: Security
>    Affects Versions: 1.2.1, 2.0.0
>            Reporter: Deepesh Khandelwal
>            Assignee: Wei Zheng
>         Attachments: HIVE-14688.1.patch, HIVE-14688.2.patch, HIVE-14688.3.patch, HIVE-14688.4.patch
>
>
> This should be committed when Hive moves to Hadoop 2.8.
> In Hadoop 2.8.0, TDE trash collection was fixed through HDFS-8831. This 
> enables drop table calls for Hive managed tables whose Hive metastore 
> warehouse directory is in an encryption zone. However, even with the 
> feature in HDFS, Hive drop table currently fails:
> {noformat}
> $ hdfs crypto -listZones
> /apps/hive/warehouse  key2 
> $ hdfs dfs -ls /apps/hive/warehouse
> Found 1 items
> drwxrwxrwt   - hdfs hdfs          0 2016-09-01 02:54 /apps/hive/warehouse/.Trash
> hive> create table abc(a string, b int);
> OK
> Time taken: 5.538 seconds
> hive> dfs -ls /apps/hive/warehouse;
> Found 2 items
> drwxrwxrwt   - hdfs   hdfs          0 2016-09-01 02:54 /apps/hive/warehouse/.Trash
> drwxrwxrwx   - deepesh hdfs          0 2016-09-01 17:15 /apps/hive/warehouse/abc
> hive> drop table if exists abc;
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Unable to drop default.abc because it is in an encryption zone and trash is enabled.  Use PURGE option to skip trash.)
> {noformat}
> The problem lies here:
> {code:title=metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java}
> private void checkTrashPurgeCombination(Path pathToData, String objectName, boolean ifPurge)
> ...
>       if (trashEnabled) {
>         try {
>           HadoopShims.HdfsEncryptionShim shim =
>             ShimLoader.getHadoopShims().createHdfsEncryptionShim(FileSystem.get(hiveConf), hiveConf);
>           if (shim.isPathEncrypted(pathToData)) {
>             throw new MetaException("Unable to drop " + objectName + " because it is in an encryption zone" +
>               " and trash is enabled.  Use PURGE option to skip trash.");
>           }
>         } catch (IOException ex) {
>           MetaException e = new MetaException(ex.getMessage());
>           e.initCause(ex);
>           throw e;
>         }
>       }
> {code}
> As we can see, this code assumes that a delete (a move to trash) cannot 
> succeed inside an encryption zone. We need to modify this logic.
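
For reference, here is a minimal standalone sketch of the direction the description suggests, assuming a Hadoop 2.8+ client (i.e. with HDFS-8831's per-encryption-zone trash) on the classpath. It is illustrative only and is not the logic of the attached patch: on 2.8+, {{FileSystem#getTrashRoot}} resolves to a {{.Trash}} directory inside the encryption zone, so the metastore could attempt the move to trash instead of rejecting the drop up front.

{code:java}
// Illustrative sketch only (not the attached HIVE-14688 patch).
// Assumes a Hadoop 2.8+ client so HDFS-8831's in-zone trash is available.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class EncryptionZoneTrashSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();             // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);
    Path tableDir = new Path("/apps/hive/warehouse/abc"); // table path from the example above

    // On Hadoop 2.8+, the trash root for a path inside an encryption zone is a
    // .Trash directory inside that same zone, so the move never crosses zones.
    Path trashRoot = fs.getTrashRoot(tableDir);
    System.out.println("Trash root for " + tableDir + ": " + trashRoot);

    // Rather than throwing up front, the drop path could simply attempt the
    // move to trash and surface an error only if the move itself fails.
    boolean moved = Trash.moveToAppropriateTrash(fs, tableDir, conf);
    System.out.println("Moved to trash: " + moved);
  }
}
{code}

If the move cannot be completed, the existing MetaException path (with the PURGE hint) could still apply, so the current escape hatch would be preserved.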



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
