[
https://issues.apache.org/jira/browse/HIVE-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16088057#comment-16088057
]
Hive QA commented on HIVE-13989:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12877381/HIVE-13989-branch-2.3.patch
{color:red}ERROR:{color} -1 due to build exiting with an error
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6040/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6040/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6040/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-07-14 21:01:32.392
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-6040/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z branch-2.3 ]]
+ [[ -d apache-github-branch-2.3-source ]]
+ [[ ! -d apache-github-branch-2.3-source/.git ]]
+ [[ ! -d apache-github-branch-2.3-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-07-14 21:01:32.395
+ cd apache-github-branch-2.3-source
+ git fetch origin
From https://github.com/apache/hive
   31cee7e..6f4c35c  branch-2.3 -> origin/branch-2.3
   4514ec9..d3ba76d  master     -> origin/master
 * [new tag]         release-2.3.0-rc1 -> release-2.3.0-rc1
+ git reset --hard HEAD
HEAD is now at 31cee7e HIVE-15144: JSON.org license is now CatX (Owen O'Malley, reviewed by Alan Gates)
+ git clean -f -d
+ git checkout branch-2.3
Already on 'branch-2.3'
Your branch is behind 'origin/branch-2.3' by 2 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/branch-2.3
HEAD is now at 6f4c35c Release Notes
+ git merge --ff-only origin/branch-2.3
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-07-14 21:01:36.454
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
Going to apply patch with: patch -p1
patching file hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FileOutputCommitterContainer.java
patching file itests/hive-unit-hadoop2/src/test/java/org/apache/hadoop/hive/ql/security/TestExtendedAcls.java
patching file itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/FolderPermissionBase.java
patching file ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
patching file ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
patching file shims/common/src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java
patching file shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven
[ERROR] Failed to execute goal on project spark-client: Could not resolve dependencies for project org.apache.hive:spark-client:jar:2.3.0: Could not find artifact org.apache.hive:hive-storage-api:jar:2.4.0 in datanucleus (http://www.datanucleus.org/downloads/maven2) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :spark-client
+ exit 1
'
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12877381 - PreCommit-HIVE-Build
> Extended ACLs are not handled according to specification
> --------------------------------------------------------
>
> Key: HIVE-13989
> URL: https://issues.apache.org/jira/browse/HIVE-13989
> Project: Hive
> Issue Type: Bug
> Components: HCatalog
> Affects Versions: 1.2.1, 2.0.0
> Reporter: Chris Drome
> Assignee: Chris Drome
> Attachments: HIVE-13989.1-branch-1.patch, HIVE-13989.1.patch,
> HIVE-13989-branch-1.patch, HIVE-13989-branch-2.3.patch
>
>
> Hive takes two approaches to working with extended ACLs, depending on whether
> data is being produced via a Hive query or via the HCatalog APIs. A Hive query
> runs an FsShell command to recursively set the extended ACLs for a directory
> sub-tree, while the HCatalog APIs attempt to build up the directory sub-tree
> programmatically and run code to set the ACLs to match the parent directory.
> Some incorrect assumptions were made when implementing extended ACL support.
> Refer to https://issues.apache.org/jira/browse/HDFS-4685 for the design
> documents of extended ACLs in HDFS. These documents model the implementation
> after the POSIX ACL implementation on Linux, described at
> http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html.
> The code for setting extended ACLs via HCatalog APIs is found in
> HdfsUtils.java:
> {code}
> if (aclEnabled) {
>   aclStatus = sourceStatus.getAclStatus();
>   if (aclStatus != null) {
>     LOG.trace(aclStatus.toString());
>     aclEntries = aclStatus.getEntries();
>     removeBaseAclEntries(aclEntries);
>     // the ACL APIs also expect the traditional user/group/other permissions in the form of ACL entries
>     aclEntries.add(newAclEntry(AclEntryScope.ACCESS, AclEntryType.USER, sourcePerm.getUserAction()));
>     aclEntries.add(newAclEntry(AclEntryScope.ACCESS, AclEntryType.GROUP, sourcePerm.getGroupAction()));
>     aclEntries.add(newAclEntry(AclEntryScope.ACCESS, AclEntryType.OTHER, sourcePerm.getOtherAction()));
>   }
> }
> {code}
> We found that DEFAULT extended ACL rules were not being inherited properly by
> the directory sub-tree, so the above code is incomplete: it effectively drops
> the DEFAULT rules. The second problem is the call to
> {{sourcePerm.getGroupAction()}}, which is incorrect when extended ACLs are in
> use. With extended ACLs, the GROUP permission bits are replaced by the
> extended ACL mask, so the above code applies the wrong permissions to the
> GROUP. Instead, the correct GROUP permissions must be pulled from the
> AclEntry list returned by {{getAclStatus().getEntries()}}. See the
> implementation of the new method {{getDefaultAclEntries}} for details.
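The mask semantics can be illustrated with a small self-contained sketch. The enum and class below are plain-Java stand-ins for Hadoop's {{AclEntryScope}}/{{AclEntryType}}/{{AclEntry}} (the real classes live in {{org.apache.hadoop.fs.permission}}), not the Hive patch itself: when an extended ACL is present, the classic group permission bits carry the mask, and the owning group's true permission is the unnamed ACCESS/GROUP entry.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Minimal stand-ins for org.apache.hadoop.fs.permission.AclEntry and friends;
// illustrative only, not the actual HdfsUtils code.
public class GroupPermDemo {
    enum Scope { ACCESS, DEFAULT }
    enum Type { USER, GROUP, MASK, OTHER }

    static final class Entry {
        final Scope scope; final Type type; final String name; final String perm;
        Entry(Scope scope, Type type, String name, String perm) {
            this.scope = scope; this.type = type; this.name = name; this.perm = perm;
        }
    }

    // With an extended ACL, the classic permission's group bits report the MASK.
    // The owning group's real permission is the unnamed ACCESS/GROUP entry.
    static String effectiveGroupPerm(List<Entry> entries, String classicGroupBits) {
        Optional<Entry> group = entries.stream()
            .filter(e -> e.scope == Scope.ACCESS && e.type == Type.GROUP && e.name == null)
            .findFirst();
        // No extended ACL entries: the classic bits really are the group permission.
        return group.map(e -> e.perm).orElse(classicGroupBits);
    }

    public static void main(String[] args) {
        List<Entry> acl = Arrays.asList(
            new Entry(Scope.ACCESS, Type.USER, "hdfs", "rwx"),
            new Entry(Scope.ACCESS, Type.GROUP, null, "r-x"),
            new Entry(Scope.ACCESS, Type.MASK, null, "rwx"));
        // ls shows rwx in the group slot (the mask), but the group really has r-x:
        System.out.println(effectiveGroupPerm(acl, "rwx"));  // prints "r-x"
    }
}
```

This is why calling {{sourcePerm.getGroupAction()}} copies the mask, not the group permission, onto the target.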
> Similar issues exist with the HCatalog API. None of the APIs account for
> setting extended ACLs on the directory sub-tree. The changes to the HCatalog
> API allow the extended ACLs to be passed into the required methods, similar to
> how basic permissions are passed in. When building the directory sub-tree, the
> extended ACLs of the table directory are inherited by all sub-directories,
> including the DEFAULT rules.
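The inheritance rule at work here is the POSIX one: a new child directory receives the parent's DEFAULT entries both as its own ACCESS entries and as its own DEFAULT entries. A hedged, self-contained sketch with stand-in types (not Hive's {{HdfsUtils}}) shows why dropping the DEFAULT entries breaks inheritance one level down:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Stand-in ACL model (not Hadoop's classes); illustrates POSIX default-ACL
// inheritance for directory sub-trees.
public class DefaultAclDemo {
    enum Scope { ACCESS, DEFAULT }
    enum Type { USER, GROUP, MASK, OTHER }

    static final class Entry {
        final Scope scope; final Type type; final String name; final String perm;
        Entry(Scope scope, Type type, String name, String perm) {
            this.scope = scope; this.type = type; this.name = name; this.perm = perm;
        }
    }

    // Each DEFAULT entry on the parent becomes both an ACCESS entry and a
    // DEFAULT entry on a new child directory. Keeping only ACCESS entries
    // (as the buggy code effectively did) stops inheritance at the children.
    static List<Entry> childDirectorySpec(List<Entry> parentEntries) {
        List<Entry> spec = new ArrayList<>();
        for (Entry e : parentEntries) {
            if (e.scope == Scope.DEFAULT) {
                spec.add(new Entry(Scope.ACCESS, e.type, e.name, e.perm));
                spec.add(e);  // carry the DEFAULT rule forward for grandchildren
            }
        }
        return spec;
    }

    public static void main(String[] args) {
        List<Entry> parent = Arrays.asList(
            new Entry(Scope.DEFAULT, Type.USER, "hdfs", "rwx"),
            new Entry(Scope.DEFAULT, Type.GROUP, null, "r-x"));
        // Child gets 2 ACCESS + 2 DEFAULT entries, so grandchildren inherit too.
        System.out.println(childDirectorySpec(parent).size());  // prints 4
    }
}
```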
> Replicating the problem:
> Create a table to write data into (I will use acl_test as the destination and
> words_text as the source) and set the ACLs as follows:
> {noformat}
> $ hdfs dfs -setfacl -m default:user::rwx,default:group::r-x,default:mask::rwx,default:user:hdfs:rwx,group::r-x,user:hdfs:rwx /user/cdrome/hive/acl_test
> $ hdfs dfs -ls -d /user/cdrome/hive/acl_test
> drwxrwx---+  - cdrome hdfs          0 2016-07-13 20:36 /user/cdrome/hive/acl_test
> $ hdfs dfs -getfacl -R /user/cdrome/hive/acl_test
> # file: /user/cdrome/hive/acl_test
> # owner: cdrome
> # group: hdfs
> user::rwx
> user:hdfs:rwx
> group::r-x
> mask::rwx
> other::---
> default:user::rwx
> default:user:hdfs:rwx
> default:group::r-x
> default:mask::rwx
> default:other::---
> {noformat}
> Note that after setting the ACLs, {{ls}} shows the basic GROUP permission as
> {{rwx}}: with an extended ACL, the group slot of the classic permission bits
> displays the mask. The ACLs explicitly set the DEFAULT rules and a rule
> specifically for the {{hdfs}} user.
> Run the following query to populate the table:
> {noformat}
> insert into acl_test partition (dt='a', ds='b') select a, b from words_text
> where dt = 'c';
> {noformat}
> Note that words_text only has a single partition key.
> Now examine the ACLs for the resulting directories:
> {noformat}
> $ hdfs dfs -getfacl -R /user/cdrome/hive/acl_test
> # file: /user/cdrome/hive/acl_test
> # owner: cdrome
> # group: hdfs
> user::rwx
> user:hdfs:rwx
> group::r-x
> mask::rwx
> other::---
> default:user::rwx
> default:user:hdfs:rwx
> default:group::r-x
> default:mask::rwx
> default:other::---
> # file: /user/cdrome/hive/acl_test/dt=a
> # owner: cdrome
> # group: hdfs
> user::rwx
> user:hdfs:rwx
> group::rwx
> mask::rwx
> other::---
> default:user::rwx
> default:user:hdfs:rwx
> default:group::rwx
> default:mask::rwx
> default:other::---
> # file: /user/cdrome/hive/acl_test/dt=a/ds=b
> # owner: cdrome
> # group: hdfs
> user::rwx
> user:hdfs:rwx
> group::rwx
> mask::rwx
> other::---
> default:user::rwx
> default:user:hdfs:rwx
> default:group::rwx
> default:mask::rwx
> default:other::---
> # file: /user/cdrome/hive/acl_test/dt=a/ds=b/000000_0.deflate
> # owner: cdrome
> # group: hdfs
> user::rwx
> user:hdfs:rwx
> group::rwx
> mask::rwx
> other::---
> {noformat}
> Note that the GROUP permission is now erroneously set to {{rwx}} because of
> the code mentioned above; it is set to the same value as the ACL mask.
> The code changes for the HCatalog APIs are analogous to the
> {{applyGroupAndPerms}} method, which ensures that all new directories are
> created with the same permissions as the table. This patch ensures that
> changes to intermediate directories are not propagated; instead, the table's
> ACLs are applied to all newly created directories.
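The intended behavior can be sketched as follows. This is a hypothetical, self-contained model (paths in a map standing in for HDFS calls such as {{FileSystem.setAcl}}), not the actual {{applyGroupAndPerms}} implementation: every directory level created for a new partition receives the table directory's ACL spec, rather than inheriting from its immediate parent.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: apply the table directory's ACL spec to each newly
// created partition directory. The "acls" map stands in for the filesystem.
public class ApplyTableAclsDemo {
    // e.g. tableDir="/t", partitionPath="dt=a/ds=b" -> ["/t/dt=a", "/t/dt=a/ds=b"]
    static List<String> newDirsUnder(String tableDir, String partitionPath) {
        List<String> dirs = new ArrayList<>();
        String current = tableDir;
        for (String part : partitionPath.split("/")) {
            current = current + "/" + part;
            dirs.add(current);
        }
        return dirs;
    }

    static void applyTableAcls(Map<String, String> acls, String tableDir, String partitionPath) {
        String tableSpec = acls.get(tableDir);      // the spec every new dir should get
        for (String dir : newDirsUnder(tableDir, partitionPath)) {
            acls.put(dir, tableSpec);               // stand-in for FileSystem.setAcl(...)
        }
    }

    public static void main(String[] args) {
        Map<String, String> acls = new LinkedHashMap<>();
        acls.put("/user/cdrome/hive/acl_test", "user:hdfs:rwx,group::r-x,default:user:hdfs:rwx");
        applyTableAcls(acls, "/user/cdrome/hive/acl_test", "dt=a/ds=b");
        // Both dt=a and dt=a/ds=b now carry the table's spec.
        System.out.println(acls.get("/user/cdrome/hive/acl_test/dt=a/ds=b"));
    }
}
```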
> I would also like to call out that older versions of HDFS which support ACLs
> had a number of issues in addition to those mentioned here, which appear to
> have been addressed in later versions of Hadoop. This patch was originally
> written to work with Hadoop 2.6; we are now using Hadoop 2.7, which appears
> to have fixed some of them. However, I think this patch is still required for
> correct behavior of ACLs with Hive/HCatalog.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)