[
https://issues.apache.org/jira/browse/OOZIE-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832683#comment-16832683
]
Hadoop QA commented on OOZIE-3478:
----------------------------------
Testing JIRA OOZIE-3478
Cleaning local git workspace
----------------------------
{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:red}-1 RAW_PATCH_ANALYSIS{color}
. {color:green}+1{color} the patch does not introduce any @author tags
. {color:green}+1{color} the patch does not introduce any tabs
. {color:green}+1{color} the patch does not introduce any trailing spaces
. {color:green}+1{color} the patch does not introduce any star imports
. {color:green}+1{color} the patch does not introduce any line longer than 132
. {color:red}-1{color} the patch does not add/modify any testcase
{color:green}+1 RAT{color}
. {color:green}+1{color} the patch does not seem to introduce new RAT warnings
{color:green}+1 JAVADOC{color}
. {color:green}+1{color} Javadoc generation succeeded with the patch
. {color:green}+1{color} the patch does not seem to introduce new Javadoc warning(s)
{color:green}+1 COMPILE{color}
. {color:green}+1{color} HEAD compiles
. {color:green}+1{color} patch compiles
. {color:green}+1{color} the patch does not seem to introduce new javac warnings
{color:red}-1{color} There are [7] new bugs found below threshold in total that must be fixed.
. {color:green}+1{color} There are no new bugs found in [sharelib/hive2].
. {color:green}+1{color} There are no new bugs found in [sharelib/spark].
. {color:green}+1{color} There are no new bugs found in [sharelib/oozie].
. {color:green}+1{color} There are no new bugs found in [sharelib/pig].
. {color:green}+1{color} There are no new bugs found in [sharelib/streaming].
. {color:green}+1{color} There are no new bugs found in [sharelib/hive].
. {color:green}+1{color} There are no new bugs found in [sharelib/distcp].
. {color:green}+1{color} There are no new bugs found in [sharelib/hcatalog].
. {color:green}+1{color} There are no new bugs found in [sharelib/sqoop].
. {color:green}+1{color} There are no new bugs found in [sharelib/git].
. {color:green}+1{color} There are no new bugs found in [client].
. {color:green}+1{color} There are no new bugs found in [docs].
. {color:green}+1{color} There are no new bugs found in [tools].
. {color:green}+1{color} There are no new bugs found in [fluent-job/fluent-job-api].
. {color:green}+1{color} There are no new bugs found in [server].
. {color:green}+1{color} There are no new bugs found in [webapp].
. {color:green}+1{color} There are no new bugs found in [examples].
. {color:red}-1{color} There are [7] new bugs found below threshold in [core] that must be fixed, listing only the first [5] ones.
. You can find the SpotBugs diff here (look for the red and orange ones):
core/findbugs-new.html
. The top [5] most important SpotBugs errors are:
. At BulkJPAExecutor.java:[line 207]: This use of javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query; can be vulnerable to SQL/JPQL injection
. At BulkJPAExecutor.java:[line 177]: At BulkJPAExecutor.java:[line 176]
. At BulkJPAExecutor.java:[line 206]: At BulkJPAExecutor.java:[line 200]
. This use of javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query; can be vulnerable to SQL/JPQL injection: At BulkJPAExecutor.java:[line 207]
. At BulkJPAExecutor.java:[line 112]: At BulkJPAExecutor.java:[line 128]
. (A parameterized-query sketch illustrating the createQuery finding follows this report.)
{color:green}+1 BACKWARDS_COMPATIBILITY{color}
. {color:green}+1{color} the patch does not change any JPA Entity/Column/Basic/Lob/Transient annotations
. {color:green}+1{color} the patch does not modify JPA files
{color:green}+1 TESTS{color}
. Tests run: 3170
. {color:orange}Tests failed at first run:{color}
TestPurgeXCommand#testPurgeableBundleUnpurgeableCoordinatorUnpurgeableWorkflow
. For the complete list of flaky tests, see TEST-SUMMARY-FULL files.
{color:green}+1 DISTRO{color}
. {color:green}+1{color} distro tarball builds with the patch
----------------------------
{color:red}*-1 Overall result, please check the reported -1(s)*{color}
The full output of the test-patch run is available at
. https://builds.apache.org/job/PreCommit-OOZIE-Build/1104/
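For context on the [core] SpotBugs -1 above: the flagged pattern is passing a JPQL string that embeds caller-supplied values to EntityManager.createQuery(String). The sketch below only illustrates the usual remediation, binding values as named parameters; it is not taken from BulkJPAExecutor, and the entity and field names are placeholders.
{code:java}
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

public class ParameterizedJpqlExample {

    // Illustrative only -- not BulkJPAExecutor's code. "BundleJobBean" and "appName"
    // are placeholder names; the point is that the filter value is bound with
    // setParameter instead of being concatenated into the JPQL string, which is the
    // pattern SpotBugs reports as potential SQL/JPQL injection.
    public List<?> findBundlesByAppName(EntityManager em, String appName) {
        TypedQuery<Object> query = em.createQuery(
                "select b from BundleJobBean b where b.appName = :appName", Object.class);
        query.setParameter("appName", appName); // value never becomes part of the query text
        return query.getResultList();
    }
}
{code}
Bound parameters are handed to the JPA provider separately from the query text, so a crafted filter value cannot change the structure of the query.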
> Oozie needs execute permission on the submitting users home directory
> ---------------------------------------------------------------------
>
> Key: OOZIE-3478
> URL: https://issues.apache.org/jira/browse/OOZIE-3478
> Project: Oozie
> Issue Type: Bug
> Components: action, security
> Affects Versions: 5.1.0
> Reporter: Andras Salamon
> Assignee: Andras Salamon
> Priority: Major
> Attachments: OOZIE-3478-01-wip.patch, OOZIE-3478-02.patch
>
>
> On a secure cluster the oozie user needs execute permission on the submitting
> user's home directory. The bug affects multiple actions (probably all that are
> based on JavaActionExecutor). The easiest way to reproduce it is to use a shell
> action whose {{workflow.xml}} contains the following action:
> {noformat}<action name="shell-node">
>     <shell xmlns="uri:oozie:shell-action:1.0">
>         <resource-manager>${resourceManager}</resource-manager>
>         <name-node>${nameNode}</name-node>
>         <configuration>
>             <property>
>                 <name>mapred.job.queue.name</name>
>                 <value>${queueName}</value>
>             </property>
>         </configuration>
>         <exec>test.sh</exec>
>         <file>/user/systest/test.sh#test.sh</file>
>         <capture-output/>
>     </shell>
>     <ok to="check-output"/>
>     <error to="fail"/>
> </action>
> {noformat}
> If the directory has the following permissions:
> {noformat}drwx------ - systest supergroup 0 2019-04-16 08:19 /user/systest
> {noformat}
> then running the workflow fails with error code JA009 and the following exception:
> {noformat}ozie-oozi-W@shell-node] Error starting action [shell-node].
> ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: Permission denied:
> user=oozie, access=EXECUTE, inode="/user/systest":systest:supergroup:drwx------
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:316)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:243)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:605)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1804)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1822)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:674)
> at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:112)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3060)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> ]
> org.apache.oozie.action.ActionExecutorException: JA009: Permission denied:
> user=oozie, access=EXECUTE, inode="/user/systest":systest:supergroup:drwx------
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:316)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:243)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:605)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1804)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1822)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:674)
> at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:112)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3060)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:469)
> at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:443)
> at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1103)
> at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1589)
> at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:243)
> at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:68)
> at org.apache.oozie.command.XCommand.call(XCommand.java:291)
> at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:363)
> at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:210)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=oozie, access=EXECUTE, inode="/user/systest":systest:supergroup:drwx------
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:316)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:243)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:605)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1804)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1822)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:674)
> at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:112)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3060)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)
> at org.apache.hadoop.ipc.Client.call(Client.java:1445)
> at org.apache.hadoop.ipc.Client.call(Client.java:1355)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy34.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875)
> at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy35.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1624)
> at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1495)
> at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1492)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1507)
> at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:931)
> at org.apache.oozie.util.ClasspathUtils.addToClasspathIfNotJar(ClasspathUtils.java:183)
> at org.apache.oozie.util.ClasspathUtils.setupClasspath(ClasspathUtils.java:73)
> at org.apache.oozie.action.hadoop.JavaActionExecutor.setEnvironmentVariables(JavaActionExecutor.java:1355)
> at org.apache.oozie.action.hadoop.JavaActionExecutor.createAppSubmissionContext(JavaActionExecutor.java:1175)
> at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1090)
> ... 11 more
> {noformat}
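> The bottom of the trace shows where the check is triggered: {{ClasspathUtils.addToClasspathIfNotJar}} calls {{FileSystem.resolvePath}} on the {{<file>}} entry, and resolving a path under /user/systest requires EXECUTE (traverse) permission on that directory, which the oozie user does not have. The sketch below is not the attached patch; it only illustrates one possible direction, resolving the path while impersonating the submitting user so the traverse check runs as the directory owner. The class, method, and parameter names ({{ResolveAsSubmittingUserSketch}}, {{resolveAsUser}}, {{submittingUser}}, {{nameNodeUri}}) are made up for the example.
> {code:java}
> import java.net.URI;
> import java.security.PrivilegedExceptionAction;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.security.UserGroupInformation;
>
> public class ResolveAsSubmittingUserSketch {
>
>     // Hypothetical sketch, not the OOZIE-3478 patch: resolve the <file> path while
>     // impersonating the submitting user, so the HDFS traverse (EXECUTE) check on
>     // /user/<user> is evaluated against the owner instead of the oozie service user.
>     // Assumes oozie is configured as a Hadoop proxy user (hadoop.proxyuser.oozie.*).
>     public static Path resolveAsUser(String submittingUser, URI nameNodeUri,
>             Configuration conf, Path filePath) throws Exception {
>         UserGroupInformation ugi = UserGroupInformation.createProxyUser(
>                 submittingUser, UserGroupInformation.getLoginUser());
>         return ugi.doAs((PrivilegedExceptionAction<Path>) () -> {
>             FileSystem fs = FileSystem.get(nameNodeUri, conf); // FS handle owned by the proxy UGI
>             return fs.resolvePath(filePath);                   // traverse check now runs as the owner
>         });
>     }
> }
> {code}
> With drwx------ on /user/systest such a lookup succeeds for the owner but fails for the oozie user, which matches the error above.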