[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-07 Thread Andras Salamon (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945739#comment-16945739
 ] 

Andras Salamon commented on OOZIE-3529:
---

[~dionusos] Yes, the failures are unrelated; hopefully OOZIE-3445 will 
eliminate them soon.

Thanks for the fixes and the follow-up jiras, +1, committed to master.

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, 
> OOZIE-3529.006.patch, OOZIE-3529.007.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customers who use the S3 file system as a secondary one experience the 
> following error when Oozie tries to submit the YARN application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at 

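The trace above shows where the local filesystem enters the picture: S3ABlockOutputStream stages each upload block through LocalDirAllocator.createTmpFileForWrite, which in turn needs FileSystem.getLocal, and that call is rejected in this deployment. The snippet below is only an illustrative sketch of the standard s3a client buffering options involved (the bucket name and staging directory are placeholders, hadoop-aws and valid AWS credentials are assumed on the classpath); it is not the Oozie-side fix contained in the attached patches.

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3ABufferingSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // By default s3a uses the "disk" buffer: blocks are staged under
        // fs.s3a.buffer.dir before upload, which is why the trace above goes
        // through LocalDirAllocator and FileSystem.getLocal.
        conf.set("fs.s3a.buffer.dir", "/tmp/s3a-staging");   // placeholder local directory

        // In-memory buffering avoids the local-filesystem dependency entirely,
        // at the cost of heap / direct memory per active block upload.
        conf.set("fs.s3a.fast.upload.buffer", "bytebuffer"); // "disk" (default), "array" or "bytebuffer"

        // Placeholder bucket; newInstance avoids closing a cached FileSystem.
        try (FileSystem fs = FileSystem.newInstance(URI.create("s3a://my-bucket/"), conf)) {
            System.out.println("Initialised " + fs.getUri() + " with buffer "
                    + conf.get("fs.s3a.fast.upload.buffer"));
        }
    }
}
{code}

With an in-memory buffer the s3a block factory no longer touches LocalDirAllocator, so the local-filesystem call shown in the trace is never made on that path.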
[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945738#comment-16945738
 ] 

ASF subversion and git services commented on OOZIE-3529:


Commit 21794c3fbd0db3d7c1c8836ace758176629e73eb in oozie's branch 
refs/heads/master from Andras Salamon
[ https://gitbox.apache.org/repos/asf?p=oozie.git;h=21794c3 ]

OOZIE-3529 Oozie not supported for s3 as filesystem (dionusos via asalamon74)


> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, 
> OOZIE-3529.006.patch, OOZIE-3529.007.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customers who use the S3 file system as a secondary one experience the 
> following error when Oozie tries to submit the YARN application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at 

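For context, the failing scenario in the description is a workflow whose application path lives on s3a rather than HDFS; the attached job.properties and workflow.xml are the actual reproduction. Below is only a hypothetical submission sketch using the public OozieClient API (the server URL, bucket path and user are placeholders); the UnsupportedOperationException above is raised later, server-side, when Oozie starts the first action of such a workflow.

{code:java}
import java.util.Properties;

import org.apache.oozie.client.OozieClient;

public class SubmitWorkflowFromS3Sketch {
    public static void main(String[] args) throws Exception {
        // Placeholder Oozie server URL.
        OozieClient client = new OozieClient("http://oozie-host:11000/oozie");

        Properties conf = client.createConfiguration();
        // Workflow application stored on s3a instead of HDFS (placeholder bucket/path).
        conf.setProperty(OozieClient.APP_PATH, "s3a://my-bucket/apps/demo-wf");
        conf.setProperty(OozieClient.USER_NAME, "hrt_qa");

        // run() submits and starts the workflow; the action-start failure shown in the
        // description happens afterwards on the server, not in this client call.
        String jobId = client.run(conf);
        System.out.println("Submitted workflow job " + jobId);
    }
}
{code}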
[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-07 Thread Denes Bodo (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945715#comment-16945715
 ] 

Denes Bodo commented on OOZIE-3529:
---

Thanks [~asalamon74] for your review comments. I think all the failures are 
completely unrelated to my change.

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, 
> OOZIE-3529.006.patch, OOZIE-3529.007.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customers who use the S3 file system as a secondary one experience the 
> following error when Oozie tries to submit the YARN application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
>   at 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945651#comment-16945651
 ] 

Hadoop QA commented on OOZIE-3529:
--


Testing JIRA OOZIE-3529

Cleaning local git workspace



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:green}+1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any star imports
.{color:green}+1{color} the patch does not introduce any line longer than 
132
.{color:green}+1{color} the patch adds/modifies 2 testcase(s)
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} Javadoc generation succeeded with the patch
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:red}-1{color} There are [19] new bugs found below threshold in total 
that must be fixed.
.{color:green}+1{color} There are no new bugs found in 
[fluent-job/fluent-job-api].
.{color:green}+1{color} There are no new bugs found in [docs].
.{color:red}-1{color} There are [4] new bugs found below threshold in 
[core] that must be fixed.
.You can find the SpotBugs diff here (look for the red and orange ones): 
core/findbugs-new.html
.The most important SpotBugs errors are:
.At BulkJPAExecutor.java:[line 206]: This use of 
javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query;
 can be vulnerable to SQL/JPQL injection
.At BulkJPAExecutor.java:[line 176]: At BulkJPAExecutor.java:[line 175]
.At BulkJPAExecutor.java:[line 205]: At BulkJPAExecutor.java:[line 199]
.java/io/File.<init>(Ljava/lang/String;Ljava/lang/String;)V reads a 
file whose location might be specified by user input: At 
BulkJPAExecutor.java:[line 206]
.At AuthorizationService.java:[line 189]: At 
AuthorizationService.java:[line 192]
.{color:green}+1{color} There are no new bugs found in [sharelib/spark].
.{color:green}+1{color} There are no new bugs found in [sharelib/git].
.{color:green}+1{color} There are no new bugs found in [sharelib/sqoop].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive2].
.{color:green}+1{color} There are no new bugs found in [sharelib/streaming].
.{color:green}+1{color} There are no new bugs found in [sharelib/pig].
.{color:green}+1{color} There are no new bugs found in [sharelib/oozie].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive].
.{color:green}+1{color} There are no new bugs found in [sharelib/hcatalog].
.{color:green}+1{color} There are no new bugs found in [sharelib/distcp].
.{color:red}-1{color} There are [15] new bugs found below threshold in 
[tools] that must be fixed, listing only the first [5] ones.
.You can find the SpotBugs diff here (look for the red and orange ones): 
tools/findbugs-new.html
.The top [5] most important SpotBugs errors are:
.At OozieDBCLI.java:[line 584]: This use of 
java/sql/Statement.executeUpdate(Ljava/lang/String;)I can be vulnerable to SQL 
injection
.At OozieDBCLI.java:[line 574]: At OozieDBCLI.java:[line 573]
.At OozieDBCLI.java:[line 577]: At OozieDBCLI.java:[line 575]
.At OozieDBCLI.java:[line 579]: At OozieDBCLI.java:[line 578]
.At OozieDBCLI.java:[line 584]: At OozieDBCLI.java:[line 581]
.{color:green}+1{color} There are no new bugs found in [server].
.{color:green}+1{color} There are no new bugs found in [client].
.{color:green}+1{color} There are no new bugs found in [examples].
.{color:green}+1{color} There are no new bugs found in [webapp].
{color:green}+1 BACKWARDS_COMPATIBILITY{color}
.{color:green}+1{color} the patch does not change any JPA 
Entity/Column/Basic/Lob/Transient annotations
.{color:green}+1{color} the patch does not modify JPA files
{color:green}+1 TESTS{color}
.Tests run: 3190
{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 
{color:green}+1 MODERNIZER{color}


{color:red}*-1 Overall result, please check the reported -1(s)*{color}


The full output of the test-patch run is available at

. https://builds.apache.org/job/PreCommit-OOZIE-Build/1234/
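
The SpotBugs -1 above comes from pre-existing findings in BulkJPAExecutor and OozieDBCLI which, per the other comments, are unrelated to this patch. As a generic illustration of the flagged pattern only (the entity and field names are placeholders, not the actual Oozie code), the difference between concatenating user input into a JPQL string and binding it as a parameter looks like this:

{code:java}
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

public class JpqlParameterSketch {

    // The pattern SpotBugs flags: user-controlled input concatenated into the JPQL string.
    static List<String> appNamesForUserUnsafe(EntityManager em, String user) {
        String jpql = "select w.appName from WorkflowJobBean w where w.user = '" + user + "'";
        return em.createQuery(jpql, String.class).getResultList();
    }

    // Equivalent query with the value bound as a named parameter instead.
    static List<String> appNamesForUserSafe(EntityManager em, String user) {
        TypedQuery<String> query = em.createQuery(
                "select w.appName from WorkflowJobBean w where w.user = :user", String.class);
        query.setParameter("user", user);
        return query.getResultList();
    }
}
{code}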



> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16945604#comment-16945604
 ] 

Hadoop QA commented on OOZIE-3529:
--

PreCommit-OOZIE-Build started


> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, 
> OOZIE-3529.006.patch, OOZIE-3529.007.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customers who use the S3 file system as a secondary one experience the 
> following error when Oozie tries to submit the YARN application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
>   at 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944361#comment-16944361
 ] 

Hadoop QA commented on OOZIE-3529:
--


Testing JIRA OOZIE-3529

Cleaning local git workspace



{color:red}-1{color} Patch failed to apply to head of branch




> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, 
> OOZIE-3529.006.patch, id.pig, job.properties, workflow.xml
>
>
> Many customers who use the S3 file system as a secondary one experience the 
> following error when Oozie tries to submit the YARN application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
>

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944350#comment-16944350
 ] 

Hadoop QA commented on OOZIE-3529:
--

PreCommit-OOZIE-Build started


> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, 
> OOZIE-3529.006.patch, id.pig, job.properties, workflow.xml
>
>
> Many customers who use the S3 file system as a secondary one experience the 
> following error when Oozie tries to submit the YARN application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1026)
>   

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-01 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941634#comment-16941634
 ] 

Hadoop QA commented on OOZIE-3529:
--


Testing JIRA OOZIE-3529

Cleaning local git workspace



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:green}+1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any star imports
.{color:green}+1{color} the patch does not introduce any line longer than 
132
.{color:green}+1{color} the patch adds/modifies 2 testcase(s)
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} Javadoc generation succeeded with the patch
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:red}-1{color} There are [19] new bugs found below threshold in total 
that must be fixed.
.{color:green}+1{color} There are no new bugs found in 
[fluent-job/fluent-job-api].
.{color:green}+1{color} There are no new bugs found in [docs].
.{color:red}-1{color} There are [4] new bugs found below threshold in 
[core] that must be fixed.
.You can find the SpotBugs diff here (look for the red and orange ones): 
core/findbugs-new.html
.The most important SpotBugs errors are:
.At BulkJPAExecutor.java:[line 206]: This use of 
javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query;
 can be vulnerable to SQL/JPQL injection
.At BulkJPAExecutor.java:[line 176]: At BulkJPAExecutor.java:[line 175]
.At BulkJPAExecutor.java:[line 205]: At BulkJPAExecutor.java:[line 199]
.java/io/File.<init>(Ljava/lang/String;Ljava/lang/String;)V reads a 
file whose location might be specified by user input: At 
BulkJPAExecutor.java:[line 206]
.At AuthorizationService.java:[line 189]: At 
AuthorizationService.java:[line 192]
.{color:green}+1{color} There are no new bugs found in [sharelib/spark].
.{color:green}+1{color} There are no new bugs found in [sharelib/git].
.{color:green}+1{color} There are no new bugs found in [sharelib/sqoop].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive2].
.{color:green}+1{color} There are no new bugs found in [sharelib/streaming].
.{color:green}+1{color} There are no new bugs found in [sharelib/pig].
.{color:green}+1{color} There are no new bugs found in [sharelib/oozie].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive].
.{color:green}+1{color} There are no new bugs found in [sharelib/hcatalog].
.{color:green}+1{color} There are no new bugs found in [sharelib/distcp].
.{color:red}-1{color} There are [15] new bugs found below threshold in 
[tools] that must be fixed, listing only the first [5] ones.
.You can find the SpotBugs diff here (look for the red and orange ones): 
tools/findbugs-new.html
.The top [5] most important SpotBugs errors are:
.At OozieDBCLI.java:[line 584]: This use of 
java/sql/Statement.executeUpdate(Ljava/lang/String;)I can be vulnerable to SQL 
injection
.At OozieDBCLI.java:[line 574]: At OozieDBCLI.java:[line 573]
.At OozieDBCLI.java:[line 577]: At OozieDBCLI.java:[line 575]
.At OozieDBCLI.java:[line 579]: At OozieDBCLI.java:[line 578]
.At OozieDBCLI.java:[line 584]: At OozieDBCLI.java:[line 581]
.{color:green}+1{color} There are no new bugs found in [server].
.{color:green}+1{color} There are no new bugs found in [client].
.{color:green}+1{color} There are no new bugs found in [examples].
.{color:green}+1{color} There are no new bugs found in [webapp].
{color:green}+1 BACKWARDS_COMPATIBILITY{color}
.{color:green}+1{color} the patch does not change any JPA 
Entity/Column/Basic/Lob/Transient annotations
.{color:green}+1{color} the patch does not modify JPA files
{color:green}+1 TESTS{color}
.Tests run: 3190
{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 
{color:green}+1 MODERNIZER{color}


{color:red}*-1 Overall result, please check the reported -1(s)*{color}


The full output of the test-patch run is available at

. https://builds.apache.org/job/PreCommit-OOZIE-Build/1232/



> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-10-01 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941570#comment-16941570
 ] 

Hadoop QA commented on OOZIE-3529:
--

PreCommit-OOZIE-Build started


> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, OOZIE-3529.005.patch, id.pig, 
> job.properties, workflow.xml
>
>
> Many customers who use the S3 file system as a secondary one experience the 
> following error when Oozie tries to submit the YARN application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1026)
>   at 
> 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-09-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938755#comment-16938755
 ] 

Hadoop QA commented on OOZIE-3529:
--


Testing JIRA OOZIE-3529

Cleaning local git workspace



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:green}+1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any star imports
.{color:green}+1{color} the patch does not introduce any line longer than 
132
.{color:green}+1{color} the patch adds/modifies 2 testcase(s)
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} Javadoc generation succeeded with the patch
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:red}-1{color} There are [21] new bugs found below threshold in total 
that must be fixed.
.{color:green}+1{color} There are no new bugs found in 
[fluent-job/fluent-job-api].
.{color:green}+1{color} There are no new bugs found in [docs].
.{color:red}-1{color} There are [6] new bugs found below threshold in 
[core] that must be fixed, listing only the first [5] ones.
.You can find the SpotBugs diff here (look for the red and orange ones): 
core/findbugs-new.html
.The top [5] most important SpotBugs errors are:
.At BulkJPAExecutor.java:[line 206]: This use of 
javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query;
 can be vulnerable to SQL/JPQL injection
.At BulkJPAExecutor.java:[line 176]: At BulkJPAExecutor.java:[line 175]
.At BulkJPAExecutor.java:[line 205]: At BulkJPAExecutor.java:[line 199]
.This use of 
javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query;
 can be vulnerable to SQL/JPQL injection: At BulkJPAExecutor.java:[line 206]
.At BulkJPAExecutor.java:[line 111]: At BulkJPAExecutor.java:[line 127]
.{color:green}+1{color} There are no new bugs found in [sharelib/spark].
.{color:green}+1{color} There are no new bugs found in [sharelib/git].
.{color:green}+1{color} There are no new bugs found in [sharelib/sqoop].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive2].
.{color:green}+1{color} There are no new bugs found in [sharelib/streaming].
.{color:green}+1{color} There are no new bugs found in [sharelib/pig].
.{color:green}+1{color} There are no new bugs found in [sharelib/oozie].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive].
.{color:green}+1{color} There are no new bugs found in [sharelib/hcatalog].
.{color:green}+1{color} There are no new bugs found in [sharelib/distcp].
.{color:red}-1{color} There are [15] new bugs found below threshold in 
[tools] that must be fixed, listing only the first [5] ones.
.You can find the SpotBugs diff here (look for the red and orange ones): 
tools/findbugs-new.html
.The top [5] most important SpotBugs errors are:
.At OozieDBCLI.java:[line 584]: This use of 
java/sql/Statement.executeUpdate(Ljava/lang/String;)I can be vulnerable to SQL 
injection
.At OozieDBCLI.java:[line 574]: At OozieDBCLI.java:[line 573]
.At OozieDBCLI.java:[line 577]: At OozieDBCLI.java:[line 575]
.At OozieDBCLI.java:[line 579]: At OozieDBCLI.java:[line 578]
.At OozieDBCLI.java:[line 584]: At OozieDBCLI.java:[line 581]
.{color:green}+1{color} There are no new bugs found in [server].
.{color:green}+1{color} There are no new bugs found in [client].
.{color:green}+1{color} There are no new bugs found in [examples].
.{color:green}+1{color} There are no new bugs found in [webapp].
{color:green}+1 BACKWARDS_COMPATIBILITY{color}
.{color:green}+1{color} the patch does not change any JPA 
Entity/Column/Basic/Lob/Transient annotations
.{color:green}+1{color} the patch does not modify JPA files
{color:green}+1 TESTS{color}
.Tests run: 3190
{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 
{color:green}+1 MODERNIZER{color}


{color:red}*-1 Overall result, please check the reported -1(s)*{color}


The full output of the test-patch run is available at

. https://builds.apache.org/job/PreCommit-OOZIE-Build/1231/



> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-09-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938653#comment-16938653
 ] 

Hadoop QA commented on OOZIE-3529:
--

PreCommit-OOZIE-Build started


> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Assignee: Denes Bodo
>Priority: Critical
>  Labels: S3
> Fix For: 5.2.0
>
> Attachments: OOZIE-3529.001.patch, OOZIE-3529.002.patch, 
> OOZIE-3529.003.patch, OOZIE-3529.004.patch, id.pig, job.properties, 
> workflow.xml
>
>
> Many customers who use the S3 file system as a secondary one experience the 
> following error when Oozie tries to submit the YARN application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1026)
>   at 
> 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-09-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938387#comment-16938387
 ] 

Hadoop QA commented on OOZIE-3529:
--


Testing JIRA OOZIE-3529

Cleaning local git workspace



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:red}-1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:red}-1{color} the patch contains 1 line(s) with trailing spaces
.{color:green}+1{color} the patch does not introduce any star imports
.{color:green}+1{color} the patch does not introduce any line longer than 
132
.{color:green}+1{color} the patch adds/modifies 2 testcase(s)
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} Javadoc generation succeeded with the patch
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:red}-1{color} There are [22] new bugs found below threshold in total 
that must be fixed.
.{color:green}+1{color} There are no new bugs found in 
[fluent-job/fluent-job-api].
.{color:green}+1{color} There are no new bugs found in [docs].
.{color:red}-1{color} There are [7] new bugs found below threshold in 
[core] that must be fixed, listing only the first [5] ones.
.You can find the SpotBugs diff here (look for the red and orange ones): 
core/findbugs-new.html
.The top [5] most important SpotBugs errors are:
.At BulkJPAExecutor.java:[line 206]: This use of 
javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query;
 can be vulnerable to SQL/JPQL injection
.At BulkJPAExecutor.java:[line 176]: At BulkJPAExecutor.java:[line 175]
.At BulkJPAExecutor.java:[line 205]: At BulkJPAExecutor.java:[line 199]
.This use of 
javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query;
 can be vulnerable to SQL/JPQL injection: At BulkJPAExecutor.java:[line 206]
.At BulkJPAExecutor.java:[line 111]: At BulkJPAExecutor.java:[line 127]
.{color:green}+1{color} There are no new bugs found in [sharelib/spark].
.{color:green}+1{color} There are no new bugs found in [sharelib/git].
.{color:green}+1{color} There are no new bugs found in [sharelib/sqoop].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive2].
.{color:green}+1{color} There are no new bugs found in [sharelib/streaming].
.{color:green}+1{color} There are no new bugs found in [sharelib/pig].
.{color:green}+1{color} There are no new bugs found in [sharelib/oozie].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive].
.{color:green}+1{color} There are no new bugs found in [sharelib/hcatalog].
.{color:green}+1{color} There are no new bugs found in [sharelib/distcp].
.{color:red}-1{color} There are [15] new bugs found below threshold in 
[tools] that must be fixed, listing only the first [5] ones.
.You can find the SpotBugs diff here (look for the red and orange ones): 
tools/findbugs-new.html
.The top [5] most important SpotBugs errors are:
.At OozieDBCLI.java:[line 584]: This use of 
java/sql/Statement.executeUpdate(Ljava/lang/String;)I can be vulnerable to SQL 
injection
.At OozieDBCLI.java:[line 574]: At OozieDBCLI.java:[line 573]
.At OozieDBCLI.java:[line 577]: At OozieDBCLI.java:[line 575]
.At OozieDBCLI.java:[line 579]: At OozieDBCLI.java:[line 578]
.At OozieDBCLI.java:[line 584]: At OozieDBCLI.java:[line 581]
.{color:green}+1{color} There are no new bugs found in [server].
.{color:green}+1{color} There are no new bugs found in [client].
.{color:green}+1{color} There are no new bugs found in [examples].
.{color:green}+1{color} There are no new bugs found in [webapp].
{color:green}+1 BACKWARDS_COMPATIBILITY{color}
.{color:green}+1{color} the patch does not change any JPA 
Entity/Column/Basic/Lob/Transient annotations
.{color:green}+1{color} the patch does not modify JPA files
{color:green}+1 TESTS{color}
.Tests run: 3190
{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 
{color:green}+1 MODERNIZER{color}


{color:red}*-1 Overall result, please check the reported -1(s)*{color}


The full output of the test-patch run is available at

. https://builds.apache.org/job/PreCommit-OOZIE-Build/1230/



> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
>

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-09-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938305#comment-16938305
 ] 

Hadoop QA commented on OOZIE-3529:
--

PreCommit-OOZIE-Build started



[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-09-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937866#comment-16937866
 ] 

Hadoop QA commented on OOZIE-3529:
--


Testing JIRA OOZIE-3529

Cleaning local git workspace



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:red}-1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any star imports
.{color:red}-1{color} the patch contains 2 line(s) longer than 132 
characters
.{color:green}+1{color} the patch adds/modifies 2 testcase(s)
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} Javadoc generation succeeded with the patch
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:red}-1{color} There are [22] new bugs found below threshold in total 
that must be fixed.
.{color:green}+1{color} There are no new bugs found in 
[fluent-job/fluent-job-api].
.{color:green}+1{color} There are no new bugs found in [docs].
.{color:red}-1{color} There are [7] new bugs found below threshold in 
[core] that must be fixed, listing only the first [5] ones.
.You can find the SpotBugs diff here (look for the red and orange ones): 
core/findbugs-new.html
.The top [5] most important SpotBugs errors are:
.At BulkJPAExecutor.java:[line 206]: This use of 
javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query;
 can be vulnerable to SQL/JPQL injection
.At BulkJPAExecutor.java:[line 176]: At BulkJPAExecutor.java:[line 175]
.At BulkJPAExecutor.java:[line 205]: At BulkJPAExecutor.java:[line 199]
.This use of 
javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query;
 can be vulnerable to SQL/JPQL injection: At BulkJPAExecutor.java:[line 206]
.At BulkJPAExecutor.java:[line 111]: At BulkJPAExecutor.java:[line 127]
.{color:green}+1{color} There are no new bugs found in [sharelib/spark].
.{color:green}+1{color} There are no new bugs found in [sharelib/git].
.{color:green}+1{color} There are no new bugs found in [sharelib/sqoop].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive2].
.{color:green}+1{color} There are no new bugs found in [sharelib/streaming].
.{color:green}+1{color} There are no new bugs found in [sharelib/pig].
.{color:green}+1{color} There are no new bugs found in [sharelib/oozie].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive].
.{color:green}+1{color} There are no new bugs found in [sharelib/hcatalog].
.{color:green}+1{color} There are no new bugs found in [sharelib/distcp].
.{color:red}-1{color} There are [15] new bugs found below threshold in 
[tools] that must be fixed, listing only the first [5] ones.
.You can find the SpotBugs diff here (look for the red and orange ones): 
tools/findbugs-new.html
.The top [5] most important SpotBugs errors are:
.At OozieDBCLI.java:[line 584]: This use of 
java/sql/Statement.executeUpdate(Ljava/lang/String;)I can be vulnerable to SQL 
injection
.At OozieDBCLI.java:[line 574]: At OozieDBCLI.java:[line 573]
.At OozieDBCLI.java:[line 577]: At OozieDBCLI.java:[line 575]
.At OozieDBCLI.java:[line 579]: At OozieDBCLI.java:[line 578]
.At OozieDBCLI.java:[line 584]: At OozieDBCLI.java:[line 581]
.{color:green}+1{color} There are no new bugs found in [server].
.{color:green}+1{color} There are no new bugs found in [client].
.{color:green}+1{color} There are no new bugs found in [examples].
.{color:green}+1{color} There are no new bugs found in [webapp].
{color:green}+1 BACKWARDS_COMPATIBILITY{color}
.{color:green}+1{color} the patch does not change any JPA 
Entity/Column/Basic/Lob/Transient annotations
.{color:green}+1{color} the patch does not modify JPA files
{color:green}+1 TESTS{color}
.Tests run: 3190
{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 
{color:green}+1 MODERNIZER{color}


{color:red}*-1 Overall result, please check the reported -1(s)*{color}


The full output of the test-patch run is available at

. https://builds.apache.org/job/PreCommit-OOZIE-Build/1228/




[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-09-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937752#comment-16937752
 ] 

Hadoop QA commented on OOZIE-3529:
--

PreCommit-OOZIE-Build started



[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894407#comment-16894407
 ] 

Steve Loughran commented on OOZIE-3529:
---

Creating a real S3A filesystem instance kicks off a HEAD request to the bucket; 
it requires the caller to be online and, inevitably, some credentials (unless 
the AnonymousCredentialProvider is used).

You could register a fake S3 client underneath, but it would be brittle with 
regard to how the various bits of the AWS SDK fit together.

If you do create a real S3A connection, then you can actually call toString() 
on it and look for blockFactory=. I'd recommend doing just that in the asserts, 
e.g. assertTrue("wrong block factory in " + fs, fs.getConf(...)), so the test 
is not brittle there but still dumps enough information to start debugging things.
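
A minimal sketch of that assertion style (not from the patch): it assumes a 
reachable bucket and valid credentials, since constructing a real S3AFileSystem 
issues the HEAD request mentioned above; the bucket name and test class are 
hypothetical.

{code:java}
import static org.junit.Assert.assertTrue;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.junit.Test;

public class S3ABlockFactorySketchTest {

    @Test
    public void launcherShouldUseConfiguredBlockFactory() throws Exception {
        Configuration conf = new Configuration();
        // Assumed setting under test: keep S3A block uploads off the local disk.
        conf.set("fs.s3a.fast.upload.buffer", "bytebuffer");

        // Real connection: issues a HEAD request against the (hypothetical) bucket.
        try (FileSystem fs = FileSystem.newInstance(URI.create("s3a://example-bucket/"), conf)) {
            // Including fs in the message dumps S3AFileSystem.toString(),
            // which lists blockFactory=..., so a failure is debuggable.
            assertTrue("wrong block factory in " + fs,
                    fs.toString().contains("blockFactory="));
        }
    }
}
{code}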



[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894139#comment-16894139
 ] 

Hadoop QA commented on OOZIE-3529:
--


Testing JIRA OOZIE-3529

Cleaning local git workspace



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:red}-1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any star imports
.{color:red}-1{color} the patch contains 1 line(s) longer than 132 
characters
.{color:green}+1{color} the patch adds/modifies 1 testcase(s)
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} Javadoc generation succeeded with the patch
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:red}-1{color} There are [20] new bugs found below threshold in total 
that must be fixed.
.{color:green}+1{color} There are no new bugs found in [webapp].
.{color:orange}0{color} There are [4] new bugs found in [server] that would 
be nice to have fixed.
.You can find the SpotBugs diff here: server/findbugs-new.html
.{color:green}+1{color} There are no new bugs found in [sharelib/git].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive2].
.{color:green}+1{color} There are no new bugs found in [sharelib/pig].
.{color:green}+1{color} There are no new bugs found in [sharelib/oozie].
.{color:green}+1{color} There are no new bugs found in [sharelib/streaming].
.{color:green}+1{color} There are no new bugs found in [sharelib/sqoop].
.{color:green}+1{color} There are no new bugs found in [sharelib/hcatalog].
.{color:green}+1{color} There are no new bugs found in [sharelib/distcp].
.{color:green}+1{color} There are no new bugs found in [sharelib/hive].
.{color:green}+1{color} There are no new bugs found in [sharelib/spark].
.{color:green}+1{color} There are no new bugs found in 
[fluent-job/fluent-job-api].
.{color:red}-1{color} There are [5] new bugs found below threshold in 
[core] that must be fixed.
.You can find the SpotBugs diff here (look for the red and orange ones): 
core/findbugs-new.html
.The most important SpotBugs errors are:
.At BulkJPAExecutor.java:[line 206]: This use of 
javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query;
 can be vulnerable to SQL/JPQL injection
.At BulkJPAExecutor.java:[line 176]: At BulkJPAExecutor.java:[line 175]
.At BulkJPAExecutor.java:[line 205]: At BulkJPAExecutor.java:[line 199]
.This use of 
javax/persistence/EntityManager.createQuery(Ljava/lang/String;)Ljavax/persistence/Query;
 can be vulnerable to SQL/JPQL injection: At BulkJPAExecutor.java:[line 206]
.At CoordJobGetActionsSubsetJPAExecutor.java:[line 76]: At 
CoordJobGetActionsSubsetJPAExecutor.java:[line 111]
.{color:green}+1{color} There are no new bugs found in [client].
.{color:green}+1{color} There are no new bugs found in [docs].
.{color:green}+1{color} There are no new bugs found in [examples].
.{color:red}-1{color} There are [15] new bugs found below threshold in 
[tools] that must be fixed, listing only the first [5] ones.
.You can find the SpotBugs diff here (look for the red and orange ones): 
tools/findbugs-new.html
.The top [5] most important SpotBugs errors are:
.At OozieDBCLI.java:[line 584]: This use of 
java/sql/Statement.executeUpdate(Ljava/lang/String;)I can be vulnerable to SQL 
injection
.At OozieDBCLI.java:[line 574]: At OozieDBCLI.java:[line 573]
.At OozieDBCLI.java:[line 577]: At OozieDBCLI.java:[line 575]
.At OozieDBCLI.java:[line 579]: At OozieDBCLI.java:[line 578]
.At OozieDBCLI.java:[line 584]: At OozieDBCLI.java:[line 581]
{color:green}+1 BACKWARDS_COMPATIBILITY{color}
.{color:green}+1{color} the patch does not change any JPA 
Entity/Column/Basic/Lob/Transient annotations
.{color:green}+1{color} the patch does not modify JPA files
{color:red}-1 TESTS{color}
.Tests run: 1355
.Tests failed : 0
.Tests in error   : 0
.Tests timed out  : 1

Check console output for the full list of errors/failures
{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 


{color:red}*-1 Overall result, please check the reported -1(s)*{color}


The full output of the test-patch run 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894073#comment-16894073
 ] 

Hadoop QA commented on OOZIE-3529:
--

PreCommit-OOZIE-Build started



[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-26 Thread Peter Cseh (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16893884#comment-16893884
 ] 

Peter Cseh commented on OOZIE-3529:
---

I can't log into RB for some reason :(

I'm fine with the overall approach, but the comma-separated issue [~asalamon74] 
brought up is real.

e.g.:
{quote}
fs.s3a.aws.credentials.provider
  
Comma-separated class names of credential provider classes which implement
com.amazonaws.auth.AWSCredentialsProvider.
{quote}
https://hadoop.apache.org/docs/r3.2.0/hadoop-aws/tools/hadoop-aws/index.html

This sounds like a property one might want to use with Oozie, and it already 
uses a comma as its separator (see the sketch below).
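
To make the concern concrete, here is a tiny illustration, assuming the setting 
travels inside a single comma-separated list of key=value pairs (the provider 
class names are real hadoop-aws / AWS SDK classes, everything else is hypothetical):

{code:java}
// Illustration only: a value that itself contains commas cannot survive a
// naive split of a comma-separated "key=value" list.
public class CommaSplitPitfall {
    public static void main(String[] args) {
        String setting = "fs.s3a.aws.credentials.provider="
                + "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,"
                + "com.amazonaws.auth.EnvironmentVariableCredentialsProvider";

        // Splitting the surrounding list on ',' also cuts this single setting in two:
        for (String part : setting.split(",")) {
            System.out.println(part);
        }
        // Output:
        //   fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
        //   com.amazonaws.auth.EnvironmentVariableCredentialsProvider
    }
}
{code}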


[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-26 Thread Andras Salamon (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16893851#comment-16893851
 ] 

Andras Salamon commented on OOZIE-3529:
---

[~dionusos] I like this approach, it allows us to set other filesystem 
properties as well. I left a few comments on the review board.

What do you think, should we set fs.s3a.fast.upload.buffer and 
fs.s3a.impl.disable.cache in oozie-default.xml?
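
For reference, a sketch of what defaulting those two settings could look like, 
expressed against a Configuration rather than oozie-default.xml; the chosen 
values are only one possible option, not necessarily what the patch ships:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: apply the two hadoop-aws settings discussed above, without
// overriding anything the admin or user has configured explicitly.
public class S3ADefaultsSketch {

    public static void applyS3aDefaults(Configuration conf) {
        // Keep S3A upload blocks in memory instead of on the local disk,
        // avoiding the RawLocalFileSystem access seen in the stack trace.
        conf.setIfUnset("fs.s3a.fast.upload.buffer", "bytebuffer");
        // Do not reuse a cached S3AFileSystem built with different settings.
        conf.setIfUnset("fs.s3a.impl.disable.cache", "true");
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration(false);
        applyS3aDefaults(conf);
        System.out.println(conf.get("fs.s3a.fast.upload.buffer")); // bytebuffer
    }
}
{code}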

[~kmarton] [~gezapeti] could you please also review the patch?

 


[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-26 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16893753#comment-16893753
 ] 

Denes Bodo commented on OOZIE-3529:
---

I know that the documentation is still missing. If the approach is acceptable, 
I'll document it. Thanks for your comments.


[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-25 Thread Andras Salamon (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892434#comment-16892434
 ] 

Andras Salamon commented on OOZIE-3529:
---

Thanks for the info [~ste...@apache.org].


[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891926#comment-16891926
 ] 

Steve Loughran commented on OOZIE-3529:
---

# "slow upload" has been removed from branch-3
# all it ever did was save the entire file to disk before uploading it in close()
# so even if it hadn't been removed, you'd be seeing roughly the same stack trace.


[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-24 Thread Andras Salamon (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891841#comment-16891841
 ] 

Andras Salamon commented on OOZIE-3529:
---

I've discussed this with [~gezapeti]: do we really need fast upload here? 
What about setting {{fs.s3a.fast.upload}} to false?


[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-24 Thread Andras Salamon (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891820#comment-16891820
 ] 

Andras Salamon commented on OOZIE-3529:
---

[~dionusos] Thanks for testing. OOZIE-3179 could be very useful here; a 
single {{config-default.xml}} file would be enough.

Adding it to {{core-site.xml}} would also affect non-Oozie S3 tasks. Couldn't 
that be a problem?

What if Oozie set these properties automatically (perhaps with an option not 
to set them, just to be safe)?
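
A rough sketch of what a single shared {{config-default.xml}} could carry 
(assuming OOZIE-3179-style shared defaults; the property names come from this 
thread, the file itself and its placement are hypothetical):
{code:xml}
<configuration>
    <!-- buffer S3A uploads in off-heap byte buffers instead of local disk -->
    <property>
        <name>oozie.launcher.fs.s3a.fast.upload.buffer</name>
        <value>bytebuffer</value>
    </property>
    <!-- avoid reusing a cached S3AFileSystem created with other settings -->
    <property>
        <name>oozie.launcher.fs.s3a.impl.disable.cache</name>
        <value>true</value>
    </property>
</configuration>
{code}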

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: id.pig, job.properties, workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
>   at 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-24 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891802#comment-16891802
 ] 

Denes Bodo commented on OOZIE-3529:
---

What can go wrong if we put {{fs.s3a.fast.upload.buffer=bytebuffer}} into 
core-site.xml globally?
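
A minimal sketch of what that global setting would look like in 
{{core-site.xml}} (the property name and value are from this thread; whether 
it belongs in core-site.xml is exactly the open question here):
{code:xml}
<property>
    <name>fs.s3a.fast.upload.buffer</name>
    <!-- use off-heap byte buffers instead of local-disk buffering -->
    <value>bytebuffer</value>
</property>
{code}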

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: id.pig, job.properties, workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1026)
>   at 
> org.apache.oozie.action.hadoop.LauncherMapperHelper.setupLauncherInfo(LauncherMapperHelper.java:156)
>   at 
> 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-24 Thread Denes Bodo (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891793#comment-16891793
 ] 

Denes Bodo commented on OOZIE-3529:
---

As I experienced, setting the following two properties in the workflow.xml 
action configuration keeps S3AFileSystem from using the local file system as 
temporary storage.
{code:xml}
<property>
    <name>oozie.launcher.fs.s3a.fast.upload.buffer</name>
    <value>bytebuffer</value>
</property>
<property>
    <name>oozie.launcher.fs.s3a.impl.disable.cache</name>
    <value>true</value>
</property>
{code}

Conclusion for now:
This can be a workaround if a customer wants to keep the CVE fix and accepts 
modifying their workflows.
It cannot be the final solution, as it requires modifying every job 
configuration.
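
A hedged sketch of how the workaround sits inside a single workflow action 
(the action wrapper, names and EL variables are illustrative only; the two 
properties in the {{<configuration>}} block are the actual workaround):
{code:xml}
<action name="streaming-node">
    <map-reduce>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>oozie.launcher.fs.s3a.fast.upload.buffer</name>
                <value>bytebuffer</value>
            </property>
            <property>
                <name>oozie.launcher.fs.s3a.impl.disable.cache</name>
                <value>true</value>
            </property>
        </configuration>
    </map-reduce>
    <ok to="end"/>
    <error to="fail"/>
</action>
{code}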

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: id.pig, job.properties, workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-23 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891172#comment-16891172
 ] 

Steve Loughran commented on OOZIE-3529:
---

It's the only approach that will work, other than having the S3A code fall 
back to using java.io.tmpdir if it can't instantiate a dir allocator. We only 
need that dir allocator for two features HDFS needs: rotating use of multiple 
disks and resilience to offline storage. There's no fundamental reason why 
downgrading to the normal temp dir shouldn't be possible.

Let me know how using bytebuffers works.

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: id.pig, job.properties, workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-23 Thread Andras Salamon (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890728#comment-16890728
 ] 

Andras Salamon commented on OOZIE-3529:
---

Thanks for checking, [~ste...@apache.org]

Oozie has its own 
[RawLocalFileSystem|https://github.com/apache/oozie/blob/master/webapp/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java]
 implementation to protect against CVE-2017-15712.

Setting {{fs.s3a.fast.upload.buffer}} would eliminate the error while keeping 
Oozie protected against the CVE; I like this approach.

> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: id.pig, job.properties, workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
> org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.(S3ABlockOutputStream.java:168)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
>   at 

[jira] [Commented] (OOZIE-3529) Oozie not supported for s3 as filesystem

2019-07-22 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/OOZIE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890243#comment-16890243
 ] 

Steve Loughran commented on OOZIE-3529:
---

This error message/stack trace is *not* from hadoop-common. Does Oozie have a 
special binding to block access to the local FS?

The S3A connector needs a local directory to buffer data before uploading it 
in multi-MB blocks; it uses the LocalDirAllocator to take the list of paths in 
"fs.s3a.buffer.dir" (falling back to hadoop.tmp.dir) and expects to be given a 
path on a local FS to which it can save temp files.
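
For reference, a minimal sketch of how that buffer directory is usually 
configured (the property name is the standard S3A key; the path is just an 
example):
{code:xml}
<property>
    <name>fs.s3a.buffer.dir</name>
    <!-- comma-separated list of local directories used to buffer uploads;
         when unset, a directory under hadoop.tmp.dir is used instead -->
    <value>/tmp/s3a</value>
</property>
{code}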

That stack trace implies something in Oozie is stopping applications from 
getting at that local filesystem, so the connector can't allocate its buffer 
files.

As a short-term fix, set {{fs.s3a.fast.upload.buffer}} to {{bytebuffer}}. This 
switches to off-heap byte buffers for buffering data. Provided Oozie doesn't 
write data faster than it can be uploaded to S3 (at a rate that would make you 
run out of memory), this should work.


> Oozie not supported for s3 as filesystem
> 
>
> Key: OOZIE-3529
> URL: https://issues.apache.org/jira/browse/OOZIE-3529
> Project: Oozie
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.3.1, 5.1.0
>Reporter: Denes Bodo
>Priority: Critical
>  Labels: S3
> Attachments: id.pig, job.properties, workflow.xml
>
>
> Many customer who uses s3 file system as secondary one experiences the 
> following error when Oozie tries to submit the Yarn application:
> {noformat}
> 2019-04-29 13:02:53,770  WARN ForkedActionStartXCommand:523 - 
> SERVER[hwnode1.puretec.purestorage.com] USER[hrt_qa] GROUP[-] TOKEN[] 
> APP[demo-wf] JOB[001-190423141707256-oozie-oozi-W] 
> ACTION[001-190423141707256-oozie-oozi-W@streaming-node] Error starting 
> action [streaming-node]. ErrorType [ERROR], ErrorCode 
> [UnsupportedOperationException], Message [UnsupportedOperationException: 
> Accessing local file system is not allowed]
> org.apache.oozie.action.ActionExecutorException: 
> UnsupportedOperationException: Accessing local file system is not allowed
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1092)
>   at 
> org.apache.oozie.action.hadoop.MapReduceActionExecutor.createLauncherConf(MapReduceActionExecutor.java:309)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1197)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1472)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:41)
>   at 
> org.apache.oozie.command.wf.ForkedActionStartXCommand.execute(ForkedActionStartXCommand.java:30)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:287)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.UnsupportedOperationException: Accessing local file 
> system is not allowed
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
>   at 
> org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
>   at 
>