[ 
https://issues.apache.org/jira/browse/FLINK-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276651#comment-16276651
 ] 

Fabian Hueske edited comment on FLINK-8164 at 12/4/17 12:53 PM:
----------------------------------------------------------------

This is caused by an overly restrictive check in the MemoryArchivist. The check 
limits the supported file-system schemes to "file", "hdfs" and "maprfs". I 
can't think of a quick workaround, apart from writing the archives to the local 
file system and uploading the files to S3 manually.
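
For illustration, the rejected scheme comes from a whitelist check of roughly 
this shape (a simplified sketch only, not the actual MemoryArchivist source):

{code}
// Illustrative sketch of a scheme whitelist like the one described above.
// Not the real Flink 1.3 code; it merely shows why "s3"/"s3a" URIs are
// rejected with the IllegalArgumentException seen in the stack trace below.
def validateAndNormalizeUri(uri: java.net.URI): java.net.URI = {
  val scheme = uri.getScheme
  if (scheme != "file" && scheme != "hdfs" && scheme != "maprfs") {
    throw new IllegalArgumentException(
      s"No file system found with scheme $scheme, referenced in file URI '$uri'.")
  }
  uri
}
{code}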

This check was removed for 1.4.
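
For anyone stuck on 1.3.x, here is a rough sketch of the manual workaround: 
point jobmanager.archive.fs.dir at a local directory and copy the archives to 
S3 yourself. The bucket, prefix and local path below are assumptions taken 
from the report, not tested code:

{code}
// Hypothetical helper for the manual workaround described above.
// Assumes the JobManager archives to /tmp/completed-jobs locally and that
// AWS credentials are available to the SDK's default credential chain.
import java.io.File
import com.amazonaws.services.s3.AmazonS3ClientBuilder

object UploadCompletedJobs {
  def main(args: Array[String]): Unit = {
    val s3 = AmazonS3ClientBuilder.defaultClient()
    val localArchiveDir = new File("/tmp/completed-jobs")
    // One JSON archive per finished job; upload each one under the prefix
    // the HistoryServer is later pointed at.
    Option(localArchiveDir.listFiles()).getOrElse(Array.empty[File]).foreach { f =>
      s3.putObject("bucket", s"completed-jobs/${f.getName}", f)
    }
  }
}
{code}

Equivalently, a one-off "aws s3 sync /tmp/completed-jobs s3://bucket/completed-jobs" 
from the AWS CLI achieves the same thing.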



> JobManager's archiving does not work on S3
> ------------------------------------------
>
>                 Key: FLINK-8164
>                 URL: https://issues.apache.org/jira/browse/FLINK-8164
>             Project: Flink
>          Issue Type: Bug
>          Components: History Server, JobManager
>    Affects Versions: 1.3.2
>            Reporter: Cristian
>
> I'm trying to configure the JobManager's archiving mechanism 
> (https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/historyserver.html)
> to use S3, but I'm getting this:
> {code}
> 2017-11-28 19:11:09,751 WARN  org.apache.flink.runtime.jobmanager.MemoryArchivist           - Failed to create Path for Some(s3a://bucket/completed-jobs). Job will not be archived.
> java.lang.IllegalArgumentException: No file system found with scheme s3, referenced in file URI 's3://bucket/completed-jobs'.
>       at org.apache.flink.runtime.jobmanager.MemoryArchivist.validateAndNormalizeUri(MemoryArchivist.scala:297)
>       at org.apache.flink.runtime.jobmanager.MemoryArchivist.org$apache$flink$runtime$jobmanager$MemoryArchivist$$archiveJsonFiles(MemoryArchivist.scala:201)
>       at org.apache.flink.runtime.jobmanager.MemoryArchivist$$anonfun$handleMessage$1.applyOrElse(MemoryArchivist.scala:107)
>       at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>       at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>       at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>       at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>       at org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>       at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>       at org.apache.flink.runtime.jobmanager.MemoryArchivist.aroundReceive(MemoryArchivist.scala:65)
>       at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>       at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>       at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>       at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>       at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>       at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>       at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>       at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>       at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}
> This is very strange, since I'm able to write to S3 from within the job 
> itself. I have also tried using s3a instead, to no avail.
> This happens when running Flink v1.3.2 on EMR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
