Hi,

To deal with the META-INF/services issue Sean is pointing out, my solution
was to replace the maven-assembly-plugin with the maven-shade-plugin and use
its ServicesResourceTransformer (
http://maven.apache.org/plugins/maven-shade-plugin/examples/resource-transformers.html#ServicesResourceTransformer)
"to merge multiple implementations of the same interface into one service
entry".

Hope that helps,

Regards,



2014-07-04 9:50 GMT+02:00 Sean Owen <so...@cloudera.com>:

> "No file system for scheme", in the past for me, has meant that files
> in META-INF/services have collided when building an uber jar. There's
> a sort-of-obscure mechanism in Java for registering implementations of
> a service's interface, and Hadoop uses it for FileSystem. It consists
> of listing classes in a file in META-INF/services. If two jars have a
> copy and they collide and one overwrites the other -- or you miss
> packaging these files -- you can end up with this error. Ring any
> bells?
>
> On Fri, Jul 4, 2014 at 2:45 AM, Steven Cox <s...@renci.org> wrote:
> > ...and a real subject line.
> > ________________________________
> > From: Steven Cox [s...@renci.org]
> > Sent: Thursday, July 03, 2014 9:21 PM
> > To: user@spark.apache.org
> > Subject:
> >
> > Folks, I have a program derived from the Kafka streaming wordcount
> > example which works fine standalone.
> >
> >
> > Running on Mesos is not working so well. For starters, I get the error
> > below, "No FileSystem for scheme: hdfs".
> >
> >
> > I've looked at lots of promising comments on this issue, so now I have:
> >
> > * Every jar under hadoop in my classpath
> >
> > * Hadoop HDFS and Client in my pom.xml
> >
> >
> > I find it odd that the app writes checkpoint files to HDFS successfully
> > for a couple of cycles and then throws this exception. This would suggest
> > the problem is not with the syntax of the HDFS URL, for example.
> >
> >
> > Any thoughts on what I'm missing?
> >
> >
> > Thanks,
> >
> >
> > Steve
> >
> >
> > Mesos : 0.18.2
> >
> > Spark : 0.9.1
> >
> >
> >
> > 14/07/03 21:14:20 WARN TaskSetManager: Lost TID 296 (task 1514.0:0)
> > 14/07/03 21:14:20 WARN TaskSetManager: Lost TID 297 (task 1514.0:1)
> > 14/07/03 21:14:20 WARN TaskSetManager: Lost TID 298 (task 1514.0:0)
> > 14/07/03 21:14:20 ERROR TaskSetManager: Task 1514.0:0 failed 10 times; aborting job
> > 14/07/03 21:14:20 ERROR JobScheduler: Error running job streaming job 1404436460000 ms.0
> > org.apache.spark.SparkException: Job aborted: Task 1514.0:0 failed 10 times (most recent failure: Exception failure: java.io.IOException: No FileSystem for scheme: hdfs)
> >         at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
> >         at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
> >         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> >         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> >         at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
> >         at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
> >         at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
> >         at scala.Option.foreach(Option.scala:236)
> >         at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
> >         at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
> >         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
> >         at akka.actor.ActorCell.invoke(ActorCell.scala:456)
> >         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
>
