>> Do those also happen if you run other hadoop versions (e.g. try 1.0.4)?
With Hadoop 1.0.4, sbt test completed with fewer errors than with
Hadoop 1.2.1 (log excerpts below). I'll run the tests against other
Hadoop versions and report back later.
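
In case it helps anyone reproduce, the failing suite can be re-run on
its own with sbt's test-only task. This is just a sketch; the exact
invocation may need adjusting for this build:

$ sbt/sbt "test-only org.apache.spark.streaming.flume.FlumeStreamSuite"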

--------------------------------
sbt test errors with Hadoop 1.0.4

[info] FlumeStreamSuite:

2014-07-01 23:18:55.057 java[90699:5903] Unable to load realm info from SCDynamicStore

[info] - flume input stream *** FAILED ***
[info]   java.io.IOException: Error connecting to localhost/127.0.0.1:9999
[info]   at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)
[info]   at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:203)
[info]   at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:152)
[info]   at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:120)
[info]   at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:107)
[info]   at org.apache.spark.streaming.flume.FlumeStreamSuite$$anonfun$1.apply$mcV$sp(FlumeStreamSuite.scala:54)
[info]   at org.apache.spark.streaming.flume.FlumeStreamSuite$$anonfun$1.apply(FlumeStreamSuite.scala:40)
[info]   at org.apache.spark.streaming.flume.FlumeStreamSuite$$anonfun$1.apply(FlumeStreamSuite.scala:40)
[info]   at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
[info]   ...

[info]   Cause: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9999
[info]   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[info]   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
[info]   at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:150)
[info]   at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
[info]   at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
[info]   at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
[info]   at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
[info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[info]   at java.lang.Thread.run(Thread.java:744)
[info]   ...
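
The "Connection refused" above means nothing was accepting connections
on the test port when the transceiver tried to connect. A quick check
with standard OS X tools (my own diagnostic, not part of the suite;
9999 is the port from the log):

$ nc -z localhost 9999 && echo "listening" || echo "nothing listening on 9999"
$ lsof -n -i :9999   # shows which process, if any, holds the port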

[info] - cached post-shuffle

[ERROR] [07/01/2014 23:24:58.083] [test-akka.actor.default-dispatcher-3] [akka://test/user/dagSupervisor/$a] error
org.apache.spark.SparkException: error
at org.apache.spark.scheduler.BuggyDAGEventProcessActor$$anonfun$receive$1.applyOrElse(DAGSchedulerSuite.scala:36)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


[info] - DAGSchedulerActorSupervisor closes the SparkContext when EventProcessActor crashes

[ERROR] [07/01/2014 23:24:58.115] [DAGSchedulerSuite-akka.actor.default-dispatcher-6] [akka://DAGSchedulerSuite/user/$$a] Job cancelled because SparkContext was shut down
org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:639)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:638)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:638)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.postStop(DAGScheduler.scala:1227)
at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:201)
at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:163)
at akka.actor.ActorCell.terminate(ActorCell.scala:338)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:431)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
at akka.testkit.CallingThreadDispatcher.process$1(CallingThreadDispatcher.scala:244)
at akka.testkit.CallingThreadDispatcher.runQueue(CallingThreadDispatcher.scala:284)
at akka.testkit.CallingThreadDispatcher.systemDispatch(CallingThreadDispatcher.scala:192)
at akka.actor.dungeon.Dispatch$class.stop(Dispatch.scala:106)
at akka.actor.ActorCell.stop(ActorCell.scala:338)
at akka.actor.LocalActorRef.stop(ActorRef.scala:340)
at akka.actor.dungeon.Children$class.stop(Children.scala:66)
at akka.actor.ActorCell.stop(ActorCell.scala:338)
at akka.actor.dungeon.FaultHandling$$anonfun$terminate$1.apply(FaultHandling.scala:149)
at akka.actor.dungeon.FaultHandling$$anonfun$terminate$1.apply(FaultHandling.scala:149)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at akka.util.Collections$PartialImmutableValuesIterable$$anon$1.foreach(Collections.scala:27)
at akka.util.Collections$PartialImmutableValuesIterable.foreach(Collections.scala:52)
at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:149)
at akka.actor.ActorCell.terminate(ActorCell.scala:338)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:431)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
at akka.dispatch.Mailbox.run(Mailbox.scala:218)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

On Tue, Jul 1, 2014 at 1:04 AM, Patrick Wendell <pwend...@gmail.com> wrote:

> Do those also happen if you run other hadoop versions (e.g. try 1.0.4)?
>
> On Tue, Jul 1, 2014 at 1:00 AM, Taka Shinagawa <taka.epsi...@gmail.com>
> wrote:
> > Since Spark 1.0.0, I've been seeing multiple errors when running sbt test.
> >
> > I ran the following commands from the Spark 1.0.1 RC1 source tree on Mac OS X 10.9.2.
> >
> > $ sbt/sbt clean
> > $ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly
> > $ sbt/sbt test
> >
> >
> > I'm attaching the log file generated by the sbt test.
> >
> > Here's the summary part of the test.
> >
> > [info] Run completed in 30 minutes, 57 seconds.
> > [info] Total number of tests run: 605
> > [info] Suites: completed 83, aborted 0
> > [info] Tests: succeeded 600, failed 5, canceled 0, ignored 5, pending 0
> > [info] *** 5 TESTS FAILED ***
> > [error] Failed: Total 653, Failed 5, Errors 0, Passed 648, Ignored 5
> > [error] Failed tests:
> > [error] org.apache.spark.ShuffleNettySuite
> > [error] org.apache.spark.ShuffleSuite
> > [error] org.apache.spark.FileServerSuite
> > [error] org.apache.spark.DistributedSuite
> > [error] (core/test:test) sbt.TestsFailedException: Tests unsuccessful
> > [error] Total time: 2033 s, completed Jul 1, 2014 12:08:03 AM
> >
> > Is anyone else seeing errors like this?
> >
> >
> > Thanks,
> > Taka
>
