Yes, indeed the URI should be fine; besides, I don't use ZK and I am still
getting the same error...
M.


2014-02-04 Mark Hamstra <[email protected]>:

> Nope, sorry -- looks like that particular issue has been fixed so that
> your URI should be fine.
>
>
> On Tue, Feb 4, 2014 at 2:33 AM, Mark Hamstra <[email protected]>wrote:
>
>> export MASTER=mesos://zk://10.10.0.141:2181/mesos
>>
>>
>> On Tue, Feb 4, 2014 at 2:20 AM, Francesco Bongiovanni <
>> [email protected]> wrote:
>>
>>> Hi everyone,
>>>
>>> I installed the latest Spark release (0.9.0) on top of Mesos, linked
>>> to my HDFS 1.2.1 (sbt assembly succeeded, make-distribution succeeded),
>>> and when I try to launch some ops from the spark-shell, I get the
>>> following error. I configured my spark-env.sh and exported the correct
>>> env variables, but I am stuck on this error. I have tried building
>>> Spark from the sources and from the binaries with Hadoop1, and cleaned
>>> my .ivy2 and .m2 caches, but the same error arises... what am I missing?
>>>
>>> Here are my spark-env and the stderr from Mesos.
>>>
>>>
>>> =================SPARK-ENV.SH==============================
>>> export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
>>> export SPARK_EXECUTOR_URI=hdfs://10.10.0.141:9000/spark/spark-0.9.0-incubating.tgz
>>> export MASTER=zk://10.10.0.141:2181/mesos
>>> export SPARK_LOCAL_IP=10.10.0.141
>>>
>>> if [ -z "$SPARK_MEM" ] ; then
>>>   SPARK_MEM="15g"
>>> fi
>>>
>>>
>>> if [ -z "$SPARK_WORKER_MEMORY" ] ; then
>>>   SPARK_WORKER_MEMORY="40g"
>>> fi
>>>
>>>
>>>
>>>
>>>
>>> ===============STDERR=======================================
>>> 14/02/04 11:04:22 INFO MesosExecutorBackend: Using Spark's default log4j
>>> profile: org/apache/spark/log4j-defaults.properties
>>> 14/02/04 11:04:22 INFO MesosExecutorBackend: Registered with Mesos as
>>> executor ID 201402040838-2365590026-5050-31560-7
>>> 14/02/04 11:04:22 INFO Slf4jLogger: Slf4jLogger started
>>> 14/02/04 11:04:22 INFO Remoting: Starting remoting
>>> 14/02/04 11:04:22 INFO Remoting: Remoting started; listening on addresses: [akka.tcp://[email protected]:56046]
>>> 14/02/04 11:04:22 INFO Remoting: Remoting now listens on addresses: [akka.tcp://[email protected]:56046]
>>> 14/02/04 11:04:23 INFO SparkEnv: Connecting to BlockManagerMaster:
>>> akka.tcp://spark@localhost:7077/user/BlockManagerMaster
>>> akka.actor.ActorNotFound: Actor not found for: ActorSelection[Actor[akka.tcp://spark@localhost:7077/]/user/BlockManagerMaster]
>>>         at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:66)
>>>         at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:64)
>>>         at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>>>         at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>>>         at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>>>         at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>>>         at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>>>         at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>>>         at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
>>>         at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
>>>         at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
>>>         at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
>>>         at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>>>         at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>>>         at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:269)
>>>         at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:512)
>>>         at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:545)
>>>         at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:535)
>>>         at akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:91)
>>>         at akka.actor.ActorRef.tell(ActorRef.scala:125)
>>>         at akka.dispatch.Mailboxes$$anon$1$$anon$2.enqueue(Mailboxes.scala:44)
>>>         at akka.dispatch.QueueBasedMessageQueue$class.cleanUp(Mailbox.scala:438)
>>>         at akka.dispatch.UnboundedDequeBasedMailbox$MessageQueue.cleanUp(Mailbox.scala:650)
>>>         at akka.dispatch.Mailbox.cleanUp(Mailbox.scala:309)
>>>         at akka.dispatch.MessageDispatcher.unregister(AbstractDispatcher.scala:204)
>>>         at akka.dispatch.MessageDispatcher.detach(AbstractDispatcher.scala:140)
>>>         at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:203)
>>>         at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:163)
>>>         at akka.actor.ActorCell.terminate(ActorCell.scala:338)
>>>         at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:431)
>>>         at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
>>>         at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
>>>         at akka.dispatch.Mailbox.run(Mailbox.scala:218)
>>>         at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
>>>         at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>>         at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>>         at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>>         at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>> Exception in thread "Thread-0"
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/spark-0-9-0-on-top-of-Mesos-error-Akka-Actor-not-found-tp1164.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>
>>
>

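For later readers: pulling the thread together, here is a minimal spark-env.sh sketch that applies the mesos:// scheme prefix Mark suggests above to the original configuration. The IP addresses, ports, and HDFS path are simply the ones from the original message; whether the prefix alone resolves the ActorNotFound error is not confirmed in the thread.

```shell
# spark-env.sh -- sketch only; values taken from the original message.
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI=hdfs://10.10.0.141:9000/spark/spark-0.9.0-incubating.tgz

# Note the mesos:// prefix in front of the ZooKeeper URI, per Mark's
# suggestion earlier in the thread; the original file had only zk://...
export MASTER=mesos://zk://10.10.0.141:2181/mesos
export SPARK_LOCAL_IP=10.10.0.141

# Same memory defaults as the original file, written more compactly.
: "${SPARK_MEM:=15g}"
: "${SPARK_WORKER_MEMORY:=40g}"
export SPARK_MEM SPARK_WORKER_MEMORY
```

Separately, the stderr shows the executor resolving the driver as akka.tcp://spark@localhost:7077, which suggests the driver advertised "localhost" rather than a reachable address; checking that SPARK_LOCAL_IP (or the spark.driver.host property) is set on the machine running the shell may also be worth a look. That is an inference from the log, not something confirmed in the thread.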