Github user dragos commented on the pull request:

    https://github.com/apache/spark/pull/9301#issuecomment-158113117
  
    A couple of things I ran into while trying to run this on Mesos 0.25:
    
    - `ENV SPARK_HOME /opt/spark` doesn't seem to do anything: Spark always picks up my local `SPARK_HOME`, which fails inside the Docker container.
    - I got past that by setting `spark.mesos.executor.home=/opt/spark`. This also failed, because Spark canonicalizes the path: `/opt/spark` happens to be a symlink on my system, and the Docker container ends up with the resolved target, which doesn't exist there either (see the sketch after this list).
    - I got past this by renaming my symlink, but things are still not working. I'm not sure whether this last failure is down to my setup or something is really wrong here. Docker networking might need some tweaks (I'm running Docker 1.9).
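
    For reference, here's a minimal sketch of how I'm passing the executor home, plus a snippet that reproduces the symlink resolution that bit me. The object name is made up and `/opt/spark` is just my local layout, so treat both as assumptions:

    ```scala
    import java.io.File
    import org.apache.spark.SparkConf

    // Hypothetical repro of the two issues above.
    object ExecutorHomeRepro {
      def main(args: Array[String]): Unit = {
        // ENV SPARK_HOME in the Dockerfile didn't take effect for me, so the
        // executor home is passed explicitly as a Spark conf instead.
        val conf = new SparkConf()
          .set("spark.mesos.executor.home", "/opt/spark")

        // The canonicalization problem: if /opt/spark is a symlink,
        // getCanonicalPath resolves it to its target, and that resolved path
        // is what ends up being used inside the container, where it doesn't exist.
        val home = new File(conf.get("spark.mesos.executor.home")).getCanonicalPath
        println(s"executor home resolves to: $home")
      }
    }
    ```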
    
    How did you test this? Any pointers on how this should be run (e.g., do I need to set `spark.mesos.executor.home`)?
    
    ```
    15/11/19 16:31:40 INFO Utils: Successfully started service 'driverPropsFetcher' on port 49918.
    15/11/19 16:31:40 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://[email protected]:62271] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
    Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1563)
        at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:68)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:149)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:250)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
    Caused by: org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.lookupTimeout
        at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcEnv.scala:214)
        at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcEnv.scala:229)
        at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcEnv.scala:225)
        at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
        at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcEnv.scala:242)
        at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:98)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:162)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:69)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        ... 4 more
    Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcEnv.scala:241)
        ... 11 more
    ```
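
    On a side note, the trace points at `spark.rpc.lookupTimeout` (which, as far as I can tell, falls back to `spark.network.timeout` when unset). Bumping it won't fix whatever is wrong with the networking, but it can help separate "the executor can't reach the driver at all" from "it's just slow". A minimal sketch with an arbitrary value, e.g. in `spark-shell`:

    ```scala
    import org.apache.spark.SparkConf

    // Debugging aid only: stretch the endpoint-lookup timeout (600s is arbitrary)
    // to rule out slow container startup as the cause of the failure above.
    val conf = new SparkConf()
      .set("spark.rpc.lookupTimeout", "600s")
    ```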

