Jeff Zhang created ZEPPELIN-1425:
------------------------------------

             Summary: sparkr.zip is not distributed to executors
                 Key: ZEPPELIN-1425
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1425
             Project: Zeppelin
          Issue Type: Bug
    Affects Versions: 0.6.1
            Reporter: Jeff Zhang
            Assignee: Jeff Zhang


Because sparkr.zip is not distributed to the executors, any R code that needs the R daemon on the executor side fails.

How to reproduce it:
{code}
df <- createDataFrame(sqlContext, mtcars)
showDF(df)
{code}

Exception on the executor side (the executor cannot start the R worker because daemon.R from sparkr.zip is missing from the container working directory, so the socket accept times out):
{noformat}
10:16:20,024  INFO org.apache.spark.storage.memory.MemoryStore:54 - Block broadcast_1 stored as values in memory (estimated size 14.2 KB, free 366.3 MB)
10:16:21,018  INFO org.apache.spark.api.r.BufferedStreamThread:54 - Fatal error: cannot open file '/Users/jzhang/Temp/hadoop_tmp/nm-local-dir/usercache/jzhang/appcache/application_1473129941656_0037/container_1473129941656_0037_01_000002/sparkr/SparkR/worker/daemon.R': No such file or directory
10:16:31,023 ERROR org.apache.spark.executor.Executor:91 - Exception in task 0.2 in stage 1.0 (TID 3)
java.net.SocketTimeoutException: Accept timed out
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:404)
    at java.net.ServerSocket.implAccept(ServerSocket.java:545)
    at java.net.ServerSocket.accept(ServerSocket.java:513)
    at org.apache.spark.api.r.RRunner$.createRWorker(RRunner.scala:367)
    at org.apache.spark.api.r.RRunner.compute(RRunner.scala:69)
    at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:49)
{noformat}
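
A possible workaround until the interpreter ships the archive itself (a sketch only, not verified against this setup; it assumes YARN mode and an example sparkr.zip location under the Spark installation's R/lib directory) is to distribute the archive explicitly via spark-defaults.conf or the Spark interpreter properties. The {{#sparkr}} link name matches the {{sparkr/SparkR/worker/daemon.R}} path the executor looks for in the log above.
{noformat}
# Ship sparkr.zip to every YARN container and expose it under the link name "sparkr",
# which is where the executor expects to find SparkR/worker/daemon.R.
# /path/to/spark is a placeholder for the actual Spark installation directory.
spark.yarn.dist.archives   /path/to/spark/R/lib/sparkr.zip#sparkr
{noformat}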




