[ https://issues.apache.org/jira/browse/SPARK-756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324663#comment-14324663 ]

Sean Owen commented on SPARK-756:
---------------------------------

Given the version and "MLbase", I'm guessing this is obsolete?

> JAR file appears corrupt to workers. - MLbase
> ---------------------------------------------
>
>                 Key: SPARK-756
>                 URL: https://issues.apache.org/jira/browse/SPARK-756
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 0.8.0
>            Reporter: Evan Sparks
>
> Running Spark on EC2 with the default AMI and standard configuration.
> This bug affects the Spark master branch (as of 2013-05-29); notably, the 
> same code does not crash under Spark 0.7.0.
> I am creating a 55MB JAR file via the sbt-assembly plugin which is valid: 
> code inside the JAR is callable and works fine if I rsync it to each worker 
> and put that file on the SPARK_CLASSPATH.
> However, when I share the JAR by passing it to the SparkContext (e.g., val 
> sc = new SparkContext("spark://" + sys.env("SPARK_MASTER_IP") + ":7077", 
> "VisionClass", "/root/spark", List("target/visionClass-assembly-1.0.jar"))), 
> worker tasks die with a "java.util.zip.ZipException" when attempting to 
> load the JAR file.
> The JAR file in the Spark worker temp directory seems complete upon later 
> inspection (MD5 sums match what I have on the master), so I suspect the 
> worker is attempting to open the file before it has been fully flushed to 
> disk.
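The suspected race can be illustrated with plain JDK zip APIs (a sketch, not Spark code; the class and method names here are hypothetical). A zip/JAR stores its central directory at the end of the file, so a copy whose tail has not yet been flushed raises java.util.zip.ZipException even though every byte written so far matches the valid archive:

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipException;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class TruncatedJarDemo {
    // Returns true if the file opens as a valid zip/JAR archive.
    static boolean opensAsZip(File f) throws IOException {
        try (ZipFile zf = new ZipFile(f)) {
            return true;
        } catch (ZipException e) {
            return false; // e.g. "zip END header not found"
        }
    }

    // Writes the first `len` bytes of `bytes` to a temp file.
    static File writeTemp(byte[] bytes, int len) throws IOException {
        File f = File.createTempFile("demo", ".jar");
        f.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write(bytes, 0, len);
        }
        return f;
    }

    // Builds a tiny in-memory archive, then checks a half-written copy
    // (simulating a not-yet-flushed JAR) against the complete copy.
    public static boolean[] demo() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(buf)) {
            zos.putNextEntry(new ZipEntry("hello.txt"));
            zos.write("hello".getBytes("UTF-8"));
            zos.closeEntry();
        }
        byte[] bytes = buf.toByteArray();

        File truncated = writeTemp(bytes, bytes.length / 2); // unflushed tail
        File complete  = writeTemp(bytes, bytes.length);
        return new boolean[] { opensAsZip(truncated), opensAsZip(complete) };
    }

    public static void main(String[] args) throws IOException {
        boolean[] r = demo();
        System.out.println("truncated copy opens: " + r[0]);
        System.out.println("complete copy opens:  " + r[1]);
    }
}
```

This is consistent with the report: once the worker's copy is fully on disk the MD5 sums match, so a worker that waits for (or retries after) a complete flush would not see the exception.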



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
