Hi Ian,

When you run your packaged application, are you adding its jar file to
the SparkContext (by calling the addJar() method)?

That will distribute the code to all of the worker nodes. The failure
you're seeing (a ClassNotFoundException for one of your application's
anonymous function classes) indicates that the worker nodes do not have
access to your code.
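
For example, something along these lines (just a sketch; the jar path,
master URL and HDFS paths below are placeholders for whatever matches
your setup):

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._   // implicits for reduceByKey

object SimpleApp {
  def main(args: Array[String]) {
    // The jar produced by "sbt package"; adjust to your build output.
    val appJar = "target/scala-2.10/simple-app_2.10-1.0.jar"

    // Jars listed here are shipped to every worker node.
    val sc = new SparkContext("spark://master:7077", "SimpleApp",
      System.getenv("SPARK_HOME"), Seq(appJar))

    // Equivalently, after the context is created:
    // sc.addJar(appJar)

    val counts = sc.textFile("hdfs:///input.txt")
                   .flatMap(_.split(" "))
                   .map(word => (word, 1))
                   .reduceByKey(_ + _)
    counts.saveAsTextFile("hdfs:///counts")
  }
}

Without the jar registered with the context, the closures created by
map() and reduceByKey() can't be deserialized on the workers, which is
exactly the ClassNotFoundException you're hitting.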

On Mon, Apr 14, 2014 at 9:17 AM, Ian Bonnycastle <ibo...@gmail.com> wrote:
> Good afternoon,
>
> I'm attempting to get the wordcount example working, and I keep getting an
> error in the "reduceByKey(_ + _)" call. I've scoured the mailing lists and
> haven't been able to find a surefire solution, unless I'm missing something
> big. I did find something close, but it didn't appear to work in my case.
> The error is:
>
> org.apache.spark.SparkException: Job aborted: Task 2.0:3 failed 4 times
> (most recent failure: Exception failure: java.lang.ClassNotFoundException:
> SimpleApp$$anonfun$3)
>         at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)

-- 
Marcelo
