Hi Nick/Igor,
Any solution for this?
I'm having the same issue, and copying the jar to each executor is not
feasible if we use a lot of jars.
Thanks,
Vipul
Yes, putting the jar on each node and adding it manually to the executor
classpath works. So, it seems that's where the issue lies.
I'll do some experimenting and see if I can narrow down the problem; but,
for now, at least I can run my job!
Thanks for your help.
On Tue, Sep 8, 2015 at 8:40 AM Igor Berman wrote:
Another idea - you can add this fat jar explicitly to the classpath of the
executors... it's not a solution, but it might work.
I mean, place it somewhere locally on the executors and add it to the
classpath with spark.executor.extraClassPath.
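Concretely, the suggestion above would look something like this at submission time (a sketch only: the local path and the main class are hypothetical, and the jar must already exist at that same path on every executor node; the jar name is the one from the thread):

```shell
# Sketch: run the job with the fat jar pre-placed locally on each node.
# /opt/jars/... and com.example.Main are hypothetical placeholders.
spark-submit \
  --master yarn \
  --class com.example.Main \
  --conf spark.executor.extraClassPath=/opt/jars/lumiata-evaluation-assembly-1.0.jar \
  /opt/jars/lumiata-evaluation-assembly-1.0.jar
```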
On 8 September 2015 at 18:30, Nick Peterson wrote:
Yeah... none of the jars listed on the classpath contain this class. The
only jar that does is the fat jar that I'm submitting with spark-submit,
which as mentioned isn't showing up on the classpath anywhere.
-- Nick
On Tue, Sep 8, 2015 at 8:26 AM Igor Berman wrote:
Hmm... out of ideas.
Can you check in the Spark UI Environment tab that this jar doesn't somehow
appear two or more times? Or, more generally, whether any two jars could
contain this class, by any chance?
Regarding your question about the classloader - no idea, probably there is;
I remember Stack Overflow has some
Yes, the jar contains the class:
$ jar -tf lumiata-evaluation-assembly-1.0.jar | grep 2028/Document/Document
com/i2028/Document/Document$1.class
com/i2028/Document/Document.class
What else can I do? Is there any way to get more information about the
classes available to the particular classloader?
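One way to probe that (a generic JVM sketch, not Spark-specific: the class `WhereIsClass` and its `locate` helper are hypothetical names) is to ask a classloader where it would load a given `.class` resource from. A null result means the class is not visible to that loader:

```java
// Sketch: ask a classloader which jar (if any) a class would load from,
// by looking up the corresponding .class resource.
public class WhereIsClass {
    static String locate(ClassLoader cl, String className) {
        String resource = className.replace('.', '/') + ".class";
        java.net.URL url = cl.getResource(resource);
        return url == null ? "not found" : url.toString();
    }

    public static void main(String[] args) {
        ClassLoader cl = WhereIsClass.class.getClassLoader();
        // A JDK class always resolves; the application class resolves only
        // if its jar is actually on this loader's classpath.
        System.out.println(locate(cl, "java.lang.String"));
        System.out.println(locate(cl, "com.i2028.Document.Document"));
    }
}
```

Run inside the executor (e.g. from a task) with the task's context classloader, this would show whether the fat jar ever made it onto the executor classpath.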
java.lang.ClassNotFoundException: com.i2028.Document.Document
1. So, have you checked that the jar you create (the fat jar) contains this class?
2. Maybe there is some stale cache issue... not sure though.
On 8 September 2015 at 16:12, Nicholas R. Peterson
wrote:
Here is the stack trace: (Sorry for the duplicate, Igor -- I forgot
to include the list.)
15/09/08 05:56:43 WARN scheduler.TaskSetManager: Lost task 183.0 in
stage 41.0 (TID 193386, ds-compute2.lumiata.com): java.io.IOException:
com.esotericsoftware.kryo.KryoException: Error constructing instanc
Thanks, Igor; I've got it running again right now, and can attach the stack
trace when it finishes.
In the meantime, I've noticed something interesting: in the Spark UI, the
application jar that I submit is not being included on the classpath. It
has been successfully uploaded to the nodes -- in
As a starting point, attach your stack trace...
PS: look for duplicates in your classpath; maybe you include another jar
with the same class.
On 8 September 2015 at 06:38, Nicholas R. Peterson
wrote:
I'm trying to run a Spark 1.4.1 job on my CDH5.4 cluster, through Yarn.
Serialization is set to use Kryo.
I have a large object which I send to the executors as a Broadcast. The
object seems to serialize just fine. When it attempts to deserialize,
though, Kryo throws a ClassNotFoundException... fo
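For reference, the Kryo setup described above typically amounts to something like the following in spark-defaults.conf (a sketch: the registration line is an illustration using the class from the stack trace, and is optional unless spark.kryo.registrationRequired is enabled):

```
spark.serializer              org.apache.spark.serializer.KryoSerializer
spark.kryo.classesToRegister  com.i2028.Document.Document
```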