(bcc: user@spark, cc: cdh-user@cloudera)

This is a CDH issue, so I'm moving it to the CDH mailing list.

We're taking a look at how we package dependencies so that these conflicts
come up less often when running on CDH. In the meantime, instead of using
"--jars", you could add the newer jars to spark.driver.extraClassPath and
spark.executor.extraClassPath (which will prepend entries to Spark's
classpath).
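
For example, a rough sketch (the jar path, class name, and app jar below are
placeholders; for the executor setting, the jar would need to exist at that
path on every node, or you'd point it at wherever you've placed it):

  spark-submit \
    --conf spark.driver.extraClassPath=/path/to/httpcore-4.4.1.jar \
    --conf spark.executor.extraClassPath=/path/to/httpcore-4.4.1.jar \
    --class your.streaming.Main your-app.jar

The same two properties can also go in spark-defaults.conf if you'd rather
not pass them on every submit.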

On Fri, Jul 24, 2015 at 1:02 PM, Shushant Arora <[email protected]>
wrote:

> Hi
>
> I am running a Spark Streaming app on YARN and using Apache HttpAsyncClient
> 4.1.
> This client jar internally depends on httpcore-4.4.1.jar.
>
> An older version of this jar, httpcore-4.2.5.jar, is also present on the
> classpath and takes priority because it comes earlier in the classpath.
> That jar is at /apps/cloudera/parcels/CDH/jars/httpcore-4.2.5.jar.
>
> This conflict is causing the job to be killed.
>
> I have packaged my job's jar using Maven.
> When I ran the job, it killed the executors with the exception below:
>
> Exception in thread "main" org.apache.spark.SparkException: Job aborted
> due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent
> failure: Lost task 0.3 in stage 3.0 (TID 76, ip):
> java.lang.NoSuchFieldError: INSTANCE
>         at org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:52)
>         at org.apache.http.impl.nio.codecs.DefaultHttpRequestWriterFactory.<init>(DefaultHttpRequestWriterFactory.java:56)
>
> Even when I specify the latest version of httpcore in the --jars argument,
> it still picks up the old versioned jar.
>
> Is there any way to make it use my jar instead of the one on Spark's
> classpath while executing the job?
>
> Thanks
>



-- 
Marcelo
