We're using Mesos; is there a reasonable expectation that
spark.files.userClassPathFirst will actually work?

On Mon, Sep 22, 2014 at 1:42 PM, Marcelo Vanzin <van...@cloudera.com> wrote:

> Hi Cody,
>
> I'm still writing a test to make sure I understood exactly what's
> going on here, but from looking at the stack trace, it seems like the
> newer Guava library is picking up the "Optional" class from the Spark
> assembly.
>
> Could you try one of the options that put the user's classpath before
> the Spark assembly? (spark.files.userClassPathFirst or
> spark.yarn.user.classpath.first, depending on which master you're
> running)
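>
> A minimal Scala sketch of setting those (only the property names come
> from this thread; the SparkConf usage below is just illustrative):
>
>   val conf = new org.apache.spark.SparkConf()
>     .set("spark.files.userClassPathFirst", "true")      // non-YARN masters (e.g. Mesos, standalone)
>     // .set("spark.yarn.user.classpath.first", "true")  // when running on YARN
>   val sc = new org.apache.spark.SparkContext(conf)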
>
> People seem to have run into issues with those options in the past,
> but if they work for you, then Guava should pick its own Optional
> class (instead of the one shipped with Spark) and things should then
> work.
>
> I'll investigate a way to fix it in Spark in the meantime.
>
>
> On Fri, Sep 19, 2014 at 10:30 PM, Cody Koeninger <c...@koeninger.org>
> wrote:
> > After the recent Spark project changes to Guava shading, I'm seeing
> > issues with the DataStax Spark Cassandra connector (which depends on
> > Guava 15.0) and the DataStax CQL driver (which depends on Guava 16.0.1).
> >
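> > As a rough build.sbt sketch of that dependency setup (the Guava versions
> > are the ones mentioned here; the other coordinates and all version
> > numbers are placeholders, not something stated in this thread):
> >
> >   libraryDependencies ++= Seq(
> >     // Spark marked "provided" so it is not bundled into the job assembly
> >     "org.apache.spark" %% "spark-core" % "1.1.0" % "provided",
> >     // the connector (and, transitively, the DataStax CQL driver) pull in
> >     // their own Guava, which then ends up in the assembly
> >     "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.0-alpha2",
> >     // or pin Guava explicitly (15.0 or 16.0.1) in the assembly
> >     "com.google.guava" % "guava" % "16.0.1"
> >   )
> >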
> > Building an assembly for a job (with Spark marked as provided) that
> > includes either Guava 15.0 or 16.0.1 results in errors like the following:
> >
> > scala> session.close
> >
> > scala> s[14/09/20 04:56:35 ERROR Futures$CombinedFuture: input future failed.
> > java.lang.IllegalAccessError: tried to access class org.spark-project.guava.common.base.Absent from class com.google.common.base.Optional
> >         at com.google.common.base.Optional.absent(Optional.java:79)
> >         at com.google.common.base.Optional.fromNullable(Optional.java:94)
> >         at com.google.common.util.concurrent.Futures$CombinedFuture.setOneValue(Futures.java:1608)
> >         at com.google.common.util.concurrent.Futures$CombinedFuture.access$400(Futures.java:1470)
> >         at com.google.common.util.concurrent.Futures$CombinedFuture$2.run(Futures.java:1548)
> >         at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
> >         at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
> >         at com.google.common.util.concurrent.ExecutionList.add(ExecutionList.java:101)
> >         at com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:170)
> >         at com.google.common.util.concurrent.Futures$CombinedFuture.init(Futures.java:1545)
> >         at com.google.common.util.concurrent.Futures$CombinedFuture.<init>(Futures.java:1491)
> >         at com.google.common.util.concurrent.Futures.listFuture(Futures.java:1640)
> >         at com.google.common.util.concurrent.Futures.allAsList(Futures.java:983)
> >         at com.datastax.driver.core.CloseFuture$Forwarding.<init>(CloseFuture.java:73)
> >         at com.datastax.driver.core.HostConnectionPool.closeAsync(HostConnectionPool.java:398)
> >         at com.datastax.driver.core.SessionManager.closeAsync(SessionManager.java:157)
> >         at com.datastax.driver.core.SessionManager.close(SessionManager.java:172)
> >         at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$destroySession(CassandraConnector.scala:180)
> >         at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$5.apply(CassandraConnector.scala:151)
> >         at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$5.apply(CassandraConnector.scala:151)
> >         at com.datastax.spark.connector.cql.RefCountedCache.com$datastax$spark$connector$cql$RefCountedCache$$releaseImmediately(RefCountedCache.scala:86)
> >         at com.datastax.spark.connector.cql.RefCountedCache$ReleaseTask.run(RefCountedCache.scala:26)
> >         at com.datastax.spark.connector.cql.RefCountedCache$$anonfun$com$datastax$spark$connector$cql$RefCountedCache$$processPendingReleases$2.apply(RefCountedCache.scala:150)
> >         at com.datastax.spark.connector.cql.RefCountedCache$$anonfun$com$datastax$spark$connector$cql$RefCountedCache$$processPendingReleases$2.apply(RefCountedCache.scala:147)
> >         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> >         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> >         at scala.collection.concurrent.TrieMapIterator.foreach(TrieMap.scala:922)
> >         at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> >         at scala.collection.concurrent.TrieMap.foreach(TrieMap.scala:632)
> >         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> >         at com.datastax.spark.connector.cql.RefCountedCache.com$datastax$spark$connector$cql$RefCountedCache$$processPendingReleases(RefCountedCache.scala:147)
> >         at com.datastax.spark.connector.cql.RefCountedCache$$anon$1.run(RefCountedCache.scala:157)
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> >         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:722)
>
>
>
> --
> Marcelo
>
