Hi,

I have some code that parses a Snappy-compressed Thrift file for objects. This code
works fine when run standalone (outside of the Spark environment). However,
when running from within Spark, I get an IllegalAccessError from
the org.iq80.snappy package. Has anyone else seen this error, and/or do you
have any suggestions? Any pointers appreciated. Thanks!
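
For context, the failing read path looks roughly like the sketch below (simplified;
the class and record names are placeholders, not my actual code). The
SnappyFramedInputStream constructor is the call that fails when the same code runs
inside a Spark task:

    import java.io.FileInputStream;
    import java.io.InputStream;

    import org.apache.spark.api.java.function.Function;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TIOStreamTransport;
    import org.iq80.snappy.SnappyFramedInputStream;

    // Placeholder names: MyThriftRecord stands in for the generated Thrift class.
    public class SnappyThriftReadSketch implements Function<String, MyThriftRecord> {
        @Override
        public MyThriftRecord call(String path) throws Exception {
            // Opening the Snappy-framed stream is where the IllegalAccessError
            // is thrown when running under Spark (works fine standalone).
            try (InputStream in = new SnappyFramedInputStream(new FileInputStream(path))) {
                // Deserialize one Thrift object from the decompressed stream.
                TBinaryProtocol protocol = new TBinaryProtocol(new TIOStreamTransport(in));
                MyThriftRecord record = new MyThriftRecord();
                record.read(protocol);
                return record;
            }
        }
    }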

Vasu

-- 
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.IllegalAccessError: tried to access class org.iq80.snappy.BufferRecycler from class org.iq80.snappy.AbstractSnappyInputStream
        at org.iq80.snappy.AbstractSnappyInputStream.<init>(AbstractSnappyInputStream.java:91)
        at org.iq80.snappy.SnappyFramedInputStream.<init>(SnappyFramedInputStream.java:38)
        at DistMatchMetric$1.call(DistMatchMetric.java:131)
        at DistMatchMetric$1.call(DistMatchMetric.java:123)
        at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1015)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
        at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1157)
        at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$14.apply(RDD.scala:1011)
        at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$14.apply(RDD.scala:1009)
        at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:1951)
        at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:1951)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


