Hi,

Unless you manually patched Spark, having Reynold’s patch for SPARK-2878 means 
you also have the patch for SPARK-2893, which makes the underlying cause much 
more obvious and explicit.  So the trace below is unlikely to be related to 
SPARK-2878.

Graham

On 26 Aug 2014, at 4:13 am, npanj <nitinp...@gmail.com> wrote:

> I am running the code with @rxin's patch in standalone mode. In my case I am
> registering "org.apache.spark.graphx.GraphKryoRegistrator" (see the
> configuration sketch after the quoted message below).
> 
> Recently I started to see "com.esotericsoftware.kryo.KryoException:
> java.io.IOException: failed to uncompress the chunk: PARSING_ERROR". Has
> anyone seen this? Could it be related to this issue? Here is the trace:
> --
> vids (org.apache.spark.graphx.impl.VertexAttributeBlock)
>         com.esotericsoftware.kryo.io.Input.fill(Input.java:142)
>         com.esotericsoftware.kryo.io.Input.require(Input.java:169)
>         com.esotericsoftware.kryo.io.Input.readLong_slow(Input.java:710)
>         com.esotericsoftware.kryo.io.Input.readLong(Input.java:665)
>         com.esotericsoftware.kryo.serializers.DefaultArraySerializers$LongArraySerializer.read(DefaultArraySerializers.java:127)
>         com.esotericsoftware.kryo.serializers.DefaultArraySerializers$LongArraySerializer.read(DefaultArraySerializers.java:107)
>         com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:699)
>         com.esotericsoftware.kryo.serializers.FieldSerializer$ObjectField.read(FieldSerializer.java:611)
>         com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:221)
>         com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
>         com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:43)
>         com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:34)
>         com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
>         org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:133)
>         org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:133)
>         org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
>         org.apache.spark.storage.BlockManager$LazyProxyIterator$1.hasNext(BlockManager.scala:1054)
>         scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>         org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
>         org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>         scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>         scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>         scala.collection.Iterator$class.foreach(Iterator.scala:727)
>         scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>         org.apache.spark.graphx.impl.VertexPartitionBaseOps.innerJoinKeepLeft(VertexPartitionBaseOps.scala:192)
>         org.apache.spark.graphx.impl.EdgePartition.updateVertices(EdgePartition.scala:78)
>         org.apache.spark.graphx.impl.ReplicatedVertexView$$anonfun$2$$anonfun$apply$1.apply(ReplicatedVertexView.scala:75)
>         org.apache.spark.graphx.impl.ReplicatedVertexView$$anonfun$2$$anonfun$apply$1.apply(ReplicatedVertexView.scala:73)
>         scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>         org.apache.spark.graphx.EdgeRDD$$anonfun$mapEdgePartitions$1.apply(EdgeRDD.scala:87)
>         org.apache.spark.graphx.EdgeRDD$$anonfun$mapEdgePartitions$1.apply(EdgeRDD.scala:85)
>         org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
>         org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
>         org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>         org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         org.apache.spark.scheduler.Task.run(Task.scala:54)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:202)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> --
> 
> --
> View this message in context: 
> http://apache-spark-developers-list.1001551.n3.nabble.com/SPARK-2878-Kryo-serialisation-with-custom-Kryo-registrator-failing-tp7719p7989.html
> Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
> 
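For anyone trying to reproduce this, the setup npanj describes boils down to two
SparkConf properties.  A minimal sketch follows; the MyRegistrator class and the
app name are illustrative, not from the report above:

import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator
import org.apache.spark.{SparkConf, SparkContext}

// A custom registrator (hypothetical) registers application-specific
// classes with the Kryo instance that Spark hands it.
class MyRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[Array[Long]]) // e.g. vertex-id arrays like those in the trace
  }
}

object GraphXKryoExample {
  def main(args: Array[String]): Unit = {
    // Two properties wire up Kryo: the serializer itself and the registrator.
    // This points at GraphX's registrator, as in the quoted message; swap in
    // the fully qualified name of MyRegistrator to use a custom one.
    val conf = new SparkConf()
      .setAppName("GraphXKryoExample") // hypothetical app name
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.kryo.registrator", "org.apache.spark.graphx.GraphKryoRegistrator")
    val sc = new SparkContext(conf)
    // ... run the GraphX job here ...
    sc.stop()
  }
}

Note that spark.kryo.registrator has no effect on its own: spark.serializer must
also be switched to KryoSerializer, or the registrator is never invoked.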


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
