[ https://issues.apache.org/jira/browse/SPARK-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14161900#comment-14161900 ]

DB Tsai edited comment on SPARK-3630 at 10/7/14 2:07 PM:
---------------------------------------------------------

We also see a similar issue when we perform map -> reduceByKey and then take(10) 
on a fairly large dataset (around 600 Parquet files), with Spark built against 
master on Oct 2.

{noformat}
Job aborted due to stage failure: Task 0 in stage 6.0 failed 4 times, most recent failure: Lost task 0.3 in stage 6.0 (TID 8312, ams03-002.ff): java.io.IOException: PARSING_ERROR(2)
        org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:84)
        org.xerial.snappy.SnappyNative.uncompressedLength(Native Method)
        org.xerial.snappy.Snappy.uncompressedLength(Snappy.java:594)
        org.xerial.snappy.SnappyInputStream.readFully(SnappyInputStream.java:125)
        org.xerial.snappy.SnappyInputStream.readHeader(SnappyInputStream.java:88)
        org.xerial.snappy.SnappyInputStream.<init>(SnappyInputStream.java:58)
        org.apache.spark.io.SnappyCompressionCodec.compressedInputStream(CompressionCodec.scala:128)
        org.apache.spark.storage.BlockManager.wrapForCompression(BlockManager.scala:1004)
        org.apache.spark.storage.ShuffleBlockFetcherIterator$$anon$1$$anonfun$onBlockFetchSuccess$1.apply(ShuffleBlockFetcherIterator.scala:116)
        org.apache.spark.storage.ShuffleBlockFetcherIterator$$anon$1$$anonfun$onBlockFetchSuccess$1.apply(ShuffleBlockFetcherIterator.scala:115)
        org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:243)
        org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:52)
        scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
        org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        org.apache.spark.Aggregator.combineCombinersByKey(Aggregator.scala:89)
        org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:44)
        org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
        org.apache.spark.scheduler.Task.run(Task.scala:56)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:182)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:744)
Driver stacktrace:
{noformat}
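
For reference, a minimal sketch of the shape of the job that hits this for us. The input path, the key/value parsing, and the plain-text input are placeholders (our real input is ~600 Parquet files); only the map -> reduceByKey -> take(10) pipeline shape is the point:

{noformat}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._ // pair-RDD implicits needed in Spark 1.x

object ReduceByKeyTakeRepro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ReduceByKeyTakeRepro"))

    // Placeholder input; the actual job reads ~600 Parquet files.
    val records = sc.textFile("hdfs:///path/to/large/input")

    val counts = records
      .map(line => (line, 1L)) // map ...
      .reduceByKey(_ + _)      // ... -> reduceByKey: the shuffle whose blocks fail to decompress

    // take(10) runs the result stage that fetches the snappy-compressed shuffle
    // blocks, which is where the PARSING_ERROR(2) above is thrown.
    counts.take(10).foreach(println)

    sc.stop()
  }
}
{noformat}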




> Identify cause of Kryo+Snappy PARSING_ERROR
> -------------------------------------------
>
>                 Key: SPARK-3630
>                 URL: https://issues.apache.org/jira/browse/SPARK-3630
>             Project: Spark
>          Issue Type: Task
>          Components: Spark Core
>    Affects Versions: 1.1.0
>            Reporter: Andrew Ash
>            Assignee: Ankur Dave
>
> A recent GraphX commit caused non-deterministic exceptions in unit tests, so 
> it was reverted (see SPARK-3400).
> Separately, [~aash] observed the same exception stacktrace in an 
> application-specific Kryo registrator:
> {noformat}
> com.esotericsoftware.kryo.KryoException: java.io.IOException: failed to uncompress the chunk: PARSING_ERROR(2)
> com.esotericsoftware.kryo.io.Input.fill(Input.java:142)
> com.esotericsoftware.kryo.io.Input.require(Input.java:169)
> com.esotericsoftware.kryo.io.Input.readInt(Input.java:325)
> com.esotericsoftware.kryo.io.Input.readFloat(Input.java:624)
> com.esotericsoftware.kryo.serializers.DefaultSerializers$FloatSerializer.read(DefaultSerializers.java:127)
> com.esotericsoftware.kryo.serializers.DefaultSerializers$FloatSerializer.read(DefaultSerializers.java:117)
> com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:732)
> com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:109)
> com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
> com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:732)
> ...
> {noformat}
> This ticket is to identify the cause of the exception in the GraphX commit so 
> the faulty commit can be fixed and merged back into master.
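
For context, a hypothetical sketch of what an application-specific Kryo registrator looks like; the registered class and the names below are made up (the ticket does not identify the actual classes), and it is wired up through the standard spark.serializer / spark.kryo.registrator settings:

{noformat}
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical application class; the real registrator's classes are not named here.
case class Score(id: Long, value: Float)

class AppKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[Score])
    kryo.register(classOf[Array[Score]])
  }
}

object KryoConfExample {
  // Standard settings for enabling Kryo with a custom registrator.
  def conf(): SparkConf = new SparkConf()
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .set("spark.kryo.registrator", classOf[AppKryoRegistrator].getName)
}
{noformat}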


