[ https://issues.apache.org/jira/browse/SPARK-11844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17317066#comment-17317066 ]

Nick Hryhoriev edited comment on SPARK-11844 at 4/8/21, 10:49 AM:
------------------------------------------------------------------

I have experienced the same issue with Spark 2.4.7 and 3.1.1, with the Spark 
Structured Streaming FileSink and DeltaSink writing to S3 via the S3A API.
The writer in question used S3 multipart upload, which the S3A API (hadoop-aws) 
hides from the Spark developer.
 It happened twice in production over 5 months.
 I was not able to reproduce it, because we write nearly 10,000 files every 
hour (files are added every 10 minutes), and only two were damaged.

One file fails with a PageHeader: null exception, the other with PageHeader: 
unknown type -15, and the failure reproduces on these two files every time.
 The footer of each file is intact, which means I can read the schema and run 
operations that need only footer statistics, such as count.
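
For what it's worth, here is a minimal sketch of what still works on a damaged 
file, using the parquet-mr API directly (the path below is a placeholder):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.schema.MessageType;

public class FooterCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder path to one of the damaged files.
        Path path = new Path("s3a://bucket/table/part-00000.snappy.parquet");

        try (ParquetFileReader reader =
                 ParquetFileReader.open(HadoopInputFile.fromPath(path, conf))) {
            // Footer-only operations succeed on the damaged files:
            MessageType schema = reader.getFooter().getFileMetaData().getSchema();
            long rows = reader.getRecordCount(); // row count summed from footer metadata
            System.out.println(schema);
            System.out.println("rows = " + rows);

            // Actually decoding the pages is what fails with the PageHeader errors:
            // reader.readNextRowGroup();
        }
    }
}
{code}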

The data itself is not a factor: we re-ran the ETL over a backup of the same 
data, and the new files are fine.

My personal *intuition* is that the problem lies somewhere in the exception 
handling of ParquetWriter + S3AOutputStream.
Why? 
1. The S3 SDK can explicitly complete or abort a multipart upload, because the 
upload is atomic (see the sketch below).
2. The Hadoop FileSystem API, at the FSDataOutputStream level, can only close 
the stream.
So when a high-level exception happens in Spark or Parquet, the writer simply 
does not close the stream.
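
To make the contrast concrete, a rough sketch of the two API levels (AWS SDK 
for Java v1 names; bucket, key, fs and path are placeholders):
{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.*;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.IOException;
import java.util.List;

class UploadContrast {
    // S3 SDK level: the protocol has two explicit outcomes.
    static void sdkLevel(AmazonS3 s3, String bucket, String key, List<PartETag> parts) {
        InitiateMultipartUploadResult init =
            s3.initiateMultipartUpload(new InitiateMultipartUploadRequest(bucket, key));
        String uploadId = init.getUploadId();
        try {
            // ... upload the parts here ...
            s3.completeMultipartUpload(   // commit: the object appears atomically
                new CompleteMultipartUploadRequest(bucket, key, uploadId, parts));
        } catch (RuntimeException e) {
            s3.abortMultipartUpload(      // roll back: nothing becomes visible
                new AbortMultipartUploadRequest(bucket, key, uploadId));
            throw e;
        }
    }

    // Hadoop FileSystem level: the only signal a writer can send is close(),
    // and for S3A it is close() that completes the multipart upload.
    static void fsLevel(FileSystem fs, Path path, byte[] bytes) throws IOException {
        FSDataOutputStream out = fs.create(path);
        out.write(bytes);
        out.close(); // there is no way to say "abort" at this level
    }
}
{code}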
But because this is a stream in the java.io decorator pattern, multiple 
streams wrap each other, and somewhere up the chain there may be a `try finally` 
block that calls `close`. That close would complete the multipart upload of 
the half-written file and hide the underlying exception.
It's only a guess; I was not able to confirm it in the code or reproduce it.
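
To illustrate the suspected failure mode, a toy model (not the actual 
S3A/Parquet code): a wrapping stream whose close() acts as the "commit", 
closed unconditionally in a finally block.
{code:java}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Stand-in for an S3A-style stream: close() is what "commits" the upload.
class CommitOnCloseStream extends FilterOutputStream {
    CommitOnCloseStream(OutputStream out) { super(out); }

    @Override
    public void close() throws IOException {
        super.close();
        System.out.println("multipart upload COMPLETED (partial data committed)");
    }
}

public class HiddenCommitDemo {
    public static void main(String[] args) {
        try {
            write(OutputStream.nullOutputStream()); // Java 11+; any sink would do
        } catch (IOException e) {
            System.out.println("surfaced: " + e.getMessage());
        }
    }

    static void write(OutputStream sink) throws IOException {
        OutputStream out = null;
        try {
            out = new CommitOnCloseStream(sink);
            out.write(new byte[]{'P', 'A', 'R', '1'}); // some bytes make it out
            throw new IOException("simulated page-write failure");
        } finally {
            if (out != null) {
                // Runs even on failure: the truncated object is committed.
                // If close() itself threw here, it would replace (hide) the
                // original exception.
                out.close();
            }
        }
    }
}
{code}
Running this prints the "COMPLETED" line before the exception surfaces: the 
partial object is committed even though the write failed.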



> can not read class org.apache.parquet.format.PageHeader: don't know what 
> type: 13
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-11844
>                 URL: https://issues.apache.org/jira/browse/SPARK-11844
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Yin Huai
>            Priority: Minor
>              Labels: bulk-closed
>
> I got the following error once when I was running a query
> {code}
> java.io.IOException: can not read class org.apache.parquet.format.PageHeader: 
> don't know what type: 13
>       at org.apache.parquet.format.Util.read(Util.java:216)
>       at org.apache.parquet.format.Util.readPageHeader(Util.java:65)
>       at 
> org.apache.parquet.hadoop.ParquetFileReader$Chunk.readPageHeader(ParquetFileReader.java:534)
>       at 
> org.apache.parquet.hadoop.ParquetFileReader$Chunk.readAllPages(ParquetFileReader.java:546)
>       at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:496)
>       at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>       at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:208)
>       at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201)
>       at 
> org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.hasNext(SqlNewHadoopRDD.scala:168)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>       at 
> org.apache.spark.sql.execution.joins.HashJoin$$anon$1.hasNext(HashJoin.scala:77)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>       at 
> org.apache.spark.sql.execution.joins.HashJoin$$anon$1.hasNext(HashJoin.scala:77)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>       at 
> org.apache.spark.sql.execution.joins.HashJoin$$anon$1.hasNext(HashJoin.scala:77)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>       at 
> org.apache.spark.sql.execution.joins.HashJoin$$anon$1.hasNext(HashJoin.scala:77)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>       at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:88)
>       at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:704)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:704)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>       at org.apache.spark.scheduler.Task.run(Task.scala:88)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: parquet.org.apache.thrift.protocol.TProtocolException: don't know 
> what type: 13
>       at 
> parquet.org.apache.thrift.protocol.TCompactProtocol.getTType(TCompactProtocol.java:806)
>       at 
> parquet.org.apache.thrift.protocol.TCompactProtocol.readFieldBegin(TCompactProtocol.java:500)
>       at 
> org.apache.parquet.format.InterningProtocol.readFieldBegin(InterningProtocol.java:158)
>       at 
> parquet.org.apache.thrift.protocol.TProtocolUtil.skip(TProtocolUtil.java:108)
> {code}
> The next retry was good. Right now, seems not critical. But, let's still 
> track it in case we see it in future.


