Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
```
- recovery with file input stream *** FAILED *** (10 seconds, 205 milliseconds)
  The code passed to eventually never returned normally. Attempted 660 times over 10.014272499999999 seconds. Last failure message: Unexpected internal error near index 1
  \
  ^. (CheckpointSuite.scala:680)
- SPARK-18220: read Hive orc table with varchar column *** FAILED *** (2 seconds, 563 milliseconds)
  org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
- rolling file appender - size-based rolling (compressed) *** FAILED *** (15 milliseconds)
  1000 was not less than 1000 (FileAppenderSuite.scala:128)
- recover from node failures with replication *** FAILED *** (34 seconds, 613 milliseconds)
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 6.0 failed 4 times, most recent failure: Lost task 1.3 in stage 6.0 (TID 33, localhost, executor 28): java.io.IOException: org.apache.spark.SparkException: Failed to get broadcast_6_piece0 of broadcast_6
  ...
  Caused by: org.apache.spark.SparkException: Failed to get broadcast_6_piece0 of broadcast_6
  ...
```
`recovery with file input stream` - this seems to be the same problem as the one this PR addresses.
`SPARK-18220: read Hive orc table with varchar column` - I could not identify the cause at first glance, but it does not seem related to this problem because the test apparently does not use any path.
`rolling file appender - size-based rolling (compressed)` - this test seems possibly flaky; it passed in
https://ci.appveyor.com/project/spark-test/spark/build/503-6DEDA384-4A91-45CD-AD26-EE0757D3D2AC/job/etb359vk0fwbrqgo
`recover from node failures with replication` - this test also seems possibly flaky; it passed in
https://ci.appveyor.com/project/spark-test/spark/build/503-6DEDA384-4A91-45CD-AD26-EE0757D3D2AC/job/etb359vk0fwbrqgo
The other test failures are dealt with in https://github.com/apache/spark/pull/16501.
Let me add a fix for the first one in `CheckpointSuite` after verifying it.
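For reference, a minimal sketch (an assumed reproduction of the error class, not the actual fix) of how the `Unexpected internal error near index 1` message can show up on Windows when the path separator `\` ends up being used as a regex, for example via `String.split`:

```scala
import java.util.regex.Pattern

object WindowsSeparatorDemo {
  def main(args: Array[String]): Unit = {
    // On Windows, File.separator is "\", which is an incomplete regex
    // escape. Using it directly as a split pattern throws
    // PatternSyntaxException: "Unexpected internal error near index 1".
    val sep = "\\" // the value of File.separator on Windows

    try {
      "a\\b\\c".split(sep)
    } catch {
      case e: java.util.regex.PatternSyntaxException =>
        println(e.getMessage) // Unexpected internal error near index 1 ...
    }

    // Quoting the separator treats it as a literal, which works on any OS.
    val parts = "a\\b\\c".split(Pattern.quote(sep))
    println(parts.mkString(", ")) // a, b, c
  }
}
```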