Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18716#discussion_r128908504
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/sources/HadoopFsRelationTest.scala ---
@@ -783,52 +780,6 @@ abstract class HadoopFsRelationTest extends QueryTest with SQLTestUtils with Tes
}
}
-  test("SPARK-8578 specified custom output committer will not be used to append data") {
-    withSQLConf(SQLConf.FILE_COMMIT_PROTOCOL_CLASS.key ->
-      classOf[SQLHadoopMapReduceCommitProtocol].getCanonicalName) {
-      val extraOptions = Map[String, String](
-        SQLConf.OUTPUT_COMMITTER_CLASS.key -> classOf[AlwaysFailOutputCommitter].getName,
-        // Since Parquet has its own output committer setting, also set it
-        // to AlwaysFailParquetOutputCommitter at here.
-        "spark.sql.parquet.output.committer.class" ->
-          classOf[AlwaysFailParquetOutputCommitter].getName
-      )
-
-      val df = spark.range(1, 10).toDF("i")
-      withTempPath { dir =>
-        df.write.mode("append").format(dataSourceName).save(dir.getCanonicalPath)
-        // Because there data already exists,
-        // this append should succeed because we will use the output committer associated
-        // with file format and AlwaysFailOutputCommitter will not be used.
-        df.write.mode("append").format(dataSourceName).save(dir.getCanonicalPath)
--- End diff ---
This test is wrong because it never calls `.options(extraOptions)`, so the custom committer settings never reach the write path — that's why I missed it in my previous PR...
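For reference, a minimal sketch of what the write calls would have needed in order to actually exercise the custom committer — passing `extraOptions` through `.options(...)` so `SQLConf.OUTPUT_COMMITTER_CLASS` and the Parquet committer key reach the write path. This assumes the surrounding test fixtures from the removed test above (`spark`, `dataSourceName`, `withTempPath`, `extraOptions`); it is an illustration, not the committed fix:

```scala
// Sketch only: same write as in the removed test, but with the committer
// overrides actually applied via .options(extraOptions). With
// AlwaysFailOutputCommitter configured, the first append would then fail,
// which is what the test intended to observe.
val df = spark.range(1, 10).toDF("i")
withTempPath { dir =>
  df.write
    .mode("append")
    .format(dataSourceName)
    .options(extraOptions) // the call the original test forgot
    .save(dir.getCanonicalPath)
}
```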