Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/947#discussion_r13320315
--- Diff: docs/configuration.md ---
@@ -487,6 +487,13 @@ Apart from these, the following properties are also available, and may be useful
   this duration will be cleared as well.
   </td>
 </tr>
+<tr>
+  <td>spark.hadoop.validateOutputSpecs</td>
+  <td>true</td>
+  <td>If set to true, validates the output specification (e.g. checking if the output directory already exists)
+  used in saveAsHadoopFile and other variants. This can be disabled to silence exceptions due to pre-existing
+  output directories.</td>
--- End diff --
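
For context, spark.hadoop.validateOutputSpecs is set like any other Spark property. A minimal sketch of opting out, assuming a plain SparkConf-based driver (the application name is illustrative; the default of true is what most jobs should keep, per the doc text above):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("legacy-overwrite-example")  // illustrative name
      // Skip output-spec validation so a pre-existing output directory no
      // longer raises an exception when saving. Leave at the default (true)
      // unless you need compatibility with older Spark behavior.
      .set("spark.hadoop.validateOutputSpecs", "false")
    val sc = new SparkContext(conf)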
Add "We recommend that users do not disable this except if trying to
achieve compatibility with previous versions of Spark. Simply use Hadoop's
FileSystem API to delete output directories by hand."
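
A minimal sketch of the delete-by-hand approach suggested there, using Hadoop's FileSystem API (the output path and RDD contents are placeholders):

    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("clean-output-example"))

    // Placeholder output location; substitute your own directory.
    val outputDir = "hdfs:///tmp/example-output"
    val outputPath = new Path(outputDir)

    // Remove any pre-existing output directory explicitly instead of
    // disabling spark.hadoop.validateOutputSpecs.
    val fs = outputPath.getFileSystem(sc.hadoopConfiguration)
    if (fs.exists(outputPath)) {
      fs.delete(outputPath, true)  // recursive delete
    }

    sc.parallelize(Seq("a", "b", "c")).saveAsTextFile(outputDir)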