[
https://issues.apache.org/jira/browse/SPARK-4402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213739#comment-14213739
]
Sean Owen commented on SPARK-4402:
----------------------------------
Look at the code in PairRDDFunctions.saveAsHadoopDataset, which is what
ultimately gets called. You'll see it try to check the output configuration
upfront:
{code}
if (self.conf.getBoolean("spark.hadoop.validateOutputSpecs", true)) {
  // FileOutputFormat ignores the filesystem parameter
  val ignoredFs = FileSystem.get(hadoopConf)
  hadoopConf.getOutputFormat.checkOutputSpecs(ignoredFs, hadoopConf)
}
{code}
It's enabled by default. I wonder if the code path is somehow using a
nonstandard OutputFormat whose checkOutputSpecs doesn't perform this check?
But this should raise an exception before the job starts if the output path
already exists; that behavior was committed in SPARK-1100 for 1.0.
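The fail-fast idea behind that check can be illustrated outside Spark. The sketch below is not Spark code; {{OutputPathCheck}} and {{validateOutputPath}} are hypothetical names, and plain {{java.nio.file}} stands in for the Hadoop FileSystem API. It shows the same pattern checkOutputSpecs implements: reject an existing output path up front, before any expensive work runs.
{code}
import java.nio.file.{Files, Paths}

object OutputPathCheck {
  // Hypothetical helper mirroring the upfront validation: fail immediately
  // if the output path already exists, instead of after the job has run.
  def validateOutputPath(path: String): Unit = {
    if (Files.exists(Paths.get(path))) {
      throw new IllegalStateException(s"Output path $path already exists")
    }
  }

  def main(args: Array[String]): Unit = {
    val existing = Files.createTempDirectory("existing-output")
    try {
      validateOutputPath(existing.toString) // existing path: rejected up front
    } catch {
      case e: IllegalStateException => println(s"rejected: ${e.getMessage}")
    }
    validateOutputPath(existing.toString + "-fresh") // fresh path: passes
    println("fresh path accepted")
  }
}
{code}
Because the check runs before any tasks are scheduled, no partial computation is wasted when the path is bad, which is exactly what the reporter is asking for.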
> Output path validation of an action statement resulting in runtime exception
> ----------------------------------------------------------------------------
>
> Key: SPARK-4402
> URL: https://issues.apache.org/jira/browse/SPARK-4402
> Project: Spark
> Issue Type: Wish
> Reporter: Vijay
> Priority: Minor
>
> Output path validation happens at statement-execution time, as part of the
> lazy evaluation of the action statement. If the path already exists, a
> runtime exception is thrown, and all the processing completed up to that
> point is lost, wasting resources (processing time and CPU usage).
> If this I/O-related validation were done before the RDD action operations
> run, the runtime exception could be avoided.
> I believe similar validation is implemented in Hadoop as well.
> Example:
> SchemaRDD.saveAsTextFile() evaluates the output path only at runtime
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)