[ https://issues.apache.org/jira/browse/SPARK-26052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17564878#comment-17564878 ]
Apache Spark commented on SPARK-26052:
--------------------------------------
User 'danielhaviv' has created a pull request for this issue:
https://github.com/apache/spark/pull/37153
> Spark should output a _SUCCESS file for every partition correctly written
> -------------------------------------------------------------------------
>
> Key: SPARK-26052
> URL: https://issues.apache.org/jira/browse/SPARK-26052
> Project: Spark
> Issue Type: Improvement
> Components: Block Manager, Spark Core
> Affects Versions: 2.3.0
> Reporter: Matt Matolcsi
> Priority: Minor
> Labels: bulk-closed
>
> When writing a set of partitioned Parquet files to HDFS with
> dataframe.write.parquet(), a single _SUCCESS file is written to
> hdfs://path/to/table after successful completion, even though the actual
> Parquet files land in the partition directories,
> hdfs://path/to/table/partition_key1=val1/partition_key2=val2/....
> If partitions are written out one at a time (e.g., by an hourly ETL job),
> each subsequent run overwrites that single _SUCCESS file, and information
> about which partitions were correctly written is lost.
> I would like to be able to keep track of which partitions were
> successfully written in HDFS. I think this could be done by writing the
> _SUCCESS files into the same partition directories where the Parquet
> files reside, i.e.,
> hdfs://path/to/table/partition_key1=val1/partition_key2=val2/.... (A
> rough workaround sketch in that spirit follows below.)
> Since https://issues.apache.org/jira/browse/SPARK-13207 has been resolved, I
> don't think this should break partition discovery.
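> As a workaround today, something like the following rough Scala sketch
> (the table path, column names, and helper name here are hypothetical,
> not part of any Spark API) can touch a per-partition marker after each
> successful run:
>
>   import org.apache.hadoop.fs.{FileSystem, Path}
>   import org.apache.spark.sql.{DataFrame, SparkSession}
>
>   // After a successful partitioned write, create an empty _SUCCESS
>   // marker in every partition directory this run produced.
>   def writeWithPartitionMarkers(spark: SparkSession, df: DataFrame,
>                                 table: String): Unit = {
>     df.write
>       .mode("append")
>       .partitionBy("partition_key1", "partition_key2")
>       .parquet(table)
>
>     val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
>     // Derive the partition directories from the data just written.
>     df.select("partition_key1", "partition_key2").distinct()
>       .collect().foreach { row =>
>         val dir = new Path(s"$table/partition_key1=${row.get(0)}" +
>                            s"/partition_key2=${row.get(1)}")
>         fs.create(new Path(dir, "_SUCCESS")).close() // zero-byte marker
>       }
>   }
>
> Having Spark do this itself on commit would remove the need for such
> bookkeeping in every job.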