[
https://issues.apache.org/jira/browse/SPARK-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552120#comment-14552120
]
Cheng Lian commented on SPARK-7755:
-----------------------------------
Thanks for reporting. Would you mind elaborating with a few more details?
# Could you provide more details about the failure (e.g. full stack trace)?
# What version of Spark SQL were you using? (So that we can fill in the
"Affects Version/s" field of this ticket.)
# Which {{OutputCommitter}} were you using? With the default
{{ParquetOutputCommitter}}, partially written files are never committed. But
the newly introduced {{DirectParquetOutputCommitter}} can run into this
problem (see the sketch below).
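For reference, a minimal sketch of switching committers (the config key and committer class path below are from memory of the 1.3/1.4 codebase, so treat them as illustrative rather than authoritative):
{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("parquet-committer-demo"))
val sqlContext = new SQLContext(sc)

// The default ParquetOutputCommitter stages output under a temporary
// directory and only moves files into place when the job commits, so a
// failed job leaves no partial files in the destination.
// DirectParquetOutputCommitter writes straight to the final location and
// skips that move, so a failed job can leave partially written Parquet
// files whose footers later fail to parse.
sqlContext.setConf(
  "spark.sql.parquet.output.committer.class",
  "org.apache.spark.sql.parquet.DirectParquetOutputCommitter")
{code}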
Taking {{_SUCCESS}} into account makes sense for most cases. But the
{{_SUCCESS}} marker is also optional, and can be turned off by setting
{{mapreduce.fileoutputcommitter.marksuccessfuljobs}} to false. Spark SQL uses
this property internally when writing Hive dynamic partitions to work around a
Hadoop compatibility issue ([PR
#2663|https://github.com/apache/spark/pull/2663]). I'm not sure whether there
are other scenarios that disable {{_SUCCESS}}.
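For completeness, a minimal sketch of turning the marker off (reusing {{sc}} from the sketch above; the property name is the standard Hadoop one mentioned earlier):
{code:scala}
// FileOutputCommitter writes _SUCCESS only when
// mapreduce.fileoutputcommitter.marksuccessfuljobs is true (the default).
// With it set to false, even a fully committed directory has no _SUCCESS
// file, so a _SUCCESS-based completeness check would reject valid data.
sc.hadoopConfiguration.setBoolean(
  "mapreduce.fileoutputcommitter.marksuccessfuljobs", false)
{code}
So a {{_SUCCESS}}-based check would probably need to stay optional for setups like that.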
> MetadataCache.refresh does not take into account _SUCCESS
> ---------------------------------------------------------
>
> Key: SPARK-7755
> URL: https://issues.apache.org/jira/browse/SPARK-7755
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Reporter: Rowan Chattaway
> Priority: Minor
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> When you make a call to sqlc.parquetFile(path) where that path contains
> partially written files, refresh fails in strange ways when it attempts to
> read the footer files.
> I would like to adjust the file discovery to take the presence of _SUCCESS
> into account and only attempt to read when the success marker is there (a
> rough sketch of the check appears below).
> I have made the changes locally and they don't appear to have any side
> effects.
> What are people's thoughts on this?
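> A rough sketch of the check I have in mind (the names and paths here are
> illustrative, not the actual change):
> {code:scala}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.Path
>
> // True only when the directory contains the _SUCCESS marker left behind by
> // a successfully committed job.
> def hasSuccessMarker(dir: String, hadoopConf: Configuration): Boolean = {
>   val marker = new Path(dir, "_SUCCESS")
>   marker.getFileSystem(hadoopConf).exists(marker)
> }
>
> val path = "/data/events.parquet"  // hypothetical, possibly partial output
> if (hasSuccessMarker(path, sc.hadoopConfiguration)) {
>   val df = sqlc.parquetFile(path)  // only read footers for committed output
> }
> {code}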