[ https://issues.apache.org/jira/browse/SPARK-15849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330238#comment-15330238 ]

Thomas Demoor commented on SPARK-15849:
---------------------------------------

This looks like typical S3 list-after-write (eventual consistency) behaviour, but you 
can avoid the issue: on S3, use a direct output committer instead of the standard 
Hadoop ones. Googling for DirectParquetOutputCommitter should help you along.
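
For example, a minimal sketch of wiring this up against Spark 1.6.x (the committer 
class path below is an assumption and differs between Spark versions; the direct 
Parquet committer was removed in Spark 2.0, so verify it against your build):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // Direct committers are only safe with speculative execution off,
    // since there is no _temporary staging area to roll back.
    val conf = new SparkConf()
      .setAppName("s3-direct-commit")
      .set("spark.speculation", "false")
      // Ask Spark SQL's Parquet writer to commit task output directly to the
      // final location instead of writing to _temporary and renaming.
      // Class name as shipped with Spark 1.6.x; check your own version.
      .set("spark.sql.parquet.output.committer.class",
        "org.apache.spark.sql.execution.datasources.parquet.DirectParquetOutputCommitter")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    // A subsequent df.write.parquet("s3n://bucket/path") then writes files in
    // place, avoiding the listStatus on _temporary that fails in this report.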

There is no reason to keep the "write to _temporary, then atomically rename to the 
final version" step, as S3 can handle concurrent writers. We are working to get this 
behaviour directly into Hadoop (HADOOP-9565).

> FileNotFoundException on _temporary while doing saveAsTable to S3
> -----------------------------------------------------------------
>
>                 Key: SPARK-15849
>                 URL: https://issues.apache.org/jira/browse/SPARK-15849
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.1
>         Environment: AWS EC2 with spark on yarn and s3 storage
>            Reporter: Sandeep
>
> When submitting Spark jobs to a YARN cluster, I occasionally see these error 
> messages while doing saveAsTable. I have tried running with 
> spark.speculation=false and get the same error. These errors are similar to 
> SPARK-2984, but my jobs are writing to S3 (s3n):
> Caused by: java.io.FileNotFoundException: File 
> s3n://xxxxxxx/_temporary/0/task_201606080516_0004_m_000079 does not exist.
> at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:506)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:360)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:310)
> at 
> org.apache.parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:46)
> at 
> org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitJob(WriterContainer.scala:230)
> at 
> org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:151)
> ... 42 more


