[ https://issues.apache.org/jira/browse/HADOOP-17216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199364#comment-17199364 ]

Steve Loughran edited comment on HADOOP-17216 at 9/21/20, 1:31 PM:
-------------------------------------------------------------------

Ah, sorry to hear that. Without installing S3Guard (i.e. if you still want to 
work with an inconsistent S3 bucket), you'll have to take it up with -the spark 
team- the Delta Lake team.

File a bug report there with these key points:

# A probe for a file existing before it is created caches a 404 for that path 
in the S3 load balancers
# when HEAD is next called on the path, the cached 404 comes back, which is 
mapped to a FileNotFoundException

Workarounds

* don't check for the file existing before you create it. Hadoop < 3.3.0 will 
always do this in createFile(); 3.3.0+ will skip the probe when overwrite=true.
* if there are any other probes (exists, isFile), remove them
* and/or, if you know the file should be there, spin briefly waiting for the 
file to exist, with a retry delay > 20s
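The last workaround above can be sketched as a plain spin-wait. This is an 
illustrative sketch only, not code from hadoop-aws: the class and method names 
({{S3VisibilityWait}}, {{waitForVisible}}) are made up, the probe is a stand-in 
for an S3A {{FileSystem.exists()}} call, and in real use the delay should 
exceed the ~20 second window during which the load balancers may keep serving 
the cached 404.

{code:java}
import java.util.function.BooleanSupplier;

/** Illustrative sketch: spin-wait until an object becomes visible, with a
 *  retry delay. All names here are hypothetical, not hadoop-aws APIs. */
public class S3VisibilityWait {

    /** Polls {@code exists} up to {@code maxAttempts} times, sleeping
     *  {@code delayMillis} between attempts. Returns true as soon as the
     *  probe succeeds, false if it never does. */
    public static boolean waitForVisible(BooleanSupplier exists,
                                         int maxAttempts,
                                         long delayMillis)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (exists.getAsBoolean()) {
                return true;
            }
            if (attempt < maxAttempts) {
                Thread.sleep(delayMillis);
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // Simulated probe: the "object" only becomes visible on the 3rd HEAD.
        int[] calls = {0};
        BooleanSupplier probe = () -> ++calls[0] >= 3;
        // A real delay should be > 20_000 ms; shortened here for the demo.
        boolean visible = waitForVisible(probe, 5, 10);
        System.out.println(visible + " after " + calls[0] + " probes");
    }
}
{code}

In a real job the probe would wrap the actual filesystem call, e.g. 
{{() -> fs.exists(path)}}, with the checked IOException handled inside the 
lambda.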

There's nothing else we can do in the hadoop-aws module: we've already 
eliminated surplus HEAD requests in our code.

Here's the URL: https://github.com/delta-io/delta

Closing this as a duplicate. Not our codebase, not even ASF code. We have done 
all we can.





> hadoop-aws having FileNotFoundException when accessing AWS s3 occasionally
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-17216
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17216
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.1.2
>         Environment: hadoop = "3.1.2"
> hadoop-aws = "3.1.2"
> spark = "2.4.5"
> spark-on-k8s-operator = "v1beta2-1.1.2-2.4.5"
> deployed into AWS EKS Kubernetes. Version information below:
> Server Version: version.Info\{Major:"1", Minor:"16+", 
> GitVersion:"v1.16.8-eks-e16311", 
> GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", 
> BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", 
> Platform:"linux/amd64"}
>            Reporter: Cheng Wei
>            Priority: Major
>             Fix For: 3.3.0
>
>
> Hi,
> When using spark streaming with deltalake, I got the following exception 
> occasionally, something like 1 out of 100. Thanks.
> {code:java}
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3a://[pathToFolder]/date=2020-07-29/part-00005-046af631-7198-422c-8cc8-8d3adfb4413e.c000.snappy.parquet
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2255)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2149)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2088)
>  at 
> org.apache.spark.sql.delta.files.DelayedCommitProtocol$$anonfun$8.apply(DelayedCommitProtocol.scala:141)
>  at 
> org.apache.spark.sql.delta.files.DelayedCommitProtocol$$anonfun$8.apply(DelayedCommitProtocol.scala:139)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>  at 
> org.apache.spark.sql.delta.files.DelayedCommitProtocol.commitTask(DelayedCommitProtocol.scala:139)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242){code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
