[ https://issues.apache.org/jira/browse/SPARK-24273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16492159#comment-16492159 ]

Steve Loughran commented on SPARK-24273:
----------------------------------------

Of course, there's no need to send range headers on a 0 byte read, because 
there's no need to do a GET there. The existence check of the HEAD is enough to 
discover that length=0, so a special stream of length 0 can be returned, 
whose handling of read(), seek(), etc. matches expectations.

HADOOP-13293 is the JIRA for adding this. 
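The idea above can be sketched roughly as follows. This is only an illustration of the proposed short-circuit, not the actual HADOOP-13293 patch; the class name and the seek() signature are assumptions modelled on Hadoop's Seekable contract:

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: if the HEAD request reports content-length == 0,
// the connector can hand back this stream instead of issuing a ranged GET
// that some S3-compatible stores reject with 416 InvalidRange.
class EmptyObjectInputStream extends InputStream {
    private long pos = 0;

    @Override
    public int read() {
        return -1;  // always EOF: the object has no bytes
    }

    @Override
    public int read(byte[] b, int off, int len) {
        // InputStream contract: a zero-length request returns 0, otherwise EOF
        return len == 0 ? 0 : -1;
    }

    // seek() in the style of Hadoop's Seekable: only position 0 is valid
    // in a 0-byte file.
    public void seek(long newPos) throws IOException {
        if (newPos != 0) {
            throw new IOException("Cannot seek to " + newPos + " in a 0-byte file");
        }
        pos = newPos;
    }

    public long getPos() {
        return pos;
    }
}
```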

Jami: that issue has been open for 2 years and hasn't had any attention, which 
means that unless someone (you?) provides the patch, you are going to have to handle 
that range 0-0 GET yourself. AWS S3 clearly does handle it.
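Until that patch lands, the client-side workaround amounts to mapping the store's 416 response back to end-of-stream. A minimal sketch of that mapping; the method name and the -1 EOF convention are assumptions for illustration, not S3A code:

```java
// Hypothetical helper: a ranged GET on a 0-byte object may come back as
// HTTP 416 (InvalidRange) from stores that, unlike AWS S3, reject
// "Range: bytes=0-0". Since the object genuinely has no bytes, the
// correct interpretation is end-of-stream, not a hard failure.
class RangeGetResult {
    static final int HTTP_RANGE_NOT_SATISFIABLE = 416;

    static int bytesReadOrEof(int statusCode, int bytesRead) {
        if (statusCode == HTTP_RANGE_NOT_SATISFIABLE) {
            return -1;  // treat InvalidRange on an empty object as EOF
        }
        return bytesRead;  // otherwise pass the real byte count through
    }
}
```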

> Failure while using .checkpoint method to private S3 store via S3A connector
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-24273
>                 URL: https://issues.apache.org/jira/browse/SPARK-24273
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 2.3.0
>            Reporter: Jami Malikzade
>            Priority: Major
>
> We are getting following error:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 416, AWS 
> Service: Amazon S3, AWS Request ID: 
> tx000000000000000014126-005ae9bfd9-9ed9ac2-default, AWS Error Code: 
> InvalidRange, AWS Error Message: null, S3 Extended Request ID: 
> 9ed9ac2-default-default"
> when we use checkpoint method as below.
> val streamBucketDF = streamPacketDeltaDF
>  .filter('timeDelta > maxGap && 'timeDelta <= 30000)
>  .withColumn("bucket", when('timeDelta <= mediumGap, "medium")
>  .otherwise("large")
>  )
>  .checkpoint()
> Do you have an idea how to prevent the invalid range header from being sent, or how it 
> can be worked around or fixed?
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
