[
https://issues.apache.org/jira/browse/SPARK-38958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757364#comment-17757364
]
Steve Loughran commented on SPARK-38958:
----------------------------------------
[~hershalb] we are about to merge the v2 sdk feature set; it'd be good for you
to see if your changes work there.
as for static headers, I could imagine something like we added in HADOOP-17833
for adding headers to created files.
# Define a well-known prefix, e.g. {{fs.s3a.request.headers.}}
# every key matching {{fs.s3a.request.headers.*}} becomes a header, with the
configuration value as the header value.
The alternative, as was done for custom signers, is a single comma-separated
list of key=value pairs.
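The two options above could be sketched roughly as follows. This is a
hypothetical illustration only, not committed Hadoop code: the class name,
method names, and the use of a plain {{Map}} in place of Hadoop's
{{Configuration}} are all assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the two proposed ways to turn configuration
// into per-request S3 headers. Names and signatures are assumptions.
public class S3aHeaderConfig {

    // Option 1: every configuration key under a well-known prefix
    // becomes a header name; the configuration value is the header value.
    static Map<String, String> headersFromPrefix(Map<String, String> conf,
                                                 String prefix) {
        Map<String, String> headers = new HashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                headers.put(e.getKey().substring(prefix.length()),
                            e.getValue());
            }
        }
        return headers;
    }

    // Option 2 (the custom-signer style): one configuration value holding
    // a comma-separated list of key=value pairs.
    static Map<String, String> headersFromList(String list) {
        Map<String, String> headers = new HashMap<>();
        if (list == null || list.isEmpty()) {
            return headers;
        }
        for (String pair : list.split(",")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                headers.put(pair.substring(0, eq).trim(),
                            pair.substring(eq + 1).trim());
            }
        }
        return headers;
    }
}
```

The prefix form keeps each header a separate, individually overridable
configuration key; the list form keeps everything in one value but needs
escaping rules if header values can themselves contain commas.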
> Override S3 Client in Spark Write/Read calls
> --------------------------------------------
>
> Key: SPARK-38958
> URL: https://issues.apache.org/jira/browse/SPARK-38958
> Project: Spark
> Issue Type: New Feature
> Components: Spark Core
> Affects Versions: 3.2.1
> Reporter: Hershal
> Priority: Major
>
> Hello,
> I have been working to use spark to read and write data to S3. Unfortunately,
> there are a few S3 headers that I need to add to my spark read/write calls.
> After much looking, I have not found a way to replace the S3 client that
> spark uses to make the read/write calls. I also have not found a
> configuration that allows me to pass in S3 headers. Here is an example of
> some common S3 request headers
> ([https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html]).
> Does functionality already exist to add S3 headers to Spark read/write
> calls, or to pass in a custom client that would send these headers on every
> read/write request? Appreciate the help and feedback.
>
> Thanks,
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]