[ 
https://issues.apache.org/jira/browse/HADOOP-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Björn Boschman updated HADOOP-18078:
------------------------------------
    Attachment:     (was: spark_test.py)

> TemporaryAWSCredentialsProvider has no credentials
> --------------------------------------------------
>
>                 Key: HADOOP-18078
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18078
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.3.1
>         Environment: python:3.9.5
> openjdk:11.0.13
> spark:3.2.0
> hadoop:3.3.1
>            Reporter: Björn Boschman
>            Priority: Major
>         Attachments: spark_test.py
>
>
> Not quite sure how to phrase this bug report, but I'll give it a try.
> We are using a SparkSession to access parquet files on AWS/S3.
> It works if only a single s3a URL is supplied.
> It used to work with several s3a URLs as well - that has been broken since
> hadoop:3.3.1.
>  
>  
> I've attached a sample script, though it relies on Spark and Hadoop being installed.
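For readers without the attachment, below is a minimal sketch of the setup described in the report. The bucket, paths, and credential values are placeholders, and the configuration shown is an assumption about how the reporter wired up TemporaryAWSCredentialsProvider; the actual reproducer is the attached spark_test.py.

    # Sketch of the reported scenario: SparkSession reading parquet from S3A
    # using temporary (session) credentials. Bucket, paths, and credentials
    # below are placeholders, not taken from the original report.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("s3a-temporary-credentials-test")
        # Use the S3A temporary-credentials provider.
        .config("spark.hadoop.fs.s3a.aws.credentials.provider",
                "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
        .config("spark.hadoop.fs.s3a.access.key", "<ACCESS_KEY>")
        .config("spark.hadoop.fs.s3a.secret.key", "<SECRET_KEY>")
        .config("spark.hadoop.fs.s3a.session.token", "<SESSION_TOKEN>")
        .getOrCreate()
    )

    # Reading a single s3a path works; passing several paths at once is what
    # reportedly fails since hadoop 3.3.1 with
    # "TemporaryAWSCredentialsProvider has no credentials".
    single = spark.read.parquet("s3a://example-bucket/data/part-0.parquet")
    multiple = spark.read.parquet(
        "s3a://example-bucket/data/part-0.parquet",
        "s3a://example-bucket/data/part-1.parquet",
    )
    multiple.show()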



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
