[
https://issues.apache.org/jira/browse/HADOOP-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17441782#comment-17441782
]
Steve Loughran commented on HADOOP-17017:
-----------------------------------------
yes, that works until AWS remove path-style access, which is something they
plan to do eventually...
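
The workaround being discussed is presumably forcing path-style requests so the bucket name never appears in the TLS hostname. A minimal sketch of that S3A setting (the `fs.s3a.path.style.access` property is a real Hadoop option; the surrounding file layout is the usual core-site.xml convention):

```xml
<!-- core-site.xml: request buckets as https://s3.region.amazonaws.com/bucket/key
     instead of https://bucket.s3.region.amazonaws.com/key, sidestepping the
     wildcard-certificate mismatch for bucket names containing "." -->
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
</property>
```

As the comment above notes, this only holds for as long as AWS keeps supporting path-style access.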
> S3A client retries on SSL Auth exceptions triggered by "." bucket names
> -----------------------------------------------------------------------
>
> Key: HADOOP-17017
> URL: https://issues.apache.org/jira/browse/HADOOP-17017
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.1
> Reporter: Steve Loughran
> Priority: Minor
>
> If you have a "." in a bucket name (it's allowed!) then virtual-host-style HTTPS
> connections fail with a javax.net.ssl exception, because the wildcard certificate
> covers only a single DNS label. Except we retry, and the
> inner cause is wrapped in generic "client exceptions"
> I'm not going to try and be clever about fixing this, but we should
> * make sure that the inner exception is raised up
> * avoid retries
> * document it in the troubleshooting page.
> * if there is a well-known public "." bucket (Cloudera has some :)) we can test against it
> I have a vague suspicion the AWS SDK is retrying too. Not much we can do there.
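
The failure described above comes from standard certificate wildcard matching, where `*` covers exactly one DNS label. A small illustrative sketch (not Hadoop or AWS SDK code; `wildcard_matches` is a hypothetical helper written for this example):

```python
# Illustrative sketch of RFC 6125-style wildcard matching:
# "*" in a certificate name matches exactly one DNS label,
# never a label that itself contains ".".
def wildcard_matches(pattern: str, hostname: str) -> bool:
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    # A bucket name containing "." adds extra labels, so the
    # label counts no longer line up and the match fails.
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

# A cert for *.s3.amazonaws.com covers a plain bucket name...
print(wildcard_matches("*.s3.amazonaws.com", "mybucket.s3.amazonaws.com"))    # True
# ...but not one containing ".", so the TLS handshake is rejected:
print(wildcard_matches("*.s3.amazonaws.com", "my.bucket.s3.amazonaws.com"))   # False
```

This is why the exception surfaces from the SSL layer rather than from S3 itself, and why retrying cannot help.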
--
This message was sent by Atlassian Jira
(v8.20.1#820001)