[ https://issues.apache.org/jira/browse/HADOOP-18159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554753#comment-17554753 ]
Steve Loughran commented on HADOOP-18159:
-----------------------------------------

bq. I think another good strategy, if this is possible, would be enforcing that this file is always loaded from a specific dependency on the classpath. Maybe adding it to the aws-sdk or httpclient and shading with another name (since this is where the default hostname verifier comes from), could protect us from any issue

I'd say yes, except for the "if this is possible" clause, which I now doubt. In a Hadoop installation, try using FindClass (a little backdoor diagnostics command) to see where this is being picked up:

{code}
hadoop org.apache.hadoop.util.FindClass locate mozilla/public-suffix-list.txt
{code}

{code}
cdh6.3                  common/lib/httpclient-4.5.3.jar
hadoop-3.4.0-SNAPSHOT   common/lib/aws-java-sdk-bundle-1.12.132.jar!/mozilla/public-suffix-list.txt
hadoop 3.3.3            common/lib/httpclient-4.5.13.jar!/mozilla/public-suffix-list.txt
cdh 7.1.x               common/lib/gcs-connector-2.1.2.7.1.8.0-SNAPSHOT-shaded.jar!/mozilla/public-suffix-list.txt
{code}

(why yes, I do have a lot of releases on my laptop...)

Anyway, the location jitters depending on which classes are on the classpath, as even the nominally shaded binaries (aws, gcs) look for the text file at the same resource location. This is *not good*.

I've just updated cloudstore to include the location of this file in its diagnostics: https://github.com/steveloughran/cloudstore/releases/tag/tag-2022-06-15-release-public-suffix-llist

We will see this again; it has probably just been luck so far.
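For reference, the same lookup can be reproduced in a few lines of plain Java; this is only an illustrative sketch (the class name below is made up, and it reports the same thing FindClass does rather than anything new). Listing every copy of the resource visible to the classloader makes the jitter obvious whenever more than one jar bundles the file:

{code:java}
import java.net.URL;
import java.util.Collections;

// Illustrative sketch: print every copy of the public suffix list visible on the classpath.
// The first URL is typically the one a plain getResource() call, and hence httpclient, will load.
public class PublicSuffixListLocator {
  public static void main(String[] args) throws Exception {
    final String resource = "mozilla/public-suffix-list.txt";
    final ClassLoader cl = Thread.currentThread().getContextClassLoader();
    for (URL url : Collections.list(cl.getResources(resource))) {
      System.out.println(url);
    }
  }
}
{code}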
> Certificate doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-18159
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18159
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.3.1
>         Environment: hadoop 3.3.1
>                      httpclient 4.5.13
>                      JDK8
>            Reporter: André F.
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Trying to run any job after bumping our Spark version (which is now using Hadoop 3.3.1) led us to the following exception while reading files on S3:
> {code:java}
> org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on s3a://<bucket>/<path>.parquet: com.amazonaws.SdkClientException: Unable to execute HTTP request: Certificate for <bucket.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]: Unable to execute HTTP request: Certificate for <bucket> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
>   at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:208)
>   at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3351)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.isDirectory(S3AFileSystem.java:4277)
>   at
> {code}
>
> {code:java}
> Caused by: javax.net.ssl.SSLPeerUnverifiedException: Certificate for <bucket.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
>   at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:507)
>   at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:437)
>   at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:384)
>   at com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
>   at com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
>   at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
>   at com.amazonaws.http.conn.$Proxy16.connect(Unknown Source)
>   at com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
>   at com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
>   at com.amazonaws.thirdparty.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
>   at com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
>   at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
>   at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
>   at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1333)
>   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
> {code}
> We found similar problems in the following tickets, but:
> - https://issues.apache.org/jira/browse/HADOOP-17017 (we don't use `.` in our bucket names)
> - [https://github.com/aws/aws-sdk-java-v2/issues/1786] (we tried to override it by using `httpclient:4.5.10` or `httpclient:4.5.8`, with no effect).
>
> We couldn't test it using the native `openssl` configuration due to our setup, so we would like to stick with the Java SSL implementation, if possible.
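On the hostname verification itself: httpclient's default verifier is built on whichever copy of the public suffix list wins on the classpath, and it consults that list when matching wildcard identities, which is how *.s3.amazonaws.com can end up rejected for <bucket>.s3.amazonaws.com if the loaded list treats s3.amazonaws.com as a registry suffix. A small probe to check what the JVM actually loaded; this is a sketch against the unshaded httpclient 4.5.x classes (the aws-sdk-bundle ships the same code shaded under com.amazonaws.thirdparty.apache.http), and the class and bucket names below are made up for illustration:

{code:java}
import org.apache.http.conn.ssl.DefaultHostnameVerifier;
import org.apache.http.conn.util.PublicSuffixMatcher;
import org.apache.http.conn.util.PublicSuffixMatcherLoader;

// Illustrative probe: report how the loaded public suffix list treats the S3 endpoint
// names that appear in the stack trace above.
public class SuffixListProbe {
  public static void main(String[] args) {
    // Loads mozilla/public-suffix-list.txt from whichever jar wins on the classpath.
    PublicSuffixMatcher matcher = PublicSuffixMatcherLoader.getDefault();

    // If this prints true, the loaded list treats s3.amazonaws.com as a suffix, and
    // wildcard matching of *.s3.amazonaws.com against <bucket>.s3.amazonaws.com is affected.
    System.out.println("s3.amazonaws.com matches the suffix list: "
        + matcher.matches("s3.amazonaws.com"));
    System.out.println("domain root of mybucket.s3.amazonaws.com: "
        + matcher.getDomainRoot("mybucket.s3.amazonaws.com"));

    // This is the verifier that stock httpclient builds on top of the matcher by default.
    DefaultHostnameVerifier verifier = new DefaultHostnameVerifier(matcher);
    System.out.println("default hostname verifier: " + verifier);
  }
}
{code}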