steveloughran commented on issue #970: HADOOP-16371: Option to disable GCM for 
SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-531754074
 
 
   The patch is coming together nicely; nearly there. I've done CLI tests as well as the hadoop-aws suite.
   
   
   A big fear of mine is that the current patch will fail, through transitive references, if the wildfly JAR isn't on the classpath.
   
   But I couldn't actually create that failure condition when I tried on the 
CLI. 
   
   First, I extended my patched cloudstore s3a diagnostics to look for the new class:
   
   ```
   class: org.wildfly.openssl.OpenSSLProvider
          Not found on classpath: org.wildfly.openssl.OpenSSLProvider
   ```
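   That diagnostics probe boils down to a `Class.forName` check. A minimal sketch of the idea; the class and method names here are mine for illustration, not cloudstore's actual API:

   ```java
   // Illustrative sketch of a classpath probe like the cloudstore diagnostics
   // check above; names are hypothetical, not cloudstore's own.
   public class ClasspathProbe {

     /** Return true iff the named class can be loaded from the current classpath. */
     public static boolean isOnClasspath(String classname) {
       try {
         Class.forName(classname);
         return true;
       } catch (ClassNotFoundException | NoClassDefFoundError e) {
         return false;
       }
     }

     public static void main(String[] args) {
       String wildfly = "org.wildfly.openssl.OpenSSLProvider";
       System.out.println("class: " + wildfly);
       if (!isOnClasspath(wildfly)) {
         System.out.println("       Not found on classpath: " + wildfly);
       }
     }
   }
   ```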
   
   Tested IO against a store: all good.
   
   And when I switch to an unsupported mode, I get the expected stack trace:
   ```
   2019-09-16 13:06:11,124 [main] INFO  diag.StoreDiag 
(DurationInfo.java:<init>(53)) - Starting: Creating filesystem 
s3a://hwdev-steve-ireland-new/
   2019-09-16 13:06:11,683 [main] INFO  diag.StoreDiag 
(DurationInfo.java:close(100)) - Creating filesystem 
s3a://hwdev-steve-ireland-new/: duration 0:00:561
   java.lang.UnsupportedOperationException: S3A does not support setting 
fs.s3a.ssl.channel.mode OpenSSL or Default
        at 
org.apache.hadoop.fs.s3a.impl.NetworkBinding.bindSSLChannelMode(NetworkBinding.java:86)
        at 
org.apache.hadoop.fs.s3a.S3AUtils.initProtocolSettings(S3AUtils.java:1266)
        at 
org.apache.hadoop.fs.s3a.S3AUtils.initConnectionSettings(S3AUtils.java:1230)
        at org.apache.hadoop.fs.s3a.S3AUtils.createAwsConf(S3AUtils.java:1211)
        at 
org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:58)
        at 
org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:543)
        at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:364)
        at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3370)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
        at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3419)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3387)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:502)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
        at 
org.apache.hadoop.fs.store.diag.StoreDiag.executeFileSystemOperations(StoreDiag.java:860)
        at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:409)
        at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:353)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
        at org.apache.hadoop.fs.store.diag.StoreDiag.exec(StoreDiag.java:1163)
        at org.apache.hadoop.fs.store.diag.StoreDiag.main(StoreDiag.java:1172)
        at storediag.main(storediag.java:25)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
   2019-09-16 13:06:11,685 [main] INFO  util.ExitUtil (ExitUtil.java:t
   ```
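   For reference, the mode switch was via the new channel mode option; the key name is the one in the stack trace above, though the exact value spelling may differ from this sketch:

   ```xml
   <!-- illustrative: value casing may differ in the final patch -->
   <property>
     <name>fs.s3a.ssl.channel.mode</name>
     <value>OpenSSL</value>
   </property>
   ```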
   
   Which suggests my fears are misguided?
   
   What do others think?
   
   BTW @bgaborg, I've been having problems with STS tests too; try setting a region for the endpoint. I'm starting to suspect the latest SDK needs this now.
