[ 
https://issues.apache.org/jira/browse/HADOOP-16410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16879134#comment-16879134
 ] 

Jose Luis Pedrosa commented on HADOOP-16410:
--------------------------------------------

Hi [[email protected]]

I was just describing how I managed to solve it; I don't know whether this is the 
right way to fix it in Hadoop. If you say it's a duplicate and that is the 
right solution, I'm perfectly happy with it. 

Do you want me to close this ticket?

 

> Hadoop 3.2 azure jars incompatible with alpine 3.9
> --------------------------------------------------
>
>                 Key: HADOOP-16410
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16410
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Jose Luis Pedrosa
>            Priority: Minor
>
>  The openjdk8 image is based on Alpine 3.9, which means the shipped version of 
> libssl is 1.1.1b-r1:
>   
> {noformat}
> sh-4.4# apk list | grep ssl
> libssl1.1-1.1.1b-r1 x86_64 {openssl} (OpenSSL) [installed] 
> {noformat}
> The Hadoop distribution ships wildfly-openssl-1.0.4.Final.jar, which is affected by 
> [https://issues.jboss.org/browse/JBEAP-16425].
> This results in runtime errors (using Spark as an example):
> {noformat}
> 2019-07-04 22:32:40,339 INFO openssl.SSL: WFOPENSSL0002 OpenSSL Version 
> OpenSSL 1.1.1b 26 Feb 2019
> 2019-07-04 22:32:40,363 WARN streaming.FileStreamSink: Error while looking 
> for metadata directory.
> Exception in thread "main" java.lang.NullPointerException
>  at 
> org.wildfly.openssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:284)
> {noformat}
> In my tests, creating a Docker image with an updated version of wildfly-openssl 
> (1.0.7.Final) solves the issue; a sketch of such an image follows.
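> A minimal Dockerfile sketch of that workaround, assuming an Alpine-based Spark/Hadoop 3.2 
> image; the base image name and the /opt/spark/jars directory below are illustrative, and 
> the download URL follows the usual Maven Central layout for org.wildfly.openssl:
> {noformat}
> # Illustrative base image; substitute the Spark/Hadoop 3.2 image actually in use
> FROM my-spark-hadoop-3.2:latest
>
> # Drop the wildfly-openssl 1.0.4.Final jar shipped with Hadoop 3.2
> # (the jar location depends on the distribution layout; /opt/spark/jars is an assumption)
> RUN rm /opt/spark/jars/wildfly-openssl-1.0.4.Final.jar
>
> # Pull in 1.0.7.Final, which is not affected by JBEAP-16425
> ADD https://repo1.maven.org/maven2/org/wildfly/openssl/wildfly-openssl/1.0.7.Final/wildfly-openssl-1.0.7.Final.jar /opt/spark/jars/
>
> # Files added from a URL default to 0600; make the jar world-readable for non-root runs
> RUN chmod 644 /opt/spark/jars/wildfly-openssl-1.0.7.Final.jar
> {noformat}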
>  
>  



