[ 
https://issues.apache.org/jira/browse/HADOOP-16346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16856714#comment-16856714
 ] 

Steve Loughran edited comment on HADOOP-16346 at 6/5/19 1:27 PM:
-----------------------------------------------------------------

I'm rolling back the openSSL support patches 5906268 and b067f8a and retesting 
hadoop-aws and hadoop-azure, including CLI validation.

I appreciate the speedups offered, but the patch as it was doesn't work, and 
problems were piling up, with HADOOP-16347 and the CLI failures being the 
show-stoppers. I get to field the support calls related to all this stuff, and 
saying "have you tried setting this undocumented option to JSSE_DEFAULT?" is 
already getting tiring. So is having to use a local build of hadoop-3.2 just so 
I can use the "hadoop fs" command. 

This is what I need before accepting any revised version of the original patch:

# an identification of the cause of HADOOP-16347, a fix, and test results to 
show that the fix works. This will have to be on a real test cluster, not a 
single-machine minicluster.
# all attempts to load native libs or wildfly JARs use reflection, so the code 
works when the optional JAR is absent
# no warning messages unless the user has explicitly asked to use wildfly
# default mode to be JDK until we are all satisfied that everything works. 
# documentation to cover the topic and to declare that this is experimental, 
that the JAR and native lib are required, etc.
# core-site adds new option with default value.
# full hadoop-aws scale tests, including with: assume role and kms testing; 
dynamodb in auth and non-auth.
# evidence of a build and test run on a system without the native libs
# evidence that a build of the CLI ({{mvn package -Pdist -DskipTests 
-Dmaven.javadoc.skip=true -DskipShade}}) produces a hadoop dist where {{hadoop 
fs -ls s3a://landsat-pds/}} actually returns a listing rather than a stack 
trace, even when the wildfly JAR isn't on the CP.
# a spark build with {{-Phadoop-cloud}} produces a distribution which will list 
an s3a store even though it doesn't have wildfly on the CP
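To illustrate the reflection requirement in item 2, the optional-dependency probe could be sketched roughly as below. This is a minimal sketch, not the patch's actual code; the entry-point class name is my assumption about the wildfly-openssl JAR, used only for illustration.

```java
// Sketch of item 2: probe for the optional wildfly-openssl JAR via
// reflection, so nothing breaks when it is absent from the classpath.
// The class name below is an assumption, for illustration only.
public class OptionalSslLoader {

    private static final String WILDFLY_CLASS =
            "org.wildfly.openssl.OpenSSLProvider";

    /** @return true iff the optional wildfly JAR is on the classpath. */
    public static boolean wildflyAvailable() {
        try {
            // initialize=false: only check presence, don't run static init
            Class.forName(WILDFLY_CLASS, false,
                    OptionalSslLoader.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            // JAR absent or unloadable: fall back quietly to the JDK's JSSE.
            // Per item 3, no warning unless the user asked for openssl.
            return false;
        }
    }

    public static void main(String[] args) {
        // prints "jsse" when the optional JAR is not on the classpath
        System.out.println(wildflyAvailable() ? "openssl" : "jsse");
    }
}
```

The same probe would gate the log messages in item 3: warn only when the user explicitly selected openssl and the probe failed.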

This is a lot, but given the issues identified, it's what constitutes the 
due diligence needed to say "ready for use".

HADOOP-16347 is probably the hard one. I have no idea where to begin there.
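For the core-site item above, the entry would look something like the sketch below. The property name and value set are hypothetical, inferred from the JSSE_DEFAULT hint in this thread, not taken from the patch:

```xml
<!-- Sketch only: property name and values are assumptions. -->
<property>
  <name>fs.s3a.ssl.channel.mode</name>
  <value>default_jsse</value>
  <description>
    SSL implementation used by S3A. Experimental; the openssl mode
    requires the wildfly JAR and the native OpenSSL library on the host.
  </description>
</property>
```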




> Stabilize S3A OpenSSL support
> -----------------------------
>
>                 Key: HADOOP-16346
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16346
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.0
>            Reporter: Steve Loughran
>            Priority: Blocker
>
> HADOOP-16050 switched S3A to trying to use OpenSSL. We need to make sure this 
> is stable, that people know it exists and aren't left wondering why things 
> which did work have now stopped. Which, given I know who will end up with 
> those support calls, is not something I want.
> * Set the default back to the original JDK version.
> * Document how to change this so you don't need to use an IDE to work out 
> what other values are allowed
> * core-default.xml to include the default value and the text listing the 
> other options.
> + anything else



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
