[
https://issues.apache.org/jira/browse/HADOOP-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17073379#comment-17073379
]
Luca Toscano edited comment on HADOOP-16647 at 4/2/20, 5:09 AM:
----------------------------------------------------------------
[~weichiu] I'll explain my points, which were all about openssl 1.1.1
compatibility:
* When I asked to run hadoop checknative as a test, the error reported was
about EVP_CIPHER_CTX_cleanup, which in my experience means that HADOOP-14597
is not applied, so I asked [~rakeshr] what the testing environment was.
* Judging from the error you reported about CRYPTO_num_locks, it seems to me
that the issue is in OpensslSecureRandom.c, since it explicitly uses that
function for locking purposes. Due to
[https://github.com/openssl/openssl/issues/1260], I see that the num_lock
function has been moved to
[crypto.h|https://github.com/openssl/openssl/blob/OpenSSL_1_1_1-stable/include/openssl/crypto.h#L212-L216]
in openssl 1.1.1 and is now a no-op, since the functionality is no longer
supported (together with other locking-related functions). My suggestion was
to verify how to change OpensslSecureRandom.c's [locking
code|https://github.com/apache/hadoop/blob/branch-2.10/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c#L203]
to avoid using those functions and use something else instead.
In theory openssl 1.1.0 and 1.0.2 should already be supported; if my
understanding is correct, we'd need to change the code only when openssl
1.1.1 is used.
> Support OpenSSL 1.1.1 LTS
> -------------------------
>
> Key: HADOOP-16647
> URL: https://issues.apache.org/jira/browse/HADOOP-16647
> Project: Hadoop Common
> Issue Type: Task
> Components: security
> Reporter: Wei-Chiu Chuang
> Assignee: Rakesh Radhakrishnan
> Priority: Critical
> Attachments: HADOOP-16647-00.patch
>
>
> See Hadoop user mailing list
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201910.mbox/%3CCADiq6%3DweDFxHTL_7eGwDNnxVCza39y2QYQTSggfLn7mXhMLOdg%40mail.gmail.com%3E
> Hadoop 2 supports OpenSSL 1.0.2.
> Hadoop 3 supports OpenSSL 1.1.0 (HADOOP-14597) and I believe 1.0.2 too.
> Per OpenSSL blog https://www.openssl.org/policies/releasestrat.html
> * 1.1.0 is EOL 2019/09/11
> * 1.0.2 is EOL 2019/12/31
> * 1.1.1 is EOL 2023/09/11 (LTS)
> Many Hadoop installations rely on the OpenSSL package provided by Linux
> distros, but it's not clear to me whether Linux distros are going to support
> 1.1.0/1.0.2 beyond those dates.
> We should make sure Hadoop works with OpenSSL 1.1.1, and document the
> supported OpenSSL versions. Filing this jira to test/document/fix bugs.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]