Sönke Liebau created HADOOP-18882:
-------------------------------------
Summary: HDFS defaults TLS cipher to "no encryption" when keystore
password is unset or empty
Key: HADOOP-18882
URL: https://issues.apache.org/jira/browse/HADOOP-18882
Project: Hadoop Common
Issue Type: Bug
Components: security
Affects Versions: 3.3.4
Environment: We saw this issue when running in a Kubernetes
environment.
Hadoop was deployed using the [Stackable Operator for Apache
Hadoop|https://github.com/stackabletech/hdfs-operator].
The binaries contained in the deployed images are taken from the ASF mirrors,
not self-compiled.
Reporter: Sönke Liebau
It looks like some HDFS servers default the cipher suite to one that does not
encrypt traffic when the keystore password is unset or set to an empty string.
Historically this has probably not been much of an issue, as Java `keytool`
refuses to create a keystore with a password of fewer than 6 characters, so
people usually had to set passwords on their keystores (and hence in the config).
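Password-less keystores do turn up in practice, though, typically because they are generated by tooling rather than by `keytool`. As a rough illustration of how such a file can come about, here is a minimal sketch using only the JDK `KeyStore` API (the `keystore.p12` filename is just an example):

```java
import java.io.FileOutputStream;
import java.security.KeyStore;

public class PasswordlessKeystore {
    public static void main(String[] args) throws Exception {
        // keytool enforces a store password of at least 6 characters, but the
        // KeyStore API itself will happily write a PKCS12 file protected by an
        // empty password; keystores like this are usually produced by tooling
        // (an operator, cert-manager, openssl, ...) rather than by keytool.
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, null);                        // start from an empty store
        try (FileOutputStream out = new FileOutputStream("keystore.p12")) {
            ks.store(out, new char[0]);             // empty ("") store password
        }
    }
}
```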
When using such a keystore without a password, we noticed that HDFS fails to
load keys from it when `ssl.server.keystore.password` is unset or set to an
empty string - and, instead of erroring out, it sets the cipher suite for RPC
connections to `TLS_NULL_WITH_NULL_NULL`, which is essentially TLS without any
encryption.
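As far as we can tell, the reported value matches generic JSSE behaviour rather than anything Hadoop-specific: a session that never negotiated a cipher reports the NULL placeholder suite. A minimal sketch using only the JDK (not Hadoop's actual code path) that prints that placeholder:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class NullCipherDemo {
    public static void main(String[] args) throws Exception {
        // A context without any explicitly configured key material, roughly the
        // situation a server ends up in when loading keys from the keystore
        // silently fails.
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, null, null);

        SSLEngine engine = ctx.createSSLEngine();
        engine.setUseClientMode(false);

        // Until a handshake has actually completed, JSSE reports the NULL
        // placeholder suite (SSL_NULL_WITH_NULL_NULL, JSSE's spelling of
        // TLS_NULL_WITH_NULL_NULL) for the session.
        System.out.println(engine.getSession().getCipherSuite());
    }
}
```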
The impact varies depending on which communication channel we looked at. What
we saw was:
* JournalNodes seem to happily go along with this, and NameNodes equally
happily connect to the JournalNodes without any warnings - we do have TLS
enabled, after all :)
* NameNodes refuse connections with a handshake exception, so the real-world
impact of this should hopefully be small, but it still seems like less than
ideal behavior (a small client-side check is sketched below).
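For anyone who wants to double-check what a given endpoint actually negotiates, a small stand-alone client like the following can be pointed at the port in question. The host, port, and the assumption that the cluster CA is in the client's default truststore are illustrative only:

```java
import javax.net.ssl.SSLHandshakeException;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class CipherCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint, e.g. a JournalNode or NameNode HTTPS port.
        String host = args.length > 0 ? args[0] : "journalnode.example.com";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 8481;

        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, port)) {
            socket.startHandshake();
            // A correctly configured server negotiates a real cipher suite here.
            System.out.println("negotiated: " + socket.getSession().getCipherSuite());
        } catch (SSLHandshakeException e) {
            // A server that fell back to the NULL suite has nothing in common
            // with a default JSSE client (which never offers NULL suites), so
            // the handshake fails instead of negotiating - matching what we saw.
            System.out.println("handshake failed: " + e.getMessage());
        }
    }
}
```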