Team,

The DataNode fails to restart after configuring the credential provider to
store credentials in HDFS (jceks://hdfs@hostname:9001/credential/keys.jceks).

We get a StackOverflowError in the DataNode's jsvc.out file, similar to
HADOOP-11934 <https://issues.apache.org/jira/browse/HADOOP-11934>.

As per the documentation
<https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html#Supported_Features>,
we support storing credentials in HDFS:

*URI jceks://file|hdfs/path-to-keystore, is used to retrieve credentials
from a Java keystore. The underlying use of the Hadoop filesystem
abstraction allows credentials to be stored on the local filesystem or
within HDFS.*
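
For reference, this is roughly how a component resolves a credential
through the provider path (a minimal sketch; the alias name and the
host/port in the URI are just placeholders based on our setup):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class CredentialLookupExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Same style of provider URI as in our cluster.
    conf.set("hadoop.security.credential.provider.path",
        "jceks://hdfs@hostname:9001/credential/keys.jceks");

    // getPassword() walks the provider path; for a jceks://hdfs provider
    // this means opening the HDFS filesystem to read keys.jceks.
    char[] secret = conf.getPassword("ssl.server.keystore.password");
    System.out.println(secret == null ? "alias not found" : "credential resolved");
  }
}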

Consider a scenario where all of our DataNodes are down and
hadoop.security.credential.provider.path points to an HDFS location. When a
DataNode restarts and calls FileSystem.get(), resolving the credential
provider needs HDFS again, so we end up in a recursive call while HDFS is
inaccessible (see the toy sketch after the snippet below).


/**
 * Check and set 'configuration' if necessary.
 *
 * @param theObject object for which to set configuration
 * @param conf Configuration
 */
public static void setConf(Object theObject, Configuration conf) {
  if (conf != null) {
    if (theObject instanceof Configurable) {
      ((Configurable) theObject).setConf(conf);
    }
    setJobConf(theObject, conf);
  }
}
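
To make the cycle concrete, here is a toy model (not Hadoop code, just an
illustration of the mutual dependency described above):

public class CredentialRecursionDemo {
  static void openFileSystem() {
    // DataNode startup: FileSystem.get() applies the configuration,
    // which resolves the credential provider path ...
    resolveCredentials();
  }

  static void resolveCredentials() {
    // ... but a jceks://hdfs provider must itself open the HDFS
    // filesystem to read keys.jceks, so we are back where we started.
    openFileSystem();
  }

  public static void main(String[] args) {
    try {
      openFileSystem();
    } catch (StackOverflowError e) {
      System.out.println("StackOverflowError, as seen in jsvc.out");
    }
  }
}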


There are no issues if we store the credentials on the local filesystem
(localjceks://file); the problem occurs only with jceks://hdfs/.

Can I change the Hadoop documentation to say that storing credentials in
HDFS is not supported? Or shall I handle this scenario only for the
startup issue?
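
If handling it only at startup sounds reasonable, one rough, untested
sketch (class and method names are mine) would be to strip hdfs-backed
providers from the configuration the DataNode uses before its first
FileSystem.get():

import org.apache.hadoop.conf.Configuration;

public class StartupProviderFilter {
  private static final String PROVIDER_PATH_KEY =
      "hadoop.security.credential.provider.path";

  // Returns a copy of conf whose provider path keeps only providers that
  // do not point back into HDFS (e.g. localjceks://file, jceks://file).
  public static Configuration excludeHdfsProviders(Configuration conf) {
    String path = conf.get(PROVIDER_PATH_KEY);
    if (path == null) {
      return conf;
    }
    StringBuilder kept = new StringBuilder();
    for (String provider : path.split(",")) {
      if (!provider.trim().startsWith("jceks://hdfs")) {
        if (kept.length() > 0) {
          kept.append(',');
        }
        kept.append(provider.trim());
      }
    }
    Configuration filtered = new Configuration(conf);
    if (kept.length() == 0) {
      filtered.unset(PROVIDER_PATH_KEY);
    } else {
      filtered.set(PROVIDER_PATH_KEY, kept.toString());
    }
    return filtered;
  }
}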


Thanks,
Karthik
