Andre,

It definitely seems weird that using embedded ZooKeeper would somehow
cause this.

One thing I can say though, is that in order to get into the code in your
stacktrace, it had to pass through SecurityUtil.isSecurityEnabled(config)
which does the following:

public static boolean isSecurityEnabled(final Configuration config) {
    Validate.notNull(config);
    return "kerberos".equalsIgnoreCase(config.get("hadoop.security.authentication"));
}

The Configuration instance passed in is created using the default
constructor (Configuration config = new Configuration();), and then any
files/paths entered into the processor's resources property are added to
the config.

So in order for isSecurityEnabled to return true, the config instance
somehow got "hadoop.security.authentication" set to "kerberos", which
usually only happens if you put a core-site.xml on the classpath with that
value set.

Is it possible some JAR from the MapR dependencies has a core-site.xml
embedded in it?
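
If you want to check, one quick way is to enumerate every core-site.xml
the classloader can see, since that is exactly what new Configuration()
picks up. This is just a sketch (the class name is made up, and you would
need to run it with the same classpath as the processor):

```java
import java.net.URL;
import java.util.Enumeration;

// Hypothetical diagnostic: list every copy of core-site.xml visible on
// the classpath. A jar:file:...!/core-site.xml URL in the output points
// at the JAR that bundles it.
public class FindCoreSite {
    public static void main(String[] args) throws Exception {
        Enumeration<URL> urls =
            FindCoreSite.class.getClassLoader().getResources("core-site.xml");
        if (!urls.hasMoreElements()) {
            System.out.println("No core-site.xml on the classpath");
        }
        while (urls.hasMoreElements()) {
            System.out.println(urls.nextElement());
        }
    }
}
```

If any of the printed URLs point inside one of the MapR JARs, that would
explain where the "kerberos" value is coming from.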

-Bryan

On Wed, Oct 26, 2016 at 6:09 AM, Andre <[email protected]> wrote:

> Hi there,
>
> I've noticed an odd behavior when using embedded ZooKeeper on a NiFi cluster
> with MapR-compatible processors:
>
> I noticed that every time I enable embedded zookeeper, NiFi's HDFS
> processors (e.g. PutHDFS) start complaining about Kerberos identities:
>
> 2016-10-26 20:07:22,376 ERROR [StandardProcessScheduler Thread-2]
> o.apache.nifi.processors.hadoop.PutHDFS
> java.io.IOException: Login failure for princical@REALM-NAME-GOES-HERE from
> keytab /path/to/keytab_file/nifi.keytab
>         at
> org.apache.hadoop.security.UserGroupInformation.
> loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1084)
> ~[hadoop-common-2.7.0-mapr-1602.jar:na]
>         at
> org.apache.nifi.hadoop.SecurityUtil.loginKerberos(SecurityUtil.java:52)
> ~[nifi-hadoop-utils-1.0.0.jar:1.0.0]
>         at
> org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.
> resetHDFSResources(AbstractHadoopProcessor.java:285)
> ~[nifi-hdfs-processors-1.0.0.jar:1.0.0]
>         at
> org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.
> abstractOnScheduled(AbstractHadoopProcessor.java:213)
> ~[nifi-hdfs-processors-1.0.0.jar:1.0.0]
>         at
> org.apache.nifi.processors.hadoop.PutHDFS.onScheduled(PutHDFS.java:181)
> [nifi-hdfs-processors-1.0.0.jar:1.0.0]
>
> So far so good, these errors are quite familiar to people using NiFi
> against secure MapR clusters and caused by issues around the custom JAAS
> settings required by Java applications relying on the MapR client to work.
>
> The normal workaround for this would be instructing NiFi where to find the
> JAAS settings via bootstrap.conf [1]:
>
> $grep jaas
> java.arg.15=-Djava.security.auth.login.config=./conf/nifi-jaas.conf
>
> The contents of nifi-jaas.conf are a copy of the relevant MapR JAAS stanza:
>
> While the workaround seems to work (still doing tests) I ask:
>
> Should setting
>
> nifi.state.management.embedded.zookeeper.start=true
>
> cause this behavior?
>
> Cheers
>
