So right now the /nifi znode exists with the ACL you showed earlier,
'digest,'nifi:the-passwd-digest', but '/nifi/components' doesn't
exist yet?

The one difference from a code perspective is that /nifi and the cluster
nodes are created by Curator, while the state provider uses plain ZK
client code, although there's no reason why that should matter.

I'm no ZK expert, but the code causing the error is a call to
"create(path, data, acls, CreateMode.PERSISTENT)", where "acls" is
Ids.CREATOR_ALL_ACL, a one-element list whose Id is:
/**
 * This Id is only usable to set ACLs. It will get substituted with the
 * Id's the client authenticated with.
 */
public final Id AUTH_IDS = new Id("auth", "");
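For the digest scheme specifically (the one shown in your getAcl output), the id that ends up stored in the ACL is "user:base64(SHA-1(user:password))". A stdlib-only sketch of that computation, mirroring what ZooKeeper's DigestAuthenticationProvider.generateDigest does (the password here is hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class ZkDigest {
    // ZooKeeper's digest scheme stores "<user>:" + base64(SHA-1("<user>:<password>")).
    // This mirrors DigestAuthenticationProvider.generateDigest.
    static String generateDigest(String idPassword) throws Exception {
        String user = idPassword.split(":", 2)[0];
        byte[] sha = MessageDigest.getInstance("SHA-1")
                .digest(idPassword.getBytes(StandardCharsets.UTF_8));
        return user + ":" + Base64.getEncoder().encodeToString(sha);
    }

    public static void main(String[] args) throws Exception {
        // "secret" is a hypothetical password, just for illustration.
        System.out.println(generateDigest("nifi:secret"));
    }
}
```

Note that when the session authenticates via SASL rather than addauth digest, the substituted Id uses the "sasl" scheme with the bare principal instead.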

Any ZK client code should be seeing the same JAAS entry you configured, so
I'm not sure how it could be authenticating as different identities.
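For reference, a minimal jaas.conf Client entry for ZK's DIGEST-MD5 SASL mechanism (values are placeholders), which both Curator and the plain ZK client should pick up from the file pointed at by -Djava.security.auth.login.config:

```
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="nifi"
    password="<passwd>";
};
```

As you noted, the password goes in here in plaintext, not as a digest.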


On Mon, Jul 6, 2020 at 2:27 PM dan young <[email protected]> wrote:

> Correct, the leader seems to work, but not the components..... Is
> there some additional config setting I might be missing?
>
>
>
> <stateManagement>
>    <local-provider>
>       <id>local-provider</id>
>
> <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
>       <property
> name="Directory">/opt/nifi-configuration-resources/state/local</property>
>       <property name="Always Sync">false</property>
>       <property name="Partitions">16</property>
>       <property name="Checkpoint Interval">2 mins</property>
>    </local-provider>
>    <cluster-provider>
>       <id>zk-provider</id>
>
> <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
>       <property name="Root Node">/nifi</property>
>       <property name="Session Timeout">30 seconds</property>
>       <property name="Access Control">CreatorOnly</property>
>       <property name="Connect
> String">xx.xxx.x.xxx:2181,xx.xxx.x.xxx:2181,xx.xxx.x.xxx:2181</property>
>    </cluster-provider>
> </stateManagement>
>
>
>
>
> 2020-07-06 18:25:04,830 ERROR [Timer-Driven Process Thread-3]
> o.a.n.p.standard.GenerateTableFetch
> GenerateTableFetch[id=2a056fd7-b63c-33b4-a5c4-bf767c1a2983]
> GenerateTableFetch[id=2a056fd7-b63c-33b4-a5c4-bf767c1a2983] failed to
> update State Manager, observed maximum values will not be recorded. Also,
> any generated SQL statements may be duplicated.: java.io.IOException:
> Failed to set cluster-wide state in ZooKeeper for component with ID
> 2a056fd7-b63c-33b4-a5c4-bf767c1a2983
> java.io.IOException: Failed to set cluster-wide state in ZooKeeper for
> component with ID 2a056fd7-b63c-33b4-a5c4-bf767c1a2983
>         at
> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:343)
>         at
> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:283)
>         at
> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:228)
>         at
> org.apache.nifi.controller.state.manager.StandardStateManagerProvider$1.setState(StandardStateManagerProvider.java:298)
>         at
> org.apache.nifi.controller.state.StandardStateManager.setState(StandardStateManager.java:79)
>         at
> org.apache.nifi.controller.lifecycle.TaskTerminationAwareStateManager.setState(TaskTerminationAwareStateManager.java:64)
>         at
> org.apache.nifi.processors.standard.GenerateTableFetch.onTrigger(GenerateTableFetch.java:555)
>         at
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
>         at
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
>         at
> org.apache.nifi.controller.scheduling.QuartzSchedulingAgent$2.run(QuartzSchedulingAgent.java:151)
>         at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.zookeeper.KeeperException$InvalidACLException:
> KeeperErrorCode = InvalidACL for
> /nifi/components/2a056fd7-b63c-33b4-a5c4-bf767c1a2983
>         at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:128)
>         at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>         at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
>         at
> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.createNode(ZooKeeperStateProvider.java:360)
>         at
> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:321)
>         ... 17 common frames omitted
>
>
>
>
> On Mon, Jul 6, 2020 at 11:59 AM Bryan Bende <[email protected]> wrote:
>
>> You set <property name="Access Control">CreatorOnly</property> in the ZK
>> state manager ?
>>
>> On Mon, Jul 6, 2020 at 1:40 PM dan young <[email protected]> wrote:
>>
>>> Fat fingered... any insight into this error with the GenerateTableFetch
>>> processor?
>>>
>>> Failed to set cluster-wide state in Zookeeper...
>>> ...
>>> ...
>>>
>>> Caused by: org.apache.zookeeper.KeeperException$InvalidACLException:
>>> KeeperErrorCode = InvalidACL for
>>> /nifi/components/2a056fd7-b63c-33b4-a5c4-bf767c1a2983
>>>         at
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:128)
>>>         at
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>>>         at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
>>>         at
>>> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.createNode(ZooKeeperStateProvider.java:360)
>>>         at
>>> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:321)
>>>         ... 17 common frames omitted
>>>
>>>
>>> On Mon, Jul 6, 2020 at 11:39 AM dan young <[email protected]> wrote:
>>>
>>>> Hello Bryan,
>>>>
>>>> Making some progress.... any insight into this error with the
>>>> GenerateTableFetch processor?
>>>>
>>>>
>>>> On Mon, Jul 6, 2020 at 10:47 AM Bryan Bende <[email protected]> wrote:
>>>>
>>>>> Have you configured this in nifi.properties?
>>>>>
>>>>> nifi.zookeeper.auth.type=sasl
>>>>>
>>>>>
>>>>> On Mon, Jul 6, 2020 at 12:43 PM dan young <[email protected]> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> And a follow up on this: if I delete the znode in ZooKeeper, the
>>>>>> leader is written to the /nifi znode, but the ACL is open,
>>>>>> 'world,'anyone....  I do have Access Control set to CreatorOnly in the
>>>>>> state-management.xml.  So one question: is CreatorOnly only supported
>>>>>> when we run in a Kerberos env?
>>>>>>
>>>>>> Dano
>>>>>>
>>>>>> On Mon, Jul 6, 2020 at 10:36 AM dan young <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Hello everyone,
>>>>>>>
>>>>>>> I'm trying to configure the ZooKeeper state provider in NiFi to use
>>>>>>> the Access Policy of CreatorOnly vs Open, using DIGEST vs Kerberos.  I
>>>>>>> believe I've set up ZooKeeper correctly for this, and partly NiFi, but
>>>>>>> when I start up the NiFi cluster, we seem to get stuck with the
>>>>>>> following:
>>>>>>>
>>>>>>> 2020-07-06 16:06:20,826 WARN [Clustering Tasks Thread-1]
>>>>>>> o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
>>>>>>> org.apache.nifi.cluster.protocol.ProtocolException: Cannot send 
>>>>>>> heartbeat
>>>>>>> because there is no Cluster Coordinator currently elected
>>>>>>> 2020-07-06 16:06:35,920 WARN [Clustering Tasks Thread-2]
>>>>>>> o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
>>>>>>> org.apache.nifi.cluster.protocol.ProtocolException: Cannot send 
>>>>>>> heartbeat
>>>>>>> because there is no Cluster Coordinator currently elected
>>>>>>> 2020-07-06 16:06:50,923 WARN [Clustering Tasks Thread-2]
>>>>>>> o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
>>>>>>> org.apache.nifi.cluster.protocol.ProtocolException: Cannot send 
>>>>>>> heartbeat
>>>>>>> because there is no Cluster Coordinator currently elected
>>>>>>> 2020-07-06 16:07:06,071 WARN [Clustering Tasks Thread-2]
>>>>>>> o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
>>>>>>> org.apache.nifi.cluster.protocol.ProtocolException: Cannot send 
>>>>>>> heartbeat
>>>>>>> because there is no Cluster Coordinator currently elected
>>>>>>>
>>>>>>> I can see the znode in zookeeper, and it appears to at least have
>>>>>>> the correct permissions.  I created this znode in the CLI:
>>>>>>>
>>>>>>> addauth digest nifi:<passwd>
>>>>>>> create /nifi data digest:nifi:<passwd digest>:cdrwa
>>>>>>>
>>>>>>> The digest was generated via:
>>>>>>>
>>>>>>> java -cp
>>>>>>> '/opt/zookeeper/lib/zookeeper-3.5.8.jar:/opt/zookeeper/lib/slf4j-api-1.7.25.jar'
>>>>>>> org.apache.zookeeper.server.auth.DigestAuthenticationProvider nifi:<passwd>
>>>>>>>
>>>>>>> [zk: nifi1-5:2181,nifi2-5:2181,nifi3-5:2181(CONNECTED) 4] getAcl
>>>>>>> /nifi
>>>>>>> 'digest,'nifi:the-passwd-digest'
>>>>>>> : cdrwa
>>>>>>>
>>>>>>>
>>>>>>> After starting up NiFi and doing an ls /nifi, the znode is empty.
>>>>>>> [zk: nifi1-5:2181,nifi2-5:2181,nifi3-5:2181(CONNECTED) 4] ls /nifi
>>>>>>> []
>>>>>>>
>>>>>>> Seems like we can't write the leaders or components value under the
>>>>>>> /nifi znode.
>>>>>>>
>>>>>>>
>>>>>>> Looking at the nifi-app log
>>>>>>>
>>>>>>> 2020-07-06 16:05:46,554 INFO [main-SendThread(xx.xxx.x.xx:2181)]
>>>>>>> org.apache.zookeeper.Login Client successfully logged in.
>>>>>>> 2020-07-06 16:05:46,556 INFO [main-SendThread(xx.xxx.x.xx:2181)]
>>>>>>> o.a.zookeeper.client.ZooKeeperSaslClient Client will use DIGEST-MD5 as 
>>>>>>> SASL
>>>>>>> mechanism.
>>>>>>> 2020-07-06 16:05:46,900 INFO [main-EventThread]
>>>>>>> o.a.c.f.state.ConnectionStateManager State change: CONNECTED
>>>>>>> 2020-07-06 16:05:47,347 INFO [main-EventThread]
>>>>>>> o.a.c.framework.imps.EnsembleTracker New config event received:
>>>>>>> {server.1=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181,
>>>>>>> version=0, server.3=xx.xxx.x.xx:2888:3888:participant;0.0.0.0:2181,
>>>>>>> server.2=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181}
>>>>>>> 2020-07-06 16:05:47,354 INFO [main-EventThread]
>>>>>>> o.a.c.framework.imps.EnsembleTracker New config event received:
>>>>>>> {server.1=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181,
>>>>>>> version=0, server.3=xx.xxx.x.xx:2888:3888:participant;0.0.0.0:2181,
>>>>>>> server.2=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181}
>>>>>>> 2020-07-06 16:05:47,357 INFO [Curator-Framework-0]
>>>>>>> o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
>>>>>>> 2020-07-06 16:05:47,364 DEBUG [main] org.apache.zookeeper.ZooKeeper
>>>>>>> Closing session: 0x3002a05b0c60006
>>>>>>> 2020-07-06 16:05:47,469 INFO [main] org.apache.zookeeper.ZooKeeper
>>>>>>> Session: 0x3002a05b0c60006 closed
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Any ideas on what configuration I could be missing or have wrong?  I
>>>>>>> have a jaas.conf file in the $NIFI_HOME/conf directory and have
>>>>>>> java.arg.18=-Djava.security.auth.login.config=<path to jaas.conf file>
>>>>>>>
>>>>>>> One question I have: in the jaas.conf file, I put the passwd in
>>>>>>> there, not the digest, I believe... I understand this would be passed
>>>>>>> around in cleartext, but this is just for testing purposes currently....
>>>>>>>
>>>>>>> Nifi 1.11.4
>>>>>>> external zookeeper 3.5.8
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Dano
>>>>>>>
>>>>>>>
