Is there anything interesting (errors/warnings) in nifi-app.log on
host 2 during startup?

Also, I'm not sure if this will do anything different, but you could
try clearing the ZK state dir to make sure all the info in ZK is
starting fresh...

- Shutdown both nodes
- Remove the directory nifi/state/zookeeper/version-2 on host 1 (not
the whole ZK dir, just version-2)
- Start nifi 1 and wait for it to be up and running
- Start nifi 2
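The cleanup above can be sketched in a few lines of Python (a sketch only; `nifi_home` and the state-directory layout are assumptions based on a default install, and it must only be run while both nodes are shut down):

```python
# Sketch: remove only the embedded ZooKeeper data dir (version-2),
# leaving the rest of nifi/state/zookeeper (e.g. myid) in place.
# nifi_home is an assumed path -- point it at your actual install.
import os
import shutil

def clear_embedded_zk_data(nifi_home):
    version2 = os.path.join(nifi_home, "state", "zookeeper", "version-2")
    if os.path.isdir(version2):
        shutil.rmtree(version2)  # just version-2, not the whole ZK dir
    return version2
```

After running this on host 1 (with both nodes down), start nifi 1, wait until it is fully up, then start nifi 2.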

On Wed, Oct 24, 2018 at 11:18 AM Saip, Alexander (NIH/CC/BTRIS) [C]
<alexander.s...@nih.gov> wrote:
>
> Yes, that setting is the same on both hosts. I have attached the UI screenshots
> taken on them. Please note that the hosts' FQDNs have been removed.
>
> -----Original Message-----
> From: Bryan Bende <bbe...@gmail.com>
> Sent: Wednesday, October 24, 2018 9:25 AM
> To: users@nifi.apache.org
> Subject: Re: NiFi fails on cluster nodes
>
> Many services can share a single ZooKeeper by segmenting their data under a
> specific root node.
>
> The root node is specified by nifi.zookeeper.root.node=/nifi, so if those were
> different on each node then they would form separate clusters.
>
> Can you show screenshots of the cluster information from each node?
>
> May need to upload them somewhere and provide links here since attachments
> don't always make it through.
>
> On Wed, Oct 24, 2018 at 8:18 AM Saip, Alexander (NIH/CC/BTRIS) [C] 
> <alexander.s...@nih.gov> wrote:
>
> > The ZooKeeper-related settings in the nifi.properties files on both hosts
> > are identical, with the exception of
> > nifi.state.management.embedded.zookeeper.start, which is 'true' on host-1
> > and 'false' on host-2. Moreover, if I shut down NiFi on host-1, it crashes
> > on host-2. Here is the message in the browser window:
> >
> > Action cannot be performed because there is currently no Cluster
> > Coordinator elected. The request should be tried again after a moment,
> > after a Cluster Coordinator has been automatically elected.
> >
> > I even went as far as commenting out the server.1 line in the
> > zookeeper.properties file on host-1 before restarting both NiFi instances,
> > which didn't change the outcome.
> >
> > When I look at the NiFi cluster information in the UI on host-1, it shows
> > the status of the node as "CONNECTED, PRIMARY, COORDINATOR", whereas on
> > host-2 it is just "CONNECTED". I don't know if this tells you anything.
> >
> > BTW, what does "a different location in the same ZK" mean?
> >
> > -----Original Message-----
> > From: Bryan Bende <bbe...@gmail.com>
> > Sent: Tuesday, October 23, 2018 3:02 PM
> > To: users@nifi.apache.org
> > Subject: Re: NiFi fails on cluster nodes
> >
> > The only way I could see that happening is if the ZK config on the second
> > node pointed at a different ZK, or at a different location in the same ZK.
> >
> > For example, if node 1 had:
> >
> > nifi.zookeeper.connect.string=node-1:2181
> > nifi.zookeeper.connect.timeout=3 secs
> > nifi.zookeeper.session.timeout=3 secs
> > nifi.zookeeper.root.node=/nifi
> >
> > Then node 2 should have exactly the same thing.
> >
> > If node 2 specified a different connect string, or a different root node,
> > then it wouldn't know about the other node.
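One way to sanity-check that both nodes really agree is to diff just the ZooKeeper keys of the two files. This is a sketch; the file paths are placeholders for each node's conf/nifi.properties:

```python
# Sketch: extract and diff the nifi.zookeeper.* keys of two nifi.properties
# files, so a mismatched connect string or root node stands out.
def zk_settings(path):
    settings = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith("nifi.zookeeper.") and "=" in line:
                key, _, value = line.partition("=")
                settings[key] = value
    return settings

def zk_diff(node1_props, node2_props):
    a, b = zk_settings(node1_props), zk_settings(node2_props)
    # Keys whose values differ (or are missing) between the two nodes.
    return {k: (a.get(k), b.get(k)) for k in sorted(set(a) | set(b))
            if a.get(k) != b.get(k)}
```

An empty dict means the ZooKeeper settings match; anything else names the offending key.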
> >
> > On Tue, Oct 23, 2018 at 2:48 PM Saip, Alexander (NIH/CC/BTRIS) [C]
> > <alexander.s...@nih.gov> wrote:
> >
> > > That's exactly the case.
> > >
> > > -----Original Message-----
> > > From: Bryan Bende <bbe...@gmail.com>
> > > Sent: Tuesday, October 23, 2018 2:44 PM
> > > To: users@nifi.apache.org
> > > Subject: Re: NiFi fails on cluster nodes
> > >
> > > So you can get into each node's UI and they each show 1/1 for cluster
> > > nodes?
> > >
> > > It doesn't really make sense how the second node would form its own
> > > cluster.
> > >
> > > On Tue, Oct 23, 2018 at 2:20 PM Saip, Alexander (NIH/CC/BTRIS) [C]
> > > <alexander.s...@nih.gov> wrote:
> > >
> > > > I copied over users.xml, authorizers.xml and authorizations.xml to
> > > > host-2, removed flow.xml.gz, and started NiFi there. Unfortunately, for
> > > > whatever reason, the nodes still don't talk to each other, even though
> > > > both of them are connected to ZooKeeper on host-1. I still see two
> > > > separate clusters, one on host-1 with all the dataflows, and the other,
> > > > on host-2, without any of them. On the latter, the logs have no mention
> > > > of host-1 whatsoever, neither server name nor IP address. On host-1,
> > > > nifi-app.log contains a few lines like the following:
> > > >
> > > > 2018-10-23 13:44:43,628 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181]
> > > > o.a.zookeeper.server.ZooKeeperServer Client attempting to establish
> > > > new session at /<host-2 IP address>:50412
> > > > 2018-10-23 13:44:43,629 INFO [SyncThread:0]
> > > > o.a.zookeeper.server.ZooKeeperServer Established session
> > > > 0x166a1d139590002 with negotiated timeout 4000 for client
> > > > /<host-2 IP address>:50412
> > > >
> > > > I apologize for bugging you with all this; converting our standalone
> > > > NiFi instances into cluster nodes turned out to be much more
> > > > challenging than we had anticipated…
> > > >
> > > > -----Original Message-----
> > > > From: Bryan Bende <bbe...@gmail.com>
> > > > Sent: Tuesday, October 23, 2018 1:17 PM
> > > > To: users@nifi.apache.org
> > > > Subject: Re: NiFi fails on cluster nodes
> > > >
> > > > Probably easiest to copy the files over since you have other existing
> > > > users/policies and you know the first node is working.
> > > >
> > > > On Tue, Oct 23, 2018 at 1:12 PM Saip, Alexander (NIH/CC/BTRIS) [C]
> > > > <alexander.s...@nih.gov> wrote:
> > > >
> > > > > Embarrassingly enough, there was a missing whitespace in the host DN
> > > > > in the users.xml file. Thank you so much for pointing me in the right
> > > > > direction! Now, in order to add another node, should I copy users.xml
> > > > > and authorizations.xml from the connected node to it, or remove them
> > > > > there instead?
> > > > >
> > > > > -----Original Message-----
> > > > > From: Bryan Bende <bbe...@gmail.com>
> > > > > Sent: Tuesday, October 23, 2018 12:36 PM
> > > > > To: users@nifi.apache.org
> > > > > Subject: Re: NiFi fails on cluster nodes
> > > > >
> > > > > That means the user representing host-1 does not have permissions to
> > > > > proxy.
> > > > >
> > > > > You can look in authorizations.xml on nifi-1 for a policy like:
> > > > >
> > > > > <policy identifier="287edf48-da72-359b-8f61-da5d4c45a270"
> > > > >         resource="/proxy" action="W">
> > > > >     <user identifier="c22273fa-7ed3-38a9-8994-3ed5fea5d234"/>
> > > > > </policy>
> > > > >
> > > > > That user identifier should point to a user in users.xml like:
> > > > >
> > > > > <user identifier="c22273fa-7ed3-38a9-8994-3ed5fea5d234"
> > > > >       identity="CN=<host-1, redacted>, OU=Devices, OU=NIH, OU=HHS,
> > > > >       O=U.S. Government, C=US"/>
> > > > >
> > > > > All of the user identities are case sensitive and whitespace
> > > > > sensitive, so make sure whatever is in users.xml is exactly what is
> > > > > shown in the logs.
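Both checks described above can be automated with a short script. This is a sketch; the XML shapes follow the policy and user snippets quoted in this thread, and the comparison is deliberately exact, since NiFi treats identities as case- and whitespace-sensitive:

```python
# Sketch: find which user identities hold Write on /proxy, and verify a DN
# taken from nifi-user.log matches one of them character-for-character.
import xml.etree.ElementTree as ET

def proxy_writers(authorizations_xml, users_xml):
    users = {u.get("identifier"): u.get("identity")
             for u in ET.parse(users_xml).iter("user")}
    writers = []
    for p in ET.parse(authorizations_xml).iter("policy"):
        if p.get("resource") == "/proxy" and p.get("action") == "W":
            writers += [users.get(u.get("identifier")) for u in p.iter("user")]
    return writers

def can_proxy(dn_from_log, authorizations_xml, users_xml):
    # '==' on the full DN string: "CN=x,OU=y" and "CN=x, OU=y" do NOT match.
    return dn_from_log in proxy_writers(authorizations_xml, users_xml)
```

A False result for a DN copied straight out of the "Untrusted proxy" log line points at exactly this kind of whitespace or case mismatch.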
> > > > >
> > > > > On Tue, Oct 23, 2018 at 12:28 PM Saip, Alexander (NIH/CC/BTRIS) [C]
> > > > > <alexander.s...@nih.gov> wrote:
> > > > >
> > > > > > Hi Bryan,
> > > > > >
> > > > > > Yes, converting two standalone NiFi instances into a cluster is
> > > > > > exactly what we are trying to do. Here are the steps I went through
> > > > > > in this round:
> > > > > >
> > > > > > ·         restored the original configuration files
> > > > > > (nifi.properties, users.xml, authorizers.xml and authorizations.xml)
> > > > > > ·         restarted one instance in the standalone mode
> > > > > > ·         added two new node users in the NiFi web UI (CN=<host-1,
> > > > > > redacted>, OU=Devices, OU=NIH, OU=HHS, O=U.S. Government, C=US and
> > > > > > CN=<host-2, redacted>, OU=Devices, OU=NIH, OU=HHS, O=U.S.
> > > > > > Government, C=US)
> > > > > > ·         granted them the "proxy user requests" privileges
> > > > > > ·         edited the nifi.properties file
> > > > > > (nifi.state.management.embedded.zookeeper.start=true,
> > > > > > nifi.cluster.is.node=true, nifi.zookeeper.connect.string=<host-1,
> > > > > > redacted>:2181)
> > > > > > ·         restarted the node on host-1
> > > > > >
> > > > > > On logging in, I see the cluster section of the dashboard showing
> > > > > > 1/1 as expected, although I'm unable to do anything there due to
> > > > > > errors like this:
> > > > > >
> > > > > > Insufficient Permissions
> > > > > > Node <host-1, redacted>:8008 is unable to fulfill this request due
> > > > > > to: Untrusted proxy CN=<host-1, redacted>, OU=Devices, OU=NIH,
> > > > > > OU=HHS, O=U.S. Government, C=US. Contact the system administrator.
> > > > > >
> > > > > > The nifi-user.log also contains
> > > > > >
> > > > > > 2018-10-23 12:17:01,916 WARN [NiFi Web Server-224]
> > > > > > o.a.n.w.s.NiFiAuthenticationFilter Rejecting access to web api:
> > > > > > Untrusted proxy CN=<host-1, redacted>, OU=Devices, OU=NIH, OU=HHS,
> > > > > > O=U.S. Government, C=US
> > > > > >
> > > > > > From your experience, what are the most likely causes of this
> > > > > > exception?
> > > > > >
> > > > > > Thank you,
> > > > > >
> > > > > > Alexander
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: Bryan Bende <bbe...@gmail.com>
> > > > > > Sent: Monday, October 22, 2018 1:25 PM
> > > > > > To: users@nifi.apache.org
> > > > > > Subject: Re: NiFi fails on cluster nodes
> > > > > >
> > > > > > Yes, to further clarify what I meant...
> > > > > >
> > > > > > If you are trying to change the Initial Admin or Node Identities in
> > > > > > authorizers.xml, these will only be used when there are no other
> > > > > > users/groups/policies present. People frequently make a mistake
> > > > > > during initial config and then try to edit authorizers.xml and try
> > > > > > again, but it won't actually do anything unless you remove
> > > > > > users.xml and authorizations.xml to start over.
> > > > > >
> > > > > > In your case it sounds like you are trying to convert an existing
> > > > > > standalone node to a cluster; given that, I would do the following...
> > > > > >
> > > > > > - In standalone mode, use the UI to add users for the DNs of the
> > > > > > server certificates (CN=nifi-node-1, OU=NIFI and CN=nifi-node-2,
> > > > > > OU=NIFI)
> > > > > > - In the UI, grant those users Write access to "Proxy"
> > > > > > - Convert to a cluster and keep your same authorizers.xml,
> > > > > > users.xml, and authorizations.xml when you set up your cluster;
> > > > > > this way all your users and policies are already set up and the
> > > > > > Initial Admin and Node Identities are not needed
> > > > > >
> > > > > > On Mon, Oct 22, 2018 at 1:06 PM Saip, Alexander (NIH/CC/BTRIS) [C]
> > > > > > <alexander.s...@nih.gov> wrote:
> > > > > >
> > > > > > > Thanks again, Bryan. Just a quick follow-up question: does
> > > > > > > removing users.xml and authorizations.xml mean that we will need
> > > > > > > to re-create all users and groups that we had in the original
> > > > > > > standalone NiFi instance?
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Bryan Bende <bbe...@gmail.com>
> > > > > > > Sent: Monday, October 22, 2018 12:48 PM
> > > > > > > To: users@nifi.apache.org
> > > > > > > Subject: Re: NiFi fails on cluster nodes
> > > > > > >
> > > > > > > Sorry, I was confused when you said two 1-node clusters, and I
> > > > > > > assumed they each had their own ZooKeeper.
> > > > > > >
> > > > > > > You don't need to run ZK on both nodes; you can create a 2-node
> > > > > > > cluster using the embedded ZK on the first node.
> > > > > > >
> > > > > > > This blog post shows how to set up a secure 2-node cluster:
> > > > > > >
> > > > > > > https://bryanbende.com/development/2016/08/17/apache-nifi-1-0-0-authorization-and-multi-tenancy
> > > > > > >
> > > > > > > The only difference is that the authorizers.xml has changed
> > > > > > > slightly, so instead of:
> > > > > > >
> > > > > > > <authorizer>
> > > > > > >     <identifier>file-provider</identifier>
> > > > > > >     <class>org.apache.nifi.authorization.FileAuthorizer</class>
> > > > > > >     <property name="Authorizations File">./conf/authorizations.xml</property>
> > > > > > >     <property name="Users File">./conf/users.xml</property>
> > > > > > >     <property name="Initial Admin Identity">CN=bbende, OU=ApacheNiFi</property>
> > > > > > >     <property name="Legacy Authorized Users File"></property>
> > > > > > >     <property name="Node Identity 1">CN=localhost, OU=NIFI</property>
> > > > > > > </authorizer>
