NIFI-259: - Moving the state management documentation to after the Clustering 
configuration.

Signed-off-by: Aldrin Piri <ald...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/nifi/repo
Commit: http://git-wip-us.apache.org/repos/asf/nifi/commit/adfa5dc0
Tree: http://git-wip-us.apache.org/repos/asf/nifi/tree/adfa5dc0
Diff: http://git-wip-us.apache.org/repos/asf/nifi/diff/adfa5dc0

Branch: refs/heads/master
Commit: adfa5dc0ebc8446c8a4e2eeea882e27e526bc798
Parents: 7711106
Author: Matt Gilman <matt.c.gil...@gmail.com>
Authored: Wed Feb 3 09:48:01 2016 -0500
Committer: Aldrin Piri <ald...@apache.org>
Committed: Wed Feb 3 10:13:23 2016 -0500

----------------------------------------------------------------------
 .../src/main/asciidoc/administration-guide.adoc | 208 +++++++++----------
 1 file changed, 104 insertions(+), 104 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/nifi/blob/adfa5dc0/nifi-docs/src/main/asciidoc/administration-guide.adoc
----------------------------------------------------------------------
diff --git a/nifi-docs/src/main/asciidoc/administration-guide.adoc 
b/nifi-docs/src/main/asciidoc/administration-guide.adoc
index 3b41dab..a6da382 100644
--- a/nifi-docs/src/main/asciidoc/administration-guide.adoc
+++ b/nifi-docs/src/main/asciidoc/administration-guide.adoc
@@ -368,6 +368,108 @@ cluster, s/he can grant it to the group and avoid having 
to grant it individuall
 
 
 
+[[clustering]]
+Clustering Configuration
+------------------------
+
+This section provides a quick overview of NiFi Clustering and instructions on 
how to set up a basic cluster. In the future, we hope to provide supplemental 
documentation that covers the NiFi Cluster Architecture in depth.
+
+The design of NiFi clustering is a simple master/slave model where there is a 
master and one or more slaves.
+While the model is that of master and slave, if the master dies, the slaves 
are all instructed to continue operating
+as they were to ensure the dataflow remains live. The absence of the master 
simply means new slaves cannot join the
+cluster and cluster flow changes cannot occur until the master is restored. In 
NiFi clustering, we call the master
+the NiFi Cluster Manager (NCM), and the slaves are called Nodes. See a full 
description of each in the Terminology section below.
+
+*Why Cluster?* +
+
+NiFi Administrators or Dataflow Managers (DFMs) may find that using one 
instance of NiFi on a single server is not enough to process the amount of data 
they have. So, one solution is to run the same dataflow on multiple NiFi 
servers. However, this creates a management problem, because each time DFMs 
want to change or update the dataflow, they must make those changes on each 
server and then monitor each server individually. By clustering the NiFi 
servers, it's possible to have that increased processing capability along with 
a single interface through which to make dataflow changes and monitor the 
dataflow. Clustering allows the DFM to make each change only once, and that 
change is then replicated to all the nodes of the cluster. Through the single 
interface, the DFM may also monitor the health and status of all the nodes.
+
+NiFi Clustering is unique and has its own terminology. It's important to 
understand the following terms before setting up a cluster.
+
+[template="glossary", id="terminology"]
+*Terminology* +
+
+*NiFi Cluster Manager*: A NiFi Cluster Manager (NCM) is an instance of NiFi 
that provides the sole management point for the cluster. It communicates 
dataflow changes to the nodes and receives health and status information from 
the nodes. It also ensures that a uniform dataflow is maintained across the 
cluster.  When DFMs manage a dataflow in a cluster, they do so through the User 
Interface of the NCM (i.e., via the URL of the NCM's User Interface). 
Fundamentally, the NCM keeps the state of the cluster consistent.
+
+*Nodes*: Each cluster is made up of the NCM and one or more nodes. The nodes 
do the actual data processing. (The NCM does not process any data; all data 
runs through the nodes.)  While nodes are connected to a cluster, the DFM may 
not access the User Interface for any of the individual nodes. The User 
Interface of a node may only be accessed if the node is manually removed from 
the cluster.
+
+*Primary Node*: Every cluster has one Primary Node. On this node, it is 
possible to run "Isolated Processors" (see below). By default, the NCM will 
elect the first node that connects to the cluster as the Primary Node; however, 
the DFM may select a new node as the Primary Node in the Cluster Management 
page of the User Interface if desired. If the cluster restarts, the NCM will 
"remember" which node was the Primary Node and wait for that node to re-connect 
before allowing the DFM to make any changes to the dataflow. The Administrator may
adjust how long the NCM waits for the Primary Node to reconnect by adjusting 
the property _nifi.cluster.manager.safemode.duration_ in the _nifi.properties_ 
file, which is discussed in the <<system_properties>> section of this document.
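+
+For reference, a minimal sketch of the corresponding entry in _nifi.properties_ (the property name comes from the text above, but the value shown is only an illustration; the actual default and accepted formats are listed in <<system_properties>>):
+
+----
+# Illustrative value only - how long the NCM waits for the Primary Node
+# to reconnect before allowing dataflow changes again
+nifi.cluster.manager.safemode.duration=5 mins
+----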
+
+*Isolated Processors*: In a NiFi cluster, the same dataflow runs on all the 
nodes. As a result, every component in the flow runs on every node. However, 
there may be cases when the DFM would not want every processor to run on every 
node. The most common case is when using a processor that communicates with an 
external service using a protocol that does not scale well. For example, the 
GetSFTP processor pulls from a remote directory, and if the GetSFTP on every 
node in the cluster tries simultaneously to pull from the same remote 
directory, there could be race conditions. Therefore, the DFM could configure 
the GetSFTP on the Primary Node to run in isolation, meaning that it only runs 
on that node. It could pull in data and, with the proper dataflow configuration, load-balance it across the rest of the nodes in the cluster. 
Note that while this feature exists, it is also very common to simply use a 
standalone NiFi instance to pull data and feed it to the cluster. It just 
depends on the resources available and how the Administrator decides to configure the cluster.
+
+*Heartbeats*: The nodes communicate their health and status to the NCM via 
"heartbeats", which let the NCM know they are still connected to the cluster 
and working properly. By default, the nodes emit heartbeats to the NCM every 5 
seconds, and if the NCM does not receive a heartbeat from a node within 45 
seconds, it disconnects the node due to "lack of heartbeat". (The 5-second and 
45-second settings are configurable in the _nifi.properties_ file. See the 
<<system_properties>> section of this document for more information.) The 
reason that the NCM disconnects the node is because the NCM needs to ensure 
that every node in the cluster is in sync, and if a node is not heard from 
regularly, the NCM cannot be sure it is still in sync with the rest of the 
cluster. If, after 45 seconds, the node does send a new heartbeat, the NCM will 
automatically reconnect the node to the cluster. Both the disconnection due to 
lack of heartbeat and the reconnection once a heartbeat is received are reported to the DFM in the NCM's User Interface.
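+
+A hedged sketch of the heartbeat interval entry in _nifi.properties_ (the property name and the "5 sec" value below are assumptions drawn from the Cluster Common Properties; confirm both, and the disconnect threshold, in <<system_properties>>):
+
+----
+# Assumed name/value - how often each node sends a heartbeat to the NCM
+nifi.cluster.protocol.heartbeat.interval=5 sec
+----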
+
+*Communication within the Cluster* +
+
+As noted, the nodes communicate with the NCM via heartbeats. The communication 
that allows the nodes to find the NCM may be set up as multicast or unicast; 
this is configured in the _nifi.properties_ file (see <<system_properties>>). 
By default, unicast is used. It is important to note that the nodes in a NiFi 
cluster are not aware of each other. They only communicate with the NCM. 
Therefore, if one of the nodes goes down, the other nodes in the cluster will 
not automatically pick up the load of the missing node. It is possible for the 
DFM to configure the dataflow for failover contingencies; however, this is 
dependent on the dataflow design and does not happen automatically.
+
+When the DFM makes changes to the dataflow, the NCM communicates those changes 
to the nodes and waits for each node to respond, indicating that it has made 
the change on its local flow. If the DFM wants to make another change, the NCM 
will only allow this to happen once all the nodes have acknowledged that 
they've implemented the last change. This is a safeguard to ensure that all the 
nodes in the cluster have the correct and up-to-date flow.
+
+*Dealing with Disconnected Nodes* +
+
+A DFM may manually disconnect a node from the cluster. But if a node becomes 
disconnected for any other reason (such as due to lack of heartbeat), the NCM 
will show a bulletin on the User Interface, and the DFM will not be able to 
make any changes to the dataflow until the issue of the disconnected node is 
resolved. The DFM or the Administrator will need to troubleshoot the issue with 
the node and resolve it before any new changes may be made to the dataflow. 
However, it is worth noting that just because a node is disconnected does not 
mean that it is not working; it just means that the NCM cannot communicate with 
the node.
+
+
+*Basic Cluster Setup* +
+
+This section describes the setup for a simple two-node, non-secure, unicast 
cluster comprised of three instances of NiFi:
+
+* The NCM
+* Node 1
+* Node 2
+
+Administrators may install each instance on a separate server; however, it is 
also perfectly fine to install the NCM and one of the nodes on the same server, 
as the NCM is very lightweight. Just keep in mind that the ports assigned to 
each instance must not collide if the NCM and one of the nodes share the same 
server.
+
+For each instance, certain properties in the _nifi.properties_ file will need 
to be updated. In particular, the Web and Clustering properties should be 
evaluated for your situation and adjusted accordingly. All the properties are 
described in the <<system_properties>> section of this guide; however, in this 
section, we will focus on the minimum properties that must be set for a simple 
cluster.
+
+For all three instances, the Cluster Common Properties can be left with the 
default settings. Note, however, that if you change these settings, they must 
be set the same on every instance in the cluster (NCM and nodes).
+
+For the NCM, the minimum properties to configure are as follows (a sample _nifi.properties_ excerpt appears after the list):
+
+* Under the Web Properties, set either the http or https port that you want 
the NCM to run on. If the NCM and one of the nodes are on the same server, make 
sure this port is different from the web port used by the node.
+* Under the Cluster Manager Properties, set the following:
+** nifi.cluster.is.manager - Set this to _true_.
+** nifi.cluster.manager.protocol.port - Set this to an open port that is 
higher than 1024 (anything lower requires root). Take note of this setting, as 
you will need to reference it when you set up the nodes.
+
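+Putting those bullets together, the NCM's _nifi.properties_ might contain an excerpt like the following (the port numbers are arbitrary placeholders, and nifi.web.http.port is assumed to be the relevant Web Property):
+
+----
+# NCM - example values only
+nifi.web.http.port=8080
+nifi.cluster.is.manager=true
+nifi.cluster.manager.protocol.port=9001
+----
+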
+For Node 1, the minimum properties to configure are as follows (again, a sample excerpt follows the list):
+
+* Under the Web Properties, set either the http or https port that you want 
Node 1 to run on. If the NCM is running on the same server, choose a different 
web port for Node 1. Also, consider whether you need to set the http or https 
host property.
+* Under the State Management section, set the 
`nifi.state.management.provider.cluster` property to the identifier of the 
Cluster State Provider. Ensure that the Cluster State Provider has been 
configured in the _state-management.xml_ file. See <<state_providers>> for more 
information.
+* Under Cluster Node Properties, set the following:
+** nifi.cluster.is.node - Set this to _true_.
+** nifi.cluster.node.address - Set this to the fully qualified hostname of the 
node. If left blank, it defaults to "localhost".
+** nifi.cluster.node.protocol.port - Set this to an open port that is higher 
than 1024 (anything lower requires root). If Node 1 and the NCM are on the same 
server, make sure this port is different from the 
nifi.cluster.manager.protocol.port.
+** nifi.cluster.node.unicast.manager.address - Set this to the NCM's fully 
qualified hostname.
+** nifi.cluster.node.unicast.manager.protocol.port - Set this to exactly the 
same port that was set on the NCM for the property 
nifi.cluster.manager.protocol.port.
+
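+As a sketch, Node 1's _nifi.properties_ could then include something like the following (hostnames, ports, and the "zk-provider" identifier are placeholders; nifi.web.http.port is assumed to be the relevant Web Property, and the state provider identifier must match an entry in your _state-management.xml_):
+
+----
+# Node 1 - example values only
+nifi.web.http.port=8081
+nifi.state.management.provider.cluster=zk-provider
+nifi.cluster.is.node=true
+nifi.cluster.node.address=node1.example.com
+nifi.cluster.node.protocol.port=9002
+nifi.cluster.node.unicast.manager.address=ncm.example.com
+nifi.cluster.node.unicast.manager.protocol.port=9001
+----
+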
+For Node 2, the minimum properties to configure are as follows (a short excerpt follows the list):
+
+* Under the Web Properties, set either the http or https port that you want 
Node 2 to run on. Also, consider whether you need to set the http or https host 
property.
+* Under the State Management section, set the 
`nifi.state.management.provider.cluster` property to the identifier of the 
Cluster State Provider. Ensure that the Cluster State Provider has been 
configured in the _state-management.xml_ file. See <<state_providers>> for more 
information.
+* Under the Cluster Node Properties, set the following:
+** nifi.cluster.is.node - Set this to _true_.
+** nifi.cluster.node.address - Set this to the fully qualified hostname of the 
node. If left blank, it defaults to "localhost".
+** nifi.cluster.node.protocol.port - Set this to an open port that is higher 
than 1024 (anything lower requires root).
+** nifi.cluster.node.unicast.manager.address - Set this to the NCM's fully 
qualified hostname.
+** nifi.cluster.node.unicast.manager.protocol.port - Set this to exactly the 
same port that was set on the NCM for the property 
nifi.cluster.manager.protocol.port.
+
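+Node 2's excerpt follows the same pattern as Node 1's, with its own host and web port (values again placeholders):
+
+----
+# Node 2 - same pattern as Node 1, with its own host and web port
+nifi.web.http.port=8082
+nifi.cluster.node.address=node2.example.com
+# the remaining cluster node and unicast manager properties match the Node 1 excerpt
+----
+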
+Now, it is possible to start up the cluster. Technically, it does not matter 
which instance starts up first. However, you could start the NCM first, then 
Node 1 and then Node 2. Since the first node that connects is automatically 
elected as the Primary Node, this sequence should create a cluster where Node 1 
is the Primary Node. Navigate to the URL for the NCM in your web browser, and 
the User Interface should look similar to the following:
+
+image:ncm.png["NCM User Interface", width=940]
+
+*Troubleshooting*
+
+If you encounter issues and your cluster does not work as described, 
investigate the _nifi-app.log_ and _nifi-user.log_ files on both the NCM and the nodes. 
If needed, you can change the logging level to DEBUG by editing the 
conf/logback.xml file. Specifically, set the level="DEBUG" in the following 
line (instead of "INFO"):
+
+----
+    <logger name="org.apache.nifi.web.api.config" level="INFO"
+additivity="false">
+        <appender-ref ref="USER_FILE"/>
+    </logger>
+----
+
+
+
 [[state_management]]
 State Management
 ----------------
@@ -612,7 +714,7 @@ Be sure to replace the value of _principal_ above with the 
appropriate Principal
 
 Next, we need to tell NiFi to use this as our JAAS configuration. This is done 
by setting a JVM System Property, so we will edit the `conf/bootstrap.conf` 
file.
 If the Client has already been configured to use Kerberos, this is not 
necessary, as it was done above. Otherwise, we will add the following line to 
our _bootstrap.conf_ file:
- 
+
 [source]
 java.arg.15=-Djava.security.auth.login.config=./conf/zookeeper-jaas.conf
 
@@ -652,7 +754,7 @@ Failure to do so, may result in errors similar to the 
following:
 [source]
 2016-01-08 16:08:57,888 ERROR [pool-26-thread-1-SendThread(localhost:2181)] 
o.a.zookeeper.client.ZooKeeperSaslClient An error: 
(java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
GSS initiate failed [Caused by GSSException: No valid credentials provided 
(Mechanism level: Server not found in Kerberos database (7) - 
LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
received SASL token. Zookeeper Client will go to AUTH_FAILED state.
 
-If there are problems communicating or authenticating with Kerberos, 
+If there are problems communicating or authenticating with Kerberos,
 
link:http://docs.oracle.com/javase/7/docs/technotes/guides/security/jgss/tutorials/Troubleshooting.html[this
 Troubleshooting Guide] may be of value.
 
 One of the most important notes in the above Troubleshooting guide is the 
mechanism for turning on Debug output for Kerberos.
@@ -667,108 +769,6 @@ This output can be rather verbose but provides extremely 
valuable information fo
 
 
 
-[[clustering]]
-Clustering Configuration
-------------------------
-
-This section provides a quick overview of NiFi Clustering and instructions on 
how to set up a basic cluster. In the future, we hope to provide supplemental 
documentation that covers the NiFi Cluster Architecture in depth.
-
-The design of NiFi clustering is a simple master/slave model where there is a 
master and one or more slaves.
-While the model is that of master and slave, if the master dies, the slaves 
are all instructed to continue operating
-as they were to ensure the dataflow remains live. The absence of the master 
simply means new slaves cannot join the
-cluster and cluster flow changes cannot occur until the master is restored. In 
NiFi clustering, we call the master
-the NiFi Cluster Manager (NCM), and the slaves are called Nodes. See a full 
description of each in the Terminology section below.
-
-*Why Cluster?* +
-
-NiFi Administrators or Dataflow Managers (DFMs) may find that using one 
instance of NiFi on a single server is not enough to process the amount of data 
they have. So, one solution is to run the same dataflow on multiple NiFi 
servers. However, this creates a management problem, because each time DFMs 
want to change or update the dataflow, they must make those changes on each 
server and then monitor each server individually. By clustering the NiFi 
servers, it's possible to have that increased processing capability along with 
a single interface through which to make dataflow changes and monitor the 
dataflow. Clustering allows the DFM to make each change only once, and that 
change is then replicated to all the nodes of the cluster. Through the single 
interface, the DFM may also monitor the health and status of all the nodes.
-
-NiFi Clustering is unique and has its own terminology. It's important to 
understand the following terms before setting up a cluster.
-
-[template="glossary", id="terminology"]
-*Terminology* +
-
-*NiFi Cluster Manager*: A NiFi Cluster Manager (NCM) is an instance of NiFi 
that provides the sole management point for the cluster. It communicates 
dataflow changes to the nodes and receives health and status information from 
the nodes. It also ensures that a uniform dataflow is maintained across the 
cluster.  When DFMs manage a dataflow in a cluster, they do so through the User 
Interface of the NCM (i.e., via the URL of the NCM's User Interface). 
Fundamentally, the NCM keeps the state of the cluster consistent.
-
-*Nodes*: Each cluster is made up of the NCM and one or more nodes. The nodes 
do the actual data processing. (The NCM does not process any data; all data 
runs through the nodes.)  While nodes are connected to a cluster, the DFM may 
not access the User Interface for any of the individual nodes. The User 
Interface of a node may only be accessed if the node is manually removed from 
the cluster.
-
-*Primary Node*: Every cluster has one Primary Node. On this node, it is 
possible to run "Isolated Processors" (see below). By default, the NCM will 
elect the first node that connects to the cluster as the Primary Node; however, 
the DFM may select a new node as the Primary Node in the Cluster Management 
page of the User Interface if desired. If the cluster restarts, the NCM will 
"remember" which node was the Primary Node and wait for that node to re-connect 
before allowing the DFM to make any changes to the dataflow. The ADMIN may 
adjust how long the NCM waits for the Primary Node to reconnect by adjusting 
the property _nifi.cluster.manager.safemode.duration_ in the _nifi.properties_ 
file, which is discussed in the <<system_properties>> section of this document.
-
-*Isolated Processors*: In a NiFi cluster, the same dataflow runs on all the 
nodes. As a result, every component in the flow runs on every node. However, 
there may be cases when the DFM would not want every processor to run on every 
node. The most common case is when using a processor that communicates with an 
external service using a protocol that does not scale well. For example, the 
GetSFTP processor pulls from a remote directory, and if the GetSFTP on every 
node in the cluster tries simultaneously to pull from the same remote 
directory, there could be race conditions. Therefore, the DFM could configure 
the GetSFTP on the Primary Node to run in isolation, meaning that it only runs 
on that node. It could pull in data and -with the proper dataflow 
configuration- load-balance it across the rest of the nodes in the cluster. 
Note that while this feature exists, it is also very common to simply use a 
standalone NiFi instance to pull data and feed it to the cluster. It just 
depends on the resources available and how the Administrator decides to configure the cluster.
-
-*Heartbeats*: The nodes communicate their health and status to the NCM via 
"heartbeats", which let the NCM know they are still connected to the cluster 
and working properly. By default, the nodes emit heartbeats to the NCM every 5 
seconds, and if the NCM does not receive a heartbeat from a node within 45 
seconds, it disconnects the node due to "lack of heartbeat". (The 5-second and 
45-second settings are configurable in the _nifi.properties_ file. See the 
<<system_properties>> section of this document for more information.) The 
reason that the NCM disconnects the node is because the NCM needs to ensure 
that every node in the cluster is in sync, and if a node is not heard from 
regularly, the NCM cannot be sure it is still in sync with the rest of the 
cluster. If, after 45 seconds, the node does send a new heartbeat, the NCM will 
automatically reconnect the node to the cluster. Both the disconnection due to 
lack of heartbeat and the reconnection once a heartbeat is received are reported to the DFM in the NCM's User Interface.
-
-*Communication within the Cluster* +
-
-As noted, the nodes communicate with the NCM via heartbeats. The communication 
that allows the nodes to find the NCM may be set up as multicast or unicast; 
this is configured in the _nifi.properties_ file (See <<system_properties>> ). 
By default, unicast is used. It is important to note that the nodes in a NiFi 
cluster are not aware of each other. They only communicate with the NCM. 
Therefore, if one of the nodes goes down, the other nodes in the cluster will 
not automatically pick up the load of the missing node. It is possible for the 
DFM to configure the dataflow for failover contingencies; however, this is 
dependent on the dataflow design and does not happen automatically.
-
-When the DFM makes changes to the dataflow, the NCM communicates those changes 
to the nodes and waits for each node to respond, indicating that it has made 
the change on its local flow. If the DFM wants to make another change, the NCM 
will only allow this to happen once all the nodes have acknowledged that 
they've implemented the last change. This is a safeguard to ensure that all the 
nodes in the cluster have the correct and up-to-date flow.
-
-*Dealing with Disconnected Nodes* +
-
-A DFM may manually disconnect a node from the cluster. But if a node becomes 
disconnected for any other reason (such as due to lack of heartbeat), the NCM 
will show a bulletin on the User Interface, and the DFM will not be able to 
make any changes to the dataflow until the issue of the disconnected node is 
resolved. The DFM or the Administrator will need to troubleshoot the issue with 
the node and resolve it before any new changes may be made to the dataflow. 
However, it is worth noting that just because a node is disconnected does not 
mean that it is not working; it just means that the NCM cannot communicate with 
the node.
-
-
-*Basic Cluster Setup* +
-
-This section describes the setup for a simple two-node, non-secure, unicast 
cluster comprised of three instances of NiFi:
-
-* The NCM
-* Node 1
-* Node 2
-
-Administrators may install each instance on a separate server; however, it is 
also perfectly fine to install the NCM and one of the nodes on the same server, 
as the NCM is very lightweight. Just keep in mind that the ports assigned to 
each instance must not collide if the NCM and one of the nodes share the same 
server.
-
-For each instance, certain properties in the _nifi.properties_ file will need 
to be updated. In particular, the Web and Clustering properties should be 
evaluated for your situation and adjusted accordingly. All the properties are 
described in the <<system_properties>> section of this guide; however, in this 
section, we will focus on the minimum properties that must be set for a simple 
cluster.
-
-For all three instances, the Cluster Common Properties can be left with the 
default settings. Note, however, that if you change these settings, they must 
be set the same on every instance in the cluster (NCM and nodes).
-
-For the NCM, the minimum properties to configure are as follows:
-
-* Under the Web Properties, set either the http or https port that you want 
the NCM to run on. If the NCM and one of the nodes are on the same server, make 
sure this port is different from the web port used by the node.
-* Under the Cluster Manager Properties, set the following:
-** nifi.cluster.is.manager - Set this to _true_.
-** nifi.cluster.manager.protocol.port - Set this to an open port that is 
higher than 1024 (anything lower requires root). Take note of this setting, as 
you will need to reference it when you set up the nodes.
-
-For Node 1, the minimum properties to configure are as follows:
-
-* Under the Web Properties, set either the http or https port that you want 
Node 1 to run on. If the NCM is running on the same server, choose a different 
web port for Node 1. Also, consider whether you need to set the http or https 
host property.
-* Under the State Management section, set the 
`nifi.state.management.provider.cluster` property to the identifier of the 
Cluster State Provider. Ensure that the Cluster State Provider has been 
configured in the _state-management.xml_ file. See <<state_providers>> for more 
information.
-* Under Cluster Node Properties, set the following:
-** nifi.cluster.is.node - Set this to _true_.
-** nifi.cluster.node.address - Set this to the fully qualified hostname of the 
node. If left blank, it defaults to "localhost".
-** nifi.cluster.node.protocol.port - Set this to an open port that is higher 
than 1024 (anything lower requires root). If Node 1 and the NCM are on the same 
server, make sure this port is different from the 
nifi.cluster.manager.protocol.port.
-** nifi.cluster.node.unicast.manager.address - Set this to the NCM's fully 
qualified hostname.
-** nifi.cluster.node.unicast.manager.protocol.port - Set this to exactly the 
same port that was set on the NCM for the property 
nifi.cluster.manager.protocol.port.
-
-For Node 2, the minimum properties to configure are as follows:
-
-* Under the Web Properties, set either the http or https port that you want 
Node 2 to run on. Also, consider whether you need to set the http or https host 
property.
-* Under the State Management section, set the 
`nifi.state.management.provider.cluster` property to the identifier of the 
Cluster State Provider. Ensure that the Cluster State Provider has been 
configured in the _state-management.xml_ file. See <<state_providers>> for more 
information.
-* Under the Cluster Node Properties, set the following:
-** nifi.cluster.is.node - Set this to _true_.
-** nifi.cluster.node.address - Set this to the fully qualified hostname of the 
node. If left blank, it defaults to "localhost".
-** nifi.cluster.node.protocol.port - Set this to an open port that is higher 
than 1024 (anything lower requires root).
-** nifi.cluster.node.unicast.manager.address - Set this to the NCM's fully 
qualified hostname.
-** nifi.cluster.node.unicast.manager.protocol.port - Set this to exactly the 
same port that was set on the NCM for the property 
nifi.cluster.manager.protocol.port.
-
-Now, it is possible to start up the cluster. Technically, it does not matter 
which instance starts up first. However, you could start the NCM first, then 
Node 1 and then Node 2. Since the first node that connects is automatically 
elected as the Primary Node, this sequence should create a cluster where Node 1 
is the Primary Node. Navigate to the URL for the NCM in your web browser, and 
the User Interface should look similar to the following:
-
-image:ncm.png["NCM User Interface", width=940]
-
-*Troubleshooting*
-
-If you encounter issues and your cluster does not work as described, 
investigate the nifi.app log and nifi.user log on both the NCM and the nodes. 
If needed, you can change the logging level to DEBUG by editing the 
conf/logback.xml file. Specifically, set the level="DEBUG" in the following 
line (instead of "INFO"):
-
-----
-    <logger name="org.apache.nifi.web.api.config" level="INFO"
-additivity="false">
-        <appender-ref ref="USER_FILE"/>
-    </logger>
-----
-
-
-
 [[bootstrap_properties]]
 Bootstrap Properties
 --------------------
