I have a new 3-node cluster I'm trying to stand up.

On most API calls (creating the proxy user, plus others used for
debugging), I get a "java.net.SocketTimeoutException: timeout" back.  In
debugging the issue, the timeout occurs when the API replicates the
request to the same instance that received it, i.e. when node1 replicates
the request to node1.
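
For context, the user-creation call that first showed the problem looks
roughly like this (a simplified sketch; the identity and cert/CA paths
below are placeholders for our real ones):

    import requests

    # POST /tenants/users creates the proxy user; this is the kind of call
    # that hangs for ~10 seconds and then comes back with the timeout message.
    resp = requests.post(
        "https://node1.example.com:8443/nifi-api/tenants/users",
        json={"revision": {"version": 0},
              "component": {"identity": "CN=proxy.example.com"}},  # placeholder identity
        cert=("client.pem", "client.key"),   # placeholder client cert
        verify="ca.pem",                     # placeholder CA bundle
    )
    print(resp.status_code, resp.text)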

I've attached a pruned debug log showing one of the timed-out requests
(/nifi-api/controller/cluster in this capture) with the current setting
"nifi.cluster.node.read.timeout=10 sec".  From what I can tell in the
attached log, thread "NiFi Web Server-24" gets a request for
/nifi-api/controller/cluster and replicates it to itself.  The replicated
request arrives 10 secs (the read timeout) later on thread "NiFi Web
Server-161", and at roughly the same moment (5-6 ms after it arrives)
okhttp3 times out the request.  When thread 161 goes to respond, the
connection is already closed.  So I think what's happening is that node1
locks/blocks something that the replicated request needs to read, and the
lock isn't released until the request times out.  So why is the request
getting replicated to itself in the first place?
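
For reference, the cluster timeout settings currently in nifi.properties
(the 10 sec read timeout is what lines up with the delay; I believe the
connection timeout is still at its default):

    nifi.cluster.node.connection.timeout=5 sec
    nifi.cluster.node.read.timeout=10 sec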

In troubleshooting I hit what seemed like the relevant REST endpoints and
got these results (a sketch of how I'm calling them follows the output):

/controller/config -> java.net.SocketTimeoutException: timeout
/flow/status -> java.net.SocketTimeoutException: timeout

/flow/cluster/summary -> {
  clusterSummary => {
    connectedNodes => "3 / 3",
    connectedNodeCount => 3,
    totalNodeCount => 3,
    connectedToCluster => true,
    clustered => true
  }
}

/controller/cluster -> {
  cluster => {
    nodes => [
      {
        nodeId => "8d066e95-b254-4527-a214-79ca5359f078",
        address => "node2.example.com",
        apiPort => 8443,
        status => "CONNECTED",
        heartbeat => "05/20/2019 02:19:13 UTC",
        roles => [
          "Cluster Coordinator"
        ],
        activeThreadCount => 0,
        queued => "0 / 0 bytes",
        events => [
          {
            timestamp => "05/20/2019 02:17:35 UTC",
            category => "INFO",
            message => "Received first heartbeat from connecting node. Node
connected."
          },
          {
            timestamp => "05/20/2019 02:17:20 UTC",
            category => "INFO",
            message => "Connection requested from existing node. Setting
status to connecting."
          },
          {
            timestamp => "05/20/2019 02:17:20 UTC",
            category => "INFO",
            message => "Connection requested from existing node. Setting
status to connecting."
          },
          {
            timestamp => "05/20/2019 02:17:14 UTC",
            category => "INFO",
            message => "Requesting that node connect to cluster"
          },
          {
            timestamp => "05/20/2019 02:17:14 UTC",
            category => "INFO",
            message => "Received heartbeat from node previously
disconnected due to Has Not Yet Connected"
          }
        ],
        nodeStartTime => "05/20/2019 02:17:08 UTC"
      },
      {
        nodeId => "e5785deb-5646-421f-b858-e35342195988",
        address => "node3.example.com",
        apiPort => 8443,
        status => "CONNECTED",
        heartbeat => "05/20/2019 02:19:10 UTC",
        roles => [
          "Primary Node"
        ],
        activeThreadCount => 0,
        queued => "0 / 0 bytes",
        events => [
          {
            timestamp => "05/20/2019 02:17:30 UTC",
            category => "INFO",
            message => "Received first heartbeat from connecting node. Node
connected."
          },
          {
            timestamp => "05/20/2019 02:17:17 UTC",
            category => "INFO",
            message => "Connection requested from existing node. Setting
status to connecting."
          },
          {
            timestamp => "05/20/2019 02:17:17 UTC",
            category => "INFO",
            message => "Connection requested from existing node. Setting
status to connecting."
          },
          {
            timestamp => "05/20/2019 02:17:14 UTC",
            category => "INFO",
            message => "Received heartbeat from node previously
disconnected due to Has Not Yet Connected"
          },
          {
            timestamp => "05/20/2019 02:17:14 UTC",
            category => "INFO",
            message => "Requesting that node connect to cluster"
          }
        ],
        nodeStartTime => "05/20/2019 02:17:06 UTC"
      },
      {
        nodeId => "334f49b5-d714-4013-8180-0ffca95aa829",
        address => " node1.example.com  ",
        apiPort => 8443,
        status => "CONNECTED",
        heartbeat => "05/20/2019 02:19:10 UTC",
        roles => [],
        activeThreadCount => 0,
        queued => "0 / 0 bytes",
        events => [
          {
            timestamp => "05/20/2019 02:17:35 UTC",
            category => "INFO",
            message => "Received first heartbeat from connecting node. Node
connected."
          },
          {
            timestamp => "05/20/2019 02:17:23 UTC",
            category => "INFO",
            message => "Connection requested from existing node. Setting
status to connecting."
          },
          {
            timestamp => "05/20/2019 02:17:17 UTC",
            category => "INFO",
            message => "Connection requested from existing node. Setting
status to connecting."
          },
          {
            timestamp => "05/20/2019 02:17:15 UTC",
            category => "INFO",
            message => "Received heartbeat from node previously
disconnected due to Has Not Yet Connected"
          },
          {
            timestamp => "05/20/2019 02:17:15 UTC",
            category => "INFO",
            message => "Requesting that node connect to cluster"
          }
        ],
        nodeStartTime => "05/20/2019 02:17:06 UTC"
      }
    ],
    generated => "02:19:15 UTC"
  }
}
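
In case the client side matters, here's roughly how I'm probing those
endpoints (simplified sketch; cert/CA paths are placeholders, and the
output above was pretty-printed from the JSON responses):

    import requests

    BASE = "https://node1.example.com:8443/nifi-api"

    # Two-way TLS session; paths are placeholders for our real cert material.
    session = requests.Session()
    session.cert = ("client.pem", "client.key")
    session.verify = "ca.pem"

    for path in ("/controller/config", "/flow/status",
                 "/flow/cluster/summary", "/controller/cluster"):
        resp = session.get(BASE + path, timeout=30)
        print(path, "->", resp.status_code)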


--
Curtis Ruck
2019-05-18 20:02:33,115 DEBUG [NiFi Web Server-24] 
o.a.nifi.web.server.HostHeaderHandler HostHeaderHandler#doScope on 
/nifi-api/controller/cluster
2019-05-18 20:02:33,115 DEBUG [NiFi Web Server-24] 
o.a.nifi.web.server.HostHeaderHandler Received request 
[/nifi-api/controller/cluster] with host header: node1.example.com:8443
2019-05-18 20:02:33,134 DEBUG [FileSystemRepository Workers Thread-2] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:33,258 DEBUG [NiFi Web Server-24] 
o.apache.nifi.util.ComponentIdGenerator Generating UUID 
cc89f0aa-016a-1000-ec9a-4b7333661f42 for msb=1558209753258, 
lsb=-1397721776189464766, ensureUnique=true
2019-05-18 20:02:33,259 DEBUG [NiFi Web Server-24] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Obtaining lock 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock@1ea8b09f[Read locks 
= 0] in order to replicate request GET 
https://node1.example.com:8443/nifi-api/controller/cluster
2019-05-18 20:02:33,259 DEBUG [NiFi Web Server-24] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Lock 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock@1ea8b09f[Read locks 
= 1] obtained in order to replicate request GET 
https://node1.example.com:8443/nifi-api/controller/cluster
2019-05-18 20:02:33,260 DEBUG [NiFi Web Server-24] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Replicating request GET 
https://node1.example.com:8443/nifi-api/controller/cluster with entity {} to 
[node1.example.com:8443]; response is null
2019-05-18 20:02:33,265 DEBUG [NiFi Web Server-24] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator For Request ID 
0bbc58a4-683c-40de-9273-60abaa9707e9, response object is 
StandardAsyncClusterResponse[id=0bbc58a4-683c-40de-9273-60abaa9707e9, 
uri=https://node1.example.com:8443/nifi-api/controller/cluster, method=GET, 
failure=false, responses=0/1]
2019-05-18 20:02:33,279 DEBUG [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Replicating request GET 
/nifi-api/controller/cluster to node1.example.com:8443
2019-05-18 20:02:33,279 DEBUG [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Replicating request to GET 
https://node1.example.com:8443/nifi-api/controller/cluster, request ID = null, 
headers = {Accept=application/json, text/javascript, 
X-Cluster-Id-Generation-Seed=cc89f0aa-016a-1000-ec9a-4b7333661f42, 
Referer=https://127.0.0.1:8443/nifi/login, Connection=close, 
X-Request-Replicated=true, X-ProxyPort=8443, X-ProxyHost=node1.example.com, 
X-ProxiedEntitiesChain=<no...@example.com>, 
X-RequestTransactionId=0bbc58a4-683c-40de-9273-60abaa9707e9, 
Accept-Language=en-US,en;q=0.5, Content-Length=2, 
Content-Type=application/json, X-ProxyScheme=https}
2019-05-18 20:02:33,280 DEBUG [Replicate Request Thread-1] 
o.a.n.c.c.h.r.o.OkHttpReplicationClient Replicating request 
OkHttpPreparedRequest[method=GET, headers={Accept=application/json, 
text/javascript, 
X-Cluster-Id-Generation-Seed=cc89f0aa-016a-1000-ec9a-4b7333661f42, 
Referer=https://127.0.0.1:8443/nifi/login, Connection=close, 
X-Request-Replicated=true, X-ProxyPort=8443, X-ProxyHost=node1.example.com, 
X-ProxiedEntitiesChain=<no...@example.com>, 
X-RequestTransactionId=0bbc58a4-683c-40de-9273-60abaa9707e9, 
Accept-Language=en-US,en;q=0.5, Content-Length=2, 
Content-Type=application/json, X-ProxyScheme=https}] to 
https://node1.example.com:8443/nifi-api/controller/cluster
2019-05-18 20:02:33,925 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Destroying Expired Archives for Container default
2019-05-18 20:02:33,926 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Currently 7463100416 bytes free for Container 
default; requirement is 5460940226 byte free, so no need to free space until an 
additional 2002160190 bytes are used
2019-05-18 20:02:33,926 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Searching for more archived data to expire
2019-05-18 20:02:33,948 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Deleting data based on timestamp
2019-05-18 20:02:33,948 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Oldest Archive Date for Container 
default is Sat May 18 20:02:33 UTC 2019; delete expired = 22 ms, sort remaining 
= 0 ms, delete oldest = 0 ms, cleanup = 0 ms
2019-05-18 20:02:33,949 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Container default signaled to allow 
Content Claim Creation
2019-05-18 20:02:34,046 DEBUG [Clustering Tasks Thread-3] 
o.apache.nifi.controller.FlowController Generated heartbeat
2019-05-18 20:02:34,049 DEBUG [Process Cluster Protocol Request-6] 
o.a.n.c.p.impl.SocketProtocolListener Received request 
f4125cf1-4c0a-48e5-8c0a-6486103feaa5 from node1.example.com
2019-05-18 20:02:34,134 DEBUG [FileSystemRepository Workers Thread-3] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:34,148 DEBUG [Process Cluster Protocol Request-6] 
o.a.n.c.p.impl.SocketProtocolListener Received message: Z  ?<?xml 
version="1.0" encoding="UTF-8" 
standalone="yes"?><heartbeatMessage><heartbeat><connectionStatus><nodeId><id>ed9e5a88-4a40-4807-b254-31d2e61c180f</id><apiAddress>node1.example.com</apiAddress><apiPort>8443</apiPort><socketAddress>node1.example.com</socketAddress><socketPort>8444</socketPort><loadBalanceAddress>node1.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node1.example.com</siteToSiteAddress><siteToSitePort>8445</siteToSitePort><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSiteSecure>true</siteToSiteSecure></nodeId><state>CONNECTED</state><updateId>14</updateId></connectionStatus><nodeIdentifier><apiAddress>node1.example.com</apiAddress><apiPort>8443</apiPort><id>ed9e5a88-4a40-4807-b254-31d2e61c180f</id><loadBalanceAddress>node1.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node1.example.com</siteToSiteAddress><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSitePort>8445</siteToSitePort><siteToSiteSecure>true</siteToSiteSecure><socketAddress>node1.example.com</socketAddress><socketPort>8444</socketPort></nodeIdentifier><payload>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9InllcyI/PjxoZWFydGJlYXRQYXlsb2FkPjxhY3RpdmVUaHJlYWRDb3VudD4wPC9hY3RpdmVUaHJlYWRDb3VudD48Y2x1c3RlclN0YXR1cz48bm9kZUlkPjxpZD45NTMxZDY3YS1iNmMzLTQwZGItODhjNi1mNjEyNDQ5ZjJhODc8L2lkPjxhcGlBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvYXBpQWRkcmVzcz48YXBpUG9ydD44NDQzPC9hcGlQb3J0Pjxzb2NrZXRBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc29ja2V0QWRkcmVzcz48c29ja2V0UG9ydD44NDQ0PC9zb2NrZXRQb3J0Pjxsb2FkQmFsYW5jZUFkZHJlc3M+djMyNS1kYXRhaW50Mi5pbmMxLmFybXkubWlsPC9sb2FkQmFsYW5jZUFkZHJlc3M+PGxvYWRCYWxhbmNlUG9ydD42MzQyPC9sb2FkQmFsYW5jZVBvcnQ+PHNpdGVUb1NpdGVBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc2l0ZVRvU2l0ZUFkZHJlc3M+PHNpdGVUb1NpdGVQb3J0Pjg0NDU8L3NpdGVUb1NpdGVQb3J0PjxzaXRlVG9TaXRlSHR0cEFwaVBvcnQ+ODQ0Mzwvc2l0ZVRvU2l0ZUh0dHBBcGlQb3J0PjxzaXRlVG9TaXRlU2VjdXJlPnRydWU8L3NpdGVUb1NpdGVTZWN1cmU+PC9ub2RlSWQ+PHN0YXRlPkNPTk5FQ1RFRDwvc3RhdGU+PHVwZGF0ZUlkPjE5PC91cGRhdGVJZD48L2NsdXN0ZXJTdGF0dXM+PGNsdXN0ZXJTdGF0dXM+PG5vZGVJZD48aWQ+ZWM2NGE5ZjgtZGQzNC00ZjZiLWFkYzYtNTAxYmNiZjRlYjhiPC9pZD48YXBpQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L2FwaUFkZHJlc3M+PGFwaVBvcnQ+ODQ0MzwvYXBpUG9ydD48c29ja2V0QWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NvY2tldEFkZHJlc3M+PHNvY2tldFBvcnQ+ODQ0NDwvc29ja2V0UG9ydD48bG9hZEJhbGFuY2VBZGRyZXNzPnYzMjUtZGF0YWludDMuaW5jMS5hcm15Lm1pbDwvbG9hZEJhbGFuY2VBZGRyZXNzPjxsb2FkQmFsYW5jZVBvcnQ+NjM0MjwvbG9hZEJhbGFuY2VQb3J0PjxzaXRlVG9TaXRlQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NpdGVUb1NpdGVBZGRyZXNzPjxzaXRlVG9TaXRlUG9ydD44NDQ1PC9zaXRlVG9TaXRlUG9ydD48c2l0ZVRvU2l0ZUh0dHBBcGlQb3J0Pjg0NDM8L3NpdGVUb1NpdGVIdHRwQXBpUG9ydD48c2l0ZVRvU2l0ZVNlY3VyZT50cnVlPC9zaXRlVG9TaXRlU2VjdXJlPjwvbm9kZUlkPjxzdGF0ZT5DT05ORUNURUQ8L3N0YXRlPjx1cGRhdGVJZD4yMjwvdXBkYXRlSWQ+PC9jbHVzdGVyU3RhdHVzPjxjbHVzdGVyU3RhdHVzPjxub2RlSWQ+PGlkPmVkOWU1YTg4LTRhNDAtNDgwNy1iMjU0LTMxZDJlNjFjMTgwZjwvaWQ+PGFwaUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9hcGlBZGRyZXNzPjxhcGlQb3J0Pjg0NDM8L2FwaVBvcnQ+PHNvY2tldEFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zb2NrZXRBZGRyZXNzPjxzb2NrZXRQb3J0Pjg0NDQ8L3NvY2tldFBvcnQ+PGxvYWRCYWxhbmNlQWRkcmVzcz52MzI1LWRhdGFpbnQxLmluYzEuYXJteS5taWw8L2xvYWRCYWxhbmNlQWRkcmVzcz48bG9hZEJhbGFuY2VQb3J0PjYzNDI8L2xvYWRCYWxhbmNlUG9ydD48c2l0ZVRvU2l0ZUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zaXRlVG9TaXRlQWRkcmVzcz48c2l0ZVRvU2l0ZVBvcnQ+ODQ0NTwvc2l0ZVRvU2l0ZVBvcnQ+PHNpdGVUb1NpdGVIdHRwQXBpUG9ydD44NDQzPC9zaXRlVG9TaXRlSHR0cEFwaVBvcnQ+PHNpdGVUb1N
pdGVTZWN1cmU+dHJ1ZTwvc2l0ZVRvU2l0ZVNlY3VyZT48L25vZGVJZD48c3RhdGU+Q09OTkVDVEVEPC9zdGF0ZT48dXBkYXRlSWQ+MTU8L3VwZGF0ZUlkPjwvY2x1c3RlclN0YXR1cz48c3lzdGVtU3RhcnRUaW1lPjE1NTgyMDkyNTc0MjM8L3N5c3RlbVN0YXJ0VGltZT48dG90YWxGbG93RmlsZUJ5dGVzPjA8L3RvdGFsRmxvd0ZpbGVCeXRlcz48dG90YWxGbG93RmlsZUNvdW50PjA8L3RvdGFsRmxvd0ZpbGVDb3VudD48L2hlYXJ0YmVhdFBheWxvYWQ+</payload></heartbeat></heartbeatMessage>
2019-05-18 20:02:34,149 DEBUG [Process Cluster Protocol Request-6] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received new heartbeat from 
node1.example.com:8443
2019-05-18 20:02:34,149 DEBUG [Process Cluster Protocol Request-6] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor 

Calculated diff between current cluster status and node cluster status as 
follows:
Node: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Self: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Difference: []


2019-05-18 20:02:34,149 DEBUG [Process Cluster Protocol Request-6] 
o.a.n.c.p.impl.SocketProtocolListener Sending response for request 
f4125cf1-4c0a-48e5-8c0a-6486103feaa5
2019-05-18 20:02:34,149 INFO [Process Cluster Protocol Request-6] 
o.a.n.c.p.impl.SocketProtocolListener Finished processing request 
f4125cf1-4c0a-48e5-8c0a-6486103feaa5 (type=HEARTBEAT, length=4059 bytes) from 
node1.example.com:8443 in 100 millis
2019-05-18 20:02:34,150 INFO [Clustering Tasks Thread-3] 
o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2019-05-18 
20:02:34,046 and sent to node1.example.com:8444 at 2019-05-18 20:02:34,150; 
send took 103 millis
2019-05-18 20:02:34,181 DEBUG [Listen to Bootstrap] 
org.apache.nifi.BootstrapListener Listening for Bootstrap Requests
2019-05-18 20:02:34,262 DEBUG [Process Cluster Protocol Request-7] 
o.a.n.c.p.impl.SocketProtocolListener Received request 
2b31e688-0a03-4916-a77f-4bd1c243d7d2 from node2.example.com
2019-05-18 20:02:34,376 DEBUG [Process Cluster Protocol Request-7] 
o.a.n.c.p.impl.SocketProtocolListener Received message: Z  ?<?xml 
version="1.0" encoding="UTF-8" 
standalone="yes"?><heartbeatMessage><heartbeat><connectionStatus><nodeId><id>9531d67a-b6c3-40db-88c6-f612449f2a87</id><apiAddress>node2.example.com</apiAddress><apiPort>8443</apiPort><socketAddress>node2.example.com</socketAddress><socketPort>8444</socketPort><loadBalanceAddress>node2.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node2.example.com</siteToSiteAddress><siteToSitePort>8445</siteToSitePort><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSiteSecure>true</siteToSiteSecure></nodeId><state>CONNECTED</state><updateId>18</updateId></connectionStatus><nodeIdentifier><apiAddress>node2.example.com</apiAddress><apiPort>8443</apiPort><id>9531d67a-b6c3-40db-88c6-f612449f2a87</id><loadBalanceAddress>node2.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node2.example.com</siteToSiteAddress><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSitePort>8445</siteToSitePort><siteToSiteSecure>true</siteToSiteSecure><socketAddress>node2.example.com</socketAddress><socketPort>8444</socketPort></nodeIdentifier><payload>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9InllcyI/PjxoZWFydGJlYXRQYXlsb2FkPjxhY3RpdmVUaHJlYWRDb3VudD4wPC9hY3RpdmVUaHJlYWRDb3VudD48Y2x1c3RlclN0YXR1cz48bm9kZUlkPjxpZD45NTMxZDY3YS1iNmMzLTQwZGItODhjNi1mNjEyNDQ5ZjJhODc8L2lkPjxhcGlBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvYXBpQWRkcmVzcz48YXBpUG9ydD44NDQzPC9hcGlQb3J0Pjxzb2NrZXRBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc29ja2V0QWRkcmVzcz48c29ja2V0UG9ydD44NDQ0PC9zb2NrZXRQb3J0Pjxsb2FkQmFsYW5jZUFkZHJlc3M+djMyNS1kYXRhaW50Mi5pbmMxLmFybXkubWlsPC9sb2FkQmFsYW5jZUFkZHJlc3M+PGxvYWRCYWxhbmNlUG9ydD42MzQyPC9sb2FkQmFsYW5jZVBvcnQ+PHNpdGVUb1NpdGVBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc2l0ZVRvU2l0ZUFkZHJlc3M+PHNpdGVUb1NpdGVQb3J0Pjg0NDU8L3NpdGVUb1NpdGVQb3J0PjxzaXRlVG9TaXRlSHR0cEFwaVBvcnQ+ODQ0Mzwvc2l0ZVRvU2l0ZUh0dHBBcGlQb3J0PjxzaXRlVG9TaXRlU2VjdXJlPnRydWU8L3NpdGVUb1NpdGVTZWN1cmU+PC9ub2RlSWQ+PHN0YXRlPkNPTk5FQ1RFRDwvc3RhdGU+PHVwZGF0ZUlkPjE5PC91cGRhdGVJZD48L2NsdXN0ZXJTdGF0dXM+PGNsdXN0ZXJTdGF0dXM+PG5vZGVJZD48aWQ+ZWM2NGE5ZjgtZGQzNC00ZjZiLWFkYzYtNTAxYmNiZjRlYjhiPC9pZD48YXBpQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L2FwaUFkZHJlc3M+PGFwaVBvcnQ+ODQ0MzwvYXBpUG9ydD48c29ja2V0QWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NvY2tldEFkZHJlc3M+PHNvY2tldFBvcnQ+ODQ0NDwvc29ja2V0UG9ydD48bG9hZEJhbGFuY2VBZGRyZXNzPnYzMjUtZGF0YWludDMuaW5jMS5hcm15Lm1pbDwvbG9hZEJhbGFuY2VBZGRyZXNzPjxsb2FkQmFsYW5jZVBvcnQ+NjM0MjwvbG9hZEJhbGFuY2VQb3J0PjxzaXRlVG9TaXRlQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NpdGVUb1NpdGVBZGRyZXNzPjxzaXRlVG9TaXRlUG9ydD44NDQ1PC9zaXRlVG9TaXRlUG9ydD48c2l0ZVRvU2l0ZUh0dHBBcGlQb3J0Pjg0NDM8L3NpdGVUb1NpdGVIdHRwQXBpUG9ydD48c2l0ZVRvU2l0ZVNlY3VyZT50cnVlPC9zaXRlVG9TaXRlU2VjdXJlPjwvbm9kZUlkPjxzdGF0ZT5DT05ORUNURUQ8L3N0YXRlPjx1cGRhdGVJZD4yMjwvdXBkYXRlSWQ+PC9jbHVzdGVyU3RhdHVzPjxjbHVzdGVyU3RhdHVzPjxub2RlSWQ+PGlkPmVkOWU1YTg4LTRhNDAtNDgwNy1iMjU0LTMxZDJlNjFjMTgwZjwvaWQ+PGFwaUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9hcGlBZGRyZXNzPjxhcGlQb3J0Pjg0NDM8L2FwaVBvcnQ+PHNvY2tldEFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zb2NrZXRBZGRyZXNzPjxzb2NrZXRQb3J0Pjg0NDQ8L3NvY2tldFBvcnQ+PGxvYWRCYWxhbmNlQWRkcmVzcz52MzI1LWRhdGFpbnQxLmluYzEuYXJteS5taWw8L2xvYWRCYWxhbmNlQWRkcmVzcz48bG9hZEJhbGFuY2VQb3J0PjYzNDI8L2xvYWRCYWxhbmNlUG9ydD48c2l0ZVRvU2l0ZUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zaXRlVG9TaXRlQWRkcmVzcz48c2l0ZVRvU2l0ZVBvcnQ+ODQ0NTwvc2l0ZVRvU2l0ZVBvcnQ+PHNpdGVUb1NpdGVIdHRwQXBpUG9ydD44NDQzPC9zaXRlVG9TaXRlSHR0cEFwaVBvcnQ+PHNpdGVUb1N
pdGVTZWN1cmU+dHJ1ZTwvc2l0ZVRvU2l0ZVNlY3VyZT48L25vZGVJZD48c3RhdGU+Q09OTkVDVEVEPC9zdGF0ZT48dXBkYXRlSWQ+MTU8L3VwZGF0ZUlkPjwvY2x1c3RlclN0YXR1cz48c3lzdGVtU3RhcnRUaW1lPjE1NTgyMDkzNDU4MTY8L3N5c3RlbVN0YXJ0VGltZT48dG90YWxGbG93RmlsZUJ5dGVzPjA8L3RvdGFsRmxvd0ZpbGVCeXRlcz48dG90YWxGbG93RmlsZUNvdW50PjA8L3RvdGFsRmxvd0ZpbGVDb3VudD48L2hlYXJ0YmVhdFBheWxvYWQ+</payload></heartbeat></heartbeatMessage>
2019-05-18 20:02:34,377 DEBUG [Process Cluster Protocol Request-7] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received new heartbeat from 
node2.example.com:8443
2019-05-18 20:02:34,377 DEBUG [Process Cluster Protocol Request-7] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor 

Calculated diff between current cluster status and node cluster status as 
follows:
Node: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Self: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Difference: []


2019-05-18 20:02:34,377 DEBUG [Process Cluster Protocol Request-7] 
o.a.n.c.p.impl.SocketProtocolListener Sending response for request 
2b31e688-0a03-4916-a77f-4bd1c243d7d2
2019-05-18 20:02:34,377 INFO [Process Cluster Protocol Request-7] 
o.a.n.c.p.impl.SocketProtocolListener Finished processing request 
2b31e688-0a03-4916-a77f-4bd1c243d7d2 (type=HEARTBEAT, length=4059 bytes) from 
node2.example.com:8443 in 115 millis
2019-05-18 20:02:34,949 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Destroying Expired Archives for Container default
2019-05-18 20:02:34,949 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Currently 7463088128 bytes free for Container 
default; requirement is 5460940226 byte free, so no need to free space until an 
additional 2002147902 bytes are used
2019-05-18 20:02:34,949 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Searching for more archived data to expire
2019-05-18 20:02:34,973 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Deleting data based on timestamp
2019-05-18 20:02:34,973 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Oldest Archive Date for Container 
default is Sat May 18 20:02:34 UTC 2019; delete expired = 23 ms, sort remaining 
= 0 ms, delete oldest = 0 ms, cleanup = 0 ms
2019-05-18 20:02:34,973 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Container default signaled to allow 
Content Claim Creation
2019-05-18 20:02:35,123 DEBUG [Provenance Maintenance Thread-1] 
o.a.n.p.PersistentProvenanceRepository Purging old event files: []
2019-05-18 20:02:35,123 DEBUG [Provenance Maintenance Thread-1] 
o.a.n.p.PersistentProvenanceRepository After expiration, path map: {}
2019-05-18 20:02:35,134 DEBUG [FileSystemRepository Workers Thread-2] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:35,973 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Destroying Expired Archives for Container default
2019-05-18 20:02:35,973 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Currently 7463084032 bytes free for Container 
default; requirement is 5460940226 byte free, so no need to free space until an 
additional 2002143806 bytes are used
2019-05-18 20:02:35,973 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Searching for more archived data to expire
2019-05-18 20:02:35,998 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Deleting data based on timestamp
2019-05-18 20:02:35,998 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Oldest Archive Date for Container 
default is Sat May 18 20:02:35 UTC 2019; delete expired = 24 ms, sort remaining 
= 0 ms, delete oldest = 0 ms, cleanup = 0 ms
2019-05-18 20:02:35,998 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Container default signaled to allow 
Content Claim Creation
2019-05-18 20:02:36,135 DEBUG [FileSystemRepository Workers Thread-3] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:36,183 DEBUG [Listen to Bootstrap] 
org.apache.nifi.BootstrapListener Listening for Bootstrap Requests
2019-05-18 20:02:36,217 DEBUG [Validate Components Thread-4] 
o.a.n.c.validation.TriggerValidationTask Triggering validation of all components
2019-05-18 20:02:36,217 DEBUG [Http Site-to-Site Transaction Maintenance] 
o.a.nifi.remote.HttpRemoteSiteListener Transaction maintenance task finished. 
originalSize=0, currentSize=0
2019-05-18 20:02:36,285 INFO [Heartbeat Monitor Thread-1] 
o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 3 heartbeats in 15761 
nanos
2019-05-18 20:02:36,358 DEBUG [Leader Election Notification Thread-4] 
o.a.n.c.l.e.CuratorLeaderElectionManager Reached out to ZooKeeper to determine 
which node is the elected leader for Role 'Cluster Coordinator'. Elected Leader 
= 'node1.example.com:8444', Participant ID = 'node1.example.com:8444', This 
Node Elected = true
2019-05-18 20:02:36,883 DEBUG [Process Cluster Protocol Request-8] 
o.a.n.c.p.impl.SocketProtocolListener Received request 
7b3d4d08-db85-457e-82e5-7d9d228b25bd from node3.example.com
2019-05-18 20:02:36,987 DEBUG [Process Cluster Protocol Request-8] 
o.a.n.c.p.impl.SocketProtocolListener Received message: Z  ?<?xml 
version="1.0" encoding="UTF-8" 
standalone="yes"?><heartbeatMessage><heartbeat><connectionStatus><nodeId><id>ec64a9f8-dd34-4f6b-adc6-501bcbf4eb8b</id><apiAddress>node3.example.com</apiAddress><apiPort>8443</apiPort><socketAddress>node3.example.com</socketAddress><socketPort>8444</socketPort><loadBalanceAddress>node3.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node3.example.com</siteToSiteAddress><siteToSitePort>8445</siteToSitePort><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSiteSecure>true</siteToSiteSecure></nodeId><state>CONNECTED</state><updateId>21</updateId></connectionStatus><nodeIdentifier><apiAddress>node3.example.com</apiAddress><apiPort>8443</apiPort><id>ec64a9f8-dd34-4f6b-adc6-501bcbf4eb8b</id><loadBalanceAddress>node3.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node3.example.com</siteToSiteAddress><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSitePort>8445</siteToSitePort><siteToSiteSecure>true</siteToSiteSecure><socketAddress>node3.example.com</socketAddress><socketPort>8444</socketPort></nodeIdentifier><payload>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9InllcyI/PjxoZWFydGJlYXRQYXlsb2FkPjxhY3RpdmVUaHJlYWRDb3VudD4wPC9hY3RpdmVUaHJlYWRDb3VudD48Y2x1c3RlclN0YXR1cz48bm9kZUlkPjxpZD45NTMxZDY3YS1iNmMzLTQwZGItODhjNi1mNjEyNDQ5ZjJhODc8L2lkPjxhcGlBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvYXBpQWRkcmVzcz48YXBpUG9ydD44NDQzPC9hcGlQb3J0Pjxzb2NrZXRBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc29ja2V0QWRkcmVzcz48c29ja2V0UG9ydD44NDQ0PC9zb2NrZXRQb3J0Pjxsb2FkQmFsYW5jZUFkZHJlc3M+djMyNS1kYXRhaW50Mi5pbmMxLmFybXkubWlsPC9sb2FkQmFsYW5jZUFkZHJlc3M+PGxvYWRCYWxhbmNlUG9ydD42MzQyPC9sb2FkQmFsYW5jZVBvcnQ+PHNpdGVUb1NpdGVBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc2l0ZVRvU2l0ZUFkZHJlc3M+PHNpdGVUb1NpdGVQb3J0Pjg0NDU8L3NpdGVUb1NpdGVQb3J0PjxzaXRlVG9TaXRlSHR0cEFwaVBvcnQ+ODQ0Mzwvc2l0ZVRvU2l0ZUh0dHBBcGlQb3J0PjxzaXRlVG9TaXRlU2VjdXJlPnRydWU8L3NpdGVUb1NpdGVTZWN1cmU+PC9ub2RlSWQ+PHN0YXRlPkNPTk5FQ1RFRDwvc3RhdGU+PHVwZGF0ZUlkPjE5PC91cGRhdGVJZD48L2NsdXN0ZXJTdGF0dXM+PGNsdXN0ZXJTdGF0dXM+PG5vZGVJZD48aWQ+ZWM2NGE5ZjgtZGQzNC00ZjZiLWFkYzYtNTAxYmNiZjRlYjhiPC9pZD48YXBpQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L2FwaUFkZHJlc3M+PGFwaVBvcnQ+ODQ0MzwvYXBpUG9ydD48c29ja2V0QWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NvY2tldEFkZHJlc3M+PHNvY2tldFBvcnQ+ODQ0NDwvc29ja2V0UG9ydD48bG9hZEJhbGFuY2VBZGRyZXNzPnYzMjUtZGF0YWludDMuaW5jMS5hcm15Lm1pbDwvbG9hZEJhbGFuY2VBZGRyZXNzPjxsb2FkQmFsYW5jZVBvcnQ+NjM0MjwvbG9hZEJhbGFuY2VQb3J0PjxzaXRlVG9TaXRlQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NpdGVUb1NpdGVBZGRyZXNzPjxzaXRlVG9TaXRlUG9ydD44NDQ1PC9zaXRlVG9TaXRlUG9ydD48c2l0ZVRvU2l0ZUh0dHBBcGlQb3J0Pjg0NDM8L3NpdGVUb1NpdGVIdHRwQXBpUG9ydD48c2l0ZVRvU2l0ZVNlY3VyZT50cnVlPC9zaXRlVG9TaXRlU2VjdXJlPjwvbm9kZUlkPjxzdGF0ZT5DT05ORUNURUQ8L3N0YXRlPjx1cGRhdGVJZD4yMjwvdXBkYXRlSWQ+PC9jbHVzdGVyU3RhdHVzPjxjbHVzdGVyU3RhdHVzPjxub2RlSWQ+PGlkPmVkOWU1YTg4LTRhNDAtNDgwNy1iMjU0LTMxZDJlNjFjMTgwZjwvaWQ+PGFwaUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9hcGlBZGRyZXNzPjxhcGlQb3J0Pjg0NDM8L2FwaVBvcnQ+PHNvY2tldEFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zb2NrZXRBZGRyZXNzPjxzb2NrZXRQb3J0Pjg0NDQ8L3NvY2tldFBvcnQ+PGxvYWRCYWxhbmNlQWRkcmVzcz52MzI1LWRhdGFpbnQxLmluYzEuYXJteS5taWw8L2xvYWRCYWxhbmNlQWRkcmVzcz48bG9hZEJhbGFuY2VQb3J0PjYzNDI8L2xvYWRCYWxhbmNlUG9ydD48c2l0ZVRvU2l0ZUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zaXRlVG9TaXRlQWRkcmVzcz48c2l0ZVRvU2l0ZVBvcnQ+ODQ0NTwvc2l0ZVRvU2l0ZVBvcnQ+PHNpdGVUb1NpdGVIdHRwQXBpUG9ydD44NDQzPC9zaXRlVG9TaXRlSHR0cEFwaVBvcnQ+PHNpdGVUb1N
pdGVTZWN1cmU+dHJ1ZTwvc2l0ZVRvU2l0ZVNlY3VyZT48L25vZGVJZD48c3RhdGU+Q09OTkVDVEVEPC9zdGF0ZT48dXBkYXRlSWQ+MTU8L3VwZGF0ZUlkPjwvY2x1c3RlclN0YXR1cz48c3lzdGVtU3RhcnRUaW1lPjE1NTgyMDk0NDM2NDI8L3N5c3RlbVN0YXJ0VGltZT48dG90YWxGbG93RmlsZUJ5dGVzPjA8L3RvdGFsRmxvd0ZpbGVCeXRlcz48dG90YWxGbG93RmlsZUNvdW50PjA8L3RvdGFsRmxvd0ZpbGVDb3VudD48L2hlYXJ0YmVhdFBheWxvYWQ+</payload></heartbeat></heartbeatMessage>
2019-05-18 20:02:36,988 DEBUG [Process Cluster Protocol Request-8] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received new heartbeat from 
node3.example.com:8443
2019-05-18 20:02:36,988 DEBUG [Process Cluster Protocol Request-8] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor 

Calculated diff between current cluster status and node cluster status as 
follows:
Node: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Self: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Difference: []


2019-05-18 20:02:36,988 DEBUG [Process Cluster Protocol Request-8] 
o.a.n.c.p.impl.SocketProtocolListener Sending response for request 
7b3d4d08-db85-457e-82e5-7d9d228b25bd
2019-05-18 20:02:36,988 INFO [Process Cluster Protocol Request-8] 
o.a.n.c.p.impl.SocketProtocolListener Finished processing request 
7b3d4d08-db85-457e-82e5-7d9d228b25bd (type=HEARTBEAT, length=4059 bytes) from 
node3.example.com:8443 in 104 millis
2019-05-18 20:02:36,998 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Destroying Expired Archives for Container default
2019-05-18 20:02:36,999 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Currently 7463075840 bytes free for Container 
default; requirement is 5460940226 byte free, so no need to free space until an 
additional 2002135614 bytes are used
2019-05-18 20:02:36,999 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Searching for more archived data to expire
2019-05-18 20:02:37,022 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Deleting data based on timestamp
2019-05-18 20:02:37,022 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Oldest Archive Date for Container 
default is Sat May 18 20:02:37 UTC 2019; delete expired = 23 ms, sort remaining 
= 0 ms, delete oldest = 0 ms, cleanup = 0 ms
2019-05-18 20:02:37,023 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Container default signaled to allow 
Content Claim Creation
2019-05-18 20:02:37,135 DEBUG [FileSystemRepository Workers Thread-2] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:37,623 DEBUG [Provenance Maintenance Thread-3] 
o.a.n.p.PersistentProvenanceRepository Purging old event files: []
2019-05-18 20:02:37,624 DEBUG [Provenance Maintenance Thread-3] 
o.a.n.p.PersistentProvenanceRepository After expiration, path map: {}
2019-05-18 20:02:38,023 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Destroying Expired Archives for Container default
2019-05-18 20:02:38,023 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Currently 7463075840 bytes free for Container 
default; requirement is 5460940226 byte free, so no need to free space until an 
additional 2002135614 bytes are used
2019-05-18 20:02:38,023 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Searching for more archived data to expire
2019-05-18 20:02:38,047 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Deleting data based on timestamp
2019-05-18 20:02:38,047 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Oldest Archive Date for Container 
default is Sat May 18 20:02:38 UTC 2019; delete expired = 23 ms, sort remaining 
= 0 ms, delete oldest = 0 ms, cleanup = 0 ms
2019-05-18 20:02:38,047 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Container default signaled to allow 
Content Claim Creation
2019-05-18 20:02:38,135 DEBUG [FileSystemRepository Workers Thread-3] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:38,185 DEBUG [Listen to Bootstrap] 
org.apache.nifi.BootstrapListener Listening for Bootstrap Requests
2019-05-18 20:02:39,047 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Destroying Expired Archives for Container default
2019-05-18 20:02:39,047 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Currently 7463075840 bytes free for Container 
default; requirement is 5460940226 byte free, so no need to free space until an 
additional 2002135614 bytes are used
2019-05-18 20:02:39,047 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Searching for more archived data to expire
2019-05-18 20:02:39,083 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Deleting data based on timestamp
2019-05-18 20:02:39,083 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Oldest Archive Date for Container 
default is Sat May 18 20:02:39 UTC 2019; delete expired = 35 ms, sort remaining 
= 0 ms, delete oldest = 0 ms, cleanup = 0 ms
2019-05-18 20:02:39,083 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Container default signaled to allow 
Content Claim Creation
2019-05-18 20:02:39,136 DEBUG [FileSystemRepository Workers Thread-2] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:39,151 DEBUG [Clustering Tasks Thread-3] 
o.apache.nifi.controller.FlowController Generated heartbeat
2019-05-18 20:02:39,154 DEBUG [Process Cluster Protocol Request-9] 
o.a.n.c.p.impl.SocketProtocolListener Received request 
ab6c1533-fac8-4290-a188-c59cdf6298f6 from node1.example.com
2019-05-18 20:02:39,253 DEBUG [Process Cluster Protocol Request-9] 
o.a.n.c.p.impl.SocketProtocolListener Received message: Z  ?<?xml 
version="1.0" encoding="UTF-8" 
standalone="yes"?><heartbeatMessage><heartbeat><connectionStatus><nodeId><id>ed9e5a88-4a40-4807-b254-31d2e61c180f</id><apiAddress>node1.example.com</apiAddress><apiPort>8443</apiPort><socketAddress>node1.example.com</socketAddress><socketPort>8444</socketPort><loadBalanceAddress>node1.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node1.example.com</siteToSiteAddress><siteToSitePort>8445</siteToSitePort><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSiteSecure>true</siteToSiteSecure></nodeId><state>CONNECTED</state><updateId>14</updateId></connectionStatus><nodeIdentifier><apiAddress>node1.example.com</apiAddress><apiPort>8443</apiPort><id>ed9e5a88-4a40-4807-b254-31d2e61c180f</id><loadBalanceAddress>node1.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node1.example.com</siteToSiteAddress><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSitePort>8445</siteToSitePort><siteToSiteSecure>true</siteToSiteSecure><socketAddress>node1.example.com</socketAddress><socketPort>8444</socketPort></nodeIdentifier><payload>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9InllcyI/PjxoZWFydGJlYXRQYXlsb2FkPjxhY3RpdmVUaHJlYWRDb3VudD4wPC9hY3RpdmVUaHJlYWRDb3VudD48Y2x1c3RlclN0YXR1cz48bm9kZUlkPjxpZD45NTMxZDY3YS1iNmMzLTQwZGItODhjNi1mNjEyNDQ5ZjJhODc8L2lkPjxhcGlBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvYXBpQWRkcmVzcz48YXBpUG9ydD44NDQzPC9hcGlQb3J0Pjxzb2NrZXRBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc29ja2V0QWRkcmVzcz48c29ja2V0UG9ydD44NDQ0PC9zb2NrZXRQb3J0Pjxsb2FkQmFsYW5jZUFkZHJlc3M+djMyNS1kYXRhaW50Mi5pbmMxLmFybXkubWlsPC9sb2FkQmFsYW5jZUFkZHJlc3M+PGxvYWRCYWxhbmNlUG9ydD42MzQyPC9sb2FkQmFsYW5jZVBvcnQ+PHNpdGVUb1NpdGVBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc2l0ZVRvU2l0ZUFkZHJlc3M+PHNpdGVUb1NpdGVQb3J0Pjg0NDU8L3NpdGVUb1NpdGVQb3J0PjxzaXRlVG9TaXRlSHR0cEFwaVBvcnQ+ODQ0Mzwvc2l0ZVRvU2l0ZUh0dHBBcGlQb3J0PjxzaXRlVG9TaXRlU2VjdXJlPnRydWU8L3NpdGVUb1NpdGVTZWN1cmU+PC9ub2RlSWQ+PHN0YXRlPkNPTk5FQ1RFRDwvc3RhdGU+PHVwZGF0ZUlkPjE5PC91cGRhdGVJZD48L2NsdXN0ZXJTdGF0dXM+PGNsdXN0ZXJTdGF0dXM+PG5vZGVJZD48aWQ+ZWM2NGE5ZjgtZGQzNC00ZjZiLWFkYzYtNTAxYmNiZjRlYjhiPC9pZD48YXBpQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L2FwaUFkZHJlc3M+PGFwaVBvcnQ+ODQ0MzwvYXBpUG9ydD48c29ja2V0QWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NvY2tldEFkZHJlc3M+PHNvY2tldFBvcnQ+ODQ0NDwvc29ja2V0UG9ydD48bG9hZEJhbGFuY2VBZGRyZXNzPnYzMjUtZGF0YWludDMuaW5jMS5hcm15Lm1pbDwvbG9hZEJhbGFuY2VBZGRyZXNzPjxsb2FkQmFsYW5jZVBvcnQ+NjM0MjwvbG9hZEJhbGFuY2VQb3J0PjxzaXRlVG9TaXRlQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NpdGVUb1NpdGVBZGRyZXNzPjxzaXRlVG9TaXRlUG9ydD44NDQ1PC9zaXRlVG9TaXRlUG9ydD48c2l0ZVRvU2l0ZUh0dHBBcGlQb3J0Pjg0NDM8L3NpdGVUb1NpdGVIdHRwQXBpUG9ydD48c2l0ZVRvU2l0ZVNlY3VyZT50cnVlPC9zaXRlVG9TaXRlU2VjdXJlPjwvbm9kZUlkPjxzdGF0ZT5DT05ORUNURUQ8L3N0YXRlPjx1cGRhdGVJZD4yMjwvdXBkYXRlSWQ+PC9jbHVzdGVyU3RhdHVzPjxjbHVzdGVyU3RhdHVzPjxub2RlSWQ+PGlkPmVkOWU1YTg4LTRhNDAtNDgwNy1iMjU0LTMxZDJlNjFjMTgwZjwvaWQ+PGFwaUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9hcGlBZGRyZXNzPjxhcGlQb3J0Pjg0NDM8L2FwaVBvcnQ+PHNvY2tldEFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zb2NrZXRBZGRyZXNzPjxzb2NrZXRQb3J0Pjg0NDQ8L3NvY2tldFBvcnQ+PGxvYWRCYWxhbmNlQWRkcmVzcz52MzI1LWRhdGFpbnQxLmluYzEuYXJteS5taWw8L2xvYWRCYWxhbmNlQWRkcmVzcz48bG9hZEJhbGFuY2VQb3J0PjYzNDI8L2xvYWRCYWxhbmNlUG9ydD48c2l0ZVRvU2l0ZUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zaXRlVG9TaXRlQWRkcmVzcz48c2l0ZVRvU2l0ZVBvcnQ+ODQ0NTwvc2l0ZVRvU2l0ZVBvcnQ+PHNpdGVUb1NpdGVIdHRwQXBpUG9ydD44NDQzPC9zaXRlVG9TaXRlSHR0cEFwaVBvcnQ+PHNpdGVUb1N
pdGVTZWN1cmU+dHJ1ZTwvc2l0ZVRvU2l0ZVNlY3VyZT48L25vZGVJZD48c3RhdGU+Q09OTkVDVEVEPC9zdGF0ZT48dXBkYXRlSWQ+MTU8L3VwZGF0ZUlkPjwvY2x1c3RlclN0YXR1cz48c3lzdGVtU3RhcnRUaW1lPjE1NTgyMDkyNTc0MjM8L3N5c3RlbVN0YXJ0VGltZT48dG90YWxGbG93RmlsZUJ5dGVzPjA8L3RvdGFsRmxvd0ZpbGVCeXRlcz48dG90YWxGbG93RmlsZUNvdW50PjA8L3RvdGFsRmxvd0ZpbGVDb3VudD48L2hlYXJ0YmVhdFBheWxvYWQ+</payload></heartbeat></heartbeatMessage>
2019-05-18 20:02:39,253 DEBUG [Process Cluster Protocol Request-9] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received new heartbeat from 
node1.example.com:8443
2019-05-18 20:02:39,254 DEBUG [Process Cluster Protocol Request-9] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor 

Calculated diff between current cluster status and node cluster status as 
follows:
Node: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Self: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Difference: []


2019-05-18 20:02:39,254 DEBUG [Process Cluster Protocol Request-9] 
o.a.n.c.p.impl.SocketProtocolListener Sending response for request 
ab6c1533-fac8-4290-a188-c59cdf6298f6
2019-05-18 20:02:39,254 INFO [Process Cluster Protocol Request-9] 
o.a.n.c.p.impl.SocketProtocolListener Finished processing request 
ab6c1533-fac8-4290-a188-c59cdf6298f6 (type=HEARTBEAT, length=4059 bytes) from 
node1.example.com:8443 in 100 millis
2019-05-18 20:02:39,255 INFO [Clustering Tasks Thread-3] 
o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2019-05-18 
20:02:39,151 and sent to node1.example.com:8444 at 2019-05-18 20:02:39,255; 
send took 103 millis
2019-05-18 20:02:39,382 DEBUG [Process Cluster Protocol Request-10] 
o.a.n.c.p.impl.SocketProtocolListener Received request 
e5f3882f-f0e8-4c9a-9a0d-8985dd77c998 from node2.example.com
2019-05-18 20:02:39,480 DEBUG [Process Cluster Protocol Request-10] 
o.a.n.c.p.impl.SocketProtocolListener Received message: Z  ?<?xml 
version="1.0" encoding="UTF-8" 
standalone="yes"?><heartbeatMessage><heartbeat><connectionStatus><nodeId><id>9531d67a-b6c3-40db-88c6-f612449f2a87</id><apiAddress>node2.example.com</apiAddress><apiPort>8443</apiPort><socketAddress>node2.example.com</socketAddress><socketPort>8444</socketPort><loadBalanceAddress>node2.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node2.example.com</siteToSiteAddress><siteToSitePort>8445</siteToSitePort><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSiteSecure>true</siteToSiteSecure></nodeId><state>CONNECTED</state><updateId>18</updateId></connectionStatus><nodeIdentifier><apiAddress>node2.example.com</apiAddress><apiPort>8443</apiPort><id>9531d67a-b6c3-40db-88c6-f612449f2a87</id><loadBalanceAddress>node2.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node2.example.com</siteToSiteAddress><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSitePort>8445</siteToSitePort><siteToSiteSecure>true</siteToSiteSecure><socketAddress>node2.example.com</socketAddress><socketPort>8444</socketPort></nodeIdentifier><payload>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9InllcyI/PjxoZWFydGJlYXRQYXlsb2FkPjxhY3RpdmVUaHJlYWRDb3VudD4wPC9hY3RpdmVUaHJlYWRDb3VudD48Y2x1c3RlclN0YXR1cz48bm9kZUlkPjxpZD45NTMxZDY3YS1iNmMzLTQwZGItODhjNi1mNjEyNDQ5ZjJhODc8L2lkPjxhcGlBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvYXBpQWRkcmVzcz48YXBpUG9ydD44NDQzPC9hcGlQb3J0Pjxzb2NrZXRBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc29ja2V0QWRkcmVzcz48c29ja2V0UG9ydD44NDQ0PC9zb2NrZXRQb3J0Pjxsb2FkQmFsYW5jZUFkZHJlc3M+djMyNS1kYXRhaW50Mi5pbmMxLmFybXkubWlsPC9sb2FkQmFsYW5jZUFkZHJlc3M+PGxvYWRCYWxhbmNlUG9ydD42MzQyPC9sb2FkQmFsYW5jZVBvcnQ+PHNpdGVUb1NpdGVBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc2l0ZVRvU2l0ZUFkZHJlc3M+PHNpdGVUb1NpdGVQb3J0Pjg0NDU8L3NpdGVUb1NpdGVQb3J0PjxzaXRlVG9TaXRlSHR0cEFwaVBvcnQ+ODQ0Mzwvc2l0ZVRvU2l0ZUh0dHBBcGlQb3J0PjxzaXRlVG9TaXRlU2VjdXJlPnRydWU8L3NpdGVUb1NpdGVTZWN1cmU+PC9ub2RlSWQ+PHN0YXRlPkNPTk5FQ1RFRDwvc3RhdGU+PHVwZGF0ZUlkPjE5PC91cGRhdGVJZD48L2NsdXN0ZXJTdGF0dXM+PGNsdXN0ZXJTdGF0dXM+PG5vZGVJZD48aWQ+ZWM2NGE5ZjgtZGQzNC00ZjZiLWFkYzYtNTAxYmNiZjRlYjhiPC9pZD48YXBpQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L2FwaUFkZHJlc3M+PGFwaVBvcnQ+ODQ0MzwvYXBpUG9ydD48c29ja2V0QWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NvY2tldEFkZHJlc3M+PHNvY2tldFBvcnQ+ODQ0NDwvc29ja2V0UG9ydD48bG9hZEJhbGFuY2VBZGRyZXNzPnYzMjUtZGF0YWludDMuaW5jMS5hcm15Lm1pbDwvbG9hZEJhbGFuY2VBZGRyZXNzPjxsb2FkQmFsYW5jZVBvcnQ+NjM0MjwvbG9hZEJhbGFuY2VQb3J0PjxzaXRlVG9TaXRlQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NpdGVUb1NpdGVBZGRyZXNzPjxzaXRlVG9TaXRlUG9ydD44NDQ1PC9zaXRlVG9TaXRlUG9ydD48c2l0ZVRvU2l0ZUh0dHBBcGlQb3J0Pjg0NDM8L3NpdGVUb1NpdGVIdHRwQXBpUG9ydD48c2l0ZVRvU2l0ZVNlY3VyZT50cnVlPC9zaXRlVG9TaXRlU2VjdXJlPjwvbm9kZUlkPjxzdGF0ZT5DT05ORUNURUQ8L3N0YXRlPjx1cGRhdGVJZD4yMjwvdXBkYXRlSWQ+PC9jbHVzdGVyU3RhdHVzPjxjbHVzdGVyU3RhdHVzPjxub2RlSWQ+PGlkPmVkOWU1YTg4LTRhNDAtNDgwNy1iMjU0LTMxZDJlNjFjMTgwZjwvaWQ+PGFwaUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9hcGlBZGRyZXNzPjxhcGlQb3J0Pjg0NDM8L2FwaVBvcnQ+PHNvY2tldEFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zb2NrZXRBZGRyZXNzPjxzb2NrZXRQb3J0Pjg0NDQ8L3NvY2tldFBvcnQ+PGxvYWRCYWxhbmNlQWRkcmVzcz52MzI1LWRhdGFpbnQxLmluYzEuYXJteS5taWw8L2xvYWRCYWxhbmNlQWRkcmVzcz48bG9hZEJhbGFuY2VQb3J0PjYzNDI8L2xvYWRCYWxhbmNlUG9ydD48c2l0ZVRvU2l0ZUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zaXRlVG9TaXRlQWRkcmVzcz48c2l0ZVRvU2l0ZVBvcnQ+ODQ0NTwvc2l0ZVRvU2l0ZVBvcnQ+PHNpdGVUb1NpdGVIdHRwQXBpUG9ydD44NDQzPC9zaXRlVG9TaXRlSHR0cEFwaVBvcnQ+PHNpdGVUb1N
pdGVTZWN1cmU+dHJ1ZTwvc2l0ZVRvU2l0ZVNlY3VyZT48L25vZGVJZD48c3RhdGU+Q09OTkVDVEVEPC9zdGF0ZT48dXBkYXRlSWQ+MTU8L3VwZGF0ZUlkPjwvY2x1c3RlclN0YXR1cz48c3lzdGVtU3RhcnRUaW1lPjE1NTgyMDkzNDU4MTY8L3N5c3RlbVN0YXJ0VGltZT48dG90YWxGbG93RmlsZUJ5dGVzPjA8L3RvdGFsRmxvd0ZpbGVCeXRlcz48dG90YWxGbG93RmlsZUNvdW50PjA8L3RvdGFsRmxvd0ZpbGVDb3VudD48L2hlYXJ0YmVhdFBheWxvYWQ+</payload></heartbeat></heartbeatMessage>
2019-05-18 20:02:39,481 DEBUG [Process Cluster Protocol Request-10] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received new heartbeat from 
node2.example.com:8443
2019-05-18 20:02:39,481 DEBUG [Process Cluster Protocol Request-10] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor 

Calculated diff between current cluster status and node cluster status as 
follows:
Node: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Self: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Difference: []


2019-05-18 20:02:39,481 DEBUG [Process Cluster Protocol Request-10] 
o.a.n.c.p.impl.SocketProtocolListener Sending response for request 
e5f3882f-f0e8-4c9a-9a0d-8985dd77c998
2019-05-18 20:02:39,481 INFO [Process Cluster Protocol Request-10] 
o.a.n.c.p.impl.SocketProtocolListener Finished processing request 
e5f3882f-f0e8-4c9a-9a0d-8985dd77c998 (type=HEARTBEAT, length=4059 bytes) from 
node2.example.com:8443 in 99 millis
2019-05-18 20:02:40,083 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Destroying Expired Archives for Container default
2019-05-18 20:02:40,084 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Currently 7463063552 bytes free for Container 
default; requirement is 5460940226 byte free, so no need to free space until an 
additional 2002123326 bytes are used
2019-05-18 20:02:40,084 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Searching for more archived data to expire
2019-05-18 20:02:40,115 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Deleting data based on timestamp
2019-05-18 20:02:40,116 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Oldest Archive Date for Container 
default is Sat May 18 20:02:40 UTC 2019; delete expired = 31 ms, sort remaining 
= 0 ms, delete oldest = 0 ms, cleanup = 0 ms
2019-05-18 20:02:40,116 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Container default signaled to allow 
Content Claim Creation
2019-05-18 20:02:40,124 DEBUG [Provenance Maintenance Thread-3] 
o.a.n.p.PersistentProvenanceRepository Purging old event files: []
2019-05-18 20:02:40,124 DEBUG [Provenance Maintenance Thread-3] 
o.a.n.p.PersistentProvenanceRepository After expiration, path map: {}
2019-05-18 20:02:40,136 DEBUG [FileSystemRepository Workers Thread-3] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:40,187 DEBUG [Listen to Bootstrap] 
org.apache.nifi.BootstrapListener Listening for Bootstrap Requests
2019-05-18 20:02:41,116 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Destroying Expired Archives for Container default
2019-05-18 20:02:41,116 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Currently 7463059456 bytes free for Container 
default; requirement is 5460940226 byte free, so no need to free space until an 
additional 2002119230 bytes are used
2019-05-18 20:02:41,116 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Searching for more archived data to expire
2019-05-18 20:02:41,136 DEBUG [FileSystemRepository Workers Thread-2] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:41,141 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Deleting data based on timestamp
2019-05-18 20:02:41,141 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Oldest Archive Date for Container 
default is Sat May 18 20:02:41 UTC 2019; delete expired = 24 ms, sort remaining 
= 0 ms, delete oldest = 0 ms, cleanup = 0 ms
2019-05-18 20:02:41,141 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Container default signaled to allow 
Content Claim Creation
2019-05-18 20:02:41,217 DEBUG [Validate Components Thread-4] 
o.a.n.c.validation.TriggerValidationTask Triggering validation of all components
2019-05-18 20:02:41,217 DEBUG [Http Site-to-Site Transaction Maintenance] 
o.a.nifi.remote.HttpRemoteSiteListener Transaction maintenance task finished. 
originalSize=0, currentSize=0
2019-05-18 20:02:41,289 INFO [Heartbeat Monitor Thread-1] 
o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 3 heartbeats in 16391 
nanos
2019-05-18 20:02:41,363 DEBUG [Leader Election Notification Thread-4] 
o.a.n.c.l.e.CuratorLeaderElectionManager Reached out to ZooKeeper to determine 
which node is the elected leader for Role 'Cluster Coordinator'. Elected Leader 
= 'node1.example.com:8444', Participant ID = 'node1.example.com:8444', This 
Node Elected = true
2019-05-18 20:02:41,993 DEBUG [Process Cluster Protocol Request-2] 
o.a.n.c.p.impl.SocketProtocolListener Received request 
4515cad6-185e-4bae-839b-82aa7ed58c17 from node3.example.com
2019-05-18 20:02:42,093 DEBUG [Process Cluster Protocol Request-2] 
o.a.n.c.p.impl.SocketProtocolListener Received message: Z  ?<?xml 
version="1.0" encoding="UTF-8" 
standalone="yes"?><heartbeatMessage><heartbeat><connectionStatus><nodeId><id>ec64a9f8-dd34-4f6b-adc6-501bcbf4eb8b</id><apiAddress>node3.example.com</apiAddress><apiPort>8443</apiPort><socketAddress>node3.example.com</socketAddress><socketPort>8444</socketPort><loadBalanceAddress>node3.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node3.example.com</siteToSiteAddress><siteToSitePort>8445</siteToSitePort><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSiteSecure>true</siteToSiteSecure></nodeId><state>CONNECTED</state><updateId>21</updateId></connectionStatus><nodeIdentifier><apiAddress>node3.example.com</apiAddress><apiPort>8443</apiPort><id>ec64a9f8-dd34-4f6b-adc6-501bcbf4eb8b</id><loadBalanceAddress>node3.example.com</loadBalanceAddress><loadBalancePort>6342</loadBalancePort><siteToSiteAddress>node3.example.com</siteToSiteAddress><siteToSiteHttpApiPort>8443</siteToSiteHttpApiPort><siteToSitePort>8445</siteToSitePort><siteToSiteSecure>true</siteToSiteSecure><socketAddress>node3.example.com</socketAddress><socketPort>8444</socketPort></nodeIdentifier><payload>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9InllcyI/PjxoZWFydGJlYXRQYXlsb2FkPjxhY3RpdmVUaHJlYWRDb3VudD4wPC9hY3RpdmVUaHJlYWRDb3VudD48Y2x1c3RlclN0YXR1cz48bm9kZUlkPjxpZD45NTMxZDY3YS1iNmMzLTQwZGItODhjNi1mNjEyNDQ5ZjJhODc8L2lkPjxhcGlBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvYXBpQWRkcmVzcz48YXBpUG9ydD44NDQzPC9hcGlQb3J0Pjxzb2NrZXRBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc29ja2V0QWRkcmVzcz48c29ja2V0UG9ydD44NDQ0PC9zb2NrZXRQb3J0Pjxsb2FkQmFsYW5jZUFkZHJlc3M+djMyNS1kYXRhaW50Mi5pbmMxLmFybXkubWlsPC9sb2FkQmFsYW5jZUFkZHJlc3M+PGxvYWRCYWxhbmNlUG9ydD42MzQyPC9sb2FkQmFsYW5jZVBvcnQ+PHNpdGVUb1NpdGVBZGRyZXNzPnYzMjUtZGF0YWludDIuaW5jMS5hcm15Lm1pbDwvc2l0ZVRvU2l0ZUFkZHJlc3M+PHNpdGVUb1NpdGVQb3J0Pjg0NDU8L3NpdGVUb1NpdGVQb3J0PjxzaXRlVG9TaXRlSHR0cEFwaVBvcnQ+ODQ0Mzwvc2l0ZVRvU2l0ZUh0dHBBcGlQb3J0PjxzaXRlVG9TaXRlU2VjdXJlPnRydWU8L3NpdGVUb1NpdGVTZWN1cmU+PC9ub2RlSWQ+PHN0YXRlPkNPTk5FQ1RFRDwvc3RhdGU+PHVwZGF0ZUlkPjE5PC91cGRhdGVJZD48L2NsdXN0ZXJTdGF0dXM+PGNsdXN0ZXJTdGF0dXM+PG5vZGVJZD48aWQ+ZWM2NGE5ZjgtZGQzNC00ZjZiLWFkYzYtNTAxYmNiZjRlYjhiPC9pZD48YXBpQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L2FwaUFkZHJlc3M+PGFwaVBvcnQ+ODQ0MzwvYXBpUG9ydD48c29ja2V0QWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NvY2tldEFkZHJlc3M+PHNvY2tldFBvcnQ+ODQ0NDwvc29ja2V0UG9ydD48bG9hZEJhbGFuY2VBZGRyZXNzPnYzMjUtZGF0YWludDMuaW5jMS5hcm15Lm1pbDwvbG9hZEJhbGFuY2VBZGRyZXNzPjxsb2FkQmFsYW5jZVBvcnQ+NjM0MjwvbG9hZEJhbGFuY2VQb3J0PjxzaXRlVG9TaXRlQWRkcmVzcz52MzI1LWRhdGFpbnQzLmluYzEuYXJteS5taWw8L3NpdGVUb1NpdGVBZGRyZXNzPjxzaXRlVG9TaXRlUG9ydD44NDQ1PC9zaXRlVG9TaXRlUG9ydD48c2l0ZVRvU2l0ZUh0dHBBcGlQb3J0Pjg0NDM8L3NpdGVUb1NpdGVIdHRwQXBpUG9ydD48c2l0ZVRvU2l0ZVNlY3VyZT50cnVlPC9zaXRlVG9TaXRlU2VjdXJlPjwvbm9kZUlkPjxzdGF0ZT5DT05ORUNURUQ8L3N0YXRlPjx1cGRhdGVJZD4yMjwvdXBkYXRlSWQ+PC9jbHVzdGVyU3RhdHVzPjxjbHVzdGVyU3RhdHVzPjxub2RlSWQ+PGlkPmVkOWU1YTg4LTRhNDAtNDgwNy1iMjU0LTMxZDJlNjFjMTgwZjwvaWQ+PGFwaUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9hcGlBZGRyZXNzPjxhcGlQb3J0Pjg0NDM8L2FwaVBvcnQ+PHNvY2tldEFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zb2NrZXRBZGRyZXNzPjxzb2NrZXRQb3J0Pjg0NDQ8L3NvY2tldFBvcnQ+PGxvYWRCYWxhbmNlQWRkcmVzcz52MzI1LWRhdGFpbnQxLmluYzEuYXJteS5taWw8L2xvYWRCYWxhbmNlQWRkcmVzcz48bG9hZEJhbGFuY2VQb3J0PjYzNDI8L2xvYWRCYWxhbmNlUG9ydD48c2l0ZVRvU2l0ZUFkZHJlc3M+djMyNS1kYXRhaW50MS5pbmMxLmFybXkubWlsPC9zaXRlVG9TaXRlQWRkcmVzcz48c2l0ZVRvU2l0ZVBvcnQ+ODQ0NTwvc2l0ZVRvU2l0ZVBvcnQ+PHNpdGVUb1NpdGVIdHRwQXBpUG9ydD44NDQzPC9zaXRlVG9TaXRlSHR0cEFwaVBvcnQ+PHNpdGVUb1N
pdGVTZWN1cmU+dHJ1ZTwvc2l0ZVRvU2l0ZVNlY3VyZT48L25vZGVJZD48c3RhdGU+Q09OTkVDVEVEPC9zdGF0ZT48dXBkYXRlSWQ+MTU8L3VwZGF0ZUlkPjwvY2x1c3RlclN0YXR1cz48c3lzdGVtU3RhcnRUaW1lPjE1NTgyMDk0NDM2NDI8L3N5c3RlbVN0YXJ0VGltZT48dG90YWxGbG93RmlsZUJ5dGVzPjA8L3RvdGFsRmxvd0ZpbGVCeXRlcz48dG90YWxGbG93RmlsZUNvdW50PjA8L3RvdGFsRmxvd0ZpbGVDb3VudD48L2hlYXJ0YmVhdFBheWxvYWQ+</payload></heartbeat></heartbeatMessage>
2019-05-18 20:02:42,093 DEBUG [Process Cluster Protocol Request-2] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received new heartbeat from 
node3.example.com:8443
2019-05-18 20:02:42,094 DEBUG [Process Cluster Protocol Request-2] 
o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor 

Calculated diff between current cluster status and node cluster status as 
follows:
Node: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Self: [NodeConnectionStatus[nodeId=node2.example.com:8443, state=CONNECTED, 
updateId=19], NodeConnectionStatus[nodeId=node3.example.com:8443, 
state=CONNECTED, updateId=22], 
NodeConnectionStatus[nodeId=node1.example.com:8443, state=CONNECTED, 
updateId=15]]
Difference: []


2019-05-18 20:02:42,094 DEBUG [Process Cluster Protocol Request-2] 
o.a.n.c.p.impl.SocketProtocolListener Sending response for request 
4515cad6-185e-4bae-839b-82aa7ed58c17
2019-05-18 20:02:42,094 INFO [Process Cluster Protocol Request-2] 
o.a.n.c.p.impl.SocketProtocolListener Finished processing request 
4515cad6-185e-4bae-839b-82aa7ed58c17 (type=HEARTBEAT, length=4059 bytes) from 
node3.example.com:8443 in 101 millis
2019-05-18 20:02:42,136 DEBUG [FileSystemRepository Workers Thread-3] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:42,141 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Destroying Expired Archives for Container default
2019-05-18 20:02:42,141 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Currently 7463051264 bytes free for Container 
default; requirement is 5460940226 byte free, so no need to free space until an 
additional 2002111038 bytes are used
2019-05-18 20:02:42,141 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Searching for more archived data to expire
2019-05-18 20:02:42,164 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Deleting data based on timestamp
2019-05-18 20:02:42,164 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Oldest Archive Date for Container 
default is Sat May 18 20:02:42 UTC 2019; delete expired = 22 ms, sort remaining 
= 0 ms, delete oldest = 0 ms, cleanup = 0 ms
2019-05-18 20:02:42,164 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Container default signaled to allow 
Content Claim Creation
2019-05-18 20:02:42,189 DEBUG [Listen to Bootstrap] 
org.apache.nifi.BootstrapListener Listening for Bootstrap Requests
2019-05-18 20:02:42,624 DEBUG [Provenance Maintenance Thread-2] 
o.a.n.p.PersistentProvenanceRepository Purging old event files: []
2019-05-18 20:02:42,625 DEBUG [Provenance Maintenance Thread-2] 
o.a.n.p.PersistentProvenanceRepository After expiration, path map: {}
2019-05-18 20:02:43,137 DEBUG [FileSystemRepository Workers Thread-2] 
o.a.n.c.r.c.StandardResourceClaimManager Drained 0 destructable claims to []
2019-05-18 20:02:43,164 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Destroying Expired Archives for Container default
2019-05-18 20:02:43,164 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Currently 7463051264 bytes free for Container 
default; requirement is 5460940226 byte free, so no need to free space until an 
additional 2002111038 bytes are used
2019-05-18 20:02:43,164 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Searching for more archived data to expire
2019-05-18 20:02:43,188 DEBUG [Cleanup Archive for default] 
o.a.n.c.r.F.archive.expiration Deleting data based on timestamp
2019-05-18 20:02:43,188 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Oldest Archive Date for Container 
default is Sat May 18 20:02:43 UTC 2019; delete expired = 23 ms, sort remaining 
= 0 ms, delete oldest = 0 ms, cleanup = 0 ms
2019-05-18 20:02:43,188 DEBUG [Cleanup Archive for default] 
o.a.n.c.repository.FileSystemRepository Container default signaled to allow 
Content Claim Creation
2019-05-18 20:02:43,386 DEBUG [NiFi Web Server-161] 
o.a.nifi.web.server.HostHeaderHandler HostHeaderHandler#doScope on 
/nifi-api/controller/cluster
2019-05-18 20:02:43,386 DEBUG [NiFi Web Server-161] 
o.a.nifi.web.server.HostHeaderHandler Received request 
[/nifi-api/controller/cluster] with host header: node1.example.com:8443
2019-05-18 20:02:43,395 WARN [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET 
/nifi-api/controller/cluster to node1.example.com:8443 due to 
java.net.SocketTimeoutException: timeout
2019-05-18 20:02:43,396 WARN [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator 
java.net.SocketTimeoutException: timeout
        at okio.Okio$4.newTimeoutException(Okio.java:232)
        at okio.AsyncTimeout.exit(AsyncTimeout.java:285)
        at okio.AsyncTimeout$2.read(AsyncTimeout.java:241)
        at okio.RealBufferedSource.indexOf(RealBufferedSource.java:355)
        at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:227)
        at okhttp3.internal.http1.Http1Codec.readHeaderLine(Http1Codec.java:215)
        at okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:189)
        at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:88)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
        at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
        at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:126)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
        at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200)
        at okhttp3.RealCall.execute(RealCall.java:77)
        at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:138)
        at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:132)
        at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:647)
        at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:839)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Socket closed
        at java.net.SocketInputStream.read(SocketInputStream.java:204)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
        at sun.security.ssl.InputRecord.read(InputRecord.java:503)
        at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
        at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
        at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
        at okio.Okio$2.read(Okio.java:140)
        at okio.AsyncTimeout$2.read(AsyncTimeout.java:237)
        ... 28 common frames omitted
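The timeout fires while okhttp is waiting to read the response headers, i.e. the replicated call connects but never sees a response inside the read window. To make the behavior easier to observe I have been raising the cluster timeouts in nifi.properties on all three nodes (stock defaults are 5 sec; the values below are just what I am testing with, not a recommendation or a fix):

    nifi.cluster.node.connection.timeout=10 sec
    nifi.cluster.node.read.timeout=10 sec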
2019-05-18 20:02:43,396 DEBUG [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Request GET 
/nifi-api/controller/cluster completed for node1.example.com:8443
2019-05-18 20:02:43,397 DEBUG [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Received response from 
node1.example.com:8443 for GET /nifi-api/controller/cluster
2019-05-18 20:02:43,397 DEBUG [Replicate Request Thread-1] 
o.a.n.c.c.h.r.StandardAsyncClusterResponse Received response 1 out of 1 for 
0bbc58a4-683c-40de-9273-60abaa9707e9 from node1.example.com:8443
2019-05-18 20:02:43,397 DEBUG [Replicate Request Thread-1] 
o.a.n.c.c.h.r.StandardAsyncClusterResponse Notifying all that merged response 
is ready for 0bbc58a4-683c-40de-9273-60abaa9707e9
2019-05-18 20:02:43,532 DEBUG [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator For GET /nifi-api/controller/cluster 
(Request ID 0bbc58a4-683c-40de-9273-60abaa9707e9), minimum response time = 0, 
max = 0, average = 0.0 ms
2019-05-18 20:02:43,532 DEBUG [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Node Responses for GET 
/nifi-api/controller/cluster (Request ID 0bbc58a4-683c-40de-9273-60abaa9707e9):
node1.example.com:8443: 0 millis

2019-05-18 20:02:43,533 DEBUG [Replicate Request Thread-1] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Notified monitor 
java.lang.Object@b53c233 because request GET 
https://node1.example.com:8443/nifi-api/controller/cluster has completed
2019-05-18 20:02:43,533 DEBUG [NiFi Web Server-24] 
o.a.n.c.c.h.r.ThreadPoolRequestReplicator Unlocked 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock@1ea8b09f[Read locks 
= 0] after replication completed for GET 
https://node1.example.com:8443/nifi-api/controller/cluster
2019-05-18 20:02:43,539 DEBUG [NiFi Web Server-24] 
o.a.n.c.c.h.r.StandardAsyncClusterResponse Notifying all that merged response 
is complete for 0bbc58a4-683c-40de-9273-60abaa9707e9
2019-05-18 20:02:43,539 DEBUG [NiFi Web Server-24] 
o.a.n.c.c.h.r.StandardAsyncClusterResponse For GET 
https://node1.example.com:8443/nifi-api/controller/cluster Timing Info is as 
follows:
Verify Cluster State for All Nodes took 0 millis
Wait for HTTP Request Replication to be triggered for node1.example.com:8443 
took 0 millis
Phase Completed for All Nodes took 10132 millis
Notifying All Threads that Request is Complete took 0 millis
Total Time for All Nodes took 10132 millis
Map/Merge Responses for All Nodes took 4 millis

2019-05-18 20:02:43,542 DEBUG [NiFi Web Server-24] 
org.apache.nifi.web.filter.TimerFilter GET 
https://node1.example.com:8443/nifi-api/controller/cluster from 10.0.1.85 
duration for Request ID null: 10426 millis
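The arithmetic lines up with the read timeout: "Phase Completed for All Nodes took 10132 millis" is essentially the configured 10 sec (10000 ms) plus ~130 ms of teardown and merge, and the 10426 ms TimerFilter total for Web Server-24 is that plus request setup. The originating thread spends the entire window waiting on its own replica.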
2019-05-18 20:02:43,544 DEBUG [NiFi Web Server-161] 
o.apache.nifi.web.NiFiServiceFacadeLock In order to perform procedure public 
org.apache.nifi.web.api.dto.ClusterDTO 
org.apache.nifi.web.StandardNiFiServiceFacade.getCluster(), it took 4449 nanos 
to obtain the Read Lock 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock@3449ea92[Read locks 
= 0] and 135260459 nanos to invoke the method
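For what it's worth, once the replicated request does run on Web Server-161, it obtains this read lock in 4449 ns (about 4.4 microseconds) and the method itself takes about 135 ms. So at least this lock is not the bottleneck; the 10 seconds appear to elapse before the replicated request gets serviced at all.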
2019-05-18 20:02:43,574 DEBUG [NiFi Web Server-161] 
o.g.j.m.internal.OutboundMessageContext null
java.nio.channels.ClosedChannelException: null
        at org.eclipse.jetty.io.WriteFlusher.onClose(WriteFlusher.java:532)
        at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onIncompleteFlush(SslConnection.java:530)
        at org.eclipse.jetty.io.AbstractEndPoint$2.onIncompleteFlush(AbstractEndPoint.java:54)
        at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:332)
        at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:380)
        at org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:798)
        at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
        at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224)
        at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:538)
        at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:790)
        at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:846)
        at org.eclipse.jetty.server.handler.gzip.GzipHttpOutputInterceptor.commit(GzipHttpOutputInterceptor.java:201)
        at org.eclipse.jetty.server.handler.gzip.GzipHttpOutputInterceptor.write(GzipHttpOutputInterceptor.java:106)
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:240)
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:216)
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:541)
        at org.glassfish.jersey.servlet.internal.ResponseWriter$NonCloseableOutputStreamWrapper.write(ResponseWriter.java:325)
        at java.io.ByteArrayOutputStream.writeTo(ByteArrayOutputStream.java:167)
        at org.glassfish.jersey.message.internal.CommittingOutputStream.flushBuffer(CommittingOutputStream.java:302)
        at org.glassfish.jersey.message.internal.CommittingOutputStream.commit(CommittingOutputStream.java:256)
        at org.glassfish.jersey.message.internal.CommittingOutputStream.close(CommittingOutputStream.java:271)
        at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:241)
        at org.glassfish.jersey.message.internal.OutboundMessageContext.close(OutboundMessageContext.java:848)
        at org.glassfish.jersey.server.ContainerResponse.close(ContainerResponse.java:413)
        at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:734)
        at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:395)
        at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:385)
        at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:280)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
        at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
        at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
        at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
        at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
        at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1655)
        at org.apache.nifi.web.filter.RequestLogger.doFilter(RequestLogger.java:66)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)
        at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
        at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.apache.nifi.web.security.NiFiAuthenticationFilter.doFilter(NiFiAuthenticationFilter.java:60)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.apache.nifi.web.security.NiFiAuthenticationFilter.doFilter(NiFiAuthenticationFilter.java:60)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.apache.nifi.web.security.NiFiAuthenticationFilter.doFilter(NiFiAuthenticationFilter.java:60)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.apache.nifi.web.security.NiFiAuthenticationFilter.authenticate(NiFiAuthenticationFilter.java:84)
        at org.apache.nifi.web.security.NiFiAuthenticationFilter.doFilter(NiFiAuthenticationFilter.java:58)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:96)
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
        at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214)
        at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177)
        at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347)
        at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.apache.nifi.web.filter.TimerFilter.doFilter(TimerFilter.java:51)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.apache.nifi.web.filter.ExceptionFilter.doFilter(ExceptionFilter.java:46)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
        at org.apache.nifi.web.security.ContentSecurityPolicyFilter.doFilter(ContentSecurityPolicyFilter.java:47)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.apache.nifi.web.server.JettyServer$2.doFilter(JettyServer.java:1048)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
        at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:724)
        at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:61)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.Server.handle(Server.java:531)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
        at org.eclipse.jetty.server.HttpChannelOverHttp.earlyEOF(HttpChannelOverHttp.java:238)
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1583)
        at org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:360)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:250)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
        at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:291)
        at org.eclipse.jetty.io.ssl.SslConnection$3.succeeded(SslConnection.java:151)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
        at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
        at java.lang.Thread.run(Thread.java:748)
2019-05-18 20:02:43,575 DEBUG [NiFi Web Server-161] 
org.apache.nifi.web.filter.TimerFilter GET 
https://node1.example.com:8443/nifi-api/controller/cluster from 10.0.1.85 
duration for Request ID 0bbc58a4-683c-40de-9273-60abaa9707e9: 189 millis
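Pulling the timestamps together: okhttp abandons the call and closes the socket at 20:02:43,395; the replica on Web Server-161 finishes its 189 ms of actual work at 20:02:43,575; and the ClosedChannelException above is Jetty/Jersey attempting to write that finished response into the already-closed connection. The response itself is fine, it just arrives roughly 10 seconds after it was requested.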
