Thanks Matt,

I was able to send the flowfiles through a Remote Process Group back into the
cluster and the flowfiles were distributed appropriately. I appreciate the
advice.
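
Roughly, the working flow now looks like this (the port name below is just a
placeholder, not necessarily what anyone else would use):

  Get Processor (runs on Primary Node only)
    -> Remote Process Group (pointing back at this same cluster)
      -> Input Port "from-primary" on the root canvas (Site-to-Site enabled)
        -> downstream processors (now running on all four nodes)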

Kevin

From: Matthew Clarke [mailto:[email protected]]
Sent: Saturday, May 28, 2016 1:44 PM
To: [email protected]
Subject: Re: Cluster Node Protocol Threads - will this configuration help 
reduce a high queue?


Hey Kevin,
    The DistributeLoad processor by itself does not distribute data across the
other nodes in your cluster. Can you explain your flow in a little more
detail? I am afraid I am missing something here. If you want to spread data
across your nodes, I encourage you to use a Remote Process Group. This will
auto-scale and auto-load-balance data to all your nodes. The setting you
mentioned is for node communication, not for data transmission between nodes.
Increasing this value can help when you have a large number of nodes, but in
your case with only four nodes it should not make much of an impact. Again,
this node communication does not carry data. Which processors are the queues
building up behind?
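
For reference, the nifi.properties settings involved look roughly like this
(the port numbers below are only example values):

  # Site-to-Site input port - this is what a Remote Process Group uses to push
  # flowfile data to each node; it needs to be set on every node in the cluster
  nifi.remote.input.socket.port=10443
  nifi.remote.input.secure=false

  # Cluster node protocol - node-to-node cluster communication only,
  # not flowfile data
  nifi.cluster.node.protocol.port=11443
  nifi.cluster.node.protocol.threads=2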

Matt
On May 27, 2016 11:41 AM, "Kevin Verhoeven" <[email protected]> wrote:
I’m new to NiFi and I have a small problem. I’m running NiFi 0.7.0-SNAPSHOT on 
Windows 2012 R2 VMs. I’ve noticed that the queues on some of my Processors are 
very large, sometimes as high as 10,000 flowfiles. The flowfiles are eventually 
processed but at a slow pace.

I run a cluster with 4 nodes. The initial Get Processor runs on the Primary
Node only, so each file is requested just once, and I then use the
DistributeLoad Processor with the Next Available setting to spread the load
across the cluster nodes. However, I see that the queue is largest on the
Primary Node and the other nodes see very little work.
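
To make that concrete, the current flow looks roughly like this (processor
names are approximate):

  Get Processor (Primary Node only)
    -> DistributeLoad (Next Available)
      -> downstream processors (this is where I see the large queues,
         mostly on the Primary Node)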

My question is: will I increase throughput to the cluster nodes if I increase 
the nifi.cluster.node.protocol.threads from 2 to something higher? What effect 
does nifi.cluster.node.protocol.threads have on the nodes?

Thanks,

Kevin
