Hi Jeff,

My application communicates with the NiFi REST API to import templates, 
instantiate flows from templates, edit processor properties, and a few other 
things. I’m currently using Jersey to send calls to one NiFi node, but if that 
node goes down, my application has to be manually reconfigured with the 
hostname and port of another NiFi node. HAProxy would handle failover, but it 
still must be manually reconfigured whenever a NiFi node is added to or removed 
from the cluster.
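In the meantime, client-side failover can be approximated by keeping a list of 
candidate node URLs and trying each in turn before giving up. A minimal sketch 
(the class and method names are my own, and the injected health check is a 
stand-in for a real HTTP ping against each node's REST API):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Hypothetical helper: pick the first NiFi node that responds, so the
// client can fail over without manual reconfiguration. The health check
// is injected, so in practice it could be a short-timeout GET against
// the node's REST API.
public class NiFiNodeSelector {
    public static Optional<String> firstReachable(List<String> baseUrls,
                                                  Predicate<String> isUp) {
        for (String url : baseUrls) {
            if (isUp.test(url)) {
                return Optional.of(url);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Placeholder hostnames; a real deployment would load these
        // from configuration.
        List<String> nodes = List.of(
            "http://nifi-1:8080", "http://nifi-2:8080", "http://nifi-3:8080");
        // Stand-in check: pretend only nifi-2 is currently up.
        Optional<String> chosen =
            firstReachable(nodes, url -> url.contains("nifi-2"));
        System.out.println(chosen.orElse("no node reachable"));
    }
}
```

The node list still has to be kept in sync with the cluster by hand, so it only 
softens the problem rather than solving it.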

I was hoping that NiFi would use ZooKeeper similarly to other applications 
(Hive or HBase) where a client can easily get the hostname and port of the 
cluster coordinator (or active master). Unfortunately, the information in 
ZooKeeper does not include the values of nifi.web.http.host and 
nifi.web.http.port for any of the NiFi nodes.

It sounds like HAProxy might be the better solution for now. Luckily, adding or 
removing nodes from a cluster shouldn’t be a daily occurrence. If you have any 
other ideas please let me know.

Thanks!
-Greg

From: Jeff <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Tuesday, December 20, 2016 at 8:56 AM
To: "[email protected]<mailto:[email protected]>" 
<[email protected]<mailto:[email protected]>>
Subject: Re: Load-balancing web api in cluster

Hello Greg,

You can use the REST API on any of the nodes in the cluster.  Could you provide 
more details on what you're trying to accomplish?  If, for instance, you are 
posting data to a ListenHTTP processor and you want to balance POSTs across the 
instances of ListenHTTP on your cluster, then haproxy would probably be a good 
idea.  If you're trying to distribute the processing load once the data is 
received, you can use a Remote Process Group to distribute the data across the 
cluster.  Pierre Villard has written a nice blog about setting up a cluster and 
configuring a flow using a Remote Process Group to distribute the processing 
load [1].  It details creating a Remote Process Group that sends data back to 
an Input Port in the same NiFi cluster, which lets NiFi distribute the 
processing load across all the nodes in your cluster.

You can use a combination of haproxy and Remote Process Group to load balance 
connections to the REST API on each NiFi node and to balance the processing 
load across the cluster.
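A minimal sketch of the haproxy side, assuming three NiFi nodes on the default 
HTTP port 8080 (hostnames, ports, and the health-check path are all 
placeholders to adapt to your environment):

```
# haproxy.cfg fragment (illustrative only)
frontend nifi_api
    bind *:9090
    default_backend nifi_nodes

backend nifi_nodes
    balance roundrobin
    # Assumed health-check path; substitute an endpoint that is
    # reachable without authentication in your setup.
    option httpchk GET /nifi-api/controller/cluster
    server nifi-1 nifi-1.example.com:8080 check
    server nifi-2 nifi-2.example.com:8080 check
    server nifi-3 nifi-3.example.com:8080 check
```

Clients then point at the haproxy address, and the `check` option takes a 
failed node out of rotation automatically.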

[1] https://pierrevillard.com/2016/08/13/apache-nifi-1-0-0-cluster-setup/

- Jeff

On Mon, Dec 19, 2016 at 9:25 PM Hart, Greg <[email protected]> wrote:
Hi all,

What's the recommended way for communicating with the NiFi REST API in a
cluster? I see that NiFi uses ZooKeeper so is it possible to get the
Cluster Coordinator hostname and API port from ZooKeeper, or should I use
something like haproxy?

Thanks!
-Greg
