Hello Greg,

You can use the REST API on any of the nodes in the cluster. Could you provide more details on what you're trying to accomplish?

If, for instance, you are posting data to a ListenHTTP processor and you want to balance POSTs across the instances of ListenHTTP on your cluster, then haproxy would probably be a good idea. If you're trying to distribute the processing load once the data is received, you can use a Remote Process Group to distribute the data across the cluster.

Pierre Villard has written a nice blog post about setting up a cluster and configuring a flow that uses a Remote Process Group to distribute the processing load [1]. It details creating a Remote Process Group that sends data back to an Input Port in the same NiFi cluster, which allows NiFi to distribute the processing load across all the nodes in your cluster.
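For the haproxy side, a minimal sketch might look like the following. The hostnames, ports, and backend name here are assumptions for illustration; point the `server` lines at whichever port you are balancing (the REST API port configured in nifi.properties, or the port your ListenHTTP processor listens on):

```
# Minimal haproxy sketch: round-robin HTTP traffic (REST API calls,
# or POSTs destined for ListenHTTP) across the NiFi nodes.
# Node hostnames and ports below are examples, not defaults you can rely on.
frontend nifi_http
    bind *:9090
    default_backend nifi_nodes

backend nifi_nodes
    balance roundrobin
    server nifi1 nifi-node1:8080 check
    server nifi2 nifi-node2:8080 check
    server nifi3 nifi-node3:8080 check
```

The `check` option lets haproxy health-check each node and stop routing to one that goes down, which is useful since any node in the cluster can serve REST API requests.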
You can use a combination of haproxy and a Remote Process Group to load balance connections to the REST API on each NiFi node and to balance the processing load across the cluster.

[1] https://pierrevillard.com/2016/08/13/apache-nifi-1-0-0-cluster-setup/

- Jeff

On Mon, Dec 19, 2016 at 9:25 PM Hart, Greg <[email protected]> wrote:

> Hi all,
>
> What's the recommended way for communicating with the NiFi REST API in a
> cluster? I see that NiFi uses ZooKeeper so is it possible to get the
> Cluster Coordinator hostname and API port from ZooKeeper, or should I use
> something like haproxy?
>
> Thanks!
> -Greg
