I think that is part of the problem, and probably part of the solution.
If I can wire the load balancer to do an HTTP HEAD (a capability of
HandleHttpRequest), then, knowing that this is a separate Jetty instance, I
should get a 200 back from the HEAD.
Something like that
That is correct. Each instance of HandleHttpRequest and ListenHTTP has its
own embedded Jetty server, separate from the Jetty that runs NiFi's REST
API.
On Fri, Sep 4, 2020 at 12:40 PM Etienne Jouvin wrote:
I do not know everything, but if I understood correctly, NiFi is based on a REST
API. For example, everything you do in the GUI is done through REST calls.
So I guess you can check whether the NiFi instance is up on each node.
But this will not give you the status of your custom HTTP handler: the NiFi
instance can be up while the handler is not.
You can always hit the NiFi REST API status endpoint. It won't give you any
idea about the specific HTTP endpoint you exposed, though, as it is the
general NiFi REST API.
Your LB would also need to understand how to hit this URL, especially if
it's secured. Coming back to the easiest path, you'd rather expose a simple
dedicated endpoint for the check.
It seems a bit like a chicken-and-egg thing. Using ‘anything’ configured on the
disconnected node as a health check is not unlike trying to reach the API
(listening port) itself? Kinda.
Anyway, I was hoping that the NiFi infrastructure had a generalized, centralized
health-check facility (REST API or otherwise).
Since you implemented a HandleHttpRequest listener, why don't you
configure a handler on something like http(s)://server/ping
whose response is just "pong"?
On Fri, Sep 4, 2020 at 6:02 PM, jgunvaldson wrote:
Hi,
Our network administrators are unable to wire up an advanced load balancer (AWS
Application Load Balancer, or an Apache reverse proxy) to leverage a NiFi API
that may be listening on a port across several nodes.
For instance, a HandleHttpRequest listening on Node-1 on port 5112, Node-2 on
5112,