And once the node reconnects, no data passes through that queue - it all
just piles up there (the queue count matches the number of items sent into
the cluster). However, if I try to list the queue, it claims there are no
files in it. Very confused!

On Thu, 2 Sep 2021 at 08:39, Phil H <[email protected]> wrote:

> Okay, found the offload, but the data is still stuck on the “offloaded”
> node, in a “single node” queue (I am bringing the data to a single node to
> deduplicate multiple parallel inputs).
>
> If I refresh the UI, I can see the missing items numbered in the queue,
> but can’t open the queue because the other node is “currently offloaded”.
>
> I’m sure I’m just missing something here??
>
> On Thu, 2 Sep 2021 at 08:20, Shawn Weeks <[email protected]>
> wrote:
>
>> On newer versions there is an option in the UI to Offload the data, if
>> you have NiFi's cluster load balancing set up. Then you'd disconnect the
>> node and shut it down.
>>
>> Thanks
>> Shawn
>>
>> -----Original Message-----
>> From: Phil H <[email protected]>
>> Sent: Wednesday, September 1, 2021 4:36 PM
>> To: [email protected]
>> Subject: Primary node vs shutdown
>>
>> Hi there,
>>
>> I am noticing a number of situations where shutting down one node in a
>> cluster leaves data stranded in the flows on that shut-down server.
>>
>> Is there any way to tell NiFi to ship data off to other cluster members
>> before it shuts down?  Note that I am restarting via the nifi.sh script,
>> not just killing the process/host with no notice.
>>
>> Thanks,
>> Phil
>>
>
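The offload-then-disconnect step Shawn describes can also be driven from NiFi's REST API rather than the UI. Below is a minimal sketch that only builds the request; the exact endpoint path (`/nifi-api/controller/cluster/nodes/{id}`) and payload shape are assumptions to verify against the REST API docs for your NiFi version, and the host and node ID are placeholders.

```python
# Sketch: ask the cluster coordinator to offload a node's queued FlowFiles
# to the other nodes before shutting it down.
# ASSUMPTION: endpoint path and {"node": {...}} payload shape are taken
# from the /nifi-api/controller/cluster endpoints; confirm for your version.

import json


def build_offload_request(base_url, node_id):
    """Return the URL and JSON body for a PUT that sets a cluster
    node's status to OFFLOADING (placeholder base_url and node_id)."""
    url = f"{base_url}/nifi-api/controller/cluster/nodes/{node_id}"
    body = {"node": {"nodeId": node_id, "status": "OFFLOADING"}}
    return url, json.dumps(body)


url, body = build_offload_request("https://nifi-host:8443", "example-node-id")
print(url)
print(body)
```

Sending this with an HTTP client (plus your cluster's auth token) and then disconnecting/stopping the node mirrors the UI flow; the offload only moves data for queues that have load balancing configured.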