Thanks Joe,

So is it possible that the processor is not correctly handling the flow
files, and that this only becomes apparent when one tries to offload data?
In normal operation they appear to work fine (they have been operating
outside of a cluster context for some years).

Thanks,
Phil

On Thu, 2 Sep 2021 at 13:48, Joe Witt <[email protected]> wrote:

> Phil.  The behavior you mentioned sounds like that processor pulled flow
> files from the queue but had not yet transferred them anywhere.  If you see
> that again I strongly recommend you gather a thread dump.
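> For reference, a thread dump can be captured with NiFi's own script (a
> sketch only; the pid-file location and the dump filename argument can vary
> by NiFi version and install layout):

```shell
# From the NiFi install directory: "dump" is a built-in nifi.sh command
# that writes a thread dump; the output filename argument is optional.
./bin/nifi.sh dump thread-dump.txt

# Alternatively, use jstack against the NiFi JVM directly.
# The pid-file path below assumes a default layout (NIFI_PID_DIR=bin).
jstack "$(cat bin/nifi.pid)" > thread-dump.txt
```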
>
> Joe
>
> On Wed, Sep 1, 2021 at 7:56 PM Phil H <[email protected]> wrote:
>
> > Hi Joe,
> >
> > It’s a custom one, but it is effectively just a routing filter component
> > (read the data, send the flow file out on relationship A or B based on
> what
> > it finds).  Nothing exotic in terms of how it interacts with the
> flowfiles.
> >
> > After restarting all nodes, the queue worked normally again.
> >
> > Phil
> >
> > > On 2 Sep 2021, at 12:02 pm, Joe Witt <[email protected]> wrote:
> > >
> > > Phil
> > >
> > > What processor reads from that queue that appears unmoving?
> > >
> > > Thanks
> > >
> > > On Wed, Sep 1, 2021 at 3:51 PM Phil H <[email protected]> wrote:
> > >
> > >> And once reconnected again, no data passes that queue - it all just
> > piles
> > >> up there (the queue count matching the number of items sent into the
> > >> cluster). However if I try and list the queue, it claims there are no
> > files
> > >> in it. Very very confused!
> > >>
> > >> On Thu, 2 Sep 2021 at 08:39, Phil H <[email protected]> wrote:
> > >>
> > >>> Okay, found the offload, but the data is still stuck on the
> “offloaded”
> > >>> node, in a “single node” queue (I am bringing the data to a single
> node
> > >> to
> > >>> deduplicate multiple parallel inputs).
> > >>>
> > >>> If I refresh the UI, I can see the missing items numbered in the
> queue,
> > >>> but can’t open the queue because the other node is “currently
> > offloaded”.
> > >>>
> > >>> I’m sure I’m just missing something here??
> > >>>
> > >>> On Thu, 2 Sep 2021 at 08:20, Shawn Weeks <[email protected]>
> > >>> wrote:
> > >>>
> > >>>> On newer versions there is an option in the UI to Offload the data
> if
> > >> you
> > >>>> have NiFi's cluster load balancing setup. Then you'd disconnect the
> > node
> > >>>> and shut it down.
> > >>>>
> > >>>> Thanks
> > >>>> Shawn
> > >>>>
> > >>>> -----Original Message-----
> > >>>> From: Phil H <[email protected]>
> > >>>> Sent: Wednesday, September 1, 2021 4:36 PM
> > >>>> To: [email protected]
> > >>>> Subject: Primary node vs shutdown
> > >>>>
> > >>>> Hi there,
> > >>>>
> > >>>> I am noticing a number of situations where shutting down one node
> in a
> > >>>> cluster is leaving data stranded in the flows on that shut down
> > server.
> > >>>>
> > >>>> Is there any way to tell NiFi to ship data off to other cluster
> > members
> > >>>> before it shuts down?  Note I am restarting via the nifi.sh script,
> > not
> > >>>> just killing the process/host with no notice.
> > >>>>
> > >>>> Thanks,
> > >>>> Phil
> > >>>>
> > >>>
> > >>
> >
> >
>
