Hahah Pat!

Lars

Ok great - very probably this is the issue.  Go with 50 KB and
let's see what unfolds.
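
Something like this in nifi.properties, as a minimal sketch, assuming the
rest of the content repository settings you pasted stay as they are (NiFi
needs a restart to pick up the change):

# Content Repository
nifi.content.claim.max.appendable.size=50 KB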

Thanks

On Tue, Sep 13, 2022 at 1:16 PM Patrick Timmins <[email protected]> wrote:
>
> Ha! ... too funny!
>
> You are a good father and NiFi brother
>
> ... God Bless!
>
> Pat
>
> On 9/13/2022 1:08 PM, Lars Winderling wrote:
> > …and guess what I did :-) The joys of remote working. Just put my kids
> > to bed, and here you are!
> >
> > # Content Repository
> > nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
> >
> > nifi.content.claim.max.appendable.size=10 MB
> > nifi.content.claim.max.flow.files=100
> > nifi.content.repository.directory.default=/srv/nifi-content/data/content-repository
> >
> > nifi.content.repository.archive.max.retention.period=12 hours
> > nifi.content.repository.archive.max.usage.percentage=50%
> > nifi.content.repository.archive.enabled=true
> > nifi.content.repository.always.sync=false
> > nifi.content.viewer.url=../nifi-content-viewer/
> >
> > So we even use 10 MB…
> > Will check if lowering the value changes anything.
> >
> > On 22-09-13 20:04, Patrick Timmins wrote:
> >> No, I agree.  Lars, please give up the rest of your evening and drive
> >> back to work and report back with your findings ASAP.  It may be past
> >> normal working hours in Germany, but you have NiFi brothers and
> >> sisters around the world that are counting on you ... please don't
> >> let us down.
> >>
> >> :)  <- international smiley/joking symbol
> >>
> >>
> >> On 9/13/2022 10:15 AM, Joe Witt wrote:
> >>> Read that again and hopefully it was obvious I was joking. But I am
> >>> looking forward to hearing what you learn.
> >>>
> >>> Thanks
> >>>
> >>> On Tue, Sep 13, 2022 at 10:10 AM Joe Witt <[email protected]> wrote:
> >>>> Lars
> >>>>
> >>>> I need you to drive back to work because now I am very vested in
> >>>> the outcome :)
> >>>>
> >>>> But yeah this was an annoying problem we saw hit some folks. Changing
> >>>> that value after fixing the behavior was the answer.  I owe the
> >>>> community a blog on this....
> >>>>
> >>>> Thanks
> >>>>
> >>>> On Tue, Sep 13, 2022 at 9:57 AM Lars Winderling
> >>>> <[email protected]> wrote:
> >>>>> Sorry, misread the Jira. We're still on the old default value.
> >>>>> Thank you for being persistent about it. I will try it tomorrow
> >>>>> with the lower value and get back to you. Not at work atm, so I
> >>>>> can't paste the config values in detail.
> >>>>>
> >>>>> On 13 September 2022 16:45:30 CEST, Joe Witt <[email protected]>
> >>>>> wrote:
> >>>>>> Lars
> >>>>>>
> >>>>>> You should not have to update to 1.17.  While I'm always fond of
> >>>>>> people being on the latest, the issue I mentioned is fixed in
> >>>>>> 1.16.3.
> >>>>>>
> >>>>>> HOWEVER, please do confirm your values.  The one I'd really focus
> >>>>>> you on is
> >>>>>> nifi.content.claim.max.appendable.size=50 KB
> >>>>>>
> >>>>>> Our default before was like 1MB and what we'd see is we'd hang on to
> >>>>>> large content way longer than we intended because some queue had one
> >>>>>> tiny object in it.  So that value became really important.
> >>>>>>
> >>>>>> If you're on 1 MB, change to 50 KB and see what happens.
> >>>>>>
> >>>>>> Thanks
> >>>>>>
> >>>>>> On Tue, Sep 13, 2022 at 9:40 AM Lars Winderling
> >>>>>> <[email protected]> wrote:
> >>>>>>>
> >>>>>>>   I guess the issue you linked is related. I have seen similar
> >>>>>>> messages in the log occasionally, but didn't directly connect
> >>>>>>> it. Our config is pretty similar to the defaults; none of it
> >>>>>>> should directly cause the issue. Will give 1.17.0 a try and come
> >>>>>>> back if the issue persists. Your help is really appreciated,
> >>>>>>> thanks!
> >>>>>>>
> >>>>>>>   On 13 September 2022 16:33:53 CEST, Joe Witt
> >>>>>>> <[email protected]> wrote:
> >>>>>>>>
> >>>>>>>>   Lars
> >>>>>>>>
> >>>>>>>>   The issue that came to mind is
> >>>>>>>>   https://issues.apache.org/jira/browse/NIFI-10023 but that is
> >>>>>>>>   fixed in 1.16.2 and 1.17.0, so that is why I asked.
> >>>>>>>>
> >>>>>>>>   What is in your nifi.properties for
> >>>>>>>>   # Content Repository
> >>>>>>>> nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
> >>>>>>>>
> >>>>>>>>   nifi.content.claim.max.appendable.size=50 KB
> >>>>>>>> nifi.content.repository.directory.default=./content_repository
> >>>>>>>> nifi.content.repository.archive.max.retention.period=7 days
> >>>>>>>> nifi.content.repository.archive.max.usage.percentage=50%
> >>>>>>>>   nifi.content.repository.archive.enabled=true
> >>>>>>>>   nifi.content.repository.always.sync=false
> >>>>>>>>
> >>>>>>>>   Thanks
> >>>>>>>>
> >>>>>>>>   On Tue, Sep 13, 2022 at 7:04 AM Lars Winderling
> >>>>>>>>   <[email protected]> wrote:
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>    I'm using 1.16.3 from upstream (no custom build) on Java 11
> >>>>>>>>> Temurin, Debian 10, virtualized, no Docker setup.
> >>>>>>>>>
> >>>>>>>>>    On 13 September 2022 13:37:15 CEST, Joe Witt
> >>>>>>>>> <[email protected]> wrote:
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>    Lars
> >>>>>>>>>>
> >>>>>>>>>>    What version are you using?
> >>>>>>>>>>
> >>>>>>>>>>    Thanks
> >>>>>>>>>>
> >>>>>>>>>>    On Tue, Sep 13, 2022 at 3:11 AM Lars Winderling
> >>>>>>>>>> <[email protected]> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>    Dear community,
> >>>>>>>>>>>
> >>>>>>>>>>>    Sometimes our content repository grows out of bounds.
> >>>>>>>>>>> Since it has been separated on disk from the rest of NiFi,
> >>>>>>>>>>> we can still use the NiFi UI and empty the respective
> >>>>>>>>>>> queues. However, the disk remains full. Sometimes it gets
> >>>>>>>>>>> cleaned up after a few minutes, but most of the time we need
> >>>>>>>>>>> to restart NiFi manually for the cleanup to happen.
> >>>>>>>>>>>    So, is there any way of triggering the content eviction
> >>>>>>>>>>> manually without restarting NiFi?
> >>>>>>>>>>>    Btw, the respective files on disk are not archived in the
> >>>>>>>>>>> content repository (thus not below */archive/*).
> >>>>>>>>>>>
> >>>>>>>>>>>    Thanks in advance for your support!
> >>>>>>>>>>>    Best,
> >>>>>>>>>>>    Lars
> >
