Without further details, this is what I did to see whether it was something other than the usual issue of not having enough file handles available, such as a legitimate case of something in the code forgetting to close file objects.
1. Set up an 8-core/32GB VM on AWS with the Amazon AMI.
2. Deployed 1.11.1 RC1.
3. Bumped the JVM RAM settings to 6GB/12GB.
4. Disabled flowfile archiving because I only allocated 8GB of storage.
5. Set up a flow that used two GenerateFlowFile instances to generate massive amounts of garbage data using all available cores. (All queues were configured to hold 250k flowfiles.)
6. Kicked it off and let it run for about 20 minutes.

No apparent problem with closing and releasing resources here.

On Sat, Feb 1, 2020 at 8:00 AM Joe Witt <[email protected]> wrote:

> These are usually very easy to find.
>
> Run lsof -p <pid> and share the results.
>
> Thanks
>
> On Sat, Feb 1, 2020 at 7:56 AM Mike Thomsen <[email protected]> wrote:
> >
> > https://stackoverflow.com/questions/59991035/nifi-1-11-opening-more-than-50k-files/60017064#60017064
> >
> > No idea if this is valid or not. I asked for clarification to see if there
> > might be a specific processor or something that is triggering this.
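
(For anyone reproducing this check, a minimal sketch of the commands involved, assuming a Linux host and that the NiFi JVM's command line contains org.apache.nifi.NiFi; adjust the pgrep pattern for your install. The bootstrap process matches org.apache.nifi.bootstrap.RunNiFi instead, so the pattern below is meant to pick up only the main JVM.)

    # Find the NiFi JVM's pid (pattern is an assumption about the command line)
    PID=$(pgrep -f org.apache.nifi.NiFi)

    # What the process is allowed to open, and how many descriptors it holds now
    grep 'Max open files' /proc/$PID/limits
    ls /proc/$PID/fd | wc -l

    # Full listing as suggested above; grouping by the NAME column shows
    # whether one file or socket type dominates, which is the leak signature
    lsof -p $PID > nifi-open-files.txt
    awk '{print $NF}' nifi-open-files.txt | sort | uniq -c | sort -rn | head -20

If the descriptor count stays flat while the flow runs under load, resources are being released; a count that climbs steadily toward the limit is what a leak would look like.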
