I created NIFI-3558 [1] to capture these scenarios. I added this specific 
example, but if anyone has more, please contribute them on the ticket.

[1] https://issues.apache.org/jira/browse/NIFI-3558
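
As an aside, until better tooling exists, one rough way to spot this condition today is to compare the active thread count reported by the controller status (e.g. via a GET to the REST API's flow status endpoint) against the configured Max Timer Driven Thread Count. Below is a minimal sketch of that check; the endpoint path, field name, and the 20% threshold are my own assumptions for illustration, not anything NiFi ships:

```python
import json

# Sketch: decide whether a NiFi instance looks thread-starved from the
# controllerStatus object returned by GET /nifi-api/flow/status.
# The "activeThreadCount" field name is an assumption based on recent
# NiFi versions; verify against your version's REST API docs.

def thread_headroom(controller_status, max_timer_driven_threads):
    """Return (free_threads, starved) for the given controllerStatus dict."""
    active = controller_status["activeThreadCount"]
    free = max_timer_driven_threads - active
    # Heuristic: flag starvation when fewer than ~20% of threads are free.
    starved = free < max(1, max_timer_driven_threads // 5)
    return free, starved

# Example payload shaped like the real response (values invented):
status = json.loads('{"activeThreadCount": 7}')
free, starved = thread_headroom(status, max_timer_driven_threads=10)
print(free, starved)  # -> 3 False (3 free threads of 10; not flagged)
```

Something along these lines could be polled from a monitoring script until an in-UI indicator lands.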

Andy LoPresto
[email protected]
[email protected]
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Mar 6, 2017, at 10:52 AM, Andy LoPresto <[email protected]> wrote:
> 
> Peter,
> 
> There have been intermittent discussions around a “system 
> status/configuration traffic light tool” which would be a visual indicator in 
> the UI that addresses common problems that are easily attributed to a 
> specific configuration value or environment scenario not matching best 
> practices. It would aggregate the collective institutional knowledge of the 
> mailing lists when we’ve encountered the same problem multiple times and try 
> to provide that diagnosis and recommended solutions to the user at a much 
> earlier stage, rather than relying on these conversations. This sounds like 
> another great piece of information to collect and display there.
> 
> There is a vague reference to this “better tooling” in [1] but I can’t find 
> an explicit ticket for it right now. I’ll open one and we can start listing 
> the desired functionality for the first pass.
> 
> [1] https://issues.apache.org/jira/browse/NIFI-3496
> 
> 
> Andy LoPresto
> [email protected]
> [email protected]
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> 
>> On Mar 6, 2017, at 10:18 AM, Peter Wicks (pwicks) <[email protected]> wrote:
>> 
>> Joe,
>> 
>> In my case I had not seen the issue until I added seven new 
>> QueryDatabaseTable processors. All seven of them kicked off against the same 
>> SQL database on restart and took 10 to 15 minutes to come back.  During that 
>> time my default pool of 10 threads had only 3 to spare, and those were being 
>> shared across a lot of other jobs.  I bumped the count up considerably and 
>> have not had issues since.
>> 
>> --Peter
>> 
>> -----Original Message-----
>> From: Joe Witt [mailto:[email protected]]
>> Sent: Friday, March 03, 2017 3:02 PM
>> To: [email protected] <mailto:[email protected]>
>> Subject: Re: Visual Indicator for "Can't run because there are no threads"?
>> 
>> Peter,
>> 
>> That is a good idea, and I don't believe there are any existing JIRAs for 
>> it, but the idea makes a lot of sense.  Being so thread-starved that 
>> processors do not get to run for extended periods of time is pretty unusual.  
>> It makes me think the flow has processors which are not honoring the model 
>> but are instead acting like greedy thread daemons.  That should also be 
>> considered.  But even with that said, I could certainly see how it would be 
>> helpful to know that a processor is running less often than it would like 
>> due to a lack of available threads rather than just backpressure.
>> 
>> Thanks
>> Joe
>> 
>> On Fri, Mar 3, 2017 at 4:57 PM, Peter Wicks (pwicks) <[email protected]> wrote:
>>> I think everyone was really happy when backpressure finally got super
>>> great indicators.  Backpressure used to be my #1, “Why isn’t stuff moving?”
>>> problem.  My latest issue is there are no free threads, sometimes for
>>> hours, and I don’t notice and start wondering what’s going on.
>>> 
>>> 
>>> 
>>> Is there anything under consideration for an indicator to show how
>>> many processors can’t run because there aren’t enough threads
>>> available? I can create a ticket; I wasn’t sure if one was already 
>>> floating around.
> 
