Re: NiFi Queue Monitoring

2021-07-27 Thread Jens M. Kofoed
Why not use the NiFi wiki page at Confluence?
https://cwiki.apache.org/confluence/display/NIFI
So many great people have written wonderful blogs about
NiFi, but for new users it is a nightmare to find them all. I think it
would be great if many of those wonderful tips and guides could be added to
the wiki, or at least linked from it.

regards
Jens M. Kofoed

On Wed, Jul 28, 2021 at 00:15 Matt Burgess  wrote:

> I’m planning on doing one all about QueryNiFiReportingTask and the
> RecordSinks, I can include this use case if you like, but would definitely
> encourage you to blog it as well :) my blog is at
> https://funnifi.blogspot.com as an example, there are many others as well.
>
> Regards,
> Matt
>
> On Jul 27, 2021, at 5:17 PM, scott  wrote:
>
> 
> Joe,
> I'm not sure. What would be involved? I'm not familiar with a NiFi blog,
> can you point me to some examples?
>
> Thanks,
> Scott
>
> On Tue, Jul 27, 2021 at 10:00 AM Joe Witt  wrote:
>
>> Scott
>>
>> This sounds pretty darn cool.  Any chance you'd be interested in
>> kicking out a blog on it?
>>
>> Thanks
>>
>> On Tue, Jul 27, 2021 at 9:58 AM scott  wrote:
>> >
>> > Matt/all,
>> > I was able to solve my problem using the QueryNiFiReportingTask with
>> "SELECT * FROM CONNECTION_STATUS WHERE isBackPressureEnabled = true" and
>> the new LoggingRecordSink as you suggested. Everything is working
>> flawlessly now. Thank you again!
>> >
>> > Scott
>> >
>> > On Wed, Jul 21, 2021 at 5:09 PM Matt Burgess 
>> wrote:
>> >>
>> >> Scott,
>> >>
>> >> Glad to hear it! Please let me know if you have any questions or if
>> >> issues arise. One thing I forgot to mention is that I think
>> >> backpressure prediction is disabled by default due to the extra
>> >> consumption of CPU to do the regressions, make sure the
>> >> "nifi.analytics.predict.enabled" property in nifi.properties is set to
>> >> "true" before starting NiFi.
>> >>
>> >> Regards,
>> >> Matt
>> >>
>> >> On Wed, Jul 21, 2021 at 7:21 PM scott  wrote:
>> >> >
>> >> > Excellent! Very much appreciate the help and for setting me on the
>> right path. I'll give the queryNiFiReportingTask code a try.
>> >> >
>> >> > Scott
>> >> >
>> >> > On Wed, Jul 21, 2021 at 3:26 PM Matt Burgess 
>> wrote:
>> >> >>
>> >> >> Scott et al,
>> >> >>
>> >> >> There are a number of options for monitoring flows, including
>> >> >> backpressure and even backpressure prediction:
>> >> >>
>> >> >> 1) The REST API for metrics. As you point out, it's subject to the
>> >> >> same authz/authn as any other NiFi operation and doesn't sound like
>> it
>> >> >> will work out for you.
>> >> >> 2) The Prometheus scrape target via the REST API. The issue would be
>> >> >> the same as #1 I presume.
>> >> >> 3) PrometheusReportingTask. This is similar to the REST scrape
>> target
>> >> >> but isn't subject to the usual NiFi authz/authn stuff, however it
>> does
>> >> >> support SSL/TLS for a secure solution (and is also a "pull" approach
>> >> >> despite it being a reporting task)
>> >> >> 4) QueryNiFiReportingTask. This is not included with the NiFi
>> >> >> distribution but can be downloaded separately, the latest version
>> >> >> (1.14.0) is at [1]. I believe this is what Andrew was referring to
>> >> >> when he mentioned being able to run SQL queries over the
>> information,
>> >> >> you can do something like "SELECT * FROM
>> CONNECTION_STATUS_PREDICTIONS
>> >> >> WHERE predictedTimeToBytesBackpressureMillis < 1". This can be
>> >> >> done either as a push or pull depending on the Record Sink you
>> choose.
>> >> >> A SiteToSiteReportingRecordSink, KafkaRecordSink, or
>> LoggingRecordSink
>> >> >> results in a push (to NiFi, Kafka, or nifi-app.log respectively),
>> >> >> where a PrometheusRecordSink results in a pull the same as #2 and
>> #3.
>> >> >> There's even a ScriptedRecordSink where you can write your own
>> script
>> >> >> to put the results where you want them.
>> >> >> 5) The other reporting tasks. These have been mentioned frequently
>> in
>> >> >> this thread so no need for elaboration here :)
>> >> >>
>> >> >> Regards,
>> >> >> Matt
>> >> >>
>> >> >> [1]
>> https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-sql-reporting-nar/1.14.0/
>> >> >>
>> >> >> On Wed, Jul 21, 2021 at 5:58 PM scott  wrote:
>> >> >> >
>> >> >> > Great comments all. I agree with the architecture comment about
>> push monitoring. I've been monitoring applications for more than 2 decades
>> now, but sometimes you have to work around the limitations of the
>> situation. It would be really nice if NiFi had this logic built-in, and
>> frankly I'm surprised it is not yet. I can't be the only one who has had to
>> deal with queues filling up, causing problems downstream. NiFi certainly
>> knows that the queues fill up, they change color and execute back-pressure
>> logic. If it would just do something simple like write a log/error message
>> to a log file when this happens, I would 

Re: NiFi Queue Monitoring

2021-07-27 Thread Matt Burgess
I'm planning on doing one all about QueryNiFiReportingTask and the RecordSinks; 
I can include this use case if you like, but I would definitely encourage you to 
blog it as well :) My blog is at https://funnifi.blogspot.com as an example, and 
there are many others as well.

Regards,
Matt

> On Jul 27, 2021, at 5:17 PM, scott  wrote:
> 
> 
> Joe,
> I'm not sure. What would be involved? I'm not familiar with a NiFi blog, can 
> you point me to some examples?
> 
> Thanks,
> Scott
> 
>> On Tue, Jul 27, 2021 at 10:00 AM Joe Witt  wrote:
>> Scott
>> 
>> This sounds pretty darn cool.  Any chance you'd be interested in
>> kicking out a blog on it?
>> 
>> Thanks
>> 
>> On Tue, Jul 27, 2021 at 9:58 AM scott  wrote:
>> >
>> > Matt/all,
>> > I was able to solve my problem using the QueryNiFiReportingTask with 
>> > "SELECT * FROM CONNECTION_STATUS WHERE isBackPressureEnabled = true" and 
>> > the new LoggingRecordSink as you suggested. Everything is working 
>> > flawlessly now. Thank you again!
>> >
>> > Scott
>> >
>> > On Wed, Jul 21, 2021 at 5:09 PM Matt Burgess  wrote:
>> >>
>> >> Scott,
>> >>
>> >> Glad to hear it! Please let me know if you have any questions or if
>> >> issues arise. One thing I forgot to mention is that I think
>> >> backpressure prediction is disabled by default due to the extra
>> >> consumption of CPU to do the regressions, make sure the
>> >> "nifi.analytics.predict.enabled" property in nifi.properties is set to
>> >> "true" before starting NiFi.
>> >>
>> >> Regards,
>> >> Matt
>> >>
>> >> On Wed, Jul 21, 2021 at 7:21 PM scott  wrote:
>> >> >
>> >> > Excellent! Very much appreciate the help and for setting me on the 
>> >> > right path. I'll give the queryNiFiReportingTask code a try.
>> >> >
>> >> > Scott
>> >> >
>> >> > On Wed, Jul 21, 2021 at 3:26 PM Matt Burgess  
>> >> > wrote:
>> >> >>
>> >> >> Scott et al,
>> >> >>
>> >> >> There are a number of options for monitoring flows, including
>> >> >> backpressure and even backpressure prediction:
>> >> >>
>> >> >> 1) The REST API for metrics. As you point out, it's subject to the
>> >> >> same authz/authn as any other NiFi operation and doesn't sound like it
>> >> >> will work out for you.
>> >> >> 2) The Prometheus scrape target via the REST API. The issue would be
>> >> >> the same as #1 I presume.
>> >> >> 3) PrometheusReportingTask. This is similar to the REST scrape target
>> >> >> but isn't subject to the usual NiFi authz/authn stuff, however it does
>> >> >> support SSL/TLS for a secure solution (and is also a "pull" approach
>> >> >> despite it being a reporting task)
>> >> >> 4) QueryNiFiReportingTask. This is not included with the NiFi
>> >> >> distribution but can be downloaded separately, the latest version
>> >> >> (1.14.0) is at [1]. I believe this is what Andrew was referring to
>> >> >> when he mentioned being able to run SQL queries over the information,
>> >> >> you can do something like "SELECT * FROM CONNECTION_STATUS_PREDICTIONS
>> >> >> WHERE predictedTimeToBytesBackpressureMillis < 1". This can be
>> >> >> done either as a push or pull depending on the Record Sink you choose.
>> >> >> A SiteToSiteReportingRecordSink, KafkaRecordSink, or LoggingRecordSink
>> >> >> results in a push (to NiFi, Kafka, or nifi-app.log respectively),
>> >> >> where a PrometheusRecordSink results in a pull the same as #2 and #3.
>> >> >> There's even a ScriptedRecordSink where you can write your own script
>> >> >> to put the results where you want them.
>> >> >> 5) The other reporting tasks. These have been mentioned frequently in
>> >> >> this thread so no need for elaboration here :)
>> >> >>
>> >> >> Regards,
>> >> >> Matt
>> >> >>
>> >> >> [1] 
>> >> >> https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-sql-reporting-nar/1.14.0/
>> >> >>
>> >> >> On Wed, Jul 21, 2021 at 5:58 PM scott  wrote:
>> >> >> >
>> >> >> > Great comments all. I agree with the architecture comment about push 
>> >> >> > monitoring. I've been monitoring applications for more than 2 
>> >> >> > decades now, but sometimes you have to work around the limitations 
>> >> >> > of the situation. It would be really nice if NiFi had this logic 
>> >> >> > built-in, and frankly I'm surprised it is not yet. I can't be the 
>> >> >> > only one who has had to deal with queues filling up, causing 
>> >> >> > problems downstream. NiFi certainly knows that the queues fill up, 
>> >> >> > they change color and execute back-pressure logic. If it would just 
>> >> >> > do something simple like write a log/error message to a log file 
>> >> >> > when this happens, I would be good.
>> >> >> > I have looked at the new metrics and reporting tasks but still 
>> >> >> > haven't found the right thing to do to get notified when any queue 
>> >> >> > in my instance fills up. Are there any examples of using them for a 
>> >> >> > similar task you can share?
>> >> >> >
>> >> >> > Thanks,
>> >> >> > Scott
>> >> >> >
>> >> >> > On Wed, Jul 21, 2021 at 11:29 AM 

Re: NiFi Queue Monitoring

2021-07-27 Thread scott
Joe,
I'm not sure. What would be involved? I'm not familiar with a NiFi blog,
can you point me to some examples?

Thanks,
Scott

On Tue, Jul 27, 2021 at 10:00 AM Joe Witt  wrote:

> Scott
>
> This sounds pretty darn cool.  Any chance you'd be interested in
> kicking out a blog on it?
>
> Thanks
>
> On Tue, Jul 27, 2021 at 9:58 AM scott  wrote:
> >
> > Matt/all,
> > I was able to solve my problem using the QueryNiFiReportingTask with
> "SELECT * FROM CONNECTION_STATUS WHERE isBackPressureEnabled = true" and
> the new LoggingRecordSink as you suggested. Everything is working
> flawlessly now. Thank you again!
> >
> > Scott
> >
> > On Wed, Jul 21, 2021 at 5:09 PM Matt Burgess 
> wrote:
> >>
> >> Scott,
> >>
> >> Glad to hear it! Please let me know if you have any questions or if
> >> issues arise. One thing I forgot to mention is that I think
> >> backpressure prediction is disabled by default due to the extra
> >> consumption of CPU to do the regressions, make sure the
> >> "nifi.analytics.predict.enabled" property in nifi.properties is set to
> >> "true" before starting NiFi.
> >>
> >> Regards,
> >> Matt
> >>
> >> On Wed, Jul 21, 2021 at 7:21 PM scott  wrote:
> >> >
> >> > Excellent! Very much appreciate the help and for setting me on the
> right path. I'll give the queryNiFiReportingTask code a try.
> >> >
> >> > Scott
> >> >
> >> > On Wed, Jul 21, 2021 at 3:26 PM Matt Burgess 
> wrote:
> >> >>
> >> >> Scott et al,
> >> >>
> >> >> There are a number of options for monitoring flows, including
> >> >> backpressure and even backpressure prediction:
> >> >>
> >> >> 1) The REST API for metrics. As you point out, it's subject to the
> >> >> same authz/authn as any other NiFi operation and doesn't sound like
> it
> >> >> will work out for you.
> >> >> 2) The Prometheus scrape target via the REST API. The issue would be
> >> >> the same as #1 I presume.
> >> >> 3) PrometheusReportingTask. This is similar to the REST scrape target
> >> >> but isn't subject to the usual NiFi authz/authn stuff, however it
> does
> >> >> support SSL/TLS for a secure solution (and is also a "pull" approach
> >> >> despite it being a reporting task)
> >> >> 4) QueryNiFiReportingTask. This is not included with the NiFi
> >> >> distribution but can be downloaded separately, the latest version
> >> >> (1.14.0) is at [1]. I believe this is what Andrew was referring to
> >> >> when he mentioned being able to run SQL queries over the information,
> >> >> you can do something like "SELECT * FROM
> CONNECTION_STATUS_PREDICTIONS
> >> >> WHERE predictedTimeToBytesBackpressureMillis < 1". This can be
> >> >> done either as a push or pull depending on the Record Sink you
> choose.
> >> >> A SiteToSiteReportingRecordSink, KafkaRecordSink, or
> LoggingRecordSink
> >> >> results in a push (to NiFi, Kafka, or nifi-app.log respectively),
> >> >> where a PrometheusRecordSink results in a pull the same as #2 and #3.
> >> >> There's even a ScriptedRecordSink where you can write your own script
> >> >> to put the results where you want them.
> >> >> 5) The other reporting tasks. These have been mentioned frequently in
> >> >> this thread so no need for elaboration here :)
> >> >>
> >> >> Regards,
> >> >> Matt
> >> >>
> >> >> [1]
> https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-sql-reporting-nar/1.14.0/
> >> >>
> >> >> On Wed, Jul 21, 2021 at 5:58 PM scott  wrote:
> >> >> >
> >> >> > Great comments all. I agree with the architecture comment about
> push monitoring. I've been monitoring applications for more than 2 decades
> now, but sometimes you have to work around the limitations of the
> situation. It would be really nice if NiFi had this logic built-in, and
> frankly I'm surprised it is not yet. I can't be the only one who has had to
> deal with queues filling up, causing problems downstream. NiFi certainly
> knows that the queues fill up, they change color and execute back-pressure
> logic. If it would just do something simple like write a log/error message
> to a log file when this happens, I would be good.
> >> >> > I have looked at the new metrics and reporting tasks but still
> haven't found the right thing to do to get notified when any queue in my
> instance fills up. Are there any examples of using them for a similar task
> you can share?
> >> >> >
> >> >> > Thanks,
> >> >> > Scott
> >> >> >
> >> >> > On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com <
> u...@moosheimer.com> wrote:
> >> >> >>
> >> >> >> In general, it is a bad architecture to do monitoring via pull
> request. You should always push. I recommend a look at the book "The Art of
> Monitoring" by James Turnbull.
> >> >> >>
> >> >> >> I also recommend the very good articles by Pierre Villard on the
> subject of NiFi monitoring at
> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
> >> >> >>
> >> >> >> Hope this helps.
> >> >> >>
> >> >> >> Mit freundlichen Grüßen / best regards
> >> >> >> Kay-Uwe Moosheimer
> >> >> >>
> >> >> 

Re: NiFi Queue Monitoring

2021-07-27 Thread Joe Witt
Scott

This sounds pretty darn cool.  Any chance you'd be interested in
kicking out a blog on it?

Thanks

On Tue, Jul 27, 2021 at 9:58 AM scott  wrote:
>
> Matt/all,
> I was able to solve my problem using the QueryNiFiReportingTask with "SELECT 
> * FROM CONNECTION_STATUS WHERE isBackPressureEnabled = true" and the new 
> LoggingRecordSink as you suggested. Everything is working flawlessly now. 
> Thank you again!
>
> Scott
>
> On Wed, Jul 21, 2021 at 5:09 PM Matt Burgess  wrote:
>>
>> Scott,
>>
>> Glad to hear it! Please let me know if you have any questions or if
>> issues arise. One thing I forgot to mention is that I think
>> backpressure prediction is disabled by default due to the extra
>> consumption of CPU to do the regressions, make sure the
>> "nifi.analytics.predict.enabled" property in nifi.properties is set to
>> "true" before starting NiFi.
>>
>> Regards,
>> Matt
>>
>> On Wed, Jul 21, 2021 at 7:21 PM scott  wrote:
>> >
>> > Excellent! Very much appreciate the help and for setting me on the right 
>> > path. I'll give the queryNiFiReportingTask code a try.
>> >
>> > Scott
>> >
>> > On Wed, Jul 21, 2021 at 3:26 PM Matt Burgess  wrote:
>> >>
>> >> Scott et al,
>> >>
>> >> There are a number of options for monitoring flows, including
>> >> backpressure and even backpressure prediction:
>> >>
>> >> 1) The REST API for metrics. As you point out, it's subject to the
>> >> same authz/authn as any other NiFi operation and doesn't sound like it
>> >> will work out for you.
>> >> 2) The Prometheus scrape target via the REST API. The issue would be
>> >> the same as #1 I presume.
>> >> 3) PrometheusReportingTask. This is similar to the REST scrape target
>> >> but isn't subject to the usual NiFi authz/authn stuff, however it does
>> >> support SSL/TLS for a secure solution (and is also a "pull" approach
>> >> despite it being a reporting task)
>> >> 4) QueryNiFiReportingTask. This is not included with the NiFi
>> >> distribution but can be downloaded separately, the latest version
>> >> (1.14.0) is at [1]. I believe this is what Andrew was referring to
>> >> when he mentioned being able to run SQL queries over the information,
>> >> you can do something like "SELECT * FROM CONNECTION_STATUS_PREDICTIONS
>> >> WHERE predictedTimeToBytesBackpressureMillis < 1". This can be
>> >> done either as a push or pull depending on the Record Sink you choose.
>> >> A SiteToSiteReportingRecordSink, KafkaRecordSink, or LoggingRecordSink
>> >> results in a push (to NiFi, Kafka, or nifi-app.log respectively),
>> >> where a PrometheusRecordSink results in a pull the same as #2 and #3.
>> >> There's even a ScriptedRecordSink where you can write your own script
>> >> to put the results where you want them.
>> >> 5) The other reporting tasks. These have been mentioned frequently in
>> >> this thread so no need for elaboration here :)
>> >>
>> >> Regards,
>> >> Matt
>> >>
>> >> [1] 
>> >> https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-sql-reporting-nar/1.14.0/
>> >>
>> >> On Wed, Jul 21, 2021 at 5:58 PM scott  wrote:
>> >> >
>> >> > Great comments all. I agree with the architecture comment about push 
>> >> > monitoring. I've been monitoring applications for more than 2 decades 
>> >> > now, but sometimes you have to work around the limitations of the 
>> >> > situation. It would be really nice if NiFi had this logic built-in, and 
>> >> > frankly I'm surprised it is not yet. I can't be the only one who has 
>> >> > had to deal with queues filling up, causing problems downstream. NiFi 
>> >> > certainly knows that the queues fill up, they change color and execute 
>> >> > back-pressure logic. If it would just do something simple like write a 
>> >> > log/error message to a log file when this happens, I would be good.
>> >> > I have looked at the new metrics and reporting tasks but still haven't 
>> >> > found the right thing to do to get notified when any queue in my 
>> >> > instance fills up. Are there any examples of using them for a similar 
>> >> > task you can share?
>> >> >
>> >> > Thanks,
>> >> > Scott
>> >> >
>> >> > On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com 
>> >> >  wrote:
>> >> >>
>> >> >> In general, it is a bad architecture to do monitoring via pull 
>> >> >> request. You should always push. I recommend a look at the book "The 
>> >> >> Art of Monitoring" by James Turnbull.
>> >> >>
>> >> >> I also recommend the very good articles by Pierre Villard on the 
>> >> >> subject of NiFi monitoring at 
>> >> >> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
>> >> >>
>> >> >> Hope this helps.
>> >> >>
>> >> >> Mit freundlichen Grüßen / best regards
>> >> >> Kay-Uwe Moosheimer
>> >> >>
>> >> >> On 21.07.2021 at 16:45, Andrew Grande  wrote:
>> >> >>
>> >> >> 
>> >> >> Can't you leverage some of the recent nifi features and basically run 
>> >> >> sql queries over NiFi metrics directly as part of the flow? Then act 
>> >> >> on it with a full 

Re: NiFi Queue Monitoring

2021-07-27 Thread scott
Matt/all,
I was able to solve my problem using the QueryNiFiReportingTask with
"SELECT * FROM CONNECTION_STATUS WHERE isBackPressureEnabled = true" and
the new LoggingRecordSink as you suggested. Everything is working
flawlessly now. Thank you again!

Scott
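
For anyone reproducing this setup, a small watcher over nifi-app.log can turn the
LoggingRecordSink output into an actual notification. This is only a sketch: the log
path and the substrings it greps for are assumptions that depend on your logback
configuration and the record writer configured on the sink, so check what the emitted
lines really look like before relying on it.

import time

LOG_FILE = "/opt/nifi/logs/nifi-app.log"   # adjust to your installation

def follow(path):
    """Yield lines appended to the file, tail -f style."""
    with open(path, "r") as f:
        f.seek(0, 2)                        # start at the current end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)
                continue
            yield line

for line in follow(LOG_FILE):
    # The reporting task only emits rows matching the WHERE clause above, i.e.
    # connections already applying back pressure, so any sink output line is
    # alert-worthy. Replace print() with mail/Slack/pager integration as needed.
    if "isBackPressureEnabled" in line or "CONNECTION_STATUS" in line:
        print("ALERT: back pressure detected ->", line.rstrip())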

On Wed, Jul 21, 2021 at 5:09 PM Matt Burgess  wrote:

> Scott,
>
> Glad to hear it! Please let me know if you have any questions or if
> issues arise. One thing I forgot to mention is that I think
> backpressure prediction is disabled by default due to the extra
> consumption of CPU to do the regressions, make sure the
> "nifi.analytics.predict.enabled" property in nifi.properties is set to
> "true" before starting NiFi.
>
> Regards,
> Matt
>
> On Wed, Jul 21, 2021 at 7:21 PM scott  wrote:
> >
> > Excellent! Very much appreciate the help and for setting me on the right
> path. I'll give the queryNiFiReportingTask code a try.
> >
> > Scott
> >
> > On Wed, Jul 21, 2021 at 3:26 PM Matt Burgess 
> wrote:
> >>
> >> Scott et al,
> >>
> >> There are a number of options for monitoring flows, including
> >> backpressure and even backpressure prediction:
> >>
> >> 1) The REST API for metrics. As you point out, it's subject to the
> >> same authz/authn as any other NiFi operation and doesn't sound like it
> >> will work out for you.
> >> 2) The Prometheus scrape target via the REST API. The issue would be
> >> the same as #1 I presume.
> >> 3) PrometheusReportingTask. This is similar to the REST scrape target
> >> but isn't subject to the usual NiFi authz/authn stuff, however it does
> >> support SSL/TLS for a secure solution (and is also a "pull" approach
> >> despite it being a reporting task)
> >> 4) QueryNiFiReportingTask. This is not included with the NiFi
> >> distribution but can be downloaded separately, the latest version
> >> (1.14.0) is at [1]. I believe this is what Andrew was referring to
> >> when he mentioned being able to run SQL queries over the information,
> >> you can do something like "SELECT * FROM CONNECTION_STATUS_PREDICTIONS
> >> WHERE predictedTimeToBytesBackpressureMillis < 1". This can be
> >> done either as a push or pull depending on the Record Sink you choose.
> >> A SiteToSiteReportingRecordSink, KafkaRecordSink, or LoggingRecordSink
> >> results in a push (to NiFi, Kafka, or nifi-app.log respectively),
> >> where a PrometheusRecordSink results in a pull the same as #2 and #3.
> >> There's even a ScriptedRecordSink where you can write your own script
> >> to put the results where you want them.
> >> 5) The other reporting tasks. These have been mentioned frequently in
> >> this thread so no need for elaboration here :)
> >>
> >> Regards,
> >> Matt
> >>
> >> [1]
> https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-sql-reporting-nar/1.14.0/
> >>
> >> On Wed, Jul 21, 2021 at 5:58 PM scott  wrote:
> >> >
> >> > Great comments all. I agree with the architecture comment about push
> monitoring. I've been monitoring applications for more than 2 decades now,
> but sometimes you have to work around the limitations of the situation. It
> would be really nice if NiFi had this logic built-in, and frankly I'm
> surprised it is not yet. I can't be the only one who has had to deal with
> queues filling up, causing problems downstream. NiFi certainly knows that
> the queues fill up, they change color and execute back-pressure logic. If
> it would just do something simple like write a log/error message to a log
> file when this happens, I would be good.
> >> > I have looked at the new metrics and reporting tasks but still
> haven't found the right thing to do to get notified when any queue in my
> instance fills up. Are there any examples of using them for a similar task
> you can share?
> >> >
> >> > Thanks,
> >> > Scott
> >> >
> >> > On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com <
> u...@moosheimer.com> wrote:
> >> >>
> >> >> In general, it is a bad architecture to do monitoring via pull
> request. You should always push. I recommend a look at the book "The Art of
> Monitoring" by James Turnbull.
> >> >>
> >> >> I also recommend the very good articles by Pierre Villard on the
> subject of NiFi monitoring at
> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
> >> >>
> >> >> Hope this helps.
> >> >>
> >> >> Mit freundlichen Grüßen / best regards
> >> >> Kay-Uwe Moosheimer
> >> >>
> >> >> On 21.07.2021 at 16:45, Andrew Grande  wrote:
> >> >>
> >> >> 
> >> >> Can't you leverage some of the recent nifi features and basically
> run sql queries over NiFi metrics directly as part of the flow? Then act on
> it with a full flexibility of the flow. Kinda like a push design.
> >> >>
> >> >> Andrew
> >> >>
> >> >> On Tue, Jul 20, 2021, 2:31 PM scott  wrote:
> >> >>>
> >> >>> Hi all,
> >> >>> I'm trying to setup some monitoring of all queues in my NiFi
> instance, to catch before queues become full. One solution I am looking at
> is to use the API, but because I have a secure NiFi that uses LDAP, 

Re: NiFi Queue Monitoring

2021-07-21 Thread Matt Burgess
Scott,

Glad to hear it! Please let me know if you have any questions or if
issues arise. One thing I forgot to mention is that I think
backpressure prediction is disabled by default due to the extra
consumption of CPU to do the regressions; make sure the
"nifi.analytics.predict.enabled" property in nifi.properties is set to
"true" before starting NiFi.

Regards,
Matt
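
For quick reference, the relevant line in nifi.properties looks like this (per the
note above it defaults to off, so flip it and restart NiFi):

# nifi.properties -- back pressure prediction analytics (off by default per the note above)
nifi.analytics.predict.enabled=true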

On Wed, Jul 21, 2021 at 7:21 PM scott  wrote:
>
> Excellent! Very much appreciate the help and for setting me on the right 
> path. I'll give the queryNiFiReportingTask code a try.
>
> Scott
>
> On Wed, Jul 21, 2021 at 3:26 PM Matt Burgess  wrote:
>>
>> Scott et al,
>>
>> There are a number of options for monitoring flows, including
>> backpressure and even backpressure prediction:
>>
>> 1) The REST API for metrics. As you point out, it's subject to the
>> same authz/authn as any other NiFi operation and doesn't sound like it
>> will work out for you.
>> 2) The Prometheus scrape target via the REST API. The issue would be
>> the same as #1 I presume.
>> 3) PrometheusReportingTask. This is similar to the REST scrape target
>> but isn't subject to the usual NiFi authz/authn stuff, however it does
>> support SSL/TLS for a secure solution (and is also a "pull" approach
>> despite it being a reporting task)
>> 4) QueryNiFiReportingTask. This is not included with the NiFi
>> distribution but can be downloaded separately, the latest version
>> (1.14.0) is at [1]. I believe this is what Andrew was referring to
>> when he mentioned being able to run SQL queries over the information,
>> you can do something like "SELECT * FROM CONNECTION_STATUS_PREDICTIONS
>> WHERE predictedTimeToBytesBackpressureMillis < 1". This can be
>> done either as a push or pull depending on the Record Sink you choose.
>> A SiteToSiteReportingRecordSink, KafkaRecordSink, or LoggingRecordSink
>> results in a push (to NiFi, Kafka, or nifi-app.log respectively),
>> where a PrometheusRecordSink results in a pull the same as #2 and #3.
>> There's even a ScriptedRecordSink where you can write your own script
>> to put the results where you want them.
>> 5) The other reporting tasks. These have been mentioned frequently in
>> this thread so no need for elaboration here :)
>>
>> Regards,
>> Matt
>>
>> [1] 
>> https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-sql-reporting-nar/1.14.0/
>>
>> On Wed, Jul 21, 2021 at 5:58 PM scott  wrote:
>> >
>> > Great comments all. I agree with the architecture comment about push 
>> > monitoring. I've been monitoring applications for more than 2 decades now, 
>> > but sometimes you have to work around the limitations of the situation. It 
>> > would be really nice if NiFi had this logic built-in, and frankly I'm 
>> > surprised it is not yet. I can't be the only one who has had to deal with 
>> > queues filling up, causing problems downstream. NiFi certainly knows that 
>> > the queues fill up, they change color and execute back-pressure logic. If 
>> > it would just do something simple like write a log/error message to a log 
>> > file when this happens, I would be good.
>> > I have looked at the new metrics and reporting tasks but still haven't 
>> > found the right thing to do to get notified when any queue in my instance 
>> > fills up. Are there any examples of using them for a similar task you can 
>> > share?
>> >
>> > Thanks,
>> > Scott
>> >
>> > On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com  
>> > wrote:
>> >>
>> >> In general, it is a bad architecture to do monitoring via pull request. 
>> >> You should always push. I recommend a look at the book "The Art of 
>> >> Monitoring" by James Turnbull.
>> >>
>> >> I also recommend the very good articles by Pierre Villard on the subject 
>> >> of NiFi monitoring at 
>> >> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
>> >>
>> >> Hope this helps.
>> >>
>> >> Mit freundlichen Grüßen / best regards
>> >> Kay-Uwe Moosheimer
>> >>
>> >> On 21.07.2021 at 16:45, Andrew Grande  wrote:
>> >>
>> >> 
>> >> Can't you leverage some of the recent nifi features and basically run sql 
>> >> queries over NiFi metrics directly as part of the flow? Then act on it 
>> >> with a full flexibility of the flow. Kinda like a push design.
>> >>
>> >> Andrew
>> >>
>> >> On Tue, Jul 20, 2021, 2:31 PM scott  wrote:
>> >>>
>> >>> Hi all,
>> >>> I'm trying to setup some monitoring of all queues in my NiFi instance, 
>> >>> to catch before queues become full. One solution I am looking at is to 
>> >>> use the API, but because I have a secure NiFi that uses LDAP, it seems 
>> >>> to require a token that expires in 24 hours or so. I need this to be an 
>> >>> automated solution, so that is not going to work. Has anyone else 
>> >>> tackled this problem with a secure LDAP enabled cluster?
>> >>>
>> >>> Thanks,
>> >>> Scott


Re: NiFi Queue Monitoring

2021-07-21 Thread scott
Excellent! Very much appreciate the help and for setting me on the right
path. I'll give the queryNiFiReportingTask code a try.

Scott

On Wed, Jul 21, 2021 at 3:26 PM Matt Burgess  wrote:

> Scott et al,
>
> There are a number of options for monitoring flows, including
> backpressure and even backpressure prediction:
>
> 1) The REST API for metrics. As you point out, it's subject to the
> same authz/authn as any other NiFi operation and doesn't sound like it
> will work out for you.
> 2) The Prometheus scrape target via the REST API. The issue would be
> the same as #1 I presume.
> 3) PrometheusReportingTask. This is similar to the REST scrape target
> but isn't subject to the usual NiFi authz/authn stuff, however it does
> support SSL/TLS for a secure solution (and is also a "pull" approach
> despite it being a reporting task)
> 4) QueryNiFiReportingTask. This is not included with the NiFi
> distribution but can be downloaded separately, the latest version
> (1.14.0) is at [1]. I believe this is what Andrew was referring to
> when he mentioned being able to run SQL queries over the information,
> you can do something like "SELECT * FROM CONNECTION_STATUS_PREDICTIONS
> WHERE predictedTimeToBytesBackpressureMillis < 1". This can be
> done either as a push or pull depending on the Record Sink you choose.
> A SiteToSiteReportingRecordSink, KafkaRecordSink, or LoggingRecordSink
> results in a push (to NiFi, Kafka, or nifi-app.log respectively),
> where a PrometheusRecordSink results in a pull the same as #2 and #3.
> There's even a ScriptedRecordSink where you can write your own script
> to put the results where you want them.
> 5) The other reporting tasks. These have been mentioned frequently in
> this thread so no need for elaboration here :)
>
> Regards,
> Matt
>
> [1]
> https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-sql-reporting-nar/1.14.0/
>
> On Wed, Jul 21, 2021 at 5:58 PM scott  wrote:
> >
> > Great comments all. I agree with the architecture comment about push
> monitoring. I've been monitoring applications for more than 2 decades now,
> but sometimes you have to work around the limitations of the situation. It
> would be really nice if NiFi had this logic built-in, and frankly I'm
> surprised it is not yet. I can't be the only one who has had to deal with
> queues filling up, causing problems downstream. NiFi certainly knows that
> the queues fill up, they change color and execute back-pressure logic. If
> it would just do something simple like write a log/error message to a log
> file when this happens, I would be good.
> > I have looked at the new metrics and reporting tasks but still haven't
> found the right thing to do to get notified when any queue in my instance
> fills up. Are there any examples of using them for a similar task you can
> share?
> >
> > Thanks,
> > Scott
> >
> > On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com 
> wrote:
> >>
> >> In general, it is a bad architecture to do monitoring via pull request.
> You should always push. I recommend a look at the book "The Art of
> Monitoring" by James Turnbull.
> >>
> >> I also recommend the very good articles by Pierre Villard on the
> subject of NiFi monitoring at
> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
> >>
> >> Hope this helps.
> >>
> >> Mit freundlichen Grüßen / best regards
> >> Kay-Uwe Moosheimer
> >>
> >> On 21.07.2021 at 16:45, Andrew Grande  wrote:
> >>
> >> 
> >> Can't you leverage some of the recent nifi features and basically run
> sql queries over NiFi metrics directly as part of the flow? Then act on it
> with a full flexibility of the flow. Kinda like a push design.
> >>
> >> Andrew
> >>
> >> On Tue, Jul 20, 2021, 2:31 PM scott  wrote:
> >>>
> >>> Hi all,
> >>> I'm trying to setup some monitoring of all queues in my NiFi instance,
> to catch before queues become full. One solution I am looking at is to use
> the API, but because I have a secure NiFi that uses LDAP, it seems to
> require a token that expires in 24 hours or so. I need this to be an
> automated solution, so that is not going to work. Has anyone else tackled
> this problem with a secure LDAP enabled cluster?
> >>>
> >>> Thanks,
> >>> Scott
>


Re: NiFi Queue Monitoring

2021-07-21 Thread Matt Burgess
Scott et al,

There are a number of options for monitoring flows, including
backpressure and even backpressure prediction:

1) The REST API for metrics. As you point out, it's subject to the
same authz/authn as any other NiFi operation and doesn't sound like it
will work out for you.
2) The Prometheus scrape target via the REST API. The issue would be
the same as #1 I presume.
3) PrometheusReportingTask. This is similar to the REST scrape target
but isn't subject to the usual NiFi authz/authn stuff; however, it does
support SSL/TLS for a secure solution (and is also a "pull" approach
despite being a reporting task).
4) QueryNiFiReportingTask. This is not included with the NiFi
distribution but can be downloaded separately; the latest version
(1.14.0) is at [1]. I believe this is what Andrew was referring to
when he mentioned being able to run SQL queries over the information;
you can do something like "SELECT * FROM CONNECTION_STATUS_PREDICTIONS
WHERE predictedTimeToBytesBackpressureMillis < 1". This can be
done either as a push or pull depending on the Record Sink you choose.
A SiteToSiteReportingRecordSink, KafkaRecordSink, or LoggingRecordSink
results in a push (to NiFi, Kafka, or nifi-app.log respectively),
where a PrometheusRecordSink results in a pull the same as #2 and #3.
There's even a ScriptedRecordSink where you can write your own script
to put the results where you want them.
5) The other reporting tasks. These have been mentioned frequently in
this thread so no need for elaboration here :)

Regards,
Matt

[1] 
https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-sql-reporting-nar/1.14.0/
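
To illustrate the pull side of options 2 and 3, a small scraper along these lines can
watch the Prometheus output for backpressure-related series. Port 9092 is assumed to be
the PrometheusReportingTask default and the substring filter is only a guess at the
relevant metric names, so adjust both; for the REST scrape target (option 2) you would
hit /nifi-api/flow/metrics/prometheus with normal NiFi authentication instead.

import requests

# Endpoint assumed to be exposed by the PrometheusReportingTask (option 3).
METRICS_URL = "http://nifi.example.com:9092/metrics"

text = requests.get(METRICS_URL, timeout=10).text

for line in text.splitlines():
    if line.startswith("#"):
        continue  # skip HELP/TYPE comment lines in the exposition format
    # Metric names vary by NiFi version, so filter loosely on likely keywords
    # rather than hard-coding exact series names.
    lowered = line.lower()
    if "backpressure" in lowered or "percent_used" in lowered:
        print(line)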

On Wed, Jul 21, 2021 at 5:58 PM scott  wrote:
>
> Great comments all. I agree with the architecture comment about push 
> monitoring. I've been monitoring applications for more than 2 decades now, 
> but sometimes you have to work around the limitations of the situation. It 
> would be really nice if NiFi had this logic built-in, and frankly I'm 
> surprised it is not yet. I can't be the only one who has had to deal with 
> queues filling up, causing problems downstream. NiFi certainly knows that the 
> queues fill up, they change color and execute back-pressure logic. If it 
> would just do something simple like write a log/error message to a log file 
> when this happens, I would be good.
> I have looked at the new metrics and reporting tasks but still haven't found 
> the right thing to do to get notified when any queue in my instance fills up. 
> Are there any examples of using them for a similar task you can share?
>
> Thanks,
> Scott
>
> On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com  
> wrote:
>>
>> In general, it is a bad architecture to do monitoring via pull request. You 
>> should always push. I recommend a look at the book "The Art of Monitoring" 
>> by James Turnbull.
>>
>> I also recommend the very good articles by Pierre Villard on the subject of 
>> NiFi monitoring at 
>> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
>>
>> Hope this helps.
>>
>> Mit freundlichen Grüßen / best regards
>> Kay-Uwe Moosheimer
>>
>> On 21.07.2021 at 16:45, Andrew Grande  wrote:
>>
>> 
>> Can't you leverage some of the recent nifi features and basically run sql 
>> queries over NiFi metrics directly as part of the flow? Then act on it with 
>> a full flexibility of the flow. Kinda like a push design.
>>
>> Andrew
>>
>> On Tue, Jul 20, 2021, 2:31 PM scott  wrote:
>>>
>>> Hi all,
>>> I'm trying to setup some monitoring of all queues in my NiFi instance, to 
>>> catch before queues become full. One solution I am looking at is to use the 
>>> API, but because I have a secure NiFi that uses LDAP, it seems to require a 
>>> token that expires in 24 hours or so. I need this to be an automated 
>>> solution, so that is not going to work. Has anyone else tackled this 
>>> problem with a secure LDAP enabled cluster?
>>>
>>> Thanks,
>>> Scott


Re: NiFi Queue Monitoring

2021-07-21 Thread u...@moosheimer.com
Scott

Check out this from Pierre: https://pierrevillard.com/tag/reporting-task/

We monitor all NiFi parameters via Reporting Tasks: we send them via MQTT to 
InfluxDB and monitor them in Grafana, where we trigger alerts when the levels 
reach a critical value.

If that doesn't help you, describe your problem in more detail. Maybe we can 
help.

Mit freundlichen Grüßen / best regards
Kay-Uwe Moosheimer
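
For readers curious what the MQTT to InfluxDB leg of such a pipeline can look like,
here is a rough sketch of a bridge process. Broker address, topic layout, database
name, and the JSON payload shape are all assumptions about a setup like the one
described above, not something NiFi ships with; Grafana then alerts on the resulting
series.

import json
import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient

# Placeholder connection details for the monitoring stack described above.
influx = InfluxDBClient(host="influxdb.example.com", port=8086, database="nifi_metrics")

def on_message(client, userdata, msg):
    # Assumes the NiFi side publishes one JSON object per connection, e.g.
    # {"name": "...", "queuedCount": 123, "queuedBytes": 456, "percentUseCount": 12}
    payload = json.loads(msg.payload)
    influx.write_points([{
        "measurement": "connection_status",
        "tags": {"connection": payload.get("name", "unknown")},
        "fields": {k: v for k, v in payload.items() if isinstance(v, (int, float))},
    }])

client = mqtt.Client()            # paho-mqtt 1.x style client
client.on_message = on_message
client.connect("mqtt.example.com", 1883)
client.subscribe("nifi/connection-status/#")
client.loop_forever()             # Grafana dashboards/alerts read from InfluxDB, not from here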

> On 22.07.2021 at 00:04, Joe Witt  wrote:
> 
> 
> Scott
> 
> Nifi supports both push and pull. Push via reporting tasks and pull via rest 
> api.
> 
> Are you needing a particular impl of a reporting task?
> 
> You are right this is a common need.  Solved using one of these methods.
> 
> Thanks
> 
>> On Wed, Jul 21, 2021 at 2:58 PM scott  wrote:
>> Great comments all. I agree with the architecture comment about push 
>> monitoring. I've been monitoring applications for more than 2 decades now, 
>> but sometimes you have to work around the limitations of the situation. It 
>> would be really nice if NiFi had this logic built-in, and frankly I'm 
>> surprised it is not yet. I can't be the only one who has had to deal with 
>> queues filling up, causing problems downstream. NiFi certainly knows that 
>> the queues fill up, they change color and execute back-pressure logic. If it 
>> would just do something simple like write a log/error message to a log file 
>> when this happens, I would be good. 
>> I have looked at the new metrics and reporting tasks but still haven't found 
>> the right thing to do to get notified when any queue in my instance fills 
>> up. Are there any examples of using them for a similar task you can share?
>> 
>> Thanks,
>> Scott
>> 
>>> On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com  
>>> wrote:
>>> In general, it is a bad architecture to do monitoring via pull request. You 
>>> should always push. I recommend a look at the book "The Art of Monitoring" 
>>> by James Turnbull.
>>> 
>>> I also recommend the very good articles by Pierre Villard on the subject of 
>>> NiFi monitoring at 
>>> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
>>> 
>>> Hope this helps.
>>> 
>>> Mit freundlichen Grüßen / best regards
>>> Kay-Uwe Moosheimer
>>> 
> On 21.07.2021 at 16:45, Andrew Grande  wrote:
> 
 
 Can't you leverage some of the recent nifi features and basically run sql 
 queries over NiFi metrics directly as part of the flow? Then act on it 
 with a full flexibility of the flow. Kinda like a push design.
 
 Andrew
 
> On Tue, Jul 20, 2021, 2:31 PM scott  wrote:
> Hi all,
> I'm trying to setup some monitoring of all queues in my NiFi instance, to 
> catch before queues become full. One solution I am looking at is to use 
> the API, but because I have a secure NiFi that uses LDAP, it seems to 
> require a token that expires in 24 hours or so. I need this to be an 
> automated solution, so that is not going to work. Has anyone else tackled 
> this problem with a secure LDAP enabled cluster? 
> 
> Thanks,
> Scott


Re: NiFi Queue Monitoring

2021-07-21 Thread Joe Witt
Scott

NiFi supports both push and pull: push via reporting tasks and pull via the
REST API.

Are you needing a particular impl of a reporting task?

You are right that this is a common need; it is solved using one of these methods.

Thanks

On Wed, Jul 21, 2021 at 2:58 PM scott  wrote:

> Great comments all. I agree with the architecture comment about push
> monitoring. I've been monitoring applications for more than 2 decades now,
> but sometimes you have to work around the limitations of the situation. It
> would be really nice if NiFi had this logic built-in, and frankly I'm
> surprised it is not yet. I can't be the only one who has had to deal with
> queues filling up, causing problems downstream. NiFi certainly knows that
> the queues fill up, they change color and execute back-pressure logic. If
> it would just do something simple like write a log/error message to a log
> file when this happens, I would be good.
> I have looked at the new metrics and reporting tasks but still haven't
> found the right thing to do to get notified when any queue in my
> instance fills up. Are there any examples of using them for a similar task
> you can share?
>
> Thanks,
> Scott
>
> On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com 
> wrote:
>
>> In general, it is a bad architecture to do monitoring via pull request.
>> You should always push. I recommend a look at the book "The Art of
>> Monitoring" by James Turnbull.
>>
>> I also recommend the very good articles by Pierre Villard on the subject
>> of NiFi monitoring at
>> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
>>
>> Hope this helps.
>>
>> Mit freundlichen Grüßen / best regards
>> Kay-Uwe Moosheimer
>>
>> On 21.07.2021 at 16:45, Andrew Grande  wrote:
>>
>> 
>> Can't you leverage some of the recent nifi features and basically run sql
>> queries over NiFi metrics directly as part of the flow? Then act on it with
>> a full flexibility of the flow. Kinda like a push design.
>>
>> Andrew
>>
>> On Tue, Jul 20, 2021, 2:31 PM scott  wrote:
>>
>>> Hi all,
>>> I'm trying to setup some monitoring of all queues in my NiFi instance,
>>> to catch before queues become full. One solution I am looking at is to use
>>> the API, but because I have a secure NiFi that uses LDAP, it seems to
>>> require a token that expires in 24 hours or so. I need this to be an
>>> automated solution, so that is not going to work. Has anyone else tackled
>>> this problem with a secure LDAP enabled cluster?
>>>
>>> Thanks,
>>> Scott
>>>
>>


Re: NiFi Queue Monitoring

2021-07-21 Thread scott
Great comments all. I agree with the architecture comment about push
monitoring. I've been monitoring applications for more than 2 decades now,
but sometimes you have to work around the limitations of the situation. It
would be really nice if NiFi had this logic built-in, and frankly I'm
surprised it is not yet. I can't be the only one who has had to deal with
queues filling up, causing problems downstream. NiFi certainly knows that
the queues fill up; they change color and execute back-pressure logic. If
it would just do something simple like write a log/error message to a log
file when this happens, I would be good.
I have looked at the new metrics and reporting tasks but still haven't
found the right thing to do to get notified when any queue in my
instance fills up. Are there any examples of using them for a similar task
you can share?

Thanks,
Scott

On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com 
wrote:

> In general, it is a bad architecture to do monitoring via pull request.
> You should always push. I recommend a look at the book "The Art of
> Monitoring" by James Turnbull.
>
> I also recommend the very good articles by Pierre Villard on the subject
> of NiFi monitoring at
> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
>
> Hope this helps.
>
> Mit freundlichen Grüßen / best regards
> Kay-Uwe Moosheimer
>
> On 21.07.2021 at 16:45, Andrew Grande  wrote:
>
> 
> Can't you leverage some of the recent nifi features and basically run sql
> queries over NiFi metrics directly as part of the flow? Then act on it with
> a full flexibility of the flow. Kinda like a push design.
>
> Andrew
>
> On Tue, Jul 20, 2021, 2:31 PM scott  wrote:
>
>> Hi all,
>> I'm trying to setup some monitoring of all queues in my NiFi instance, to
>> catch before queues become full. One solution I am looking at is to use the
>> API, but because I have a secure NiFi that uses LDAP, it seems to require a
>> token that expires in 24 hours or so. I need this to be an automated
>> solution, so that is not going to work. Has anyone else tackled this
>> problem with a secure LDAP enabled cluster?
>>
>> Thanks,
>> Scott
>>
>


Re: NiFi Queue Monitoring

2021-07-21 Thread u...@moosheimer.com
In general, it is a bad architecture to do monitoring via pull requests (polling); 
you should always push. I recommend a look at the book "The Art of Monitoring" by 
James Turnbull.

I also recommend the very good articles by Pierre Villard on the subject of 
NiFi monitoring at 
https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.

Hope this helps.

Mit freundlichen Grüßen / best regards
Kay-Uwe Moosheimer

> On 21.07.2021 at 16:45, Andrew Grande  wrote:
> 
> 
> Can't you leverage some of the recent nifi features and basically run sql 
> queries over NiFi metrics directly as part of the flow? Then act on it with a 
> full flexibility of the flow. Kinda like a push design.
> 
> Andrew
> 
>> On Tue, Jul 20, 2021, 2:31 PM scott  wrote:
>> Hi all,
>> I'm trying to setup some monitoring of all queues in my NiFi instance, to 
>> catch before queues become full. One solution I am looking at is to use the 
>> API, but because I have a secure NiFi that uses LDAP, it seems to require a 
>> token that expires in 24 hours or so. I need this to be an automated 
>> solution, so that is not going to work. Has anyone else tackled this problem 
>> with a secure LDAP enabled cluster? 
>> 
>> Thanks,
>> Scott


Re: NiFi Queue Monitoring

2021-07-21 Thread Andrew Grande
Can't you leverage some of the recent nifi features and basically run sql
queries over NiFi metrics directly as part of the flow? Then act on it with
a full flexibility of the flow. Kinda like a push design.

Andrew

On Tue, Jul 20, 2021, 2:31 PM scott  wrote:

> Hi all,
> I'm trying to setup some monitoring of all queues in my NiFi instance, to
> catch before queues become full. One solution I am looking at is to use the
> API, but because I have a secure NiFi that uses LDAP, it seems to require a
> token that expires in 24 hours or so. I need this to be an automated
> solution, so that is not going to work. Has anyone else tackled this
> problem with a secure LDAP enabled cluster?
>
> Thanks,
> Scott
>


Re: NiFi Queue Monitoring

2021-07-20 Thread Lars Winderling
Scott,
you could use TLS client certificate auth, maybe including an appropriate identity 
mapping. Since you have been using LDAP, you may be able to use the DN as the cert 
subject as-is. Only be aware that whitespace handling in the subject DN might differ 
between NiFi and your LDAP. We're also running NiFi secured with an additional auth 
provider, but two-way TLS is always accepted by NiFi.
But maybe you could also employ a reporting task instead of polling the API.
Best, Lars
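
As a sketch of the two-way-TLS alternative described above, a monitoring script would
authenticate with a client certificate instead of a token, so nothing expires. The
paths, host, and the assumption that the certificate's DN is mapped to an authorized
NiFi identity are placeholders; the status endpoint and response fields should be
checked against your NiFi version.

import requests

NIFI = "https://nifi.example.com:8443/nifi-api"   # placeholder base URL

# The client certificate identifies the monitoring user (via NiFi's identity
# mapping rules), so there is no /access/token call and nothing to refresh.
session = requests.Session()
session.cert = ("/etc/monitoring/nifi-monitor.crt", "/etc/monitoring/nifi-monitor.key")
session.verify = "/etc/monitoring/nifi-ca.pem"

resp = session.get(f"{NIFI}/flow/process-groups/root/status",
                   params={"recursive": "true"})
resp.raise_for_status()

snapshot = resp.json()["processGroupStatus"]["aggregateSnapshot"]
print("connections reported:", len(snapshot.get("connectionStatusSnapshots", [])))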

On 20 July 2021 23:31:02 CEST, scott  wrote:
>Hi all,
>I'm trying to setup some monitoring of all queues in my NiFi instance,
>to
>catch before queues become full. One solution I am looking at is to use
>the
>API, but because I have a secure NiFi that uses LDAP, it seems to
>require a
>token that expires in 24 hours or so. I need this to be an automated
>solution, so that is not going to work. Has anyone else tackled this
>problem with a secure LDAP enabled cluster?
>
>Thanks,
>Scott


NiFi Queue Monitoring

2021-07-20 Thread scott
Hi all,
I'm trying to set up monitoring of all queues in my NiFi instance, to catch
them before they become full. One solution I am looking at is to use the
API, but because I have a secure NiFi that uses LDAP, it seems to require a
token that expires in 24 hours or so. I need this to be an automated
solution, so that is not going to work. Has anyone else tackled this
problem with a secure LDAP enabled cluster?

Thanks,
Scott
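
For completeness, here is a minimal sketch of what the pull approach this question
starts from can look like, assuming the standard /nifi-api/access/token login endpoint
for LDAP-backed credentials and the recursive process-group status endpoint. The host,
credentials, certificate paths, the 80% threshold, and the exact field names are
placeholders and assumptions to verify against your NiFi version; because the token
expires, the script simply fetches a fresh one each time it runs (for example from cron).

import requests

NIFI = "https://nifi.example.com:8443/nifi-api"   # placeholder base URL
USER, PASSWORD = "monitor-svc", "changeme"        # placeholder LDAP credentials
CA_BUNDLE = "/etc/nifi/certs/ca.pem"              # placeholder CA bundle for TLS verification

# Tokens from LDAP login expire after a configured window (the ~24 hours mentioned
# above), so an automated poller requests a fresh one on every run.
token = requests.post(f"{NIFI}/access/token",
                      data={"username": USER, "password": PASSWORD},
                      verify=CA_BUNDLE).text

headers = {"Authorization": f"Bearer {token}"}
status = requests.get(f"{NIFI}/flow/process-groups/root/status",
                      params={"recursive": "true"},
                      headers=headers, verify=CA_BUNDLE).json()

def connections(group):
    """Recursively yield connection status snapshots for a process group."""
    for conn in group.get("connectionStatusSnapshots", []):
        yield conn["connectionStatusSnapshot"]
    for child in group.get("processGroupStatusSnapshots", []):
        yield from connections(child["processGroupStatusSnapshot"])

root = status["processGroupStatus"]["aggregateSnapshot"]
for snap in connections(root):
    # percentUseCount / percentUseBytes indicate how close a queue is to its back
    # pressure thresholds (field names as found in the connection status DTO;
    # verify them against your NiFi version).
    if snap.get("percentUseCount", 0) >= 80 or snap.get("percentUseBytes", 0) >= 80:
        print(f"Queue nearly full: {snap.get('name')} "
              f"({snap.get('percentUseCount')}% objects / {snap.get('percentUseBytes')}% bytes)")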