+1

On Tue, 27 Jul 2021 at 14:00, Joe Witt <joe.w...@gmail.com> wrote:

> Scott
>
> This sounds pretty darn cool.  Any chance you'd be interested in
> kicking out a blog on it?
>
> Thanks
>
> On Tue, Jul 27, 2021 at 9:58 AM scott <tcots8...@gmail.com> wrote:
> >
> > Matt/all,
> > I was able to solve my problem using the QueryNiFiReportingTask with
> > "SELECT * FROM CONNECTION_STATUS WHERE isBackPressureEnabled = true" and
> > the new LoggingRecordSink as you suggested. Everything is working
> > flawlessly now. Thank you again!
> >
> > Scott
> >
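Both queries mentioned in this thread, collected for readers who want to adapt them. The table names, the isBackPressureEnabled column, and the predictedTimeToBytesBackpressureMillis column are all quoted from the thread itself; anything else about the schemas should be checked against your NiFi version's documentation.

```sql
-- Reactive: emit a record for every connection currently applying
-- back pressure (the query Scott used above).
SELECT * FROM CONNECTION_STATUS WHERE isBackPressureEnabled = true

-- Predictive: connections expected to hit their byte threshold within
-- 10 seconds (Matt's example elsewhere in the thread).
SELECT * FROM CONNECTION_STATUS_PREDICTIONS
WHERE predictedTimeToBytesBackpressureMillis < 10000
```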
> > On Wed, Jul 21, 2021 at 5:09 PM Matt Burgess <mattyb...@apache.org> wrote:
> >>
> >> Scott,
> >>
> >> Glad to hear it! Please let me know if you have any questions or if
> >> issues arise. One thing I forgot to mention: I think backpressure
> >> prediction is disabled by default due to the extra CPU consumed by
> >> the regressions, so make sure the "nifi.analytics.predict.enabled"
> >> property in nifi.properties is set to "true" before starting NiFi.
> >>
> >> Regards,
> >> Matt
> >>
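The property Matt mentions lives in nifi.properties. A minimal sketch of the relevant entries; only nifi.analytics.predict.enabled is confirmed by this thread, and the interval key and its value are assumptions to verify against your release's Admin Guide.

```properties
# Enable connection status analytics / back pressure prediction
# (off by default because the regressions cost extra CPU)
nifi.analytics.predict.enabled=true
# Prediction window; assumed value shown, verify for your release
nifi.analytics.predict.interval=3 mins
```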
> >> On Wed, Jul 21, 2021 at 7:21 PM scott <tcots8...@gmail.com> wrote:
> >> >
> >> > Excellent! Very much appreciate the help and for setting me on the
> >> > right path. I'll give the QueryNiFiReportingTask code a try.
> >> >
> >> > Scott
> >> >
> >> > On Wed, Jul 21, 2021 at 3:26 PM Matt Burgess <mattyb...@apache.org> wrote:
> >> >>
> >> >> Scott et al,
> >> >>
> >> >> There are a number of options for monitoring flows, including
> >> >> backpressure and even backpressure prediction:
> >> >>
> >> >> 1) The REST API for metrics. As you point out, it's subject to the
> >> >> same authz/authn as any other NiFi operation, so it doesn't sound
> >> >> like it will work out for you.
> >> >> 2) The Prometheus scrape target via the REST API. The issue would be
> >> >> the same as #1 I presume.
> >> >> 3) PrometheusReportingTask. This is similar to the REST scrape target
> >> >> but isn't subject to the usual NiFi authz/authn stuff; however, it
> >> >> does support SSL/TLS for a secure solution (and is also a "pull"
> >> >> approach despite being a reporting task).
> >> >> 4) QueryNiFiReportingTask. This is not included with the NiFi
> >> >> distribution but can be downloaded separately; the latest version
> >> >> (1.14.0) is at [1]. I believe this is what Andrew was referring to
> >> >> when he mentioned being able to run SQL queries over the information;
> >> >> you can do something like "SELECT * FROM CONNECTION_STATUS_PREDICTIONS
> >> >> WHERE predictedTimeToBytesBackpressureMillis < 10000". This can be
> >> >> done either as a push or a pull depending on the Record Sink you
> >> >> choose. A SiteToSiteReportingRecordSink, KafkaRecordSink, or
> >> >> LoggingRecordSink results in a push (to NiFi, Kafka, or nifi-app.log
> >> >> respectively), whereas a PrometheusRecordSink results in a pull, the
> >> >> same as #2 and #3. There's even a ScriptedRecordSink where you can
> >> >> write your own script to put the results where you want them.
> >> >> 5) The other reporting tasks. These have been mentioned frequently in
> >> >> this thread so no need for elaboration here :)
> >> >>
> >> >> Regards,
> >> >> Matt
> >> >>
> >> >> [1] https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-sql-reporting-nar/1.14.0/
> >> >>
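Options #2 and #3 above expose Prometheus-format text, which is easy to post-process outside NiFi. A hedged sketch in Python; the metric name and labels below are invented for illustration, so check the names your NiFi version actually exports before using something like this.

```python
# Parse a Prometheus scrape and flag connections whose queue usage
# crosses a threshold. Metric names here are illustrative only.

def find_full_connections(scrape_text, threshold=80.0):
    """Return metric lines whose percent-used value exceeds threshold."""
    alerts = []
    for line in scrape_text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip HELP/TYPE comments and blank lines
        name_and_labels, _, value = line.rpartition(" ")
        if "percent_used" not in name_and_labels:
            continue  # only look at queue-usage metrics
        if float(value) > threshold:
            alerts.append(name_and_labels)
    return alerts

sample = """\
# HELP example_percent_used_bytes Illustrative queue usage metric
example_percent_used_bytes{connection_name="to-kafka"} 92.5
example_percent_used_bytes{connection_name="to-hdfs"} 10.0
"""
print(find_full_connections(sample))
# → ['example_percent_used_bytes{connection_name="to-kafka"}']
```

The same loop works whether the text comes from the REST scrape endpoint or from a PrometheusRecordSink, since both emit the standard exposition format.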
> >> >> On Wed, Jul 21, 2021 at 5:58 PM scott <tcots8...@gmail.com> wrote:
> >> >> >
> >> >> > Great comments all. I agree with the architecture comment about
> >> >> > push monitoring. I've been monitoring applications for more than
> >> >> > two decades now, but sometimes you have to work around the
> >> >> > limitations of the situation. It would be really nice if NiFi had
> >> >> > this logic built in, and frankly I'm surprised it doesn't yet. I
> >> >> > can't be the only one who has had to deal with queues filling up
> >> >> > and causing problems downstream. NiFi certainly knows when queues
> >> >> > fill up; they change color and execute back-pressure logic. If it
> >> >> > would just do something simple like write a log/error message to a
> >> >> > log file when this happens, I would be good.
> >> >> > I have looked at the new metrics and reporting tasks but still
> >> >> > haven't found the right way to get notified when any queue in my
> >> >> > instance fills up. Are there any examples of using them for a
> >> >> > similar task you can share?
> >> >> >
> >> >> > Thanks,
> >> >> > Scott
> >> >> >
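One concrete (if simplistic) answer to this "notify me" question, matching the LoggingRecordSink route Scott describes at the top of the thread: scan nifi-app.log for the sink's output. The log line format below is invented for illustration; adjust the markers to your logback configuration and the sink's actual records.

```python
# Sketch of the "push to nifi-app.log" route: watch for
# LoggingRecordSink output and collect alert payloads. The markers
# matched here are assumptions, not the sink's documented format.

def extract_backpressure_alerts(log_lines):
    """Return the payload of any log line emitted by the record sink."""
    alerts = []
    for line in log_lines:
        if "LoggingRecordSink" in line and "isBackPressureEnabled" in line:
            # Keep everything after the logger name as the alert payload
            alerts.append(line.split("LoggingRecordSink", 1)[1].strip())
    return alerts

sample_lines = [
    '2021-07-27 10:00:00,000 INFO LoggingRecordSink '
    '{"name":"to-kafka","isBackPressureEnabled":true}',
    "2021-07-27 10:00:01,000 INFO o.a.n.SomeOtherLogger unrelated message",
]
print(extract_backpressure_alerts(sample_lines))
```

In practice you would feed this from a log tailer (or a TailFile flow) and forward non-empty results to whatever alerting channel you use.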
> >> >> > On Wed, Jul 21, 2021 at 11:29 AM u...@moosheimer.com <u...@moosheimer.com> wrote:
> >> >> >>
> >> >> >> In general, it is a bad architecture to do monitoring by pulling;
> >> >> >> you should always push. I recommend a look at the book "The Art
> >> >> >> of Monitoring" by James Turnbull.
> >> >> >>
> >> >> >> I also recommend the very good articles by Pierre Villard on the
> >> >> >> subject of NiFi monitoring at
> >> >> >> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/.
> >> >> >>
> >> >> >> Hope this helps.
> >> >> >>
> >> >> >> Best regards
> >> >> >> Kay-Uwe Moosheimer
> >> >> >>
> >> >> >> On 21.07.2021 at 16:45, Andrew Grande <apere...@gmail.com> wrote:
> >> >> >>
> >> >> >> Can't you leverage some of the recent NiFi features and basically
> >> >> >> run SQL queries over NiFi metrics directly as part of the flow?
> >> >> >> Then act on it with the full flexibility of the flow. Kinda like
> >> >> >> a push design.
> >> >> >>
> >> >> >> Andrew
> >> >> >>
> >> >> >> On Tue, Jul 20, 2021, 2:31 PM scott <tcots8...@gmail.com> wrote:
> >> >> >>>
> >> >> >>> Hi all,
> >> >> >>> I'm trying to set up monitoring of all queues in my NiFi
> >> >> >>> instance, to catch problems before queues become full. One
> >> >> >>> solution I am looking at is to use the API, but because I have a
> >> >> >>> secure NiFi that uses LDAP, it seems to require a token that
> >> >> >>> expires in 24 hours or so. I need this to be an automated
> >> >> >>> solution, so that is not going to work. Has anyone else tackled
> >> >> >>> this problem with a secure LDAP-enabled cluster?
> >> >> >>>
> >> >> >>> Thanks,
> >> >> >>> Scott
>
