Hi Folks
That's our use case now. All our models run in Python.
Currently we send events to the ML model via HTTP, although this is not optimal.
Our use case is edge ML, where we want a lightweight wrapper for the
Python code base.
Jython, however, does not work with the code base.
I'm thinking of changing
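For context on the "events over HTTP" setup described above, a minimal sketch of posting one event to a Python model server might look like the following. The `/predict` URL and the event fields are hypothetical, purely to show the shape of the call; this is stdlib-only (`urllib`), not a recommendation over the thread's eventual approach.

```python
import json
import urllib.request


def build_event_request(url, event):
    """Serialize an event dict to JSON and wrap it in a POST request."""
    body = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def send_event(url, event, timeout=2.0):
    """POST one event to the (hypothetical) model endpoint, return parsed reply."""
    req = build_event_request(url, event)
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The per-event HTTP round trip is exactly the overhead the poster calls "not optimal"; batching events per request would amortize it.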
Mike,
These are upcoming features, not yet released. It's no surprise you don't
see full reference docs where you'd expect them.
We should probably ask whether those will make it into NiFi 1.10.x or whether
there's a different plan.
Andrew
On Tue, Jul 30, 2019, 1:43 PM HWANG, MICHAEL (MICHAEL) <
Hello
I'm playing with NiFi and the NiFi Registry and trying to understand the
features and capabilities. I noticed that the NiFi Registry API has
endpoints for "/bundles", "/extension-repository", and "/extensions", but I don't
understand how they are used or are intended to be used. I've
Joe,
I think it might not be necessary or desirable to expose this outside
of the `CompressContent` processor. Whether the processor operates in
a lazy mode (as proposed here) or the current eager mode shouldn't
change the behavior of the flow. The next process (or processes) will
not know the
It was JsonTreeReader.
Thanks,
Mike
On Mon, Jul 29, 2019 at 3:36 PM Mark Payne wrote:
> Mike,
>
> What Record Reader is being used here? The problem appears to be due to
> the Record Reader itself assigning that as the field type.
>
> I created a dummy unit test to verify the RecordPath stuff
This is definitely something that I think would really help the community. It
might make sense to frame/structure these APIs such that an internal option
could be available to reduce dependencies and get up and running, but that
could also just as easily allow a remote implementation where the engine lives and is
Yolanda,
I think this sounds like a great idea and will be very useful to admins/users,
as well as enabling some interesting next-level functionality and insight
generation. Thanks for putting this out there.
Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC
Joe,
My concern is that the record reading and writing, as it stands, isn't as
clear as it could be, and this could make it worse. I personally did find
it a little difficult to understand how some record-processing processors
work.
That aside, however, I think that if a "flow level"/Processes
Pierre,
There are a few other indicators of back pressure, including the red
shadow around the queue and the queue status bars. Do you think these are
enough for a user to distinguish between a back-pressured queue and a connection
that someone marked red? Any thoughts on how to keep these
A couple of other things to note:
- Connections can already be gray and dashed when the underlying
relationship no longer exists. This condition often happens when using any
Processor whose connections are dynamic (like RouteOnAttribute).
- Also, there is already an option to adjust the z-index of a
Edward,
I like your point/comment regarding separation of concerns/cohesion. I
think we could/should consider automatically decompressing data on the fly
for processors in general when we know a given set of data is
compressed but is being accessed for plaintext purposes. For general
Hey,
I like the idea, but we need a clear differentiator from the
situation where back pressure is enabled (and the connection is coloured red).
Pierre
On Tue, Jul 30, 2019 at 6:19 PM, Peter Wicks (pwicks)
wrote:
> You know when you have a crazy complex flow, and it's hard
So while I agree in principle, and it is a good idea on paper,
my concern is that this starts to add a bolt-on bloat problem. The NiFi
processors as they stand generally do follow the Unix philosophy (do one
thing, and do it well). My concern is that while it could just be a case with
just
You know when you have a crazy complex flow, and it's hard sometimes to even
tell where things are going? Especially those failure conditions that are all
going back to a central Funnel or Port? I thought it would be visually very
helpful if you could add Color to your connections, using the
Malthe,
I do see value in having the record readers/writers understand and handle
compression directly, as it would avoid the extra disk hit of the
decompress/read/compress cycle with existing processors. Further, there are cases
where the compression is record-specific and not just holistic block
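The decompress/read/transform/compress cycle described above can happen entirely in memory when the readers/writers handle compression themselves. A minimal sketch, assuming newline-delimited JSON records inside a gzip stream (this framing is illustrative, not NiFi's actual record format):

```python
import gzip
import io
import json


def transform_compressed_ndjson(src_bytes, transform):
    """Stream gzip-compressed NDJSON in, apply `transform` to each record,
    and stream gzip-compressed NDJSON out -- no intermediate decompressed
    file ever touches disk."""
    out = io.BytesIO()
    with gzip.open(io.BytesIO(src_bytes), "rt", encoding="utf-8") as reader, \
         gzip.open(out, "wt", encoding="utf-8") as writer:
        for line in reader:
            if line.strip():
                writer.write(json.dumps(transform(json.loads(line))) + "\n")
    return out.getvalue()
```

A record-aware reader doing this internally would avoid the separate decompress and compress passes the poster mentions, at the cost of coupling the reader to the compression codec.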
Hello Everyone,
I wanted to reach out to the community to discuss potentially enhancing
NiFi to include predictive analytics that can help users assess and predict
NiFi behavior and performance. Currently NiFi has lots of metrics available
for areas including the JVM and flow component usage (via
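As a concrete illustration of the kind of prediction this proposal could enable, here is a minimal least-squares sketch that extrapolates a queued-count trend to estimate time until a back-pressure threshold would be reached. The metric, sampling interval, and threshold are all hypothetical; NiFi's actual implementation is not specified here.

```python
def fit_line(ts, ys):
    """Ordinary least-squares fit y = a*t + b over timestamps/values."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(ys) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var
    b = mean_y - a * mean_t
    return a, b


def seconds_until_threshold(ts, ys, threshold):
    """Predict seconds from the last sample until the trend crosses
    `threshold`. Returns None if the trend is flat or decreasing."""
    a, b = fit_line(ts, ys)
    if a <= 0:
        return None
    return (threshold - b) / a - ts[-1]
```

For example, a queue growing by 100 flowfiles per minute that sits at 400 against a 1000-object back-pressure threshold would be predicted to saturate in six minutes; surfacing that estimate in the UI is the sort of "next-level insight" the proposal describes.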