What's the OS you are running on? What kind of systems? Memory stats,
network stats, JVM stats, etc.? How much data is coming through?
On 10 August 2018 at 16:06, Joe Gresock wrote:
> Any nifi developers on this list that have any suggestions?
>
> On Wed, Aug 8, 2018 at 7:38 AM Joe Gresock wrote:
>
All, I'm seeking some advice on best practices for dealing with FlowFiles
that contain a large volume of JSON records.
My flow works like this:
Receive a FlowFile with millions of JSON records in it.
Potentially filter out some of the records based on the value of the JSON
fields. (custom
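The receive-and-filter steps above can be sketched, outside NiFi, as a streaming pass over newline-delimited JSON so that a multi-million-record file never has to sit in memory all at once. This is a minimal Python illustration, not NiFi code; the `status` field and the keep-condition are hypothetical:

```python
import json

def filter_records(lines, keep):
    """Stream newline-delimited JSON records, yielding only those for
    which keep(record) is true. Unparseable lines are skipped, roughly
    analogous to routing bad records to a failure relationship."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # drop malformed record
        if keep(record):
            yield record

# Hypothetical predicate: keep records whose "status" field is "active".
sample = [
    '{"id": 1, "status": "active"}',
    '{"id": 2, "status": "stale"}',
    'not json',
]
kept = list(filter_records(sample, lambda r: r.get("status") == "active"))
print(kept)  # [{'id': 1, 'status': 'active'}]
```

Because the generator yields one record at a time, memory use stays flat regardless of how many records the incoming file contains.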
I am not. I continued googling for a bit after sending my email and
stumbled upon a slide deck by Bryan Bende. My initial concern after
looking at it is that it seems to require schema knowledge.
For most of our data sets, we operate in a space where we have a handful of
guaranteed fields and
When I read this I thought of NIFI-4598 [1] and this may be what Joe
remembers, too. If your site-to-site clients are older than 1.5.0, then
maybe this is a factor?
[1] - https://issues.apache.org/jira/browse/NIFI-4598
-- Mike
On Fri, Aug 10, 2018 at 4:43 PM Joe Witt wrote:
> Joe G
>
> I do
Ben,
Are you familiar with the record readers, writers, and associated
processors?
I suspect if you make a record writer for your custom format at the end of
the flow chain you'll get great performance and control.
Thanks
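The record-writer suggestion above amounts to reading each record once and serializing it in the target format only at the end of the chain. A minimal sketch outside the NiFi API, assuming a hypothetical pipe-delimited custom format and an invented field list, purely to show the shape of such a writer:

```python
import io
import json

def write_custom(records, fields, out):
    """Serialize dict records into a hypothetical pipe-delimited format,
    one record per line, emitting only the named fields in order.
    Missing fields are written as empty strings."""
    for record in records:
        out.write("|".join(str(record.get(f, "")) for f in fields) + "\n")

records = [json.loads(s) for s in (
    '{"id": 1, "name": "a"}',
    '{"id": 2, "name": "b"}',
)]
buf = io.StringIO()
write_custom(records, ["id", "name"], buf)
print(buf.getvalue(), end="")  # 1|a
                               # 2|b
```

Keeping the custom serialization in one writer at the end of the chain means every intermediate step can work on plain records, which is the control and performance win being suggested.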
On Fri, Aug 10, 2018, 4:27 PM Benjamin Janssen wrote:
> All, I'm seeking
Boris et al,
I put up a PR [1] to add ExecuteSQLRecord and QueryDatabaseTableRecord
under NIFI-4517, in case anyone wants to play around with it :)
Regards,
Matt
[1] https://github.com/apache/nifi/pull/2945
On Tue, Aug 7, 2018 at 8:30 PM Boris Tyukin wrote:
>
> Matt, you rock!! thank you!!
>
>
Yep, what Mike points to is exactly what I was thinking of. Since
you're on 1.6.0, the issue is probably something else. 1.6
included an updated Jersey client or something related to that. Its
performance was really bad for our case. In 1.7.0 it was replaced
with an implementation
Curtis
Now that there is also a PR for this, I'll comment on the specifics
directly there as well.
In reviewing the discussion here...
There is consensus that enabling the pattern of REST API interaction
you need for your case is a valuable capability.
However, we have not achieved
Joe G
I do recall there were some fixes and improvements related to
clustering performance/thread pooling as it relates to site-to-site.
I don't recall precisely which version they went into, but I'd strongly
recommend trying the latest release if you're able.
Thanks
On Fri, Aug 10, 2018 at 4:13
Joe G,
Also, to clarify, when you say "when we add receiving Site-to-Site traffic to
the mix, the CPU spikes to the point that the nodes can't talk to each other,
resulting in the inability to view or modify the flow in the console",
what exactly does "when we add receiving Site-to-Site traffic
After upgrading our NiFi instances to 1.7.1 we are not able to see Provenance
data anymore in the UI. We see this across about a dozen instances.
In the UI it tells me provenance is available for about the last 24 hours, and
I can see that files have moved in and out of the processor in the last
Hi Peter,
There was a change to provenance related access policies in 1.7.0. Check
out the Migration Guide [1] for 1.7.0. It talks about what you'll need to
do.
[1] - https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance
-- Mike
On Fri, Aug 10, 2018 at 5:39 PM Peter Wicks
I created NIFI-5506 for the wantClientAuth specific issue, and submitted a
WIP PR#2944 for review.
Besides issues with getting OIDC working (on the OIDC Server side), this
enables external providers. Potentially, this could be amended to include
X509 through reverse proxy by way of a request
Any nifi developers on this list that have any suggestions?
On Wed, Aug 8, 2018 at 7:38 AM Joe Gresock wrote:
> I am running a 7-node NiFi 1.6.0 cluster that performs fairly well when
> it's simply processing its own data (putting records in Elasticsearch,
> MongoDB, running transforms, etc.).