In addition, Manish, if you have a larger dataflow to design, you'll start
running into the problem of the flow being difficult to interpret visually.
Process groups help, but once you have many process groups on the UI, the
problem becomes obvious.
How I have handled this personally is to wrap the SQL processors with a
HandleHttpRequest processor, essentially exposing the DB operation as a REST
web service.
Then you have the option of having the FetchHTTP processor append the
results to an attribute instead of the content, which is an option already
avail…
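To illustrate the pattern (not actual NiFi code; the table, endpoint, and names below are all made up), here is a minimal Python sketch of a DB lookup exposed as a tiny REST service, standing in for the HandleHttpRequest → ExecuteSQL → HandleHttpResponse flow:

```python
# Sketch of the "DB operation as a REST web service" pattern described above.
# sqlite3 stands in for the real database; everything here is illustrative.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# In-memory stand-in for the real lookup table.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES ('42', 'Manish')")

def lookup_to_json(key):
    """Run the SQL lookup and serialize the row as JSON (name is null on miss)."""
    row = conn.execute("SELECT name FROM users WHERE id = ?", (key,)).fetchone()
    return json.dumps({"id": key, "name": row[0] if row else None})

class LookupHandler(BaseHTTPRequestHandler):
    """Answers GET /lookup?id=... with the lookup result as JSON."""
    def do_GET(self):
        key = parse_qs(urlparse(self.path).query).get("id", [""])[0]
        body = lookup_to_json(key).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To actually serve it:
#   HTTPServer(("localhost", 8081), LookupHandler).serve_forever()
```

A flow could then hit this endpoint with an HTTP processor and keep the small JSON response in an attribute rather than overwriting the flow file content.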
Agreed. Additionally, if we want to get fancy, we can work with
incoming flow files based on MIME type (JSON, XML, CSV) and have a
"Path" property pointing to a field in the document. Then the processor could
replace the value in the document inline with the lookup value. If XML
files are coming in, the Pa…
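A rough sketch of the "Path" replacement idea for JSON input, assuming a simple dotted-path syntax (an XML flow file would use an XPath instead); this is illustrative only, not a real NiFi processor:

```python
# Sketch: replace the field addressed by a "Path" property with the lookup
# value, inline in the document. Dotted paths like "user.city" are an
# assumption made for this example.
import json

def replace_at_path(document: str, path: str, value) -> str:
    """Return the JSON document with the field at `path` replaced by `value`."""
    doc = json.loads(document)
    node = doc
    *parents, leaf = path.split(".")
    for part in parents:          # walk down to the parent of the target field
        node = node[part]
    node[leaf] = value            # replace inline with the lookup value
    return json.dumps(doc)
```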
I think the lookup processor should return data in a format that can be
efficiently parsed/processed by NiFi expression language, for example JSON.
This would avoid needing an additional "Extract"-type processor. All the
downstream processors can simply work with "jsonPath" for additional lookup ins…
Manish,
Some of the queries in those processors could bring back lots of data, and
putting the results into an attribute could cause memory issues. Another
concern is when the result is binary data, such as ExecuteSQL returning an
Avro file. And since the return of these is a collection of records, th…
Thanks for the reply, Joe. Just a thought: do you think it would be a good idea
for every Get processor (GetMongo, GetHBase, etc.) to have 2 additional
properties like:
1. Result in Content or Result in Attribute
2. Result Attribute Name (only applicable when “Result in Attribute” is…
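A sketch of how those two proposed properties might behave, with the flow file modeled as a plain dict (the property names and defaults here are hypothetical, not NiFi API):

```python
# Sketch of the proposed "Result in Content / Result in Attribute" options:
# route a Get processor's result either into the flow file content or into
# a named attribute. Purely illustrative.
def route_result(flowfile: dict, result: str,
                 destination: str = "content",
                 attribute_name: str = "lookup.result") -> dict:
    """Return a copy of the flow file with the result placed per `destination`."""
    ff = dict(flowfile)
    if destination == "attribute":
        attrs = dict(ff.get("attributes", {}))
        attrs[attribute_name] = result   # property 2: the attribute name
        ff["attributes"] = attrs
    else:
        ff["content"] = result           # default: result replaces content
    return ff
```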
You would need to make a custom processor for now. I think we should have a
nice controller service to generalize JDBC lookups which supports caching,
and then a processor which leverages it.
This comes up fairly often and is pretty straightforward from a design
POV. Anyone want to take a stab at…
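A minimal sketch of what such a caching lookup service could look like, with sqlite3 standing in for JDBC (the class and method names are made up for illustration):

```python
# Sketch of the suggested design: a reusable lookup service with a bounded
# cache that a lookup processor could call into.
import sqlite3
from functools import lru_cache

class CachedLookupService:
    def __init__(self, conn, query: str, cache_size: int = 1024):
        self._conn = conn
        self._query = query  # parameterized SQL, e.g. "SELECT v FROM t WHERE k = ?"
        # Per-instance LRU cache around the actual DB fetch.
        self._lookup = lru_cache(maxsize=cache_size)(self._fetch)

    def _fetch(self, key):
        row = self._conn.execute(self._query, (key,)).fetchone()
        return row[0] if row else None

    def lookup(self, key):
        """Cached lookup: hits the database only on a cache miss."""
        return self._lookup(key)

    def cache_info(self):
        return self._lookup.cache_info()
```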
Lorenzo,
Without seeing the code and logs it would be very difficult to help. NiFi,
by design, has no trouble writing large files (GBs) to many destinations,
including HDFS, so the issue is probably in how this client library interacts
with the data stream.
On Sep 2, 2016 4:19 PM, "Lorenzo Peder" wrote:
Hello Everyone,
Is there a processor that we can use for updating/adding an attribute of an
incoming flow file from some external service (say MongoDB, Couchbase, or any
RDBMS)? The processor would use the attribute of the incoming flow file, query
the external service, and simply modify/add an add…
Hi All,
We've run into an issue uploading a larger file (~50 MB) into an Azure Data Lake
using a custom processor in NiFi 0.7-1.0. This custom processor has worked
consistently for smaller files, but once it encountered this larger file, it
spits out an HTTP 404 error (file not found). Eventually a m…
Bryan,
I have found that the code is looking for an attribute called
“javax.servlet.request.X509Certificate” in the request, and there does not
appear to be one.
https://goo.gl/photos/SwyiAfgezfTSzDeg7
file: X509CertificateExtractor.java Line 39
This article suggests the issue is with the ce…
Hi Gunjan,
In terms of creating and sending data, it certainly seems feasible, but it
might be a bit heavy-handed for your testing purposes. Did you have some more
specific ideas in mind, or a more pointed example of how you would do this?
The way I have typically handled this personally for this kind o…
Ok, I am not completely familiar with all the ins and outs of the
site-to-site client code, but I know one place that it creates a connection
is here:
https://github.com/apache/nifi/blob/a3586e04d9978e105cc5645e893dc6d77b79b86e/nifi-commons/nifi-site-to-site-client/src/main/java/org/apache/nifi/re
Bryan,
In the log on the server side we see this message:
INFO [NiFi Web Server-324] o.a.n.w.a.c.AccessDeniedExceptionMapper anonymous
does not have permission to access the requested resource. Returning
Unauthorized response.
I forgot to mention, we tried adding a user named anonymous and gra…
Paul/Peter,
Having Kerberos enabled should not have any impact; you can have Kerberos
or LDAP enabled, but if a certificate is provided, that should always take
precedence.
What do you see in the nifi-user.log on the second instance (the one the
remote process group is pointing at)?
If it f…
Matt,
That was the case on our first go-round, when we only had SSL certs. We went
back yesterday and got new certificates that support both Server Auth and
Client Auth and rebuilt our KeyStore.
When I use keytool to look at my KeyStore I can see both of these on the
certificate:
#6: ObjectI…
Do the certs you created/obtained support being used for both client and
server auth? If they were created for server auth only, this could explain
your issue. NiFi instances need to act as a client and a server at times.
Thanks,
Matt
On Fri, Sep 2, 2016 at 10:59 AM, Peter Wicks (pwicks) wrote:
Bryan,
We’ve fixed our certs, with no change to the outcome.
We have username/password authentication enabled via Kerberos; are there
issues having Kerberos enabled (username/password) and trying to do
site-to-site? When I try to initiate site-to-site with an instance of NiFi
configured for K…
I am trying a similar approach; hopefully I will get it working. I think the
challenge is going to be the XSLT transformer for my use case.
From: gp...@live.co.za [mailto:gp...@live.co.za]
Sent: Thursday, September 01, 2016 7:52 PM
To: users@nifi.apache.org
Subje…