I also realize the sleep may seem extraneous given the flowfile penalization
that NiFi offers, but I am not sure whether that penalty is applied identically
across relationships, so I didn’t want to penalize the “not-found” relationship
that went to the Get Token processor. You may be interested in
I have a flow that looks like this:
ListFile -> GetFile -> InvokeHTTP (gets OAuth token) -> InvokeHTTP (posts
file with token)
This works fine, but my token is only good for a certain amount of time (maybe
an hour). Is there a setting where one processor runs every 30 minutes (to get
the token) but when
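One way to reason about the scheduling question is to cache the token and refresh it only when it is close to expiring, instead of fetching it for every file. A minimal sketch of that idea in plain Python (the `fetch_token` callable is a hypothetical stand-in for the token-fetching InvokeHTTP call; this is an illustration, not NiFi configuration):

```python
import time

class TokenCache:
    """Cache an OAuth token and refresh it only when it nears expiry.

    fetch_token is any callable returning (token, lifetime_seconds);
    it is a hypothetical stand-in for the token request.
    """
    def __init__(self, fetch_token, refresh_margin=300):
        self.fetch_token = fetch_token
        self.refresh_margin = refresh_margin  # refresh 5 minutes early
        self._token = None
        self._expires_at = 0.0

    def get(self, now=None):
        now = time.time() if now is None else now
        if self._token is None or now >= self._expires_at - self.refresh_margin:
            self._token, lifetime = self.fetch_token()
            self._expires_at = now + lifetime
        return self._token
```

In NiFi terms, a similar effect might be had by scheduling the token InvokeHTTP on its own timer and stashing the result somewhere the posting processor can read it (an attribute or a cache service), but that is an assumption about one possible flow design, not a prescribed setting.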
Joe,
I tried this call on both 1.7.1 and 1.6.0 and am still getting the timeout
exception. I know that this is a very expensive call and requires a lot of
caching on the server side. I was looking for a way to get all processors and
the controller services they reference (if any). Not sure how to get the
Karthik
I believe that call is/was very expensive on the server side. You
might want to experiment with the latest release of NiFi against the
same flow configuration. From conversations I have had, I believe this
issue has been addressed, though admittedly I'm not sure which JIRA
would cover it.
All,
I was trying to get all processors from the "root" process group with the
following REST call:
/nifi-api/process-groups/root/processors?includeDescendantGroups=true
and this call keeps timing out with the exception below:
javax.ws.rs.ProcessingException: java.net.SocketTimeoutException:
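One way to work around the single expensive call is to walk the process-group tree yourself with many cheaper per-group requests, since `/process-groups/{id}/processors` and `/process-groups/{id}/process-groups` list only one level at a time. A minimal sketch, assuming a hypothetical `fetch(path)` helper that performs an authenticated GET against the NiFi REST API and returns the decoded JSON body:

```python
def collect_processors(fetch, group_id="root"):
    """Recursively gather processors from a group and all of its descendants.

    fetch(path) is a hypothetical helper that GETs a NiFi REST path and
    returns the parsed JSON response.
    """
    processors = list(fetch(f"/process-groups/{group_id}/processors")["processors"])
    children = fetch(f"/process-groups/{group_id}/process-groups")["processGroups"]
    for child in children:
        processors.extend(collect_processors(fetch, child["id"]))
    return processors
```

Each request stays small, so a slow server is less likely to time out on any one call, at the cost of more round trips on a deeply nested flow.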
great, thanks all!
nice tool, Otto!
On Mon, Aug 13, 2018 at 9:15 AM Otto Fowler wrote:
> This script:
> https://github.com/ottobackwards/Metron-and-Nifi-Scripts/blob/master/nifi/checkout-nifi-pr
> will let you checkout any NIFI PR to a local directory and build it.
>
> Just:
> cd tmp
>
Did you check
https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html ?
I had a similar issue with Postgres. MySQL identifiers can be case sensitive
(it depends on the platform and server settings). Try using all lower or upper
case to test whether that's the fix.
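A quick way to run that test is to force every identifier you send to the database to one case before building the statement. A minimal sketch in plain Python (string handling only, not a NiFi processor; the function name is made up for illustration):

```python
def normalize_insert(table, columns, case="lower"):
    """Build an INSERT statement with identifiers forced to one case,
    so behavior against a case-sensitive MySQL server is consistent."""
    convert = str.lower if case == "lower" else str.upper
    cols = [convert(c) for c in columns]
    placeholders = ", ".join(["%s"] * len(cols))
    return f"INSERT INTO {convert(table)} ({', '.join(cols)}) VALUES ({placeholders})"
```

If the lowercased statement succeeds where the mixed-case one failed, identifier case sensitivity is the likely culprit.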
On Mon, 13 Aug 2018 at 13:26 Rick wrote:
> Hi Everybody!
>
> I am seeing some really
Rick,
Are you using the version of the MySQL driver in NiFi that corresponds
to your 5.7.23 server? That error seems like it'd be coming from the
database rather than the processor. I have seen issues (although not
this one) with older drivers against newer DBs.
Another thing to try might be to
Hi Everybody!
I am seeing some really unusual and unexpected MySQL error messages
written to the NiFi log file by the latest MySQL version (5.7.23) when
I try to load simple records from a CSV file with a very simple
PutDatabaseRecord flow.
So, before I expend a lot of effort
Bob,
InferAvroSchema can infer types like boolean, integer, long, float, and
double, and for JSON I believe it can correctly descend into arrays and nested
maps/structs/objects. Here is an example record from NiFi provenance data
that has most of those covered (except bool and float/double, but you can
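For reference, a hypothetical record of that shape (not the actual provenance example, which is cut off above) covering most of the types mentioned might look like:

```json
{
  "eventId": 42,
  "eventTime": 1534168800000,
  "durationMillis": 12.5,
  "success": true,
  "attributes": {
    "filename": "data.csv",
    "path": "/input"
  },
  "parentIds": ["a1", "b2"]
}
```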
Trying to develop a sample input file of JSON data to feed into InferAvroSchema
so I can feed that into PutDatabaseRecord.
Need a hello world example ☺
But to get started, I’d be happy just to get InferAvroSchema working. I’m
trial-and-erroring the input file hoping to get lucky, but..
No log
Thanks Mike, that fixed it.
--Peter
From: Michael Moser [mailto:moser...@gmail.com]
Sent: Friday, August 10, 2018 3:49 PM
To: users@nifi.apache.org
Subject: [EXT] Re: After 1.7.1 upgrade, no Provenance data is visible
Hi Peter,
There was a change to provenance-related access policies in 1.7.0.
Ah, makes sense. Hopefully GetFile -> ValidateRecord -> SplitRecord ->
downstream will work for you then, let us know if you run into any
issues!
Thanks,
Matt
On Mon, Aug 13, 2018 at 9:13 AM saloni udani wrote:
>
> Thanks Matt.
>
> Validaterecord would be useful. I am using SplitRecord to create
This script:
https://github.com/ottobackwards/Metron-and-Nifi-Scripts/blob/master/nifi/checkout-nifi-pr
will let you check out any NiFi PR to a local directory and build it.
Just:
cd tmp
checkout-nifi-pr 2945
Maybe useful.
On August 13, 2018 at 08:36:04, Boris Tyukin (bo...@boristyukin.com)
Thanks Matt.
ValidateRecord would be useful. I am using SplitRecord to create batches
from the large input files for efficiency of processing.
On Mon, Aug 13, 2018 at 6:02 PM, Matt Burgess wrote:
> You can use ValidateRecord (with a CSVReader and JSONRecordSetWriter,
> and another "invalid CSV
Haha thanks, but I can't take credit for that much throughput ;) I
moved 99% of ExecuteSQL out to a base class, since the main difference
was a line or two of code to do the actual write and update the
attributes; the two processors then just contain the differences in
logic and properties between
Boris,
Yeah, you can fork either his branch or his entire repo and try it out.
Also, usual caveat: user beware until it passes code review...
Mike
On Mon, Aug 13, 2018 at 8:36 AM Boris Tyukin wrote:
> Matt, you are awesome! 15 files changes and 3k lines of code - man, do not
> tell me you did
Matt, you are awesome! 15 files changed and 3k lines of code - man, do not
tell me you did that in just a few days :)
Since it has not been merged into master yet, can I just use your
personal branch to compile all of NiFi? Or is it better to cherry-pick your
commit onto master? I would like
You can use ValidateRecord (with a CSVReader and JSONRecordSetWriter,
and another "invalid CSV Reader" for invalid records) for that, then
SplitRecord if you need it. However, if you can describe your
downstream flow, perhaps we can help you avoid the need to split the
records at all (unless you
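The routing ValidateRecord performs can be pictured as a per-record partition into valid and invalid sets. A minimal sketch of that idea in plain Python (the `is_valid` predicate is a hypothetical stand-in for the schema check; this is an illustration, not NiFi's implementation):

```python
import csv
import io

def split_records(csv_text, is_valid):
    """Partition CSV rows into (valid, invalid) lists by a per-row
    predicate, mirroring ValidateRecord's valid/invalid relationships."""
    reader = csv.DictReader(io.StringIO(csv_text))
    valid, invalid = [], []
    for row in reader:
        (valid if is_valid(row) else invalid).append(row)
    return valid, invalid
```

In the flow itself, the invalid records would go to their own writer (the "invalid CSV Reader"/writer pairing mentioned above) rather than blocking the whole file in the queue.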
Hi
I have a bunch of CSV files which I need to convert to JSON.
My current flow is
GetFile --> SplitRecord (CSVReader and JSONRecordSetWriter)
The issue is that if the CSV contains an invalid record, the file gets
stuck in the queue. Is there a way to discard the invalid CSV lines
encountered