Re: NiFi 1.1.1 & 1.2.0 with PostgreSQL 9.5

2017-06-11 Thread Koji Kawamura
Hello Raymond,

Does the PutSQL processor show a number (probably 1) in its top-right
corner in the NiFi UI? If so, an execution thread is stuck somewhere,
and looking at a thread dump may help to investigate where it stops.
Could you share the result of "$NIFI_HOME/bin/nifi.sh dump"? The output
will be written to logs/nifi-bootstrap.log.

Alternatively, if PutSQL has downstream connections such as 'success' or
'failure' and those connections have 10,000 FlowFiles queued, then 'Back
Pressure' is applied to them. In that case, NiFi won't schedule PutSQL
until the queued FlowFiles move forward or are removed, and the number of
queued FlowFiles and their total size drop below the configured
back-pressure thresholds, 10,000 FlowFiles or 1 GB in size by default.
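
These thresholds are per-connection settings, normally adjusted in the UI
(right-click the connection, Configure, Settings tab). For reference, they
can also be changed through NiFi's REST API; here is a rough Java sketch,
where the host, connection id, revision version, and new threshold values
are all placeholder assumptions you would look up and choose yourself:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class RaiseBackPressureThresholds {
        public static void main(String[] args) throws Exception {
            // Placeholder connection id; fetch the current revision first via
            // GET /nifi-api/connections/{id} and reuse its version number here.
            String connectionId = "abc123-connection-id";
            String json = "{"
                    + "\"revision\": {\"version\": 1},"
                    + "\"component\": {"
                    + "\"id\": \"" + connectionId + "\","
                    + "\"backPressureObjectThreshold\": 20000,"
                    + "\"backPressureDataSizeThreshold\": \"2 GB\""
                    + "}}";

            URL url = new URL("http://localhost:8080/nifi-api/connections/"
                    + connectionId);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("PUT");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(json.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }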

Hope this helps.

Thanks,
Koji

On Sat, Jun 10, 2017 at 5:36 AM, Raymond Rogers wrote:
> We are having a performance issue where we are inserting 2,000+ rows into a
> PostgreSQL database.
>
> The database server is basically idle, with disk writes of 200-300 KB per
> second.
> We are using the postgresql-42.1.1.jar JDBC driver.
> The NiFi server has over 10,000 FlowFiles waiting in the queue for the PutSQL
> processor.
> It appears that PutSQL has completely stopped processing FlowFiles.
> The two servers are sitting in the same rack and network traffic is less than
> 300 KB.
> I can't find any errors being logged anywhere.
>
> Can anyone make any suggestions?
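
As a side note on the insert workload itself: PutSQL groups compatible
statements into JDBC batches (controlled by its Batch Size property), which
is what keeps per-row network round trips down. Below is a minimal
standalone sketch of a batched PostgreSQL insert over JDBC; the connection
URL, credentials, table, and columns are illustrative assumptions only:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchedInsertSketch {
        public static void main(String[] args) throws Exception {
            // Illustrative connection details; adjust to your environment.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
                conn.setAutoCommit(false); // one transaction for the whole batch
                String sql = "INSERT INTO events (id, payload) VALUES (?, ?)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int i = 0; i < 2000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "row-" + i);
                        ps.addBatch(); // queue locally instead of a round trip
                    }
                    ps.executeBatch(); // ship all queued rows at once
                }
                conn.commit();
            }
        }
    }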


HDF NiFi - Does NiFi write provenance/data on HDP nodes?

2017-06-11 Thread Shashi Vishwakarma
Hi

I have an HDF cluster with 3 NiFi instances which launches jobs (Hive/Spark)
on an HDP cluster. Usually NiFi writes all of its information to the
different repositories available on the local machine.

My question is: does NiFi write any data or provenance information, or do
any spilling, on HDP nodes (e.g. data nodes in the HDP cluster) while
accessing HDFS, Hive, or Spark services?

Thanks

Shashi


Re: Questions about record processors

2017-06-11 Thread Joe Witt
Mika

And yes to question one.  What you're doing is fine.  There will also soon
be an option for the writer to just infer its schema from the reader, which
may be useful in simplifying the config further.

Glad you are finding these useful.  If you'd like to contribute a sample
flow that does this we have a wiki page for such things.  I'm sure others
would find it helpful.

Thanks
Joe

On Jun 11, 2017 1:20 PM, "Mika Borner"  wrote:

> Answering the second question myself.
>
> The record paths had one slash too many: they should be /bar and /foo
> instead of //bar and //foo.
>
>
> On 06/11/2017 10:06 PM, Mika Borner wrote:
>
>> Hi
>>
>> Been playing with the new record processors (great so far!). Now I have
>> two questions about usage.
>>
>> 1. Let's say I have an UpdateRecord processor with a GrokReader and a
>> JsonRecordSetWriter manipulating the record. This works fine. Now I want to
>> further process the records with a LookupRecord and/or a QueryRecord
>> processor. In my case, setting the Record Reader for chained record
>> processors to JsonTreeReader seems to work. Just wondering if this is the
>> right thing to do, as there are only examples with a single record
>> processor.
>>
>> 2. I haven't figured out yet how the LookupRecord processor
>> configuration has to be done. My lookup table contains two columns and one
>> row:
>>
>> foo, bar
>> foovalue1, barvalue1
>>
>> The SimpleCsvFileLookupService has "Lookup Key Column" set to "foo" and
>> "Lookup Value Column" to "bar".
>>
>> The LookupRecord processor has "Result RecordPath" set to //bar. The
>> property "key" is set to //foo. Here I'm not sure if the property name
>> "key" is correct. Any other value causes an error. Unfortunately the lookup
>> doesn't work and I also couldn't find any examples in the documentation.
>>
>> Thanks for your help!
>>
>> Mika
>


Re: Questions about record processors

2017-06-11 Thread Mika Borner

Answering the second question myself.

The record paths had one slash too many: they should be /bar and /foo
instead of //bar and //foo.
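
For anyone hitting the same thing, here is a minimal sketch of how a
direct-child RecordPath like /foo resolves against a flat record, assuming
NiFi's nifi-record and nifi-record-path modules are on the classpath:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.nifi.record.path.RecordPath;
    import org.apache.nifi.serialization.SimpleRecordSchema;
    import org.apache.nifi.serialization.record.MapRecord;
    import org.apache.nifi.serialization.record.Record;
    import org.apache.nifi.serialization.record.RecordField;
    import org.apache.nifi.serialization.record.RecordFieldType;

    public class RecordPathSketch {
        public static void main(String[] args) {
            // A flat record with a single string field named "foo".
            SimpleRecordSchema schema = new SimpleRecordSchema(Arrays.asList(
                    new RecordField("foo", RecordFieldType.STRING.getDataType())));
            Map<String, Object> values = new HashMap<>();
            values.put("foo", "foovalue1");
            Record record = new MapRecord(schema, values);

            // "/foo" addresses the direct child of the record root, which is
            // what the LookupRecord "key" property needs to point at here.
            RecordPath.compile("/foo").evaluate(record).getSelectedFields()
                    .forEach(f -> System.out.println("/foo -> " + f.getValue()));
        }
    }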



On 06/11/2017 10:06 PM, Mika Borner wrote:

Hi

Been playing with the new record processors (great so far!). Now I 
have two questions about usage.


1. Let's say I have an UpdateRecord processor with a GrokReader and a 
JsonRecordSetWriter manipulating the record. This works fine. Now I 
want to further process the records with a LookupRecord and/or a 
QueryRecord processor. In my case, setting the Record Reader for 
chained record processors to JsonTreeReader seems to work. Just 
wondering if this is the right thing to do, as there are only examples 
with a single record processor.


2. I haven't figured out yet how the LookupRecord processor 
configuration has to be done. My lookup table contains two columns and 
one row:


foo, bar
foovalue1, barvalue1

The SimpleCsvFileLookupService has "Lookup Key Column" set to "foo" 
and "Lookup Value Column" to "bar".


The LookupRecord processor has "Result RecordPath" set to //bar. The 
property "key" is set to //foo. Here I'm not sure if the property name 
"key" is correct. Any other value causes an error. Unfortunately the 
lookup doesn't work and I also couldn't find any examples in the 
documentation.


Thanks for your help!

Mika


Questions about record processors

2017-06-11 Thread Mika Borner

Hi

Been playing with the new record processors (great so far!). Now I have 
two questions about usage.


1. Let's say I have an UpdateRecord processor with a GrokReader and a 
JsonRecordSetWriter manipulating the record. This works fine. Now I want 
to further process the records with a LookupRecord and/or a QueryRecord 
processor. In my case, setting the Record Reader for chained record 
processors to JsonTreeReader seems to work. Just wondering if this is 
the right thing to do, as there are only examples with a single record 
processor.


2. I haven't figured out yet how the LookupRecord processor 
configuration has to be done. My lookup table contains two columns and 
one row:


foo, bar
foovalue1, barvalue1

The SimpleCsvFileLookupService has "Lookup Key Column" set to "foo" and 
"Lookup Value Column" to "bar".


The LookupRecord processor has "Result RecordPath" set to //bar. The 
property "key" is set to //foo. Here I'm not sure if the property name 
"key" is correct. Any other value causes an error. Unfortunately the 
lookup doesn't work and I also couldn't find any examples in the 
documentation.


Thanks for your help!

Mika


[ANNOUNCE] Apache NiFi CVE-2017-7667 and CVE-2017-7665

2017-06-11 Thread Matt Gilman
The Apache NiFi PMC would like to announce the discovery and resolution of
CVE-2017-7667 and CVE-2017-7665. These issues have been resolved and new
versions of the Apache NiFi project were released in accordance with the
Apache Release Process.

Fixed in Apache NiFi 0.7.4 and 1.3.0

CVE-2017-7667: Apache NiFi XFS issue due to insufficient response headers

Severity: Important

Versions Affected:

Apache NiFi 0.0.1 - 0.7.3
Apache NiFi 1.0.0 - 1.2.0

Description: Apache NiFi needs to establish the response header telling
browsers to only allow framing with the same origin.
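
For context, the header in question is presumably X-Frame-Options:
SAMEORIGIN. A minimal illustration of setting it, sketched here as a plain
servlet filter rather than NiFi's actual patch:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    public class FrameOptionsFilter implements Filter {
        @Override
        public void init(FilterConfig config) throws ServletException {
        }

        @Override
        public void doFilter(ServletRequest req, ServletResponse res,
                FilterChain chain) throws IOException, ServletException {
            // Tell browsers to permit framing only from the same origin.
            ((HttpServletResponse) res).setHeader("X-Frame-Options", "SAMEORIGIN");
            chain.doFilter(req, res);
        }

        @Override
        public void destroy() {
        }
    }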

Mitigation: The fix to set this response header is applied in the Apache
NiFi 0.7.4 and 1.3.0 releases. Users running a prior 0.x or 1.x release
should upgrade to the appropriate release.

Credit: This issue was discovered by Matt Gilman.

CVE-2017-7665: Apache NiFi XSS issue on certain user input components

Severity: Important

Versions Affected:

Apache NiFi 0.0.1 - 0.7.3
Apache NiFi 1.0.0 - 1.2.0

Description: Certain user input components in the Apache NiFi UI guarded
against some forms of XSS, but the protections were insufficient.

Mitigation: The fix for more complete user input sanitization is applied in
the Apache NiFi 0.7.4 and 1.3.0 releases. Users running a prior 0.x or 1.x
release should upgrade to the appropriate release.

Credit: This issue was discovered by Matt Gilman.