Re: Custom Processor Upgrade

2019-08-13 Thread Bimal Mehta
Does that mean I need to recreate the processor? Or is there some
workaround?

The processor gets unpacked and its bundled dependencies go in NAR-INF.
However, when I drag the processor onto the canvas, it comes with a yellow
triangle (and gives the error message I stated above), and properties are
missing as well.
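
[Editor's note] The layout difference discussed in this thread can be checked directly under NiFi's work directory. A minimal sketch, with illustrative paths (not taken from the original messages), reproducing the post-NIFI-5479 layout and locating the bundled dependencies:

```shell
# Sketch of the unpacked-NAR layout change (paths illustrative).
# NiFi 1.6.x placed bundled jars under META-INF/bundled-dependencies;
# after NIFI-5479 (1.7.0+), they are unpacked under NAR-INF/bundled-dependencies.
mkdir -p work/nar/extensions/my-custom-nar-unpacked/NAR-INF/bundled-dependencies
touch work/nar/extensions/my-custom-nar-unpacked/NAR-INF/bundled-dependencies/spring-context.jar
# Locate where the bundled dependencies actually landed:
find work/nar -type d -name bundled-dependencies
```

Against a real install, the same `find` run under `$NIFI_HOME/work/nar` shows which layout the running version produced.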


On Tue, Aug 13, 2019 at 10:47 PM Bryan Bende  wrote:

> I don’t remember all the reasoning behind the change, but it had to do
> with an issue when we upgraded Jetty...
>
> https://issues.apache.org/jira/browse/NIFI-5479
>
> On Tue, Aug 13, 2019 at 9:47 PM Bimal Mehta  wrote:
>
>> Yes it does show as an option.
>> One thing I noticed is that when the NAR is unpacked, the bundled
>> dependencies are inside META-INF in the work folder in NiFi 1.6.0; however,
>> in NiFi 1.9.0 they go inside NAR-INF.
>> Why does this happen?
>> It seems the custom processor that we have uses Spring Boot, and
>> references an applicationContext file which was inside META-INF when it
>> was built. However, I can't see that file anymore in the unpacked NAR.
>>
>> On Tue, Aug 13, 2019 at 8:57 PM Bryan Bende  wrote:
>>
>>> Does that custom processor type show as an option if you try to add a
>>> new processor to the canvas?
>>>
>>> On Tue, Aug 13, 2019 at 4:54 PM Bimal Mehta  wrote:
>>>
 Hi Mike and Bryan,

 One of my custom processors appears as inactive in NiFi with a yellow
 triangle error.
 When I hover over it I see a message saying: 'Missing Processor'
 validated against 'Any Property' is invalid. This is not a valid processor.
 In the log it seems to invoke GhostProcessor.java, which gives the
 above error when restarting NiFi.
 This custom processor sits (with my other processors) in my custom_lib
 folder and I have provided that path in the nifi properties file as

 *nifi.nar.library.directory.custom=/opt/nifi/custom_lib*


 Not sure what I missed.

 Do I need to make an entry for this custom processor somewhere?


 On Thu, Aug 8, 2019 at 9:14 AM Bimal Mehta  wrote:

> Thanks Mike and Bryan.
> Yes, it seems my template was still referring to the old version.
> I will have it updated now and will reimport.
> Also the version of NiFi we are using is the one that comes with CDF.
> I am not sure if CDF supports 1.9.2 yet or not. I will reach out to
> Cloudera and see if we can get it upgraded.
>
>
>
> On Thu, Aug 8, 2019, 8:51 AM Bryan Bende  wrote:
>
>> What is in the template for the bundle coordinates of your processor?
>> and does that match the coordinates of the NAR that is deployed?
>>
>> Example:
>>
>> <bundle>
>>   <group>org.apache.nifi</group>
>>   <artifact>nifi-update-attribute-nar</artifact>
>>   <version>1.10.0-SNAPSHOT</version>
>> </bundle>
>>
>> If you made a new version of your NAR, say 2.0.0 and your template
>> references 1.0.0, then you'll need to update your template.
>>
>> On Wed, Aug 7, 2019 at 10:05 PM Mike Thomsen 
>> wrote:
>> >
>> > If it's happening immediately upon trying to import the template, I
>> > believe that's the error message saying that the 1.9 instance cannot find
>> > the NAR file which provided the processor. Also, if you're referring to
>> > 1.9.0 and not 1.9.2, you're going to want to upgrade to the latter because
>> > there are a few critical bugs fixed in 1.9.2.
>> >
>> > On Wed, Aug 7, 2019 at 9:19 PM Bimal Mehta 
>> wrote:
>> >>
>> >> Thanks Bryan.
>> >> My custom processors are part of a template. However, when I try to
>> >> import my template in NiFi 1.9, I get an error message saying
>> >> PutFeedMetadata is not known to this NiFi instance. I did update
>> >> all the dependencies to NiFi 1.9 and even the plugins. We are using a
>> >> Cloudera distributed version of NiFi 1.9.
>> >> Any idea why this is happening?
>> >>
>> >> Thanks
>> >>
>> >>
>> >>
>> >> On Wed, Aug 7, 2019 at 3:46 PM Bryan Bende 
>> wrote:
>> >>>
>> >>> Hello,
>> >>>
>> >>> Most likely your processor built against 1.6 would run fine in 1.9,
>> >>> but to make sure you just need to update any nifi dependencies in your
>> >>> poms to 1.9.2.
>> >>>
>> >>> If you created your project from the archetype and didn't change
>> >>> anything, then this should just be changing the parent in the root pom
>> >>> to the new version of nifi-nar-bundles.
>> >>>
>> >>> If you set it up yourself, then anywhere you depend on nifi-api you
>> >>> need to change the version.
>> >>>
>> >>> -Bryan
>> >>>
>> >>> On Wed, Aug 7, 2019 at 3:18 PM Bimal Mehta 
>> wrote:
>> >>> >
>> >>> > Hi,
>> >>> >
>> >>> > If we have a custom processor that was created with NiFi 1.6,
>> what are the steps we need to follow to make it work in 1.9?
>> >>> > Is there some sort of steps that explain the jar and pom updates
>> we need to do for making it work in 1.9?
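
[Editor's note] Bryan's point about bundle coordinates can be checked without opening the UI: a template is plain XML, so the versions it references can be grepped and compared against the deployed NARs. A sketch — the template content below is fabricated for illustration:

```shell
# Create a toy template fragment containing a bundle element (illustrative):
cat > flow-template.xml <<'EOF'
<bundle>
  <group>com.example</group>
  <artifact>my-custom-nar</artifact>
  <version>1.0.0</version>
</bundle>
EOF
# List every NAR version the template references; each must match a deployed NAR:
grep -o '<version>[^<]*</version>' flow-template.xml
```

If the template says 1.0.0 but the deployed NAR is 2.0.0, the template needs updating, per Bryan's advice above.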


RE: Anti-Virus Scanning

2019-08-13 Thread Jason Csencsits
Joe,
Thank you for the information. Is this documented anywhere, as I have a client
looking for it from Apache?

Thank you,
:::
Jason Csencsits
Manager of Technical Operations
Technically Creative Inc.
Simplifying IT Solutions

Office: 845.725.7883
jcsencs...@technicallycreative.com
www.TechnicallyCreative.com

::

From: Joe Witt 
Sent: Tuesday, August 13, 2019 2:27 PM
To: users@nifi.apache.org
Subject: Re: Anti-Virus Scanning

Jason

The work dir gets created at startup and possibly as new NARs are loaded.  I
think you'd be OK to scan this.

The flowfile, content, and provenance repository directories, as configured,
should be skipped. The logs dir should be skipped.  The state directory should
be skipped.  All else I believe would be fair game.

Thanks

On Tue, Aug 13, 2019 at 2:24 PM Jason Csencsits
<jcsencs...@technicallycreative.com> wrote:
What are the recommended anti-virus scanning exclusions from active scans? I
cannot find anything in the documents. I need to make sure my Linux Red Hat
scans do not compromise the flow files or anything else.

Thank you,
:::
Jason Csencsits
Manager of Technical Operations
Technically Creative Inc.
Simplifying IT Solutions

Office: 845.725.7883
jcsencs...@technicallycreative.com
www.TechnicallyCreative.com

::
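
[Editor's note] Joe's directory guidance above can be turned into a concrete exclusion list. A sketch, assuming a default standalone layout under /opt/nifi with repositories in their default locations — the real paths must be read from nifi.properties (nifi.flowfile.repository.directory, nifi.content.repository.directory.default, nifi.provenance.repository.directory.default):

```shell
# Hypothetical on-access scan exclusions for a default /opt/nifi layout.
# Verify each path against nifi.properties before handing this to the AV tool.
cat > nifi-av-exclusions.txt <<'EOF'
/opt/nifi/flowfile_repository
/opt/nifi/content_repository
/opt/nifi/provenance_repository
/opt/nifi/logs
/opt/nifi/state
EOF
wc -l < nifi-av-exclusions.txt
```

Per Joe's note, the work directory and everything else can stay in scope for scanning.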




Re: Data Ingestion using NiFi

2019-08-13 Thread Mike Thomsen
One of the easiest ways to trigger events in NiFi is to have a message
queue processor set up and listening to a queue where you post an event to
trigger the flow.
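
[Editor's note] Concretely, Mike's suggestion amounts to putting a ConsumeJMS- or ConsumeKafka-style processor at the head of the flow and having the external tool (CloudBees/ElectricFlow, in Bimal's case) publish a small trigger message. A sketch of such a trigger payload — the field names are hypothetical:

```shell
# A trigger message the DevOps tool could publish to the queue; the consuming
# processor turns it into a flowfile whose attributes drive the rest of the flow.
cat > trigger-event.json <<'EOF'
{"job": "ingest", "table": "customers", "requested_by": "cloudbees"}
EOF
# Downstream, the flow would read e.g. the table name out of the message:
grep -o '"table": "[^"]*"' trigger-event.json
```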

On Tue, Aug 13, 2019 at 11:45 AM Bimal Mehta  wrote:

> Thanks Mike.
> ExecuteSQL looks good and am trying it.
>
> Also I wanted to understand how we can control triggering of NiFi jobs
> from DevOps tools like CloudBees/ElectricFlow?
>
> On Tue, Aug 13, 2019 at 7:35 AM Mike Thomsen 
> wrote:
>
>> Bimal,
>>
>> 1. Take a look at ExecuteSQLRecord and see if that works for you. I don't
>> use SQL databases that much, but it works like a charm for me and others
>> for querying and getting an inferred avro schema based on the schema of the
>> database table (you can massage it into another format with ConvertRecord).
>> 2. Take a look at QueryRecord and PartitionRecord with them configured to
>> use Avro readers and writers.
>>
>> Mike
>>
>> On Tue, Aug 13, 2019 at 12:25 AM Bimal Mehta  wrote:
>>
>>> Hi NiFi users,
>>>
>>> We had been using the Kylo data ingest template to read the data from
>>> our Oracle and DB2 databases and move it into HDFS and Hive.
>>> The Kylo data ingest template also provided some features to validate,
>>> profile, and split the data based on validation rules. We also built some
>>> custom processors and added them to the template.
>>> We recently migrated to NiFi 1.9.0 (CDF), and a lot of Kylo processors
>>> don't work there. We were able to make our custom processors work in 1.9.0,
>>> but the Kylo NAR files don't work. I don't know if any workaround exists
>>> for that.
>>>
>>> However, given that the Kylo project is dead, I don't want to depend on
>>> those Kylo NAR files and processors; what I wanted to understand is how
>>> I can replicate that functionality using the standard processors available
>>> in NiFi.
>>>
>>> Essentially, are there processors that allow me to do the below:
>>> 1. Read data from a database - I know QueryDatabaseTable. Any other? How
>>> do I make it parameterized so that I don't need to create one flow per
>>> table? How can we pass the table name while running the job?
>>> 2. Partition and convert to Avro - I know SplitAvro, but does it
>>> partition also, and how do I pass the partition parameters?
>>> 3. Write data to HDFS and Hive - I know PutHDFS works for writing to
>>> HDFS, but should I use PutSQL for Hive by converting the Avro in step 2 to
>>> SQL? Or is there a better option? Does this support upserts as well?
>>> 4. Apply validation rules to the data before it is written into Hive,
>>> like calling a custom Spark job that will execute the validation rules and
>>> split the data. Any processor that can help achieve this?
>>>
>>> I know a few users in this group had used kylo on top of NiFi. It will
>>> be great if some of you can provide your perspective as well.
>>>
>>> Thanks in advance.
>>>
>>> Bimal Mehta
>>>
>>
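
[Editor's note] On question 1 above (parameterizing the table name), the usual pattern is to carry the table name as a flowfile attribute — set by GenerateFlowFile or extracted from an incoming queue message — and reference it in the ExecuteSQL/ExecuteSQLRecord query with Expression Language, e.g. `SELECT * FROM ${db.table.name}`. The attribute name here is hypothetical. A sketch of the substitution NiFi performs:

```shell
# Simulate NiFi Expression Language substituting a flowfile attribute
# (the attribute name "db.table.name" is hypothetical) into an ExecuteSQL query.
db_table_name="customers"                        # the flowfile attribute value
query_template='SELECT * FROM ${db.table.name}'  # the configured SQL query
echo "$query_template" | sed "s/\${db\.table\.name}/$db_table_name/"
```

One flow can then serve every table: each incoming flowfile carries its own table name.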


Re: My nifi no more serve admin interface

2019-08-13 Thread Nicolas Delsaux

Oh, sorry, I forgot to mention I use the NiFi Docker image, with this
configuration:

services:
  nifi-runner:
    hostname: nifi-psh.adeo.com
    image: apache/nifi:1.9.2
    ports:
      - "38080:8443"
      - "5000:8000"
    volumes:
      - ${project.basedir}/target/docker-compose/includes/nifi/node/conf:/opt/nifi/nifi-current/conf
      - ${project.basedir}/target/docker-compose/includes/nifi/node/cacerts.jks:/opt/certs/cacerts.jks
      - ${project.basedir}/target/docker-compose/includes/nifi/node/https_certificates.pkcs:/opt/certs/https_certificates.pkcs

And port 8443 is the standard NiFi HTTPS port, I guess (the port 8000 is the
standard debug one).

On 13/08/2019 at 16:10, Pierre Villard wrote:

Might be a dumb question but I'm wondering why you're trying with port
38080? Did you change the configuration to use that specific port with
a secured instance?

Pierre

On Tue, Aug 13, 2019 at 16:00, Nicolas Delsaux <nicolas.dels...@gmx.fr> wrote:

To go a little further, a test with openssl s_client gives the
following

nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
$ openssl s_client -host localhost -port 38080
CONNECTED(0164)
416:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake
failure:ssl\record\rec_layer_s3.c:1399:SSL alert number 40
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 176 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
 Protocol  : TLSv1.2
 Cipher    : 
 Session-ID:
 Session-ID-ctx:
 Master-Key:
 PSK identity: None
 PSK identity hint: None
 SRP username: None
 Start Time: 1565704262
 Timeout   : 7200 (sec)
 Verify return code: 0 (ok)
 Extended master secret: no
---


Which is weird considering NiFi outputs in its startup log the lines

nifi-runner_1  | 2019-08-13 13:37:52,315 INFO [main]
o.e.jetty.server.handler.ContextHandler Started

o.e.j.w.WebAppContext@7cb81ae{nifi-error,/,file:///opt/nifi/nifi-current/work/jetty/nifi-web-error-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-error-1.9.2.war}
nifi-runner_1  | 2019-08-13 13:37:52,490 INFO [main]
o.e.jetty.util.ssl.SslContextFactory
x509=X509@3d94d7f3(nifi-psh.adeo.com (adeo ca),h=[nifi-psh.adeo.com],w=[]) for

SslContextFactory@da1abd6[provider=null,keyStore=file:///opt/certs/https_certificates.pkcs,trustStore=file:///opt/certs/cacerts.jks]
nifi-runner_1  | 2019-08-13 13:37:52,510 INFO [main]
o.eclipse.jetty.server.AbstractConnector Started
ServerConnector@2066f0d3{SSL,[ssl, http/1.1]}{0.0.0.0:8443}


which seems to indicate Jetty is able to listen for HTTPS connections on
port 8443 using the certificates described in the SslContextFactory. No?

On 13/08/2019 at 15:40, Nicolas Delsaux wrote:
> I'm currently trying to implement LDAP user group authorization in NiFi.
>
> For that, I've deployed the NiFi Docker image with configuration files
> containing the required config elements (an LDAP identity provider, an
> LDAP user group provider).
>
> I've also configured HTTPS with a keystore/truststore that are injected
> into the Docker container through volumes.
>
> Once everything was configured, I took the time to do a debug session to
> make sure the FileAccessPolicyProvider correctly loads my user from
> LDAP, and it works OK.
>
> Unfortunately, now, when I try to load the NiFi admin interface, I get a
> strange HTTP response containing only the string "   � P".
>
> In other words,
>
>
> nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
> $ curl -v -H "Host: nifi-psh.adeo.com
" http://localhost:38080/ --output -
> *   Trying ::1...
> * TCP_NODELAY set
> * Connected to localhost (::1) port 38080 (#0)
> > GET / HTTP/1.1
> > Host: nifi-psh.adeo.com 
> > User-Agent: curl/7.55.1
> > Accept: */*
> >
> §♥♥ ☻☻P* Connection #0 to host localhost left intact
>
>
> http does not work (which i expects, since I've configured
> authentication/authorization
>
> nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
> $ curl -v -H "Host: nifi-psh.adeo.com
" https://localhost:38080/
> --output -
> *   Trying ::1...
> * TCP_NODELAY set
> * Connected to localhost (::1) port 38080 (#0)
> * schannel: SSL/TLS connection with localhost port 38080 (step 1/3)
> * schannel: checking server certificate revocation
> * schannel: sending initial 

Re: My nifi no more serve admin interface

2019-08-13 Thread Pierre Villard
Might be a dumb question but I'm wondering why you're trying with port
38080? Did you change the configuration to use that specific port with a
secured instance?

Pierre

On Tue, Aug 13, 2019 at 4:00 PM, Nicolas Delsaux wrote:

> To go a little further, a test with openssl s_client gives the following
>
> nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
> $ openssl s_client -host localhost -port 38080
> CONNECTED(0164)
> 416:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake
> failure:ssl\record\rec_layer_s3.c:1399:SSL alert number 40
> ---
> no peer certificate available
> ---
> No client certificate CA names sent
> ---
> SSL handshake has read 7 bytes and written 176 bytes
> Verification: OK
> ---
> New, (NONE), Cipher is (NONE)
> Secure Renegotiation IS NOT supported
> Compression: NONE
> Expansion: NONE
> No ALPN negotiated
> SSL-Session:
>  Protocol  : TLSv1.2
>  Cipher: 
>  Session-ID:
>  Session-ID-ctx:
>  Master-Key:
>  PSK identity: None
>  PSK identity hint: None
>  SRP username: None
>  Start Time: 1565704262
>  Timeout   : 7200 (sec)
>  Verify return code: 0 (ok)
>  Extended master secret: no
> ---
>
>
> Which is weird, considering NiFi outputs the following lines in its startup log:
>
> nifi-runner_1  | 2019-08-13 13:37:52,315 INFO [main]
> o.e.jetty.server.handler.ContextHandler Started
> o.e.j.w.WebAppContext@7cb81ae
> {nifi-error,/,file:///opt/nifi/nifi-current/work/jetty/nifi-web-error-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-error-1.9.2.war}
> nifi-runner_1  | 2019-08-13 13:37:52,490 INFO [main]
> o.e.jetty.util.ssl.SslContextFactory
> x509=X509@3d94d7f3(nifi-psh.adeo.com (adeo
> ca),h=[nifi-psh.adeo.com],w=[]) for
> SslContextFactory@da1abd6
> [provider=null,keyStore=file:///opt/certs/https_certificates.pkcs,trustStore=file:///opt/certs/cacerts.jks]
> nifi-runner_1  | 2019-08-13 13:37:52,510 INFO [main]
> o.eclipse.jetty.server.AbstractConnector Started
> ServerConnector@2066f0d3{SSL,[ssl, http/1.1]}{0.0.0.0:8443}
>
>
> which seems to indicate Jetty is listening for HTTPS connections on
> port 8443 using the certificates described in the SslContextFactory. No?
>
> On 13/08/2019 at 15:40, Nicolas Delsaux wrote:
> > I'm currently trying to implement ldap user group authorization in nifi.
> >
> > For that, I've deployed nifi docker image with configuration files
> > containing required config elements (a ldap identity provider, a ldap
> > user group provider).
> >
> > I've also configured https with a keystore/truststore that are injected
> > into docker container through volumes.
> >
> > Once all is configured, I've taken the time to do a debug session to
> > make sure the FileAccessPolicyProvider correctly loads my user from
> > LDAP, and it works OK.
> >
> > Unfortunately, now, when I try to load the NiFi admin interface, I get a
> > strange HTTP response containing only the string "   �  P".
> >
> > In other words,
> >
> >
> > nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
> > $ curl -v -H "Host: nifi-psh.adeo.com" http://localhost:38080/ --output
> -
> > *   Trying ::1...
> > * TCP_NODELAY set
> > * Connected to localhost (::1) port 38080 (#0)
> > > GET / HTTP/1.1
> > > Host: nifi-psh.adeo.com
> > > User-Agent: curl/7.55.1
> > > Accept: */*
> > >
> > §♥♥ ☻☻P* Connection #0 to host localhost left intact
> >
> >
> > HTTP does not work (which I expect, since I've configured
> > authentication/authorization).
> >
> > nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
> > $ curl -v -H "Host: nifi-psh.adeo.com" https://localhost:38080/
> > --output -
> > *   Trying ::1...
> > * TCP_NODELAY set
> > * Connected to localhost (::1) port 38080 (#0)
> > * schannel: SSL/TLS connection with localhost port 38080 (step 1/3)
> > * schannel: checking server certificate revocation
> > * schannel: sending initial handshake data: sending 174 bytes...
> > * schannel: sent initial handshake data: sent 174 bytes
> > * schannel: SSL/TLS connection with localhost port 38080 (step 2/3)
> > * schannel: encrypted data got 7
> > * schannel: encrypted data buffer: offset 7 length 4096
> > * schannel: next InitializeSecurityContext failed: SEC_E_ILLEGAL_MESSAGE
> > (0x80090326) - This error usually occurs when a fatal SSL/TLS alert is
> > received (e.g. handshake failed). More detail may be available in the
> > Windows System event log.
> > * Closing connection 0
> > * schannel: shutting down SSL/TLS connection with localhost port 38080
> > * schannel: clear security context handle
> > curl: (35) schannel: next InitializeSecurityContext failed:
> > SEC_E_ILLEGAL_MESSAGE (0x80090326) - This error usually occurs when a
> > fatal SSL/TLS alert is received (e.g. handshake failed). More detail may
> > be available in the Windows System event log.
> >
> > But neither does HTTPS.
> >
> > I guess there is something wrong with the certificate, but the log doesn't
> > 

Re: My nifi no more serve admin interface

2019-08-13 Thread Nicolas Delsaux

To go a little further, a test with openssl s_client gives the following

nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
$ openssl s_client -host localhost -port 38080
CONNECTED(0164)
416:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake
failure:ssl\record\rec_layer_s3.c:1399:SSL alert number 40
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 176 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : 
    Session-ID:
    Session-ID-ctx:
    Master-Key:
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1565704262
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: no
---


Which is weird, considering NiFi outputs the following lines in its startup log:

nifi-runner_1  | 2019-08-13 13:37:52,315 INFO [main]
o.e.jetty.server.handler.ContextHandler Started
o.e.j.w.WebAppContext@7cb81ae{nifi-error,/,file:///opt/nifi/nifi-current/work/jetty/nifi-web-error-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-error-1.9.2.war}
nifi-runner_1  | 2019-08-13 13:37:52,490 INFO [main]
o.e.jetty.util.ssl.SslContextFactory
x509=X509@3d94d7f3(nifi-psh.adeo.com (adeo
ca),h=[nifi-psh.adeo.com],w=[]) for
SslContextFactory@da1abd6[provider=null,keyStore=file:///opt/certs/https_certificates.pkcs,trustStore=file:///opt/certs/cacerts.jks]
nifi-runner_1  | 2019-08-13 13:37:52,510 INFO [main]
o.eclipse.jetty.server.AbstractConnector Started
ServerConnector@2066f0d3{SSL,[ssl, http/1.1]}{0.0.0.0:8443}


which seems to indicate Jetty is listening for HTTPS connections on
port 8443 using the certificates described in the SslContextFactory. No?

On 13/08/2019 at 15:40, Nicolas Delsaux wrote:

I'm currently trying to implement LDAP user group authorization in NiFi.

For that, I've deployed the NiFi Docker image with configuration files
containing the required config elements (an LDAP identity provider, an
LDAP user group provider).

I've also configured HTTPS with a keystore/truststore that are injected
into the Docker container through volumes.

Once all is configured, I've taken the time to do a debug session to
make sure the FileAccessPolicyProvider correctly loads my user from
LDAP, and it works OK.

Unfortunately, now, when I try to load the NiFi admin interface, I get a
strange HTTP response containing only the string "�P".

In other words,


nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
$ curl -v -H "Host: nifi-psh.adeo.com" http://localhost:38080/ --output -
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 38080 (#0)
> GET / HTTP/1.1
> Host: nifi-psh.adeo.com
> User-Agent: curl/7.55.1
> Accept: */*
>
§♥♥ ☻☻P* Connection #0 to host localhost left intact


HTTP does not work (which I expect, since I've configured
authentication/authorization).

nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
$ curl -v -H "Host: nifi-psh.adeo.com" https://localhost:38080/
--output -
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 38080 (#0)
* schannel: SSL/TLS connection with localhost port 38080 (step 1/3)
* schannel: checking server certificate revocation
* schannel: sending initial handshake data: sending 174 bytes...
* schannel: sent initial handshake data: sent 174 bytes
* schannel: SSL/TLS connection with localhost port 38080 (step 2/3)
* schannel: encrypted data got 7
* schannel: encrypted data buffer: offset 7 length 4096
* schannel: next InitializeSecurityContext failed: SEC_E_ILLEGAL_MESSAGE
(0x80090326) - This error usually occurs when a fatal SSL/TLS alert is
received (e.g. handshake failed). More detail may be available in the
Windows System event log.
* Closing connection 0
* schannel: shutting down SSL/TLS connection with localhost port 38080
* schannel: clear security context handle
curl: (35) schannel: next InitializeSecurityContext failed:
SEC_E_ILLEGAL_MESSAGE (0x80090326) - This error usually occurs when a
fatal SSL/TLS alert is received (e.g. handshake failed). More detail may
be available in the Windows System event log.

But neither does HTTPS.

I guess there is something wrong with the certificate, but the log doesn't
seem to indicate any certificate misconfiguration.


What have I done wrong?
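
Given Pierre's question elsewhere in this thread about port 38080: the Jetty log above shows the connector bound to 0.0.0.0:8443 inside the container, so curl against host port 38080 can only reach it if the container publishes that mapping. A hypothetical docker-compose fragment (the actual compose file is not shown in this thread; service name and image tag are assumptions) would look like:

```yaml
# Hypothetical mapping; only the ports entry matters for this thread.
services:
  nifi-runner:
    image: apache/nifi:1.9.2
    ports:
      - "38080:8443"   # host 38080 -> container HTTPS connector 8443
```

If the mapping instead points at a non-TLS port, curl would see exactly the kind of raw TLS alert bytes reported above.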




Re: unable to post updates api with user certificate.

2019-08-13 Thread Bryan Bende
Looks like you are using the CompositeUserGroupProvider, which lets you
combine multiple user group providers.

The error message is saying that the same user identity exists in more than
one of the user group providers, which is not allowed.

The identity in the message looks like an LDAP user, so make sure you
didn't also define that same user in the file user group provider.
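
The rule Bryan describes can be modeled in a few lines (a hypothetical Python sketch of the lookup logic, not NiFi's actual implementation): a composite lookup over several providers must fail as soon as more than one provider claims the same identity.

```python
# Hypothetical model of the composite lookup rule described above: if more
# than one configured provider claims the same user identity, the lookup
# fails (NiFi raises an IllegalStateException at this point).

def get_user_and_groups(identity, providers):
    """providers: list of dicts mapping user identity -> set of groups."""
    claims = [p for p in providers if identity in p]
    if len(claims) > 1:
        raise ValueError(
            "Multiple UserGroupProviders claim to provide user " + identity)
    return claims[0][identity] if claims else None

# Invented example: the same DN defined in an LDAP and a file provider.
ldap_provider = {"uid=jdoe,cn=users,dc=example": {"admins"}}
file_provider = {"uid=jdoe,cn=users,dc=example": {"operators"}}

try:
    get_user_and_groups("uid=jdoe,cn=users,dc=example",
                        [ldap_provider, file_provider])
except ValueError as err:
    print(err)
```

Removing the duplicate entry from one of the providers, as Bryan suggests, makes the lookup unambiguous.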

On Tue, Aug 13, 2019 at 3:08 AM Felipe Garcia 
wrote:

>
> Issue #1 - You should be able to specify an LDAP user as your initial
> admin, what is the error you get?
>
> Keep in mind it is case and white-space sensitive, and also depends on
> whether you are returning full DN or short name, it must match exactly.
>
> error
> Multiple UserGroupProviders claim to provide user
> uid=XX,cn=users,cn=accounts,dc=
>
> logfile
>
> 2019-08-13 16:49:40,976 INFO [NiFi Web Server-23]
> o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException:
> Multiple UserGroupProviders claim to provide user
> uid=612442779,cn=users,cn=accounts,dc=ace. Returning Conflict response.
>
> 2019-08-13 16:49:40,977 DEBUG [NiFi Web Server-23]
> o.a.n.w.a.c.IllegalStateExceptionMapper
>
> java.lang.IllegalStateException: Multiple UserGroupProviders claim to
> provide user uid=XX,cn=users,cn=accounts,dc=
>
> at
> org.apache.nifi.authorization.CompositeConfigurableUserGroupProvider.getUserAndGroups(CompositeConfigurableUserGroupProvider.java:195)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
>
>
> On Mon, Aug 5, 2019 at 10:38 PM Bryan Bende  wrote:
>
>> Hello,
>>
>> Issue #1 - You should be able to specify an LDAP user as your initial
>> admin, what is the error you get?
>>
>> Keep in mind it is case and white-space sensitive, and also depends on
>> whether you are returning full DN or short name, it must match exactly.
>>
>> Issue #2 - Since you are able to query the API with the client cert, it
>> seems like your cert is setup correctly.
>>
>> Is there an error in nifi-app.log or nifi-user.log when you try to modify
>> the policy? Can you modify policies through the UI without issues?
>>
>> Tokens are only issued for login methods that are based on username and
>> password, so it is expected behavior that you could not issue one for a
>> cert user.
>>
>> Thanks,
>>
>> Bryan
>>
>>
>> On Sun, Aug 4, 2019 at 8:30 PM Felipe Garcia 
>> wrote:
>>
>>> Setup
>>>
>>>
>>> a cluster of a few nifi boxes
>>>
>>>
>>> setup to authenticate with LDAP
>>>
>>> users and groups in LDAP
>>>
>>>
>>> Issue 1: unable to specify an LDAP user as Initial User
>>>
>>>
>>> I have only been able to set up the cluster with a client certificate
>>> user.
>>>
>>>
>>> Issue 2: I am unable to use the API with the initial certificate user to
>>> add an LDAP group.
>>>
>>>
>>> I exported the cert and key into a usable format for curl
>>>
>>>
>>> # openssl pkcs12 -in /opt/nifi-certs/CN\=admin_OU\=NIFI.p12  -out
>>> /opt/nifi-certs/CN\=admin_OU\=NIFI.key -nocerts -nodes
>>>
>>> # openssl pkcs12 -in /opt/nifi-certs/CN\=admin_OU\=NIFI.p12
>>> -out /opt/nifi-certs/CN\=admin_OU\=NIFI.pem -clcerts -nokeys -passin
>>> pass:changeme
>>>
>>>
>>> I am able to query the API
>>>
>>>
>>> curl -k -X GET
>>> https://nifi01-sst140.dev.cloud.ace:9443/nifi-api/policies/read/flow --cert
>>> /opt/nifi-certs/CN=admin_OU=NIFI.pem --key
>>> /opt/nifi-certs/CN=admin_OU=NIFI.key --compressed
>>>
>>>
>>>
>>> But I am unable to change or add via the API
>>>
>>>
>>>  curl -k -X PUT -H 'Content-Type: application/json'
>>> https://nifi01-sst140.dev.cloud.ace:9443/nifi-api/policies/f99bccd1-a30e-3e4a-98a2-dbc708edc67f
>>>  --cert
>>> /opt/nifi-certs/CN=admin_OU=NIFI.pem --key
>>> /opt/nifi-certs/CN=admin_OU=NIFI.key -d @/tmp/newpolicy.json
>>>
>>> Unable to save Authorizations
>>>
>>>
>>>
>>> I cannot create a token for a cert user
>>>
>>>
>>> curl -k -X POST '
>>> https://nifi01-sst140.dev.cloud.ace:9443/nifi-api/access/token' -H
>>> 'Accept-Encoding: gzip, deflate, br' -H 'Content-Type:
>>> application/x-www-form-urlencoded; charset=UTF-8' -H 'Accept: */*' --cert
>>> /opt/nifi-certs/CN\=admin_OU\=NIFI.pem --key
>>> /opt/nifi-certs/CN\=admin_OU\=NIFI.key --compressed
>>>
>>> The username and password must be specified.
>>>
>>>


Re: Using Nifi OIDC authentication through a proxy?

2019-08-13 Thread Pat White
Great! Thank you very much for the info and references Elemir, and thanks
to Koji for the ReverseProxy experiment.

patw

On Mon, Aug 12, 2019 at 9:46 PM Elemir Stevko 
wrote:

> Hi Pat,
>
> Koji Kawamura has a set of examples on how to configure NiFi behind
> reverse proxy that I have successfully used for my setup:
> https://github.com/ijokarumawak/nifi-reverseproxy
>
> Also, check the following thread:
>
> http://apache-nifi-users-list.2361937.n4.nabble.com/Invalid-CORS-request-error-on-NiFi-v1-8-0-and-1-9-0-behind-nginx-td7030.html
>
> Best regards,
> Elemir
>
>
>
> From: Pat White
> Reply to: "users@nifi.apache.org"
> Date: Tuesday, 13 August 2019 at 7:16 am
> To: "users@nifi.apache.org"
> Subject: Using Nifi OIDC authentication through a proxy?
>
>
>
> Hi Folks,
>
>
>
> Wondered if anyone has been able to configure NiFi to use an OIDC identity
> provider through a reverse proxy?
>
>
>
> I've been able to configure OIDC and successfully work with an
> authentication provider directly; however, I need to do this through a
> proxy, and I am not able to get the callback redirects to survive.
>
>
>
> patw
>
>
>
>
>


My nifi no more serve admin interface

2019-08-13 Thread Nicolas Delsaux

I'm currently trying to implement LDAP user group authorization in NiFi.

For that, I've deployed the NiFi Docker image with configuration files
containing the required config elements (an LDAP identity provider, an
LDAP user group provider).

I've also configured HTTPS with a keystore/truststore that are injected
into the Docker container through volumes.

Once all is configured, I've taken the time to do a debug session to
make sure the FileAccessPolicyProvider correctly loads my user from
LDAP, and it works OK.

Unfortunately, now, when I try to load the NiFi admin interface, I get a
strange HTTP response containing only the string "�P".

In other words,


nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
$ curl -v -H "Host: nifi-psh.adeo.com" http://localhost:38080/ --output -
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 38080 (#0)
> GET / HTTP/1.1
> Host: nifi-psh.adeo.com
> User-Agent: curl/7.55.1
> Accept: */*
>
§♥♥ ☻☻P* Connection #0 to host localhost left intact


HTTP does not work (which I expect, since I've configured
authentication/authorization).

nicolas-delsaux@NICOLASDELSAUX C:\Users\nicolas-delsaux
$ curl -v -H "Host: nifi-psh.adeo.com" https://localhost:38080/ --output -
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 38080 (#0)
* schannel: SSL/TLS connection with localhost port 38080 (step 1/3)
* schannel: checking server certificate revocation
* schannel: sending initial handshake data: sending 174 bytes...
* schannel: sent initial handshake data: sent 174 bytes
* schannel: SSL/TLS connection with localhost port 38080 (step 2/3)
* schannel: encrypted data got 7
* schannel: encrypted data buffer: offset 7 length 4096
* schannel: next InitializeSecurityContext failed: SEC_E_ILLEGAL_MESSAGE
(0x80090326) - This error usually occurs when a fatal SSL/TLS alert is
received (e.g. handshake failed). More detail may be available in the
Windows System event log.
* Closing connection 0
* schannel: shutting down SSL/TLS connection with localhost port 38080
* schannel: clear security context handle
curl: (35) schannel: next InitializeSecurityContext failed:
SEC_E_ILLEGAL_MESSAGE (0x80090326) - This error usually occurs when a
fatal SSL/TLS alert is received (e.g. handshake failed). More detail may
be available in the Windows System event log.

But neither does HTTPS.

I guess there is something wrong with the certificate, but the log doesn't
seem to indicate any certificate misconfiguration.


What have I done wrong?




Re: Data Ingestion using NiFi

2019-08-13 Thread Mike Thomsen
Bimal,

1. Take a look at ExecuteSQLRecord and see if that works for you. I don't
use SQL databases that much, but it works like a charm for me and others
for querying and getting an inferred Avro schema based on the schema of the
database table (you can massage it into another format with ConvertRecord).
2. Take a look at QueryRecord and PartitionRecord with them configured to
use Avro readers and writers.

Mike

On Tue, Aug 13, 2019 at 12:25 AM Bimal Mehta  wrote:

> Hi NiFi users,
>
> We had been using the Kylo data ingest template to read the data from our
> Oracle and DB2 databases and move it into HDFS and Hive.
> The Kylo data ingest template also provided some features to validate,
> profile, and split the data based on validation rules. We also built some
> custom processors and added them to the template.
> We recently migrated to NiFi 1.9.0 (CDF), and a lot of Kylo processors
> don't work there. We were able to make our custom processors work in 1.9.0
> but the Kylo NAR files don't work. I don't know if any workaround exists
> for that.
>
> However, given that the Kylo project is dead, I don't want to depend on
> those Kylo NAR files and processors. What I wanted to understand is how I
> can replicate that functionality using the standard processors available in
> NiFi.
>
> Essentially, are there processors that allow me to do the below?
> 1. Read data from a database - I know QueryDatabaseTable. Any other? How do
> I make it parameterized so that I don't need to create one flow per
> table? How can we pass the table name while running the job?
> 2. Partition and convert to Avro - I know SplitAvro, but does it partition
> also, and how do I pass the partition parameters?
> 3. Write data to HDFS and Hive - I know PutHDFS works for writing to HDFS,
> but should I use PutSQL for Hive by converting the Avro in step 2 to SQL?
> Or is there a better option? Does this support upserts as well?
> 4. Apply validation rules to the data before it is written into Hive, like
> calling a custom Spark job that will execute the validation rules and split
> the data. Any processor that can help achieve this?
>
> I know a few users in this group had used kylo on top of NiFi. It will be
> great if some of you can provide your perspective as well.
>
> Thanks in advance.
>
> Bimal Mehta
>
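
On Mike's point 2: QueryRecord evaluates one SQL statement per user-defined property, with the incoming records addressed as the table FLOWFILE, and each property name becomes an output relationship. A sketch of routing records by a validation rule, assuming a hypothetical `status` field in the records, might be:

```sql
-- Value of a hypothetical dynamic property named "valid" on QueryRecord;
-- records matching the WHERE clause are routed to the "valid" relationship.
SELECT *
FROM FLOWFILE
WHERE status = 'valid'
```

A second property with the negated condition would give the split-on-validation behavior the Kylo template provided.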


Re: data metrics / data monitoring

2019-08-13 Thread Peter Piehler

Hi,

@Edward, @Pierre
thx for your replies. I will read the articles and hope I find some
inspiration.

Best,
Peter

On 13.08.19 09:11, Pierre Villard wrote:
> Hi Peter,
>
> In addition to Edward's answer, you may be interested by the below posts:
> https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/
> https://pierrevillard.com/2018/02/07/fod-paris-jan-18-nifi-registry-and-workflow-monitoring-with-a-use-case/
> https://pierrevillard.com/2018/08/29/monitoring-driven-development-with-nifi-1-7/
>
> Hope this helps,
> Pierre
>
> On Tue, Aug 13, 2019 at 12:36 AM, Edward Armes wrote:
>
>> Hi Peter,
>>
>> I think this depends on where this data is stored. If this data is
>> available as metrics recorded by NiFi, then a reporting task would be
>> the best way forward. However, if this is data that is recorded in your
>> FlowFiles as part of your flow, then I think you're looking at either
>> collecting it in a key-value store of sorts and exposing it via a web
>> server pattern, or forwarding the metrics contained in the FlowFile via
>> a message bus, database, or FlowFile receiver of some description.
>>
>> As for displaying your metrics, there are a lot of options out there
>> that can receive and process data in various forms, and it really
>> depends on what is the best fit for your organisation.
>>
>> Personally, I would work out how and with what you display the data,
>> and from there use that to influence how you export it out of NiFi.
>>
>> Edward
>>
>> On Mon, 12 Aug 2019, 22:54 Peter Piehler wrote:
>>
>>> Hello,
>>>
>>> does anyone have a tip for me on how I can provide metrics about data
>>> processed in NiFi in a web UI?
>>>
>>> I process XML files with NiFi. For each file I calculate how many new,
>>> modified, unmodified, and deleted records are contained. For each
>>> record, checks are still made, for example, whether values are in the
>>> value range.
>>> I would like to create an evaluation which shows me what the data
>>> properties look like. For example, yesterday I had 5 files, one of
>>> them with 1000 deletions, but the average is only 10 deleted records
>>> per file; on average we process 500 files per day.
>>>
>>> I'm currently looking for ideas on how to do this. I think it would be
>>> useful to export this data and then evaluate it in an external
>>> application. I am grateful for every hint.
>>>
>>> Thx,
>>> Peter

Re: data metrics / data monitoring

2019-08-13 Thread Pierre Villard
Hi Peter,

In addition to Edward's answer, you may be interested by the below posts:
https://pierrevillard.com/2017/05/11/monitoring-nifi-introduction/
https://pierrevillard.com/2018/02/07/fod-paris-jan-18-nifi-registry-and-workflow-monitoring-with-a-use-case/
https://pierrevillard.com/2018/08/29/monitoring-driven-development-with-nifi-1-7/

Hope this helps,
Pierre

On Tue, Aug 13, 2019 at 12:36 AM, Edward Armes wrote:

> Hi Peter,
>
> I think this depends on where this data is stored. If this data is
> available as metrics recorded by NiFi, then a reporting task would be the
> best way forward. However, if this is data that is recorded in your
> FlowFiles as part of your flow, then I think you're looking at either
> collecting it in a key-value store of sorts and exposing it via a web
> server pattern, or forwarding the metrics contained in the FlowFile via a
> message bus, database, or FlowFile receiver of some description.
>
> As for displaying your metrics, there are a lot of options out there that
> can receive and process data in various forms, and it really depends on
> what is the best fit for your organisation.
>
> Personally, I would work out how and with what you display the data, and
> from there use that to influence how you export it out of NiFi.
>
> Edward
>
> On Mon, 12 Aug 2019, 22:54 Peter Piehler,  wrote:
>
>> Hello,
>>
>> does anyone have a tip for me on how I can provide metrics about data
>> processed in nifi in a web UI?
>>
>> I process XML files with NiFi. For each file I calculate how many new,
>> modified, unmodified, and deleted records are contained. For each record,
>> checks are still made, for example, whether values are in the value range.
>> I would like to create an evaluation which shows me what the data
>> properties look like. For example, yesterday I had 5 files, one of them
>> with 1000 deletions, but the average is only 10 deleted records per file;
>> on average we process 500 files per day.
>>
>> I'm currently looking for ideas on how to do this. I think it would be
>> useful to export this data and then evaluate it in an external
>> application. I am grateful for every hint.
>>
>> Thx,
>> Peter
>>
>>
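
The evaluation Peter describes can be prototyped in a few lines before choosing a dashboard (a hypothetical Python sketch; the field names and sample numbers are invented, and NiFi itself is not involved here):

```python
# Hypothetical sketch: aggregate per-file record counters (new/modified/
# deleted, as Peter describes) into totals and per-file averages that an
# external dashboard could display.

files = [  # one entry per processed XML file (invented sample data)
    {"name": "a.xml", "new": 3, "modified": 1, "deleted": 1000},
    {"name": "b.xml", "new": 7, "modified": 2, "deleted": 4},
]

def summarize(files, key):
    values = [f[key] for f in files]
    return {"total": sum(values), "avg_per_file": sum(values) / len(values)}

summary = {k: summarize(files, k) for k in ("new", "modified", "deleted")}
print(summary)
```

The resulting summary could then be exported (via a message bus or database, as Edward suggests) to whatever external application is chosen for display.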


Re: Unable to login to Nifi UI with ranger

2019-08-13 Thread Pierre Villard
Hi Mohit,

Did you create the appropriate rules in Ranger? Have the rules been
correctly synced between Ranger and NiFi nodes?

Pierre

On Tue, Aug 13, 2019 at 8:40 AM, Mohit Jain wrote:

> Hi,
>
> I've integrated Nifi-1.9.2 with ranger. When I login to the UI, following
> error is shown -
>
> No applicable policies could be found. Contact the system administrator.
>
> I'm not able to figure out what I'm missing. Kindly help.
>
> Regards,
> Mohit
>


Re: unable to post updates api with user certificate.

2019-08-13 Thread Felipe Garcia
Issue #1 - You should be able to specify an LDAP user as your initial
admin, what is the error you get?

Keep in mind it is case and white-space sensitive, and also depends on
whether you are returning full DN or short name, it must match exactly.

error
Multiple UserGroupProviders claim to provide user
uid=XX,cn=users,cn=accounts,dc=

logfile

2019-08-13 16:49:40,976 INFO [NiFi Web Server-23]
o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException:
Multiple UserGroupProviders claim to provide user
uid=612442779,cn=users,cn=accounts,dc=ace. Returning Conflict response.

2019-08-13 16:49:40,977 DEBUG [NiFi Web Server-23]
o.a.n.w.a.c.IllegalStateExceptionMapper

java.lang.IllegalStateException: Multiple UserGroupProviders claim to
provide user uid=XX,cn=users,cn=accounts,dc=

at
org.apache.nifi.authorization.CompositeConfigurableUserGroupProvider.getUserAndGroups(CompositeConfigurableUserGroupProvider.java:195)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)



On Mon, Aug 5, 2019 at 10:38 PM Bryan Bende  wrote:

> Hello,
>
> Issue #1 - You should be able to specify an LDAP user as your initial
> admin, what is the error you get?
>
> Keep in mind it is case and white-space sensitive, and also depends on
> whether you are returning full DN or short name, it must match exactly.
>
> Issue #2 - Since you are able to query the API with the client cert, it
> seems like your cert is setup correctly.
>
> Is there an error in nifi-app.log or nifi-user.log when you try to modify
> the policy? Can you modify policies through the UI without issues?
>
> Tokens are only issued for login methods that are based on username and
> password, so it is expected behavior that you could not issue one for a
> cert user.
>
> Thanks,
>
> Bryan
>
>
> On Sun, Aug 4, 2019 at 8:30 PM Felipe Garcia 
> wrote:
>
>> Setup
>>
>>
>> a cluster of a few nifi boxes
>>
>>
>> setup to authenticate with LDAP
>>
>> users and groups in LDAP
>>
>>
>> Issue 1: unable to specify an LDAP user as Initial User
>>
>>
>> I have only been able to set up the cluster with a client certificate
>> user.
>>
>>
>> Issue 2: I am unable to use the API with the initial certificate user to
>> add an LDAP group.
>>
>>
>> I exported the cert and key into a usable format for curl
>>
>>
>> # openssl pkcs12 -in /opt/nifi-certs/CN\=admin_OU\=NIFI.p12  -out
>> /opt/nifi-certs/CN\=admin_OU\=NIFI.key -nocerts -nodes
>>
>> # openssl pkcs12 -in /opt/nifi-certs/CN\=admin_OU\=NIFI.p12
>> -out /opt/nifi-certs/CN\=admin_OU\=NIFI.pem -clcerts -nokeys -passin
>> pass:changeme
>>
>>
>> I am able to query the API
>>
>>
>> curl -k -X GET
>> https://nifi01-sst140.dev.cloud.ace:9443/nifi-api/policies/read/flow --cert
>> /opt/nifi-certs/CN=admin_OU=NIFI.pem --key
>> /opt/nifi-certs/CN=admin_OU=NIFI.key --compressed
>>
>>
>>
>> But I am unable to change or add via the API
>>
>>
>>  curl -k -X PUT -H 'Content-Type: application/json'
>> https://nifi01-sst140.dev.cloud.ace:9443/nifi-api/policies/f99bccd1-a30e-3e4a-98a2-dbc708edc67f
>>  --cert
>> /opt/nifi-certs/CN=admin_OU=NIFI.pem --key
>> /opt/nifi-certs/CN=admin_OU=NIFI.key -d @/tmp/newpolicy.json
>>
>> Unable to save Authorizations
>>
>>
>>
>> I cannot create a token for a cert user
>>
>>
>> curl -k -X POST '
>> https://nifi01-sst140.dev.cloud.ace:9443/nifi-api/access/token' -H
>> 'Accept-Encoding: gzip, deflate, br' -H 'Content-Type:
>> application/x-www-form-urlencoded; charset=UTF-8' -H 'Accept: */*' --cert
>> /opt/nifi-certs/CN\=admin_OU\=NIFI.pem --key
>> /opt/nifi-certs/CN\=admin_OU\=NIFI.key --compressed
>>
>> The username and password must be specified.
>>
>>


Unable to login to Nifi UI with ranger

2019-08-13 Thread Mohit Jain
Hi,

I've integrated NiFi 1.9.2 with Ranger. When I log in to the UI, the following
error is shown -

No applicable policies could be found. Contact the system administrator.

I'm not able to figure out what I'm missing. Kindly help.

Regards,
Mohit