Re: NiFi questions

2018-05-03 Thread Clay Teahouse
Thanks Andy for the feedback

On Wed, May 2, 2018 at 10:00 PM, Andy LoPresto  wrote:

> Hi Clay,
>
> A common use case for NiFi and Kafka in conjunction is when you want the
> capabilities of a message broker like Kafka, with very low latency and
> multiple publishers/consumers, but you also need some of the features NiFi
> provides, like backpressure, as you mentioned. This is frequently found in
> industrial control systems and hardware/IoT integration (sometimes
> interfacing with MQTT).
>
> In the scenario you call out in 1), yes, NiFi can be a complete solution
> for record transformation and writing to HDFS.
>
> I am not a Storm expert, but you correctly identify NiFi as a good
> “deliverer” of data to stream processing applications.
>
> I also won’t address microbatching, but I’m confident some other community
> members will have good input on the topic.
>
> I’ve included a couple of resources you may find helpful, and I would
> suggest you might also get good results sending this email to the
> us...@nifi.apache.org mailing list, as this list tends to focus more on
> the internals of NiFi, extensibility, and feature development. The users
> list has many contributors who deploy NiFi in real-world scenarios and
> integrate it with other systems, and who may not monitor this list.
>
> Good luck.
>
> https://bryanbende.com/development/2016/09/15/apache-nifi-and-apache-kafka
> https://hortonworks.com/webinar/apache-kafka-apache-nifi-better-together/
> https://hortonworks.com/tutorial/realtime-event-processing-in-hadoop-with-nifi-kafka-and-storm/
>
> Andy LoPresto
> alopre...@apache.org
> alopresto.apa...@gmail.com
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On May 2, 2018, at 7:45 PM, Clay Teahouse  wrote:
>
> Hello All,
>
> 1) Why would one need both NiFi and Kafka in an environment, considering
> that NiFi can handle back pressure and can set up and manage queues? I have
> an environment where I would be collecting data via NiFi and I would need
> to write the data to HDFS after some post-processing. Can't I just process
> the records in NiFi, change the format, and write the data to HDFS via a
> NiFi HDFS processor, PutHDFS?
>
> 2) Similarly, if I need to do some stream processing, can't I just pull the
> data from a NiFi processor via NiFiSpout, do the processing via some bolts,
> and write the data to HDFS either via the HDFS bolt or a NiFi HDFS
> processor?
>
> Do I even need Storm in the picture?
>
> 3) Is NiFi suited for microbatch processing? Would it be better to pull the
> data from NiFi via Spark Streaming and do the microbatching there? Which
> approach is the most performant and reliable?
>
> thanks
>
> Clay


Re: NIFI-5133: Guidance & help with tackling dependency version issues

2018-05-03 Thread Sivaprasanna
I'm interested, although it might consume some time since I don't know how
big it is going to be. And I suppose it is better to capture it in a
separate Jira. The summary could be "Upgrade and refactor GCP processor
codebase" or something like that. We could then make NIFI-5133 dependent on
the new Jira. Thoughts?

Meanwhile, if anyone has any other recommendations, feel free to share. :)

-
Sivaprasanna



Re: NIFI-5133: Guidance & help with tackling dependency version issues

2018-05-03 Thread Joe Witt
Sivaprasanna

OK, makes sense. Are you in a position of interest/expertise/time to make
those changes as well?

Thanks



Re: NIFI-5133: Guidance & help with tackling dependency version issues

2018-05-03 Thread Sivaprasanna
Hi. As I had mentioned, upgrading to the latest version of the library is
not as simple as I thought. The Google Cloud team introduced many breaking
changes: many of the APIs (classes & methods) have been scrapped, replaced,
modified, refactored, or renamed.

In short, a simple change of version may demand changes to the processor
code, especially to the abstract processors (AbstractGCS, AbstractGCP),
which may pose backward compatibility issues, I'm afraid.
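
To give a flavor of it, even the client bootstrap for Storage changed. A
rough sketch of the rename from memory (illustrative, not an excerpt from an
actual diff):

    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;

    public class StorageBootstrap {
        public static void main(String[] args) {
            // google-cloud 0.8.0 era (no longer compiles on recent versions):
            //   Storage storage = StorageOptions.defaultInstance().service();

            // Recent google-cloud-storage releases (accessors renamed):
            Storage storage = StorageOptions.getDefaultInstance().getService();
            System.out.println(storage.getOptions().getProjectId());
        }
    }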

Thanks,
Sivaprasanna



Re: NIFI-5133: Guidance & help with tackling dependency version issues

2018-05-03 Thread Joe Witt
Sivaprasanna

I might not completely follow, but is there a third option: upgrade to a
more recent library and also solve the use-the-proper-jars problem (smaller
NAR)?

Thanks



NIFI-5133: Guidance & help with tackling dependency version issues

2018-05-03 Thread Sivaprasanna
Hi

I've started the initial work on implementing Google Cloud Pub/Sub
processors. The associated Jira ID is NIFI-5133. This will go into the
existing GCP bundle, which currently has only the storage processors. Upon
some inspection, I noticed the following:

   - As of now, the bundle uses google-cloud as its dependency, which is an
   uber/fat JAR that contains most of the Google Cloud client library SDKs,
   including Storage, BigQuery, Pub/Sub, etc. The main point is that it is a
   very old version (0.8.0).
   - I thought of using google-cloud-bom in the bundle's POM and then
   declaring only the required artifacts in the processors' POM. The benefit
   is that it will help us reduce the overall size of the NAR (see the
   sketch after this list).
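
A rough sketch of what that would look like (the BOM version and the
artifact list are illustrative, not tested):

    <!-- nifi-gcp-bundle/pom.xml: import the BOM in dependencyManagement -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>com.google.cloud</groupId>
          <artifactId>google-cloud-bom</artifactId>
          <version>0.45.0-alpha</version>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
      </dependencies>
    </dependencyManagement>

    <!-- nifi-gcp-processors/pom.xml: pull in only what is needed -->
    <dependencies>
      <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>google-cloud-storage</artifactId>
      </dependency>
      <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>google-cloud-pubsub</artifactId>
      </dependency>
    </dependencies>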

When I tried to do #2, I realized this is not a simple version change but a
change that brings backward compatibility issues. For example, some APIs
used in the older version (0.8.0) have now been entirely scrapped and moved
to a different library. We can do one of two things:

   1. Use the Pub/Sub APIs from the older version, but the problem is that
   it's very old and the problem of upgrading would soon catch up with us.
   2. Or continue to use the older version of google-cloud-storage only for
   the storage processors and introduce #2 mentioned above, but then I don't
   think the new processors can properly extend the existing
   AbstractGCPProcessor.


A quick glance at the processor code and the POM would help you understand
my concern.

I'm stuck here, so any help & guidance in this regard is very much
appreciated. :)

Thanks,

Sivaprasanna


NIFI-5070/NIFI-5049 patches available

2018-05-03 Thread Juan Pablo Gardella
Hi folks,

I sent both patches some time ago. I know you are very busy, but please let
me know if anything else is required for either patch.

Thanks in advance,
Juan


Re: GetMongoDB : How to pass parameters as input to GetMongoDB processor

2018-05-03 Thread Mike Thomsen
Brajendra,

I would recommend an update to 1.6.0; it'll make your life a lot easier on
this. I did that patch to GetMongo because I had a client with an explosion
of GetMongo instances due to that inflexibility. With that said, *be aware
of this bug* in 1.6.0 with PutMongo if you use it and upgrade. It is fixed
in 1.7.0 (still in development):

Migrating from 1.5.0 to 1.6.0

   - PutMongo can fail in insert mode. This will be fixed in the next
   release. In the meantime, you can set a query key for insert; even though
   it will be ignored, this should work around the validation bug.


What this means is that a validator function in PutMongo is broken when
using "insert mode" instead of "update mode." You can work around it by
putting a dummy value in the "query key" field to make it happy.
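
Concretely, the workaround looks something like this in PutMongo's
configuration (property names approximate, from memory):

    Mode             = insert
    Update Query Key = _id    <-- dummy value; ignored in insert mode,
                                  but it satisfies the broken validator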


Re: GetMongoDB : How to pass parameters as input to GetMongoDB processor

2018-05-03 Thread Pierre Villard
Hi,

As Mike said, an incoming relationship was added in NiFi 1.6.0:
https://issues.apache.org/jira/browse/NIFI-4827
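
So on 1.6.0 a minimal flow looks something like this (a sketch; the query
JSON is only an example):

    GenerateFlowFile   (Custom Text: { "status": "active" })
      -> GetMongo      (Query property left blank, so the flowfile
                        content becomes the query)
      -> downstream processing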

Pierre


RE: GetMongoDB : How to pass parameters as input to GetMongoDB processor

2018-05-03 Thread Brajendra Mishra
Hi Mike, 

I did attach it in my previous mail; I'm reattaching it again. The error is
at the GetMongoDB processor, and the error text is: "Upstream Connections is
invalid because Processor does not allow upstream connections but currently
has 1"

Brajendra Mishra
Persistent Systems Ltd.


Re: GetMongoDB : How to pass parameters as input to GetMongoDB processor

2018-05-03 Thread Mike Thomsen
Brajendra,

Looks like the image didn't make it.

On Wed, May 2, 2018 at 11:36 PM Brajendra Mishra <
brajendra_mis...@persistent.com> wrote:

> Hi Mike,
>
> Thanks for responding.
>
> Here, I have attached missing image attachment.
>
>
>
> Brajendra Mishra
>
> Persistent Systems Ltd.
>
>
>
> From: Mike Thomsen
> Sent: Wednesday, May 02, 2018 6:24 PM
> To: dev@nifi.apache.org
> Subject: Re: GetMongoDB : How to pass parameters as input to GetMongoDB
> processor
>
>
>
> That might require 1.6.0. Also, your image didn't come through in your
> response to Sivaprasanna so resend that too.
>
> On Wed, May 2, 2018 at 8:37 AM Brajendra Mishra <
> brajendra_mis...@persistent.com> wrote:
>
> Hi Mike,
>
> Thanks a lot for responding.
>
> On your statement
> "That is its new default behavior if you leave the query field blank and
> have an incoming connection from another processor. That would be a good
> way to integrate the flow with another application"
>
> Could you please share a sample template for the same?
>
>
> Brajendra Mishra
> Persistent Systems Ltd.
>
> -Original Message-
> From: Mike Thomsen 
> Sent: Wednesday, May 02, 2018 5:58 PM
> To: dev@nifi.apache.org
> Subject: Re: GetMongoDB : How to pass parameters as input to GetMongoDB
> processor
>
> GetMongo can also use the body of a flowfile for the query. That is its
> new default behavior if you leave the query field blank and have an
> incoming connection from another processor. That would be a good way to
> integrate the flow with another application. For example, you could add
> FetchKafka to the flow and have your applications post messages to Kafka
> with the queries they want it to run and FetchKafka would send that JSON to
> GetMongo as it comes in. Or you could build a REST service that writes the
> JSON to disk and use GetFile to load it. Lots of ways to do this.
>
> On Wed, May 2, 2018 at 6:42 AM Sivaprasanna 
> wrote:
>
> > Since I'm not so sure about your exact use case, I have just created a
> > rough template based on the simple example flow that I had posted
> > earlier which is GenerateFlowfile -> UpdateAttribute -> GetMongo. I
> > have attached the template here.
> >
> > -
> > Sivaprasanna
> >
> > On Wed, May 2, 2018 at 2:55 PM, Brajendra Mishra <
> > brajendra_mis...@persistent.com> wrote:
> >
> >> Hi Sivaprasanna,
> >>
> >> Could you please provide me the sample template for the same, where I
> >> can pass parameters (and get those parameters' value to process
> >> further) to GetMongoDB processor?
> >> It would be a great help for us.
> >>
> >> Brajendra Mishra
> >> Persistent Systems Ltd.
> >>
> >> -Original Message-
> >> From: Sivaprasanna 
> >> Sent: Wednesday, May 02, 2018 2:28 PM
> >> To: dev@nifi.apache.org
> >> Subject: Re: GetMongoDB : How to pass parameters as input to
> >> GetMongoDB processor
> >>
> >> Hi.
> >>
> >> GetMongo can take input. So technically you can use a processor before
> >> it and connect that to GetMongo.
> >>
> >> A simple example :
> >> GenerateFlowfile -> UpdateAttribute -> GetMongo
> >>
> >> In the UpdateAttribute, you can add attributes for the database and
> >> collection and then use them in GetMongo via NiFi Expression Language.
> >>
> >> Let me know, if that doesn’t help.
> >>
> >> -
> >> Sivaprasanna
> >>
> >> On Wed, 2 May 2018 at 1:26 PM, Brajendra Mishra <
> >> brajendra_mis...@persistent.com> wrote:
> >>
> >> > Hi Team,
> >> > We have found that there is only the 'GetMongoDB' processor to connect
> >> > to and query MongoDB in Apache NiFi. However, this processor does not
> >> > take any type of input.
> >> >
> >> > Do we have another type of Apache NiFi processor which can take
> >> > parameters as input (MongoDB details, query, instance, etc.) from
> >> > another processor?
> >> > If not, then please suggest when such a processor can be expected in
> >> > an upcoming release?
> >> >
> >> > Brajendra Mishra
> >> > Persistent Systems Ltd.
> >> >