Re: Queue wasn't emptying

2015-11-11 Thread Christopher Hamm
Where does it dump? I can't get mine to expire fast enough. It is a dev test I
want to blow away but can't delete.

On Fri, Nov 6, 2015 at 11:33 AM, Joe Witt  wrote:

> Elli,
>
> Can you share your flow configuration that we could possibly use to
> replicate this?  Perhaps turn this into a JIRA and attach that.
>
> Also, anytime you see something that appears 'stuck', please try to get
> a stack dump (bin/nifi.sh dump). If there is truly a stuck thread,
> we'll see it there and what it is blocked on.
>
> Thanks
> Joe
>
> On Fri, Nov 6, 2015 at 11:25 AM, Elli Schwarz 
> wrote:
> > I had a queue that built up overnight with several thousand flowfiles. The
> > queue was pointing to a RouteOnAttribute processor, which was running. For
> > some reason, the RouteOnAttribute processor wasn't emptying the queue, so
> > the queue just built up. Stopping and starting the processor didn't help.
> > However, simply restarting NiFi got the flow moving again. I can't figure
> > out why the queue was stuck.
> >
> > What could cause a queue to build up like that? How would restarting NiFi
> > get it going again? There weren't any errors in the log. I did have
> > backpressure on the relationship to the processor after the
> > RouteOnAttribute set to 1, but that queue was empty. There was no
> > backpressure on the queue heading in to the RouteOnAttribute processor.
> > The RouteOnAttribute processor has a Penalty Duration and a Yield of 1
> > sec. The relationships all used the PriorityAttributeAnalyzer as the
> > prioritizer.
> >
> > The interesting thing is I had the exact same problem on two different
> > NiFis (one was version 0.2.0 and one 0.3.0). The same type of flow files
> > were going from one to the other using site-to-site (I was doing this for
> > load balancing). On the other NiFi, instead of restarting, I added another
> > RouteOnAttribute processor and rerouted the stuck queue to the new one
> > instead, and that also got things moving again. (I have backpressure
> > since the processor after the RouteOnAttribute is a ControlRate processor,
> > and I only want to allow 1 flowfile through per second so as not to
> > overwhelm the system downstream. The backpressure is to assist with load
> > balancing, so if one pathway fills up, I have the other pathway which uses
> > site-to-site to route the flow to the other NiFi.)
> >
> > Is there some combination of backpressure, penalty or yield along with
> > prioritization that could cause a kind of deadlock-like situation? Any
> > ideas as to how I can prevent this from occurring?
> >
> > Thanks!
> > -Elli
> >
>



-- 
Sincerely,
Chris Hamm
(E) ceham...@gmail.com
(Twitter) http://twitter.com/webhamm
(Linkedin) http://www.linkedin.com/in/chrishamm
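On Joe's stack-dump advice earlier in this thread: `bin/nifi.sh dump` produces a standard JVM thread dump (it typically lands in the bootstrap log, or in a file if you pass a filename argument). A quick way to triage one is to list the BLOCKED threads. The sketch below is hedged: it assumes only the generic JVM thread-dump layout, and the sample text is fabricated for illustration; it is not NiFi-specific code.

```python
import re

def blocked_threads(dump_text):
    """Return names of threads a JVM thread dump reports as BLOCKED."""
    names = []
    # Thread entries are separated by blank lines; the first line of each
    # entry starts with the quoted thread name.
    for entry in dump_text.split("\n\n"):
        if "java.lang.Thread.State: BLOCKED" not in entry:
            continue
        first_line = entry.strip().splitlines()[0]
        match = re.match(r'"([^"]+)"', first_line)
        names.append(match.group(1) if match else first_line)
    return names

sample = '''"Timer-Driven Process Thread-5" #42 prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
\tat org.example.Foo.bar(Foo.java:10)

"Flow Service Tasks Thread-1" #7 prio=5
   java.lang.Thread.State: RUNNABLE
\tat java.lang.Object.wait(Native Method)'''

print(blocked_threads(sample))  # -> ['Timer-Driven Process Thread-5']
```

If the same thread shows up BLOCKED on the same monitor across two dumps taken a minute apart, that is the thread worth reporting in a JIRA.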


Why does PutFile create directories for you but PutHDFS does not?

2015-11-11 Thread Mark Petronic
Just wondering about the history behind why one has the logic to
create them but the other does not?


Re: Route On Attribute Processor

2015-11-11 Thread Madhire, Naveen
Matt, I understood that each property actually becomes a relationship. I didn’t 
understand the way RouteOnAttribute behaves initially, but after looking at the 
processor documentation, I figured out how to implement this.

The documentation is very good and detailed. Thanks to the community.




From: Matthew Clarke <matt.clarke@gmail.com>
Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
Date: Wednesday, November 11, 2015 at 5:19 PM
To: "users@nifi.apache.org" <users@nifi.apache.org>
Subject: Re: Route On Attribute Processor


Naveen,
You need to add new properties to that processor. Each property you add 
becomes a new relationship. You can use the NiFi expression language to 
construct your routing rule in each property. What attribute are you trying to 
use to route with? I can help you create some rules.

Matt

On Nov 11, 2015 3:50 PM, "Madhire, Naveen" <naveen.madh...@capitalone.com> wrote:
Hi,

I have a question on the RouteOnAttribute processor: I don’t see any “matched” 
relationship in the processor; it only has an “unmatched” relationship.
I’ve implemented this and I could only see the “unmatched” ones.

So what happens to the flow files when the condition is actually satisfied?


Auto terminate relationships [image: Info]
unmatched
FlowFiles that do not match any user-defined expression will be routed here
Thanks,
Naveen



The information contained in this e-mail is confidential and/or proprietary to 
Capital One and/or its affiliates and may only be used solely in performance of 
work or services for Capital One. The information transmitted herewith is 
intended only for use by the individual or entity to which it is addressed. If 
the reader of this message is not the intended recipient, you are hereby 
notified that any review, retransmission, dissemination, distribution, copying 
or other use of, or taking of any action in reliance upon this information is 
strictly prohibited. If you have received this communication in error, please 
contact the sender and delete the material from your computer.




Re: Route On Attribute Processor

2015-11-11 Thread Matthew Clarke
Naveen,
You need to add new properties to that processor. Each property you add
becomes a new relationship. You can use the NiFi expression language to
construct your routing rule in each property. What attribute are you trying
to use to route with? I can help you create some rules.

Matt
On Nov 11, 2015 3:50 PM, "Madhire, Naveen" 
wrote:

> Hi,
>
> I’ve a question on RouteOnAttribute processor, I don’t see any “matched”
> relationship in the processor, it only has “unmatched” relationship.
> I’ve implemented this and I could only see the “unmatched” ones.
>
> So what happens to the flow files when the condition actually satisfies?
>
>
> Auto terminate relationships [image: Info]
> unmatched
> FlowFiles that do not match any user-defined expression will be routed here
> Thanks,
> Naveen
>
> --
>
>
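Matt's point above is the crux of RouteOnAttribute: out of the box it only has "unmatched", and every dynamic property you add becomes a new relationship named after that property. As a hedged illustration (the property names and routing conditions below are invented for the example, not taken from this thread), the added properties might look like:

```
# Dynamic properties on RouteOnAttribute; each one becomes a relationship.
# Property name       NiFi Expression Language value
large              =  ${fileSize:gt(1048576)}
json               =  ${mime.type:equals('application/json')}
```

With the default Routing Strategy ("Route to Property name"), a FlowFile that satisfies an expression is routed to the relationship carrying that property's name; one that satisfies no expression goes to "unmatched". That is why a processor with no dynamic properties, or expressions that never match, shows traffic only on "unmatched".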


Route On Attribute Processor

2015-11-11 Thread Madhire, Naveen
Hi,

I have a question on the RouteOnAttribute processor: I don’t see any “matched” 
relationship in the processor; it only has an “unmatched” relationship.
I’ve implemented this and I could only see the “unmatched” ones.

So what happens to the flow files when the condition is actually satisfied?


Auto terminate relationships [image: Info]
unmatched
FlowFiles that do not match any user-defined expression will be routed here
Thanks,
Naveen




Re: Memory Issues on Split Text

2015-11-11 Thread Madhire, Naveen
Hey Joe, 

I am just testing a simple flow of reading the file from local file system
and inserting into a kafka topic.

The flow is

GetFile -> SplitText (with 2 line split) -> SplitText(1 line split)
-> putkafka 


In total, 220K events were processed in 1 min 44 sec, which I think is
pretty good so far.

I've NiFi running on a single m4.4xlarge EC2 instance, and I've not changed
any other NiFi settings. Is there anything else which needs to be modified
for the flow file, content, and provenance repos?


Thanks,
Naveen

On 11/11/15, 12:31 PM, "Joe Witt"  wrote:

>Naveen,
>
>For throughput can you state what the desired events/sec/node would be
>for you and can you describe how the flowfile vs content vs prov repo
>is setup on the machine it is running on?
>
>Thanks
>Joe
>
>On Wed, Nov 11, 2015 at 1:21 PM, Madhire, Naveen
> wrote:
>> Thanks Mark. The workaround to have an intermediate SplitText to split a
>> few lines works well; as you said, the throughput is not quite there. I
>> think it serves our purpose as of now.
>>
>>
>>
>> From: Mark Payne 
>> Reply-To: "users@nifi.apache.org" 
>> Date: Tuesday, November 10, 2015 at 7:12 PM
>> To: "users@nifi.apache.org" 
>> Subject: Re: Memory Issues on Split Text
>>
>> Naveen,
>>
>> There is a ticket [1] that will make this work more cleanly so that we can
>> use SplitText to split a large file into millions of FlowFiles. Right now,
>> as you noted, you will end up running out of memory. There are a few
>> possible solutions that you can use.
>>
>> If you need to split each line into a separate FlowFile, the easiest way
>> is to use two SplitText processors. The first would be configured with a
>> Line Split Count of, say, 10,000. Then, the "splits" relationship is
>> routed to a second SplitText processor with the Line Split Count set to 1.
>> This prevents the processor from holding those millions of FlowFiles in
>> memory. The only downside here is that if you create a FlowFile for every
>> single message, your throughput will not be quite as good.
>>
>> The next approach is to just send the entire 2 GB FlowFile to PutKafka and
>> set the Message Delimiter to "\n". This will send each line in the
>> FlowFile to Kafka as a separate message. The down side here is that if you
>> have sent, say, 1 million messages to Kafka and then NiFi is restarted, it
>> doesn't know that those 1 million messages have been sent, so you will end
>> up sending all of the data again and will duplicate a lot of the messages.
>>
>> The third approach is a hybrid of the two. You can use SplitText to split
>> the FlowFile into 10,000 lines each. Then, instead of sending to another
>> SplitText, you can send the "splits" relationship to PutKafka with a
>> Message Delimiter of "\n". This way, you will still get great throughput
>> by not splitting each FlowFile into millions of FlowFiles, but you will
>> avoid duplicating millions of messages (you'll duplicate at the very most
>> 10,000 messages in this example).
>>
>> So you can use any of these approaches. You just have to consider the
>> pro's and con's of each and decide which trade-offs you want to make.
>>
>> Thanks
>> -Mark
>>
>>
>> [1] https://issues.apache.org/jira/browse/NIFI-1008
>>
>>
>>
>> On Nov 10, 2015, at 5:28 PM, Madhire, Naveen wrote:
>>
>> Hi,
>>
>> I am reading a 2 GB file from local and putting the data into a Kafka
>> topic.
>>
>> Since GetFile only creates one flow file per file, I am making use of the
>> SplitText processor to split the file into one flow file per line before
>> inserting the data into a Kafka topic. I am seeing a lot of "GC Overhead
>> limit exceeded" errors on the SplitText processor. I am running NiFi on a
>> single Linux server with 16 GB memory.
>>
>> Is this the right approach of reading and putting into Kafka?
>> Or is there a better approach?
>>
>> Thanks,
>> Naveen
>>

Re: Memory Issues on Split Text

2015-11-11 Thread Joe Witt
Naveen,

For throughput can you state what the desired events/sec/node would be
for you and can you describe how the flowfile vs content vs prov repo
is setup on the machine it is running on?

Thanks
Joe

On Wed, Nov 11, 2015 at 1:21 PM, Madhire, Naveen
 wrote:
> Thanks Mark. The workaround to have intermediate split text to split few
> lines works well, as you said, the throughput is not quite there. I think it
> serves our purpose as of now.
>
>
>
> From: Mark Payne 
> Reply-To: "users@nifi.apache.org" 
> Date: Tuesday, November 10, 2015 at 7:12 PM
> To: "users@nifi.apache.org" 
> Subject: Re: Memory Issues on Split Text
>
> Naveen,
>
> There is a ticket [1] that will make this work more cleanly so that we can
> use SplitText to split a large
> file into millions of FlowFiles. Right now, as you noted you will end up
> running out of memory. There are
> a few possible solutions that you can use.
>
> If you need to split each line into a separate FlowFile, the easiest way is
> to use two SplitText processors.
> The first would be configured with a Line Split Count of say 10,000. Then,
> the "splits" relationship is routed
> to a second SplitText processor with the Line Split Count set to 1. This
> prevents the processor from holding
> those millions of FlowFiles in memory. The only downside here is that if you
> create a FlowFile for every single
> message, your throughput will not be quite as good.
>
> The next approach is to just send the entire 2 GB FlowFile to PutKafka and
> set the Message Delimiter to "\n".
> This will send each line in the FlowFile to Kafka as a separate message. The
> down side here is that if you have
> sent say 1 million messages to Kafka and then NiFi is restarted, it doesn't
> know that those 1 million messages have
> been sent, so you will end up sending all of the data again and will
> duplicate a lot of the messages.
>
> The third approach is a hybrid of the two. You can use SplitText to split
> the FlowFile into 10,000 lines each. Then,
> instead of sending to another SplitText, you can send the "splits"
> relationship to PutKafka with a Message Delimiter
> of "\n". This way, you will still get great throughput by not splitting each
> FlowFile into millions of FlowFiles, but you will
> avoid duplicating millions of messages (you'll duplicate at the very most
> 10,000 messages in this example).
>
> So you can use any of these approaches. You just have to consider the pro's
> and con's of each and decide which
> trade-offs you want to make.
>
> Thanks
> -Mark
>
>
> [1] https://issues.apache.org/jira/browse/NIFI-1008
>
>
>
>
>
> On Nov 10, 2015, at 5:28 PM, Madhire, Naveen 
> wrote:
>
> Hi,
>
> I am reading a 2 GB file from local and putting the data into a Kafka topic.
>
> Since GetFile only creates one flow file per file, I am making use of
> SplitText processor to split the file into one flow file per line before
> inserting the data into a Kafka topic.
> I am seeing a lot of “GC Overhead limit exceeded errors” on SplitText
> processor. I am running Nifi on a single linux server with 16 GB memory.
>
> Is this the right approach of reading and putting into Kafka?
> Or there is any better approach?
>
> Thanks,
> Naveen
>
>
>


Re: Memory Issues on Split Text

2015-11-11 Thread Madhire, Naveen
Thanks Mark. The workaround to have intermediate split text to split few lines 
works well, as you said, the throughput is not quite there. I think it serves 
our purpose as of now.



From: Mark Payne <marka...@hotmail.com>
Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
Date: Tuesday, November 10, 2015 at 7:12 PM
To: "users@nifi.apache.org" <users@nifi.apache.org>
Subject: Re: Memory Issues on Split Text

Naveen,

There is a ticket [1] that will make this work more cleanly so that we can use 
SplitText to split a large
file into millions of FlowFiles. Right now, as you noted you will end up 
running out of memory. There are
a few possible solutions that you can use.

If you need to split each line into a separate FlowFile, the easiest way is to 
use two SplitText processors.
The first would be configured with a Line Split Count of say 10,000. Then, the 
"splits" relationship is routed
to a second SplitText processor with the Line Split Count set to 1. This 
prevents the processor from holding
those millions of FlowFiles in memory. The only downside here is that if you 
create a FlowFile for every single
message, your throughput will not be quite as good.

The next approach is to just send the entire 2 GB FlowFile to PutKafka and set 
the Message Delimiter to "\n".
This will send each line in the FlowFile to Kafka as a separate message. The 
down side here is that if you have
sent say 1 million messages to Kafka and then NiFi is restarted, it doesn't 
know that those 1 million messages have
been sent, so you will end up sending all of the data again and will duplicate 
a lot of the messages.

The third approach is a hybrid of the two. You can use SplitText to split the 
FlowFile into 10,000 lines each. Then,
instead of sending to another SplitText, you can send the "splits" relationship 
to PutKafka with a Message Delimiter
of "\n". This way, you will still get great throughput by not splitting each 
FlowFile into millions of FlowFiles, but you will
avoid duplicating millions of messages (you'll duplicate at the very most 
10,000 messages in this example).

So you can use any of these approaches. You just have to consider the pro's and 
con's of each and decide which
trade-offs you want to make.

Thanks
-Mark


[1] https://issues.apache.org/jira/browse/NIFI-1008





On Nov 10, 2015, at 5:28 PM, Madhire, Naveen <naveen.madh...@capitalone.com> wrote:

Hi,

I am reading a 2 GB file from local and putting the data into a Kafka topic.

Since GetFile only creates one flow file per file, I am making use of SplitText 
processor to split the file into one flow file per line before inserting the 
data into a Kafka topic.
I am seeing a lot of “GC Overhead limit exceeded” errors on the SplitText 
processor. I am running NiFi on a single Linux server with 16 GB memory.

Is this the right approach of reading and putting into Kafka?
Or there is any better approach?

Thanks,
Naveen
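Mark's first approach above (two SplitText processors, so that no single step ever holds millions of FlowFiles) can be reasoned about outside NiFi as generator-based batching. The sketch below is only an illustrative analogy in plain Python, not NiFi code; the numbers mirror the 10,000-line example from the thread.

```python
def split_batches(source, batch_size):
    """First stage: yield lists of at most batch_size items, never the whole input."""
    batch = []
    for item in source:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

# A large "file" streamed lazily, like GetFile handing SplitText one big FlowFile.
lines = (f"msg-{i}" for i in range(25_000))

total = 0
for batch in split_batches(lines, 10_000):   # SplitText, Line Split Count = 10,000
    for message in batch:                    # second SplitText, Line Split Count = 1
        total += 1                           # stand-in for PutKafka sending one message

print(total)  # -> 25000
```

Peak memory here is one 10,000-item batch rather than the full 25,000 lines, which is exactly the pressure relief the intermediate SplitText provides.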






Re: Managing flows

2015-11-11 Thread Darren Govoni

Excellent. Enjoying the product so far. Works great!

On 11/11/2015 10:22 AM, Mark Petronic wrote:

You can organize them by creating nested process groups to make it
more sane to manage

On Wed, Nov 11, 2015 at 10:13 AM, Darren Govoni  wrote:

Thanks Joe.

And it seems all the different flows would be seen on the one canvas, just
not connected?


On 11/11/2015 10:02 AM, Joe Witt wrote:

Darren,

A single NiFi instance (on one node or a cluster of 10+) can handle
*many* different flows.

Thanks
Joe

On Wed, Nov 11, 2015 at 10:00 AM, Darren Govoni 
wrote:

Mark,
 Thanks for the tips. Appreciate it.

So when I run nifi on a single server. It is essentially "one flow"?
If I wanted to have say 2 or 3 active flows, I would (reasonably) have to
run more instances of nifi with appropriate
configuration to not conflict. Is that right?

Darren


On 11/11/2015 09:54 AM, Mark Petronic wrote:

Look in your Nifi conf directory. The active flow is there as an aptly
named .gz file. Guessing you could just rename that and restart Nifi
which would create a blank new one. Build up another flow, then you
could repeat the same "copy to new file name" and restore some other
one to continue on some previous flow. I'm pretty new to Nifi, too,
so maybe there is another way. Also, you can create point-in-time
backups of your flow from the "Settings" dialog in the DFM. There is a
link that shows up in there to click. It will copy your master flow gz
to your conf/archive directory. You can create multiple snapshots of
your flow to retain change history. I actually gunzip my backups and
commit them to Git for a more formal change history tracking
mechanism.

Hope that helps.

On Wed, Nov 11, 2015 at 9:45 AM, Darren Govoni 
wrote:

Hi again,
  Sorry for the noob questions. I am reading all the online material
as
much as possible.
But what hasn't jumped out at me yet is how flows are managed?

Are they saved, loaded, etc? I access my nifi and build a flow. Now I
want
to save it and work on another flow.
Lastly, will the flow be running even if I exit the webapp?

thanks for any tips. If I missed something obvious, regrets.

D
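Mark's gunzip-and-commit-to-Git idea above can be sketched as a small script. Everything below is a hedged sketch: it builds a stand-in conf/ layout under a temp directory so the script is self-contained; on a real install you would point CONF at your actual NiFi conf directory, and the flow-backups directory name is an invented convention.

```shell
#!/bin/sh
# Snapshot NiFi's flow.xml.gz and keep a plain-XML copy that diffs cleanly.
set -eu

WORK=$(mktemp -d)                  # stand-in for a real NiFi home
CONF="$WORK/conf"
BACKUPS="$WORK/flow-backups"
mkdir -p "$CONF" "$BACKUPS"

# Stand-in flow file; on a real install conf/flow.xml.gz already exists.
printf '<flowController/>' | gzip > "$CONF/flow.xml.gz"

# Timestamped snapshot of the active flow.
stamp=$(date +%Y%m%d-%H%M%S)
cp "$CONF/flow.xml.gz" "$BACKUPS/flow-$stamp.xml.gz"

# Uncompressed copy for human-readable diffs (keeps the .gz, too).
gunzip -c "$BACKUPS/flow-$stamp.xml.gz" > "$BACKUPS/flow-$stamp.xml"

ls "$BACKUPS"
# To track change history as Mark does:
#   git init "$BACKUPS"            # once
#   git -C "$BACKUPS" add . && git -C "$BACKUPS" commit -m "flow snapshot $stamp"
```

Restoring an older flow is then the reverse: stop NiFi, copy the chosen snapshot back over conf/flow.xml.gz, and restart.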






Re: Managing flows

2015-11-11 Thread Mark Petronic
You can organize them by creating nested process groups to make it
more sane to manage

On Wed, Nov 11, 2015 at 10:13 AM, Darren Govoni  wrote:
> Thanks Joe.
>
> And it seems all the different flows would be seen on the one canvas, just
> not connected?
>
>
> On 11/11/2015 10:02 AM, Joe Witt wrote:
>>
>> Darren,
>>
>> A single NiFi instance (on one node or a cluster of 10+) can handle
>> *many* different flows.
>>
>> Thanks
>> Joe
>>
>> On Wed, Nov 11, 2015 at 10:00 AM, Darren Govoni 
>> wrote:
>>>
>>> Mark,
>>> Thanks for the tips. Appreciate it.
>>>
>>> So when I run nifi on a single server. It is essentially "one flow"?
>>> If I wanted to have say 2 or 3 active flows, I would (reasonably) have to
>>> run more instances of nifi with appropriate
>>> configuration to not conflict. Is that right?
>>>
>>> Darren
>>>
>>>
>>> On 11/11/2015 09:54 AM, Mark Petronic wrote:

 Look in your Nifi conf directory. The active flow is there as an aptly
 named .gz file. Guessing you could just rename that and restart Nifi
 which would create a blank new one. Build up another flow, then you
 could repeat the same "copy to new file name" and restore some other
 one to continue on some previous flow. I'm pretty new to Nifi, too,
 so maybe there is another way. Also, you can create point-in-time
 backups of your flow from the "Settings" dialog in the DFM. There is a
 link that shows up in there to click. It will copy your master flow gz
 to your conf/archive directory. You can create multiple snapshots of
 your flow to retain change history. I actually gunzip my backups and
 commit them to Git for a more formal change history tracking
 mechanism.

 Hope that helps.

 On Wed, Nov 11, 2015 at 9:45 AM, Darren Govoni 
 wrote:
>
> Hi again,
>  Sorry for the noob questions. I am reading all the online material
> as
> much as possible.
> But what hasn't jumped out at me yet is how flows are managed?
>
> Are they saved, loaded, etc? I access my nifi and build a flow. Now I
> want
> to save it and work on another flow.
> Lastly, will the flow be running even if I exit the webapp?
>
> thanks for any tips. If I missed something obvious, regrets.
>
> D
>>>
>>>
>


Re: Managing flows

2015-11-11 Thread Joe Witt
You got it.  Everything we're doing is about showing context so
visualizing different/disconnected flows together is part of that
story.  You can abstract them away in different process groups and
organize them in many different ways.  You will often find over time
all these seemingly disconnected linear flows have a way of growing
together and forming a true graph.

Thanks
Joe

On Wed, Nov 11, 2015 at 10:13 AM, Darren Govoni  wrote:
> Thanks Joe.
>
> And it seems all the different flows would be seen on the one canvas, just
> not connected?
>
>
> On 11/11/2015 10:02 AM, Joe Witt wrote:
>>
>> Darren,
>>
>> A single NiFi instance (on one node or a cluster of 10+) can handle
>> *many* different flows.
>>
>> Thanks
>> Joe
>>
>> On Wed, Nov 11, 2015 at 10:00 AM, Darren Govoni 
>> wrote:
>>>
>>> Mark,
>>> Thanks for the tips. Appreciate it.
>>>
>>> So when I run nifi on a single server. It is essentially "one flow"?
>>> If I wanted to have say 2 or 3 active flows, I would (reasonably) have to
>>> run more instances of nifi with appropriate
>>> configuration to not conflict. Is that right?
>>>
>>> Darren
>>>
>>>
>>> On 11/11/2015 09:54 AM, Mark Petronic wrote:

 Look in your Nifi conf directory. The active flow is there as an aptly
 named .gz file. Guessing you could just rename that and restart Nifi
 which would create a blank new one. Build up another flow, then you
 could repeat the same "copy to new file name" and restore some other
 one to continue on some previous flow. I'm pretty new to Nifi, too,
 so maybe there is another way. Also, you can create point-in-time
 backups of your flow from the "Settings" dialog in the DFM. There is a
 link that shows up in there to click. It will copy your master flow gz
 to your conf/archive directory. You can create multiple snapshots of
 your flow to retain change history. I actually gunzip my backups and
 commit them to Git for a more formal change history tracking
 mechanism.

 Hope that helps.

 On Wed, Nov 11, 2015 at 9:45 AM, Darren Govoni 
 wrote:
>
> Hi again,
>  Sorry for the noob questions. I am reading all the online material
> as
> much as possible.
> But what hasn't jumped out at me yet is how flows are managed?
>
> Are they saved, loaded, etc? I access my nifi and build a flow. Now I
> want
> to save it and work on another flow.
> Lastly, will the flow be running even if I exit the webapp?
>
> thanks for any tips. If I missed something obvious, regrets.
>
> D
>>>
>>>
>


Re: Managing flows

2015-11-11 Thread Darren Govoni

Thanks Joe.

And it seems all the different flows would be seen on the one canvas, 
just not connected?


On 11/11/2015 10:02 AM, Joe Witt wrote:

Darren,

A single NiFi instance (on one node or a cluster of 10+) can handle
*many* different flows.

Thanks
Joe

On Wed, Nov 11, 2015 at 10:00 AM, Darren Govoni  wrote:

Mark,
Thanks for the tips. Appreciate it.

So when I run nifi on a single server. It is essentially "one flow"?
If I wanted to have say 2 or 3 active flows, I would (reasonably) have to
run more instances of nifi with appropriate
configuration to not conflict. Is that right?

Darren


On 11/11/2015 09:54 AM, Mark Petronic wrote:

Look in your Nifi conf directory. The active flow is there as an aptly
named .gz file. Guessing you could just rename that and restart Nifi
which would create a blank new one. Build up another flow, then you
could repeat the same "copy to new file name" and restore some other
one to continue on some previous flow. I'm pretty new to Nifi, too,
so maybe there is another way. Also, you can create point-in-time
backups of your flow from the "Settings" dialog in the DFM. There is a
link that shows up in there to click. It will copy your master flow gz
to your conf/archive directory. You can create multiple snapshots of
your flow to retain change history. I actually gunzip my backups and
commit them to Git for a more formal change history tracking
mechanism.

Hope that helps.

On Wed, Nov 11, 2015 at 9:45 AM, Darren Govoni 
wrote:

Hi again,
 Sorry for the noob questions. I am reading all the online material as
much as possible.
But what hasn't jumped out at me yet is how flows are managed?

Are they saved, loaded, etc? I access my nifi and build a flow. Now I
want
to save it and work on another flow.
Lastly, will the flow be running even if I exit the webapp?

thanks for any tips. If I missed something obvious, regrets.

D






Re: Managing flows

2015-11-11 Thread Bryan Bende
In addition to what Mark said, there is also the option of templates [1].
Templates let you export a portion, or all of your flow,
and then import it again later. When you export a template it will not
export any properties that are marked as sensitive properties,
so it is safe to share with others.

Regarding "one flow": you can have as many different logical flows within
one NiFi instance as you want, but it is all managed as one flow behind the
scenes.

[1] https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#templates



Re: Managing flows

2015-11-11 Thread Joe Witt
Darren,

A single NiFi instance (on one node or a cluster of 10+) can handle
*many* different flows.

Thanks
Joe



Re: Managing flows

2015-11-11 Thread Darren Govoni

Mark,
   Thanks for the tips. Appreciate it.

So when I run NiFi on a single server, is it essentially "one flow"?
If I wanted to have, say, 2 or 3 active flows, would I (reasonably) have to
run more instances of NiFi, configured so they don't conflict? Is that right?

Darren


Re: Managing flows

2015-11-11 Thread Mark Petronic
Regarding NiFi always running: yes, it stays running. It is effectively a
service with a REST API and a web UI. Closing the web UI has no effect on
the running processors, only on your visibility into them.



Re: Managing flows

2015-11-11 Thread Mark Petronic
Look in your NiFi conf directory. The active flow is there as an aptly
named .gz file. Guessing you could just rename that and restart NiFi,
which would create a blank new one. Build up another flow, then you
could repeat the same "copy to new file name" and restore some other
one to continue on some previous flow. I'm pretty new to NiFi, too,
so maybe there is another way. Also, you can create point-in-time
backups of your flow from the "Settings" dialog in the DFM. There is a
link that shows up in there to click. It will copy your master flow gz
to your conf/archive directory. You can create multiple snapshots of
your flow to retain change history. I actually gunzip my backups and
commit them to Git for a more formal change-history tracking
mechanism.

Hope that helps.
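Mark's gunzip-and-Git idea can be sketched as a few shell commands. This is a
hedged sketch: the directory layout and the stand-in flow.xml.gz below are
invented so the steps can be tried anywhere; on a real install you would copy
the flow.xml.gz out of your NiFi conf directory instead.

```shell
# Stand-in for a real NiFi install: a throwaway conf/flow.xml.gz.
WORK=$(mktemp -d)
mkdir -p "$WORK/conf" "$WORK/backups"
echo '<flowController/>' | gzip > "$WORK/conf/flow.xml.gz"

# 1. Copy the active flow out of conf/ and unpack it so Git can diff it.
cp "$WORK/conf/flow.xml.gz" "$WORK/backups/"
gunzip -f "$WORK/backups/flow.xml.gz"

# 2. Track the plain XML in Git for a change history.
cd "$WORK/backups"
git init -q
git add flow.xml
git -c user.email=nifi@example.com -c user.name="nifi-backup" \
    commit -q -m "Flow snapshot"
```

Each snapshot becomes a commit, so `git diff` shows exactly which processors
or properties changed between backups.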



Managing flows

2015-11-11 Thread Darren Govoni

Hi again,
   Sorry for the noob questions. I am reading all the online material as
much as possible.

But what hasn't jumped out at me yet is how flows are managed.

Are they saved, loaded, etc.? I access my NiFi and build a flow. Now I
want to save it and work on another flow.

Lastly, will the flow be running even if I exit the webapp?

Thanks for any tips. If I missed something obvious, regrets.

D


Re: Post to REST service?

2015-11-11 Thread Bryan Bende
Hello,

You should be able to use expression language in the URL value; you can
reference any attribute with ${attributeName}, so your URL could be
http://myhost/${id}

-Bryan
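
A minimal sketch of the configuration Bryan describes; the host and path are
hypothetical (only the ${id} attribute comes from the question):

```
PostHTTP processor:
  URL = http://myhost.example.com/records/${id}

# A FlowFile arriving with attribute id=42 would be posted to:
#   http://myhost.example.com/records/42
```

The substitution happens per FlowFile, so each file's own 'id' attribute
determines where its content is posted.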



Post to REST service?

2015-11-11 Thread Darren Govoni

Hi,
  I am trying to get my PostHTTP processor to post the incoming content
to a REST URL.
The incoming flowfile has an attribute 'id' that needs to be part of
the URL of the POST.

Is there a notation for parameterizing the post URL from flowfile
attributes?


thanks,
Darren


Re: Replicate flow files to multiple processors

2015-11-11 Thread Mark Payne
Oleg,

Replication simply means to make a copy of something. You're thinking of 
replication as
distributed data replication in order to provide high availability, I believe. 
What we are talking
about here is simply sending a FlowFile from Processor A to Processor B and 
also sending
that same FlowFile (or a copy of it) from Processor A to Processor C.

So when you create two connections with the same relationship, you are sending 
a copy
of the FlowFile to both connections (i.e., you are replicating it).

Thanks
-Mark
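
Mark's point that the FlowFile is "replicated" without copying its bytes is
loosely analogous to hard links: two names referencing one set of bytes. A
rough analogy in shell (not NiFi code; the file names here are invented):

```shell
# Two "FlowFiles" referencing one content claim behave like two hard
# links to one file: the payload is stored once, referenced twice.
WORK=$(mktemp -d)
echo "payload" > "$WORK/content-claim"
ln "$WORK/content-claim" "$WORK/second-flowfile"  # no bytes copied
stat -c %h "$WORK/content-claim"   # link count: 2
```

This is why fanning one relationship out to several connections is cheap:
each connection gets its own FlowFile record, not its own copy of the content.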




Re: Replicate flow files to multiple processors

2015-11-11 Thread Oleg Zhurakousky
I am still a bit confused with the problem that is being solved here.
“replication” implies some type of redundancy allowing processing that failed 
“here” to be resumed “there”.

What I am reading here is more about “content-based routing” (route to their
respective workflows based on their attribute).

Am I missing something?

Cheers
Oleg
On Nov 10, 2015, at 5:35 PM, Andrew Grande wrote:

As mentioned, simply keep connecting things together (e.g. multiple 'success' 
relationship links). For better organization, consider putting a Funnel in the 
flow and connecting to it instead of a processor.

Andrew

From: Chakrader Dewaragatla
Reply-To: "users@nifi.apache.org"
Date: Tuesday, November 10, 2015 at 3:01 PM
To: "users@nifi.apache.org"
Subject: RE: Replicate flow files to multiple processors

Thanks Mark. This should help.

 Our use case is to route traffic (flowfiles) to multiple independent
processors that inline route to their respective workflows based on their
attribute.


From: Mark Payne [marka...@hotmail.com]
Sent: Tuesday, November 10, 2015 11:45 AM
To: users@nifi.apache.org
Subject: Re: Replicate flow files to multiple processors

Chakri,

This can be done with any Processor. You can simply drag multiple connections 
that have the same Relationship.

For example, you can create a GetSFTP processor and draw a connection from 
GetSFTP to UpdateAttribute with the 'success' relationship.
and then also draw a connection from GetSFTP to PutHDFS with the 'success' 
relationship.

This will result in each FlowFile that is routed to 'success' going to both 
Processors.

NiFi does this without copying the data or anything, simply by creating a new 
FlowFile that points to the same content on disk, so
it is able to do this extremely efficiently.

Thanks
-Mark




On Nov 10, 2015, at 2:39 PM, Chakrader Dewaragatla wrote:

Hi - Do we have any built-in processor that replicates flow files to multiple
processors in parallel (in memory, not staging on disk)?
I was looking at the DistributeLoad processor, which distributes load using
weighted or round-robin techniques. I am looking for something that replicates
the flow files.

Thanks,
-Chakri

The information contained in this transmission may contain privileged and 
confidential information. It is intended only for the use of the person(s) 
named above. If you are not the intended recipient, you are hereby notified 
that any review, dissemination, distribution or duplication of this 
communication is strictly prohibited. If you are not the intended recipient, 
please contact the sender by reply email and destroy all copies of the original 
message.

