Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread Neil Derraugh
Worked like a charm, thanks!



Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread ddewaele
Sorry, payload should also include the nodeId

curl -v -X PUT \
  -d "{\"node\":{\"nodeId\":\"b89e8418-4b7f-4743-bdf4-4a08a92f3892\",\"status\":\"DISCONNECTING\"}}" \
  -H "Content-Type: application/json" \
  http://192.168.122.141:8080/nifi-api/controller/cluster/nodes/b89e8418-4b7f-4743-bdf4-4a08a92f3892
 





Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread ddewaele
You should be able to put it into a DISCONNECTED state by making the
following call:

curl -v -X PUT \
  -d "{\"node\":{\"status\":\"DISCONNECTING\"}}" \
  -H "Content-Type: application/json" \
  http://192.168.122.141:8080/nifi-api/controller/cluster/nodes/b89e8418-4b7f-4743-bdf4-4a08a92f3892
 

It should respond with an HTTP 200 and a message saying it went to state
DISCONNECTED.

That way you can access the GUI again and delete the node from the cluster
if you want to.

Tested this workaround with 1.2.0 and it works.
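
Together with the nodeId correction posted in the follow-up, the full
recovery sequence looks roughly like this (node UUID and host are the ones
from the example above; the payload must repeat the nodeId, and the final
DELETE assumes the node already reports DISCONNECTED):

# 1. Ask the cluster to mark the killed node as disconnecting:
curl -v -X PUT \
  -H "Content-Type: application/json" \
  -d "{\"node\":{\"nodeId\":\"b89e8418-4b7f-4743-bdf4-4a08a92f3892\",\"status\":\"DISCONNECTING\"}}" \
  http://192.168.122.141:8080/nifi-api/controller/cluster/nodes/b89e8418-4b7f-4743-bdf4-4a08a92f3892

# 2. Once it reports DISCONNECTED, remove it from the cluster:
curl -v -X DELETE \
  http://192.168.122.141:8080/nifi-api/controller/cluster/nodes/b89e8418-4b7f-4743-bdf4-4a08a92f3892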





Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread Joe Witt
I see.  Yeah, that sounds like something the JIRA Gilman mentioned will
resolve.  Thanks for clarifying.  I'm sure that JIRA will be addressed soon.


Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread Neil Derraugh
That's the whole problem from my perspective: it stays CONNECTED.  It never
becomes DISCONNECTED.  You can't delete it from the API in 1.2.0.

That's why I said it was a single point of failure.  The exact semantics of
calling it a single point of failure might be debatable, but the fact that
the cluster can't be modified and/or gracefully shut down (AFAIK) is what I
was referring to.


Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread Joe Witt
I believe at the state you describe that down node is now considered
disconnected.  The cluster behavior prohibits you from making changes when
it knows that not all members of the cluster can honor the change.  If you
are sure you want to make the changes anyway and move on without that node,
you should be able to remove/delete it from the cluster.  Now you have a
cluster of two connected nodes and you can make changes.


Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread Neil Derraugh
That's fair.  But for the sake of total clarity on my own part, after one
of these disaster scenarios with a newly quorum-elected primary, things
cannot be driven through the UI, and at least parts of the REST API fail.

I just ran through the following.  We have 3 nodes A, B, C with A primary,
and A becomes unreachable without first disconnecting.  Then B and C may (I
haven't verified) continue operating the flow they had in the cluster's
last "good" state.  But they do elect a new primary, as per the REST
nifi-api/controller/cluster response.  But now the flow can't be changed,
and in some cases it can't be reported on either, e.g. some GETs fail, like
nifi-api/flow/process-groups/root.

Are we describing the same behavior?


Re: how to set version for NIFI customized Processor?

2017-05-19 Thread Bryan Bende
Can you provide the contents of the MANIFEST file of your NAR after
deploying your updated NAR?

The MANIFEST file should be at:

nifi_home/work/nar/extensions/<your-nar>-unpacked/META-INF/MANIFEST.MF
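
For reference, the bundle entries of interest in that MANIFEST should look
roughly like the following for a NAR built with the 1.2.0 plugin (values
here are illustrative):

Nar-Group: org.apache.nifi.overridden
Nar-Id: nifi-overridden-test-nar
Nar-Version: 2.0
Nar-Dependency-Group: org.apache.nifi
Nar-Dependency-Id: nifi-standard-services-api-nar
Nar-Dependency-Version: 1.2.0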



Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread Joe Witt
If there is no longer a quorum then we cannot drive things from the UI,
but the remaining cluster is intact from a functioning point of view,
other than being able to assign a primary to handle the one-off items.


Re: Nifi Cluster fails to disconnect node when node was killed

2017-05-19 Thread Neil Derraugh
Hi Joe,

Maybe I'm missing something, but if the primary node suffers a network
partition or container/vm/machine loss or becomes otherwise unreachable
then the cluster is unusable, at least from the UI.

If that's not so please correct me.

Thanks,
Neil

On Thu, May 18, 2017 at 9:56 PM, Joe Witt  wrote:

> Neil,
>
> Want to make sure I understand what you're saying.  What are you stating
> is a single point of failure?
>
> Thanks
> Joe
>
> On Thu, May 18, 2017 at 5:27 PM, Neil Derraugh
>  wrote:
> > Thanks for the insight Matt.
> >
> > It's a disaster recovery issue.  It's not something I plan on doing on
> > purpose.  It seems it is a single point of failure unfortunately.  I can
> > see no other way to resolve the issue other than to blow everything away
> > and start a new cluster.
> >
> > On Thu, May 18, 2017 at 2:49 PM, Matt Gilman 
> > wrote:
> >>
> >> Neil,
> >>
> >> Disconnecting a node prior to removal is the correct process. It appears
> >> that the check was lost going from 0.x to 1.x. Folks reported this JIRA
> >> [1] indicating that deleting a connected node did not work. This process
> >> does not work because the node needs to be disconnected first. The JIRA
> >> was addressed by restoring the check that a node is disconnected prior
> >> to deletion.
> >>
> >> Hopefully the JIRA I filed earlier today [2] will address the phantom
> >> node you were seeing. Until then, can you update your workaround to
> >> disconnect the node in question prior to deletion?
> >>
> >> Thanks
> >>
> >> Matt
> >>
> >> [1] https://issues.apache.org/jira/browse/NIFI-3295
> >> [2] https://issues.apache.org/jira/browse/NIFI-3933
> >>
> >> On Thu, May 18, 2017 at 12:29 PM, Neil Derraugh
> >>  wrote:
> >>>
> >>> Pretty sure this is the problem I was describing in the "Phantom Node"
> >>> thread recently.
> >>>
> >>> If I kill non-primary nodes the cluster remains healthy despite the
> >>> lost nodes.  The terminated nodes end up with a DISCONNECTED status.
> >>>
> >>> If I kill the primary it winds up with a CONNECTED status, but a new
> >>> primary/cluster coordinator gets elected too.
> >>>
> >>> Additionally it seems in 1.2.0 that the REST API no longer supports
> >>> deleting a node in a CONNECTED state (Cannot remove Node with ID
> >>> 1780fde7-c2f4-469c-9884-fe843eac5b73 because it is not disconnected,
> >>> current state = CONNECTED).  So right now I don't have a workaround
> >>> and have to kill all the nodes and start over.
> >>>
> >>> On Thu, May 18, 2017 at 11:20 AM, Mark Payne 
> >>> wrote:
> 
>  Hello,
> 
>  Just looking through this thread now. I believe that I understand the
>  problem. I have updated the JIRA with details about what I think is the
>  problem and a potential remedy for the problem.
> 
>  Thanks
>  -Mark
> 
>  > On May 18, 2017, at 9:49 AM, Matt Gilman 
>  > wrote:
>  >
>  > Thanks for the additional details. They will be helpful when working
>  > the JIRA. All nodes, including the coordinator, heartbeat to the active
>  > coordinator. This means that the coordinator effectively heartbeats to
>  > itself. It appears, based on your log messages, that this is not
>  > happening. Because no heartbeats were received from any node, the lack
>  > of heartbeats from the terminated node is not considered.
>  >
>  > Matt
>  >
>  > Sent from my iPhone
>  >
>  >> On May 18, 2017, at 8:30 AM, ddewaele  wrote:
>  >>
>  >> Found something interesting in the centos-b debug logging
>  >>
>  >> after centos-a (the coordinator) is killed centos-b takes over.
>  >> Notice how it "Will not disconnect any nodes due to lack of heartbeat"
>  >> and how it still sees centos-a as connected despite the fact that there
>  >> are no heartbeats anymore.
>  >>
>  >> 2017-05-18 12:41:38,010 INFO [Leader Election Notification Thread-2]
>  >> o.apache.nifi.controller.FlowController This node elected Active
>  >> Cluster Coordinator
>  >> 2017-05-18 12:41:38,010 DEBUG [Leader Election Notification Thread-2]
>  >> o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Purging old heartbeats
>  >> 2017-05-18 12:41:38,014 INFO [Leader Election Notification Thread-1]
>  >> o.apache.nifi.controller.FlowController This node has been elected
>  >> Primary Node
>  >> 2017-05-18 12:41:38,353 DEBUG [Heartbeat Monitor Thread-1]
>  >> o.a.n.c.c.h.AbstractHeartbeatMonitor Received no new heartbeats. Will
>  >> not disconnect any nodes due to lack of heartbeat
>  >> 2017-05-18 12:41:41,336 DEBUG [Process Cluster Protocol Request-3]
>  >> o.a.n.c.c.h.ClusterProtocolHeartbeatMonitor Received 

Re: Nifi clusters : duplicate nodes shown in cluster overview

2017-05-19 Thread Joe Witt
That is correct.



Re: Nifi clusters : duplicate nodes shown in cluster overview

2017-05-19 Thread ddewaele
We're using docker, and in our failover scenario the machine is rebooted
and/or the docker system is restarted.

We're currently volume mapping the following:

  - /srv/nifi/flow/archive:/opt/nifi/nifi-1.2.0/conf/archive:Z
  - /srv/nifi/flows:/opt/nifi/nifi-1.2.0/conf/flows:Z
  - /srv/nifi/content_repository:/opt/nifi/nifi-1.2.0/content_repository:Z
  - /srv/nifi/database_repository:/opt/nifi/nifi-1.2.0/database_repository:Z
  - /srv/nifi/flowfile_repository:/opt/nifi/nifi-1.2.0/flowfile_repository:Z
  - /srv/nifi/provenance_repository:/opt/nifi/nifi-1.2.0/provenance_repository:Z
  - /srv/nifi/work:/opt/nifi/nifi-1.2.0/work:Z
  - /srv/nifi/logs:/opt/nifi/nifi-1.2.0/logs:Z

Are you referring to the local state management provider value (default
/opt/nifi/nifi-1.2.0/state/local)?

If so, I guess volume mapping that folder should fix it? Would that be the
right thing to do?
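
Concretely, that would presumably be one more mapping in the same style as
the list above (assuming the default state location under the NiFi home):

  - /srv/nifi/state:/opt/nifi/nifi-1.2.0/state:Z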






Re: Nifi clusters : duplicate nodes shown in cluster overview

2017-05-19 Thread Joe Witt
When a node joins a cluster it writes its node identifier into its local
state.  This is in a state directory.  Is that directory being removed?  If
so it will get a new identifier.  Otherwise when the node is started it
will reuse that identifier.
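
For reference, the directory in question is configured in
conf/state-management.xml; in a stock 1.2.0 install the local provider
points at ./state/local under the NiFi home, roughly:

<local-provider>
    <id>local-provider</id>
    <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
    <property name="Directory">./state/local</property>
</local-provider>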



Re: How to process files sequentially?

2017-05-19 Thread prabhu Mahendran
Sorry Koji,

I am using NiFi on Windows, so I am not able to use ExecuteStreamCommand
for it.

It shows the error "The system cannot find the path specified".

Is there any other way of doing this on Windows?
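
One untested possibility on Windows would be to run the directory count
from Koji's flow through cmd.exe instead of a Unix shell, e.g. via a small
helper script (countfiles.bat, C:\scripts and C:\data are hypothetical
paths):

@echo off
rem Emit the number of files in the input directory
dir /b C:\data | find /c /v ""

with ExecuteStreamCommand configured along the lines of:

Command Path: cmd.exe
Command Arguments: /c;C:\scripts\countfiles.bat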



Re: How to process files sequentially?

2017-05-19 Thread Koji Kawamura
Hi Prabhu,

I just used MergeContent to confirm the test result. In your case, I
thought the goal is sending queries to SQL Server in order, so I think
you don't have to use MergeContent.

Having said that, I came up with an idea. This is kind of a hack, but by
using ExecuteStreamCommand to count the number of files in a directory and
MergeContent's 'Defragment' mode, I was able to merge content without
using a static minimum number of files.

Here is another template for that example:
https://gist.githubusercontent.com/ijokarumawak/7e6158460cfcb0b5911acefbb455edf0/raw/967a051177878a98f5ddb57653478c6091a7b23c/process-files-in-order-and-defrag.xml
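
The reason the directory count makes 'Defragment' work: Defragment mode
keys off MergeContent's standard fragment attributes, so upstream of
MergeContent each file has to carry roughly the following (values here are
illustrative):

fragment.identifier = an id shared by the whole batch
fragment.index      = this file's position within the batch
fragment.count      = total number of files, i.e. the directory count

With fragment.count computed at runtime, no static minimum number of
entries is needed.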

Thanks,
Koji



Re: How to process files sequentially?

2017-05-19 Thread prabhu Mahendran
Hi Koji,

Thanks for your mail.

In your template I have one query: if the number of files picked up by
GetFile is unknown, then the MergeContent processor could not work, right?
Because you have specified the maximum number of bins to be 5.

But in my case the number of files is dynamic. In that case MergeContent
will fail to merge.

Please correct me if I have understood anything wrong.

How can I give a dynamic number of entries per bin for MergeContent, given
that currently there is no Expression Language support for it?



Re: How to process files sequentially?

2017-05-19 Thread Koji Kawamura
Hi Prabhu,

I think you can use the EnforceOrder processor, which is available since
1.2.0, without the Wait/Notify processors.

Here is a sample flow I tested showing how it can be used for use-cases
like yours:
https://gist.github.com/ijokarumawak/7e6158460cfcb0b5911acefbb455edf0
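
If the gist is unavailable, the general shape of such a flow is roughly
the following (the attribute name is illustrative; property names are
EnforceOrder's):

GetFile
  -> UpdateAttribute: derive a numeric file.index from the filename
     (e.g. file123.csv -> 123)
  -> EnforceOrder: Order Attribute = file.index, Initial Order = 1
  -> the existing SQL Server steps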

Thanks,
Koji

On Fri, May 19, 2017 at 1:52 PM, prabhu Mahendran
 wrote:
> I have approximately 1000 files on a local drive. I need to move those
> files into SQL Server one after another.
>
> The local drive has files like file1.csv, file2.csv, ... up to
> file1000.csv, and I am sure the number of files on the drive may change
> dynamically.
>
> I was able to create a template to move the files into SQL Server, but I
> have to process file2 only when file1 has been completely moved into SQL
> Server.
>
> Is this possible in NiFi without using the Wait/Notify processors?
>
> Can anyone please guide me to solve this?


Nifi clusters : duplicate nodes shown in cluster overview

2017-05-19 Thread ddewaele
We have a 2 node cluster (centos-a / centos-b).  During one of our failover
tests, we noticed that when we rebooted centos-b, sometimes "duplicate" node
entries can be seen in the cluster.

We rebooted centos-b, and when it came back online NiFi saw 2 out of 3
nodes connected in the cluster.

centos-b was added twice (using different nodeIds).

1. centos-b : 05/19/2017 06:48:51 UTC : Node disconnected from cluster due
to Have not received a heartbeat from node in 44 seconds
2. centos-b : 05/19/2017 07:42:54 UTC : Received first heartbeat from
connecting node. Node connected.

Is this by design? In this case (and I assume in most cases), an address /
apiPort combo should uniquely identify a particular node. Why does it get
assigned a new nodeId?

As a result, we need to manually disconnect the duplicate centos-b entry.


Output of the cluster REST endpoint:

{
  "cluster": {
"nodes": [
  {
"nodeId": "62be0e80-306a-4037-80e5-b4def5fbc78e",
"address": "centos-b",
"apiPort": 8080,
"status": "DISCONNECTED",
"roles": [],
"events": [
  {
"timestamp": "05/19/2017 06:48:51 UTC",
"category": "WARNING",
"message": "Node disconnected from cluster due to Have not
received a heartbeat from node in 44 seconds"
  },
  {
"timestamp": "05/18/2017 13:33:56 UTC",
"category": "INFO",
"message": "Node Status changed from CONNECTING to CONNECTED"
  }
]
  },
  {
"nodeId": "d41d71f2-0ab4-4d6e-bbf2-793bd4faad06",
"address": "centos-a",
"apiPort": 8080,
"status": "CONNECTED",
"heartbeat": "05/19/2017 07:44:39 UTC",
"roles": [
  "Primary Node",
  "Cluster Coordinator"
],
"activeThreadCount": 0,
"queued": "0 / 0 bytes",
"events": [
  {
"timestamp": "05/18/2017 13:33:56 UTC",
"category": "INFO",
"message": "Node Status changed from CONNECTING to CONNECTED"
  }
],
"nodeStartTime": "05/18/2017 13:33:51 UTC"
  },
  {
"nodeId": "ddd371c7-2618-4079-8c61-ee30245d15cc",
"address": "centos-b",
"apiPort": 8080,
"status": "CONNECTED",
"heartbeat": "05/19/2017 07:44:36 UTC",
"roles": [],
"activeThreadCount": 0,
"queued": "0 / 0 bytes",
"events": [
  {
"timestamp": "05/19/2017 07:42:54 UTC",
"category": "INFO",
"message": "Received first heartbeat from connecting node. Node
connected."
  },
  {
"timestamp": "05/19/2017 07:42:47 UTC",
"category": "INFO",
"message": "Connection requested from existing node. Setting
status to connecting."
  }
],
"nodeStartTime": "05/19/2017 07:42:40 UTC"
  }
],
"generated": "07:44:39 UTC"
  }
}





Re: how to set version for NIFI customized Processor?

2017-05-19 Thread 尹文才
Hi Bryan, I could set a specific version for my customized processor, but
there's a side effect: the versions of the NiFi components I referenced in
my processor are also updated to the same version.
I actually referenced DBCPService in my processor and set the version to an
arbitrary version, 3.0, and DBCPConnectionPool now shows as 3.0 in NiFi. Is
there any way to work around this problem? Thanks.

2017-05-19 9:04 GMT+08:00 尹文才 :

> Thanks Bryan and Joe, I managed to set the specific version for my
> processor with the properties.
>
> 2017-05-18 20:50 GMT+08:00 Bryan Bende :
>
>> As Joe mentioned, the default behavior is to use the Maven group,
>> artifact, and version, which will happen by default if you build your
>> NAR with the latest NAR Maven plugin (version 1.2.0 at this time).
>>
>> If you prefer to set different values than the Maven fields, you can
>> override them by specifying the following properties in your NAR's
>> pom.xml:
>>
>> <properties>
>>   <narGroup>org.apache.nifi.overridden</narGroup>
>>   <narId>nifi-overridden-test-nar</narId>
>>   <narVersion>2.0</narVersion>
>> </properties>
>>
>> Again, you only need to do this if you want your NAR to show up in
>> NiFi differently than your Maven project.
>>
>>
>> On Thu, May 18, 2017 at 8:05 AM, Joe Witt  wrote:
>> > Just rebuild it with the latest nifi nar maven plugin and it will get
>> > the version info at that time.
>> >
>> > thanks
>> >
>> > On Thu, May 18, 2017 at 4:20 AM, 尹文才  wrote:
>> >> Thanks Joe, is it possible to set a specific version for a customized
>> >> NiFi processor and show the version in that processor selection dialog?
>> >>
>> >> 2017-05-18 10:42 GMT+08:00 Joe Witt :
>> >>>
>> >>> Hello
>> >>>
>> >>> They will remain unversioned until they are built with the latest nar
>> >>> plugin.  This is described briefly in the migration guidance [1].
>> >>>
>> >>> For anything built with the older nifi nar plugin the resulting nar
>> >>> will not have sufficient bundle data for the framework to have it
>> >>> support component versioning, so that is why it shows as unversioned.
>> >>>
>> >>> [1] https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance
>> >>>
>> >>> Thanks
>> >>> Joe
>> >>>
>> >>> On Wed, May 17, 2017 at 10:37 PM, 尹文才  wrote:
>> >>> > Hi guys, I have installed NiFi 1.2 and have played with it for a
>> >>> > while. One thing I noticed in 1.2 is that when I placed my
>> >>> > previously written customized Processor into 1.2, I could see in
>> >>> > the Processor select dialog there's a Version field for each
>> >>> > processor, and the value for my processor is unversioned. So does
>> >>> > anyone know how to set the version for a customized processor?
>> >>> > Thanks.
>> >>
>> >>
>>
>
>