
Re: Securing Nifi 1.11.4

2021-01-09 Thread dan young
Did you grant access to the provenance events to all the cluster nodes too?

On Fri, Jan 8, 2021, 11:35 PM Cenk Aktas  wrote:

> Hello,
>
>
>
> We configured the admin UI with SSL using a self-signed certificate.
>
> Only one user is defined and it has all policies.
>
> We are not able to see any provenance records although it has the query
> provenance access.
>
>
>
> We also tried deleting all provenance data and restarting; it didn't work.
>
>
>
> Any suggestions?
>
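For anyone landing here later: the global "query provenance" policy is read access on the /provenance resource, and in a cluster each node's certificate identity needs to be granted it alongside the user. A minimal sketch of inspecting that policy over the REST API, with hypothetical host and certificate paths:

# Hypothetical host and cert paths; lists the users and groups holding
# the global "query provenance" policy (read on /provenance)
curl --cert node-cert.pem --key node-key.pem \
  "https://nifi.example.com:9443/nifi-api/policies/read/provenance"

Every cluster node identity should appear there in addition to the user.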


Access Policy required to call REST endpoint?

2020-08-17 Thread dan young
Hello,

I have a secure cluster that we just brought up, and have a flow that
downloads the main Process groups via:

GET   /process-groups/{id}/download

Gets a process group for download

but we're seeing a 403 error. I think it's a policy issue, but I'm not sure
which policy it should be or where it should be set.  The GET via
/nifi-api/controller works fine...

Regards,

Dano
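For context, a minimal sketch of the failing call, with hypothetical host, certificate, and process group id; one likely candidate for the 403 is a missing read ("view the component") policy on the process group being downloaded:

# Hypothetical host, certs, and id; returns the flow definition as JSON
curl --cert user-cert.pem --key user-key.pem \
  "https://nifi.example.com:9443/nifi-api/process-groups/<pg-id>/download" \
  -o flow-definition.json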


Re: cluster stuck in "Attempted to register Leader Election for role 'Cluster Coordinator' but this role is already registered"

2020-08-05 Thread dan young
On a related note, I noticed that the ACLs are getting set, but for
each znode under /nifi, a read ACL for world is also being set.  Is there
a way to have NiFi set only the SASL ACL?

zk: nifi1-5.X.net:2181(CONNECTED) 12] getAcl /nifi
'sasl,'n...@x.net
: cdrwa
'world,'anyone
: r
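As a manual workaround, the world entry can be dropped from zkCli once the session is SASL-authenticated; setAcl replaces the whole ACL list, so setting just the sasl entry removes the world read. A minimal sketch (untested here; NiFi's Curator layer may well re-add the world entry on the next restart):

setAcl /nifi sasl:n...@x.net:cdrwa
getAcl /nifi

getAcl should then list only the sasl entry.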

On Wed, Aug 5, 2020 at 1:56 PM Mark Payne  wrote:

> No worries, thanks for following up and letting us know!
>
> Thanks
> -Mark


Re: cluster stuck in "Attempted to register Leader Election for role 'Cluster Coordinator' but this role is already registered"

2020-08-05 Thread dan young
Hello,

Sorry for all the noise...d'oh, it was due to the realm in the
jaas.conf being lowercase...I'm a knucklehead...

Dano

On Wed, Aug 5, 2020 at 1:12 PM Bryan Bende  wrote:

> I don't see how this would relate to the problem, but shouldn't the ACL be
> set to "Creator" when Sasl/Kerberos is set up correctly?
>
> In addition to the nifi configs you showed, you would also need a jaas
> conf file specified in bootstrap.conf and in that file you would need the
> jaas entry for the ZK client.


Re: cluster stuck in "Attempted to register Leader Election for role 'Cluster Coordinator' but this role is already registered"

2020-08-05 Thread dan young
Hello Bryan,

Same issue.  I have a jaas.conf and the config in bootstrap.conf:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/opt/nifi/conf/nifi.keytab"
  storeKey=true
  useTicketCache=false
  principal="n...@x.net ";
};


java.arg.16=-Djavax.security.auth.useSubjectCredsOnly=true

# Zookeeper 3.5 now includes an Admin Server that starts on port 8080; since
# NiFi is already using that port, disable it by default.
# Please see https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_adminserver_config
# for configuration options.
java.arg.17=-Dzookeeper.admin.enableServer=false
java.arg.18=-Djava.security.auth.login.config=/opt/nifi/conf/jaas.conf
java.arg.19=-Dsun.security.krb5.debug=true



I just noticed that the realm here is in lowercase; let me change
that...maybe that's an issue.
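For reference, a minimal sketch of the corrected Client entry; the only change is the realm's case, which has to match the keytab exactly (X.NET stands in for the masked domain):

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/opt/nifi/conf/nifi.keytab"
  storeKey=true
  useTicketCache=false
  principal="n...@X.NET";
};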


On Wed, Aug 5, 2020 at 1:12 PM Bryan Bende  wrote:

> I don't see how this would relate to the problem, but shouldn't the ACL be
> set to "Creator" when Sasl/Kerberos is set up correctly?
>
> In addition to the nifi configs you showed, you would also need a jaas
> conf file specified in bootstrap.conf and in that file you would need the
> jaas entry for the ZK client.


Re: cluster stuck in "Attempted to register Leader Election for role 'Cluster Coordinator' but this role is already registered"

2020-08-05 Thread dan young
I'll make that change; I was going off the comments in the
state-management.xml:

-Open
-CreatorOnly

Let me try Creator...

On Wed, Aug 5, 2020 at 1:12 PM Bryan Bende  wrote:

> I don't see how this would relate to the problem, but shouldn't the ACL be
> set to "Creator" when Sasl/Kerberos is set up correctly?
>
> In addition to the nifi configs you showed, you would also need a jaas
> conf file specified in bootstrap.conf and in that file you would need the
> jaas entry for the ZK client.


Re: cluster stuck in "Attempted to register Leader Election for role 'Cluster Coordinator' but this role is already registered"

2020-08-05 Thread dan young
Hello Mark,

Attached is a dump from one of the nodes...I replaced the domain-related
entries with X/x.  I'm not sure if it's relevant or not, but I did notice
that in the log there are entries "Looking for keys for n...@x.net" where the x
(domain) is lowercase, whereas in the keytab file it's uppercase X.  Also
not sure if the "Found unsupported keytype (1)" is meaningful.  Note that when
I delete the znode in ZooKeeper, at least the initial znode, /nifi, is created,
but we never see the other typical suspects, i.e. Coordinator,
Primary, etc...

Seems to be something stuck in Curator???

Regards.

Dano

On Wed, Aug 5, 2020 at 12:20 PM Mark Payne  wrote:

> Dan,
>
> Can you grab a thread dump and provide that? Specifically, the “main”
> thread is the important one with startup. The note that the role is already
> registered is normal. It probably could be changed to a DEBUG level,
> really. It should not be concerning. A thread dump, though, would show us
> exactly where it’s at.
>
> Thanks
> -Mark
>


bootstrap-dump.log.gz
Description: GNU Zip compressed data


cluster stuck in "Attempted to register Leader Election for role 'Cluster Coordinator' but this role is already registered"

2020-08-05 Thread dan young
Hello,
Running NiFi 1.11.4 in a 3-node secure cluster with Kerberos/SASL enabled;
upon trying to start up the cluster, the nodes seem to get stuck in:

2020-08-05 17:10:18,907 WARN [main] o.a.nifi.controller.StandardFlowService
There is currently no Cluster Coordinator. This often happens upon restart
of NiFi
 when running an embedded ZooKeeper. Will register this node to become the
active Cluster Coordinator and will attempt to connect to cluster again
2020-08-05 17:10:18,907 INFO [main]
o.a.n.c.l.e.CuratorLeaderElectionManager
CuratorLeaderElectionManager[stopped=false] Attempted to register Leader
Election
 for role 'Cluster Coordinator' but this role is already registered



I've checked ZooKeeper and I can see that the /nifi znode has been created,
although empty, and the ACLs seem to look correct:
zk: nifi1-5.X.net:2181(CONNECTED) 3] getAcl /nifi
'sasl,'n...@x.net
: cdrwa
'world,'anyone
: r


Relevant NiFi config settings:

nifi.properties:

nifi.zookeeper.auth.type=sasl
nifi.zookeeper.kerberos.removeHostFromPrincipal=true
nifi.zookeeper.kerberos.removeRealmFromPrincipal=false

# kerberos #
nifi.kerberos.krb5.file=/etc/krb5.conf

# kerberos service principal #
nifi.kerberos.service.principal=n...@x.net
nifi.kerberos.service.keytab.location=/opt/nifi/conf/nifi.keytab


state-management.xml

<cluster-provider>
   <id>zk-provider</id>
   <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
   <property name="Root Node">/nifi</property>
   <property name="Session Timeout">30 seconds</property>
   <property name="Access Control">CreatorOnly</property>
   <property name="Connect String">X:2181,Y:2181,Z:2181</property>
</cluster-provider>



KRB5_TRACE=/dev/stdout kinit -k -t /opt/nifi/conf/nifi.keytab n...@x.net
...
...

klist
Ticket cache: FILE:/tmp/krb5cc_2004
Default principal: n...@x.net

Valid starting   Expires  Service principal
08/05/2020 17:57:02  08/06/2020 03:57:02  krbtgt/x@x.net
renew until 08/06/2020 17:57:02




As a side note, secure NiFi was working fine before the Kerberos bit. I've
been beating my head against the wall with it for the day, but the
Kerberos/ZooKeeper stuff seems to be working now.
Do we need to have server-to-server ZooKeeper auth working for this?


Appreciate any insight

Regards,

Dano
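Since this thread's eventual root cause was a lowercase realm in jaas.conf, one quick check worth recording here: list the principals actually stored in the keytab and compare the realm's case character for character against nifi.kerberos.service.principal and the jaas.conf entry:

# Lists every principal in the keytab, including its realm as stored
klist -kt /opt/nifi/conf/nifi.keytab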


Re: zookeeper SASL/Digest authentication....

2020-07-06 Thread dan young
"So right now the /nifi z-node exists with the ACL you showed earlier for
'digest,'nifi:the-passwd-digest' , but then '/nifi/components' doesn't
exist yet?" correctwe see the leaders, and under that there's the
Cluster Coordinator and the Primary Node children.  When I start the
GenerateTableFetch, that's when we get the error.  On other clusters we're
running  (with Open) we see the component id/UUID under the
/nifi/components/   I can do a get on them and see the table and
the Max-value columns value.
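For comparison, a minimal zkCli sketch of that inspection on a cluster where it works (the UUID is the processor id from the error later in this thread):

ls /nifi/components
get /nifi/components/2a056fd7-b63c-33b4-a5c4-bf767c1a2983

With Open, the get returns the serialized component state, which is where the table and max-value column entries show up.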



On Mon, Jul 6, 2020 at 1:06 PM Bryan Bende  wrote:

> So right now the /nifi z-node exists with the ACL you showed earlier for
> 'digest,'nifi:the-passwd-digest', but then '/nifi/components' doesn't
> exist yet?
>
> The one difference from a code perspective is that /nifi and the cluster
> nodes are created by Curator, and the state provider is done using plain ZK
> client code, although there's no reason why that shouldn't work.
>
> I'm no ZK expert, but the code that is causing the error is a call to
> "create(path, data, acls, CreateMode.PERSISTENT)" where "acls" is a list
> with one element of  Ids.CREATOR_ALL_ACL which has
> /**
>  * This Id is only usable to set ACLs. It will get substituted with the
>  * Id's the client authenticated with.
>  */
> public final Id AUTH_IDS = new Id("auth", "");
>
> Any ZK client code should be seeing the same JAAS entry you configured, so
> not sure how it could be authenticating as different identities.
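That substitution is easy to see from zkCli: creating a node whose ACL uses the "auth" scheme (what CREATOR_ALL_ACL amounts to) is only valid once the session has authenticated. A minimal sketch with a placeholder password, as an illustration rather than anything from this thread:

create /nifi/components data auth::cdrwa
addauth digest nifi:the-password
create /nifi/components data auth::cdrwa
getAcl /nifi/components

The first create should fail with KeeperErrorCode = InvalidACL, the same error NiFi reports in this thread; after addauth, the create succeeds and getAcl shows the digest identity with cdrwa.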

Re: zookeeper SASL/Digest authentication....

2020-07-06 Thread dan young
Correct, the leader seems to work, but not the components, it seems. Is
there some additional config setting I might be missing?




   
<local-provider>
   <id>local-provider</id>
   <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
   <property name="Directory">/opt/nifi-configuration-resources/state/local</property>
   <property name="Always Sync">false</property>
   <property name="Partitions">16</property>
   <property name="Checkpoint Interval">2 mins</property>
</local-provider>
<cluster-provider>
   <id>zk-provider</id>
   <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
   <property name="Root Node">/nifi</property>
   <property name="Session Timeout">30 seconds</property>
   <property name="Access Control">CreatorOnly</property>
   <property name="Connect String">xx.xxx.x.xxx:2181,xx.xxx.x.xxx:2181,xx.xxx.x.xxx:2181</property>
</cluster-provider>





2020-07-06 18:25:04,830 ERROR [Timer-Driven Process Thread-3]
o.a.n.p.standard.GenerateTableFetch
GenerateTableFetch[id=2a056fd7-b63c-33b4-a5c4-bf767c1a2983]
GenerateTableFetch[id=2a056fd7-b63c-33b4-a5c4-bf767c1a2983] failed to
update State Manager, observed maximum values will not be recorded. Also,
any generated SQL statements may be duplicated.: java.io.IOException:
Failed to set cluster-wide state in ZooKeeper for component with ID
2a056fd7-b63c-33b4-a5c4-bf767c1a2983
java.io.IOException: Failed to set cluster-wide state in ZooKeeper for
component with ID 2a056fd7-b63c-33b4-a5c4-bf767c1a2983
at
org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:343)
at
org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:283)
at
org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:228)
at
org.apache.nifi.controller.state.manager.StandardStateManagerProvider$1.setState(StandardStateManagerProvider.java:298)
at
org.apache.nifi.controller.state.StandardStateManager.setState(StandardStateManager.java:79)
at
org.apache.nifi.controller.lifecycle.TaskTerminationAwareStateManager.setState(TaskTerminationAwareStateManager.java:64)
at
org.apache.nifi.processors.standard.GenerateTableFetch.onTrigger(GenerateTableFetch.java:555)
at
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
at
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
at
org.apache.nifi.controller.scheduling.QuartzSchedulingAgent$2.run(QuartzSchedulingAgent.java:151)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.zookeeper.KeeperException$InvalidACLException:
KeeperErrorCode = InvalidACL for
/nifi/components/2a056fd7-b63c-33b4-a5c4-bf767c1a2983
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:128)
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
at
org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.createNode(ZooKeeperStateProvider.java:360)
at
org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:321)
... 17 common frames omitted




On Mon, Jul 6, 2020 at 11:59 AM Bryan Bende  wrote:

> You set CreatorOnly in the ZK state manager?

Re: zookeeper SASL/Digest authentication....

2020-07-06 Thread dan young
Fat fingered... any insight into this error from the GenerateTableFetch?

Failed to set cluster-wide state in Zookeeper...
...
...

Caused by: org.apache.zookeeper.KeeperException$InvalidACLException:
KeeperErrorCode = InvalidACL for
/nifi/components/2a056fd7-b63c-33b4-a5c4-bf767c1a2983
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:128)
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
at
org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.createNode(ZooKeeperStateProvider.java:360)
at
org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:321)
... 17 common frames omitted


On Mon, Jul 6, 2020 at 11:39 AM dan young  wrote:

> Hello Bryan,
>
> Making some progress...any insight into this error with the
> GenerateTableFetch processor?
>
>
> On Mon, Jul 6, 2020 at 10:47 AM Bryan Bende  wrote:
>
>> Have you configured this in nifi.properties?
>>
>> nifi.zookeeper.auth.type=sasl

Re: zookeeper SASL/Digest authentication....

2020-07-06 Thread dan young
Hello Bryan,

Making some progress...any insight into this error with the
GenerateTableFetch processor?


On Mon, Jul 6, 2020 at 10:47 AM Bryan Bende  wrote:

> Have you configured this in nifi.properties?
>
> nifi.zookeeper.auth.type=sasl


Re: zookeeper SASL/Digest authentication....

2020-07-06 Thread dan young
Arrgh...sorry Bryan...seems like I had the principal borked...I've fixed
that and the permissions look better now.  Going to test a processor that
stores state via ZK...



On Mon, Jul 6, 2020 at 10:47 AM Bryan Bende  wrote:

> Have you configured this in nifi.properties?
>
> nifi.zookeeper.auth.type=sasl


Re: zookeeper SASL/Digest authentication....

2020-07-06 Thread dan young
Yes, and I set the nifi.kerberos.service.principal to nifi, but when I did
that I'm getting a java.lang.StringIndexOutOfBoundsException: String
index out of range: -1

Let me look at the code for setting the principal; maybe I have that
borked...
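The -1 index suggests the parser is looking for the '@' separator and not finding one in the bare name; a minimal sketch of the expected form, with a placeholder realm (an assumption, not something confirmed in the thread):

nifi.kerberos.service.principal=nifi@EXAMPLE.NET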



On Mon, Jul 6, 2020 at 10:47 AM Bryan Bende  wrote:

> Have you configured this in nifi.properties?
>
> nifi.zookeeper.auth.type=sasl


Re: zookeeper SASL/Digest authentication....

2020-07-06 Thread dan young
Hello,

And a follow-up on this: if I delete the znode in ZooKeeper, the leaders node
is written under the /nifi znode, but the ACL is open, 'world,'anyone.  I do
have the Access Control set to CreatorOnly in the state-management.xml.  So
one question: is CreatorOnly only supported when we run in a Kerberos env?

Dano
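Later messages in this thread suggest CreatorOnly does work with DIGEST once the client login is right; a minimal sketch of the ZooKeeper digest JAAS Client entry (username and password are placeholders, an assumption about the setup rather than the poster's exact file):

Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="nifi"
  password="the-password";
};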

On Mon, Jul 6, 2020 at 10:36 AM dan young  wrote:

> Hello everyone,
>
> I'm trying to configure the ZooKeeper state provider in NiFi to use the
> Access Policy of CreatorOnly vs Open, using DIGEST vs Kerberos.  I believe
> I've set up ZooKeeper correctly for this, and partly NiFi, but when I
> start up the NiFi cluster, we seem to get stuck with the following:
>
> 2020-07-06 16:06:20,826 WARN [Clustering Tasks Thread-1]
> o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
> org.apache.nifi.cluster.protocol.ProtocolException: Cannot send heartbeat
> because there is no Cluster Coordinator currently elected
> 2020-07-06 16:06:35,920 WARN [Clustering Tasks Thread-2]
> o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
> org.apache.nifi.cluster.protocol.ProtocolException: Cannot send heartbeat
> because there is no Cluster Coordinator currently elected
> 2020-07-06 16:06:50,923 WARN [Clustering Tasks Thread-2]
> o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
> org.apache.nifi.cluster.protocol.ProtocolException: Cannot send heartbeat
> because there is no Cluster Coordinator currently elected
> 2020-07-06 16:07:06,071 WARN [Clustering Tasks Thread-2]
> o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
> org.apache.nifi.cluster.protocol.ProtocolException: Cannot send heartbeat
> because there is no Cluster Coordinator currently elected
>
> I can see the znode in zookeeper, and it appears to at least have the
> correct permissions.  I created this znode in the CLI:
>
> addauth digest nifi:
> create /nifi data digest:nifi:cdrwa
>
> The digest was generated via:
>
> java -cp
> '/opt/zookeeper/lib/zookeeper-3.5.8.jar:/opt/zookeeper/lib/slf4j-api-1.7.25.jar'
> org.apache.zookeeper.server.auth.DigestAuthenticationProvider nifi:
>
> [zk: nifi1-5:2181,nifi2-5:2181,nifi3-5:2181(CONNECTED) 4] getAcl /nifi
> 'digest,'nifi:the-passwd-digest'
> : cdrwa
>
>
> after starting up NiFi and doing an ls /nifi, the znode is empty.
> [zk: nifi1-5:2181,nifi2-5:2181,nifi3-5:2181(CONNECTED) 4] ls /nifi
> []
>
> Seems like we can't write the leaders or components value under the /nifi
> znode.
>
>
> Looking at the nifi-app log
>
> 2020-07-06 16:05:46,554 INFO [main-SendThread(xx.xxx.x.xx:2181)]
> org.apache.zookeeper.Login Client successfully logged in.
> 2020-07-06 16:05:46,556 INFO [main-SendThread(xx.xxx.x.xx:2181)]
> o.a.zookeeper.client.ZooKeeperSaslClient Client will use DIGEST-MD5 as SASL
> mechanism.
> 2020-07-06 16:05:46,900 INFO [main-EventThread]
> o.a.c.f.state.ConnectionStateManager State change: CONNECTED
> 2020-07-06 16:05:47,347 INFO [main-EventThread]
> o.a.c.framework.imps.EnsembleTracker New config event received:
> {server.1=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181, version=0,
> server.3=xx.xxx.x.xx:2888:3888:participant;0.0.0.0:2181,
> server.2=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181}
> 2020-07-06 16:05:47,354 INFO [main-EventThread]
> o.a.c.framework.imps.EnsembleTracker New config event received:
> {server.1=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181, version=0,
> server.3=xx.xxx.x.xx:2888:3888:participant;0.0.0.0:2181,
> server.2=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181}
> 2020-07-06 16:05:47,357 INFO [Curator-Framework-0]
> o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
> 2020-07-06 16:05:47,364 DEBUG [main] org.apache.zookeeper.ZooKeeper
> Closing session: 0x3002a05b0c60006
> 2020-07-06 16:05:47,469 INFO [main] org.apache.zookeeper.ZooKeeper
> Session: 0x3002a05b0c60006 closed
>
>
>
> Any ideas on what configuration I could be missing or have wrong?  I have
> a jaas.conf file in the $NIFI_HOME/conf directory and have a
> java.arg.18=-Djava.security.auth.login.config=
>
> One question I have, in the jaas.conf file, I put the passwd in there and
> not the digest I believe...I understand this would be passed around
> cleartext, but this is just for testing purposes currently
>
> Nifi 1.11.4
> external zookeeper 3.5.8
>
> Regards,
>
> Dano
>
>


zookeeper SASL/Digest authentication....

2020-07-06 Thread dan young
Hello everyone,

I'm trying to configure the zookeeper state provider in NiFi to use the
Access Policy of CreatorOnly vs Open, using DIGEST vs Kerberos. I believe
I've set up zookeeper correctly for this, and partly Nifi, but when I
start up the nifi cluster, we seem to get stuck with the following:

2020-07-06 16:06:20,826 WARN [Clustering Tasks Thread-1]
o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
org.apache.nifi.cluster.protocol.ProtocolException: Cannot send heartbeat
because there is no Cluster Coordinator currently elected
2020-07-06 16:06:35,920 WARN [Clustering Tasks Thread-2]
o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
org.apache.nifi.cluster.protocol.ProtocolException: Cannot send heartbeat
because there is no Cluster Coordinator currently elected
2020-07-06 16:06:50,923 WARN [Clustering Tasks Thread-2]
o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
org.apache.nifi.cluster.protocol.ProtocolException: Cannot send heartbeat
because there is no Cluster Coordinator currently elected
2020-07-06 16:07:06,071 WARN [Clustering Tasks Thread-2]
o.apache.nifi.controller.FlowController Failed to send heartbeat due to:
org.apache.nifi.cluster.protocol.ProtocolException: Cannot send heartbeat
because there is no Cluster Coordinator currently elected

I can see the znode in zookeeper, and it appears to at least have the
correct permissions.  I created this znode in the CLI:

addauth digest nifi:
create /nifi data digest:nifi:cdrwa

The digest was generated via:

java -cp
'/opt/zookeeper/lib/zookeeper-3.5.8.jar:/opt/zookeeper/lib/slf4j-api-1.7.25.jar'
org.apache.zookeeper.server.auth.DigestAuthenticationProvider nifi:

[zk: nifi1-5:2181,nifi2-5:2181,nifi3-5:2181(CONNECTED) 4] getAcl /nifi
'digest,'nifi:the-passwd-digest
: cdrwa


after starting up Nifi and doing an ls /nifi, the znode is empty.
[zk: nifi1-5:2181,nifi2-5:2181,nifi3-5:2181(CONNECTED) 4] ls /nifi
[]

Seems like we can't write the leaders or components value under the /nifi
znode.
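
(For comparison, on a healthy cluster I'd expect to see the election and
state children under the root node, something like:

[zk: nifi1-5:2181,nifi2-5:2181,nifi3-5:2181(CONNECTED) 5] ls /nifi
[components, leaders]
)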


Looking at the nifi-app log

2020-07-06 16:05:46,554 INFO [main-SendThread(xx.xxx.x.xx:2181)]
org.apache.zookeeper.Login Client successfully logged in.
2020-07-06 16:05:46,556 INFO [main-SendThread(xx.xxx.x.xx:2181)]
o.a.zookeeper.client.ZooKeeperSaslClient Client will use DIGEST-MD5 as SASL
mechanism.
2020-07-06 16:05:46,900 INFO [main-EventThread]
o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2020-07-06 16:05:47,347 INFO [main-EventThread]
o.a.c.framework.imps.EnsembleTracker New config event received:
{server.1=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181, version=0,
server.3=xx.xxx.x.xx:2888:3888:participant;0.0.0.0:2181,
server.2=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181}
2020-07-06 16:05:47,354 INFO [main-EventThread]
o.a.c.framework.imps.EnsembleTracker New config event received:
{server.1=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181, version=0,
server.3=xx.xxx.x.xx:2888:3888:participant;0.0.0.0:2181,
server.2=xx.xxx.x.xxx:2888:3888:participant;0.0.0.0:2181}
2020-07-06 16:05:47,357 INFO [Curator-Framework-0]
o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2020-07-06 16:05:47,364 DEBUG [main] org.apache.zookeeper.ZooKeeper Closing
session: 0x3002a05b0c60006
2020-07-06 16:05:47,469 INFO [main] org.apache.zookeeper.ZooKeeper Session:
0x3002a05b0c60006 closed



Any ideas on what configuration I could be missing or have wrong?  I have a
jaas.conf file in the $NIFI_HOME/conf directory and have a
java.arg.18=-Djava.security.auth.login.config=

One question I have, in the jaas.conf file, I put the passwd in there and
not the digest I believe...I understand this would be passed around
cleartext, but this is just for testing purposes currently
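
For context, the Client section of my jaas.conf looks roughly like this
(a minimal sketch; the DigestLoginModule takes the plaintext password,
which is the cleartext concern above):

Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="nifi"
    password="the-plaintext-password";
};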

Nifi 1.11.4
external zookeeper 3.5.8

Regards,

Dano


Zookeeper ACL question

2020-06-26 Thread dan young
Hello,

I was looking at setting up zookeeper with basic password ACL in a small
cluster.  Looking at the Nifi docs it mentions something
about Username/Password vs using Kerberos, in the Zookeeper Access Control
under the state providers.
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#state_providers

Has anyone set up the basic user/password ACL and are able to share some
notes/how-tos?

Sorry, not all that familiar with zookeeper authentication.

this is on Nifi 1.11.4 and a non-embedded zookeeper 3.6.8


Re: Move flow into process group

2020-06-17 Thread dan young
Did you try to copy and then paste them?

On Wed, Jun 17, 2020, 9:12 AM PeterEttery 
wrote:

> Good day -
>
> I'm busy getting to grips with Nifi, using v1.11.4, and would like to move
> some processors into a group.
>
> I can select the processors on the canvas, but can't get them into a group:
>
> 1) The "Group" button on the "Operate" window (which is captioned "Multiple
> components selected") is disabled
> 2) I created a Process Group and tried to drag the processors onto it, but
> it never "captures" the drag/drop
>
> All processors are stopped.
>
> Any suggestions?
>
> Thanks,
> Peter
>
>
>
> --
> Sent from: http://apache-nifi-users-list.2361937.n4.nabble.com/
>


Re: UpdateAttribute processor question...

2020-04-22 Thread dan young
Sounds good Andy,

No biggie, I can keep on truckin' with what we've been doing.   Thanx for
the insight!
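
(For anyone searching later, the chained workaround is just two
UpdateAttribute processors in a row:

UpdateAttribute #1: a = "foo", b = "bar"
UpdateAttribute #2: c = ${allAttributes("a", "b"):join(" _")}
)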

Regards,

Dano

On Wed, Apr 22, 2020 at 1:38 PM Andy LoPresto  wrote:

> Dan,
>
> Unfortunately I don’t believe there is a way to consolidate this in a
> single UA processor because currently the application of the attributes is
> not deterministically ordered, so b may not be available when c is
> evaluated and applied. The current work around is to use linear dependent
> processors as you are doing. I do think this is a valid feature request
> that could be introduced in the UA processor without breaking backwards
> compatibility if you’re interested in filing the ticket. Changing the
> internal data structure of the dynamic properties to be ordered _should_ be
> possible, but I think currently the API doesn’t request that order so it
> would require a code change there, with a default practice being “order as
> received".
>
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> He/Him
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Apr 22, 2020, at 12:04 PM, dan young  wrote:
>
> Hello,
>
> I haven't been able to figure out if this is doable with a single
> UpdateAttrbute processor, but is it possible to create an attribute that
> references a different attribute you're setting in the same UpdateAttribute
> processor?  ie.
>
> UpdateAttribute
> a = "foo"
> b = "bar"
> c = ${allAttributes("a", "b"):join(" _")}
>
> What I've been doing in the past is just to have c in a downstream
> UpdateAttribute... wonder if there's a better way of doing this, or maybe
> in the Advanced section of the UpdateAttribute.
>
>
> Regards,
>
> Dano
>
>
>


UpdateAttribute processor question...

2020-04-22 Thread dan young
Hello,

I haven't been able to figure out if this is doable with a single
UpdateAttrbute processor, but is it possible to create an attribute that
references a different attribute you're setting in the same UpdateAttribute
processor?  ie.

UpdateAttribute
a = "foo"
b = "bar"
c = ${allAttributes("a", "b"):join(" _")}

What I've been doing in the past is just to have c in a downstream
UpdateAttribute... wonder if there's a better way of doing this, or maybe
in the Advanced section of the UpdateAttribute.


Regards,

Dano


Re: what happen to the 1.11.2 download?

2020-02-25 Thread dan young
OK, great, thank you... yes, will need that... thank you!

Dano

On Tue, Feb 25, 2020 at 9:24 AM Joe Witt  wrote:

> Dano
>
> You can obtain Apache NiFi 1.11.2 from the archives if you need it.
>
> http://archive.apache.org/dist/nifi/1.11.2/
>
> We released Apache NiFi 1.11.3 last night and I'll send an email on that
> later today (waiting for mirrors to catch up).
>
> The primary push for 1.11.3 was to resolve a resource leak in the
> distributed map cache mechanism which is popular as it is used by
> DetectDuplicate.  If you're using that you'll want to grab 1.11.3.
>
> Thanks
>
>
>
> On Tue, Feb 25, 2020 at 8:18 AM dan young  wrote:
>
>> Hello,
>>
>> I see the 1.11.2 download is not working, there's a 1.11.3 now.  Is there
>> a reason it was pulled and should we not be running it in production?
>>
>> Regards,
>>
>> Dano
>>
>


what happen to the 1.11.2 download?

2020-02-25 Thread dan young
Hello,

I see the 1.11.2 download is not working, there's a 1.11.3 now.  Is there a
reason it was pulled and should we not be running it in production?

Regards,

Dano


nifi toolkit 1.11.x expression language test.

2020-02-21 Thread dan young
Hello,

I'm using this as a reference to test some expression language on the
command line:

https://gist.github.com/mattyb149/13b9bceeace1f7db76f648dfb200b680

This works fine:
λ ~/software/nifi-tools groovy testEL-1.9.2.groovy -D dt "2020-02-21
12:00:00" '${dt:toDate("yyyy-MM-dd HH:mm:ss"):minus(4320):toDate()}'
Fri Feb 21 00:00:00 MST 2020

But with 1.11.1, we're getting the following error

λ ~/software/nifi-tools groovy testEL-1.11.1.groovy -D dt "2020-02-21
12:00:00" '${dt:toDate("-MM-dd HH:mm:ss"):minus(4320):toDate()}'
Caught: groovy.lang.MissingMethodException: No signature of method:
org.apache.nifi.attribute.expression.language.Query.evaluate() is
applicable for argument types: (LinkedHashMap) values: [[dt:2020-02-21
12:00:00]]
groovy.lang.MissingMethodException: No signature of method:
org.apache.nifi.attribute.expression.language.Query.evaluate() is
applicable for argument types: (LinkedHashMap) values: [[dt:2020-02-21
12:00:00]]
at testEL-1_11_1$_run_closure2.doCall(testEL-1.11.1.groovy:27)
at testEL-1_11_1.run(testEL-1.11.1.groovy:25)
λ ~/software/nifi-tools

Here's my script:

@Grab(group='org.apache.nifi', module='nifi-expression-language', version='1.11.1')
import org.apache.nifi.attribute.expression.language.*

def cli = new CliBuilder(usage:'groovy testEL.groovy [options] [expressions]',
                         header:'Options:')
cli.help('print this message')
cli.D(args:2, valueSeparator:'=', argName:'attribute=value',
      'set value for given attribute')
def options = cli.parse(args)
if(!options.arguments()) {
  cli.usage()
  return 1
}

def attrMap = [:]
def currKey = null
options.Ds?.eachWithIndex { o, i ->
  if(i % 2 == 0) {
    currKey = o
  } else {
    attrMap[currKey] = o
  }
}
options.arguments()?.each {
  def q = Query.compile(it)
  println q.evaluate(attrMap ?: null)
}
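
(Judging from the error, evaluate() no longer accepts a raw Map; maybe it
now wants an EvaluationContext? An untested guess at the last loop, assuming
a StandardEvaluationContext class with a Map constructor still lives in the
same package:

options.arguments()?.each {
  def q = Query.compile(it)
  // hypothetical: wrap the attribute map in an EvaluationContext
  println q.evaluate(new StandardEvaluationContext(attrMap ?: [:]))
}
)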

Any insight into why this isn't working in 1.11.x now?

Regards,

Dano


Re: zookeeper connection string question/clarification

2020-02-20 Thread dan young
ok, great thank you. Yes, we're using external zookeeper; 3.5.6.  I'm going
to test this change out on a dev cluster real quick.

On Thu, Feb 20, 2020 at 4:22 PM Andy LoPresto  wrote:

> Sorry, I should have elaborated that I was referencing the link from the
> MG. I realize you’re using external ZK and this is for embedded. Yes, I
> believe you will need to change the format of your connection string.
>
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Feb 20, 2020, at 3:20 PM, Andy LoPresto  wrote:
>
> Hi Dan,
>
> I believe the changes you’re looking for are here [1], copied below:
>
>
>- The Zookeeper dependency that NiFi uses for state management and
>cluster elections was upgraded to v3.5.5. From v3.5.x onwards, *Zookeeper
>changed the zookeeper.properties file format and as a result NiFi users
>using an existing embedded zookeeper will need to adjust their existing
>zookeeper.properties file accordingly*. More details here:
>
> https://zookeeper.apache.org/doc/r3.5.3-beta/zookeeperReconfig.html#sc_reconfig_clientport
>.
>For new deployments of the 1.10.0 release onwards, NiFi will be
>packaged with an updated template zookeeper.properties file.
>To update an existing zookeeper.properties file however, edit the
>conf/zookeeper.properties file:
>   1. Remove the clientPort=2181 line (or whatever your port number
>   may be)
>   2. Add the client port to the end of the server string eg:
>   server.1=localhost:2888:3888;2181
>
>
> [1] https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Feb 20, 2020, at 3:18 PM, dan young  wrote:
>
> Hello,
>
> Using Nifi 1.11.1 in cluster mode with external zookeeper.  Does the
> nifi.zookeeper.connect string in the nifi.properties need to change from
> say:
>
>  nifi.zookeeper.connect.string=
> 10.xxx.x.xxx:2181,10.xxx.x.xxx:2181,10.xxx.x.xxx:2181
>
>  nifi.zookeeper.connect.string=
> 10.xxx.x.xxx;2181,10.xxx.x.xxx;2181,10.xxx.x.xxx;2181
>
> Changing the : to ; between the host and client port?
>
>
>
>


Re: zookeeper connection string question/clarification

2020-02-20 Thread dan young
ok, I tried host;2181 and that doesn't seem to work.  So I think host:2181
is still valid.
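
So to summarize what appears to work: the client connect string keeps the
old host:port form, and only the server entries in zookeeper.properties
move the client port behind the ';' (host names below are placeholders):

# nifi.properties, unchanged format:
nifi.zookeeper.connect.string=zk1:2181,zk2:2181,zk3:2181

# zookeeper.properties (ZK 3.5+):
server.1=zk1:2888:3888;2181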

On Thu, Feb 20, 2020 at 4:22 PM Andy LoPresto  wrote:

> Sorry, I should have elaborated that I was referencing the link from the
> MG. I realize you’re using external ZK and this is for embedded. Yes, I
> believe you will need to change the format of your connection string.
>
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Feb 20, 2020, at 3:20 PM, Andy LoPresto  wrote:
>
> Hi Dan,
>
> I believe the changes you’re looking for are here [1], copied below:
>
>
>- The Zookeeper dependency that NiFi uses for state management and
>cluster elections was upgraded to v3.5.5. From v3.5.x onwards, *Zookeeper
>changed the zookeeper.properties file format and as a result NiFi users
>using an existing embedded zookeeper will need to adjust their existing
>zookeeper.properties file accordingly*. More details here:
>
> https://zookeeper.apache.org/doc/r3.5.3-beta/zookeeperReconfig.html#sc_reconfig_clientport
>.
>For new deployments of the 1.10.0 release onwards, NiFi will be
>packaged with an updated template zookeeper.properties file.
>To update an existing zookeeper.properties file however, edit the
>conf/zookeeper.properties file:
>   1. Remove the clientPort=2181 line (or whatever your port number
>   may be)
>   2. Add the client port to the end of the server string eg:
>   server.1=localhost:2888:3888;2181
>
>
> [1] https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Feb 20, 2020, at 3:18 PM, dan young  wrote:
>
> Hello,
>
> Using Nifi 1.11.1 in cluster mode with external zookeeper.  Does the
> nifi.zookeeper.connect string in the nifi.properties need to change from
> say:
>
>  nifi.zookeeper.connect.string=
> 10.xxx.x.xxx:2181,10.xxx.x.xxx:2181,10.xxx.x.xxx:2181
>
>  nifi.zookeeper.connect.string=
> 10.xxx.x.xxx;2181,10.xxx.x.xxx;2181,10.xxx.x.xxx;2181
>
> Changing the : to ; between the host and client port?
>
>
>
>


zookeeper connection string question/clarification

2020-02-20 Thread dan young
Hello,

Using Nifi 1.11.1 in cluster mode with external zookeeper.  Does the
nifi.zookeeper.connect string in the nifi.properties need to change from
say:

 nifi.zookeeper.connect.string=
10.xxx.x.xxx:2181,10.xxx.x.xxx:2181,10.xxx.x.xxx:2181

 nifi.zookeeper.connect.string=
10.xxx.x.xxx;2181,10.xxx.x.xxx;2181,10.xxx.x.xxx;2181

Changing the : to ; between the host and client port?


Re: timing on release of 1.11.2 ?

2020-02-18 Thread dan young
Great, thank you Joe!

Regards

Dano

On Tue, Feb 18, 2020, 8:41 AM Joe Witt  wrote:

> it is under vote now.  so within a day or so possibly
>
> thanks
>
> On Tue, Feb 18, 2020 at 9:26 AM dan young  wrote:
>
>> Howdy folks,
>>
>> Just curious what the word on the street for releasing 1.11.2
>>
>>
>> Regards,
>>
>> Dano
>>
>>


timing on release of 1.11.2 ?

2020-02-18 Thread dan young
Howdy folks,

Just curious what the word on the street for releasing 1.11.2


Regards,

Dano


Re: zookeeper error message - nifi 1.11.1/zookeeper 3.5.6

2020-02-12 Thread dan young
Thank you for your email. Looking at the zookeeper docs, with 3.5.0 it
looks like the format may have changed to support the dynamic
configuration. It would seem that zookeeper is sending back a format that
NiFi isn't expecting??? I.e. the :participant

https://zookeeper.apache.org/doc/r3.5.6/zookeeperReconfig.html



On Wed, Feb 12, 2020, 5:53 PM 노대호Daeho Ro  wrote:

> In my memory,
>
> zookeeper 3.5.6 needs the new form of zookeeper string such as
>
> server.1=0.0.0.0:2888:3888;2181
>
>
> where the ip is yours.
>
> Hope this help you.
>
>
> On Thu, Feb 13, 2020 at 1:55 AM, dan young wrote:
>
>> Sorry Joe,
>>
>> Yes, I'll file a JIRA...here's the email again
>>
>> We're seeing the following messages in nifi logs on our cluster nodes.
>> Using
>> Nifi 1.11.1 and zookeeper (not embedded) version 3.5.6
>>
>> Functionality seems not to be impacted, but wondering if there's
>> something else
>> going on or the version of zookeeper we're using is causing this.
>>
>> 2020-02-12 15:36:43,959 ERROR [main-EventThread]
>> o.a.c.framework.imps.EnsembleTracker Invalid config event received:
>> {server.1=10.190.3.170:2888:3888:participant, version=0,
>> server.3=10.190.3.91:2888:3888:participant, server.2=10.190.3.172:2888
>> :3888:participant}
>>
>> Regards,
>>
>> Dano
>>
>> On Wed, Feb 12, 2020 at 9:49 AM Joe Witt  wrote:
>>
>>> Dan,
>>>
>>> Not sure what others see but for me your email cuts off in the middle of
>>> a line.
>>>
>>> You might want to file a JIRA with your observation/logs.
>>>
>>> Thanks
>>>
>>> On Wed, Feb 12, 2020 at 11:46 AM dan young  wrote:
>>>
>>>> Hello,
>>>>
>>>> We're seeing the following messages in nifi logs on our cluster nodes.  
>>>> Using
>>>> Nifi 1.11.1 and zookeeper (not embedded) version 3.5.6
>>>>
>>>> Functionality seems not to be impacted, but wondering if there's something 
>>>> else
>>>> going on or the version of zookeeper we're using is causing this.
>>>>
>>>> 2020-02-12 15:36:43,959 ERROR [main-EventThread] 
>>>> o.a.c.framework.imps.EnsembleTracker Invalid config event received: 
>>>> {server.1=10.190.3.170:2888:3888:participant, version=0, 
>>>> server.3=10.190.3.91:2888:3888:participant, 
>>>> server.2=10.190.3.172:2888:3888:participant}
>>>>
>>>> Regards,
>>>>
>>>> Dano
>>>>
>>>>
>>>>


Re: zookeeper error message - nifi 1.11.1/zookeeper 3.5.6

2020-02-12 Thread dan young
Sorry Joe,

Yes, I'll file a JIRA...here's the email again

We're seeing the following messages in nifi logs on our cluster nodes.
Using
Nifi 1.11.1 and zookeeper (not embedded) version 3.5.6

Functionality seems not to be impacted, but wondering if there's something
else
going on or the version of zookeeper we're using is causing this.

2020-02-12 15:36:43,959 ERROR [main-EventThread]
o.a.c.framework.imps.EnsembleTracker Invalid config event received:
{server.1=10.190.3.170:2888:3888:participant, version=0,
server.3=10.190.3.91:2888:3888:participant, server.2=10.190.3.172:2888
:3888:participant}

Regards,

Dano

On Wed, Feb 12, 2020 at 9:49 AM Joe Witt  wrote:

> Dan,
>
> Not sure what others see but for me your email cuts off in the middle of a
> line.
>
> You might want to file a JIRA with your observation/logs.
>
> Thanks
>
> On Wed, Feb 12, 2020 at 11:46 AM dan young  wrote:
>
>> Hello,
>>
>> We're seeing the following messages in nifi logs on our cluster nodes.  Using
>> Nifi 1.11.1 and zookeeper (not embedded) version 3.5.6
>>
>> Functionality seems not to be impacted, but wondering if there's something 
>> else
>> going on or the version of zookeeper we're using is causing this.
>>
>> 2020-02-12 15:36:43,959 ERROR [main-EventThread] 
>> o.a.c.framework.imps.EnsembleTracker Invalid config event received: 
>> {server.1=10.190.3.170:2888:3888:participant, version=0, 
>> server.3=10.190.3.91:2888:3888:participant, 
>> server.2=10.190.3.172:2888:3888:participant}
>>
>> Regards,
>>
>> Dano
>>
>>
>>


zookeeper error message - nifi 1.11.1/zookeeper 3.5.6

2020-02-12 Thread dan young
Hello,

We're seeing the following messages in nifi logs on our cluster nodes.  Using
Nifi 1.11.1 and zookeeper (not embedded) version 3.5.6

Functionality seems not to be impacted, but wondering if there's something else
going on or the version of zookeeper we're using is causing this.

2020-02-12 15:36:43,959 ERROR [main-EventThread]
o.a.c.framework.imps.EnsembleTracker Invalid config event received:
{server.1=10.190.3.170:2888:3888:participant, version=0,
server.3=10.190.3.91:2888:3888:participant,
server.2=10.190.3.172:2888:3888:participant}

Regards,

Dano


Re: can't push data to bigQuery

2019-09-26 Thread dan young
yea, the arrays... we use jq to flatten and format all our JSON
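
(Something along these lines to explode an array into newline-delimited
JSON, simplified from what we actually run:

jq -c '.[]' input.json > output.ndjson

One JSON object per line is what the NEWLINE_DELIMITED_JSON source format
in the job config below expects.)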

glad to see you got it!

dano


On Thu, Sep 26, 2019 at 6:54 AM Nicolas Delsaux 
wrote:

> Oh well, I've understood my last error: incorrect flow files (with JSON
> arrays) were stuck in the queue.
>
> I removed them and ... to my delight, data seems to come into BigQuery!
> On 26/09/2019 at 14:45, Nicolas Delsaux wrote:
>
> I didn't knew that command ... i've edited some confidential values in the
> result, but here it is
>
>
> $ bq --project_id={{PROJECT_ID}} --format=prettyjson show -j
> 9e790299-dc77-46f4-8978-476f284fe5b5
> {
>   "configuration": {
> "jobType": "LOAD",
> "load": {
>   "createDisposition": "CREATE_IF_NEEDED",
>   "destinationTable": {
> "datasetId": "Consents",
> "projectId": "{{PROJECT_ID}}",
> "tableId": "{{TABLE_ID}}"
>   },
>   "ignoreUnknownValues": false,
>   "maxBadRecords": 0,
>   "schema": {
> "fields": [
>   {
> "fields": [
>   {
> "mode": "NULLABLE",
> "name": "id",
> "type": "STRING"
>   },
>   {
> "fields": [
>   {
> "mode": "NULLABLE",
> "name": "id",
> "type": "STRING"
>   },
>   {
> "mode": "NULLABLE",
> "name": "type",
> "type": "STRING"
>   },
>   {
> "mode": "NULLABLE",
> "name": "businessUnit",
> "type": "STRING"
>   }
> ],
> "mode": "NULLABLE",
> "name": "identity",
> "type": "RECORD"
>   },
>   {
> "mode": "NULLABLE",
> "name": "finality",
> "type": "STRING"
>   },
>   {
> "mode": "NULLABLE",
> "name": "consentDate",
> "type": "TIMESTAMP"
>   },
>   {
> "mode": "NULLABLE",
> "name": "expiryDate",
> "type": "TIMESTAMP"
>   },
>   {
> "mode": "NULLABLE",
> "name": "expired",
> "type": "BOOLEAN"
>   },
>   {
> "mode": "NULLABLE",
> "name": "createdBy",
> "type": "STRING"
>   },
>   {
> "mode": "NULLABLE",
> "name": "createdDate",
> "type": "TIMESTAMP"
>   },
>   {
> "fields": [
>   {
> "mode": "NULLABLE",
> "name": "id",
> "type": "STRING"
>   },
>   {
> "mode": "NULLABLE",
> "name": "application",
> "type": "STRING"
>   },
>   {
> "mode": "NULLABLE",
> "name": "type",
> "type": "STRING"
>   }
> ],
> "mode": "NULLABLE",
> "name": "sender",
> "type": "RECORD"
>   },
>   {
> "fields": [
>   {
> "mode": "NULLABLE",
> "name": "id",
> "type": "STRING"
>   },
>   {
> "mode": "NULLABLE",
> "name": "type",
> "type": "STRING"
>   }
> ],
> "mode": "NULLABLE",
> "name": "relatedEvent",
> "type": "RECORD"
>   }
> ],
> "mode": "NULLABLE",
> "name": "ContractualConsent",
> "type": "RECORD"
>   }
> ]
>   },
>   "sourceFormat": "NEWLINE_DELIMITED_JSON",
>   "writeDisposition": "WRITE_APPEND"
> }
>   },
>   "etag": "RqYxd6o2jzl6YiTARI5nxg==",
>   "id": "{{PROJECT_ID}}:EU.9e790299-dc77-46f4-8978-476f284fe5b5",
>   "jobReference": {
> "jobId": "9e790299-dc77-46f4-8978-476f284fe5b5",
> "location": "EU",
> "projectId": "{{PROJECT_ID}}"
>   },
>   "kind": "bigquery#job",
>   "selfLink":
> "https://bigquery.googleapis.com/bigquery/v2/projects/{{PROJECT_ID}}/jobs/9e790299-dc77-46f4-8978-476f284fe5b5?location=EU;
> 
> ,
>   "statistics": {
> "creationTime": "1569491661818",
> "endTime": "1569491662935",
> "startTime": "1569491662366"
>   },
>   "status": {
> "errorResult": {
>   

Re: Running NiFi on Google Cloud

2019-07-28 Thread dan young
Hello Márcio,

We've been running NiFi clusters for almost 3 years now at Looker on AWS.
We will be moving these over to GCP in the future. My main recommendation
is to ensure that you're using something like Ansible to help with the
deployment and configuration of the cluster. We use a lot of execute stream
command processors to run a variety of node workloads.

Other than that, a lot will be specific to your use case and mileage will
vary

Regards

Dano


On Fri, Jul 26, 2019, 10:27 PM Márcio Sugar  wrote:

> Hi,
>
> Please, is there any tutorial, guide or set of best practices that help
> with installing and using NiFi on Google Cloud (or any cloud provider, for
> that matter)?
>
> Thank you,
>
> Marcio
>
>


Re: UI/canvas question.

2019-07-26 Thread dan young
Great, thank you Mark!

On Fri, Jul 26, 2019, 7:39 AM Mark Payne  wrote:

> Dano,
>
> That number indicates the number of Terminated Threads. I.e., the threads
> that were left behind when you hit Terminate. They will go away on restart,
> or if the threads ever actually complete. For example, if the thread was
> busy doing something that takes an hour, and you terminate it, it may
> finish up after an hour and go away. But if it's in a blocking call that
> never returns, then you'll have to restart NiFi in order for that thread to
> ever die.
>
> Thanks
> -Mark
>
>
> On Jul 25, 2019, at 11:21 PM, dan young  wrote:
>
> Hello,
>
> We have a processor that we terminated via the UI, and now on the canvas I
> see what I believe is the active thread count and then a 1 in
> parentheses (1).  See attached screen shot.  Also, on the processor
> that we killed, we see an indicator in red with the (1).  What do
> these mean and how/when do these get cleaned up/cleared?  Do we need to
> restart the nodes in the cluster to clear this up?
>
> Regards,
>
> Dano
>
>
>
>


UI/canvas question.

2019-07-25 Thread dan young
Hello,

We have a processor that we terminated via the UI, and now on the canvas I
see what I believe is the active thread count and then a 1 in
parentheses (1).  See attached screen shot.  Also, on the processor
that we killed, we see an indicator in red with the (1).  What do
these mean and how/when do these get cleaned up/cleared?  Do we need to
restart the nodes in the cluster to clear this up?

Regards,

Dano


Re: DistributeLoad across a NiFi cluster

2019-07-09 Thread dan young
If you're going to upgrade, I would recommend jumping to the latest
version, 1.9.2 as of today. We ran into some issues in 1.8 with this
feature that were fixed in 1.9.x. We're running 1.9.2 with this feature
in production now.

Regards

Dano

On Tue, Jul 9, 2019, 6:58 AM  wrote:

> The feature requires NiFi > 1.8.x… Pierre describes it very well in his
> blog :
> https://pierrevillard.com/2018/10/29/nifi-1-8-revolutionizing-the-list-fetch-pattern-and-more/
>
>
>
>
>
> *From: *James McMahon 
> *Reply-To: *"users@nifi.apache.org" 
> *Date: *Tuesday, 9 July 2019 at 14:46
> *To: *"users@nifi.apache.org" 
> *Subject: *Re: DistributeLoad across a NiFi cluster
>
>
>
> Andrew, when I right click on the connection between the two I do not see
> a cluster distribution strategy in the queue connection. I am running
> 1.7.1.g. Am I overlooking something?
>
>
>
> On Tue, Jul 2, 2019 at 12:29 PM Andrew Grande  wrote:
>
> Jim,
>
>
>
> There's a better solution in NiFi. Right click on the connection between
> ListFile and FetchFile and select a cluster distribution strategy in
> options. That's it :)
>
>
>
> Andrew
>
>
>
> On Tue, Jul 2, 2019, 7:37 AM James McMahon  wrote:
>
> We would like to employ a DistributeLoad processor, restricted to run on
> the primary node of our cluster. Is there a recommended approach employed
> to efficiently distribute across nodes in the cluster?
>
>
>
> As I understand it, and using a FetchFile running in "all nodes" as the
> first processor following the DistributeLoad, I can have it distribute by
> round robin, next available, or load distribution service.  Can anyone
> provide a link to an example that employs the load distribution service? Is
> that the recommended distribution approach when running in clustered mode?
>
>
>
> I am interested in maintaining load balance across my cluster nodes when
> running at high flowfile volumes. Flow files will vary greatly in contents,
> so I'd like to design with an approach that helps me balance processing
> distribution.
>
>
>
> Thanks very much in advance. -Jim
>
>


1.9 release date?

2019-02-16 Thread dan young
Heya folks,

Any insight on 1.9 release date?  Looks like a lot of goodies and fixes
included...

Regards,

Dano


Re: flowfiles stuck in load balanced queue; nifi 1.8

2019-02-08 Thread dan young
Looking forward to this fix!  Thanx for all the hard work on NiFi

Regards,

Dano


On Fri, Feb 8, 2019 at 7:58 AM Mark Payne  wrote:

> Chad,
>
> Upon restart, they will continue on, there is no known data loss situation.
>
> That said, we are preparing to assemble version 1.9 of NiFi now, so I would
> guess that it will be voted on, perhaps as early as next week. So it may
> (or may not)
> make sense for you, depending on your situation, to wait for the 1.9
> release.
>
> Thanks
> -Mark
>
>
> On Feb 8, 2019, at 9:55 AM, Woodhead, Chad  wrote:
>
> My team is about to start using load balanced queues in 1.8 and one thing
> I wanted to understand before we do is if we run into this issue and we
> follow Dan’s workaround of disconnecting the node and then restarting the
> node node, do the flowfiles end up moving through the rest of the flow or
> do they get lost/dropped?
>
> -Chad
>
> *From: *dan young 
> *Reply-To: *"users@nifi.apache.org" 
> *Date: *Thursday, January 17, 2019 at 7:49 PM
> *To: *NiFi Mailing List 
> *Subject: *Re: flowfiles stuck in load balanced queue; nifi 1.8
>
>
> Ok, sounds great. This is really frustrating and I don't want to go back
> to RPG if possible, although that has been rock solid. Will keep an eye out
> for 1.9!
>
> Regards
>
> Dano
>
> On Thu, Jan 17, 2019, 5:17 PM Mark Payne wrote:
> Hey Dan,
>
> This can happen even within a process group, it is just much more likely
> when the destination of the connection is a Port or a Funnel because those
> components don’t really do any work, just push the FlowFile to the next
> connection and that makes them super fast.
>
>
> There are a few different PR’s that are awaiting review (unrelated to
> this) that I’d like to see merged in very soon and then I think it’s
> probably time to start talking about a 1.9.0 release. There are several bug
> fixes, especially related to the load balance connections, and enough new
> features that I think it’s worth considering a release soon.
> Sent from my iPhone
>
>
> On Jan 17, 2019, at 6:59 PM, dan young  wrote:
>
> Hello Mark,
>
> We're seeing "stuck" flow files again, this time within a PG...see
> attached screen shots :(
>
>
>
> On Fri, Dec 28, 2018 at 8:43 AM Mark Payne  wrote:
>
> Dan, et al,
>
> Great news! I was able to replicate this issue finally, by creating a
> Load-Balanced connection
> between two Process Groups/Ports instead of between two processors. The
> fact that it's between
> two Ports does not, in and of itself, matter. But there is a race
> condition, and Ports do no actual
> Processing of the FlowFile (simply pull it from one queue and transfer it
> to another). As a result, because
> it is extremely fast, it is more likely to trigger the race condition.
>
> So I created a JIRA [1] and have submitted a PR for it.
>
> Interestingly, while there is no real workaround that is fool-proof, until
> this fix is in and released, you could
> choose to update your flow so that the connection between Process Groups
> is not load balanced and instead
> the connection between the Input Port and the first Processor is load
> balanced. Again, this is not fool-proof,
> because it could affect the Load Balanced Connection even if it is
> connected to a Processor, but it is less likely
> to do so, so you would likely see the issue occur far less often.
>
> Thank you so much for sticking with us all as we diagnose this and figure
> it all out - would not have been able to
> figure it out without you spending the time to debug the issue!
>
> Thanks
> -Mark
>
> [1] https://issues.apache.org/jira/browse/NIFI-5919
>
>
>
> On Dec 26, 2018, at 10:31 PM, dan young  wrote:
>
> Hello Mark,
>
> I just stopped the destination processor, and then disconnected the node
> in question (nifi1-1). Once I disconnected the node, the flow file in the
> load balance connection disappeared from the queue.  After that, I
> reconnected the node (with the downstream processor disconnected) and once
> the node successfully rejoined the cluster, the flowfile showed up in the
> queue again. After this, I started the connected downstream processor, but
> the flowfile stays in the queue. The only way to clear the queue is if I
> actually restart the node.  If I disconnect the node, and then restart that
> node, the fl

Re: flowfiles stuck in load balanced queue; nifi 1.8

2019-01-17 Thread dan young
Ok, sounds great. This is really frustrating and I don't want to go back to
RPG if possible, although that has been rock solid. Will keep an eye out
for 1.9!

Regards

Dano

On Thu, Jan 17, 2019, 5:17 PM Mark Payne wrote:
> Hey Dan,
>
> This can happen even within a process group, it is just much more likely
> when the destination of the connection is a Port or a Funnel because those
> components don’t really do any work, just push the FlowFile to the next
> connection and that makes them super fast.
>
> There are a few different PR’s that are awaiting review (unrelated to
> this) that I’d like to see merged in very soon and then I think it’s
> probably time to start talking about a 1.9.0 release. There are several bug
> fixes, especially related to the load balance connections, and enough new
> features that I think it’s worth considering a release soon.
>
> Sent from my iPhone
>
> On Jan 17, 2019, at 6:59 PM, dan young  wrote:
>
> Hello Mark,
>
> We're seeing "stuck" flow files again, this time within a PG...see
> attached screen shots :(
>
>
>
> On Fri, Dec 28, 2018 at 8:43 AM Mark Payne  wrote:
>
>> Dan, et al,
>>
>> Great news! I was able to replicate this issue finally, by creating a
>> Load-Balanced connection
>> between two Process Groups/Ports instead of between two processors. The
>> fact that it's between
>> two Ports does not, in and of itself, matter. But there is a race
>> condition, and Ports do no actual
>> Processing of the FlowFile (simply pull it from one queue and transfer it
>> to another). As a result, because
>> it is extremely fast, it is more likely to trigger the race condition.
>>
>> So I created a JIRA [1] and have submitted a PR for it.
>>
>> Interestingly, while there is no real workaround that is fool-proof,
>> until this fix is in and released, you could
>> choose to update your flow so that the connection between Process Groups
>> is not load balanced and instead
>> the connection between the Input Port and the first Processor is load
>> balanced. Again, this is not fool-proof,
>> because it could affect the Load Balanced Connection even if it is
>> connected to a Processor, but it is less likely
>> to do so, so you would likely see the issue occur far less often.
>>
>> Thank you so much for sticking with us all as we diagnose this and figure
>> it all out - would not have been able to
>> figure it out without you spending the time to debug the issue!
>>
>> Thanks
>> -Mark
>>
>> [1] https://issues.apache.org/jira/browse/NIFI-5919
>>
>>
>> On Dec 26, 2018, at 10:31 PM, dan young  wrote:
>>
>> Hello Mark,
>>
>> I just stopped the destination processor, and then disconnected the node
>> in question (nifi1-1). Once I disconnected the node, the flow file in the
>> load balance connection disappeared from the queue.  After that, I
>> reconnected the node (with the downstream processor disconnected) and once
>> the node successfully rejoined the cluster, the flowfile showed up in the
>> queue again. After this, I started the connected downstream processor, but
>> the flowfile stays in the queue. The only way to clear the queue is if I
>> actually restart the node.  If I disconnect the node, and then restart that
>> node, the flowfile is no longer present in the queue.
>>
>> Regards,
>>
>> Dano
>>
>>
>> On Wed, Dec 26, 2018 at 6:13 PM Mark Payne  wrote:
>>
>>> Ok, I just wanted to confirm that when you said “once it rejoins the
>>> cluster that flow file is gone” that you mean “the flowfile did not exist
>>> on the system” and NOT “the queue size was 0 by the time that I looked at
>>> the UI.” I.e., is it possible that the FlowFile did exist, was restored,
>>> and then was processed before you looked at the UI? Or the FlowFile
>>> definitely did not exist after the node was restarted? That’s why I was
>>> suggesting that you restart with the connection’s source and destination
>>> stopped. Just to make sure that the FlowFile didn’t just get processed
>>> quickly on restart.
>>>
>>> Sent from my iPhone
>>>
>>> On Dec 26, 2018, at 7:55 PM, dan young  wrote:
>>>
>>> Heya Mark,
>>>
>>> If we restart the node, that "stuck" flowfile will disappear. This is
>>> the only way so far to clear out the flowfile. I usually disconnect the
>>> node, then once it's disconnected I restart nifi, and then once it rejoins
>>> the cluster that flow file is gone. If we try to empty the queue, it will

Re: Preferred schema registry

2019-01-15 Thread dan young
We used the AvroSchemaRegistry

Dano

On Tue, Jan 15, 2019, 12:51 PM Mike Thomsen wrote:
> What schema registry are others using in production use cases? We tried
> out the HortonWorks registry, but it seemed to stop accepting updates once
> we hit "v3" of our schema (we didn't name it v3, that's the version it
> showed in the UI). So I'd like to know what others are doing for their
> registry use since we're looking at either Confluent or going back to the
> AvroSchemaRegistry.
>
> Thanks,
>
> Mike
>


Re: flowfiles stuck in load balanced queue; nifi 1.8

2019-01-05 Thread dan young
Heya Mark,

Just a FYI, so far so good...after switching over to your recommendation we
haven't seen any "stuck" flowfiles.  Things looking good so far and will
look forward to 1.9.


Regards,

Dano

On Fri, Dec 28, 2018 at 8:43 AM Mark Payne  wrote:

> Dan, et al,
>
> Great news! I was able to replicate this issue finally, by creating a
> Load-Balanced connection
> between two Process Groups/Ports instead of between two processors. The
> fact that it's between
> two Ports does not, in and of itself, matter. But there is a race
> condition, and Ports do no actual
> Processing of the FlowFile (simply pull it from one queue and transfer it
> to another). As a result, because
> it is extremely fast, it is more likely to trigger the race condition.
>
> So I created a JIRA [1] and have submitted a PR for it.
>
> Interestingly, while there is no real workaround that is fool-proof, until
> this fix is in and released, you could
> choose to update your flow so that the connection between Process Groups
> is not load balanced and instead
> the connection between the Input Port and the first Processor is load
> balanced. Again, this is not fool-proof,
> because it could affect the Load Balanced Connection even if it is
> connected to a Processor, but it is less likely
> to do so, so you would likely see the issue occur far less often.
>
> Thank you so much for sticking with us all as we diagnose this and figure
> it all out - would not have been able to
> figure it out without you spending the time to debug the issue!
>
> Thanks
> -Mark
>
> [1] https://issues.apache.org/jira/browse/NIFI-5919
>
>
> On Dec 26, 2018, at 10:31 PM, dan young  wrote:
>
> Hello Mark,
>
> I just stopped the destination processor, and then disconnected the node
> in question (nifi1-1). Once I disconnected the node, the flow file in the
> load balance connection disappeared from the queue.  After that, I
> reconnected the node (with the downstream processor disconnected) and once
> the node successfully rejoined the cluster, the flowfile showed up in the
> queue again. After this, I started the connected downstream processor, but
> the flowfile stays in the queue. The only way to clear the queue is if I
> actually restart the node.  If I disconnect the node, and then restart that
> node, the flowfile is no longer present in the queue.
>
> Regards,
>
> Dano
>
>
> On Wed, Dec 26, 2018 at 6:13 PM Mark Payne  wrote:
>
>> Ok, I just wanted to confirm that when you said “once it rejoins the
>> cluster that flow file is gone” that you mean “the flowfile did not exist
>> on the system” and NOT “the queue size was 0 by the time that I looked at
>> the UI.” I.e., is it possible that the FlowFile did exist, was restored,
>> and then was processed before you looked at the UI? Or the FlowFile
>> definitely did not exist after the node was restarted? That’s why I was
>> suggesting that you restart with the connection’s source and destination
>> stopped. Just to make sure that the FlowFile didn’t just get processed
>> quickly on restart.
>>
>> Sent from my iPhone
>>
>> On Dec 26, 2018, at 7:55 PM, dan young  wrote:
>>
>> Heya Mark,
>>
>> If we restart the node, that "stuck" flowfile will disappear. This is the
>> only way so far to clear out the flowfile. I usually disconnect the node,
>> then once it's disconnected I restart nifi, and then once it rejoins the
>> cluster that flow file is gone. If we try to empty the queue, it will just
>> say that there are no flow files in the queue.
>>
>>
>> On Wed, Dec 26, 2018, 5:22 PM Mark Payne wrote:
>>> Hey Dan,
>>>
>>> Thanks, this is super useful! So, the following section is the damning
>>> part of the JSON:
>>>
>>>   {
>>> "totalFlowFileCount": 1,
>>> "totalByteCount": 975890,
>>> "nodeIdentifier": "nifi1-1:9443",
>>> "localQueuePartition": {
>>>   "totalFlowFileCount": 0,
>>>   "totalByteCount": 0,
>>>   "activeQueueFlowFileCount": 0,
>>>   "activeQueueByteCount": 0,
>>>   "swapFlowFileCount": 0,
>>>   "swapByteCount": 0,
>>>   "swapFiles": 0,
>>>   "inFlightFlowFileCount": 0,
>>>   "inFlightByteCount": 0,
>>>   "allActiveQueueFlowFilesPenalized": false,
>>> 

Re: flowfiles stuck in load balanced queue; nifi 1.8

2018-12-28 Thread dan young
I've converted over our flows based on your recommendation, will monitor
and report back if I see any issues

On Fri, Dec 28, 2018 at 8:43 AM Mark Payne  wrote:

> Dan, et al,
>
> Great news! I was able to replicate this issue finally, by creating a
> Load-Balanced connection
> between two Process Groups/Ports instead of between two processors. The
> fact that it's between
> two Ports does not, in and of itself, matter. But there is a race
> condition, and Ports do no actual
> Processing of the FlowFile (simply pull it from one queue and transfer it
> to another). As a result, because
> it is extremely fast, it is more likely to trigger the race condition.
>
> So I created a JIRA [1] and have submitted a PR for it.
>
> Interestingly, while there is no real workaround that is fool-proof, until
> this fix is in and released, you could
> choose to update your flow so that the connection between Process Groups
> is not load balanced and instead
> the connection between the Input Port and the first Processor is load
> balanced. Again, this is not fool-proof,
> because it could affect the Load Balanced Connection even if it is
> connected to a Processor, but it is less likely
> to do so, so you would likely see the issue occur far less often.
>
> Thank you so much for sticking with us all as we diagnose this and figure
> it all out - would not have been able to
> figure it out without you spending the time to debug the issue!
>
> Thanks
> -Mark
>
> [1] https://issues.apache.org/jira/browse/NIFI-5919
>
>
> On Dec 26, 2018, at 10:31 PM, dan young  wrote:
>
> Hello Mark,
>
> I just stopped the destination processor, and then disconnected the node
> in question (nifi1-1). Once I disconnected the node, the flow file in the
> load balance connection disappeared from the queue.  After that, I
> reconnected the node (with the downstream processor disconnected) and once
> the node successfully rejoined the cluster, the flowfile showed up in the
> queue again. After this, I started the connected downstream processor, but
> the flowfile stays in the queue. The only way to clear the queue is if I
> actually restart the node.  If I disconnect the node, and then restart that
> node, the flowfile is no longer present in the queue.
>
> Regards,
>
> Dano
>
>
> On Wed, Dec 26, 2018 at 6:13 PM Mark Payne  wrote:
>
>> Ok, I just wanted to confirm that when you said “once it rejoins the
>> cluster that flow file is gone” that you mean “the flowfile did not exist
>> on the system” and NOT “the queue size was 0 by the time that I looked at
>> the UI.” I.e., is it possible that the FlowFile did exist, was restored,
>> and then was processed before you looked at the UI? Or the FlowFile
>> definitely did not exist after the node was restarted? That’s why I was
>> suggesting that you restart with the connection’s source and destination
>> stopped. Just to make sure that the FlowFile didn’t just get processed
>> quickly on restart.
>>
>> Sent from my iPhone
>>
>> On Dec 26, 2018, at 7:55 PM, dan young  wrote:
>>
>> Heya Mark,
>>
>> If we restart the node, that "stuck" flowfile will disappear. This is the
>> only way so far to clear out the flowfile. I usually disconnect the node,
>> then once it's disconnected I restart nifi, and then once it rejoins the
>> cluster that flow file is gone. If we try to empty the queue, it will just
>> say that there are no flow files in the queue.
>>
>>
>> On Wed, Dec 26, 2018, 5:22 PM Mark Payne wrote:
>>> Hey Dan,
>>>
>>> Thanks, this is super useful! So, the following section is the damning
>>> part of the JSON:
>>>
>>>   {
>>> "totalFlowFileCount": 1,
>>> "totalByteCount": 975890,
>>> "nodeIdentifier": "nifi1-1:9443",
>>> "localQueuePartition": {
>>>   "totalFlowFileCount": 0,
>>>   "totalByteCount": 0,
>>>   "activeQueueFlowFileCount": 0,
>>>   "activeQueueByteCount": 0,
>>>   "swapFlowFileCount": 0,
>>>   "swapByteCount": 0,
>>>   "swapFiles": 0,
>>>   "inFlightFlowFileCount": 0,
>>>   "inFlightByteCount": 0,
>>>   "allActiveQueueFlowFilesPenalized": false,
>>>   "anyActiveQueueFlowFilesPenalized": false
>>> },
>>

Re: flowfiles stuck in load balanced queue; nifi 1.8

2018-12-28 Thread dan young
Thank you Mark! This is great news and promising... I'll look into
refactoring our flows per your suggestion...

Regards

Dano

On Fri, Dec 28, 2018, 8:43 AM Mark Payne wrote:
> Dan, et al,
>
> Great news! I was able to replicate this issue finally, by creating a
> Load-Balanced connection
> between two Process Groups/Ports instead of between two processors. The
> fact that it's between
> two Ports does not, in and of itself, matter. But there is a race
> condition, and Ports do no actual
> Processing of the FlowFile (simply pull it from one queue and transfer it
> to another). As a result, because
> it is extremely fast, it is more likely to trigger the race condition.
>
> So I created a JIRA [1] and have submitted a PR for it.
>
> Interestingly, while there is no real workaround that is fool-proof, until
> this fix is in and released, you could
> choose to update your flow so that the connection between Process Groups
> is not load balanced and instead
> the connection between the Input Port and the first Processor is load
> balanced. Again, this is not fool-proof,
> because it could affect the Load Balanced Connection even if it is
> connected to a Processor, but it is less likely
> to do so, so you would likely see the issue occur far less often.
>
> Thank you so much for sticking with us all as we diagnose this and figure
> it all out - would not have been able to
> figure it out without you spending the time to debug the issue!
>
> Thanks
> -Mark
>
> [1] https://issues.apache.org/jira/browse/NIFI-5919
>
>
> On Dec 26, 2018, at 10:31 PM, dan young  wrote:
>
> Hello Mark,
>
> I just stopped the destination processor, and then disconnected the node
> in question (nifi1-1). Once I disconnected the node, the flow file in the
> load balance connection disappeared from the queue.  After that, I
> reconnected the node (with the downstream processor disconnected) and once
> the node successfully rejoined the cluster, the flowfile showed up in the
> queue again. After this, I started the connected downstream processor, but
> the flowfile stays in the queue. The only way to clear the queue is if I
> actually restart the node.  If I disconnect the node, and then restart that
> node, the flowfile is no longer present in the queue.
>
> Regards,
>
> Dano
>
>
> On Wed, Dec 26, 2018 at 6:13 PM Mark Payne  wrote:
>
>> Ok, I just wanted to confirm that when you said “once it rejoins the
>> cluster that flow file is gone” that you mean “the flowfile did not exist
>> on the system” and NOT “the queue size was 0 by the time that I looked at
>> the UI.” I.e., is it possible that the FlowFile did exist, was restored,
>> and then was processed before you looked at the UI? Or the FlowFile
>> definitely did not exist after the node was restarted? That’s why I was
>> suggesting that you restart with the connection’s source and destination
>> stopped. Just to make sure that the FlowFile didn’t just get processed
>> quickly on restart.
>>
>> Sent from my iPhone
>>
>> On Dec 26, 2018, at 7:55 PM, dan young  wrote:
>>
>> Heya Mark,
>>
>> If we restart the node, that "stuck" flowfile will disappear. This is the
>> only way so far to clear out the flowfile. I usually disconnect the node,
>> then once it's disconnected I restart nifi, and then once it rejoins the
>> cluster that flow file is gone. If we try to empty the queue, it will just
>> say that there are no flow files in the queue.
>>
>>
>> On Wed, Dec 26, 2018, 5:22 PM Mark Payne wrote:
>>> Hey Dan,
>>>
>>> Thanks, this is super useful! So, the following section is the damning
>>> part of the JSON:
>>>
>>>   {
>>> "totalFlowFileCount": 1,
>>> "totalByteCount": 975890,
>>> "nodeIdentifier": "nifi1-1:9443",
>>> "localQueuePartition": {
>>>   "totalFlowFileCount": 0,
>>>   "totalByteCount": 0,
>>>   "activeQueueFlowFileCount": 0,
>>>   "activeQueueByteCount": 0,
>>>   "swapFlowFileCount": 0,
>>>   "swapByteCount": 0,
>>>   "swapFiles": 0,
>>>   "inFlightFlowFileCount": 0,
>>>   "inFlightByteCount": 0,
>>>   "allActiveQueueFlowFilesPenalized": false,
>>>   "anyActiveQueueFlowFilesPenalized": false
>>> },
>>>

Re: flowfiles stuck in load balanced queue; nifi 1.8

2018-12-26 Thread dan young
Hello Mark,

I just stopped the destination processor, and then disconnected the node in
question (nifi1-1). Once I disconnected the node, the flow file in the load
balance connection disappeared from the queue.  After that, I reconnected
the node (with the downstream processor disconnected) and once the node
successfully rejoined the cluster, the flowfile showed up in the queue
again. After this, I started the connected downstream processor, but the
flowfile stays in the queue. The only way to clear the queue is if I
actually restart the node.  If I disconnect the node, and then restart that
node, the flowfile is no longer present in the queue.

Regards,

Dano


On Wed, Dec 26, 2018 at 6:13 PM Mark Payne  wrote:

> Ok, I just wanted to confirm that when you said “once it rejoins the
> cluster that flow file is gone” that you mean “the flowfile did not exist
> on the system” and NOT “the queue size was 0 by the time that I looked at
> the UI.” I.e., is it possible that the FlowFile did exist, was restored,
> and then was processed before you looked at the UI? Or the FlowFile
> definitely did not exist after the node was restarted? That’s why I was
> suggesting that you restart with the connection’s source and destination
> stopped. Just to make sure that the FlowFile didn’t just get processed
> quickly on restart.
>
> Sent from my iPhone
>
> On Dec 26, 2018, at 7:55 PM, dan young  wrote:
>
> Heya Mark,
>
> If we restart the node, that "stuck" flowfile will disappear. This is the
> only way so far to clear out the flowfile. I usually disconnect the node,
> then once it's disconnected I restart nifi, and then once it rejoins the
> cluster that flow file is gone. If we try to empty the queue, it will just
> say that there are no flow files in the queue.
>
>
> On Wed, Dec 26, 2018, 5:22 PM Mark Payne wrote:
>> Hey Dan,
>>
>> Thanks, this is super useful! So, the following section is the damning
>> part of the JSON:
>>
>>   {
>> "totalFlowFileCount": 1,
>> "totalByteCount": 975890,
>> "nodeIdentifier": "nifi1-1:9443",
>> "localQueuePartition": {
>>   "totalFlowFileCount": 0,
>>   "totalByteCount": 0,
>>   "activeQueueFlowFileCount": 0,
>>   "activeQueueByteCount": 0,
>>   "swapFlowFileCount": 0,
>>   "swapByteCount": 0,
>>   "swapFiles": 0,
>>   "inFlightFlowFileCount": 0,
>>   "inFlightByteCount": 0,
>>   "allActiveQueueFlowFilesPenalized": false,
>>   "anyActiveQueueFlowFilesPenalized": false
>> },
>> "remoteQueuePartitions": [
>>   {
>> "totalFlowFileCount": 0,
>> "totalByteCount": 0,
>> "activeQueueFlowFileCount": 0,
>> "activeQueueByteCount": 0,
>> "swapFlowFileCount": 0,
>> "swapByteCount": 0,
>> "swapFiles": 0,
>> "inFlightFlowFileCount": 0,
>> "inFlightByteCount": 0,
>> "nodeIdentifier": "nifi2-1:9443"
>>   },
>>   {
>> "totalFlowFileCount": 0,
>> "totalByteCount": 0,
>> "activeQueueFlowFileCount": 0,
>> "activeQueueByteCount": 0,
>> "swapFlowFileCount": 0,
>> "swapByteCount": 0,
>> "swapFiles": 0,
>> "inFlightFlowFileCount": 0,
>> "inFlightByteCount": 0,
>> "nodeIdentifier": "nifi3-1:9443"
>>   }
>> ]
>>   }
>>
>> It indicates that node nifi1-1 is showing a queue size of 1 FlowFile, 975890
>> bytes. But it also shows that the FlowFile is not in the "local partition"
>> or either of the two "remote partitions." So that leaves us with two
>> possibilities:
>>
>> 1) The Queue's Count is wrong, because it somehow did not get decremented
>> (perhaps a threading bug?)
>>
>> Or
>>
>> 2) The Count is correct and the FlowFile exists, but somehow the
>> reference to the FlowFile was

Re: flowfiles stuck in load balanced queue; nifi 1.8

2018-12-26 Thread dan young
Heya Mark,

If we restart the node, that "stuck" flowfile will disappear. This is the
only way so far to clear out the flowfile. I usually disconnect the node,
then once it's disconnected I restart nifi, and then once it rejoins the
cluster that flow file is gone. If we try to empty the queue, it will just
say that there are no flow files in the queue.


On Wed, Dec 26, 2018, 5:22 PM Mark Payne wrote:
> Hey Dan,
>
> Thanks, this is super useful! So, the following section is the damning
> part of the JSON:
>
>   {
> "totalFlowFileCount": 1,
> "totalByteCount": 975890,
> "nodeIdentifier": "nifi1-1:9443",
> "localQueuePartition": {
>   "totalFlowFileCount": 0,
>   "totalByteCount": 0,
>   "activeQueueFlowFileCount": 0,
>   "activeQueueByteCount": 0,
>   "swapFlowFileCount": 0,
>   "swapByteCount": 0,
>   "swapFiles": 0,
>   "inFlightFlowFileCount": 0,
>   "inFlightByteCount": 0,
>   "allActiveQueueFlowFilesPenalized": false,
>   "anyActiveQueueFlowFilesPenalized": false
> },
> "remoteQueuePartitions": [
>   {
> "totalFlowFileCount": 0,
> "totalByteCount": 0,
> "activeQueueFlowFileCount": 0,
> "activeQueueByteCount": 0,
> "swapFlowFileCount": 0,
> "swapByteCount": 0,
> "swapFiles": 0,
> "inFlightFlowFileCount": 0,
> "inFlightByteCount": 0,
> "nodeIdentifier": "nifi2-1:9443"
>   },
>   {
> "totalFlowFileCount": 0,
> "totalByteCount": 0,
> "activeQueueFlowFileCount": 0,
> "activeQueueByteCount": 0,
> "swapFlowFileCount": 0,
> "swapByteCount": 0,
> "swapFiles": 0,
> "inFlightFlowFileCount": 0,
> "inFlightByteCount": 0,
> "nodeIdentifier": "nifi3-1:9443"
>   }
> ]
>   }
>
> It indicates that node nifi1-1 is showing a queue size of 1 FlowFile, 975890
> bytes. But it also shows that the FlowFile is not in the "local partition"
> or either of the two "remote partitions." So that leaves us with two
> possibilities:
>
> 1) The Queue's Count is wrong, because it somehow did not get decremented
> (perhaps a threading bug?)
>
> Or
>
> 2) The Count is correct and the FlowFile exists, but somehow the reference
> to the FlowFile was lost by the FlowFile Queue (again, perhaps a threading
> bug?)
>
> If possible, I would like for you to stop both the source and destination of
> that connection and then restart node nifi1-1. Once it has restarted, check
> if the FlowFile is still in the connection. That will tell us which of the
> two above scenarios is taking place. If the FlowFile exists upon restart,
> then the Queue somehow lost the handle to it. If the FlowFile does not
> exist in the connection upon restart (I'm guessing this will be the case),
> then it indicates that somehow the count is incorrect.
>
> Many thanks
> -Mark
>
> --
> *From:* dan young 
> *Sent:* Wednesday, December 26, 2018 9:18 AM
> *To:* NiFi Mailing List
> *Subject:* Re: flowfiles stuck in load balanced queue; nifi 1.8
>
> Heya Mark,
>
> So I added a Log Attribute Processor and routed the connection that had
> the "stuck" flowfile to it.   I ran a get diagnostics to the Log Attribute
> processor before I started it, and then ran another diagnostics after I
> started it.  The flowfile stayed in the load balanced connection/queue.
> I've attached both files.  Please LMK if this helps.
>
> Regards,
>
> Dano
>
>
> On Mon, Dec 24, 2018 at 10:35 AM Mark Payne  wrote:
>
> Dan,
>
> You would want to get diagnostics for the processor that is the
> source/destination of the connection - not the FlowFile. But if you
> connection is connecting 2 process groups then both its source and
> destination are Ports, not Processors. So the easiest thing to do would be
> to drop a “dummy processor” into the flow between the 2 groups, drag the
>

Re: flowfiles stuck in load balanced queue; nifi 1.8

2018-12-23 Thread dan young
I forgot to mention that we're using the OpenId Connect SSO.  Is there a
way to run these commands via curl when we have the cluster configured this
way?  If so would anyone be able to provide some insight/examples.
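
A sketch of one approach I'm considering, assuming the JWT that the UI
obtains after the OIDC login can be reused as a bearer token (it can be
copied from the Authorization header in the browser's developer tools).
The host, token, and processor id below are placeholders:

# Placeholders throughout; -k only if the cert isn't trusted locally
TOKEN='eyJhbGciOi...'
curl -k -H "Authorization: Bearer $TOKEN" \
  https://nifi1-1:9443/nifi-api/processors/<processor-id>/diagnostics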

Happy Holidays!

Regards,

Dano

On Sun, Dec 23, 2018 at 3:53 PM dan young  wrote:

> This is what I'm seeing in the logs when I try to access
> the nifi-api/flow/about for example...
>
> 2018-12-23 22:51:45,579 INFO [NiFi Web Server-24201]
> o.a.n.w.s.NiFiAuthenticationFilter Authentication success for
> d...@looker.com
>
> 2018-12-23 22:52:01,375 INFO [NiFi Web Server-24136]
> o.a.n.w.a.c.AccessDeniedExceptionMapper identity[anonymous], groups[none]
> does not have permission to access the requested resource. Unknown user
> with identity 'anonymous'. Returning Unauthorized response.
>
> On Sun, Dec 23, 2018 at 3:50 PM dan young  wrote:
>
>> Hello Mark,
>>
>> I have a queue with a "stuck/phantom" flowfile again.  When I try
>> to call the nifi-api/processors//diagnostics against a
>> processor, in the UI after I authenticate, I get a "Unknown user with
>> identity 'anonymous'. Contact the system administrator." We're running a
>> secure 3x node cluster. I tried this via the browser and also via the
>> command line with curl on one of the nodes. One clarification point, what
>> processor id should I be trying to gather the diagnostics on? The queue
>> is in between two processor groups.
>>
>> Maybe the issue with the Unknown User has to do with some policy I don't
>> have set correctly?
>>
>> Happy Holidays!
>>
>> Regards,
>> Dano
>>
>>
>>
>>
>> On Wed, Dec 19, 2018 at 6:51 AM Mark Payne  wrote:
>>
>>> Hey Josef, Dano,
>>>
>>> Firstly, let me assure you that while I may be the only one from the
>>> NiFi side who's been engaging on debugging
>>> this, I am far from the only one who cares about it! :) This is a pretty
>>> big new feature that was added to the latest
>>> release, so understandably there are probably not yet a lot of people
>>> who understand the code well enough to
>>> debug. I have tried replicating the issue, but have not been successful.
>>> I have a 3-node cluster that ran for well over
>>> a month without a restart, and i've also tried restarting it every few
>>> hours for a couple of days. It has about 8 different
>>> load-balanced connections, with varying data sizes and volumes. I've not
>>> been able to get into this situation, though,
>>> unfortunately.
>>>
>>> But yes, I think that we've seen this issue arise from each of the two
>>> of you and one other on the mailing list, so it
>>> is certainly something that we need to nail down ASAP. Unfortunately,
>>> debugging an issue that involves communication
>>> between multiple nodes is often difficult to fully understand, so it may
>>> not be a trivial task to debug.
>>>
>>> Dano, if you are able to get to the diagnostics, as Josef mentioned,
>>> that is likely to be pretty helpful. Off the top of my head,
>>> there are a few possibilities that are coming to mind, as to what kind
>>> of bug could cause such behavior:
>>>
>>> 1) Perhaps there really is no flowfile in the queue, but we somehow
>>> miscalculated the size of the queue. The diagnostics
>>> info would tell us whether or not this is the case. It will look into
>>> the queues themselves to determine how many FlowFiles are
>>> destined for each node in the cluster, rather than just returning the
>>> pre-calculated count. Failing that, you could also stop the source
>>> and destination of the queue, restart the node, and then see if the
>>> FlowFile is entirely gone from the queue on restart, or if it remains
>>> in the queue. If it is gone, then that likely indicates that the
>>> pre-computed count is somehow off.
>>>
>>> 2) We are having trouble communicating with the node that we are trying
>>> to send the data to. I would expect some sort of ERROR
>>> log messages in this case.
>>>
>>> 3) The node is properly sending the FlowFile to where it needs to go,
>>> but for some reason the receiving node is then re-distributing it
>>> to another node in the cluster, which then re-distributes it again, so
>>> that it never ends in the correct destination. I think this is unlikely
>>> and would be easy to verify by looking at the "Summary" table [1] and
>>> doing the "Cluster view" and constantly refres

Re: flowfiles stuck in load balanced queue; nifi 1.8

2018-12-23 Thread dan young
This is what I'm seeing in the logs when I try to access
the nifi-api/flow/about for example...

2018-12-23 22:51:45,579 INFO [NiFi Web Server-24201]
o.a.n.w.s.NiFiAuthenticationFilter Authentication success for d...@looker.com

2018-12-23 22:52:01,375 INFO [NiFi Web Server-24136]
o.a.n.w.a.c.AccessDeniedExceptionMapper identity[anonymous], groups[none]
does not have permission to access the requested resource. Unknown user
with identity 'anonymous'. Returning Unauthorized response.

On Sun, Dec 23, 2018 at 3:50 PM dan young  wrote:

> Hello Mark,
>
> I have a queue with a "stuck/phantom" flowfile again.  When I try to
> call the nifi-api/processors//diagnostics against a
> processor, in the UI after I authenticate, I get a "Unknown user with
> identity 'anonymous'. Contact the system administrator." We're running a
> secure 3x node cluster. I tried this via the browser and also via the
> command line with curl on one of the nodes. One clarification point, what
> processor id should I be trying to gather the diagnostics on? The queue
> is in between two processor groups.
>
> Maybe the issue with the Unknown User has to do with some policy I don't
> have set correctly?
>
> Happy Holidays!
>
> Regards,
> Dano
>
>
>
>
> On Wed, Dec 19, 2018 at 6:51 AM Mark Payne  wrote:
>
>> Hey Josef, Dano,
>>
>> Firstly, let me assure you that while I may be the only one from the NiFi
>> side who's been engaging on debugging
>> this, I am far from the only one who cares about it! :) This is a pretty
>> big new feature that was added to the latest
>> release, so understandably there are probably not yet a lot of people who
>> understand the code well enough to
>> debug. I have tried replicating the issue, but have not been successful.
>> I have a 3-node cluster that ran for well over
>> a month without a restart, and i've also tried restarting it every few
>> hours for a couple of days. It has about 8 different
>> load-balanced connections, with varying data sizes and volumes. I've not
>> been able to get into this situation, though,
>> unfortunately.
>>
>> But yes, I think that we've seen this issue arise from each of the two of
>> you and one other on the mailing list, so it
>> is certainly something that we need to nail down ASAP. Unfortunately,
>> debugging an issue that involves communication
>> between multiple nodes is often difficult to fully understand, so it may
>> not be a trivial task to debug.
>>
>> Dano, if you are able to get to the diagnostics, as Josef mentioned, that
>> is likely to be pretty helpful. Off the top of my head,
>> there are a few possibilities that are coming to mind, as to what kind of
>> bug could cause such behavior:
>>
>> 1) Perhaps there really is no flowfile in the queue, but we somehow
>> miscalculated the size of the queue. The diagnostics
>> info would tell us whether or not this is the case. It will look into the
>> queues themselves to determine how many FlowFiles are
>> destined for each node in the cluster, rather than just returning the
>> pre-calculated count. Failing that, you could also stop the source
>> and destination of the queue, restart the node, and then see if the
>> FlowFile is entirely gone from the queue on restart, or if it remains
>> in the queue. If it is gone, then that likely indicates that the
>> pre-computed count is somehow off.
>>
>> 2) We are having trouble communicating with the node that we are trying
>> to send the data to. I would expect some sort of ERROR
>> log messages in this case.
>>
>> 3) The node is properly sending the FlowFile to where it needs to go, but
>> for some reason the receiving node is then re-distributing it
>> to another node in the cluster, which then re-distributes it again, so
>> that it never ends in the correct destination. I think this is unlikely
>> and would be easy to verify by looking at the "Summary" table [1] and
>> doing the "Cluster view" and constantly refreshing for a few seconds
>> to see if the queue changes on any node in the cluster.
>>
>> 4) For some entirely unknown reason, there exists a bug that causes the
>> node to simply see the FlowFile and just skip over it
>> entirely.
>>
>> For additional logging, we can enable DEBUG logging on
>> org.apache.nifi.controller.queue.clustered.client.async.nio.
>> NioAsyncLoadBalanceClientTask:
>> > level="DEBUG" />
>>
>> With that DEBUG logging turned on, it may or may not generate a lot of
>> DEBUG logs. If it does not, then that in and of itself tells us something.
>> If it does generate a lot of DEBUG logs, then it would be good to see
>> what it's dumping out in the logs.

Re: flowfiles stuck in load balanced queue; nifi 1.8

2018-12-23 Thread dan young
> get the diagnostics on your cluster?
>
> I guess at the end we have to open a Jira ticket to narrow it down.
>
> Cheers Josef
>
>
> One thing that I would recommend, to get more information, is to go to the
> REST endpoint (in your browser is fine)
> /nifi-api/processors//diagnostics
>
> Where  is the UUID of either the source or the destination
> of the Connection in question. This gives us
> a lot of information about the internals of Connection. The easiest way to
> get that Processor ID is to just click on the
> processor on the canvas and look at the Operate palette on the left-hand
> side. You can copy & paste from there. If you
> then send the diagnostics information to us, we can analyze that to help
> understand what's happening.
>
>
>
> *From: *dan young 
> *Reply-To: *"users@nifi.apache.org" 
> *Date: *Wednesday, 19 December 2018 at 05:28
> *To: *NiFi Mailing List 
> *Subject: *flowfiles stuck in load balanced queue; nifi 1.8
>
> We're seeing this more frequently where flowfiles seem to be stuck in a
> load balanced queue.  The only resolution is to disconnect the node and
> then restart that node.  After this, the flowfile disappears from the
> queue.  Any ideas on what might be going on here or what additional
> information I might be able to provide to debug this?
>
> I've attached another thread dump and some screen shots
>
>
> Regards,
>
> Dano
>
>
>


Re: flowfiles stuck in load balanced queue; nifi 1.8

2018-12-19 Thread dan young
Ok, will keep you posted.

On Wed, Dec 19, 2018, 8:22 AM Mark Payne wrote:
> Hey Dan,
>
> Yes, we will want to get the diagnostics when you're in that state. It's
> probably not worth trying to turn on DEBUG logging
> unless you are in that state either. The thread dump shows all the threads
> with no work to do. Which is what I would expect.
> The question is: "Why does it not think there's work to do?" So the
> diagnostics and DEBUG logs will hopefully answer that,
> once you get back into that state again.
>
> Thanks
> -Mark
>
>
> On Dec 19, 2018, at 10:16 AM, dan young  wrote:
>
> Hello Mark,
>
> I'll try to grab that diagnostics...I assume we want to grab it when we
> see the stuck Flowfile in a queue, correct?
>
> Also, does the nifi thread dump provide anything? This was from the node
> that seemed to have the stuck Flowfile...
>
> Dano
>
> On Wed, Dec 19, 2018, 6:51 AM Mark Payne wrote:
>> Hey Josef, Dano,
>>
>> Firstly, let me assure you that while I may be the only one from the NiFi
>> side who's been engaging on debugging
>> this, I am far from the only one who cares about it! :) This is a pretty
>> big new feature that was added to the latest
>> release, so understandably there are probably not yet a lot of people who
>> understand the code well enough to
>> debug. I have tried replicating the issue, but have not been successful.
>> I have a 3-node cluster that ran for well over
>> a month without a restart, and i've also tried restarting it every few
>> hours for a couple of days. It has about 8 different
>> load-balanced connections, with varying data sizes and volumes. I've not
>> been able to get into this situation, though,
>> unfortunately.
>>
>> But yes, I think that we've seen this issue arise from each of the two of
>> you and one other on the mailing list, so it
>> is certainly something that we need to nail down ASAP. Unfortunately,
>> debugging an issue that involves communication
>> between multiple nodes is often difficult to fully understand, so it may
>> not be a trivial task to debug.
>>
>> Dano, if you are able to get to the diagnostics, as Josef mentioned, that
>> is likely to be pretty helpful. Off the top of my head,
>> there are a few possibilities that are coming to mind, as to what kind of
>> bug could cause such behavior:
>>
>> 1) Perhaps there really is no flowfile in the queue, but we somehow
>> miscalculated the size of the queue. The diagnostics
>> info would tell us whether or not this is the case. It will look into the
>> queues themselves to determine how many FlowFiles are
>> destined for each node in the cluster, rather than just returning the
>> pre-calculated count. Failing that, you could also stop the source
>> and destination of the queue, restart the node, and then see if the
>> FlowFile is entirely gone from the queue on restart, or if it remains
>> in the queue. If it is gone, then that likely indicates that the
>> pre-computed count is somehow off.
>>
>> 2) We are having trouble communicating with the node that we are trying
>> to send the data to. I would expect some sort of ERROR
>> log messages in this case.
>>
>> 3) The node is properly sending the FlowFile to where it needs to go, but
>> for some reason the receiving node is then re-distributing it
>> to another node in the cluster, which then re-distributes it again, so
>> that it never ends in the correct destination. I think this is unlikely
>> and would be easy to verify by looking at the "Summary" table [1] and
>> doing the "Cluster view" and constantly refreshing for a few seconds
>> to see if the queue changes on any node in the cluster.
>>
>> 4) For some entirely unknown reason, there exists a bug that causes the
>> node to simply see the FlowFile and just skip over it
>> entirely.
>>
>> For additional logging, we can enable DEBUG logging on
>> org.apache.nifi.controller.queue.clustered.client.async.nio.
>> NioAsyncLoadBalanceClientTask:
>> > level="DEBUG" />
>>
>> With that DEBUG logging turned on, it may or may not generate a lot of
>> DEBUG logs. If it does not, then that in and of itself tells us something.
>> If it does generate a lot of DEBUG logs, then it would be good to see
>> what it's dumping out in the logs.
>>
>> And a big Thank You to you guys for staying engaged on this and your
>> willingness to dig in!
>>
>> Thanks!
>> -Mark
>>
>> [1]
>> https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Summary_Page
>>
>>

Re: flowfiles stuck in load balanced queue; nifi 1.8

2018-12-19 Thread dan young
> a lot of information about the internals of Connection. The easiest way to
> get that Processor ID is to just click on the
> processor on the canvas and look at the Operate palette on the left-hand
> side. You can copy & paste from there. If you
> then send the diagnostics information to us, we can analyze that to help
> understand what's happening.
>
>
>
> *From: *dan young 
> *Reply-To: *"users@nifi.apache.org" 
> *Date: *Wednesday, 19 December 2018 at 05:28
> *To: *NiFi Mailing List 
> *Subject: *flowfiles stuck in load balanced queue; nifi 1.8
>
> We're seeing this more frequently where flowfiles seem to be stuck in a
> load balanced queue.  The only resolution is to disconnect the node and
> then restart that node.  After this, the flowfile disappears from the
> queue.  Any ideas on what might be going on here or what additional
> information I might be able to provide to debug this?
>
> I've attached another thread dump and some screen shots
>
>
> Regards,
>
> Dano
>
>
>


Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

2018-12-15 Thread dan young
We're not seeing any WARN or ERRORs in the logs either.

On Fri, Nov 16, 2018 at 1:06 AM  wrote:

> Hi Mark
>
>
>
> We see the issue again, even after a fresh started cluster (where we
> started everything at the same time). The files stuck for multiple
> seconds/minutes in the queue and the light blue loadbalancing icon on the
> right side shows that it is actually loadbalancing the whole time (even if
> it is just 1 or 2 files). The log (with default log levels) show no WARN or
> ERRORs…
>
>
>
> Thanks in advance, Josef
>
>
>
>
>
> *From: *Mark Payne 
> *Reply-To: *"users@nifi.apache.org" 
> *Date: *Monday, 12 November 2018 at 17:19
> *To: *"users@nifi.apache.org" 
> *Subject: *Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue
>
>
>
> Hey Dan,
>
>
>
> Have looked through the logs to see if there are any WARN or ERROR's
> indicating what's going on?
>
>
>
> Thanks
>
> -Mark
>
>
>
>
>
> On Nov 12, 2018, at 9:06 AM, dan young  wrote:
>
>
>
> Hello,
>
>
>
> We have two processor groups connected via the new Load Balancing/Round
> Robin queue.  It seems that a flowfile is "stuck" in this queue.  I've been
> watching it for some time now.  Is there any way to troubleshoot what is
> stuck in the queue and why? Or maybe remove it?  I've tried to stop the PG
> and empty the queue, but it always says it emptied 0 out of 1 flowfiles...
>
>
>
> Regards,
>
>
> Dano
>
>
>
>
>
>
>
> 
>
>
>


Re: Load Balancing connection issue

2018-12-03 Thread dan young
We've been seeing issues, three times now, where it seems like a flowfile
is stuck in a load balanced queue. We're not able to empty the queue or
view the flowfile that appears to be in the queue. The only resolution for
us right now is to
detach the node where the Flowfile is in the queue, then restart that node.
After that, the flowfile is gone.

I've enabled the expire flow files after X time, but that doesn't appear to
help...

Regards

Dano

On Mon, Dec 3, 2018, 4:42 AM Kien Truong wrote:
> Hi all,
>
> We're testing the new load-balance connection feature of NIFI 1.8.
>
> After some cluster-wide restarts, some connections with load-balancing
> enabled seem to stuck.
>
> The connection is always shown as actively balancing, however, the size
> of the queue show very little changes, also the queue contents cannot be
> viewed.
>
> NIFI always returns that the queue has 0 flow files when trying to view
> queue content, despite showing a non-zero number in the UI and in REST.
>
> When this happen, if we disable the load-balancing, we will be able to
> view the queue content immediately, but the problem return if we enable
> the load balancing feature again.
>
>
> In addition, we sometime see negative queue size exception in the log
> when this happen, but not always.
>
>
> Regards,
>
> Kien
>
>
>


Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

2018-11-12 Thread dan young
I ended up disconnecting the Node that had the flowfile in the queue and
restarted it, and then rejoined the cluster. That took care of it; I'll
continue to monitor it. I also added in the expire flow files after X
minutes.

Regards

Dano


On Mon, Nov 12, 2018, 8:37 AM  wrote:
> Same Issue here! Had to reboot the whole cluster to fix that issue. The
> files stuck a few seconds/minutes in the queue until they get processed. In
> my case I assume that it was caused by a one-by-one reboot of the cluster
> nodes, means we normally reboot only one of 8 nodes and wait a few seconds
> until we reboot the next one to get as much performance as possible. It
> must be a bug in this new Load Balancing/Round Robin queue… any comments
> from the devs?
>
>
>
> Cheers Josef
>
>
>
> *From: *dan young 
> *Reply-To: *"users@nifi.apache.org" 
> *Date: *Monday, 12 November 2018 at 15:06
> *To: *NiFi Mailing List 
> *Subject: *NiFi 1.8 and stuck flowfile in Load Balanced enabled queue
>
>
>
> Hello,
>
>
>
> We have two processor groups connected via the new Load Balancing/Round
> Robin queue.  It seems that a flowfile is "stuck" in this queue.  I've been
> watching it for some time now.  Is there any way to troubleshoot what is
> stuck in the queue and why? Or maybe remove it?  I've tried to stop the PG
> and empty the queue, but it always says it emptied 0 out of 1 flowfiles...
>
>
>
> Regards,
>
>
> Dano
>
>
>
>
>
>
>


NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

2018-11-12 Thread dan young
Hello,

We have two processor groups connected via the new Load Balancing/Round
Robin queue.  It seems that a flowfile is "stuck" in this queue.  I've been
watching it for some time now.  Is there any way to troubleshoot what is
stuck in the queue and why? Or maybe remove it?  I've tried to stop the PG
and empty the queue, but it always says it emptied 0 out of 1 flowfiles...

Regards,

Dano


Available SQL fn in QueryRecord??

2018-10-26 Thread dan young
Hello,

Is there a list, or somewhere we can find, all the SQL functions
available in QueryRecord?

Regards

Dano


NiFi 1.8 release date?

2018-10-16 Thread dan young
Any ideas on when 1.8 might be dropping?

Regards,

Dano


Re: Anyone using HashAttribute?

2018-09-05 Thread dan young
Heya Andy,

yes, that seems legit...we'll make it work on our side...

Keep up the awesome work on NiFi, powers all of our ETL here now :)

Dano

On Wed, Sep 5, 2018 at 5:14 PM Andy LoPresto  wrote:

> Dan,
>
> Does the proposal I submitted meet your requirements?
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Sep 5, 2018, at 4:09 PM, dan young  wrote:
>
> We're using it as well, in the same/similar fashion as being discussed in
> the thread...
>
> Dano
>
> On Wed, Sep 5, 2018, 10:07 AM Brandon DeVries  wrote:
>
>> Andy,
>>
>> We use it pretty much how Joe is... to create a unique composite key.  It
>> seems as though that shouldn't be a difficult functionality to add.
>> Possibly, you could flip your current dynamic key/value properties.  Make
>> the key the name of the attribute you want to create, and the value is the
>> attribute / attributes (newline delimited) that you want to include in the
>> hash.  This does mean you can't use "${algorithm.name}" in the name of
>> the created hash attribute, but I don't know if you'd consider that a big
>> loss.  In any case, I'm sure there are other solutions, this is just a
>> thought.
>>
>> Brandon
>>
>> On Wed, Sep 5, 2018 at 10:27 AM Joe Percivall 
>> wrote:
>>
>>> Hey Andy,
>>>
>>> We're currently using the HashAttribute processor. The use-case is that
>>> we have various events that come in but sometimes those events are just
>>> updates of previous ones. We store everything in ElasticSearch. So for
>>> certain events, we'll calculate a hash based on a couple of attributes in
>>> order to have a composite unique key to upsert as the ES _id. This allows
>>> us to easily just insert/update events that are the same (as determined by
>>> the hashed composite key).
>>>
>>> As for the configuration of the processors, we're essentially just
>>> specifying exact attributes as dynamic properties of HashAttribute. Then
>>> passing that FF to PutElasticSearchHttp with the resulting attribute from
>>> HashAttribute as the "Identifier Attribute".
>>>
>>> Joe
>>>
>>> On Mon, Sep 3, 2018 at 9:52 PM Andy LoPresto 
>>> wrote:
>>>
>>>> I opened PRs for 2980 [1] and 2983 [2] which add more performant,
>>>> consistent, and full-featured processors to calculate cryptographic hashes
>>>> of flowfile content and flowfile attributes. I would like to deprecate and
>>>> drop support for HashAttribute, as it performs a convoluted calculation
>>>> that was probably useful in an old scenario, but doesn’t “hash attributes”
>>>> like the name implies. As it blocks the new implementation from using that
>>>> name and following our naming convention, I am hoping to find anyone still
>>>> using the old implementation and understand their use case. Thanks for your
>>>> help.
>>>>
>>>> [1] https://github.com/apache/nifi/pull/2980
>>>> [2] https://github.com/apache/nifi/pull/2983
>>>>
>>>>
>>>>
>>>> Andy LoPresto
>>>> alopre...@apache.org
>>>> *alopresto.apa...@gmail.com *
>>>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>>>>
>>>>
>>>
>>> --
>>> *Joe Percivall*
>>> linkedin.com/in/Percivall
>>> e: jperciv...@apache.com
>>>
>>
>


Re: Anyone using HashAttribute?

2018-09-05 Thread dan young
We're using it as well, in the same/similar fashion as being discussed in
the thread...
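
Conceptually the composite key is just a hash over the concatenated
attribute values, something like this sketch (the attribute names here are
made up for illustration):

# Conceptual equivalent of the composite key (attribute names are made up):
printf '%s|%s|%s' "$customer_id" "$event_type" "$event_date" \
  | sha256sum | cut -d' ' -f1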

Dano

On Wed, Sep 5, 2018, 10:07 AM Brandon DeVries  wrote:

> Andy,
>
> We use it pretty much how Joe is... to create a unique composite key.  It
> seems as though that shouldn't be a difficult functionality to add.
> Possibly, you could flip your current dynamic key/value properties.  Make
> the key the name of the attribute you want to create, and the value is the
> attribute / attributes (newline delimited) that you want to include in the
> hash.  This does mean you can't use "${algorithm.name}" in the name of
> the created hash attribute, but I don't know if you'd consider that a big
> loss.  In any case, I'm sure there are other solutions, this is just a
> thought.
>
> Brandon
>
> On Wed, Sep 5, 2018 at 10:27 AM Joe Percivall 
> wrote:
>
>> Hey Andy,
>>
>> We're currently using the HashAttribute processor. The use-case is that
>> we have various events that come in but sometimes those events are just
>> updates of previous ones. We store everything in ElasticSearch. So for
>> certain events, we'll calculate a hash based on a couple of attributes in
>> order to have a composite unique key to upsert as the ES _id. This allows
>> us to easily just insert/update events that are the same (as determined by
>> the hashed composite key).
>>
>> As for the configuration of the processors, we're essentially just
>> specifying exact attributes as dynamic properties of HashAttribute. Then
>> passing that FF to PutElasticSearchHttp with the resulting attribute from
>> HashAttribute as the "Identifier Attribute".
>>
>> Joe
>>
>> On Mon, Sep 3, 2018 at 9:52 PM Andy LoPresto 
>> wrote:
>>
>>> I opened PRs for 2980 [1] and 2983 [2] which add more performant,
>>> consistent, and full-featured processors to calculate cryptographic hashes
>>> of flowfile content and flowfile attributes. I would like to deprecate and
>>> drop support for HashAttribute, as it performs a convoluted calculation
>>> that was probably useful in an old scenario, but doesn’t “hash attributes”
>>> like the name implies. As it blocks the new implementation from using that
>>> name and following our naming convention, I am hoping to find anyone still
>>> using the old implementation and understand their use case. Thanks for your
>>> help.
>>>
>>> [1] https://github.com/apache/nifi/pull/2980
>>> [2] https://github.com/apache/nifi/pull/2983
>>>
>>>
>>>
>>> Andy LoPresto
>>> alopre...@apache.org
>>> *alopresto.apa...@gmail.com *
>>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>>>
>>>
>>
>> --
>> *Joe Percivall*
>> linkedin.com/in/Percivall
>> e: jperciv...@apache.com
>>
>


Re: Title: Feature needed - ConvertJSONToCSV processor

2018-08-07 Thread dan young
You could look at using ExecuteStreamCommand + JQ. We use this pattern a
lot...
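
For example, to flatten nested json into a CSV line you'd set Command Path
to jq and put the filter in Command Arguments; the flowfile content goes to
the command's stdin. A minimal sketch, assuming a made-up input shape
(adjust the filter to your schema):

# Input (illustrative): {"id":1,"user":{"name":"a","city":"b"}}
# Emits: 1,"a","b"
jq -r '[.id, .user.name, .user.city] | @csv'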

Regards

Dano

On Tue, Aug 7, 2018, 5:09 AM Mahendra prabhu 
wrote:

> Expected behavior is similar to this task created -
> https://issues.apache.org/jira/browse/NIFI-4398 I couldn't achieve this
> using ConvertRecord. Can you guide me on how to proceed further to handle
> nested json elements?
>
> Thanks in advance
>
> On Tue, Aug 7, 2018 at 4:31 PM Mahendra prabhu 
> wrote:
>
>> Hi Pierre,
>>
>> I need to move nested json data into a SQL table after using expression
>> language. So I thought to use this processor, and do transformation for the
>> resultant CSV and move to SQL.
>>
>> Regards,
>> Prabhu Mahendran
>>
>> On Tue, Aug 7, 2018 at 3:59 PM Pierre Villard <
>> pierre.villard...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I believe it can be closed because it's doable with ConvertRecord
>>> processor (which is the recommended approach).
>>> Does it answer your requirement ?
>>>
>>> Thanks,
>>> Pierre
>>>
>>> 2018-08-07 11:43 GMT+02:00 Mahendra prabhu :
>>>
 Hi Folks,

 Is there is plan to include this feature or dropped -
 https://issues.apache.org/jira/browse/NIFI-1583 ?

 Is this PR closed because of licensing or incomplete rebase of code ?

 Regards,
 Prabhu Mahendran

>>>
>>>


Nifi toolkit grape dependencies

2018-05-22 Thread dan young
Hello,

I'm trying to run the Nifi Expression testing groovy tool with 1.6.0 and am
getting the following dep error. Nifi 1.4 and 1.5 work fine.  I tried to
also download jackson-core via grape in the groovy cli and it doesn't
work for the 2.9.4 version; 2.9.5 works fine. Any ideas?

I can use the 1.5 version w/o issue.
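
One thing I may try next is clearing the cached jackson-core artifacts out
of the local Grape/Ivy cache, in case a partial download is poisoning
resolution. A sketch, assuming the default cache location (adjust if
grape.root has been customized):

# Remove the possibly-corrupt cached artifacts, then re-run the script:
rm -rf ~/.groovy/grapes/com.fasterxml.jackson.core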



λ ~/software/nifi-tools groovy testEL-1.6.groovy
'${now():format("MMdd")}'

org.codehaus.groovy.control.MultipleCompilationErrorsException: startup
failed:

General error during conversion: Error grabbing Grapes -- [download failed:
com.fasterxml.jackson.core#jackson-core;2.9.4!jackson-core.jar(bundle)]


java.lang.RuntimeException: Error grabbing Grapes -- [download failed:
com.fasterxml.jackson.core#jackson-core;2.9.4!jackson-core.jar(bundle)]

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:423)

at
org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:83)

at
org.codehaus.groovy.reflection.CachedConstructor.doConstructorInvoke(CachedConstructor.java:77)

at
org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrap.callConstructor(ConstructorSite.java:84)

at
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor(CallSiteArray.java:59)

at
org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:238)

at
org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:250)

at groovy.grape.GrapeIvy.getDependencies(GrapeIvy.groovy:424)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at
org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSite.invoke(PogoMetaMethodSite.java:169)

at
org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:59)

at
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)

at
org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:157)

at groovy.grape.GrapeIvy.resolve(GrapeIvy.groovy:571)

at groovy.grape.GrapeIvy$resolve$1.callCurrent(Unknown Source)

at
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)

at
org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:157)

at
org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:193)

at groovy.grape.GrapeIvy.resolve(GrapeIvy.groovy:538)

at groovy.grape.GrapeIvy$resolve$0.callCurrent(Unknown Source)

at
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)

at
org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:157)

at
org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:185)

at groovy.grape.GrapeIvy.grab(GrapeIvy.groovy:256)

at groovy.grape.Grape.grab(Grape.java:167)

at
groovy.grape.GrabAnnotationTransformation.visit(GrabAnnotationTransformation.java:376)

at
org.codehaus.groovy.transform.ASTTransformationVisitor$3.call(ASTTransformationVisitor.java:346)

at
org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:966)

at
org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:626)

at
org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:602)

at
org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:579)

at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:323)

at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:293)

at groovy.lang.GroovyShell.parseClass(GroovyShell.java:677)

at groovy.lang.GroovyShell.run(GroovyShell.java:506)

at groovy.lang.GroovyShell.run(GroovyShell.java:496)

at groovy.ui.GroovyMain.processOnce(GroovyMain.java:597)

at groovy.ui.GroovyMain.run(GroovyMain.java:329)

at groovy.ui.GroovyMain.process(GroovyMain.java:315)

at groovy.ui.GroovyMain.processArgs(GroovyMain.java:134)

at groovy.ui.GroovyMain.main(GroovyMain.java:114)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at
org.codehaus.groovy.tools.GroovyStarter.rootLoader(GroovyStarter.java:116)


Re: First steps with Nifi

2018-05-11 Thread dan young
Awesome! NiFi is an amazing tool. As Joe mentioned the docs are a great
place to start along with the Hortonworks forums. Although we don't use
HDP, the content is still very relevant.

Start simple and iterate...Install on your laptop and start moving bits!

Regards

Dano

On Fri, May 11, 2018, 7:54 AM Andrés Ivaldi  wrote:

> Hello, I'm new to Nifi,
>
> I'm working on a POC where flows must be created dynamically and the results
> processed with Spark. What would be the best way to start?
>
> I've never used Nifi before.
>
> Regards,
>
>
> --
> Ing. Ivaldi Andres
>


NiFi 1.6

2018-04-05 Thread dan young
any updates on when 1.6 is going to drop?

dano


ExecuteStreamCommand - 1.5

2018-02-02 Thread dan young
Just wanted to shout out and thank you for adding the "nonzero status" to
the ExecuteStreamCommand in NiFi 1.5!  It has really simplified my exception
handling.

kudos!

Regards,

Dano


Re: all of our scheduled tasks not running/being scheduled....

2018-01-30 Thread dan young
Sounds great, will do.

Dan

On Tue, Jan 30, 2018 at 6:53 AM Joe Witt <joe.w...@gmail.com> wrote:

> Dan
>
> I'd add that when in doubt, get a thread dump out.  If ever the system
> seems to be behaving incorrectly, run
>
> bin/nifi.sh dump
>
> wait 30 seconds
>
> bin/nifi.sh dump
>
> And ideally send the full contents of the logs directory in a tar.gz
>
> tar czvf nifilogs.tar.gz logs
>
> Thanks
> Joe
>
> On Tue, Jan 30, 2018 at 8:43 AM, dan young <danoyo...@gmail.com> wrote:
> > Hello Koji,
> >
> > I don't see any OOM errors in the logs, I'll keep an eye on the avail.
> > thread count.  Thank you.
> >
> > Regards,
> > Dan
> >
> > On Mon, Jan 29, 2018 at 10:49 PM Koji Kawamura <ijokaruma...@gmail.com>
> > wrote:
> >>
> >> Hi Dan,
> >>
> >> If all available Timer Driven Thread are being used (or hang
> >> unexpectedly for some reason), then no processor can be scheduled.
> >> The number at the left top the NiFi UI under the NiFi logo shows the
> >> number of threads currently working.
> >> If you see something more than 0, then I'd recommend to take some
> >> thread dumps to figure out what running thread is doing.
> >>
> >> Other than that, I've encountered unexpected behavior with a NiFi
> >> cluster if a node encountered OutOfMemory error.
> >> The cluster started to behave incorrectly as it can not replicate REST
> >> requests among nodes. I'd search any ERR logs in nifi-app.log.
> >>
> >> Thanks,
> >> Koji
> >>
> >> On Tue, Jan 30, 2018 at 1:10 PM, dan young <danoyo...@gmail.com> wrote:
> >> > Hello,
> >> >
> >> > We're running a secure 3 node 1.4 cluster.  Has anyone seen any
> >> > behaviour
> >> > where the cluster just stops scheduling the running of
> flowfiles/tasks?
> >> > i.e. cron/timer, just don't run when they're supposed to.  I've tried
> to
> >> > stop
> >> > and restart a processor that is, say, set to run every 900sec, but nothing
> nothing
> >> > happens.  Then only thing I can do is to cycle through restarting each
> >> > node
> >> > in the cluster and then we're good for a few daysthis is something
> >> > that
> >> > just started happening and has occurred twice in the last week or
> >> > so.
> >> > Anything I should keep an eye out for or look for in the logs?
> >> >
> >> > Regards,
> >> >
> >> > Dan
>


Re: all of our scheduled tasks not running/being scheduled....

2018-01-30 Thread dan young
Hello Koji,

I don't see any OOM errors in the logs, I'll keep an eye on the avail.
thread count.  Thank you.

Regards,
Dan

On Mon, Jan 29, 2018 at 10:49 PM Koji Kawamura <ijokaruma...@gmail.com>
wrote:

> Hi Dan,
>
> If all available Timer Driven Thread are being used (or hang
> unexpectedly for some reason), then no processor can be scheduled.
> The number at the left top the NiFi UI under the NiFi logo shows the
> number of threads currently working.
> If you see something more than 0, then I'd recommend to take some
> thread dumps to figure out what running thread is doing.
>
> Other than that, I've encountered unexpected behavior with a NiFi
> cluster if a node encountered OutOfMemory error.
> The cluster started to behave incorrectly as it can not replicate REST
> requests among nodes. I'd search any ERR logs in nifi-app.log.
>
> Thanks,
> Koji
>
> On Tue, Jan 30, 2018 at 1:10 PM, dan young <danoyo...@gmail.com> wrote:
> > Hello,
> >
> > We're running a secure 3 node 1.4 cluster.  Has anyone seen any behaviour
> > where the cluster just stops scheduling the running of flowfiles/tasks?
> > i.e. cron/timer, just don't run when they're supposed to.  I've tried to
> stop
> > and restart a processor that is, say, set to run every 900sec, but nothing
> > happens.  The only thing I can do is to cycle through restarting each
> node
> > in the cluster and then we're good for a few days...this is something
> that
> > just started happening and has occurred twice in the last week or so.
> > Anything I should keep an eye out for or look for in the logs?
> >
> > Regards,
> >
> > Dan
>


all of our scheduled tasks not running/being scheduled....

2018-01-29 Thread dan young
Hello,

We're running a secure 3 node 1.4 cluster.  Has anyone seen any behaviour
where the cluster just stops scheduling the running of flowfiles/tasks?
i.e. cron/timer jobs just don't run when they're supposed to.  I've tried to
stop and restart a processor that is, say, set to run every 900sec, but
nothing happens.  The only thing I can do is to cycle through restarting
each node in the cluster and then we're good for a few days...this is
something that just started happening and has occurred twice in the last
week or so.  Anything I should keep an eye out for or look for in the
logs?

Regards,

Dan


Re: unable to search provenance repository

2018-01-17 Thread dan young
Joe,

looked at dmesg and see a few of these on one of the nodes...I wonder if
it's an IO issue.  I'll do some digging around

[5492401.012087] INFO: task java:15838 blocked for more than 120 seconds.
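
If it is IO, something like this while a stall is happening should show it
(rough sketch; iostat comes from the sysstat package):

# High await/%util on the repository volumes would point at IO stalls:
iostat -x 5
# and re-check the kernel log for more hung-task warnings:
dmesg | grep -i 'blocked for more than'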






On Wed, Jan 17, 2018 at 8:43 AM Joe Witt <joe.w...@gmail.com> wrote:

> Well, I think i'm hoping it was the restart and not the change of
> shard size because one I can understand and the other I cannot :)
>
> If you think you run into this again can you please get thread dumps
> and share them?
>
> Thanks!
>
> On Wed, Jan 17, 2018 at 10:41 AM, dan young <danoyo...@gmail.com> wrote:
> > Heya Joe,
> >
> > I'm not sure if this had anything to do with it, but I increased the
> > nifi.provenance.repository.index.shard.size to 4GB from the default
> 500MB,
> > did a rolling restart of the nodes, and now the search seems to be
> working.
> > Not sure if it was that or the restart, but I'll keep an eye on it.
> >
> > Regards,
> >
> > Dano
> >
> >
> > On Tue, Jan 16, 2018 at 7:57 PM Joe Witt <joe.w...@gmail.com> wrote:
> >>
> >> dan - i've seen behavior that frustrated me like this at times but it
> >> was almost always based on me not realizing the timezone settings or
> >> something else related to my query time versus the time on the system.
> >> I believe you can activate more detailed logging for that class and
> >> others to see information about the queries themselves.  I'll try some
> >> of this tomorrow and share more findings if able.
> >>
> >> If no luck then please be sure to file a JIRA.
> >>
> >> Thanks
> >>
> >> On Tue, Jan 16, 2018 at 7:26 PM, dan young <danoyo...@gmail.com> wrote:
> >> > Hello,
> >> >
> >> > We're running a secure 3 node 1.4 cluster, and for some reason we're
> not
> >> > able to search processor provenance events, i.e. trying to search
> >> > for a
> >> > particular event by filename, but the results always come back empty,
> >> > even
> >> > though when I view the provenance event history I can see the event.
> >> > Has
> >> > anyone seen this, or can offer any suggestions on how to debug why I'm
> >> > not
> >> > able to search?
> >> >
> >> > I've tried shutting the cluster down and deleting all the
> repositories a
> >> > couple of times
> >> >
> >> > Here's my relevant config:
> >> >
> >> >
> >> >
> nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
> >> >
> >> > nifi.provenance.repository.debug.frequency=1_000_000
> >> >
> >> > nifi.provenance.repository.encryption.key.provider.implementation=
> >> >
> >> > nifi.provenance.repository.encryption.key.provider.location=
> >> >
> >> > nifi.provenance.repository.encryption.key.id=
> >> >
> >> > nifi.provenance.repository.encryption.key=
> >> >
> >> >
> >> > # Persistent Provenance Repository Properties
> >> >
> >> >
> >> >
> nifi.provenance.repository.directory.default=/opt/nifi-common/provenance_repository
> >> >
> >> > nifi.provenance.repository.max.storage.time=24 hours
> >> >
> >> > nifi.provenance.repository.max.storage.size=5 GB
> >> >
> >> > nifi.provenance.repository.rollover.time=5 min
> >> >
> >> > nifi.provenance.repository.rollover.size=100 MB
> >> >
> >> > nifi.provenance.repository.query.threads=2
> >> >
> >> > nifi.provenance.repository.index.threads=2
> >> >
> >> > nifi.provenance.repository.compress.on.rollover=true
> >> >
> >> > nifi.provenance.repository.always.sync=false
> >> >
> >> > nifi.provenance.repository.journal.count=16
> >> >
> >> > nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID,
> >> > Filename,
> >> > ProcessorID, Relationship
> >> >
> >> > nifi.provenance.repository.indexed.attributes=
> >> >
> >> > nifi.provenance.repository.index.shard.size=500 MB
>


Re: unable to search provenance repository

2018-01-17 Thread dan young
Heya Joe,

I'm not sure if this had anything to do with it, but I increased
the nifi.provenance.repository.index.shard.size to 4GB from the default
500MB, did a rolling restart of the nodes, and now the search seems to be
working.  Not sure if it was that or the restart, but I'll keep an eye on
it.

Regards,

Dano


On Tue, Jan 16, 2018 at 7:57 PM Joe Witt <joe.w...@gmail.com> wrote:

> dan - i've seen behavior that frustrated me like this at times but it
> was almost always based on me not realizing the timezone settings or
> something else related to my query time versus the time on the system.
> I believe you can activate more detailed logging for that class and
> others to see information about the queries themselves.  I'll try some
> of this tomorrow and share more findings if able.
>
> If no luck then please be sure to file a JIRA.
>
> Thanks
>
> On Tue, Jan 16, 2018 at 7:26 PM, dan young <danoyo...@gmail.com> wrote:
> > Hello,
> >
> > We're running a secure 3 node 1.4 cluster, and for some reason we're not
> > able to search processor provenance events, i.e. trying to search for a
> > particular event by filename, but the results always come back empty,
> even
> > though when I view the provenance event history I can see the event.  Has
> > anyone seen this, or can offer any suggestions on how to debug why I'm
> not
> > able to search?
> >
> > I've tried shutting the cluster down and deleting all the repositories a
> > couple of times
> >
> > Here's my relevant config:
> >
> >
> nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
> >
> > nifi.provenance.repository.debug.frequency=1_000_000
> >
> > nifi.provenance.repository.encryption.key.provider.implementation=
> >
> > nifi.provenance.repository.encryption.key.provider.location=
> >
> > nifi.provenance.repository.encryption.key.id=
> >
> > nifi.provenance.repository.encryption.key=
> >
> >
> > # Persistent Provenance Repository Properties
> >
> >
> nifi.provenance.repository.directory.default=/opt/nifi-common/provenance_repository
> >
> > nifi.provenance.repository.max.storage.time=24 hours
> >
> > nifi.provenance.repository.max.storage.size=5 GB
> >
> > nifi.provenance.repository.rollover.time=5 min
> >
> > nifi.provenance.repository.rollover.size=100 MB
> >
> > nifi.provenance.repository.query.threads=2
> >
> > nifi.provenance.repository.index.threads=2
> >
> > nifi.provenance.repository.compress.on.rollover=true
> >
> > nifi.provenance.repository.always.sync=false
> >
> > nifi.provenance.repository.journal.count=16
> >
> > nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID,
> Filename,
> > ProcessorID, Relationship
> >
> > nifi.provenance.repository.indexed.attributes=
> >
> > nifi.provenance.repository.index.shard.size=500 MB
>


unable to search provenance repository

2018-01-16 Thread dan young
Hello,

We're running a secure 3 node 1.4 cluster, and for some reason we're not
able to search processor provenance events, i.e. trying to search for a
particular event by filename, but the results always come back empty, even
though when I view the provenance event history I can see the event.  Has
anyone seen this, or can offer any suggestions on how to debug why I'm not
able to search?
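
For reference, this is roughly how I'd expect to submit the same search
over REST as well; the payload shape below is my reading of the REST API
docs, and the host, token, and search term are illustrative:

# Submit a provenance query; the response includes a query id to poll:
curl -k -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' \
  -d '{"provenance":{"request":{"maxResults":100,"searchTerms":{"Filename":"my-file.csv"}}}}' \
  https://nifi-host:9443/nifi-api/provenance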

I've tried shutting the cluster down and deleting all the repositories a
couple of times

Here's my relevant config:

nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository

nifi.provenance.repository.debug.frequency=1_000_000

nifi.provenance.repository.encryption.key.provider.implementation=

nifi.provenance.repository.encryption.key.provider.location=

nifi.provenance.repository.encryption.key.id=

nifi.provenance.repository.encryption.key=


# Persistent Provenance Repository Properties

nifi.provenance.repository.directory.default=/opt/nifi-common/provenance_repository

nifi.provenance.repository.max.storage.time=24 hours

nifi.provenance.repository.max.storage.size=5 GB

nifi.provenance.repository.rollover.time=5 min

nifi.provenance.repository.rollover.size=100 MB

nifi.provenance.repository.query.threads=2

nifi.provenance.repository.index.threads=2

nifi.provenance.repository.compress.on.rollover=true

nifi.provenance.repository.always.sync=false

nifi.provenance.repository.journal.count=16

nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship

nifi.provenance.repository.indexed.attributes=

nifi.provenance.repository.index.shard.size=500 MB


Re: unable to start InvokeHTTP processor in secure Nifi 1.4.0 cluster....

2017-12-06 Thread dan young
Heya Josh,

Awesome!  This seemed to get me past at least starting the InvokeHTTP.  I
will try the flow out later this morning.  Thank you for the follow-up!

Regards,

Dano


On Tue, Dec 5, 2017 at 10:39 PM Josh Anderton <josh.ander...@gmail.com>
wrote:

> Hi Dan/Joe,
>
> I have encountered the same issue and after a bit of digging it appears as
> if during the update to OkHttp3 a bug was introduced in the
> setSslFactoryMethod.  The issue is that the method attempts to prepare a
> keystore even if properties for the keystore are not defined in the
> SSLContextFactory.  The exception is being thrown around line 571 of
> InvokeHTTP as a keystore is attempted to be initialized without a keystore
> type.
>
> The good news is that there appears to be an easy workaround (not fully
> tested yet) which is to define a keystore in your SSLContextFactory, you
> can even use the same properties already defined for your truststore and I
> believe your processor will start working.
>
> Please let me know if I have misdiagnosed or if there are issues with the
> workaround.
>
> Thanks,
> Josh
>
> On Tue, Dec 5, 2017 at 9:42 AM, dan young <danoyo...@gmail.com> wrote:
>
>> Hello Joe,
>>
>> Here's the JIRA. LMK if you need additional details.
>>
>> https://issues.apache.org/jira/browse/NIFI-4655
>>
>> Regards,
>>
>> Dano
>>
>> On Mon, Dec 4, 2017 at 10:46 AM Joe Witt <joe.w...@gmail.com> wrote:
>>
>>> Dan
>>>
>>> Please share as much of your config for the processor as you can.
>>> Also, please file a JIRA for this.  There is definitely a bug that
>>> needs to be addressed if you can make an NPE happen.
>>>
>>> Thanks
>>>
>>> On Mon, Dec 4, 2017 at 12:27 PM, dan young <danoyo...@gmail.com> wrote:
>>> > Hello,
>>> >
>>> >
>>> > I'm working on migrating some flows over to a secure cluster with
>>> OIDC. When
>>> > I try to start an InvokeHTTP processor, I'm getting the following
>>> errors in
>>> > the logs.  Is there some permission/policy that I need to set for this
>>> to
>>> > work?  or is this something else?
>>> >
>>> >
>>> > Nifi 1.4.0
>>> >
>>> >
>>> > 2017-12-04 17:20:03,972 ERROR [StandardProcessScheduler Thread-8]
>>> > o.a.nifi.processors.standard.InvokeHTTP
>>> > InvokeHTTP[id=ae055c76-88b8-3c86-bd1e-06ca4dcb43d5]
>>> > InvokeHTTP[id=ae055c76-88b8-3c86-bd1e-06ca4dcb43d5] failed to invoke
>>> > @OnScheduled method due to java.lang.RuntimeException: Failed while
>>> > executing one of processor's OnScheduled task.; processor will not be
>>> > scheduled to run for 30 seconds: java.lang.RuntimeException: Failed
>>> while
>>> > executing one of processor's OnScheduled task.
>>> >
>>> > java.lang.RuntimeException: Failed while executing one of processor's
>>> > OnScheduled task.
>>> >
>>> > at
>>> >
>>> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1483)
>>> >
>>> > at
>>> >
>>> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:103)
>>> >
>>> > at
>>> >
>>> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1302)
>>> >
>>> > at
>>> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> >
>>> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> >
>>> > at
>>> >
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>> >
>>> > at
>>> >
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>> >
>>> > at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>> >
>>> > at
>>> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>> >
>>> > at java.lang.Thread.run(Thread.java:748)
>>> >
>>> > Caused by: java.util.concurrent.ExecutionException:
>&

Re: unable to start InvokeHTTP processor in secure Nifi 1.4.0 cluster....

2017-12-05 Thread dan young
Hello Joe,

Here's the JIRA. LMK if you need additional details.

https://issues.apache.org/jira/browse/NIFI-4655

Regards,

Dano

On Mon, Dec 4, 2017 at 10:46 AM Joe Witt <joe.w...@gmail.com> wrote:

> Dan
>
> Please share as much of your config for the processor as you can.
> Also, please file a JIRA for this.  There is definitely a bug that
> needs to be addressed if you can make an NPE happen.
>
> Thanks
>
> On Mon, Dec 4, 2017 at 12:27 PM, dan young <danoyo...@gmail.com> wrote:
> > Hello,
> >
> >
> > I'm working on migrating some flows over to a secure cluster with OIDC.
> When
> > I try to start an InvokeHTTP processor, I'm getting the following errors
> in
> > the logs.  Is there some permission/policy that I need to set for this to
> > work?  or is this something else?
> >
> >
> > Nifi 1.4.0
> >
> >
> > 2017-12-04 17:20:03,972 ERROR [StandardProcessScheduler Thread-8]
> > o.a.nifi.processors.standard.InvokeHTTP
> > InvokeHTTP[id=ae055c76-88b8-3c86-bd1e-06ca4dcb43d5]
> > InvokeHTTP[id=ae055c76-88b8-3c86-bd1e-06ca4dcb43d5] failed to invoke
> > @OnScheduled method due to java.lang.RuntimeException: Failed while
> > executing one of processor's OnScheduled task.; processor will not be
> > scheduled to run for 30 seconds: java.lang.RuntimeException: Failed while
> > executing one of processor's OnScheduled task.
> >
> > java.lang.RuntimeException: Failed while executing one of processor's
> > OnScheduled task.
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1483)
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:103)
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1302)
> >
> > at
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >
> > at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> >
> > at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> >
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >
> > at java.lang.Thread.run(Thread.java:748)
> >
> > Caused by: java.util.concurrent.ExecutionException:
> > java.lang.reflect.InvocationTargetException
> >
> > at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> >
> > at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1466)
> >
> > ... 9 common frames omitted
> >
> > Caused by: java.lang.reflect.InvocationTargetException: null
> >
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >
> > at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >
> > at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >
> > at java.lang.reflect.Method.invoke(Method.java:498)
> >
> > at
> >
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
> >
> > at
> >
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
> >
> > at
> >
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
> >
> > at
> >
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1306)
> >
> > at
> >
> org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1302)
> >
> > ... 6 common frames omitted
> >
> > Caused by: java.lang.NullPointerException: null
>


unable to start InvokeHTTP processor in secure Nifi 1.4.0 cluster....

2017-12-04 Thread dan young
Hello,


I'm working on migrating some flows over to a secure cluster with OIDC.
When I try to start an InvokeHTTP processor, I'm getting the following
errors in the logs.  Is there some permission/policy that I need to set for
this to work, or is this something else?


Nifi 1.4.0


2017-12-04 17:20:03,972 ERROR [StandardProcessScheduler Thread-8]
o.a.nifi.processors.standard.InvokeHTTP
InvokeHTTP[id=ae055c76-88b8-3c86-bd1e-06ca4dcb43d5]
InvokeHTTP[id=ae055c76-88b8-3c86-bd1e-06ca4dcb43d5] failed to invoke
@OnScheduled method due to java.lang.RuntimeException: Failed while
executing one of processor's OnScheduled task.; processor will not be
scheduled to run for 30 seconds: java.lang.RuntimeException: Failed while
executing one of processor's OnScheduled task.

java.lang.RuntimeException: Failed while executing one of processor's
OnScheduled task.

at
org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1483)

at
org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:103)

at
org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1302)

at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)

at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)

at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.util.concurrent.ExecutionException:
java.lang.reflect.InvocationTargetException

at java.util.concurrent.FutureTask.report(FutureTask.java:122)

at java.util.concurrent.FutureTask.get(FutureTask.java:206)

at
org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1466)

... 9 common frames omitted

Caused by: java.lang.reflect.InvocationTargetException: null

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)

at
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)

at
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)

at
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)

at
org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1306)

at
org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1302)

... 6 common frames omitted

Caused by: java.lang.NullPointerException: null


Re: Authorization and Multi-Tenancy functionalities Evaluation -> Unable to locate initial admin error

2017-11-17 Thread dan young
Thanx Bryan,

On a side note, after beating my skull against a sharp pen for a few
hours and losing about a pint of blood... I was able to get this working.
One thing that wasn't clear to me initially is that you need to add all
the cluster nodes into both the userGroupProvider and the
accessPolicyProvider; once I did this, everything came together (a sketch
of the resulting authorizers.xml entries follows below).

So glad to see this OpenID support added!


Regards,

Dano
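
For reference, a minimal sketch of the relevant authorizers.xml entries
(the identities here are placeholders; the node DNs must match your
cluster certificates exactly):

    <userGroupProvider>
        ...
        <property name="Initial User Identity 1">CN=admin, OU=ApacheNiFi</property>
        <property name="Initial User Identity 2">CN=nifi-node1, OU=ApacheNiFi</property>
        <property name="Initial User Identity 3">CN=nifi-node2, OU=ApacheNiFi</property>
    </userGroupProvider>

    <accessPolicyProvider>
        ...
        <property name="Initial Admin Identity">CN=admin, OU=ApacheNiFi</property>
        <property name="Node Identity 1">CN=nifi-node1, OU=ApacheNiFi</property>
        <property name="Node Identity 2">CN=nifi-node2, OU=ApacheNiFi</property>
    </accessPolicyProvider>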

On Sat, Nov 11, 2017 at 11:40 AM Bryan Bende  wrote:

> Hello,
>
> The default authorizers.xml that comes with 1.4.0 has a new style of
> configuration which requires you to enter the initial admin identity
> in two places.
>
> First in the userGroupProvider, in an "Initial User Identity" property.
>
> Second in the accessPolicyProvider, in the "Initial Admin Identity"
> property.
>
> Those two values need to be the same; you are basically telling the
> accessPolicyProvider which user from the userGroupProvider is the
> initial admin.
>
> Thanks,
>
> Bryan
>
> On Sat, Nov 11, 2017 at 12:41 AM, Cédric  wrote:
> > Hello,
> >
> > I would like to know the easiest way to evaluate the Authorization and
> > Multi-Tenancy functionalities.
> >
> > I've tried installing with the following steps, but I get an "Unable to
> > locate initial admin" error at the end.
> >
> > Steps :
> > - Download nifi-1.4.0-bin.zip and unzip in nifi-1.4.0
> >
> > - download nifi-toolkit-1.4.0-bin.zip and unzip in nifi-toolkit-1.4.0
> >
> > - cd nifi-toolkit-1.4.0
> >
> > # .\bin\tls-toolkit.bat standalone -n localhost -C "CN=bbende,
> > OU=ApacheNiFi" -o ../target
> >
> > 2017/11/11 06:18:11 INFO [main]
> > org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandaloneCommandLine:
> No
> > nifiPropertiesFile specified, using embedded one.
> > 2017/11/11 06:18:12 INFO [main]
> > org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Running
> > standalone certificate generation with output directory ..\target
> > 2017/11/11 06:18:12 INFO [main]
> > org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Using
> existing
> > CA certificate ..\target\nifi-cert.pem and key ..\target\nifi-key.key
> > 2017/11/11 06:18:12 INFO [main]
> > org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Writing new
> ssl
> > configuration to ..\target\localhost
> > 2017/11/11 06:18:13 INFO [main]
> > org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully
> > generated TLS configuration for localhost 1 in ..\target\localhost
> > 2017/11/11 06:18:13 INFO [main]
> > org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Generating
> new
> > client certificate ..\target\CN=bbende_OU=ApacheNiFi.p12
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> >
> **
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > WARNING
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> >
> **
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > Unlimited JCE Policy is not installed which means we cannot utilize a
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > PKCS12 password longer than 7 characters.
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > Autogenerated password has been reduced to 7 characters.
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > Please strongly consider installing Unlimited JCE Policy at
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> >
> http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > Another alternative is to add a stronger password with the openssl tool
> to
> > the
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > resulting client certificate: ..\target\CN=bbende_OU=ApacheNiFi.p12
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > openssl pkcs12 -in '..\target\CN=bbende_OU=ApacheNiFi.p12' -out
> > '/tmp/CN=bbende_OU=ApacheNiFi.p12'
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > openssl pkcs12 -export -in '/tmp/CN=bbende_OU=ApacheNiFi.p12' -out
> > '..\target\CN=bbende_OU=ApacheNiFi.p12'
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > rm -f '/tmp/CN=bbende_OU=ApacheNiFi.p12'
> > 2017/11/11 06:18:13 WARN [main]
> org.apache.nifi.toolkit.tls.util.TlsHelper:
> > 2017/11/11 06:18:13 WARN [main]
> 

Re: How to start a flow with attributes from the state store?

2017-10-20 Thread dan young
Same here, works great!

On Fri, Oct 20, 2017 at 7:27 AM Mike Thomsen  wrote:

> I can vouch for this method. I have two flows for a client that use
> GenerateFlowFile to build a JSON DSL query for ElasticSearch and are
> executed on a timer. Works quite well with InvokeHttp.
>
> On Thu, Oct 19, 2017 at 11:41 PM, Mark Rachelski 
> wrote:
>
>> Thank you Bryan,
>>
>> That should fit my purposes well.
>>
>> BTW: That processor is not in the User Guide.
>>
>> As a follow-on question, is there an easy way to ask NiFi for all
>> processors that can be used at the beginning of a flow? There is a lot of
>> other tagging done, but I spent a few hours last night googling for an
>> answer before posting this question to the mailing list.
>>
>> Mark.
>>
>> On Fri, Oct 20, 2017 at 9:33 AM Bryan Bende  wrote:
>>
>>> Hi Mark,
>>>
>>> You can use GenerateFlowFile as the initial processor to trigger your
>>> flow.
>>>
>>> Make sure to change the run schedule appropriately otherwise you will
>>> get a lot of flow files generated.
>>>
>>> -Bryan
>>>
>>> On Thu, Oct 19, 2017 at 10:08 PM Mark Rachelski 
>>> wrote:
>>>
 I have a scenario where I need to make an HTTP request to an API but
 taking context into account from previous invocations. Specifically, one
 query string parameter is a time where the API returns all records from
 that time or later. Every day, I would issue a new request using the
 previous time requested.

 I have worked out that I can store the last time requested in the state
 store, and use an UpdateAttribute processor to retrieve it or initialize it
 on the first run. I can then feed that into the InvokeHTTP processor and
 build a dynamic URL from that attribute.

 But my main problem is that I don't know what beginning processor to
 use in this flow. UpdateAttribute needs an inbound connection. And there
 are no obvious 'dummy' beginning processors that I can find in the vast
 array. The only thing I need from the beginning processor is the schedule
 tab.

 Any ideas on what my first processor in this flow should be?

 Thank you in advance for any help,
 Mark.

>>> --
>>> Sent from Gmail Mobile
>>>
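
A rough sketch of the pattern discussed in this thread (the state key, URL,
and schedule are illustrative, not from the thread):

    GenerateFlowFile   (Run Schedule: e.g. a daily cron)
      -> UpdateAttribute   (Store State: "Store state locally")
           last.request.time = ${getStateValue('last.request.time'):replaceEmpty('1970-01-01T00:00:00Z')}
      -> InvokeHTTP   (Remote URL:
           https://api.example.com/records?since=${last.request.time})

After a successful request, a second stateful UpdateAttribute can write the
new request time back to state, e.g. ${now():format("yyyy-MM-dd'T'HH:mm:ss'Z'")}.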
>>


Re: Re: Nifi 1.4: problem with QueryRecord Processor

2017-10-04 Thread dan young
> > From: "Mark Payne" <marka...@hotmail.com>
> > To: "users@nifi.apache.org" <users@nifi.apache.org>
> >
> > Subject: Re: Nifi 1.4: problem with QueryRecord Processor
> > Dan, Uwe,
> >
> > This is something that I will be looking into shortly. It is a known
> issue
> > that exists in both 1.3
> > and 1.4. This occurs whenever there is some other sort of failure, when
> the
> > processor attempts
> > to route the FlowFile to 'failure'. This occurs most often when there is
> a
> > problem obtaining the
> > schema for the Record. Can you check your logs and see what other error
> may
> > be present
> > in the logs?
> >
> > Thanks
> > -Mark
> >
> >
> >
> > On Oct 4, 2017, at 10:39 AM, dan young <danoyo...@gmail.com> wrote:
> >
> >
> > It might be...maybe others can share experience with 1.4...
> >
> >
> > On Wed, Oct 4, 2017, 8:37 AM Uwe Geercken <uwe.geerc...@web.de> wrote:
> >>
> >> thanks Dano.
> >>
> >> Any other comments? Is this a bug in 1.4 then?
> >>
> >> Rgds,
> >>
> >> Uwe
> >>
> >> Sent: Wednesday, 04 October 2017 at 16:33
> >> From: "dan young" <danoyo...@gmail.com>
> >> To: nifi <users@nifi.apache.org>
> >> Subject: Re: Nifi 1.4: problem with QueryRecord Processor
> >>
> >> We're seeing the same thing. Works fine in 1.3
> >>
> >> Dano
> >>
> >>
> >> On Wed, Oct 4, 2017, 8:13 AM Uwe Geercken <uwe.geerc...@web.de> wrote:
> >>>
> >>> Hello,
> >>>
> >>> I have created a flow: GetFile >> QueryRecord >> Putfile. GetFile reads
> >>> an avro file. QueryRecord has one property/sql and the result is
> routed to
> >>> PutFile.
> >>>
> >>> When I run the processor, I get following error:
> >>>
> >>> failed to process session due to java.lang.IllegalStateException
> >>> 
> >>>  already in use or an active callback or an inputstream
> >>> created by ProcessSession.read(FlowFile) has not been closed.
> >>> 
> >>>
> >>> Can somebody help?
> >>>
> >>> Rgds,
> >>>
> >>> Uwe
>


Re: Nifi 1.4: problem with QueryRecord Processor

2017-10-04 Thread dan young
Let me check a few things..

On Wed, Oct 4, 2017 at 8:55 AM Mark Payne <marka...@hotmail.com> wrote:

> Dan, Uwe,
>
> This is something that I will be looking into shortly. It is a known issue
> that exists in both 1.3
> and 1.4. This occurs whenever there is some other sort of failure, when
> the processor attempts
> to route the FlowFile to 'failure'. This occurs most often when there is a
> problem obtaining the
> schema for the Record. Can you check your logs and see what other error
> may be present
> in the logs?
>
> Thanks
> -Mark
>
>
> On Oct 4, 2017, at 10:39 AM, dan young <danoyo...@gmail.com> wrote:
>
> It might be...maybe others can share experience with 1.4...
>
> On Wed, Oct 4, 2017, 8:37 AM Uwe Geercken <uwe.geerc...@web.de> wrote:
>
>> thanks Dano.
>>
>> Any other comments? Is this a bug in 1.4 then?
>>
>> Rgds,
>>
>> Uwe
>>
>> *Sent:* Wednesday, 04 October 2017 at 16:33
>> *From:* "dan young" <danoyo...@gmail.com>
>> *To:* nifi <users@nifi.apache.org>
>> *Subject:* Re: Nifi 1.4: problem with QueryRecord Processor
>>
>> We're seeing the same thing. Works fine in 1.3
>>
>> Dano
>>
>> On Wed, Oct 4, 2017, 8:13 AM Uwe Geercken <uwe.geerc...@web.de> wrote:
>>
>>> Hello,
>>>
>>> I have created a flow: GetFile >> QueryRecord >> Putfile. GetFile reads
>>> an avro file. QueryRecord has one property/sql and the result is routed to
>>> PutFile.
>>>
>>> When I run the processor, I get following error:
>>>
>>> failed to process session due to java.lang.IllegalStateException
>>> 
>>>  already in use or an active callback or an inputstream
>>> created by ProcessSession.read(FlowFile) has not been closed.
>>> 
>>>
>>> Can somebody help?
>>>
>>> Rgds,
>>>
>>> Uwe
>>>
>>
>


Re: Nifi 1.4: problem with QueryRecord Processor

2017-10-04 Thread dan young
We're seeing the same thing. Works fine in 1.3

Dano

On Wed, Oct 4, 2017, 8:13 AM Uwe Geercken  wrote:

> Hello,
>
> I have created a flow: GetFile >> QueryRecord >> Putfile. GetFile reads an
> avro file. QueryRecord has one property/sql and the result is routed to
> PutFile.
>
> When I run the processor, I get following error:
>
> failed to process session due to java.lang.IllegalStateException
> 
>  already in use or an active callback or an inputstream
> created by ProcessSession.read(FlowFile) has not been closed.
> 
>
> Can somebody help?
>
> Rgds,
>
> Uwe
>
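
For anyone reproducing this, a typical minimal QueryRecord configuration
for the flow above (the reader/writer services and the query itself are
illustrative):

    QueryRecord
      Record Reader   = AvroReader (matching the incoming Avro files)
      Record Writer   = e.g. JsonRecordSetWriter
      "matched"       = SELECT * FROM FLOWFILE

    Each dynamic property adds an outgoing relationship named after it
    ("matched" here), and the incoming records are queried as the FLOWFILE
    table.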


Re: Parameterizing the nifi flow

2017-08-13 Thread dan young
We've done something similar to what Carlos outlines here.  Works really great

Dano

On Sun, Aug 13, 2017, 9:39 AM Carlos Manuel Fernandes (DSI) <
carlos.antonio.fernan...@cgd.pt> wrote:

> Hi Vikram,
>
>
>
> I had the same requirements as you, and my solution is a service:
>
>
>
> HandleHttpRequest (prepared to handle
> http://host:8085/ods?sourceTable=tableName&targetTable=tableName&truncate=Y)
> ->
>
> ExecuteScript (script reads the HTTP parameters and/or reads extra
> parameters from a database table and performs the synchronization) ->
>
> HandleHttpResponse (returns 200 if OK)
>
>
>
> After this service is ready, you can create a scheduled invoker, like
> this:
>
>
>
> ExecuteScript (gets all the table names you need to synchronize, in my
> case based on a query: select source_table, target_table, truncate from
> ods_tables) ->
>
> InvokeHTTP (
> http://host:8085/ods?sourceTable=${source_table}&targetTable=${target_table}&truncate=${truncate})
> ->
>
> PutEmail (sends the result of the synchronization).
>
>
>
>
>
> With this you don't need to repeat flows. I hope this helps.
>
>
>
> Carlos Fernandes
>
>
>
>
>
>
>
>
>
> *From:* Andy LoPresto [mailto:alopre...@apache.org]
> *Sent:* sábado, 12 de agosto de 2017 01:36
> *To:* users@nifi.apache.org
> *Subject:* Re: Parameterizing the nifi flow
>
>
>
> The variable registry is a great tool for parameterizing values that
> differ between environments/deployments. In this case it sounds like
> setting up a flow that reads from a flowfile attribute/content to determine
> the source table’s name is a better fit. You can create a master list of
> all source tables and store it as a plaintext file, then on your schedule,
> read the contents of that file, split the content by line, and send each of
> the 100+ resulting flowfiles into the part of the flow that reads from the
> database. Each would provide the name of the source table to read at that
> time.
>
>
>
> Andy LoPresto
>
> alopre...@apache.org
>
> *alopresto.apa...@gmail.com *
>
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
>
>
> On Aug 11, 2017, at 3:26 PM, Andrew Grande  wrote:
>
>
>
> Hi,
>
> Read up on the variable registry in the docs; that sounds like a good fit.
> I don't remember if it was available in 1.1, though.
>
> Andrew
>
>
>
> On Fri, Aug 11, 2017, 5:12 PM More, Vikram (CONT) <
> vikram.m...@capitalone.com> wrote:
>
> Hi,
>
>
>
> I have a NiFi flow which pulls/extracts from a source database table and
> loads into a target database table. This flow will run several times a day
> to get delta records from the source table (more like a batch process
> running every 3-4 hrs). Now I need to replicate this same process for 100+
> different source tables. So rather than creating 100+ NiFi flows, one for
> each separate table, can I create a main flow (say, a template) and pass
> parameters like the source extract SQL and target load SQL to the main
> flow, repeating these steps for each source table? Has anyone tried
> parameterizing NiFi flows? Can you please advise? We are using NiFi 1.1.0.
>
>
>
> Appreciate any thoughts here.
>
>
>
>
>
> Thanks & Regards,
>
> *Vikram*
>
>
>
>
> --
>
> The information contained in this e-mail is confidential and/or
> proprietary to Capital One and/or its affiliates and may only be used
> solely in performance of work or services for Capital One. The information
> transmitted herewith is intended only for use by the individual or entity
> to which it is addressed. If the reader of this message is not the intended
> recipient, you are hereby notified that any review, retransmission,
> dissemination, distribution, copying or other use of, or taking of any
> action in reliance upon this information is strictly prohibited. If you
> have received this communication in error, please contact the sender and
> delete the material from your computer.
>
>
>
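
A minimal Jython sketch of the ExecuteScript step in Carlos's service,
assuming HandleHttpRequest upstream (which exposes each query parameter as
an http.query.param.* attribute); the parameter names match the URL above,
and the actual synchronization work is elided:

    flowFile = session.get()
    if flowFile is not None:
        # HandleHttpRequest puts each query parameter into an attribute
        sourceTable = flowFile.getAttribute('http.query.param.sourceTable')
        targetTable = flowFile.getAttribute('http.query.param.targetTable')
        truncate    = flowFile.getAttribute('http.query.param.truncate')
        # ... look up extra parameters / run the synchronization here ...
        flowFile = session.putAttribute(flowFile, 'sync.sourceTable', sourceTable or '')
        session.transfer(flowFile, REL_SUCCESS)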


NiFi 1.2?

2017-03-22 Thread dan young
Any update on when 1.2 might be released into the wild?

Regards,

Dano


Re: Most popular processors

2017-03-17 Thread dan young
Hey Pere,

Here's a list of the processors we use most often, in no particular order...

UpdateAttribute
InvokeHTTP
ExecuteStreamCommand
ExecuteScript
PutS3Object
PutSNS
GetFile/PutFile
RouteOnAttribute
HandleHttpRequest/Response
GenerateFlowFile
EvaluateJsonPath
GenerateTableFetch
ExecuteSQL
ConvertAvroToJSON
CompressContent

That's off the top of my head


We also use Process Groups and Remote Process Groups.


Regards,

Dano


On Fri, Mar 17, 2017 at 9:21 AM Pere Urbón Bayes 
wrote:

> Hi,
>   my name is Pere Urbon and I am working on a small book / crash course on
> building data processing systems, ETLs, etc. with Apache NiFi.
>
> I was wondering if there is some sense of the most used processors for
> each category? I know the question is really hard to answer exactly, but
> just a reasonable guess would probably be OK.
>
> What do you think?
>
> - purbon
>