Re: ListenUDP processor dropping packets

2020-09-23 Thread Josef.Zahner1
Hi Henrique

Did you increase the OS UDP receive buffer size? By default it's far too small 
under Linux.

sysctl -w net.core.rmem_max=16777216

With netstat -su (“Udp:” Section) you can check whether you have UDP buffer 
issues…
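To make that concrete, a small sketch of the commands involved (the 16 MB value mirrors the sysctl above; pick a size that fits your traffic, and note the persistence file name is just a convention):

```shell
# Raise the kernel cap on socket receive buffers (effective immediately,
# but not persistent across reboots): 16 MB = 16777216 bytes.
sysctl -w net.core.rmem_max=16777216

# Persist the setting across reboots:
echo 'net.core.rmem_max=16777216' > /etc/sysctl.d/99-udp-buffers.conf

# Check for drops: under load, growing "packet receive errors" /
# "receive buffer errors" counters in the "Udp:" section point at an
# undersized buffer or a consumer that cannot keep up.
netstat -su | grep -A6 '^Udp:'
```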

Cheers Josef

From: Henrique Nascimento 
Reply to: "users@nifi.apache.org" 
Date: Thursday, 24 September 2020 at 01:28
To: "users@nifi.apache.org" 
Subject: ListenUDP processor dropping packets


Hi all,

Thanks in advance for any help.

I have a ListenUDP processor receiving syslog messages, but when I compare the 
processor's output with a tcpdump capture on the same Linux host, I notice that 
the processor drops a lot of the data, with no warning/error. (Well, it's UDP...)

I've already tried a lot of configurations; currently I'm using these values:

Concurrent tasks = 6

Receive Buffer Size = 65507 B

Max Size of Message Queue = 1000

Max Size of Socket Buffer = 512 MB

Max Batch Size = 20

Running NiFi 1.11.4.
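One note on the settings listed above, as a hedged sketch for Linux: "Max Size of Socket Buffer = 512 MB" only helps if the OS is willing to hand out a buffer that large, since the kernel silently clamps requests down to net.core.rmem_max (NiFi should warn when it gets less than it asked for, but it's worth checking explicitly):

```shell
# 512 MB = 536870912 bytes. Without this, the kernel clamps the
# processor's requested socket buffer down to the current rmem_max:
sysctl -w net.core.rmem_max=536870912

# Confirm the cap that is actually in effect:
sysctl net.core.rmem_max
```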

Docs that helped me:

https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.ListenUDP/

https://bryanbende.com/development/2016/05/09/optimizing-performance-of-apache-nifis-network-listening-processors



Please, any ideas on how I can tune this processor better?
--


Henrique Nascimento


smime.p7s
Description: S/MIME Cryptographic Signature


Re: NiFi 1.12.0 - KeyStores with multiple certificates are not supported

2020-08-20 Thread Josef.Zahner1
Hi Andy & Kotaro

Thank you for your comments. So this means we can't upgrade to NiFi 1.12.0 :-( 
(unless we change the certs, which is not an option at the moment).

@Andy: I'm aware of the wildcard certificate notes in the documentation. We 
don't have a wildcard certificate with a '*' in it. We are using a SAN with 
multiple explicit node names like nifi1.domain.com and nifi2.domain.com. Does 
this cause the same issues you mentioned for a real wildcard, or would that be 
fine? This isn't clear to me from the documentation.

We can't use the NiFi Toolkit because we have to use our own corporate CA, which 
at the moment cannot be provisioned automatically, so the CSRs need to be done 
manually and it would be a huge amount of work to create and maintain the 
certificates.

Cheers Josef



From: Andy LoPresto 
Reply to: "users@nifi.apache.org" 
Date: Thursday, 20 August 2020 at 01:06
To: "users@nifi.apache.org" 
Subject: Re: NiFi 1.12.0 - KeyStores with multiple certificates are not 
supported

Hi Josef and Kotaro,

Thanks for identifying this scenario. I am away from the office for a bit but 
will try to review Kotaro’s changes in the linked PR. The regression is within 
Jetty’s code, and requires a new API to be invoked. NiFi does not have an 
existing method to configure a specific key to use within the keystore, and 
thus has always encouraged the use of a keystore with a single certificate and 
key (PrivateKeyEntry).

However, I will note that the initial scenario described by Josef seems to use 
a wildcard certificate, and this is explicitly mentioned in the documentation 
as not supported and discouraged [1].


Wildcard certificates (i.e. two nodes node1.nifi.apache.org and 
node2.nifi.apache.org being assigned the same certificate with a CN or SAN entry 
of *.nifi.apache.org) are not officially supported and not recommended. There 
are numerous disadvantages to using wildcard certificates, and a cluster working 
with wildcard certificates has occurred in previous versions out of lucky 
accidents, not intentional support. Wildcard SAN entries are acceptable if each 
cert maintains an additional unique SAN entry and CN entry.

I understand the challenges around automating key and certificate management 
and regenerating/expiring certificates appropriately. The TLS Toolkit exists to 
assist with this process, and there are ongoing improvements being made. 
However, fully supporting wildcard certificates would require substantial 
refactoring in the core framework and is not planned for any immediate 
attention.

[1] 
https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#wildcard_certificates


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
He/Him
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69


On Aug 19, 2020, at 11:13 AM, Kotaro Terada <kota...@apache.org> wrote:

Hi Josef and teams,

I encountered the same problem, and I have created a patch to fix it [1].

I guess the only way to fix the problem is to apply the patch and rebuild NiFi, 
since the current implementation unfortunately doesn't seem to support keystores 
with multiple certificates. Could someone please help review the PR so the fix 
can proceed?

[1] https://issues.apache.org/jira/browse/NIFI-7730

Thanks,
Kotaro


On Thu, Aug 20, 2020 at 12:51 AM <josef.zahn...@swisscom.com> wrote:
Hi guys

As we were waiting for some bug fixes in NiFi 1.12.0, we upgraded today from 
1.11.4 to the newest version on one of our secured single-VM test instances. 
However, NiFi crashed during startup; the error message is below. It tells us 
that keystores with multiple certificates are not supported. As you know, we 
have to use two stores (keystore & truststore):

  1.  Keystore with private key and signed cert -> only one cert, the one that 
belongs to the private key (picture far below)
  2.  Truststore with CA certs -> multiple CA certs, as we have imported the 
cacerts from Linux

I see two potential issues now, but I haven't found the time to run further 
tests.

We don't have multiple certs in the keystore with the private key, as you can 
see in the picture far below, but of course we have SANs (Subject Alternative 
Names), as we have tons of NiFi instances running and it's more than annoying to 
configure/generate a keypair for each instance. So the workaround was to insert 
all our NiFi instances as SANs, and that way we were able to use one single 
keystore for all our NiFi instances (some of them clustered, some not). However, 
my assumption is that this workaround now potentially breaks NiFi; it was 
working until NiFi 1.11.4. We know the workaround is/was not ideal from a 
security perspective, but we don't have the manpower to manually generate that 
many certs every 1-2 years when they expire, and it's anyway completely 
separated from 

Re: Change Version not possible due to setting a Parameter Context for a Process Group

2020-08-17 Thread Josef.Zahner1
Good point Russell. We last upgraded NiFi Registry in March 2020; since then two 
new versions have come out, that's correct. We plan to upgrade to NiFi Registry 
0.7.0 together with NiFi 1.12.0.

To be honest, I don't expect the issue to be on the Registry side. But I'm 
curious what other people think. I'm not even able to troubleshoot the issue…

I searched the Jira bug tickets and found some context-related issues which will 
be fixed in NiFi 1.12.0 (NIFI-7536 / NIFI-7627). However, nothing exactly 
matched my issue.

Cheers Josef

From: Russell Bateman 
Reply to: "users@nifi.apache.org" 
Date: Monday, 17 August 2020 at 15:27
To: "users@nifi.apache.org" 
Subject: Re: Change Version not possible due to setting a Parameter Context for 
a Process Group

Forgive me for asking, but I'm curious. NiFi Registry 0.5.0 is fully two 
versions behind the current one. Wouldn't using the latest be a precursor to 
sorting out problems? Bugs may have been fixed. Or is there something the older 
Registry is known to offer that the newest one doesn't, in terms of migrating 
from variables to parameters?
On 8/17/20 6:04 AM, 
josef.zahn...@swisscom.com wrote:
Hi guys

We are using NiFi 1.11.4 with NiFi Registry 0.5.0. We are trying to migrate 
from variables to parameters.

As soon as we add a “Process Group Parameter Context” to an existing process 
group and change something else so that we can commit it to the NiFi Registry, 
the commit seems to get corrupted. We can't pull the new version on the other 
NiFi cluster; the error message is below:

«Failed to update flow to new version due to 
org.apache.nifi.web.util.LifecycleManagementException: Failed to update Flow on 
all nodes in cluster due to An unexpected error has occurred. Please check the 
logs for additional details.»


[screenshot]

Where can we find the log messages mentioned in the error message? nifi-app.log 
shows no more than what the GUI shows. How can we troubleshoot this, or is there 
a known bug?

We already added Parameter Contexts to other Process Groups in the past (maybe 
in another NiFi version?), so in general it was working.

Thanks in advance, Josef






Change Version not possible due to setting a Parameter Context for a Process Group

2020-08-17 Thread Josef.Zahner1
Hi guys

We are using NiFi 1.11.4 with NiFi Registry 0.5.0. We are trying to migrate 
from variables to parameters.

As soon as we add a “Process Group Parameter Context” to an existing process 
group and change something else so that we can commit it to the NiFi Registry, 
the commit seems to get corrupted. We can't pull the new version on the other 
NiFi cluster; the error message is below:

«Failed to update flow to new version due to 
org.apache.nifi.web.util.LifecycleManagementException: Failed to update Flow on 
all nodes in cluster due to An unexpected error has occurred. Please check the 
logs for additional details.»


[screenshot]

Where can we find the log messages mentioned in the error message? nifi-app.log 
shows no more than what the GUI shows. How can we troubleshoot this, or is there 
a known bug?

We already added Parameter Contexts to other Process Groups in the past (maybe 
in another NiFi version?), so in general it was working.

Thanks in advance, Josef




Re: External Access using InvokeHTTP_Test processor and StandardSSLContextService

2020-08-06 Thread Josef.Zahner1
It most probably tells you that the CA cert of the remote HTTPS server wasn't 
found in the truststore you've defined for accessing the site. So please check 
the CA cert and the truststore again…
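Two quick checks along those lines, sketched with placeholder host and password values:

```shell
# 1) What certificate does the remote endpoint actually present?
#    (remote.example.com:443 is a placeholder)
openssl s_client -connect remote.example.com:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer

# 2) Which CA certificates does the truststore configured in the
#    SSLContextService actually contain? ("changeit" is a placeholder)
keytool -list -keystore truststore.jks -storepass changeit
```

The issuer printed by the first command (or its root CA) has to be present in the truststore, otherwise you get exactly a "PKIX path building failed" error.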

Cheers Josef


From: "White, Daniel" 
Reply to: "users@nifi.apache.org" 
Date: Thursday, 6 August 2020 at 13:07
To: "users@nifi.apache.org" 
Subject: External Access using InvokeHTTP_Test processor and 
StandardSSLContextService

Confidential

Hi All,

We've set up the truststore for the NiFi processor. However, we get the 
following error when trying to connect to an external HTTPS location.

The error I get is: PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
valid certification path to requested target

Any ideas? I assume this is a cert issue on the NiFi server.

Thanks

Dan White
Lead Technical Architect
Legal & General Investment Management
One Coleman Street, London, EC2R 5AA
Tel: +44 203 124 4048
Mob: +44 7980 027 656
www.lgim.com






Re: Need help SSL LDAP Nifi Registry

2020-06-30 Thread Josef.Zahner1
Hi Etienne

Did you try the following in «nifi-registry.properties»?
nifi.registry.security.needClientAuth=false

Cheers Josef
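One way to verify the change after a restart, sketched with a placeholder host/port (18443 is only a common default):

```shell
# With needClientAuth=false the TLS handshake should no longer include a
# CertificateRequest. If the output still contains a section headed
# "Acceptable client certificate CA names", the server is still asking
# the browser for a client certificate:
openssl s_client -connect localhost:18443 </dev/null 2>/dev/null \
  | grep -i 'client certificate'
```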


From: Etienne Jouvin 
Reply to: "users@nifi.apache.org" 
Date: Tuesday, 30 June 2020 at 10:46
To: "users@nifi.apache.org" 
Subject: Need help SSL LDAP Nifi Registry

Hello all.

I am trying to setup LDAP authentication on NiFi Registry.
I followed some links, like 
https://community.cloudera.com/t5/Community-Articles/Setting-Up-a-Secure-Apache-NiFi-Registry/ta-p/247753

But each time, it requires that a certificate be installed on the client side. I 
had this "problem" with NiFi, but that was because I had not provided 
nifi.security.user.login.identity.provider.

For the Registry, I remembered that and did it.

For summary, what I have in nifi-registry.properties
nifi.registry.security.keystore=./conf/keystore.jks
nifi.registry.security.keystoreType=jks
nifi.registry.security.keystorePasswd=password
nifi.registry.security.keyPasswd=password
nifi.registry.security.truststore=./conf/truststore.jks
nifi.registry.security.truststoreType=jks
nifi.registry.security.truststorePasswd=password

(All of this information was produced by the tls-toolkit when it was run for 
NiFi.)
Then I put this
#nifi.registry.security.identity.provider=
nifi.registry.security.identity.provider=ldap-identity-provider

In the file identity-providers.xml I set up the LDAP provider:

    <provider>
        <identifier>ldap-identity-provider</identifier>
        <class>org.apache.nifi.registry.security.ldap.LdapIdentityProvider</class>
        <property name="Authentication Strategy">SIMPLE</property>
        <property name="Manager DN">uid=admin,ou=system</property>
        <property name="Manager Password">secret</property>
        <property name="TLS - Keystore"></property>
        <property name="TLS - Keystore Password"></property>
        <property name="TLS - Keystore Type"></property>
        <property name="TLS - Truststore"></property>
        <property name="TLS - Truststore Password"></property>
        <property name="TLS - Truststore Type"></property>
        <property name="TLS - Client Auth"></property>
        <property name="TLS - Protocol"></property>
        <property name="TLS - Shutdown Gracefully"></property>
        <property name="Referral Strategy">FOLLOW</property>
        <property name="Connect Timeout">10 secs</property>
        <property name="Read Timeout">10 secs</property>
        <property name="Url">ldap://localhost:10389</property>
        <property name="User Search Base">ou=users,dc=test,dc=ch</property>
        <property name="User Search Filter">uid={0}</property>
        <property name="Identity Strategy">USE_DN</property>
        <property name="Authentication Expiration">12 hours</property>
    </provider>

And finally in authorizers.xml:

    <userGroupProvider>
        <identifier>file-user-group-provider</identifier>
        <class>org.apache.nifi.registry.security.authorization.file.FileUserGroupProvider</class>
        <property name="Users File">./conf/users.xml</property>
        <property name="Initial User Identity 1">uid=firstuser,ou=users,dc=test,dc=ch</property>
    </userGroupProvider>

    <accessPolicyProvider>
        <identifier>file-access-policy-provider</identifier>
        <class>org.apache.nifi.registry.security.authorization.file.FileAccessPolicyProvider</class>
        <property name="User Group Provider">file-user-group-provider</property>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Initial Admin Identity">uid=firstuser,ou=users,dc=test,dc=ch</property>
    </accessPolicyProvider>

Starting Registry is OK.

But when I try to access it through Chrome, I get a certificate error: 
ERR_BAD_SSL_CLIENT_AUTH_CERT

How can I force authentication to not request a client-side certificate?

Thanks for any input.

Etienne Jouvin





Re: ConsumeKafkaRecord Performance Issue

2020-06-22 Thread Josef.Zahner1
Hi Mark,

it really doesn't matter what I configure for “Max Poll Records” (tried 10, 
1'000, 100'000) or for “Max Uncommitted Time” (tried 1 s, 10 s, 100 s). The 
flowfile size is always randomly between 1 and about 500 records, so those two 
parameters don't seem to have any effect in my case. The theory behind them is 
clear, but it doesn't work as expected… And yes, of course the queue was more 
than full.

In the meantime I've done some tests with the ConsumeKafka processor, and the 
performance difference is again huge: 4 times better performance (1 million 
messages per second) with the same topic and number of threads for the 
non-record processor -> network limit reached. It seems that the 
RecordReader/RecordWriter part of the ConsumeKafkaRecord processor consumes a 
lot of CPU power. Interesting that nobody has complained about it until now; are 
we that unusual with a few 100'000 messages per second? We have sources which 
produce about 200'000 messages/s, and we would like to be able to consume a few 
times faster than we produce.

We now plan to implement a KafkaAvroConsumer based on the ConsumeKafka 
processor. It will consume from Kafka and write Avro out instead of the plain 
message with a demarcator. We hope to get the same great performance as with the 
ConsumeKafka processor.

Cheers Josef

From: Mark Payne 
Reply to: "users@nifi.apache.org" 
Date: Monday, 22 June 2020 at 15:03
To: "users@nifi.apache.org" 
Subject: Re: ConsumeKafkaRecord Performance Issue

Josef,

The Max Poll Records value just gets handed to the Kafka client and limits how 
much should be pulled back in a single request. This can be important because if 
you set the value really high, you could potentially buffer up a lot of messages 
in memory and consume a lot of heap. But setting it too low would result in 
small batches. So that property can play a role in the size of a batch of 
records, but you should not expect to see output batches that are necessarily 
equal to the value there.

What value do you have set for the “Max Uncommitted Time”? That can certainly 
play a role in the size of the FlowFiles that are output.

Thanks
-Mark



On Jun 22, 2020, at 2:48 AM, 
josef.zahn...@swisscom.com wrote:

Hi Mark,

thanks a lot for your explanation, it makes complete sense! Did you also check 
the “Max Poll Records” parameter? No matter how high I set it, I always get a 
random number of records back in one flowfile. The maximum is about 400 records, 
which isn't ideal for small records, as NiFi ends up with a lot of flowfiles of 
a few kilobytes each in case of a huge backlog.

Cheers Josef


From: Mark Payne <marka...@hotmail.com>
Reply to: "users@nifi.apache.org" <users@nifi.apache.org>
Date: Friday, 19 June 2020 at 17:06
To: "users@nifi.apache.org" <users@nifi.apache.org>
Subject: Re: ConsumeKafkaRecord Performance Issue
Subject: Re: ConsumeKafkaRecord Performance Issue

Josef,

Glad you were able to get past this hurdle. The reason for the consumer 
yielding is a bit complex. NiFi issues an async request to Kafka to retrieve 
messages. Then, NiFi performs a long-poll to get those messages from the Kafka 
client. If the client returns 0 messages from the long-poll, the assumption 
that NiFi makes is that there are no more messages available from Kafka. So it 
yields to avoid hammering the Kafka server constantly when there are no 
messages available. Unfortunately, though, I have found fairly recently by 
digging into the Kafka client code that returning 0 messages happens not only 
when there are no messages available on the Kafka server but also if the client 
just takes longer than that long-poll (10 milliseconds) to receive the response 
and prepare the messages on the client side. The client doesn’t appear to 
readily expose any information about whether or not there are more messages 
available, so this seems to be the best we can do with what the client 
currently provides.

So setting a yield duration of 0 seconds will provide much higher throughput 
but may put more load on the Kafka brokers.



On Jun 19, 2020, at 10:12 AM, 
josef.zahn...@swisscom.com wrote:

Hi Mark, Pierre

We are using NiFi 1.11.4, so fully up to date.

Are you kidding me :-D - “Yield Duration” was always at the default value (1 
sec), as I didn't expect the processor to “yield”. Due to your comment I've 
changed it to “0 secs”. I can't believe it: the performance has increased to the 
same value (about 250'000 messages per second) that kafka-consumer-perf-test.sh 
shows. Thanks a lot!! However, 250k messages/s is still not enough to cover all 
our use cases, but at least it is now consistent with the Kafka performance 
testing script. The Kafka Grafana dashboard shows about 60 MB/s outgoing at the 
current message rate.

@Pierre: The setup you are referring to with 10-20Mio messages per seconds. How 
many partitions had they and how 

Re: ConsumeKafkaRecord Performance Issue

2020-06-22 Thread Josef.Zahner1
Hi Mark,

thanks a lot for your explanation, it makes complete sense! Did you also check 
the “Max Poll Records” parameter? No matter how high I set it, I always get a 
random number of records back in one flowfile. The maximum is about 400 records, 
which isn't ideal for small records, as NiFi ends up with a lot of flowfiles of 
a few kilobytes each in case of a huge backlog.

Cheers Josef


From: Mark Payne 
Reply to: "users@nifi.apache.org" 
Date: Friday, 19 June 2020 at 17:06
To: "users@nifi.apache.org" 
Subject: Re: ConsumeKafkaRecord Performance Issue

Josef,

Glad you were able to get past this hurdle. The reason for the consumer 
yielding is a bit complex. NiFi issues an async request to Kafka to retrieve 
messages. Then, NiFi performs a long-poll to get those messages from the Kafka 
client. If the client returns 0 messages from the long-poll, the assumption 
that NiFi makes is that there are no more messages available from Kafka. So it 
yields to avoid hammering the Kafka server constantly when there are no 
messages available. Unfortunately, though, I have found fairly recently by 
digging into the Kafka client code that returning 0 messages happens not only 
when there are no messages available on the Kafka server but also if the client 
just takes longer than that long-poll (10 milliseconds) to receive the response 
and prepare the messages on the client side. The client doesn’t appear to 
readily expose any information about whether or not there are more messages 
available, so this seems to be the best we can do with what the client 
currently provides.

So setting a yield duration of 0 seconds will provide much higher throughput 
but may put more load on the Kafka brokers.


On Jun 19, 2020, at 10:12 AM, 
josef.zahn...@swisscom.com wrote:

Hi Mark, Pierre

We are using NiFi 1.11.4, so fully up to date.

Are you kidding me :-D - “Yield Duration” was always at the default value (1 
sec), as I didn't expect the processor to “yield”. Due to your comment I've 
changed it to “0 secs”. I can't believe it: the performance has increased to the 
same value (about 250'000 messages per second) that kafka-consumer-perf-test.sh 
shows. Thanks a lot!! However, 250k messages/s is still not enough to cover all 
our use cases, but at least it is now consistent with the Kafka performance 
testing script. The Kafka Grafana dashboard shows about 60 MB/s outgoing at the 
current message rate.

@Pierre: Regarding the setup you referred to with 10-20 million messages per 
second: how many partitions did it have, and how big were the messages? We are 
storing the messages in this example as Avro with about 44 fields.

Cheers Josef


PS: below some more information about my setup (even though our main issue has 
been solved):

As record reader I'm using an AvroReader which gets the schema from a Confluent 
schema registry. Every setting there is default except the connection 
parameters to Confluent. As record writer I'm using an AvroRecordSetWriter with 
a predefined schema, as we only want a reduced column set.

The 8 servers use SAS SSDs only and don't store the data; it goes from 
ConsumeKafkaRecord directly into our DB, which runs on another cluster. As I 
mentioned already, the problem was there whether I used “Primary Only” or 
distributed the load across the cluster, so it wasn't a limit of a single node.





From: Mark Payne <marka...@hotmail.com>
Reply to: "users@nifi.apache.org" <users@nifi.apache.org>
Date: Friday, 19 June 2020 at 14:46
To: "users@nifi.apache.org" <users@nifi.apache.org>
Subject: Re: ConsumeKafkaRecord Performance Issue
Subject: Re: ConsumeKafkaRecord Performance Issue

Josef,

Have you tried updating the processor's Yield Duration (Configure -> Settings 
tab)? Setting that to “0 secs” can make a big difference in 
ConsumeKafka(Record)'s performance.

Also, what kind of data rate (MB/sec) are you looking at, and which record 
reader and writer are you using? Are you using a schema registry? Spinning 
disks or SSDs?

All of these can make a big difference in performance.

Thanks
Mark


On Jun 19, 2020, at 3:45 AM, "josef.zahn...@swisscom.com" 
<josef.zahn...@swisscom.com> wrote:
Hi Chris

Our brokers are running Kafka 2.3.0, just slightly different from my 
kafka-consumer-perf-test.sh version.

I've now tested with the performance shell script from Kafka 2.0.0 as well; it 
showed the same result as with 2.3.1.

In my eyes at least 100k messages/s should easily be possible, especially with 
the number of threads NiFi uses… As we have sources which generate about 300k 
to 400k messages/s, NiFi is at the moment far too slow even to consume in real 
time, and it gets even worse: if we fall behind the offset we can't catch up 
anymore.

At the moment we can't use NiFi to consume from Kafka.
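For anyone reproducing this, a baseline run of the perf script mentioned above looks roughly like this (broker and topic names are placeholders; on Kafka 2.3 the script still takes --broker-list):

```shell
# Measure raw consumer throughput without NiFi in the path; the nMsg.sec
# column of the output gives the broker-side ceiling that any NiFi
# consumer configuration can at best approach.
./bin/kafka-consumer-perf-test.sh \
  --broker-list broker1:9092 \
  --topic syslog-avro \
  --messages 1000000
```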

Cheers Josef


From: "christophe.mon...@post.ch" 

Re: maven nifi 1.11.4 libs

2020-03-23 Thread Josef.Zahner1
Perfect, thanks Pierre :-).

Cheers Josef

From: Pierre Villard 
Reply to: "users@nifi.apache.org" 
Date: Monday, 23 March 2020 at 10:54
To: "users@nifi.apache.org" 
Subject: Re: maven nifi 1.11.4 libs

Hi Josef,

It can take a bit before sync is complete between repositories. It should be OK 
later today I'd imagine.

Thanks,
Pierre

On Mon, 23 Mar 2020 at 10:47, <josef.zahn...@swisscom.com> wrote:
As NiFi 1.11.4 was released yesterday, we tried to upgrade our processors to 
1.11.4. Sadly, Maven shows no update for 1.11.4 yet - am I too early? We can't 
wait to install the new release (a bugfix release, yeah!).

Cheers Josef




maven nifi 1.11.4 libs

2020-03-23 Thread Josef.Zahner1
As NiFi 1.11.4 was released yesterday, we tried to upgrade our processors to 
1.11.4. Sadly, Maven shows no update for 1.11.4 yet - am I too early? We can't 
wait to install the new release (a bugfix release, yeah!).

Cheers Josef




Re: Logging/Monitoring of Invalid Processors at NiFi Startup

2020-03-04 Thread Josef.Zahner1
With logback.xml you can fine-tune log messages, but don't ask me for the 
details :-).

Cheers Josef


On 04.03.20, 16:35, "Dobbernack, Harald (Key-Work)" 
 wrote:

Hi Josef,

thank you for your input! We planned on merging/aggregating the errors into 
one mail per hour (so the length of the mail would only depend on the number of 
distinct error types in the timeframe), but of course I'll check with my Splunk 
colleagues! We still have the problem, though, that the invalid processors are 
not logged to nifi-app.log - or is there a way to enable this?

Thank you,
Harald



From: josef.zahn...@swisscom.com 
Sent: Wednesday, 4 March 2020 16:19
To: users@nifi.apache.org
Subject: Re: Logging/Monitoring of Invalid Processors at NiFi Startup

Hi Harald,

I can only tell you what we do: we send the whole nifi-app.log to Splunk and 
scan there specifically for alarms/warnings. We don't use e-mail notification 
as it doesn't help much. In nifi-app.log you would also see startup issues, so 
we just focus on that instead.

In my eyes e-mail isn't the right medium for alerting/monitoring, e.g. if you 
have massive issues it would completely flood your e-mail account with warnings 
- and I don't think you want that.

Cheers Josef


From: "Dobbernack, Harald (Key-Work)" 
Reply to: "users@nifi.apache.org" 
Date: Wednesday, 4 March 2020 at 14:11
To: "users@nifi.apache.org" 
Subject: Logging/Monitoring of Invalid Processors at NiFi Startup

Our standalone NiFi 1.11.1 on Debian 10.2 will, on service startup, not throw 
an error or warning into the NiFi log if it deems a processor invalid, for 
example if a Samba mount is not available and the ListFile or GetFile 
processors cannot reach the mount. If, on the other hand, the processors are 
running and the connection to the mount gets lost, then we see error entries in 
the NiFi app log, which we can use to alert us.

We had thought of letting NiFi report errors to us via push mail with a 
TailFile processor on its own log, but in the case of an unreachable mount at 
service startup it wouldn't be able to alert us that something is wrong.

Is it possible to log invalid processors at service startup? Or how do you 
monitor or report failures of this type?

Thank you,
Harald

 --


 Harald Dobbernack
Key-Work Consulting GmbH | Kriegsstr. 100 | 76133 | Karlsruhe | Germany | 
https://www.key-work.de | 
Datenschutz
Fon: +49-721-78203-264 | E-Mail: harald.dobbern...@key-work.de | Fax: 
+49-721-78203-10

Key-Work Consulting GmbH, Karlsruhe, HRB 108695, HRG Mannheim
Geschäftsführer: Andreas Stappert, Tobin Wotring






Re: Logging/Monitoring of Invalid Processors at NiFi Startup

2020-03-04 Thread Josef.Zahner1
Hi Harald,

I can only tell you what we do: we send the whole nifi-app.log to Splunk and 
scan there specifically for alarms/warnings. We don't use e-mail notification 
as it doesn't help much. In nifi-app.log you would also see startup issues, so 
we just focus on that instead.

In my eyes e-mail isn't the right medium for alerting/monitoring, e.g. if you 
have massive issues it would completely flood your e-mail account with warnings 
- and I don't think you want that.

Cheers Josef


From: "Dobbernack, Harald (Key-Work)" 
Reply to: "users@nifi.apache.org" 
Date: Wednesday, 4 March 2020 at 14:11
To: "users@nifi.apache.org" 
Subject: Logging/Monitoring of Invalid Processors at NiFi Startup

Our standalone NiFi 1.11.1 on Debian 10.2 will, on service startup, not throw 
an error or warning into the NiFi log if it deems a processor invalid, for 
example if a Samba mount is not available and the ListFile or GetFile 
processors cannot reach the mount. If, on the other hand, the processors are 
running and the connection to the mount gets lost, then we see error entries in 
the NiFi app log, which we can use to alert us.

We had thought of letting NiFi report errors to us via push mail with a 
TailFile processor on its own log, but in the case of an unreachable mount at 
service startup it wouldn't be able to alert us that something is wrong.

Is it possible to log invalid processors at service startup? Or how do you 
monitor or report failures of this type?

Thank you,
Harald


Harald Dobbernack
Key-Work Consulting GmbH | Kriegsstr. 100 | 76133 | Karlsruhe | Germany | 
https://www.key-work.de | 
Datenschutz
Fon: +49-721-78203-264 | E-Mail: harald.dobbern...@key-work.de | Fax: 
+49-721-78203-10

Key-Work Consulting GmbH, Karlsruhe, HRB 108695, HRG Mannheim
Geschäftsführer: Andreas Stappert, Tobin Wotring




Re: Downtime behaviour

2020-03-04 Thread Josef.Zahner1
If you mean the “Cron driven” scheduling strategy with a start of exactly 
08:00:00, then it won't run afterwards. If you use “Timer driven” scheduling, 
it will start again after a restart. So it all depends on your scheduling 
strategy.
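For illustration, assuming the usual Quartz syntax that NiFi's Cron driven strategy expects, a daily 08:00:00 schedule looks like this; an occurrence that falls into a downtime window is simply skipped rather than re-fired after restart, whereas a Timer driven interval resumes counting:

```
0 0 8 * * ?
```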


From: Mike Thomsen 
Reply to: "users@nifi.apache.org" 
Date: Wednesday, 4 March 2020 at 12:25
To: "users@nifi.apache.org" 
Subject: Re: Downtime behaviour

I've never set a flow to run at an exact time of the day, but giving yourself 
such a tight window to have everything come back doesn't seem like a good 
practice for system maintenance.

On Wed, Mar 4, 2020 at 12:24 AM Nifi Rocks 
mailto:nifi.rocket...@gmail.com>> wrote:
Hi community.
Suppose I set the start of my flow to 8 o'clock. Now I shut down the container 
between 7:59 and 8:01 and restart it afterwards. Will my flow start at 8:02?

If the flow is not triggered afterwards, several things will get complicated. I 
would have to inform all users during upgrades and first find out which 
processes would run during the maintenance period.


smime.p7s
Description: S/MIME Cryptographic Signature


Re: FetchSFTP keeps files open (max open file descriptors reached)

2020-03-04 Thread Josef.Zahner1
Oh sorry, I missed one of the most important parts: we are using an 8-node 
cluster with NiFi 1.11.3 – so perfectly up to date.

Cheers Josef

From: Bryan Bende 
Reply to: "users@nifi.apache.org" 
Date: Wednesday, 4 March 2020 at 12:57
To: "users@nifi.apache.org" 
Subject: Re: FetchSFTP keeps files open (max open file descriptors reached)

Hello,

What version of nifi are you using?


On Wed, Mar 4, 2020 at 5:41 AM 
mailto:josef.zahn...@swisscom.com>> wrote:
Hi guys,

We have an issue with the FetchSFTP processor and the max open file 
descriptors. In short, it seems that FetchSFTP keeps the files open 
“forever” on our Synology NAS, so we always reach the NAS’s default max open 
files limit of 1024 when we try to fetch 500’000 small 1MB files (so in fact 
it’s not possible to read the files, as everything is blocked after 1024 
files).

We found no option to raise the max open files limit on the Synology NAS (but 
that’s not NiFi’s fault). We also have other Linux machines with CentOS, but 
the behavior there isn’t always exactly the same. Sometimes the file 
descriptors get closed and sometimes not.

Synology has no lsof command, but this is how I’ve checked it:
user@nas-01:~$ sudo ls -l /proc/<pid>/fd | wc -l
1024

Any comments how we can troubleshoot the issue?
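As a starting point for troubleshooting, the per-process descriptor limit being hit can be read (and, up to the hard limit, raised) from Python’s stdlib; a generic Unix sketch, not specific to Synology or NiFi:

```python
import resource

def fd_limits():
    """Return the (soft, hard) RLIMIT_NOFILE limits for this process."""
    return resource.getrlimit(resource.RLIMIT_NOFILE)

def raise_soft_limit():
    """Lift the soft open-files limit up to the hard limit (no root needed)."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard != resource.RLIM_INFINITY:
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)
```

This only changes the calling process; the NAS-side SFTP daemon would need its own limit raised, which is the part Synology apparently doesn’t expose.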

Cheers Josef

--
Sent from Gmail Mobile


smime.p7s
Description: S/MIME Cryptographic Signature


FetchSFTP keeps files open (max open file descriptors reached)

2020-03-04 Thread Josef.Zahner1
Hi guys,

We have an issue with the FetchSFTP processor and the max open file 
descriptors. In short, it seems that FetchSFTP keeps the files open 
“forever” on our Synology NAS, so we always reach the NAS’s default max open 
files limit of 1024 when we try to fetch 500’000 small 1MB files (so in fact 
it’s not possible to read the files, as everything is blocked after 1024 
files).

We found no option to raise the max open files limit on the Synology NAS (but 
that’s not NiFi’s fault). We also have other Linux machines with CentOS, but 
the behavior there isn’t always exactly the same. Sometimes the file 
descriptors get closed and sometimes not.

Synology has no lsof command, but this is how I’ve checked it:
user@nas-01:~$ sudo ls -l /proc/<pid>/fd | wc -l
1024

Any comments how we can troubleshoot the issue?

Cheers Josef



smime.p7s
Description: S/MIME Cryptographic Signature


Re: NiFi import processor group created with previous NiFi version does not work

2020-02-05 Thread Josef.Zahner1
Hi Valentina,

does your template/canvas contain disabled processors? I’ve already faced a 
bug which caused the same error as yours:
https://issues.apache.org/jira/projects/NIFI/issues/NIFI-6958

Cheers Josef


From: Valentina Ivanova 
Reply to: "users@nifi.apache.org" 
Date: Wednesday, 5 February 2020 at 11:10
To: "users@nifi.apache.org" 
Subject: NiFi import processor group created with previous NiFi version does 
not work

Hi everyone!

We are investigating the following issue.
I have a dev NiFi instance with version 1.10.0 where I created a process group 
template with approximately 15 processors. I can upload that template to the 
production NiFi with version 1.11.0 and then start version control and commit 
it to the nifi registry.
However when I delete that template from the canvas and try to import it from 
the registry I get the following error message - No Processor with ID  
belongs to this Process Group.
If I create the processor group template in the production NiFi 1.11.0 then I 
can delete it and import it back to the canvas without issues.
Do you have any idea why this happens? Are there any versioning issues in 
connection with the NiFi processors? Help is very appreciated.

Thanks & all the best

Valentina


smime.p7s
Description: S/MIME Cryptographic Signature


PutKudu Processor - Be careful with upgrading to NiFi 1.10.0

2019-11-27 Thread Josef.Zahner1
Just a short information for other people who are using the PutKudu processor 
and are planning to upgrade to NiFi 1.10.0. Before you start, check the JIRA 
tickets below, especially the memory leak… If we had known that before, we 
probably would not have upgraded.

PutKudu related Jira Tickets

  *   Memory Leak: https://jira.apache.org/jira/browse/NIFI-6908
  *   Millions of Logmessages: https://jira.apache.org/jira/browse/NIFI-6895
  *   Operation Type Parameter:  https://jira.apache.org/jira/browse/NIFI-6867

Cheers Josef


smime.p7s
Description: S/MIME Cryptographic Signature


Re: PutKudu 1.10.0 Processor generates tons of log warnings - org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed session; this is unsafe

2019-11-21 Thread Josef.Zahner1
Thank you Pierre, your workaround works like a charm! JIRA request has been 
created: https://issues.apache.org/jira/projects/NIFI/issues/NIFI-6895


From: Pierre Villard 
Reply to: "users@nifi.apache.org" 
Date: Thursday, 21 November 2019 at 10:52
To: "users@nifi.apache.org" 
Subject: Re: PutKudu 1.10.0 Processor generates tons of log warnings - 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe

Hi Joseph,

Yes please file a JIRA, we need to have a look even though it might be due to a 
version change in the Kudu dependency.

To workaround the issue, adding the below line into logback.xml should fix it:
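The XML itself was stripped by the mail archive; a plausible override in conf/logback.xml, assuming the goal is simply to raise the log level for that Kudu class, would be:

```xml
<!-- hypothetical reconstruction: suppress WARNs from the Kudu session class -->
<logger name="org.apache.kudu.client.AsyncKuduSession" level="ERROR"/>
```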


Hope this helps,
Pierre

Le jeu. 21 nov. 2019 à 10:33, 
mailto:josef.zahn...@swisscom.com>> a écrit :
Hi guys

We have just upgraded to NiFi 1.10.0 yesterday. We have seen that we got 1’000 
times more logs (nifi-app.log) since the upgrade, caused mainly by the PutKudu 
processor.

The log message we always get is:
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-2] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-40] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-2] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-40] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe
…

The PutKudu processor itself seems to work fine. We have the same amount of 
inserts as before the upgrade to NiFi 1.10.0.

The line of code from Apache Kudu which emits the message is here 
(line 547):
https://github.com/apache/kudu/blob/master/java/kudu-client/src/main/java/org/apache/kudu/client/AsyncKuduSession.java

Questions:

  1.  How can I suppress just this single type of warning in NiFi? Most likely 
in logback.xml – but is there any tutorial on how to do this? The messages 
generate multiple gigabytes of logs per hour on our nodes… we have to get rid 
of this kind of message as soon as possible.
  2.  Seems that something has changed in the PutKudu processor from NiFi 1.9.2 
to NiFi 1.10.0; we haven’t seen this message before. Shall I file a JIRA 
ticket for that? Or does anybody have an idea?

Thanks in advance,
Josef



smime.p7s
Description: S/MIME Cryptographic Signature


PutKudu 1.10.0 Processor generates tons of log warnings - org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed session; this is unsafe

2019-11-21 Thread Josef.Zahner1
Hi guys

We have just upgraded to NiFi 1.10.0 yesterday. We have seen that we got 1’000 
times more logs (nifi-app.log) since the upgrade, caused mainly by the PutKudu 
processor.

The log message we always get is:
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-2] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-40] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-2] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-40] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe
…

The PutKudu processor itself seems to work fine. We have the same amount of 
inserts as before the upgrade to NiFi 1.10.0.

The line of code from Apache Kudu which emits the message is here 
(line 547):
https://github.com/apache/kudu/blob/master/java/kudu-client/src/main/java/org/apache/kudu/client/AsyncKuduSession.java

Questions:

  1.  How can I suppress just this single type of warning in NiFi? Most likely 
in logback.xml – but is there any tutorial on how to do this? The messages 
generate multiple gigabytes of logs per hour on our nodes… we have to get rid 
of this kind of message as soon as possible.
  2.  Seems that something has changed in the PutKudu processor from NiFi 1.9.2 
to NiFi 1.10.0; we haven’t seen this message before. Shall I file a JIRA 
ticket for that? Or does anybody have an idea?

Thanks in advance,
Josef



smime.p7s
Description: S/MIME Cryptographic Signature


Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

2019-11-12 Thread Josef.Zahner1
Update: we found out that the described issue (fallback to “simple” mode) is 
related to Java 11 (and of course LDAP with START_TLS) only. The error message 
is gone with Java 1.8.0, so in our case we will use Java 1.8.0 for now. As 
already mentioned earlier, another option would be to use Java 11 but with 
LDAPS instead of START_TLS, but we decided against it.
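For anyone making the same switch, the change amounts to two properties in the ldap-provider block of conf/login-identity-providers.xml; the host below is a placeholder, not our actual server:

```xml
<!-- hypothetical host; port 636 is the conventional LDAPS port -->
<property name="Authentication Strategy">LDAPS</property>
<property name="Url">ldaps://ldap.example.com:636</property>
```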

I’ve updated https://issues.apache.org/jira/browse/NIFI-6860 with this 
information.

Hopefully one of the devs can fix this in future releases.

Cheers Josef

From: "Zahner Josef, GSB-LR-TRW-LI" 
Date: Monday, 11 November 2019 at 11:16
To: "users@nifi.apache.org" 
Subject: Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

And additionally, below is the output of the tcpdump captured on the NiFi node 
during startup of NiFi 1.10.0. We use the standard LDAP port (389). And you 
were right, I see in the dump that NiFi tries to authenticate with “simple” 
authentication with START_TLS…

[inline screenshot: tcpdump of the LDAP simple bind]


From: "Zahner Josef, GSB-LR-TRW-LI" 
Date: Monday, 11 November 2019 at 11:06
To: "users@nifi.apache.org" 
Subject: Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

Hi Andy,

I’ve just opened a JIRA bug report:
https://issues.apache.org/jira/projects/NIFI/issues/NIFI-6860

We changed nothing on the LDAP side. The whole setup still works on our 
production nodes with NiFi 1.9.2; we have multiple clusters and single NiFi 
instances running. As we use Ansible, I removed NiFi 1.10.0 from the test node 
again and reinstalled NiFi 1.9.2; it was working without any issues. And the 
only difference between the NiFi 1.9.2 and 1.10.0 deployments is the new 
config parameters.

As you can see in the bug report, I’ve now switched to LDAPS and this is 
working… Users are visible in the “Users” window and I can log in with an LDAP 
user. I just switched to LDAPS instead of START_TLS and added an “S” to the URL 
of the LDAP server.

Cheers Josef



From: Andy LoPresto 
Reply to: "users@nifi.apache.org" 
Date: Monday, 11 November 2019 at 10:46
To: "users@nifi.apache.org" 
Subject: Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

Hi Josef,

My inclination is that somehow the password NiFi is trying to send to the LDAP 
service is no longer sufficiently protected? The only other change I am aware 
of that could influence this is the Spring Security upgrade from 4.2.8 to 
4.2.13 (NiFi-6412) [1]; the new version of Spring Security might enforce a new 
restriction on how the password is sent that LDAP doesn’t like. The LDAP error 
code 13 refers to the password being sent in plaintext [2]. As you are using 
StartTLS, I am assuming the LDAP port you’re connecting to is still 389? Did 
anything change on the LDAP server? Can you verify a simple lookup using 
ldapsearch still works? If you get the same error code, you may need to add -Z 
to the command to initialize a secure TLS channel.

[1] https://issues.apache.org/jira/browse/NIFI-6412
[2] 
https://ldap.com/ldap-result-code-reference-core-ldapv3-result-codes/#rc-confidentialityRequired


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69




On Nov 11, 2019, at 4:59 PM, 
josef.zahn...@swisscom.com wrote:

Hi guys

We would like to upgrade from NiFi 1.9.2 to 1.10.0, and we have HTTPS with LDAP 
(START_TLS) authentication successfully enabled on 1.9.2. Now after upgrading, 
we have an issue which prevents NiFi from starting up:


2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
initialization failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 
'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
 Unsatisfied dependency expressed through method 
'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
org.springframework.beans.factory.BeanExpressionException: Expression parsing 
failed; nested exception is 
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': 
Unsatisfied dependency expressed through method 'setJwtAuthenticationProvider' 
parameter 0; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'jwtAuthenticationProvider' defined in class path resource 
[nifi-web-security-context.xml]: Cannot resolve reference to bean 'authorizer' 
while setting constructor argument; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'authorizer': FactoryBean threw exception on object creation; nested 
exception is org.springframework.ldap.AuthenticationNotSupportedException: 
[LDAP: error code 13 - confidentiality required]; nested exception is 
javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
confidentiality required]
 

Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

2019-11-11 Thread Josef.Zahner1
And additionally, below is the output of the tcpdump captured on the NiFi node 
during startup of NiFi 1.10.0. We use the standard LDAP port (389). And you 
were right, I see in the dump that NiFi tries to authenticate with “simple” 
authentication with START_TLS…

[inline screenshot: tcpdump of the LDAP simple bind]


From: "Zahner Josef, GSB-LR-TRW-LI" 
Date: Monday, 11 November 2019 at 11:06
To: "users@nifi.apache.org" 
Subject: Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

Hi Andy,

I’ve just opened a JIRA bug report:
https://issues.apache.org/jira/projects/NIFI/issues/NIFI-6860

We changed nothing on the LDAP side. The whole setup still works on our 
production nodes with NiFi 1.9.2; we have multiple clusters and single NiFi 
instances running. As we use Ansible, I removed NiFi 1.10.0 from the test node 
again and reinstalled NiFi 1.9.2; it was working without any issues. And the 
only difference between the NiFi 1.9.2 and 1.10.0 deployments is the new 
config parameters.

As you can see in the bug report, I’ve now switched to LDAPS and this is 
working… Users are visible in the “Users” window and I can log in with an LDAP 
user. I just switched to LDAPS instead of START_TLS and added an “S” to the URL 
of the LDAP server.

Cheers Josef



From: Andy LoPresto 
Reply to: "users@nifi.apache.org" 
Date: Monday, 11 November 2019 at 10:46
To: "users@nifi.apache.org" 
Subject: Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

Hi Josef,

My inclination is that somehow the password NiFi is trying to send to the LDAP 
service is no longer sufficiently protected? The only other change I am aware 
of that could influence this is the Spring Security upgrade from 4.2.8 to 
4.2.13 (NiFi-6412) [1]; the new version of Spring Security might enforce a new 
restriction on how the password is sent that LDAP doesn’t like. The LDAP error 
code 13 refers to the password being sent in plaintext [2]. As you are using 
StartTLS, I am assuming the LDAP port you’re connecting to is still 389? Did 
anything change on the LDAP server? Can you verify a simple lookup using 
ldapsearch still works? If you get the same error code, you may need to add -Z 
to the command to initialize a secure TLS channel.

[1] https://issues.apache.org/jira/browse/NIFI-6412
[2] 
https://ldap.com/ldap-result-code-reference-core-ldapv3-result-codes/#rc-confidentialityRequired


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69



On Nov 11, 2019, at 4:59 PM, 
josef.zahn...@swisscom.com wrote:

Hi guys

We would like to upgrade from NiFi 1.9.2 to 1.10.0, and we have HTTPS with LDAP 
(START_TLS) authentication successfully enabled on 1.9.2. Now after upgrading, 
we have an issue which prevents NiFi from starting up:


2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
initialization failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 
'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
 Unsatisfied dependency expressed through method 
'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
org.springframework.beans.factory.BeanExpressionException: Expression parsing 
failed; nested exception is 
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': 
Unsatisfied dependency expressed through method 'setJwtAuthenticationProvider' 
parameter 0; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'jwtAuthenticationProvider' defined in class path resource 
[nifi-web-security-context.xml]: Cannot resolve reference to bean 'authorizer' 
while setting constructor argument; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'authorizer': FactoryBean threw exception on object creation; nested 
exception is org.springframework.ldap.AuthenticationNotSupportedException: 
[LDAP: error code 13 - confidentiality required]; nested exception is 
javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
confidentiality required]
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
at 
org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
at 

Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

2019-11-11 Thread Josef.Zahner1
Hi Andy,

I’ve just opened a JIRA bug report:
https://issues.apache.org/jira/projects/NIFI/issues/NIFI-6860

We changed nothing on the LDAP side. The whole setup still works on our 
production nodes with NiFi 1.9.2; we have multiple clusters and single NiFi 
instances running. As we use Ansible, I removed NiFi 1.10.0 from the test node 
again and reinstalled NiFi 1.9.2; it was working without any issues. And the 
only difference between the NiFi 1.9.2 and 1.10.0 deployments is the new 
config parameters.

As you can see in the bug report, I’ve now switched to LDAPS and this is 
working… Users are visible in the “Users” window and I can log in with an LDAP 
user. I just switched to LDAPS instead of START_TLS and added an “S” to the URL 
of the LDAP server.

Cheers Josef



From: Andy LoPresto 
Reply to: "users@nifi.apache.org" 
Date: Monday, 11 November 2019 at 10:46
To: "users@nifi.apache.org" 
Subject: Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

Hi Josef,

My inclination is that somehow the password NiFi is trying to send to the LDAP 
service is no longer sufficiently protected? The only other change I am aware 
of that could influence this is the Spring Security upgrade from 4.2.8 to 
4.2.13 (NiFi-6412) [1]; the new version of Spring Security might enforce a new 
restriction on how the password is sent that LDAP doesn’t like. The LDAP error 
code 13 refers to the password being sent in plaintext [2]. As you are using 
StartTLS, I am assuming the LDAP port you’re connecting to is still 389? Did 
anything change on the LDAP server? Can you verify a simple lookup using 
ldapsearch still works? If you get the same error code, you may need to add -Z 
to the command to initialize a secure TLS channel.

[1] https://issues.apache.org/jira/browse/NIFI-6412
[2] 
https://ldap.com/ldap-result-code-reference-core-ldapv3-result-codes/#rc-confidentialityRequired


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69


On Nov 11, 2019, at 4:59 PM, 
josef.zahn...@swisscom.com wrote:

Hi guys

We would like to upgrade from NiFi 1.9.2 to 1.10.0, and we have HTTPS with LDAP 
(START_TLS) authentication successfully enabled on 1.9.2. Now after upgrading, 
we have an issue which prevents NiFi from starting up:


2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
initialization failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 
'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
 Unsatisfied dependency expressed through method 
'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
org.springframework.beans.factory.BeanExpressionException: Expression parsing 
failed; nested exception is 
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': 
Unsatisfied dependency expressed through method 'setJwtAuthenticationProvider' 
parameter 0; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'jwtAuthenticationProvider' defined in class path resource 
[nifi-web-security-context.xml]: Cannot resolve reference to bean 'authorizer' 
while setting constructor argument; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'authorizer': FactoryBean threw exception on object creation; nested 
exception is org.springframework.ldap.AuthenticationNotSupportedException: 
[LDAP: error code 13 - confidentiality required]; nested exception is 
javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
confidentiality required]
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
at 
org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
at 
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at 

NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

2019-11-10 Thread Josef.Zahner1
Hi guys

We would like to upgrade from NiFi 1.9.2 to 1.10.0, and we have HTTPS with LDAP 
(START_TLS) authentication successfully enabled on 1.9.2. Now after upgrading, 
we have an issue which prevents NiFi from starting up:


2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
initialization failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 
'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
 Unsatisfied dependency expressed through method 
'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
org.springframework.beans.factory.BeanExpressionException: Expression parsing 
failed; nested exception is 
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': 
Unsatisfied dependency expressed through method 'setJwtAuthenticationProvider' 
parameter 0; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'jwtAuthenticationProvider' defined in class path resource 
[nifi-web-security-context.xml]: Cannot resolve reference to bean 'authorizer' 
while setting constructor argument; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'authorizer': FactoryBean threw exception on object creation; nested 
exception is org.springframework.ldap.AuthenticationNotSupportedException: 
[LDAP: error code 13 - confidentiality required]; nested exception is 
javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
confidentiality required]
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
at 
org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
at 
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at 
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at 
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
at 
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
…

In authorizers.xml we added the line “false”, but besides that the 
authorizers.xml is the same. Does anybody have an idea what could cause the 
error?

NIFI-5839 seems to be related to the property above. Other than that I found no 
change regarding LDAP authentication…
https://issues.apache.org/jira/browse/NIFI-5839

Any help would be appreciated
Josef




smime.p7s
Description: S/MIME Cryptographic Signature


Re: NiFi WebGUI Timestamp issue

2019-10-30 Thread Josef.Zahner1
Yes of course… 4 servers.

From: "Wesley C. Dias de Oliveira" 
Reply to: "users@nifi.apache.org" 
Date: Wednesday, 30 October 2019 at 16:56
To: "users@nifi.apache.org" 
Subject: Re: NiFi WebGUI Timestamp issue

Are you using NTP servers?

Em qua, 30 de out de 2019 às 11:36, 
mailto:josef.zahn...@swisscom.com>> escreveu:
Hi Wesley

I didn’t change anything on the machine during the time when I restarted the 
cluster (multiple times), however NiFi showed different 
timestamps/localizations after each restart.

Cheers Josef

From: "Wesley C. Dias de Oliveira" 
mailto:wcdolive...@gmail.com>>
Reply to: "users@nifi.apache.org" 
mailto:users@nifi.apache.org>>
Date: Wednesday, 30 October 2019 at 14:41
To: "users@nifi.apache.org" 
mailto:users@nifi.apache.org>>
Subject: Re: NiFi WebGUI Timestamp issue

Hi, Josef.

It seems to be related to machine time settings.

Have you checked this?

Em qua, 30 de out de 2019 às 06:57, 
mailto:josef.zahn...@swisscom.com>> escreveu:
Hi Guys,

We just faced an issue with the time shown in the NiFi webgui (Screenshot 
below).

We are in Switzerland, so at the moment we have “CET” (UTC + 1h). After a 
restart of our 8-node NiFi 1.9.2 cluster, NiFi suddenly showed “CET” - but the 
timestamp was in fact UTC, so minus 1h. Then we decided to restart all NiFi 
nodes again, and now it’s even more confusing: it shows “UTC” but the time is 
CET?

[inline screenshot: NiFi status bar timestamp]


Can anybody explain where the timestamp and the localization letters are coming 
from? After the reboot it showed another localization and timestamp for a very 
short time period, but then it changed to what I mentioned in the screenshot 
above.

We have more clusters and single NiFi instances, and all the other machines 
show the correct timestamp and location… all have the same config.

Cheers Josef


--
Grato,
Wesley C. Dias de Oliveira.

Linux User nº 576838.


--
Grato,
Wesley C. Dias de Oliveira.

Linux User nº 576838.


smime.p7s
Description: S/MIME Cryptographic Signature


Re: NiFi WebGUI Timestamp issue

2019-10-30 Thread Josef.Zahner1
Hi Wesley

I didn’t change anything on the machine during the time when I restarted the 
cluster (multiple times), however NiFi showed different 
timestamps/localizations after each restart.

Cheers Josef

From: "Wesley C. Dias de Oliveira" 
Reply to: "users@nifi.apache.org" 
Date: Wednesday, 30 October 2019 at 14:41
To: "users@nifi.apache.org" 
Subject: Re: NiFi WebGUI Timestamp issue

Hi, Josef.

It seems to be related to machine time settings.

Have you checked this?

Em qua, 30 de out de 2019 às 06:57, 
mailto:josef.zahn...@swisscom.com>> escreveu:
Hi Guys,

We just faced an issue with the time shown in the NiFi webgui (Screenshot 
below).

We are in Switzerland, so at the moment we have “CET” (UTC + 1h). After a 
restart of our 8-node NiFi 1.9.2 cluster, NiFi suddenly showed “CET” - but the 
timestamp was in fact UTC, so minus 1h. Then we decided to restart all NiFi 
nodes again, and now it’s even more confusing: it shows “UTC” but the time is 
CET?

[inline screenshot: NiFi status bar timestamp]


Can anybody explain where the timestamp and the localization letters are coming 
from? After the reboot it showed another localization and timestamp for a very 
short time period, but then it changed to what I mentioned in the screenshot 
above.

We have more clusters and single NiFi instances, and all the other machines 
show the correct timestamp and location… all have the same config.

Cheers Josef


--
Grato,
Wesley C. Dias de Oliveira.

Linux User nº 576838.




NiFi WebGUI Timestamp issue

2019-10-30 Thread Josef.Zahner1
Hi Guys,

We just faced an issue with the time shown in the NiFi webgui (Screenshot 
below).

We are in Switzerland, so at the moment we have “CET” (UTC + 1h). After a 
restart of our 8-node NiFi 1.9.2 cluster, NiFi suddenly showed “CET” - but the 
timestamp was in fact UTC, so minus 1h. Then we decided to restart all NiFi 
nodes again and now it’s even more confusing, it shows “UTC” but the time is 
from CET?



Can anybody explain where the timestamp and the localization letters are coming 
from? After the reboot for a very short time period it showed another 
localization and timestamp, but then it changed to what I mentioned in the 
screenshot above.

We have more clusters and single nifi’s and all the other machines show the 
correct timestamp and location… all have the same config.

Cheers Josef
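
For anyone who lands here with the same symptom: if the flip comes from the JVM resolving a different default time zone at startup (an assumption; the thread does not confirm the root cause), pinning the zone explicitly in conf/bootstrap.conf sidesteps the ambiguity. The argument index just has to be one that is not already used:

```properties
# conf/bootstrap.conf - pin the JVM default time zone so every node and every
# restart agrees on what the web UI displays (index 20 chosen arbitrarily)
java.arg.20=-Duser.timezone=Europe/Zurich
```

After a rolling restart of all nodes, every instance should render timestamps in the configured zone regardless of the OS-level settings it happened to pick up.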




Re: JoltTransformRecord Processor Issue

2019-08-29 Thread Josef.Zahner1
Sorry there was a copy/paste mistake for the spec, in fact I tried with the 
following one:

[
  {
"operation": "shift",
"spec": {
  "cgnat_public_ipv4": "cgnat_public_src_ipv4"
}
  }
]

Spec seems to be valid as the processor runs fine.
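
What this corrected spec does can be illustrated with a few lines of plain Python (a sketch of the flat, single-key case only, not the real Jolt engine; the field values are made up):

```python
# Minimal sketch of a flat Jolt "shift": keys named in the spec are renamed
# to their target, and keys not matched by the spec are dropped from the output.
def jolt_shift(record, spec):
    return {out_key: record[in_key]
            for in_key, out_key in spec.items()
            if in_key in record}

record = {"cgnat_public_ipv4": "198.51.100.7", "unmatched_field": 1}
spec = {"cgnat_public_ipv4": "cgnat_public_src_ipv4"}
print(jolt_shift(record, spec))  # {'cgnat_public_src_ipv4': '198.51.100.7'}
```
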


From: Jean-Sebastien Vachon 
Reply to: "users@nifi.apache.org" 
Date: Thursday, 29 August 2019 at 14:02
To: "users@nifi.apache.org" 
Subject: Re: JoltTransformRecord Processor Issue

Hi,

it might not be your problem at all but you seem to have a funny double-quote 
in your Jolt transformation.



From: josef.zahn...@swisscom.com
Sent: Thursday, August 29, 2019 7:57 AM
To: users@nifi.apache.org
Subject: JoltTransformRecord Processor Issue


I’m working for the first time with the NiFi Jolt Processors 
(JoltTransformRecord & JoltTransformJson) to transform my JSON file. I’ve 
experimented with a pretty simple spec. My input is an avro with just one 
record, hence I have to convert the flow for the non-record processor first. 
However with the JoltTransformRecord Processor I’m always getting an error 
message (mentioned far below) and the processor loops with the input flow as 
long as the outgoing queue isn’t full. The JoltTransformJSON processor just 
works fine with exactly the same settings and spec. The files in the success 
queue of the JoltTransformRecord queue are missing the outer brackets “[]”, …



The Jolt Transformation DSL dropdown is for both processors set for “Chain”.



[

  {

"operation": "shift",

"spec": {

  "cgnat_public_ipv4": “cgnat_public_src_ipv4",}

  }

]










Error Message from JoltTransformRecord



2019-08-29 13:38:24,055 ERROR [Timer-Driven Process Thread-2] o.a.n.p.jolt.record.JoltTransformRecord JoltTransformRecord[id=c56e1fb0-3437-143d-c75a-575bfa19cd34] Unable to transform StandardFlowFileRecord[uuid=a86e1e36-b38f-497e-b1b8-42b1fea9c830,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1567078679515-46394, container=default, section=314], offset=956319, length=526],offset=0,name=95465ca6-7997-4ed2-8e75-97bce1a0610e,size=526] due to java.lang.IllegalStateException: StandardFlowFileRecord[uuid=a86e1e36-b38f-497e-b1b8-42b1fea9c830,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1567078679515-46394, container=default, section=314], offset=956319, length=526],offset=0,name=95465ca6-7997-4ed2-8e75-97bce1a0610e,size=526] already in use for an active callback or an InputStream created by ProcessSession.read(FlowFile) has not been closed: java.lang.IllegalStateException: StandardFlowFileRecord[uuid=a86e1e36-b38f-497e-b1b8-42b1fea9c830,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1567078679515-46394, container=default, section=314], offset=956319, length=526],offset=0,name=95465ca6-7997-4ed2-8e75-97bce1a0610e,size=526] already in use for an active callback or an InputStream created by ProcessSession.read(FlowFile) has not been closed
java.lang.IllegalStateException: StandardFlowFileRecord[uuid=a86e1e36-b38f-497e-b1b8-42b1fea9c830,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1567078679515-46394, container=default, section=314], offset=956319, length=526],offset=0,name=95465ca6-7997-4ed2-8e75-97bce1a0610e,size=526] already in use for an active callback or an InputStream created by ProcessSession.read(FlowFile) has not been closed
        at org.apache.nifi.controller.repository.StandardProcessSession.validateRecordState(StandardProcessSession.java:3126)
        at org.apache.nifi.controller.repository.StandardProcessSession.validateRecordState(StandardProcessSession.java:3121)
        at org.apache.nifi.controller.repository.StandardProcessSession.transfer(StandardProcessSession.java:1887)
        at org.apache.nifi.processors.jolt.record.JoltTransformRecord.onTrigger(JoltTransformRecord.java:371)
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1162)
        at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:209)
        at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
        at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
        at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown

Re: "Deadlock" data provenance after few days

2019-08-20 Thread Josef.Zahner1
Hi guys,

I’m jumping in here; we see the same as Giulia, just in another timeframe. We 
are using the default provenance settings (nothing changed except the path) 
with 4 GB of space on the provenance partition. Are there any suggestions on 
what to change? Of course, as I’m checking it now, we should raise the two 
values below to better fit the 4 GB partition size.

nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.index.shard.size=500 MB

What else?

[user@abc ~]$ du -h /provenance_repo/
144K    /provenance_repo/toc
273M    /provenance_repo/index-1564098506268
400M    /provenance_repo/
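
For reference, a production-leaning variant of those two settings, sized for a 4 GB partition, could look like this (illustrative values only, not an official recommendation; leave headroom so the Lucene index directories and the event files fit together):

```properties
# nifi.properties - provenance sized for a dedicated 4 GB partition
nifi.provenance.repository.max.storage.size=3 GB
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.index.shard.size=1 GB
```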

Thanks in advance,
Josef


From: Mark Payne 
Reply to: "users@nifi.apache.org" 
Date: Monday, 19 August 2019 at 15:49
To: "users@nifi.apache.org" 
Subject: Re: "Deadlock" data provenance after few days

Giulia,

Looking at the thread dump, it appears that there are no problems with 
deadlocking or anything like that in Provenance. As Joe mentioned, the settings 
may need to be adjusted a bit. Can you share what you have in nifi.properties 
in the Provenance section? Also, it would be helpful to provide the results of 
running the following command on your provenance repo directory:

du -h provenance_repository/

The default settings that are used for provenance are intended to avoid using a 
huge amount of disk space. However, they are not ideal for production. The max 
storage size is set to 1 GB with the shard size set to 500 MB. As a result, 
depending on timing, it may be possible that the provenance index directories 
occupy all - or nearly all - of the 1 GB storage size allocated and as a 
result, all of the event data is aged off. If this happens, you'll see 
everything functioning properly except that provenance events won't be shown.

Thanks
-Mark


On Aug 19, 2019, at 7:59 AM, Joe Witt 
mailto:joe.w...@gmail.com>> wrote:

Just to clarify: when you say locked, do you mean that you just cannot get any 
provenance results but the flow still functions?  If so I think your prov 
settings need to be adjusted.  I'll review the thread output when not on a phone.

Thanks

On Mon, Aug 19, 2019 at 4:24 AM Giulia Scalaberni 
mailto:giulia.scalabe...@genomedics.it>> wrote:
Hi all,
As requested by Joe Witt, here I attach the dump log after the lock.
My Nifi after nearly 30 days of scheduled activity has locked as usual…

BG,
Giulia

Da: Joe Witt [mailto:joe.w...@gmail.com]
Inviato: giovedì 1 agosto 2019 15:31
A: users@nifi.apache.org
Oggetto: Re: "Deadlock" data provenance after few days

Giulia,

When you're experiencing this condition can you capture a thread dump and share 
the logs (bootstrap/app)?  To create this you can run 
/bin/nifi.sh dump threaddump-locked-prov.log

Thanks

On Thu, Aug 1, 2019 at 3:57 AM Giulia Scalaberni 
mailto:giulia.scalabe...@genomedics.it>> wrote:
Hi,
Both on my 1.9.2 and 1.9.0 standalone installations data provenance works 
correctly for nearly a month (scheduled once a day); then on the server all the 
data provenance in all individual processors and in the general reporting stays 
empty, even when I send new data through all the processors manually.
When opening a data provenance menu I see a very short popup menu appearing, 
but it closes too fast for me to see what is going on.
On disk I still have more than 100GB available.
The problem goes away when I restart the Nifi instance, and pops up again after 
nearly a month.


Does anyone have the same behavior? Has anyone found a solution?

Thank you in advance.

Have a nice day,
Giulia





Re: DistributeLoad across a NiFi cluster

2019-07-09 Thread Josef.Zahner1
The feature requires NiFi > 1.8.x… Pierre describes it very well in his blog : 
https://pierrevillard.com/2018/10/29/nifi-1-8-revolutionizing-the-list-fetch-pattern-and-more/


From: James McMahon 
Reply-To: "users@nifi.apache.org" 
Date: Tuesday, 9 July 2019 at 14:46
To: "users@nifi.apache.org" 
Subject: Re: DistributeLoad across a NiFi cluster

Andrew, when I right click on the connection between the two I do not see a 
cluster distribution strategy in the queue connection. I am running 1.7.1.g. Am 
I overlooking something?

On Tue, Jul 2, 2019 at 12:29 PM Andrew Grande 
mailto:apere...@gmail.com>> wrote:
Jim,

There's a better solution in NiFi. Right click on the connection between 
ListFile and FetchFile and select a cluster distribution strategy in options. 
That's it :)

Andrew

On Tue, Jul 2, 2019, 7:37 AM James McMahon 
mailto:jsmcmah...@gmail.com>> wrote:
We would like to employ a DistributeLoad processor, restricted to run on the 
primary node of our cluster. Is there a recommended approach employed to 
efficiently distribute across nodes in the cluster?

As I understand it, and using a FetchFile running in "all nodes" as the first 
processor following the DistributeLoad, I can have it distribute by round 
robin, next available, or load distribution service.  Can anyone provide a link 
to an example that employs the load distribution service? Is that the 
recommended distribution approach when running in clustered mode?

I am interested in maintaining load balance across my cluster nodes when 
running at high flowfile volumes. Flow files will vary greatly in contents, so 
I'd like to design with an approach that helps me balance processing 
distribution.

Thanks very much in advance. -Jim




Re: Connect NIFI with Impala by DBCPConnectionPool

2019-06-07 Thread Josef.Zahner1
Hi Carlos

I’m facing exactly the same issue and already posted in the Cloudera forum. 
However, no useful response yet ☹:
https://community.cloudera.com/t5/Interactive-Short-cycle-SQL/JDBC-Driver-Upgrade-to-2-6-9-fails-on-Apache-NiFi/td-p/89754

I asked as well in the impala user group 
(http://mail-archives.apache.org/mod_mbox/impala-user/201904.mbox/%3c1611621216.720975.1556517582...@ss002890.tauri.ch%3E)
 but they told me that it is a driver from cloudera…

My workaround for the moment is to use an older jdbc driver (2.6.4).

If anybody from nifi side has an idea how to resolve the log4j jar hell, I’m 
open for ideas.

Cheers Josef
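
One way to see exactly what is colliding, assuming the clash really comes from SLF4J/log4j classes bundled inside the driver jar (an assumption based on the error message, not verified here), is to list the logging-related entries the jar ships:

```python
import zipfile

# List SLF4J/log4j-related class entries bundled inside a jar, to see which
# logging bindings a JDBC driver carries along on the classpath.
def find_logging_classes(jar_path):
    with zipfile.ZipFile(jar_path) as jar:
        return sorted(name for name in jar.namelist()
                      if "slf4j" in name.lower() or "log4j" in name.lower())

# Example (assumes the driver jar sits in the current directory):
# print(find_logging_classes("ImpalaJDBC41.jar"))
```

Comparing the output between driver versions 2.6.4 and 2.6.9 would show whether the newer driver started bundling a conflicting binding.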



From: "Carlos Manuel Fernandes (DSI)" 
Reply-To: "users@nifi.apache.org" 
Date: Friday, 7 June 2019 at 16:30
To: "users@nifi.apache.org" 
Subject: Connect NIFI with Impala by DBCPConnectionPool

Hello,

I’m trying to connect NiFi (version 1.9.2) to Impala (3.2.0-cdh6.2.0) with 
DBCPConnectionPool, using the JDBC driver ImpalaJDBC41.jar and Kerberos 
authentication.

I configured a KeytabCredentials controller service pointing to the keytab, and 
in nifi.properties I set nifi.kerberos.krb5.file to krb5.ini. I made a test 
with a standalone Java program using this krb5.ini and keytab as system 
properties, and it worked well.


Apparently there is a mismatch of the log4j classes from NiFi and the Impala 
driver: “Detected both log4j-over-slf4j.jar AND bound slf4j-log4j12.jar on the 
class path”

Any ideas ?

Carlos


Full Exception: 2019-06-07 15:12:26,897 ERROR [Timer-Driven Process Thread-30] o.a.nifi.processors.standard.ExecuteSQL ExecuteSQL[id=10231224-1c5a-1337-ab8d-df3308c759c2] ExecuteSQL[id=10231224-1c5a-1337-ab8d-df3308c759c2] failed to process session due to java.lang.ExceptionInInitializerError; Processor Administratively Yielded for 1 sec: java.lang.ExceptionInInitializerError
java.lang.ExceptionInInitializerError: null
        at com.cloudera.impala.jdbc41.internal.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:72)
        at com.cloudera.impala.jdbc41.internal.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:45)
        at com.cloudera.impala.jdbc41.internal.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
        at com.cloudera.impala.jdbc41.internal.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
        at com.cloudera.impala.jdbc41.internal.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
        at com.cloudera.impala.jdbc41.internal.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
        at com.cloudera.impala.jdbc41.internal.apache.thrift.transport.TIOStreamTransport.(TIOStreamTransport.java:38)
        at com.cloudera.impala.hivecommon.api.HiveServer2ClientFactory.createTransport(Unknown Source)
        at com.cloudera.impala.hivecommon.api.ServiceDiscoveryFactory.createClient(Unknown Source)
        at com.cloudera.impala.hivecommon.core.HiveJDBCCommonConnection.establishConnection(Unknown Source)
        at com.cloudera.impala.impala.core.ImpalaJDBCConnection.establishConnection(Unknown Source)
        at com.cloudera.impala.jdbc.core.LoginTimeoutConnection.connect(Unknown Source)
        at com.cloudera.impala.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
        at com.cloudera.impala.jdbc.common.AbstractDriver.connect(Unknown Source)
        at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:53)
        at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:291)
        at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2395)
        at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2381)
        at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2110)
        at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
        at org.apache.nifi.dbcp.DBCPConnectionPool.lambda$getConnection$2(DBCPConnectionPool.java:467)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.nifi.security.krb.AbstractKerberosUser.doAs(AbstractKerberosUser.java:143)
        at org.apache.nifi.security.krb.KerberosAction.execute(KerberosAction.java:68)
        at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:468)
        at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 

Re: Nifi Cluster with external zookeeper

2019-05-14 Thread Josef.Zahner1
Hi Matt

nifi.web.http.host has to be the hostname of your machine and not the cluster 
name! If the name is wrong you will run into serious problems.


Additional properties for external zookeeper below. That’s just what came to my 
mind, hope it’s everything. Just to be sure, your NiFi Cluster was working with 
the embedded zookeeper and you are now trying to migrate to an external 
zookeeper right?

vim nifi.properties
nifi.state.management.embedded.zookeeper.start=false
# Properties file that provides the ZooKeeper properties to use if 
nifi.state.management.embedded.zookeeper.start is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=zookeeper-01:2181, zookeeper-02:2181, 
zookeeper-03:2181
nifi.zookeeper.connect.timeout=10 secs
nifi.zookeeper.session.timeout=10 secs
nifi.zookeeper.root.node=/nifi/cluster01

vim state-management.xml

<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">zookeeper-01:2181, zookeeper-02:2181, zookeeper-03:2181</property>
    <property name="Root Node">/nifi/cluster01</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>


Cheers Josef

From: Matt Penna 
Reply-To: "users@nifi.apache.org" 
Date: Monday, 13 May 2019 at 18:58
To: "users@nifi.apache.org" 
Subject: Nifi Cluster with external zookeeper

I’m trying to create a NiFi cluster using an external ZooKeeper quorum. I am 
currently getting the following error: “Cannot replicate request to Node 
localhost:8080 because the node is not connected”. I have set 
nifi.web.http.host to be the same as the cluster name. I have also verified 
that there are no firewall issues causing network problems, and I am not seeing 
any errors in the logs. I do see that there are issues with voting on the 
correct workflow: the voting process runs for the configured 5 minutes and then 
stops, and that is when this message appears. I don’t see where it is getting 
localhost from. Any suggestions on things I can try, or any configurations I 
may have missed?





Re: NiFi PutKudu Improvement

2019-02-06 Thread Josef.Zahner1
Perfect, thank you Bryan.

Cheers Josef

On 06.02.19, 15:43, "Bryan Bende"  wrote:

Hi Josef,

Thanks for the heads up. Submitting a patch or pull request and
letting everyone know is generally the correct approach.

To get it into a release will require a committer with Kudu experience
to have time to review the change.

Thanks,

Bryan

On Wed, Feb 6, 2019 at 9:27 AM  wrote:
>
> Hi
>
>
>
> A colleague just created a jira ticket to add an additional parameter to 
the NiFi PutKudu processor. Any suggestions how to get this code change into 
one of the future NiFi releases?
>
>
>
> Jira Ticket:
>
> https://issues.apache.org/jira/browse/NIFI-5989
>
>
>
> Cheers Josef






NiFi PutKudu Improvement

2019-02-06 Thread Josef.Zahner1
Hi

A colleague just created a jira ticket to add an additional parameter to the 
NiFi PutKudu processor. Any suggestions how to get this code change into one of 
the future NiFi releases?

Jira Ticket:
https://issues.apache.org/jira/browse/NIFI-5989

Cheers Josef




Re: Minimum file age

2019-01-31 Thread Josef.Zahner1
Hi Tom

In the wait processor you can define how long you wanna delay the fetch and 
that value needs to be the longest expected writing time. Of course this works 
only if you know what the max. writing time is. In my case this is clear as in 
max. every hour at least one new file gets created. Sorry for not giving you a 
better reply.

Cheers Josef

From: Tomislav Novosel 
Reply-To: "users@nifi.apache.org" 
Date: Thursday, 31 January 2019 at 12:29
To: "users@nifi.apache.org" 
Subject: Re: Minimum file age

Hi all,

@Josef, what do you mean with the Wait and DetectDuplicate processors? How do 
you delay fetching by the time of conversion?
How can the Wait processor know that the file is converted completely? If the 
file is listed again and the DetectDuplicate processor caches the identifier, 
the Wait processor will pass the flowfile downstream. What about the case where 
the file is so big that it gets listed three or four times?

Regards,
Tom

On Mon, 28 Jan 2019 at 11:09, 
mailto:josef.zahn...@swisscom.com>> wrote:
Hi Tom

I suggest using a Wait processor (to delay the fetch) together with a 
DetectDuplicate processor. That way you will fetch the file only once, after it 
has been written completely (as long as you know the maximum time it takes to 
finish writing). I know it’s not nice but that’s how we do it for the 
moment… I’m waiting for this feature as well :-(.

Cheers Josef


From: Arpad Boda mailto:ab...@hortonworks.com>>
Reply-To: "users@nifi.apache.org" 
mailto:users@nifi.apache.org>>
Date: Monday, 28 January 2019 at 10:17
To: "users@nifi.apache.org" 
mailto:users@nifi.apache.org>>
Subject: Re: Minimum file age

Hi,

It’s on the way: https://issues.apache.org/jira/browse/NIFI-5977 :)

Regards,
Arpad

From: Tomislav Novosel mailto:to.novo...@gmail.com>>
Reply-To: "users@nifi.apache.org" 
mailto:users@nifi.apache.org>>
Date: Monday, 28 January 2019 at 09:19
To: "users@nifi.apache.org" 
mailto:users@nifi.apache.org>>
Subject: Minimum file age

Hi all,

I'm having an issue with the ListSFTP processor in NiFi. When reading files 
from a folder where another process writes files, it lists the same file 
multiple times and ingests it multiple times, because the file's modification 
date changes rapidly while the other process is writing to it.

It appears that NiFi lists faster than the external process writes, so before 
writing ends (the conversion of the file from one format to another), NiFi 
lists the file multiple times and creates duplicates.

There is no Minimum File Age property like in the ListFile processor.

How can I make it wait until the file is completely converted, and only then 
list it and pass it to the FetchSFTP processor?

Thanks in advance,
Tom.
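
For context, the Minimum File Age guard requested in NIFI-5977 boils down to a simple last-modified filter; a sketch in Python (illustrative only, not NiFi code; file names and timestamps are made up):

```python
import time

# Keep only files whose last modification lies at least min_age_secs in the
# past, i.e. files the writing process has (probably) finished with.
def filter_by_min_age(listing, min_age_secs, now=None):
    now = time.time() if now is None else now
    return [name for name, mtime in listing if now - mtime >= min_age_secs]

listing = [("done.csv", 1000.0), ("still-writing.csv", 1290.0)]
print(filter_by_min_age(listing, min_age_secs=60, now=1300.0))  # ['done.csv']
```

A recently-touched file simply drops out of the listing until the writer has been quiet for the configured age, which is exactly why the property prevents duplicate ingests of growing files.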




Re: Minimum file age

2019-01-28 Thread Josef.Zahner1
Hi Tom

I suggest using a Wait processor (to delay the fetch) together with a 
DetectDuplicate processor. That way you will fetch the file only once, after it 
has been written completely (as long as you know the maximum time it takes to 
finish writing). I know it’s not nice but that’s how we do it for the 
moment… I’m waiting for this feature as well :-(.

Cheers Josef


From: Arpad Boda 
Reply-To: "users@nifi.apache.org" 
Date: Monday, 28 January 2019 at 10:17
To: "users@nifi.apache.org" 
Subject: Re: Minimum file age

Hi,

It’s on the way: https://issues.apache.org/jira/browse/NIFI-5977 :)

Regards,
Arpad

From: Tomislav Novosel 
Reply-To: "users@nifi.apache.org" 
Date: Monday, 28 January 2019 at 09:19
To: "users@nifi.apache.org" 
Subject: Minimum file age

Hi all,

I'm having an issue with the ListSFTP processor in NiFi. When reading files 
from a folder where another process writes files, it lists the same file 
multiple times and ingests it multiple times, because the file's modification 
date changes rapidly while the other process is writing to it.

It appears that NiFi lists faster than the external process writes, so before 
writing ends (the conversion of the file from one format to another), NiFi 
lists the file multiple times and creates duplicates.

There is no Minimum File Age property like in the ListFile processor.

How can I make it wait until the file is completely converted, and only then 
list it and pass it to the FetchSFTP processor?

Thanks in advance,
Tom.




Re: ListSFTP Question

2019-01-26 Thread Josef.Zahner1
Hi Joe

Done! https://issues.apache.org/jira/browse/NIFI-5977

Cheers Josef


From: Joe Witt 
Reply-To: "users@nifi.apache.org" 
Date: Thursday, 24 January 2019 at 17:18
To: "users@nifi.apache.org" 
Subject: Re: ListSFTP Question

hey josef.  yeah we need to add a min file age property to ListSftp.  please 
file a jira.

thanks

On Thu, Jan 24, 2019, 11:13 AM 
mailto:josef.zahn...@swisscom.com> wrote:
Hi guys

We need your advice… We use the ListSFTP processor to read files in a remote 
folder. The files get written like this:


  *   File1
  *   File2
  *   File3 (at the time of “ls” command this file is growing, we don’t know 
how big it gets or when writing is finished)

So the application in that remote folder writes the files one by one; always 
one file grows until a certain size (not predictable). Our issue is that NiFi 
reads the same file multiple times as it grows and the modification timestamp 
changes. But as we don't know when writing is finished, we don't know how to 
drop all "incomplete" ListSFTP flowfiles of "File3" in the example.

Any help with this, in theory, pretty simple case? The best solution would be 
to tell ListSFTP to ignore files whose last-modified timestamp is newer than 
about x minutes. But there is no such option. We would need some kind of 
ListSFTP delay so that we don't read files that are too new, but we have no 
idea how to do that.

Greetings Josef




Re: RedisConnectionPoolService - Incorrectly Attempts to Connect to localhost

2019-01-26 Thread Josef.Zahner1
FYI

I’ve just created a bugreport – I don’t think this is expected behavior.
https://issues.apache.org/jira/browse/NIFI-5976

I’ve also described a workaround in there.

Cheers Josef


From: "Zahner Josef, GSB-LR-TRW-LI" 
Date: Saturday, 26 January 2019 at 13:07
To: "users@nifi.apache.org" 
Subject: Re: RedisConnectionPoolService - Incorrectly Attempts to Connect to 
localhost

Hi Jim

Did you solve the issue? I have exactly the same behavior… I wanted to try 
Redis but ended up with this error.

Cheers Josef


On 2018/11/29 21:54:25, "Williams, Jim" 
mailto:j...@alertlogic.com>> wrote:
> Hello,
>
> I'm trying to set up the RedisConnectionPoolService/RedisDistributedMapCacheClientService.
>
> Some basic observations:
>
> *  This is a standalone Nifi 1.8.0 server
> *  SELinux is disabled on the server
> *  There are no iptables rules configured for blocking on the server
> *  I am able to resolve the hostname of the Redis server to an IP address on the Nifi server
> *  I can connect to the Redis server to the Nifi server using telnet
>
> The stack trace I see when the services are started is:
> 2018-11-29 21:16:03,527 WARN [Timer-Driven Process Thread-8] o.a.n.controller.tasks.ConnectableTask Administratively Yielding PutDistributedMapCache[id=0167105c-4a54-1adf-cb8d-1b45de7f0c99] due to uncaught Exception: org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
> org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
>         at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:281)
>         at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:464)
>         at org.apache.nifi.redis.service.RedisConnectionPoolService.getConnection(RedisConnectionPoolService.java:89)
>         at sun.reflect.GeneratedMethodAccessor580.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
>         at com.sun.proxy.$Proxy98.getConnection(Unknown Source)
>         at org.apache.nifi.redis.service.RedisDistributedMapCacheClientService.withConnection(RedisDistributedMapCacheClientService.java:343)
>         at org.apache.nifi.redis.service.RedisDistributedMapCacheClientService.put(RedisDistributedMapCacheClientService.java:189)
>         at sun.reflect.GeneratedMethodAccessor579.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
>         at com.sun.proxy.$Proxy96.put(Unknown Source)
>         at org.apache.nifi.processors.standard.PutDistributedMapCache.onTrigger(PutDistributedMapCache.java:202)
>         at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>         at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>         at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>         at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
>         at redis.clients.util.Pool.getResource(Pool.java:53)

Re: RedisConnectionPoolService - Incorrectly Attempts to Connect to localhost

2019-01-26 Thread Josef.Zahner1
Hi Jim

Did you solve the issue? I have exactly the same behavior… I wanted to try 
Redis but ended up with this error.

Cheers Josef


On 2018/11/29 21:54:25, "Williams, Jim" 
mailto:j...@alertlogic.com>> wrote:
> Hello,
>
> I'm trying to set up the RedisConnectionPoolService/RedisDistributedMapCacheClientService.
>
> Some basic observations:
>
> *  This is a standalone Nifi 1.8.0 server
> *  SELinux is disabled on the server
> *  There are no iptables rules configured for blocking on the server
> *  I am able to resolve the hostname of the Redis server to an IP address on the Nifi server
> *  I can connect to the Redis server to the Nifi server using telnet
>
> The stack trace I see when the services are started is:
> 2018-11-29 21:16:03,527 WARN [Timer-Driven Process Thread-8] o.a.n.controller.tasks.ConnectableTask Administratively Yielding PutDistributedMapCache[id=0167105c-4a54-1adf-cb8d-1b45de7f0c99] due to uncaught Exception: org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
> org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
>         at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:281)
>         at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:464)
>         at org.apache.nifi.redis.service.RedisConnectionPoolService.getConnection(RedisConnectionPoolService.java:89)
>         at sun.reflect.GeneratedMethodAccessor580.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
>         at com.sun.proxy.$Proxy98.getConnection(Unknown Source)
>         at org.apache.nifi.redis.service.RedisDistributedMapCacheClientService.withConnection(RedisDistributedMapCacheClientService.java:343)
>         at org.apache.nifi.redis.service.RedisDistributedMapCacheClientService.put(RedisDistributedMapCacheClientService.java:189)
>         at sun.reflect.GeneratedMethodAccessor579.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
>         at com.sun.proxy.$Proxy96.put(Unknown Source)
>         at org.apache.nifi.processors.standard.PutDistributedMapCache.onTrigger(PutDistributedMapCache.java:202)
>         at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>         at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>         at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>
> at>
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(Timer>
> DrivenSchedulingAgent.java:117)>
>
> at>
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)>
>
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)>
>
> at>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$>
> 301(ScheduledThreadPoolExecutor.java:180)>
>
> at>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Sch>
> eduledThreadPoolExecutor.java:294)>
>
> at>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:11>
> 49)>
>
> at>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:6>
> 24)>
>
> at java.lang.Thread.run(Thread.java:748)>
>
> Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Could>
> not get a resource from the pool>
>
> at redis.clients.util.Pool.getResource(Pool.java:53)>
>
> at redis.clients.jedis.JedisPool.getResource(JedisPool.java:226)>
>
> at redis.clients.jedis.JedisPool.getResource(JedisPool.java:16)>
>
> at>
> org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetch>
> JedisConnector(JedisConnectionFactory.java:271)>
>
> ... 26 common frames omitted>
>
> Caused by: 

ListSFTP Question

2019-01-24 Thread Josef.Zahner1
Hi guys

We need your advice… we use the ListSFTP processor to read files in a remote 
folder. The files get written like this:


  *   File1
  *   File2
  *   File3 (at the time of the “ls” command this file is still growing; we don’t 
know how big it will get or when writing will finish)

The application in that remote folder writes the files one by one; a single file 
grows until it reaches some (unpredictable) size. Our issue is that NiFi reads 
the same file multiple times as it grows and its last-modified timestamp 
changes. But since we don’t know when writing is finished, we don’t know how to 
drop all the “incomplete” ListSFTP flowfiles for “File3” in the example.

Any help with this, in theory pretty simple, case? The best solution would be to 
tell ListSFTP to ignore files whose last-modified timestamp is newer than about 
x minutes. But there is no such option. We would somehow need a ListSFTP delay 
so that files that are too new are not read, but we have no idea how to do 
that.
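The desired age-based filter can be sketched like this (illustrative Python over a ListSFTP-style listing of name/last-modified pairs; newer NiFi releases added a “Minimum File Age” property to the list processors, which is worth checking for your version):

```python
import time

def stable_files(listing, min_age_seconds, now=None):
    """Keep only entries whose last-modified time is at least min_age_seconds old.

    `listing` is an iterable of (name, mtime_epoch_seconds) pairs, as a
    remote directory listing might provide.
    """
    now = time.time() if now is None else now
    return [name for name, mtime in listing
            if now - mtime >= min_age_seconds]

# Files still being written (recent mtime) are skipped until they stop changing
# for at least min_age_seconds.
```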

Greetings Josef


smime.p7s
Description: S/MIME Cryptographic Signature


Re: process group name reverts back to initial value if I do a nifi registry "Change version"

2019-01-13 Thread Josef.Zahner1
Thanks Chad for confirming it.

@Bryan Bende: how shall we continue here? I understand that it isn't a high 
prio issue, however it would be great to get it fixed or at least know that it 
will be fixed in one of the future releases...

Cheers Josef


On 08.01.19, 20:34, "Chad Woodhead"  wrote:

Bryan/Josef,

Just wanted to let you know that I just tested this with NiFi 1.8.0 and 
NiFi Registry 0.3.0 and I experience the same behavior as Josef.

-Chad

> On Jan 8, 2019, at 12:58 PM, Bryan Bende  wrote:
> 
> I will keep trying, but I haven't been able to reproduce using NiFi
> 1.8.0 and registry 0.2.0.
> 
> I must be missing something.
> 
> On Tue, Jan 8, 2019 at 11:50 AM  wrote:
>> 
>> I've tried it now on another secured cluster exactly like you did:
>> 
>>1) Create PG "A" and save to registry
>>2) Import PG "A" from registry and rename to "B"
>>3) Add new processor (Execute Script) to original PG "A" and save 
to registry
>>4) Change version on PG "B" to new version
>> 
>> Problem still there... after changing to new version on "B" the name 
changed back to "A"
>> 
>> 
>> 
>> On 08.01.19, 17:40, "Zahner Josef, GSB-LR-TRW-LI" 
 wrote:
>> 
>>Hi Bryan
>> 
>>In my case it happens all the time, doesn't matter what kind of 
change. On my first test below I've changed a variable on a processor inside 
the PG and the second time (a few seconds ago) I've added a connection to my 
"Execute Script" processor. All the time my second PG with the new name changed 
back to the initial name... Even if I just click "Change version" and select 
another one than the current, my second PG changes the name back to the initial 
value.
>> 
>>Btw. we use NiFi Registry 0.2.0.
>> 
>>Cheers Josef
>> 
>> 
>> 
>> 
>> 
>>On 08.01.19, 17:23, "Bryan Bende"  wrote:
>> 
>>Hi Josef,
>> 
>>That sounds like a possible bug. I think the PG name is supposed 
to
>>remain unchanged.
>> 
>>I wasn't able to reproduce this though... in step 5 when you 
change
>>the "abc" group, what type of change are you making?
>> 
>>I did the following...
>> 
>>1) Create PG "A" and save to registry
>>2) Import PG "A" from registry and rename to "B"
>>3) Add new processor to original PG "A" and save to registry
>>4) Change version on PG "B" to new version
>> 
>>PG "B" is still named "B" at this point.
>> 
>>On Tue, Jan 8, 2019 at 10:26 AM  
wrote:
>>> 
>>> Hi guys
>>> 
>>> 
>>> 
>>> I’ve faced again an (at least for me) unexpected behavior of NiFi 1.8.0 
together with NiFi Registry.
>>> 
>>> 
>>> 
>>> Following use case:
>>> 
>>> 
>>> 
>>> Create a process group with name “abc” and add some processors to the pg
>>> Commit the pg to the NiFi registry
>>> Create a new pg and import the pg from step 1 from the registry
>>> Change the name of the new pg to “def” instead of “abc” – so far so 
good, no change from registry point of view
>>> Change the original pg “abc” from step 1 and commit the change to the 
registry
>>> Now we have change to the newest version for the pg “def” from step 4, 
as it isn’t anymore up to date – but now in my case as soon as I’m changing the 
version, the pg name gets changed back to “abc”. This happens all the time if I 
change the version on a pg which has another name than on the commit
>>> 
>>> 
>>> 
>>> Any comments on this? We use the NiFi registry as well as templating 
infrastructure, means we have several times the same pg but with different 
variables and names on the same NiFi canvas. But with the actual behavior this 
is very inconvenient… we have to memorize the name before we do the “Change 
version” and then after execution we have to set it again.
>>> 
>>> 
>>> 
>>> Cheers Josef
>> 
>> 
>> 
>> 







Re: process group name reverts back to initial value if I do a nifi registry "Change version"

2019-01-08 Thread Josef.Zahner1
I've tried it now on another secured cluster exactly like you did:

1) Create PG "A" and save to registry
2) Import PG "A" from registry and rename to "B"
3) Add new processor (Execute Script) to original PG "A" and save to 
registry
4) Change version on PG "B" to new version

Problem still there... after changing to new version on "B" the name changed 
back to "A"



On 08.01.19, 17:40, "Zahner Josef, GSB-LR-TRW-LI"  
wrote:

Hi Bryan

In my case it happens all the time, doesn't matter what kind of change. On 
my first test below I've changed a variable on a processor inside the PG and 
the second time (a few seconds ago) I've added a connection to my "Execute 
Script" processor. All the time my second PG with the new name changed back to 
the initial name... Even if I just click "Change version" and select another 
one than the current, my second PG changes the name back to the initial value.

Btw. we use NiFi Registry 0.2.0.

Cheers Josef





On 08.01.19, 17:23, "Bryan Bende"  wrote:

Hi Josef,

That sounds like a possible bug. I think the PG name is supposed to
remain unchanged.

I wasn't able to reproduce this though... in step 5 when you change
the "abc" group, what type of change are you making?

I did the following...

1) Create PG "A" and save to registry
2) Import PG "A" from registry and rename to "B"
3) Add new processor to original PG "A" and save to registry
4) Change version on PG "B" to new version

PG "B" is still named "B" at this point.

On Tue, Jan 8, 2019 at 10:26 AM  wrote:
>
> Hi guys
>
>
>
> I’ve faced again an (at least for me) unexpected behavior of NiFi 
1.8.0 together with NiFi Registry.
>
>
>
> Following use case:
>
>
>
> Create a process group with name “abc” and add some processors to the 
pg
> Commit the pg to the NiFi registry
> Create a new pg and import the pg from step 1 from the registry
> Change the name of the new pg to “def” instead of “abc” – so far so 
good, no change from registry point of view
> Change the original pg “abc” from step 1 and commit the change to the 
registry
> Now we have change to the newest version for the pg “def” from step 
4, as it isn’t anymore up to date – but now in my case as soon as I’m changing 
the version, the pg name gets changed back to “abc”. This happens all the time 
if I change the version on a pg which has another name than on the commit
>
>
>
> Any comments on this? We use the NiFi registry as well as templating 
infrastructure, means we have several times the same pg but with different 
variables and names on the same NiFi canvas. But with the actual behavior this 
is very inconvenient… we have to memorize the name before we do the “Change 
version” and then after execution we have to set it again.
>
>
>
> Cheers Josef








Re: process group name reverts back to initial value if I do a nifi registry "Change version"

2019-01-08 Thread Josef.Zahner1
Hi Bryan

In my case it happens all the time, doesn't matter what kind of change. On my 
first test below I've changed a variable on a processor inside the PG and the 
second time (a few seconds ago) I've added a connection to my "Execute Script" 
processor. All the time my second PG with the new name changed back to the 
initial name... Even if I just click "Change version" and select another one 
than the current, my second PG changes the name back to the initial value.

Btw. we use NiFi Registry 0.2.0.

Cheers Josef





On 08.01.19, 17:23, "Bryan Bende"  wrote:

Hi Josef,

That sounds like a possible bug. I think the PG name is supposed to
remain unchanged.

I wasn't able to reproduce this though... in step 5 when you change
the "abc" group, what type of change are you making?

I did the following...

1) Create PG "A" and save to registry
2) Import PG "A" from registry and rename to "B"
3) Add new processor to original PG "A" and save to registry
4) Change version on PG "B" to new version

PG "B" is still named "B" at this point.

On Tue, Jan 8, 2019 at 10:26 AM  wrote:
>
> Hi guys
>
>
>
> I’ve faced again an (at least for me) unexpected behavior of NiFi 1.8.0 
together with NiFi Registry.
>
>
>
> Following use case:
>
>
>
> Create a process group with name “abc” and add some processors to the pg
> Commit the pg to the NiFi registry
> Create a new pg and import the pg from step 1 from the registry
> Change the name of the new pg to “def” instead of “abc” – so far so good, 
no change from registry point of view
> Change the original pg “abc” from step 1 and commit the change to the 
registry
> Now we have change to the newest version for the pg “def” from step 4, as 
it isn’t anymore up to date – but now in my case as soon as I’m changing the 
version, the pg name gets changed back to “abc”. This happens all the time if I 
change the version on a pg which has another name than on the commit
>
>
>
> Any comments on this? We use the NiFi registry as well as templating 
infrastructure, means we have several times the same pg but with different 
variables and names on the same NiFi canvas. But with the actual behavior this 
is very inconvenient… we have to memorize the name before we do the “Change 
version” and then after execution we have to set it again.
>
>
>
> Cheers Josef






process group name reverts back to initial value if I do a nifi registry "Change version"

2019-01-08 Thread Josef.Zahner1
Hi guys

I’ve again faced (at least for me) unexpected behavior of NiFi 1.8.0 
together with NiFi Registry.

Following use case:


  1.  Create a process group with name “abc” and add some processors to the pg
  2.  Commit the pg to the NiFi registry
  3.  Create a new pg and import the pg from step 1 from the registry
  4.  Change the name of the new pg to “def” instead of “abc” – so far so good, 
no change from registry point of view
  5.  Change the original pg “abc” from step 1 and commit the change to the 
registry
  6.  Now we have to change to the newest version for the pg “def” from step 4, as 
it isn’t up to date anymore – but in my case, as soon as I change the version, 
the pg name gets changed back to “abc”. This happens every time I change the 
version on a pg whose name differs from the one in the commit

Any comments on this? We use the NiFi Registry as a templating infrastructure as 
well, meaning we have the same pg several times, with different variables and 
names, on the same NiFi canvas. But with the current behavior this is very 
inconvenient… we have to memorize the name before we do the “Change version” 
and set it again after execution.

Cheers Josef




Re: NiFi (De-)"Compress Content" Processor causes to fill up content_repo insanly fast by corrupt GZIP files

2019-01-04 Thread Josef.Zahner1
Mark,

Yes, we are using the Load Balancing capability, and we do that after the ListSFTP 
processor, so yes, we load-balance 0-byte files. It seems we are probably 
hitting the bug you mentioned here.

Thanks a lot for explaining in detail what happens regarding the 
flowfile/content repo in NiFi.

Additionally, we have several custom processors; could one of them be causing it 
as well? Can someone share a (Java) code snippet which ensures that a custom 
processor doesn’t keep flowfiles in the content repo?

Cheers Josef

From: Mark Payne 
Reply-To: "users@nifi.apache.org" 
Date: Friday, 4 January 2019 at 14:48
To: "users@nifi.apache.org" 
Subject: Re: NiFi (De-)"Compress Content" Processor causes to fill up 
content_repo insanly fast by corrupt GZIP files

Josef,

Thanks for the info! There are a few things to consider here. Firstly, you said 
that you are using NiFi 1.8.0.
Are you using the new Load Balancing capability? I.e., do you have any 
Connections configured to balance
load across your cluster? And if so, are you load-balancing any 0-byte files? 
If so, then you may be getting
bitten by [1]. That can result in data staying in the Content Repo and not 
getting cleaned up until restart.

The second thing that is important to consider is the interaction between the 
FlowFile Repositories and Content
Repository. At a high level, the Content Repository stores the FlowFiles' 
content/payload. The FlowFile Repository
stores the FlowFiles' attributes, which queue it is in, and some other 
metadata. Once a FlowFile completes its processing
and is no longer part of the flow, we cannot simply delete the content claim 
from the Content Repository. If we did so,
we could have a condition where the node is restarted and the FlowFile 
Repository has not yet been fully flushed to disk
(NiFi may have already written to the file, but the Operating System may be 
caching that without having flushed/"fsync'ed"
to disk). In such a case, we want the transaction to be "rolled back" and 
reprocessed. So, if we deleted the Content Claim
from the Content Repository immediately when it is no longer needed, and then 
restarted, we could have a case where the
FlowFile repo wasn't flushed to disk and as a result points to a Content Claim 
that has been deleted, and this would result
in data loss.

So, to avoid the above scenario, what we do is instead keep track of how many 
"claims" there are for a Content Claim
and then, when the FlowFile repo performs a checkpoint (every 2 minutes by 
default), we go through and delete any Content
Claims that have a claim count of 0. So this means that any Content Claim that 
has been accessed in the past 2 minutes
(or however long the checkpoint time is) will be considered "active" and will 
not be cleaned up.

I hope this helps to explain some of the behavior, but if not, let's please 
investigate further!

Thanks
-Mark



[1] https://issues.apache.org/jira/browse/NIFI-5771



On Jan 4, 2019, at 7:41 AM, josef.zahn...@swisscom.com wrote:

Hi Joe

We use NiFi 1.8.0. Yes we have different partitions for each repo. You see the 
partitions below.

[nifi@nifi-12 ~]$ df -h
Filesystem Size  Used Avail Use% Mounted on
/dev/mapper/disk1-root 100G  2.0G   99G   2% /
devtmpfs   126G 0  126G   0% /dev
tmpfs  126G 0  126G   0% /dev/shm
tmpfs  126G  3.1G  123G   3% /run
tmpfs  126G 0  126G   0% /sys/fs/cgroup
/dev/sda1 1014M  188M  827M  19% /boot
/dev/mapper/disk1-home  30G   34M   30G   1% /home
/dev/mapper/disk1-var  100G  1.1G   99G   2% /var
/dev/mapper/disk1-opt   50G  5.9G   45G  12% /opt
/dev/mapper/disk1-database_repo   1014M   35M  980M   4% /database_repo
/dev/mapper/disk1-provenance_repo  4.0G   33M  4.0G   1% /provenance_repo
/dev/mapper/disk1-flowfile_repo530G   34M  530G   1% /flowfile_repo
/dev/mapper/disk2-content_repo 850G   64G  786G   8% /content_repo
tmpfs   26G 0   26G   0% /run/user/2000


Cheers Josef


From: Joe Witt mailto:joe.w...@gmail.com>>
Reply-To: "users@nifi.apache.org" 
mailto:users@nifi.apache.org>>
Date: Friday, 4 January 2019 at 13:29
To: "users@nifi.apache.org" 
mailto:users@nifi.apache.org>>
Subject: Re: NiFi (De-)"Compress Content" Processor causes to fill up 
content_repo insanly fast by corrupt GZIP files

Josef

Not looping for that proc for sure makes sense.  NiFi dying in the middle of a 
process/transaction is no problem… it will restart the transaction.

But we do need to find out what is filling the repo.  You have flowfile, 
content, and prov in different disk volumes or partitins right?  What version 
of nifi?

Let's definitely figure this out.  You should see clean behavior of the repos 
and you should never have 

Re: A question about [MergeContent] processor

2019-01-04 Thread Josef.Zahner1
Hi Jianan
I’m just saying that as soon as “Minimum Number of Entries” is reached the bin can 
be flushed out; and if the minimum number isn’t reached, I would expect 
“Max Bin Age” to take effect. Have you tried that?
Cheers Josef


From: Jianan Zhang 
Reply-To: "users@nifi.apache.org" 
Date: Friday, 4 January 2019 at 12:46
To: "users@nifi.apache.org" 
Subject: Re: A question about [MergeContent] processor

Hi Josef,

Thanks for the reply. In my opinion, “Minimum Number of Entries” should not and 
cannot be stronger than “Max Bin Age”. Suppose only ONE flowfile from the data 
source is put into the MergeContent processor and I set "Minimum Number of 
Entries" = 2; then this ONE flowfile will never come out of NiFi, even after 
it reaches the bin's deadline. This easily leads to a deadlock.

And I don't know how to use “Merge Strategy: Defragment” to merge the 
flowfiles from Kafka, since I can't predict how fast the producer produces 
messages.

Jianan Zhang

josef.zahn...@swisscom.com wrote on Friday, 4 January 2019 at 6:43 PM:
Hi Jianan

As you have “Minimum Number of Entries: 1” it is normal that you can see merges 
with only one flowfile. In my opinion the “Minimum Number of Entries” is 
stronger than the “Max Bin Age” (first is written bold and second not). 
Additionally it is called “Max Bin Age” and not “Bin Age”. So as soon as you 
reach at least 1 flowfile it could be pushed out. However, in my opinion the 
documentation for “Max Bin Age” is too unspecific (when does it really take 
effect?); only the developers know exactly the behavior behind it. Would be
great to get more information here…

Just my 2 cents. Whenever possible try to use “Merge Strategy: Defragment” 
instead of the current one, but this is working only if it is predictable how 
many flowfiles you would like to merge. With this strategy the max bin age 
makes full sense and works as expected.

Cheers Josef


From: Jianan Zhang 
mailto:william.jn.zh...@gmail.com>>
Reply-To: "users@nifi.apache.org" 
mailto:users@nifi.apache.org>>
Date: Friday, 4 January 2019 at 11:16
To: "users@nifi.apache.org" 
mailto:users@nifi.apache.org>>
Subject: A question about [MergeContent] processor

Hi all,
I have a job consisting of the following steps: first consume data from Kafka, 
then pack the data into one file every 5 minutes, and finally put the packed 
file into HDFS.
I use the [MergeContent] processor to accomplish the “packing” step. The 
properties I configured for MergeContent are listed below:

--
Merge Strategy: Bin-Packing Algorithm
Merge Format: Binary Concatenation
Attribute Strategy: Keep Only Common Attributes
Correlation Attribute Name: No value set
Metadata Strategy: Do Not Merge Uncommon Metadata
Minimum Number of Entries: 1
Maximum Number of Entries: 9
Minimum Group Size: 255 MB
Maximum Group Size: No value set
Max Bin Age: 5 minutes
Maximum number of Bins: 1
--

I found the behavior of the MergeContent processor very hard to control. 
There are several workflows running on NiFi with the same MergeContent 
configuration; some workflows pack the data into one file every 5 minutes 
correctly, but some others can’t. It has even happened that a 
MergeContent processor generated one flowfile per record.

I am wondering if I am misunderstanding the mechanism of the MergeContent 
processor.

A NiFi newbie here, please help me.

Thanks!




Re: A question about [MergeContent] processor

2019-01-04 Thread Josef.Zahner1
Hi Jianan

As you have “Minimum Number of Entries: 1” it is normal that you can see merges 
with only one flowfile. In my opinion the “Minimum Number of Entries” is 
stronger than the “Max Bin Age” (first is written bold and second not). 
Additionally it is called “Max Bin Age” and not “Bin Age”. So as soon as you 
reach at least 1 flowfile it could be pushed out. However, in my opinion the 
documentation for “Max Bin Age” is too unspecific (when does it really take 
effect?); only the developers know exactly the behavior behind it. Would be 
great to get more information here…

Just my 2 cents. Whenever possible try to use “Merge Strategy: Defragment” 
instead of the current one, but this is working only if it is predictable how 
many flowfiles you would like to merge. With this strategy the max bin age 
makes full sense and works as expected.

Cheers Josef


From: Jianan Zhang 
Reply-To: "users@nifi.apache.org" 
Date: Friday, 4 January 2019 at 11:16
To: "users@nifi.apache.org" 
Subject: A question about [MergeContent] processor

Hi all,
I have a job consisting of the following steps: first consume data from Kafka, 
then pack the data into one file every 5 minutes, and finally put the packed 
file into HDFS.
I use the [MergeContent] processor to accomplish the “packing” step. The 
properties I configured for MergeContent are listed below:

--
Merge Strategy: Bin-Packing Algorithm
Merge Format: Binary Concatenation
Attribute Strategy: Keep Only Common Attributes
Correlation Attribute Name: No value set
Metadata Strategy: Do Not Merge Uncommon Metadata
Minimum Number of Entries: 1
Maximum Number of Entries: 9
Minimum Group Size: 255 MB
Maximum Group Size: No value set
Max Bin Age: 5 minutes
Maximum number of Bins: 1
--

I found the behavior of the MergeContent processor very hard to control. 
There are several workflows running on NiFi with the same MergeContent 
configuration; some workflows pack the data into one file every 5 minutes 
correctly, but some others can’t. It has even happened that a 
MergeContent processor generated one flowfile per record.

I am wondering if I am misunderstanding the mechanism of the MergeContent 
processor.

A NiFi newbie here, please help me.

Thanks!




Re: Cron-schedule not working

2019-01-02 Thread Josef.Zahner1
Hi Asanka

For a CRON which runs once a day at 7 AM I use this pattern “0 0 7 * * ?”. So I 
don’t see any reason why your CRON doesn’t work. Can you try with an example 
like 1 AM instead of midnight and see what’s happening?
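Quartz cron (which NiFi uses) has six mandatory fields starting with seconds, plus an optional year field — unlike the five-field Linux crontab. A small sketch that splits an expression into named fields:

```python
QUARTZ_FIELDS = ("seconds", "minutes", "hours",
                 "day-of-month", "month", "day-of-week", "year")

def quartz_fields(expr: str) -> dict:
    """Map a Quartz cron expression to its named fields (year is optional)."""
    parts = expr.split()
    if len(parts) not in (6, 7):
        raise ValueError("Quartz cron needs 6 or 7 fields, got %d" % len(parts))
    return dict(zip(QUARTZ_FIELDS, parts))

# "0 0 7 * * ?" -> second 0, minute 0, hour 7, every day: runs daily at 07:00:00
```

A five-field Linux expression like "0 0 * * *" is rejected outright, which is one quick way to spot the wrong cron dialect.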

Cheers Josef


From: Asanka Sanjaya 
Reply-To: "users@nifi.apache.org" 
Date: Wednesday, 2 January 2019 at 07:01
To: "users@nifi.apache.org" 
Subject: Re: Cron-schedule not working

Hi All,

Does anybody else have any idea regarding this?

On Tue, Jan 1, 2019 at 7:26 PM Chad Woodhead 
mailto:chadwoodh...@gmail.com>> wrote:
Hi Asanka,

Yes you are correct, my mistake. NiFi uses the Quartz Cron and not the Linux 
cron.

Every day at midnight is: 0 0 0 * * ?


My apologies.

-Chad

On Jan 1, 2019, at 1:55 AM, Asanka Sanjaya 
mailto:angal...@gmail.com>> wrote:
Hi Chad,

The nifi docs say that the expression should start with seconds and not with 
minutes.  https://nifi.apache.org/docs/nifi-docs/html/user-guide.html


This is the example that they have given:
The string 0 0 13 * * ? indicates that you want to schedule the processor to 
run at 1:00 PM every day.


It seems like nifi cron expressions are quite different.


On Tue, Jan 1, 2019 at 11:58 AM Asanka Sanjaya 
mailto:angal...@gmail.com>> wrote:
Thanks Chad!

On Mon, Dec 31, 2018 at 6:22 PM Chad Woodhead 
mailto:chadwoodh...@gmail.com>> wrote:
Hi Asanka,

The cron expression for every night at midnight is: 0 0 * * *

0 0 0 * * is not a valid cron expression as the 3rd field (day of month) cannot 
be a 0.

Here is an online cron editor that can help build your cron expressions: 
https://crontab.guru/

Also here is simple cron tester to see the next n iterations of your cron 
expression: http://cron.schlitt.info/

-Chad


On Dec 31, 2018, at 2:04 AM, Asanka Sanjaya 
mailto:angal...@gmail.com>> wrote:

Hello,

I use a cron-driven processor to start the workflow and the expression is:

0 0 0 * * ?

I expect the processor to run each day at 00:00 hours. But it runs each hour 
instead.

Processor configuration:



Flow:



What could be the possible reasons for this?


--
Thanks,
Asanka Sanjaya Herath
Senior Software Engineer | Zone24x7



--
Thanks,

Asanka Sanjaya Herath

Senior Software Engineer | Zone24x7


--
Thanks,

Asanka Sanjaya Herath

Senior Software Engineer | Zone24x7


--
Thanks,

Asanka Sanjaya Herath

Senior Software Engineer | Zone24x7




Re: flowfiles stuck in load balanced queue; nifi 1.8

2018-12-18 Thread Josef.Zahner1
Hi Dano

It seems the problem has been seen by a few people, but until now nobody from 
the NiFi team has really looked into it – except Mark Payne. He mentioned the 
diagnostics approach below; however, in my case this doesn’t even work (tried 
on a standalone unsecured cluster as well as on a secured cluster)! Can you get 
the diagnostics on your cluster?

I guess at the end we have to open a Jira ticket to narrow it down.

Cheers Josef


One thing that I would recommend, to get more information, is to go to the REST 
endpoint (in your browser is fine)
/nifi-api/processors//diagnostics

Where  is the UUID of either the source or the destination of the 
Connection in question. This gives us
a lot of information about the internals of Connection. The easiest way to get 
that Processor ID is to just click on the
processor on the canvas and look at the Operate palette on the left-hand side. 
You can copy & paste from there. If you
then send the diagnostics information to us, we can analyze that to help 
understand what's happening.



From: dan young 
Reply-To: "users@nifi.apache.org" 
Date: Wednesday, 19 December 2018 at 05:28
To: NiFi Mailing List 
Subject: flowfiles stuck in load balanced queue; nifi 1.8

We're seeing this more frequently where flowfiles seem to be stuck in a load 
balanced queue.  The only resolution is to disconnect the node and then restart 
that node.  After this, the flowfile disappears from the queue.  Any ideas on 
what might be going on here or what additional information I might be able to 
provide to debug this?

I've attached another thread dump and some screen shots


Regards,

Dano





Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

2018-11-19 Thread Josef.Zahner1
@Mark: I’m now able to query the REST API. However, the call 
nifi-api/processors//diagnostics replies on all nodes with “An 
unexpected error has occurred. Please check the logs for additional details.” 
If I remove /diagnostics, I get a useful reply in JSON format. I tried it on 
several NiFi nodes, even on an unsecured one 
(http://unsec.company.net:8080/nifi-api/processors/eeba4842-0166-1000-9f56-e59ad010d040/diagnostics), 
and all show the same behavior with the error.

Example:
curl 
'https://abc.company.net:8443/nifi-api/processors/cea533b0-edb9-1d11-955f-7380c973ac62/diagnostics'
 -H 'Authorization: Bearer $TOKEN' --compressed –insecure
An unexpected error has occurred. Please check the logs for additional details.
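On an LDAP-secured cluster, a bearer token is typically obtained first (e.g. via a POST to /nifi-api/access/token with username and password) and then passed in the Authorization header, as in the curl call above. A small illustrative sketch of assembling such a request with the Python standard library — host, processor id, and token below are placeholders:

```python
import urllib.request

def diagnostics_request(base_url: str, processor_id: str, token: str):
    """Build (but do not send) an authenticated request for a processor's
    diagnostics endpoint."""
    url = f"{base_url}/nifi-api/processors/{processor_id}/diagnostics"
    return urllib.request.Request(url,
                                  headers={"Authorization": f"Bearer {token}"})

# req = diagnostics_request("https://abc.company.net:8443",
#                           "cea533b0-edb9-1d11-955f-7380c973ac62", token)
# urllib.request.urlopen(req, timeout=30) would then perform the call.
```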

The loadbalancing problem still exists on our cluster…

Cheers Josef

From: "Zahner Josef, GSB-LR-TRW-LI" 
Date: Friday, 16 November 2018 at 22:32
To: "users@nifi.apache.org" 
Subject: Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

Looks pretty much the same in our case, just that we have a 8 node cluster (all 
nodes connected).


  *   Flowfile expiration: 0
  *   Back Pressure Object Threshold: 100
  *   Size Threshold: 1 GB
  *   Load Balance Strategy: Round Robin
  *   Load Balance Compression: Do not compress


We have on both sides a processor. Just tried to access the nifi-api for the 
destination, but we have a secured (ldap) cluster and it returns “Unknown user 
with identity 'anonymous'. Contact the system administrator.” Need to find out 
first how to access it…

Cheers Josef

From: dan young 
Reply-To: "users@nifi.apache.org" 
Date: Friday, 16 November 2018 at 17:35
To: NiFi Mailing List 
Subject: Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

Here's what we're running:

- Load Balance: Round Robin
- Compression: None
- Nodes: 3
- Are all nodes connected? : Yes
- Backpressure configuration:  Default, not changed.

Note, in our case there weren't any other flowfiles in any queues; this one was 
just stuck.  The only resolution for us was a rolling restart of each node in the 
cluster.  Since then, I have configured the FlowFile Expiration to 3600 
seconds and we haven't seen any stuck flowfiles; we're continuing to monitor.

Regards,

Dano

On Fri, Nov 16, 2018 at 7:14 AM Mark Payne wrote:
Hey Josef,

So a few questions to help figure out what's going on here:

- What is the Load Balance Strategy in use? (I.e., Round Robin, Partition by 
Attribute, etc.)
- What compression is being used (None, Compress Attributes Only, Compress 
Contents and Attributes)
- How many nodes in the cluster?
- Are all nodes connected?
- What is the backpressure configured to?

Is the source or destination of the Connection a Processor? If so, we can get a 
lot of info by going to

http://localhost:8080/nifi-api/processors//diagnostics

Where http://localhost:8080/ would need to be changed to wherever your nifi 
instance is running and
The trailing path segment before /diagnostics is the UUID of the Processor that 
is either the source or destination of the connection.
You can get the UUID of the Processor by clicking on it in the UI and then 
looking at the 'Operate' palette
on the left-hand side of the screen. If you can get the result of going to the 
URL, it will likely be helpful,
as it shows a lot of details about the processor, as well as all of its 
incoming and outgoing connections.

Thanks
-Mark
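Mark's URL pattern can be sketched as a small helper; the base URL and UUID below are placeholders taken from elsewhere in the thread:

```python
# Hedged sketch of the diagnostics endpoint layout Mark describes:
# /nifi-api/processors/<uuid>/diagnostics
def diagnostics_url(base_url: str, processor_id: str) -> str:
    # Normalize the base URL so a trailing slash does not double up.
    return f"{base_url.rstrip('/')}/nifi-api/processors/{processor_id}/diagnostics"

print(diagnostics_url("http://localhost:8080",
                      "cea533b0-edb9-1d11-955f-7380c973ac62"))
# → http://localhost:8080/nifi-api/processors/cea533b0-edb9-1d11-955f-7380c973ac62/diagnostics
```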





On Nov 16, 2018, at 3:05 AM, josef.zahn...@swisscom.com wrote:

Hi Mark

We see the issue again, even after a freshly started cluster (where we started 
everything at the same time). The files are stuck for multiple seconds/minutes in 
the queue, and the light blue load-balancing icon on the right side shows that it 
is actually load balancing the whole time (even if it is just 1 or 2 files). The 
log (with default log levels) shows no WARN or ERROR entries…

Thanks in advance, Josef


From: Mark Payne 
Reply-To: "users@nifi.apache.org" 
Date: Monday, 12 November 2018 at 17:19
To: "users@nifi.apache.org" 
Subject: Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

Hey Dan,

Have you looked through the logs to see if there are any WARN or ERROR messages 
indicating what's going on?

Thanks
-Mark


On Nov 12, 2018, at 9:06 AM, dan young wrote:

Hello,

We have two processor groups connected via the new Load Balancing/Round Robin 
queue.  It seems that a flowfile is "stuck" in this queue.  I've been watching 
it for some time now.  Is there any way to troubleshoot what is stuck in the 
queue and why? Or maybe remove it?  I've tried to stop the PG and empty the 
queue, but it always says emptied 0 out of 1 flowfiles...

Regards,

Dano







smime.p7s
Description: S/MIME Cryptographic Signature


Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

2018-11-16 Thread Josef.Zahner1
Looks pretty much the same in our case, just that we have a 8 node cluster (all 
nodes connected).


  *   Flowfile expiration: 0
  *   Back Pressure Object Threshold: 100
  *   Size Threshold: 1 GB
  *   Load Balance Strategy: Round Robin
  *   Load Balance Compression: Do not compress


We have a processor on both sides. I just tried to access the nifi-api for the 
destination, but we have a secured (LDAP) cluster and it returns “Unknown user 
with identity 'anonymous'. Contact the system administrator.” I first need to 
find out how to access it…

Cheers Josef

From: dan young 
Reply-To: "users@nifi.apache.org" 
Date: Friday, 16 November 2018 at 17:35
To: NiFi Mailing List 
Subject: Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

Here's what we're running:

- Load Balance: Round Robin
- Compression: None
- Nodes: 3
- Are all nodes connected? : Yes
- Backpressure configuration:  Default, not changed.

Note, in our case there weren't any other flowfiles in any queues; this one was 
just stuck.  The only resolution for us was a rolling restart of each node in the 
cluster.  Since then, I have configured the FlowFile Expiration to 3600 
seconds and we haven't seen any stuck flowfiles; we're continuing to monitor.

Regards,

Dano

On Fri, Nov 16, 2018 at 7:14 AM Mark Payne wrote:
Hey Josef,

So a few questions to help figure out what's going on here:

- What is the Load Balance Strategy in use? (I.e., Round Robin, Partition by 
Attribute, etc.)
- What compression is being used (None, Compress Attributes Only, Compress 
Contents and Attributes)
- How many nodes in the cluster?
- Are all nodes connected?
- What is the backpressure configured to?

Is the source or destination of the Connection a Processor? If so, we can get a 
lot of info by going to

http://localhost:8080/nifi-api/processors//diagnostics

Where http://localhost:8080/ would need to be changed to wherever your nifi 
instance is running and
The trailing path segment before /diagnostics is the UUID of the Processor that 
is either the source or destination of the connection.
You can get the UUID of the Processor by clicking on it in the UI and then 
looking at the 'Operate' palette
on the left-hand side of the screen. If you can get the result of going to the 
URL, it will likely be helpful,
as it shows a lot of details about the processor, as well as all of its 
incoming and outgoing connections.

Thanks
-Mark




On Nov 16, 2018, at 3:05 AM, josef.zahn...@swisscom.com wrote:

Hi Mark

We see the issue again, even after a freshly started cluster (where we started 
everything at the same time). The files are stuck for multiple seconds/minutes in 
the queue, and the light blue load-balancing icon on the right side shows that it 
is actually load balancing the whole time (even if it is just 1 or 2 files). The 
log (with default log levels) shows no WARN or ERROR entries…

Thanks in advance, Josef


From: Mark Payne 
Reply-To: "users@nifi.apache.org" 
Date: Monday, 12 November 2018 at 17:19
To: "users@nifi.apache.org" 
Subject: Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

Hey Dan,

Have you looked through the logs to see if there are any WARN or ERROR messages 
indicating what's going on?

Thanks
-Mark


On Nov 12, 2018, at 9:06 AM, dan young wrote:

Hello,

We have two processor groups connected via the new Load Balancing/Round Robin 
queue.  It seems that a flowfile is "stuck" in this queue.  I've been watching 
it for some time now.  Is there any way to troubleshoot what is stuck in the 
queue and why? Or maybe remove it?  I've tried to stop the PG and empty the 
queue, but it always says emptied 0 out of 1 flowfiles...

Regards,

Dano









Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

2018-11-16 Thread Josef.Zahner1
Hi Mark

We see the issue again, even after a freshly started cluster (where we started 
everything at the same time). The files are stuck for multiple seconds/minutes in 
the queue, and the light blue load-balancing icon on the right side shows that it 
is actually load balancing the whole time (even if it is just 1 or 2 files). The 
log (with default log levels) shows no WARN or ERROR entries…

Thanks in advance, Josef


From: Mark Payne 
Reply-To: "users@nifi.apache.org" 
Date: Monday, 12 November 2018 at 17:19
To: "users@nifi.apache.org" 
Subject: Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

Hey Dan,

Have you looked through the logs to see if there are any WARN or ERROR messages 
indicating what's going on?

Thanks
-Mark



On Nov 12, 2018, at 9:06 AM, dan young wrote:

Hello,

We have two processor groups connected via the new Load Balancing/Round Robin 
queue.  It seems that a flowfile is "stuck" in this queue.  I've been watching 
it for some time now.  Is there any way to troubleshoot what is stuck in the 
queue and why? Or maybe remove it?  I've tried to stop the PG and empty the 
queue, but it always says emptied 0 out of 1 flowfiles...

Regards,

Dano









Re: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

2018-11-12 Thread Josef.Zahner1
Same issue here! We had to reboot the whole cluster to fix it. The files were 
stuck in the queue for a few seconds/minutes until they got processed. In my case 
I assume it was caused by a one-by-one reboot of the cluster nodes: we normally 
reboot only one of 8 nodes and wait a few seconds before rebooting the next one, 
to retain as much performance as possible. It must be a bug in this new 
Load Balancing/Round Robin queue… any comments from the devs?

Cheers Josef

From: dan young 
Reply-To: "users@nifi.apache.org" 
Date: Monday, 12 November 2018 at 15:06
To: NiFi Mailing List 
Subject: NiFi 1.8 and stuck flowfile in Load Balanced enabled queue

Hello,

We have two processor groups connected via the new Load Balancing/Round Robin 
queue.  It seems that a flowfile is "stuck" in this queue.  I've been watching 
it for some time now.  Is there any way to troubleshoot what is stuck in the 
queue and why? Or maybe remove it?  I've tried to stop the PG and empty the 
queue, but it always says emptied 0 out of 1 flowfiles...

Regards,

Dano




