RE: Content repository data filling up disk...
We are observing this issue on 1.9.2. We modified the properties below; not sure whether anything additional needs to be done.

# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=1 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=2 hours
nifi.content.repository.archive.max.usage.percentage=25%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=../nifi-content-viewer/

Regards,
Dnyaneshwar Pawar

From: Boris Tyukin
Sent: Wednesday, June 19, 2019 11:40 PM
To: users@nifi.apache.org
Subject: Re: Content repository data filling up disk...

We had the exact same issue, and after the upgrade to 1.9.2 all is good.

On Wed, Jun 19, 2019 at 1:28 PM Erik Anderson <eand...@pobox.com> wrote:

> Mark Payne wrote:
> > Version 1.9.1 introduced a bug [1] that resulted in the content repository not being properly cleaned up, which could cause you to run out of disk space.
>
> I can confirm what Mark said: NiFi 1.9.1 indeed has the above bug. We ran into the same issue, with NiFi 1.9.1 not cleaning up the content repository and filling the disk. NiFi 1.9.2 fixes this problem.
>
> Erik Anderson
> Bloomberg

DISCLAIMER == This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.
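As a quick sanity check of the settings quoted above, the relevant archive keys in nifi.properties can be read with a small script. This is an illustrative sketch only (the file path and sample content are assumptions, not NiFi tooling):

```python
# Minimal sketch: parse the content-repository keys out of nifi.properties
# text and confirm the archive configuration. The SAMPLE string mirrors the
# settings quoted in the message above.

def parse_properties(text):
    """Parse simple key=value lines, skipping comments and blanks."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

SAMPLE = """
# Content Repository
nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=2 hours
nifi.content.repository.archive.max.usage.percentage=25%
"""

props = parse_properties(SAMPLE)
archive_on = props["nifi.content.repository.archive.enabled"] == "true"
print("archive enabled:", archive_on)
print("retention:", props["nifi.content.repository.archive.max.retention.period"])
```

With archiving enabled, retention and usage percentage only bound the archive; on 1.9.1 the cleanup bug mentioned in the thread could fill the disk regardless of these values.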
QueryRecord processor where clause does not work with equals operator and decimal numbers.
Hi,

We are using the QueryRecord processor to read and parse CSV files, with CSVReader as the Record Reader and JsonRecordSetWriter as the Record Writer. We observed that a WHERE clause using a decimal number with the equals operator does not yield any results; the rest of the operators work fine. It looks like a valid test case that is failing, so we wanted to know if anyone has come across this scenario, and whether a defect or enhancement request is already in place.

Regards,
Dnyaneshwar Pawar
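One plausible cause (an assumption, not confirmed in this thread) is that the column is inferred as a floating-point double, and exact equality on doubles is unreliable. A small illustration, not NiFi code:

```python
# Illustration: why equality on decimal values stored as floating-point
# doubles can fail even when the values "look" identical. If the CSV reader
# infers DOUBLE for a column, a decimal literal in the WHERE clause may not
# compare exactly equal.

value_from_csv = 0.1 + 0.2      # accumulated as a double
literal = 0.3

print(value_from_csv == literal)                 # False: binary rounding error

# A tolerance-based comparison (the usual SQL workaround, e.g.
# WHERE ABS(col - 0.3) < 0.000001) behaves as expected:
print(abs(value_from_csv - literal) < 1e-9)      # True
```

If that is the cause, declaring the column as a string or using a range/ABS comparison in the QueryRecord SQL are common workarounds.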
ScanHBase processor time range.
Hi All,

We are facing difficulty with the ScanHBase processor, where the optional min and max time-range parameters are set using variables. The values come from upstream, and empty or null values are not accepted as a valid time range: either a valid time range must be supplied, or the variables must be removed so that no value is set at all. Please find the screenshot below.

Is there any way we can keep the variables and still handle empty values coming from upstream?

[screenshot attached]

Regards,
Dnyaneshwar Pawar
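One workaround worth considering (an idea, not a ScanHBase feature) is to normalize empty upstream values into an explicit full range before they reach the processor, e.g. in an upstream attribute-manipulation step. A sketch of that normalization logic, with made-up names:

```python
# Hypothetical pre-processing sketch: map empty/null upstream values to a
# usable time range (0 .. now) so the downstream processor always receives
# valid numbers. Field names and defaults are illustrative assumptions.
import time

def normalize_time_range(min_ts, max_ts, now_ms=None):
    """Return (min, max) in epoch millis; empty/None fall back to the full range."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    lo = int(min_ts) if min_ts not in (None, "", "null") else 0
    hi = int(max_ts) if max_ts not in (None, "", "null") else now_ms
    return lo, hi

print(normalize_time_range("", "1560000000000"))   # (0, 1560000000000)
```

An unbounded scan of a large table is expensive, so defaulting the minimum to 0 should be done deliberately.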
RE: GetMongo failure routing issue.
Thanks Mike. And yes, we will need that feature. Do we need to raise a feature request?

Regards,
Dnyaneshwar Pawar

From: Mike Thomsen
Sent: Tuesday, February 05, 2019 5:45 PM
To: users@nifi.apache.org
Subject: Re: GetMongo failure routing issue.

GetMongo isn't stuck, it's just not sending anything to the success relationship because there's nothing to send. If you need an empty result array to be sent, that's a feature we could add, but it isn't a bug.

Thanks,
Mike

[quoted thread history trimmed; the earlier messages in this thread appear below]
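Until the attribute-handling fix lands, one defensive option (a sketch, not part of GetMongo) is to validate the query attribute before it reaches the processor, so malformed input can be routed to failure explicitly. Note this checks strict JSON; Mongo shell-style queries with unquoted keys would need a looser parser:

```python
# Sketch (assumption: the query arrives as a flowfile attribute, e.g. "QUERY"):
# pre-validate that it parses as JSON so a malformed query like {"input1":}
# can be routed to a failure path instead of wedging the processor.
import json

def route_query(query_text):
    """Return ("valid", parsed_doc) or ("failure", error_message)."""
    try:
        return "valid", json.loads(query_text)
    except json.JSONDecodeError as e:
        return "failure", str(e)

print(route_query('{"id": 123}')[0])      # valid: well-formed query
print(route_query('{"input1":}')[0])      # failure: same malformed query as above
```

The second input is exactly the query from the error log in this thread; json.loads rejects it for the same reason the BSON reader does (a value is expected before '}').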
RE: GetMongo failure routing issue.
Thanks Mike! We will definitely try the SNAPSHOT when we get the time. Below is what I was trying to convey: when we send a query that does not fetch any data (an empty response), GetMongo gets stuck and nothing appears in the bulletin. For example:

Query={"id":123}

The above is a valid query; however, MongoDB doesn't have anything associated with it, so we get an empty response.

Regards,
Dnyaneshwar Pawar

From: Mike Thomsen
Sent: Monday, February 04, 2019 8:19 PM
To: users@nifi.apache.org
Subject: Re: GetMongo failure routing issue.

We have two PRs related to this here if you want to pull 1.9.0-SNAPSHOT and try it out in a test environment: https://github.com/apache/nifi/pulls 5987/5988

Two things I need from you to close the loop here:
1. Are you saying that if you send {} to even an empty collection it gets stuck?
2. Can you elaborate on what you were seeing in the first bullet (about the invalid URL in the service)?

Thanks,
Mike

[quoted thread history trimmed; the earlier messages in this thread appear below]
RE: GetMongo failure routing issue.
Just an update to point 3 below: an invalid query still gets stuck in GetMongo; only invalid JSON passes through to the failure relationship. The typical case is a query to MongoDB that does not return anything; the GetMongo processor gets stuck in this case as well.

Regards,
Dnyaneshwar Pawar

[quoted thread history trimmed; the earlier messages in this thread appear below]
RE: GetMongo failure routing issue.
Hi Mike,

Thanks for the support. Just to report, we tried the new test cases below:

1. Invalid URL: If the GetMongo processor is configured with an invalid "MongoURL" in the controller service, it reports an ERROR in the bulletin and log file while fetching data (e.g. Connection Refused). But the flow gets stuck instead of routing to the failure relation, and the request flowfile remains in the queue.

2. Invalid database/collection names: If GetMongo is configured with an invalid database name or collection name, it doesn't log any ERROR as a bulletin or in the log file, and also doesn't route to the failure relation. It should route to failure with some error message.

3. Invalid query [the existing issue we discussed]: We tried the solution you provided below and it is working fine. However, do we need to raise a JIRA to get the attribute-based path fixed?

Regards,
Dnyaneshwar Pawar

From: Mike Thomsen <mikerthom...@gmail.com>
Sent: Wednesday, January 30, 2019 2:14 AM
To: users@nifi.apache.org
Subject: Re: GetMongo failure routing issue.

Ok, figured it out. I missed the fact that you were pulling it off an attribute. I can reproduce the bug now. For now, a workaround would be to put the query in the body of the flowfile. That doesn't seem to have any problems with handling malformed queries.

Thanks,
Mike

On Tue, Jan 29, 2019 at 3:40 PM Mike Thomsen <mikerthom...@gmail.com> wrote:

I just passed a query through the flowfile body with the value {"input":} and it routed to failure without incident. Anything else about your environment you can share?

On Mon, Jan 28, 2019 at 1:55 AM Dnyaneshwar Pawar <dnyaneshwar_pa...@persistent.com> wrote:

Hi Mike,

We are sending input through the POST body from our application. Below is the complete request; the highlighted query value goes as input to GetMongo:

{ "QUERY" : "{\"input1\":}" }

Below are the error logs:

2019-01-28 12:02:41,559 ERROR [Timer-Driven Process Thread-5] o.a.nifi.processors.mongodb.GetMongo GetMongo[id=a14b5120-8924-3892-0de1-7279148dba2b] failed to process session due to org.bson.json.JsonParseException: JSON reader was expecting a value but found '}'.; Processor Administratively Yielded for 1 sec
org.bson.json.JsonParseException: JSON reader was expecting a value but found '}'.
    at org.bson.json.JsonReader.readBsonType(JsonReader.java:228)
    at org.bson.codecs.DocumentCodec.decode(DocumentCodec.java:139)
    at org.bson.codecs.DocumentCodec.decode(DocumentCodec.java:45)
    at org.bson.Document.parse(Document.java:105)
    at org.bson.Document.parse(Document.java:90)
    at org.apache.nifi.processors.mongodb.GetMongo.getQuery(GetMongo.java:326)
    at org.apache.nifi.processors.mongodb.GetMongo.onTrigger(GetMongo.java:225)
    at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
    at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

We have also enclosed screenshots of the NiFi flow, the GetMongo configuration, and the MongoDB console output with the same query. Please find them attached.

Regards,
Dnyaneshwar Pawar

From: Mike Thomsen <mikerthom...@gmail.com>
Sent: Saturday, January 26, 2019 5:48 PM
To: users@nifi.apache.org
Subject: Re: GetMongo failure routing issue.

I just tried to replicate with GenerateFlowFile and GetMongo, and couldn't reproduce it. I used these examples:

{ "invalid": true }
{ invalid: true }
[ invalid: true ]

The last two are clearly not valid JSON, but Mongo's client API can apparently parse the middle one. The first two routed to success, the last one routed to failure, both on 1.8.0 and 1.9.0-SNAPSHOT. Can you share your input?

Thanks,
Mike

On Fri, Jan 25, 2019 at 10:53 AM Mike Thomsen <mikerthom...@gmail.com> wrote:

Ok, so it's a current issue. I'll look into it.

On Fri, Jan 25, 2019 at 12
Question on NiFi upgrade.
Hi All,

Help us if anyone has come across the scenario below. We ship NiFi with one of our products to customers. In a new release, we are planning to provide upgrade scripts so that customers can upgrade to the new features we provide through NiFi. All aspects have been analyzed, and the following two scenarios have become pain points for us:

1. Merging flow.xml.gz: Since a customer may have implemented new features at their end, we simply cannot replace our flow file. We need to retain the customer's changes to the flow and add our new features on top of them.

2. Merging nifi.properties: for the same reason described above (for example, a customer may have changed the HTTPS port).

Is there any in-built utility to achieve this?

Regards,
Dnyaneshwar Pawar

DISCLAIMER == This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.
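For the nifi.properties half of the question, a hand-rolled merge is straightforward (I am not aware of a built-in NiFi utility; this sketch is one possible approach, with illustrative keys). Merging flow.xml.gz is a much harder XML-diff problem and is not covered here:

```python
# Sketch: merge vendor defaults with a customer's nifi.properties, keeping
# the customer's existing values and adding only keys they don't have yet.

def parse(text):
    """Parse key=value lines, ignoring blanks and comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            k, _, v = line.partition("=")
            props[k.strip()] = v.strip()
    return props

def merge(vendor_text, customer_text):
    vendor, customer = parse(vendor_text), parse(customer_text)
    merged = dict(vendor)
    merged.update(customer)      # customer's overrides win
    return merged

vendor = "nifi.web.https.port=8443\nnifi.new.feature.flag=true"
customer = "nifi.web.https.port=9443"
print(merge(vendor, customer))
# The customer's port (9443) is retained; the new vendor key is added.
```

A real upgrade script would also preserve comments and ordering, and flag keys the vendor removed or renamed between releases.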
RE: GetMongo failure routing issue.
Mike,

We are using MongoDB 3.4.7 and Apache NiFi 1.8.0.

Regards,
Dnyaneshwar Pawar

From: Mike Thomsen
Sent: Thursday, January 24, 2019 8:44 PM
To: users@nifi.apache.org
Subject: Re: GetMongo failure routing issue.

What version are you using?

On Thu, Jan 24, 2019 at 8:23 AM Dnyaneshwar Pawar <dnyaneshwar_pa...@persistent.com> wrote:

Hi All,

We are working on MongoDB integration with NiFi (version 1.8), and we observed one issue related to the failure case. Whenever GetMongo processes an incorrect query (for example, invalid JSON), the bulletin gets updated with an appropriate error message; however, the flow gets stuck instead of routing to the failure relation. Any pointers?

Regards,
Dnyaneshwar Pawar
GetMongo failure routing issue.
Hi All,

We are working on MongoDB integration with NiFi (version 1.8), and we observed one issue related to the failure case. Whenever GetMongo processes an incorrect query (for example, invalid JSON), the bulletin gets updated with an appropriate error message; however, the flow gets stuck instead of routing to the failure relation. Any pointers?

Regards,
Dnyaneshwar Pawar
RE: Exception when setting up NiFi 1.8 with IBM JDK (SSL Setup).
Hi Mike,

Thanks for your response. I think some changes happened in NiFi 1.8 around this; the earlier version (NiFi 1.7) worked just fine with the IBM JDK.

Regards,
Dnyaneshwar Pawar

From: Mike Thomsen
Sent: Thursday, January 17, 2019 5:33 PM
To: users@nifi.apache.org
Subject: Re: Exception when setting up NiFi 1.8 with IBM JDK (SSL Setup).

This is something that we could write a Jira ticket against, but as it is hard-coded for now, I don't think there's a workaround.

On Thu, Jan 17, 2019 at 6:57 AM Dnyaneshwar Pawar <dnyaneshwar_pa...@persistent.com> wrote:

...just found out that Jetty is not loading the appropriate keystore provider for the IBM JDK, which should be IBMJCE. It is always returning SUN. Any pointers?

Regards,
Dnyaneshwar Pawar

From: Dnyaneshwar Pawar
Sent: Friday, December 21, 2018 3:00 PM
To: users@nifi.apache.org
Subject: Exception when setting up NiFi 1.8 with IBM JDK (SSL Setup).

Hi,

As part of an upgrade process, we tried using NiFi 1.8 (HTTPS) with IBM JDK 1.8. Please find below a screenshot of the actual java -version command output.

[screenshot attached]

Below are screen captures of the error that occurred while trying to start NiFi:

[screenshots attached]

We have tried a couple of things, such as setting the JCE provider and generating the keystore with the IBM keytool; nothing helped. The same IBM JDK with the NiFi 1.7 combination works just fine. We checked the KeystoreUtils file, and it seems the provider for the JKS type is SUN; then why does it fail to load the keystore?

Regards,
Dnyaneshwar Pawar
RE: Exception when setting up NiFi 1.8 with IBM JDK (SSL Setup).
...just found out that Jetty is not loading the appropriate keystore provider for the IBM JDK, which should be IBMJCE. It is always returning SUN. Any pointers?

Regards,
Dnyaneshwar Pawar

From: Dnyaneshwar Pawar
Sent: Friday, December 21, 2018 3:00 PM
To: users@nifi.apache.org
Subject: Exception when setting up NiFi 1.8 with IBM JDK (SSL Setup).

Hi,

As part of an upgrade process, we tried using NiFi 1.8 (HTTPS) with IBM JDK 1.8. Please find below a screenshot of the actual java -version command output.

[screenshot attached]

Below are screen captures of the error that occurred while trying to start NiFi:

[screenshots attached]

We have tried a couple of things, such as setting the JCE provider and generating the keystore with the IBM keytool; nothing helped. The same IBM JDK with the NiFi 1.7 combination works just fine. We checked the KeystoreUtils file, and it seems the provider for the JKS type is SUN; then why does it fail to load the keystore?

Regards,
Dnyaneshwar Pawar
QueryElasticsearchHttp processor over SSL
Hi All,

We are trying to set up a connection to Elasticsearch. HTTP connections work fine; however, HTTPS connections fail with a hostname verification failure. We know this is a certificate issue: the server is using a self-signed certificate whose common name does not match the actual hostname. Is there any way to get past this situation, the way the InvokeHTTP processor's Trusted Hostname property allows us to add the actual hostname?

Regards,
Dnyaneshwar Pawar
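To make the trade-off concrete, here is what "trusting" a mismatched hostname amounts to at the TLS layer, shown with Python's standard ssl module (illustration only, not NiFi code). The safer long-term fix is reissuing the certificate with a Subject Alternative Name matching the real hostname:

```python
# Illustration: disabling hostname verification while keeping certificate
# chain verification. This is roughly what a "trusted hostname" override
# does, and it weakens security, so it belongs only in controlled setups.
import ssl

# The default client context verifies both the chain and the hostname.
strict = ssl.create_default_context()
print(strict.check_hostname)                      # True

# A context that still verifies the chain but skips hostname matching:
lenient = ssl.create_default_context()
lenient.check_hostname = False                    # accept CN/SAN mismatch
print(lenient.verify_mode == ssl.CERT_REQUIRED)   # True: chain still verified
```

Whether a given Elasticsearch processor exposes such an override depends on the processor version, so check its property list before relying on this.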
RE: Data provenance screen is always blank with HTTPS.
Thanks Andy! It is empty table window. Further, I performed same action with Developer Tools open for both HTTP and HTTPS (For HTTS, I have created query provenance policy for user ). In case of HTTPS it executes/queries provenance successfully but in result provenanceEvents array is empty which in case of HTTP connection returns appropriate event objects. Regards, Dnyaneshwar Pawar From: Andy LoPresto Sent: Friday, November 30, 2018 12:39 PM To: users@nifi.apache.org Subject: Re: Data provenance screen is always blank with HTTPS. Is the result that you get an empty table window, or a blank (literally white) screen? Can you perform the same action with the Developer Tools open and the network tab activated to see the response from the server, or use curl to perform the same request on the command line and examine the JSON response? That should indicate if the actual lineage is being returned or it’s an error message for unauthorized access. This could be a permissions issue, or it could be a coaching issue. Try clearing the browser cache and reloading the UI. Andy LoPresto alopre...@apache.org<mailto:alopre...@apache.org> alopresto.apa...@gmail.com<mailto:alopresto.apa...@gmail.com> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69 On Nov 29, 2018, at 22:48, Dnyaneshwar Pawar mailto:dnyaneshwar_pa...@persistent.com>> wrote: Hi We are not able to see the data provenance events on NiFi UI, especially when we moved to secure connections. Access policies have been created for the users, but provenance does not show any event on screen. Is there anything we need to do more? Regards, Dnyaneshwar Pawar DISCLAIMER == This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. 
If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.
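Andy's curl suggestion can be sketched as follows. The host, port, and token are placeholders (not a real instance), and the body shows only a minimal provenance query; adapt all of it to your environment:

```shell
# Placeholder endpoint and token -- adjust for your deployment.
NIFI_API="https://nifi.example.com:9443/nifi-api"
TOKEN="<access-token>"

# Submit a provenance query; the response contains a query id to poll,
# and the final result carries the provenanceEvents array to inspect.
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"provenance":{"request":{"maxResults":100}}}' \
  "$NIFI_API/provenance" || true  # tolerate no live server in this sketch
```

If provenanceEvents comes back empty over HTTPS but populated over HTTP, compare the two JSON responses for an authorization message and confirm the user actually holds the 'query provenance' policy.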
Data provenance screen is always blank with HTTPS.
Hi, We are not able to see the data provenance events in the NiFi UI, especially since we moved to secure connections. Access policies have been created for the users, but provenance does not show any events on screen. Is there anything more we need to do? Regards, Dnyaneshwar Pawar
RE: Edit QueryCassandra processor using REST API.
Thanks Koji. We tried terminating the threads, but it does not work for the QueryCassandra processor in particular. QueryCassandra just refuses to start, complaining that the processor is still in the STOPPING state. Regards, Dnyaneshwar Pawar -Original Message- From: Koji Kawamura Sent: Thursday, November 15, 2018 1:22 PM To: users@nifi.apache.org Subject: Re: Edit QueryCassandra processor using REST API. Hi Dnyaneshwar, You can terminate a remaining thread forcefully by sending a DELETE request to /processors/{id}/threads. https://nifi.apache.org/docs/nifi-docs/rest-api/index.html Thanks, Koji On Thu, Nov 15, 2018 at 4:14 PM Dnyaneshwar Pawar wrote: > > Hi > > We are trying to edit the QueryCassandra processor using the REST API; this requires the processor to be stopped programmatically and started again after the changes are made. In this process, we initially had an issue with the processor taking time to stop. We added a wait time of 10 seconds and that worked. However, we have recently observed that the processor sometimes gets stuck in the STOPPING state and never goes down within the 10-second window. > > This prevents the processor from being started, and it remains in the stopped/stopping state. Error log: > > 6a4bf199-4071-3823-04c1-fa4fe9fba79b cannot be started because it is > not stopped. Current state is STOPPING > > We know we can get the processor state via the REST URL; however, the processor sometimes remains stuck in the STOPPING state. Is there any way to force-stop this processor, or a maximum time limit after which the processor will go down? > > Regards, > > Dnyaneshwar Pawar
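Koji's suggestion, as a curl sketch. The endpoint and token are placeholders; the processor id is the one from the error log above:

```shell
# Placeholder endpoint and token -- adjust for your deployment.
NIFI_API="https://nifi.example.com:9443/nifi-api"
TOKEN="<access-token>"
PROC_ID="6a4bf199-4071-3823-04c1-fa4fe9fba79b"

# Forcefully terminate the processor's lingering threads
# (DELETE /processors/{id}/threads).
curl -k -X DELETE \
  -H "Authorization: Bearer $TOKEN" \
  "$NIFI_API/processors/$PROC_ID/threads" || true  # no live server in this sketch
```

After the threads are terminated the processor should drop out of STOPPING, and a subsequent start request can succeed.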
Edit QueryCassandra processor using REST API.
Hi, We are trying to edit the QueryCassandra processor using the REST API; this requires the processor to be stopped programmatically and started again after the changes are made. In this process, we initially had an issue with the processor taking time to stop. We added a wait time of 10 seconds and that worked. However, we have recently observed that the processor sometimes gets stuck in the STOPPING state and never goes down within the 10-second window. This prevents the processor from being started, and it remains in the stopped/stopping state. Error log: 6a4bf199-4071-3823-04c1-fa4fe9fba79b cannot be started because it is not stopped. Current state is STOPPING We know we can get the processor state via the REST URL; however, the processor sometimes remains stuck in the STOPPING state. Is there any way to force-stop this processor, or a maximum time limit after which the processor will go down? Regards, Dnyaneshwar Pawar
RE: Security issues for NiFi's supporting libs.
Thank you for your reply. Is there any official documentation or link we can point our users to as assurance? Their scan is reporting CVE-2018-1000613, CVE-2018-1000180, and CVE-2009-0001. Regards, Dnyaneshwar Pawar From: Andy LoPresto Sent: Wednesday, October 10, 2018 6:37 PM To: users@nifi.apache.org Subject: Re: Security issues for NiFi's supporting libs. The Apache NiFi security team has responded to these messages via private email. For all interested parties, please know that NiFi is not vulnerable to CVE-2018-1000613, and further discussion is needed for the second CVE listed, as this issue number does not match the description provided. All users should refer to the Apache NiFi Security Reporting Guidelines for the coordinated disclosure process [1]. [1] https://nifi.apache.org/security.html Andy LoPresto alopre...@apache.org alopresto.apa...@gmail.com PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69 On Oct 10, 2018, at 4:50 PM, Dnyaneshwar Pawar <dnyaneshwar_pa...@persistent.com> wrote: More organized information: Vulnerability: CVE-2018-1000613 | Severity: High | Package/jar: bcprov-jdk15on-1.59.jar | Description: Legion of the Bouncy Castle Java Cryptography APIs prior to version 1.60 contain a CWE-470 ('Unsafe Reflection') vulnerability: deserializing an XMSS/XMSS^MT private key can result in the execution of unexpected code. It is exploitable via a handcrafted private key that includes references to unexpected classes, which will be picked up from the class path of the executing application. This vulnerability appears to have been fixed in 1.60 and later.
Vulnerability: CVE-2009-0001 | Severity: Medium | Package/jar: commons-codec-1.11.jar | Description: Not all "business" method implementations of the public API in Apache Commons Codec 1.x are thread safe, which might disclose the wrong data or allow an attacker to change non-private fields. From: Dnyaneshwar Pawar <dnyaneshwar_pa...@persistent.com> Sent: Wednesday, October 10, 2018 12:35 PM To: users@nifi.apache.org Subject: Security issues for NiFi's supporting libs. Hi, We are using Apache NiFi 1.7.0, and a security scan reports high-severity issues for Bouncy Castle bcprov-jdk15on-1.59 and Apache's commons-codec library. How should we address them? Bouncy Castle has released fixed versions; what about commons-codec, and are the fixes available in NiFi 1.7.1? Thanks in advance. Regards, Dnyaneshwar Pawar
RE: High volume data with ExecuteSQL processor
Hi Koji, As suggested, 'Max Rows Per Flow File' is not available on the ExecuteSQL processor; it is available on the QueryDatabaseTable processor. But we cannot use QueryDatabaseTable because it does not accept upstream connections, and we have a requirement to accept an upstream connection from other processors (e.g. the HandleHttpRequest processor). Please suggest how we can use ExecuteSQL to process high-volume data. -Original Message- From: Koji Kawamura Sent: Tuesday, September 25, 2018 5:59 AM To: users@nifi.apache.org Subject: Re: High volume data with ExecuteSQL processor Hello, Did you try setting 'Max Rows Per Flow File' on the ExecuteSQL processor? If the OOM happened when NiFi writes all results into a single FlowFile, then the property can help break the result set into several FlowFiles to avoid that. Thanks, Koji On Fri, Sep 21, 2018 at 3:56 PM Dnyaneshwar Pawar wrote: > > Hi, > > How do we execute/process high-volume data with the ExecuteSQL processor? > > We tried to execute a query against a DB2 database which has around 10 lakh (1 million) records. While executing this query we get an OutOfMemory error, and the request (FlowFile) is stuck in the queue. When we restart NiFi it is still stuck in the queue, and as soon as NiFi starts we get the same error again. Is there any way to configure a retry for the queue (the connection between two processors)? > > We also tried changing the FlowFile repository implementation in nifi.properties (nifi.flowfile.repository.implementation) to 'org.apache.nifi.controller.repository.VolatileFlowFileRepository'. This removes the queued FlowFile on restart, but it carries the risk of data loss in the event of a power/machine failure for other flows. > > So please suggest how to execute a high-volume data query, or any retry mechanism available for a queued FlowFile.
> > Regards, > > Dnyaneshwar Pawar
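On NiFi versions whose ExecuteSQL does expose 'Max Rows Per Flow File', the property can be set over the REST API like any other processor property. A rough sketch only: the endpoint, token, processor id, revision number, and the property's internal name ('esql-max-rows') are all assumptions to verify against your own instance:

```shell
# Placeholder endpoint, token, and processor id.
NIFI_API="https://nifi.example.com:9443/nifi-api"
TOKEN="<access-token>"
PROC_ID="<executesql-processor-id>"

# The revision version must match what the server currently reports for
# this processor; fetch it first with GET /processors/{id}.
curl -k -X PUT \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"revision":{"version":3},
       "component":{"id":"'"$PROC_ID"'",
                    "config":{"properties":{"esql-max-rows":"10000"}}}}' \
  "$NIFI_API/processors/$PROC_ID" || true  # no live server in this sketch
```

With the result set split into FlowFiles of at most 10,000 rows each, a single huge query no longer has to fit in heap at once.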
RE: Security issues for NiFi's supporting libs.
More organized information: Vulnerability: CVE-2018-1000613 | Severity: High | Package/jar: bcprov-jdk15on-1.59.jar | Description: Legion of the Bouncy Castle Java Cryptography APIs prior to version 1.60 contain a CWE-470 ('Unsafe Reflection') vulnerability: deserializing an XMSS/XMSS^MT private key can result in the execution of unexpected code. It is exploitable via a handcrafted private key that includes references to unexpected classes, which will be picked up from the class path of the executing application. This vulnerability appears to have been fixed in 1.60 and later. Vulnerability: CVE-2009-0001 | Severity: Medium | Package/jar: commons-codec-1.11.jar | Description: Not all "business" method implementations of the public API in Apache Commons Codec 1.x are thread safe, which might disclose the wrong data or allow an attacker to change non-private fields. From: Dnyaneshwar Pawar Sent: Wednesday, October 10, 2018 12:35 PM To: users@nifi.apache.org Subject: Security issues for NiFi's supporting libs. Hi, We are using Apache NiFi 1.7.0, and a security scan reports high-severity issues for Bouncy Castle bcprov-jdk15on-1.59 and Apache's commons-codec library. How should we address them? Bouncy Castle has released fixed versions; what about commons-codec, and are the fixes available in NiFi 1.7.1? Thanks in advance. Regards, Dnyaneshwar Pawar
Security issues for NiFi's supporting libs.
Hi, We are using Apache NiFi 1.7.0, and a security scan reports high-severity issues for Bouncy Castle bcprov-jdk15on-1.59 and Apache's commons-codec library. How should we address them? Bouncy Castle has released fixed versions; what about commons-codec, and are the fixes available in NiFi 1.7.1? Thanks in advance. Regards, Dnyaneshwar Pawar
RE: ***UNCHECKED*** Re: Case insensitive NiFi Login
Hi Bryan, I may not have followed you completely; however, I tried the below and it's not working: nifi.security.identity.mapping.pattern.dn=^uid=(.*?),ou=(.*?),o=(.*?)$ # LDAP DN pattern nifi.security.identity.mapping.value.dn=$1 nifi.security.identity.mapping.transform.dn=LOWER I deleted the authorizations.xml file and restarted NiFi to check whether the login works. Regards, Dnyaneshwar Pawar -Original Message- From: Bryan Bende Sent: Wednesday, September 19, 2018 6:40 PM To: users@nifi.apache.org Subject: ***UNCHECKED*** Re: Case insensitive NiFi Login Hello, I don't think you can make it truly case insensitive, but you can use the identity mappings to transform the incoming identity to either upper case or lower case: nifi.security.identity.mapping.transform.dn=LOWER Assuming you make nifi.security.identity.mapping.pattern.dn match your LDAP DN pattern. -Bryan On Wed, Sep 19, 2018 at 8:57 AM Dnyaneshwar Pawar wrote: > > Hi, > > We observed that NiFi authorization is case sensitive when NiFi is integrated with LDAP. It complains of insufficient permissions (Unknown user with Identity ...) when we try to log in with a different letter case in the username. Is there any way to disable this behavior and make the username case insensitive? > > Regards, > > Dnyaneshwar Pawar
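The mapping Bryan describes does two things in sequence: it extracts from the matched DN the capture group named by the value expression, then applies the LOWER transform to the result. A rough shell illustration of those same two steps (the DN and pattern here are examples only, not NiFi code):

```shell
# Example DN as presented by an LDAP login.
dn='uid=John.Doe,ou=people,o=example'

# Step 1: pattern ^uid=(.*?),ou=(.*?),o=(.*?)$ with value $1
# keeps only the uid component.
# Step 2: transform LOWER folds the result to lower case.
mapped=$(printf '%s\n' "$dn" \
  | sed -E 's/^uid=([^,]+),ou=[^,]+,o=.+$/\1/' \
  | tr '[:upper:]' '[:lower:]')
echo "$mapped"
```

Note that the mapped identity ("john.doe" for this example DN) must also be the identity spelled in users.xml/authorizations.xml; if the policies were created for the original mixed-case DN, the lower-cased identity will still be unknown to the authorizer.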
High volume data with ExecuteSQL processor
Hi, How do we execute/process high-volume data with the ExecuteSQL processor? We tried to execute a query against a DB2 database which has around 10 lakh (1 million) records. While executing this query we get an OutOfMemory error, and the request (FlowFile) is stuck in the queue. When we restart NiFi it is still stuck in the queue, and as soon as NiFi starts we get the same error again. Is there any way to configure a retry for the queue (the connection between two processors)? We also tried changing the FlowFile repository implementation in nifi.properties (nifi.flowfile.repository.implementation) to 'org.apache.nifi.controller.repository.VolatileFlowFileRepository'. This removes the queued FlowFile on restart, but it carries the risk of data loss in the event of a power/machine failure for other flows. So please suggest how to execute a high-volume data query, or any retry mechanism available for a queued FlowFile. Regards, Dnyaneshwar Pawar
Case insensitive NiFi Login
Hi, We observed that NiFi authorization is case sensitive when NiFi is integrated with LDAP. It complains of insufficient permissions (Unknown user with Identity ...) when we try to log in with a different letter case in the username. Is there any way to disable this behavior and make the username case insensitive? Regards, Dnyaneshwar Pawar
Whitelabel Apache NiFi UI
Hi All, As part of our Apache NiFi usage, we would like to white-label the Apache NiFi UI and add an image to the UI (alongside the other out-of-the-box content). Can you tell us whether this is allowed? If yes, how do we do it? Regards, Dnyaneshwar Pawar
Invoke HTTP custom headers.
Hi All, We want to pass custom headers to a third-party REST application through the InvokeHTTP processor. We know this can be achieved using 'Attributes to Send' and dynamic properties; however, both of these require the key names in advance, while in our case we don't know the keys/values or even how many headers will be present in the request. So the typical requirement is: how do we pass/carry Header 1 ... Header N from the end user, as-is, through the InvokeHTTP processor? Thank you! Regards, Dnyaneshwar Pawar
Secure NiFi cluster on kubernetes.
Hi, We have a requirement to deploy NiFi on cloud platforms, and we are considering Kubernetes as the orchestrator. I know how to configure a NiFi cluster; however, I am not sure how things would work on Kubernetes. Further, we are using Apache DS as the LDAP server for authentication, and we are planning to use the embedded ZooKeeper instances to form the ZooKeeper cluster. Any help or pointer to documentation would be appreciated. Thank you. Regards, Dnyaneshwar Pawar
Implementing authorization programmatically.
Hi All, We have a requirement to implement authorization programmatically. We are new to NiFi and are aware of authorization through the UI; however, modifying authorizations.xml programmatically is what we are looking for. Any pointer to documentation or source code would be a great help. Thank you. Regards, Dnyaneshwar Pawar