[jira] [Updated] (NIFI-7775) Exclude TesseractOCR Parser from ExtractMediaMetadata Processor

2020-11-11 Thread faerballert (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

faerballert updated NIFI-7775:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Exclude TesseractOCR Parser from ExtractMediaMetadata Processor
> ---
>
> Key: NIFI-7775
> URL: https://issues.apache.org/jira/browse/NIFI-7775
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: faerballert
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Apache Tika is used to extract media metadata.
> Tika also enables the TesseractOCRParser as part of its DefaultParser; it is
> not needed for this use case, and excluding it brings noticeable runtime
> improvements. The TesseractOCRParser can be excluded via a TikaConfig file.
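
For reference, the exclusion described above is typically done with a Tika
configuration file along these lines (a sketch based on Tika's documented
`parser-exclude` mechanism; the file name and how it is wired into the
processor are deployment details, not part of this ticket):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<properties>
  <parsers>
    <!-- Keep the DefaultParser, but drop the (slow) Tesseract OCR parser -->
    <parser class="org.apache.tika.parser.DefaultParser">
      <parser-exclude class="org.apache.tika.parser.ocr.TesseractOCRParser"/>
    </parser>
  </parsers>
</properties>
```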



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7997) ListFTP does not use HTTP proxy

2020-11-11 Thread Richard Zuidhof (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zuidhof updated NIFI-7997:
--
Description: 
After configuring an HTTP-type proxy server in the ListFTP processor, NiFi still 
tries to connect directly to the FTP server instead of to the proxy server for 
the initial connection. Subsequent commands like DIR or GET are tunneled through 
the HTTP proxy, but without a direct connection to the FTP server this will 
never succeed. There is no issue when using a SOCKS proxy.
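
For an HTTP proxy, the control connection has to be opened to the proxy first
and tunneled with a CONNECT request before any FTP traffic is sent (commons-net
ships `FTPHTTPClient` for this). The missing step can be sketched in a few
lines of Python (a hypothetical illustration of the mechanism, not NiFi's
actual code; helper names are invented):

```python
import socket

def build_connect_request(dest_host, dest_port):
    """HTTP CONNECT request asking the proxy to open a raw tunnel
    to the FTP server's control port."""
    return (
        "CONNECT {h}:{p} HTTP/1.1\r\n"
        "Host: {h}:{p}\r\n"
        "\r\n"
    ).format(h=dest_host, p=dest_port).encode("ascii")

def open_tunneled_socket(proxy_host, proxy_port, dest_host, dest_port, timeout=10.0):
    # Connect to the PROXY, not the FTP server -- the step this report
    # says is skipped when the socket dials the FTP host directly.
    sock = socket.create_connection((proxy_host, proxy_port), timeout=timeout)
    sock.sendall(build_connect_request(dest_host, dest_port))
    reply = sock.recv(4096)
    if b" 200 " not in reply.split(b"\r\n", 1)[0]:
        sock.close()
        raise ConnectionError("proxy refused CONNECT: %r" % reply)
    return sock  # the FTP control channel now runs through the proxy
```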

You can reproduce this very easily, even without a real proxy. Just enter a 
non-existent FTP host, set the proxy type to HTTP, and use a fake proxy host and 
port. When using a non-existent FTP hostname you will get the error below, which 
shows that NiFi tries to connect directly instead of via the proxy:

{{2020-11-11 12:36:01,002 ERROR [Timer-Driven Process Thread-8] 
o.a.nifi.processors.standard.ListFTP 
ListFTP[id=99240cfd-0175-1000--e04e112b] Failed to perform listing on 
remote host due to nowhere.to.go: java.net.UnknownHostException: nowhere.to.go}}

When using an unreachable IP (like 10.123.1.1) you will see this:

{{2020-11-11 12:43:10,862 ERROR [Timer-Driven Process Thread-10] 
o.a.nifi.processors.standard.ListFTP 
ListFTP[id=99240cfd-0175-1000--e04e112b] Failed to perform listing on 
remote host due to Connection timed out (Connection timed out): 
java.net.ConnectException: Connection timed out (Connection timed out) 
java.net.ConnectException: Connection timed out (Connection timed out) at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) 
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:607) at 
org.apache.commons.net.SocketClient._connect(SocketClient.java:243) at 
org.apache.commons.net.SocketClient.connect(SocketClient.java:181) at 
org.apache.nifi.processors.standard.util.FTPTransfer.getClient(FTPTransfer.java:600)
 at 
org.apache.nifi.processors.standard.util.FTPTransfer.getListing(FTPTransfer.java:233)
 at 
org.apache.nifi.processors.standard.util.FTPTransfer.getListing(FTPTransfer.java:196)
 at 
org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:106)
 at 
org.apache.nifi.processor.util.list.AbstractListProcessor.listByTrackingTimestamps(AbstractListProcessor.java:472)
 at 
org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:414)
 at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
 at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
 at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
 at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)}}

  was:
After configuring an HTTP-type proxy server in the ListFTP processor, NiFi still 
tries to connect directly to the FTP server instead of to the proxy server for 
the initial connection. Subsequent commands like DIR or GET are tunneled through 
the HTTP proxy, but without a direct connection to the FTP server this will 
never succeed. There is no issue when using a SOCKS proxy.

You can reproduce this very easily, even without a real proxy. Just enter a 
non-existent FTP host, set the proxy type to HTTP, and use a fake proxy host and 
port. When using a non-existent FTP hostname you will get the error below, which 
shows that NiFi tries to connect directly instead of via the proxy:

2020-11-11 12:36:01,002 ERROR [Timer-Driven Process Thread-8] 
o.a.nifi.processors.standard.ListFTP 
ListFTP[id=99240cfd-0175-1000--e04e112b] Failed to perform listing on 
remote host due to nowhere.to.go: java.net.UnknownHostException: nowhere.to.go

When using an unreachable IP (like 10.123.1.1) you will see this:

2020-11-11 12:43:10,862 ERROR [Timer-Driven Process Thread-10] 
o.a.nifi.processors.standard.ListFTP 
ListFTP[id=99240cfd-0175-1000--e04e112b] Failed to perform listing on 
remote 

[jira] [Updated] (NIFI-7997) ListFTP does not use HTTP proxy

2020-11-11 Thread Richard Zuidhof (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zuidhof updated NIFI-7997:
--
Description: 
After configuring an HTTP-type proxy server in the ListFTP processor, NiFi still 
tries to connect directly to the FTP server instead of to the proxy server for 
the initial connection. Subsequent commands like DIR or GET are tunneled through 
the HTTP proxy, but without a direct connection to the FTP server this will 
never succeed. There is no issue when using a SOCKS proxy.

You can reproduce this very easily, even without a real proxy. Just enter a 
non-existent FTP host, set the proxy type to HTTP, and use a fake proxy host and 
port. When using a non-existent FTP hostname you will get the error below, which 
shows that NiFi tries to connect directly instead of via the proxy:

2020-11-11 12:36:01,002 ERROR [Timer-Driven Process Thread-8] 
o.a.nifi.processors.standard.ListFTP 
ListFTP[id=99240cfd-0175-1000--e04e112b] Failed to perform listing on 
remote host due to nowhere.to.go: java.net.UnknownHostException: nowhere.to.go

When using an unreachable IP (like 10.123.1.1) you will see this:

2020-11-11 12:43:10,862 ERROR [Timer-Driven Process Thread-10] 
o.a.nifi.processors.standard.ListFTP 
ListFTP[id=99240cfd-0175-1000--e04e112b] Failed to perform listing on 
remote host due to Connection timed out (Connection timed out): 
java.net.ConnectException: Connection timed out (Connection timed out) 
java.net.ConnectException: Connection timed out (Connection timed out) at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) 
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:607) at 
org.apache.commons.net.SocketClient._connect(SocketClient.java:243) at 
org.apache.commons.net.SocketClient.connect(SocketClient.java:181) at 
org.apache.nifi.processors.standard.util.FTPTransfer.getClient(FTPTransfer.java:600)
 at 
org.apache.nifi.processors.standard.util.FTPTransfer.getListing(FTPTransfer.java:233)
 at 
org.apache.nifi.processors.standard.util.FTPTransfer.getListing(FTPTransfer.java:196)
 at 
org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:106)
 at 
org.apache.nifi.processor.util.list.AbstractListProcessor.listByTrackingTimestamps(AbstractListProcessor.java:472)
 at 
org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:414)
 at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
 at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
 at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
 at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

  was:
After configuring a proxy server in the ListFTP processor, NiFi still tries to 
connect directly to the FTP server instead of to the proxy server for the 
initial connection. Subsequent commands like DIR or GET are tunneled through the 
HTTP proxy, but without a direct connection to the FTP server this will never 
succeed. There is no issue when using a SOCKS proxy.

You can reproduce this very easily, even without a real proxy. Just enter a 
non-existent FTP host, set the proxy type to HTTP, and use a fake proxy host and 
port. When using a non-existent FTP hostname you will get the error below, which 
shows that NiFi tries to connect directly instead of via the proxy:

2020-11-11 12:36:01,002 ERROR [Timer-Driven Process Thread-8] 
o.a.nifi.processors.standard.ListFTP 
ListFTP[id=99240cfd-0175-1000--e04e112b] Failed to perform listing on 
remote host due to nowhere.to.go: java.net.UnknownHostException: nowhere.to.go

When using an unreachable IP (like 10.123.1.1) you will see this:

2020-11-11 12:43:10,862 ERROR [Timer-Driven Process Thread-10] 
o.a.nifi.processors.standard.ListFTP 
ListFTP[id=99240cfd-0175-1000--e04e112b] Failed to perform listing on 
remote host due to 

[jira] [Created] (NIFI-7996) Conversion with ConvertRecord to avro results in invalid date

2020-11-11 Thread Denes Arvay (Jira)
Denes Arvay created NIFI-7996:
-

 Summary: Conversion with ConvertRecord to avro results in invalid 
date
 Key: NIFI-7996
 URL: https://issues.apache.org/jira/browse/NIFI-7996
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Denes Arvay
Assignee: Denes Arvay


Converting a date field to Avro using ConvertRecord results in an invalid value 
if the system timezone's offset from UTC is negative.

System timezone: EST (UTC-5)

Input json:
{code:java}
{ "SomeLocalDate": "20170411" }
{code}
Avro schema:
{code:java}
{ "fields": [ {
   "name": "SomeLocalDate",
   "type": [ "null", { "logicalType": "date", "type": "int" } ]
  }],
 "name": "DateTest",
 "namespace": "org.apache.nifi",
 "type": "record"
}
{code}
Result:
{code:java}
$ avro-tools tojson ./est-invalid.avro
{"SomeLocalDate":{"int":17266}}
{code}
In this case an incorrect day count is stored in the SomeLocalDate field (see 
[1]): the number of days between 1970-01-01 and 2017-04-11 is 17267 ([2]), but 
17266 is stored.

After investigating the issue, it seems that even though {{20170411}} is parsed 
to {{2017-04-11T00:00:00 UTC}}, it is later converted to the system timezone 
({{2017-04-10T19:00:00 EST}}) and the date part of that value is used.
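
The off-by-one can be reproduced outside NiFi in a few lines of Python (an
illustration of the mechanism, not NiFi's actual conversion code):

```python
from datetime import date, datetime, timedelta, timezone

EPOCH = date(1970, 1, 1)

def avro_days(d):
    """Days since the Unix epoch -- what Avro's 'date' logical type stores."""
    return (d - EPOCH).days

# Correct: treat "20170411" as a plain local date, with no timezone math.
correct = avro_days(date(2017, 4, 11))

# Buggy path described above: the parsed instant (midnight UTC) is shifted
# into the system zone (EST, UTC-5) before its date part is taken.
est = timezone(timedelta(hours=-5))
shifted = datetime(2017, 4, 11, tzinfo=timezone.utc).astimezone(est)
buggy = avro_days(shifted.date())

print(correct, buggy)  # 17267 17266
```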

[1] [https://avro.apache.org/docs/1.8.0/spec.html#Date]
 [2] 
[https://www.timeanddate.com/date/durationresult.html?d1=1=1=1970=11=04=2017]
  





[jira] [Closed] (NIFI-7775) Exclude TesseractOCR Parser from ExtractMediaMetadata Processor

2020-11-11 Thread faerballert (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

faerballert closed NIFI-7775.
-

> Exclude TesseractOCR Parser from ExtractMediaMetadata Processor
> ---
>
> Key: NIFI-7775
> URL: https://issues.apache.org/jira/browse/NIFI-7775
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: faerballert
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Apache Tika is used to extract media metadata.
> Tika also enables the TesseractOCRParser as part of its DefaultParser; it is
> not needed for this use case, and excluding it brings noticeable runtime
> improvements. The TesseractOCRParser can be excluded via a TikaConfig file.





[jira] [Updated] (NIFI-7775) Exclude TesseractOCR Parser from ExtractMediaMetadata Processor

2020-11-11 Thread faerballert (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

faerballert updated NIFI-7775:
--
Status: Patch Available  (was: Reopened)

> Exclude TesseractOCR Parser from ExtractMediaMetadata Processor
> ---
>
> Key: NIFI-7775
> URL: https://issues.apache.org/jira/browse/NIFI-7775
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: faerballert
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Apache Tika is used to extract media metadata.
> Tika also enables the TesseractOCRParser as part of its DefaultParser; it is
> not needed for this use case, and excluding it brings noticeable runtime
> improvements. The TesseractOCRParser can be excluded via a TikaConfig file.





[jira] [Updated] (NIFI-7775) Exclude TesseractOCR Parser from ExtractMediaMetadata Processor

2020-11-11 Thread faerballert (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

faerballert updated NIFI-7775:
--
Status: Reopened  (was: Closed)

> Exclude TesseractOCR Parser from ExtractMediaMetadata Processor
> ---
>
> Key: NIFI-7775
> URL: https://issues.apache.org/jira/browse/NIFI-7775
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: faerballert
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Apache Tika is used to extract media metadata.
> Tika also enables the TesseractOCRParser as part of its DefaultParser; it is
> not needed for this use case, and excluding it brings noticeable runtime
> improvements. The TesseractOCRParser can be excluded via a TikaConfig file.





[jira] [Created] (NIFI-7997) ListFTP does not use HTTP proxy

2020-11-11 Thread Richard Zuidhof (Jira)
Richard Zuidhof created NIFI-7997:
-

 Summary: ListFTP does not use HTTP proxy
 Key: NIFI-7997
 URL: https://issues.apache.org/jira/browse/NIFI-7997
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.11.4
Reporter: Richard Zuidhof


After configuring a proxy server in the ListFTP processor, NiFi still tries to 
connect directly to the FTP server instead of to the proxy server for the 
initial connection. Subsequent commands like DIR or GET are tunneled through the 
HTTP proxy, but without a direct connection to the FTP server this will never 
succeed. There is no issue when using a SOCKS proxy.

You can reproduce this very easily, even without a real proxy. Just enter a 
non-existent FTP host, set the proxy type to HTTP, and use a fake proxy host and 
port. When using a non-existent FTP hostname you will get the error below, which 
shows that NiFi tries to connect directly instead of via the proxy:

2020-11-11 12:36:01,002 ERROR [Timer-Driven Process Thread-8] 
o.a.nifi.processors.standard.ListFTP 
ListFTP[id=99240cfd-0175-1000--e04e112b] Failed to perform listing on 
remote host due to nowhere.to.go: java.net.UnknownHostException: nowhere.to.go

When using an unreachable IP (like 10.123.1.1) you will see this:

2020-11-11 12:43:10,862 ERROR [Timer-Driven Process Thread-10] 
o.a.nifi.processors.standard.ListFTP 
ListFTP[id=99240cfd-0175-1000--e04e112b] Failed to perform listing on 
remote host due to Connection timed out (Connection timed out): 
java.net.ConnectException: Connection timed out (Connection timed out) 
java.net.ConnectException: Connection timed out (Connection timed out) at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) 
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:607) at 
org.apache.commons.net.SocketClient._connect(SocketClient.java:243) at 
org.apache.commons.net.SocketClient.connect(SocketClient.java:181) at 
org.apache.nifi.processors.standard.util.FTPTransfer.getClient(FTPTransfer.java:600)
 at 
org.apache.nifi.processors.standard.util.FTPTransfer.getListing(FTPTransfer.java:233)
 at 
org.apache.nifi.processors.standard.util.FTPTransfer.getListing(FTPTransfer.java:196)
 at 
org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:106)
 at 
org.apache.nifi.processor.util.list.AbstractListProcessor.listByTrackingTimestamps(AbstractListProcessor.java:472)
 at 
org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:414)
 at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
 at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
 at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
 at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)





[GitHub] [nifi-minifi-cpp] szaszm closed pull request #935: MINIFICPP-1404 Add option to disable unity build of AWS library

2020-11-11 Thread GitBox


szaszm closed pull request #935:
URL: https://github.com/apache/nifi-minifi-cpp/pull/935


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7995) Requesting create Parameter Context via API with Parameters null generates an NPE

2020-11-11 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17230031#comment-17230031
 ] 

Otto Fowler commented on NIFI-7995:
---

Do you have a Python script to reproduce this?
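
For reference, a minimal script along those lines might look like this (a
hypothetical sketch: it assumes an unsecured NiFi at localhost:8080, and the
key detail is that Python's `None` serializes to JSON `null` for the parameter
listing):

```python
import json
import urllib.request

def build_body():
    # "parameters": None becomes JSON null -- the input that trips
    # validateParameterNames() with an NPE on affected versions.
    return {
        "revision": {"version": 0},
        "component": {"name": "npe-repro", "parameters": None},
    }

def create_parameter_context(base_url="http://localhost:8080/nifi-api"):
    req = urllib.request.Request(
        base_url + "/parameter-contexts",
        data=json.dumps(build_body()).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Affected versions answer HTTP 500 (Internal Server Error) here.
    return urllib.request.urlopen(req)
```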

> Requesting create Parameter Context via API with Parameters null generates an 
> NPE
> -
>
> Key: NIFI-7995
> URL: https://issues.apache.org/jira/browse/NIFI-7995
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.12.1
>Reporter: Daniel Chaffelson
>Priority: Minor
>
> org.apache.nifi.web.api.ParameterContextResource.validateParameterNames(ParameterContextResource.java:403)
> gets very upset if you submit a JSON with null for the Parameter listing 
> instead of an empty or populated list. It is a minor thing, but probably 
> worth tidying up in the validator.
> Discovered because NiPyAPI defaults to Python 'None' for unpopulated 
> properties.
>  
> Full logged error is:
> 2020-11-09 13:20:28,612 ERROR [NiFi Web Server-22] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.web.api.ParameterContextResource.validateParameterNames(ParameterContextResource.java:403)
>   at 
> org.apache.nifi.web.api.ParameterContextResource.createParameterContext(ParameterContextResource.java:199)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
>   at 
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
>   at 
> org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
>   at 
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
>   at 
> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
>   at 
> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
>   at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
>   at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>   at 
> org.eclipse.jetty.servlet.ServletHolder$NotAsyncServlet.service(ServletHolder.java:1395)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:755)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
>   at 
> org.apache.nifi.web.filter.RequestLogger.doFilter(RequestLogger.java:66)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
>   at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)
>   at 
> 

[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #920: MINIFICPP-1296 - All tests should use volatile state storage

2020-11-11 Thread GitBox


szaszm commented on a change in pull request #920:
URL: https://github.com/apache/nifi-minifi-cpp/pull/920#discussion_r521406606



##
File path: 
libminifi/test/keyvalue-tests/UnorderedMapKeyValueStoreServiceTest.cpp
##
@@ -0,0 +1,173 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_RUNNER
+#include 
+#include "../TestBase.h"
+#include "core/controller/ControllerService.h"
+#include "core/ProcessGroup.h"
+#include "core/yaml/YamlConfiguration.h"
+
+#include "catch.hpp"
+
+namespace {
+  std::string config_yaml; // NOLINT
+
+  void configYamlHandler(Catch::ConfigData&, const std::string& path) {
+config_yaml = path;
+  }
+}
+
+int main(int argc, char* argv[]) {
+  Catch::Session session;
+
+  auto& cli = 
const_cast<Clara::CommandLine<Catch::ConfigData>&>(session.cli());
+  cli["--config-yaml"]
+  .describe("path to the config.yaml containing the 
UnorderedMapKeyValueStoreServiceTest controller service configuration")
+  .bind(&configYamlHandler, "path");
+
+  int ret = session.applyCommandLine(argc, argv);
+  if (ret != 0) {
+return ret;
+  }
+
+  if (config_yaml.empty()) {
+std::cerr << "Missing --config-yaml <path>. It must contain the path to 
the config.yaml containing the UnorderedMapKeyValueStoreServiceTest controller 
service configuration." << std::endl;
+return -1;
+  }
+
+  return session.run();
+}
+
+class UnorderedMapKeyValueStoreServiceTestFixture {

Review comment:
   Minor, optional:
   Since `UnorderedMapKeyValueStoreService` just became a 
`PersistableKeyValueStoreService`, I would add a test that you can actually 
treat it as persistable. A simple
   
   ```
   static_assert(std::is_convertible<UnorderedMapKeyValueStoreService*, 
PersistableKeyValueStoreService*>::value, "UnorderedMapKeyValueStoreService is 
a PersistableKeyValueStoreService");
   ```
   
   should do the trick. Or check that 
`unorderedMapKeyValueStoreService->persist()` is a valid boolean expression.

##
File path: libminifi/test/TestBase.h
##
@@ -282,18 +274,10 @@ class TestPlan {
 return prov_repo_;
   }
 
-  std::shared_ptr<core::ContentRepository> getContentRepo() {
-return content_repo_;
-  }
-
   std::shared_ptr<logging::Logger> getLogger() const {
 return logger_;
   }
 
-  std::string getStateDir() {
-return state_dir_;
-  }
-

Review comment:
   ```
   /home/szaszm/nifi-minifi-cpp-2/extensions/sftp/tests/ListSFTPTests.cpp: In 
member function ‘void 
ListSFTPTestsFixture::createPlan(org::apache::nifi::minifi::utils::Identifier*)’:
   
/home/szaszm/nifi-minifi-cpp-2/extensions/sftp/tests/ListSFTPTests.cpp:92:64: 
error: ‘using element_type = class TestPlan’ {aka ‘class TestPlan’} has no 
member named ‘getStateDir’
   92 | const std::string state_dir = plan == nullptr ? "" : 
plan->getStateDir();
   |^~~
   make[2]: *** 
[extensions/sftp/tests/CMakeFiles/ListSFTPTests.dir/build.make:82: 
extensions/sftp/tests/CMakeFiles/ListSFTPTests.dir/ListSFTPTests.cpp.o] Error 1
   make[1]: *** [CMakeFiles/Makefile2:8318: 
extensions/sftp/tests/CMakeFiles/ListSFTPTests.dir/all] Error 2
   make[1]: *** Waiting for unfinished jobs
   ```









[GitHub] [nifi] tpalfy opened a new pull request #4655: NIFI-7972 TailFile NFS improvement

2020-11-11 Thread GitBox


tpalfy opened a new pull request #4655:
URL: https://github.com/apache/nifi/pull/4655


   Add a boolean property by which the user can tell the processor to yield 
(and try again later) whenever it encounters a NUL character.
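   
   The proposed behavior can be sketched as follows (a hypothetical Python
   illustration of the yield-on-NUL guard, not the actual Java implementation):
   when a read chunk contains a NUL -- which on NFS typically means a page that
   has not been flushed yet -- keep only the bytes before it, rewind, and yield
   so the read is retried later.
   
   ```python
   def read_until_nul(f, chunk_size=4096):
       """Return (clean_bytes, should_yield). On seeing a NUL, rewind the
       file position to the NUL so the next trigger rereads from there."""
       data = f.read(chunk_size)
       nul = data.find(b"\x00")
       if nul != -1:
           f.seek(-(len(data) - nul), 1)  # step back to the NUL byte
           return data[:nul], True
       return data, False
   ```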
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi-cpp] szaszm closed pull request #934: MINIFICPP-1330-add conversion from microseconds

2020-11-11 Thread GitBox


szaszm closed pull request #934:
URL: https://github.com/apache/nifi-minifi-cpp/pull/934


   







[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #931: MINIFICPP-1390 Create DeleteS3Object processor

2020-11-11 Thread GitBox


arpadboda commented on a change in pull request #931:
URL: https://github.com/apache/nifi-minifi-cpp/pull/931#discussion_r521445242



##
File path: extensions/aws/processors/DeleteS3Object.cpp
##
@@ -0,0 +1,92 @@
+/**
+ * @file DeleteS3Object.cpp
+ * DeleteS3Object class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DeleteS3Object.h"
+
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace aws {
+namespace processors {
+
+const core::Property DeleteS3Object::Version(
+  core::PropertyBuilder::createProperty("Version")
+->withDescription("The Version of the Object to delete")
+->supportsExpressionLanguage(true)
+->build());
+
+const core::Relationship DeleteS3Object::Success("success", "FlowFiles are 
routed to success relationship");
+const core::Relationship DeleteS3Object::Failure("failure", "FlowFiles are 
routed to failure relationship");
+
+void DeleteS3Object::initialize() {
+  // Set the supported properties
+  std::set<core::Property> properties(S3Processor::getSupportedProperties());
+  properties.insert(Version);
+  setSupportedProperties(properties);
+  // Set the supported relationships
+  std::set<core::Relationship> relationships;
+  relationships.insert(Failure);
+  relationships.insert(Success);
+  setSupportedRelationships(relationships);

Review comment:
   Just a fancy alternative: you can do it with an initializer list:
   ```
   setSupportedRelationships({Failure, Success});
   ```
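
   As a self-contained illustration of the suggestion above (`Relationship` here is a hypothetical stand-in for `core::Relationship`, not the real MiNiFi type), constructing the set from an initializer list looks like:

   ```cpp
   #include <iostream>
   #include <set>
   #include <string>

   // Hypothetical stand-in for core::Relationship, just to make the
   // snippet compile on its own; only the ordering matters for std::set.
   struct Relationship {
     std::string name;
     bool operator<(const Relationship& other) const { return name < other.name; }
   };

   int main() {
     const Relationship Success{"success"};
     const Relationship Failure{"failure"};

     // Equivalent of setSupportedRelationships({Failure, Success}):
     // the brace-enclosed initializer list replaces the repeated insert() calls.
     std::set<Relationship> relationships{Failure, Success};

     std::cout << relationships.size() << "\n";  // prints 2
     return 0;
   }
   ```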

##
File path: extensions/aws/processors/DeleteS3Object.cpp
##
@@ -0,0 +1,92 @@
+/**
+ * @file DeleteS3Object.cpp
+ * DeleteS3Object class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DeleteS3Object.h"
+
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace aws {
+namespace processors {
+
+const core::Property DeleteS3Object::Version(
+  core::PropertyBuilder::createProperty("Version")
+->withDescription("The Version of the Object to delete")
+->supportsExpressionLanguage(true)
+->build());
+
+const core::Relationship DeleteS3Object::Success("success", "FlowFiles are 
routed to success relationship");
+const core::Relationship DeleteS3Object::Failure("failure", "FlowFiles are 
routed to failure relationship");
+
+void DeleteS3Object::initialize() {
+  // Set the supported properties
+  std::set<core::Property> properties(S3Processor::getSupportedProperties());
+  properties.insert(Version);
+  setSupportedProperties(properties);
+  // Set the supported relationships
+  std::set<core::Relationship> relationships;
+  relationships.insert(Failure);
+  relationships.insert(Success);
+  setSupportedRelationships(relationships);
+}
+
+bool DeleteS3Object::getExpressionLanguageSupportedProperties(
+    const std::shared_ptr<core::ProcessContext> &context,
+    const std::shared_ptr<core::FlowFile> &flow_file) {
+  if (!S3Processor::getExpressionLanguageSupportedProperties(context, 
flow_file)) {
+return false;
+  }
+
+  context->getProperty(Version, version_, flow_file);
+  logger_->log_debug("DeleteS3Object: Version [%s]", version_);
+  return true;
+}
+
+void DeleteS3Object::onTrigger(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSession> &session) {
+  logger_->log_debug("DeleteS3Object onTrigger");
+  std::shared_ptr<core::FlowFile> flow_file = session->get();
+  if (!flow_file) {
+return;

Review comment:
   yield can be applied in this case as well 
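
   A minimal sketch of the suggested change (the classes below are simplified, hypothetical stand-ins for `core::ProcessContext`, `core::FlowFile`, and `core::ProcessSession`): when `session->get()` returns no flow file, calling `yield()` before returning lets the scheduler back off instead of re-triggering immediately.

   ```cpp
   #include <iostream>
   #include <memory>

   // Simplified stand-ins (hypothetical) for the MiNiFi framework classes.
   struct ProcessContext {
     void yield() { std::cout << "yielded\n"; }
   };
   struct FlowFile {};
   struct ProcessSession {
     std::shared_ptr<FlowFile> get() { return nullptr; }  // no work available
   };

   void onTrigger(const std::shared_ptr<ProcessContext>& context,
                  const std::shared_ptr<ProcessSession>& session) {
     std::shared_ptr<FlowFile> flow_file = session->get();
     if (!flow_file) {
       context->yield();  // back off instead of spinning on an empty queue
       return;
     }
     // ... normal processing would go here ...
   }

   int main() {
     onTrigger(std::make_shared<ProcessContext>(),
               std::make_shared<ProcessSession>());
     return 0;
   }
   ```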

##
File path: extensions/aws/processors/DeleteS3Object.cpp
##
@@ 

[GitHub] [nifi] ottobackwards commented on pull request #4654: NIFI-7994: Fixed ReplaceText concurrency issue

2020-11-11 Thread GitBox


ottobackwards commented on pull request #4654:
URL: https://github.com/apache/nifi/pull/4654#issuecomment-725468947


   This looks like a nice improvement and a great fix.  +1 from me







[jira] [Resolved] (MINIFICPP-1404) Add option to disable unity build of AWS lib

2020-11-11 Thread Gabor Gyimesi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Gyimesi resolved MINIFICPP-1404.
--
Resolution: Fixed

> Add option to disable unity build of AWS lib
> 
>
> Key: MINIFICPP-1404
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1404
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Gabor Gyimesi
>Assignee: Gabor Gyimesi
>Priority: Trivial
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The AWS library is built with the unity build option ON by default to make the 
> library smaller. The library is built from a single generated cpp file, 
> which is regenerated and thus recompiled on every build. Because of 
> this, if we build iteratively (for example while developing on a local 
> machine) we have to rebuild the library every time even if no change 
> has occurred. We should add an option to disable the unity build in this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #931: MINIFICPP-1390 Create DeleteS3Object processor

2020-11-11 Thread GitBox


lordgamez commented on a change in pull request #931:
URL: https://github.com/apache/nifi-minifi-cpp/pull/931#discussion_r521469706



##
File path: extensions/aws/s3/S3Wrapper.cpp
##
@@ -38,8 +38,24 @@ minifi::utils::optional 
S3Wrapper::sendPutObjec
   logger_->log_info("Added S3 object %s to bucket %s", request.GetKey(), 
request.GetBucket());
   return outcome.GetResultWithOwnership();
   } else {
-  logger_->log_error("PutS3Object failed with the following: '%s'", 
outcome.GetError().GetMessage());
-  return minifi::utils::nullopt;
+logger_->log_error("PutS3Object failed with the following: '%s'", 
outcome.GetError().GetMessage());
+return minifi::utils::nullopt;
+  }
+}
+
+bool S3Wrapper::sendDeleteObjectRequest(const 
Aws::S3::Model::DeleteObjectRequest& request) {
+  Aws::S3::S3Client s3_client(credentials_, client_config_);
+  Aws::S3::Model::DeleteObjectOutcome outcome = 
s3_client.DeleteObject(request);
+
+  if (outcome.IsSuccess()) {
+logger_->log_info("Deleted S3 object %s from bucket %s", request.GetKey(), 
request.GetBucket());
+return true;
+  } else if (outcome.GetError().GetErrorType() == 
Aws::S3::S3Errors::NO_SUCH_KEY) {
+logger_->log_info("S3 object %s was not found in bucket %s", 
request.GetKey(), request.GetBucket());

Review comment:
   In NiFi the processor's 
[description](https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-aws-nar/1.5.0/org.apache.nifi.processors.aws.s3.DeleteS3Object/index.html)
 says "If attempting to delete a file that does not exist, FlowFile is routed 
to success". It's a bit strange to me as well, but I wanted to be consistent 
with the NiFi implementation. We could change it in our case if you prefer.
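
   To make the discussed routing concrete, here is a hedged, self-contained sketch (`DeleteOutcome` and `routeFor` are illustrative names, not the AWS SDK or MiNiFi API): a successful delete and a NO_SUCH_KEY error both route to success, while any other error routes to failure.

   ```cpp
   #include <iostream>
   #include <string>

   // Illustrative outcome type; the real code inspects
   // Aws::S3::S3Errors from the DeleteObject outcome instead.
   enum class DeleteOutcome { Deleted, NoSuchKey, OtherError };

   std::string routeFor(DeleteOutcome outcome) {
     switch (outcome) {
       case DeleteOutcome::Deleted:
       case DeleteOutcome::NoSuchKey:  // deleting a missing object still succeeds
         return "success";
       default:
         return "failure";
     }
   }

   int main() {
     std::cout << routeFor(DeleteOutcome::Deleted) << " "
               << routeFor(DeleteOutcome::NoSuchKey) << " "
               << routeFor(DeleteOutcome::OtherError) << "\n";
     return 0;
   }
   ```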









[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #931: MINIFICPP-1390 Create DeleteS3Object processor

2020-11-11 Thread GitBox


lordgamez commented on a change in pull request #931:
URL: https://github.com/apache/nifi-minifi-cpp/pull/931#discussion_r521488405



##
File path: extensions/aws/processors/DeleteS3Object.cpp
##
@@ -0,0 +1,92 @@
+/**
+ * @file DeleteS3Object.cpp
+ * DeleteS3Object class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DeleteS3Object.h"
+
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace aws {
+namespace processors {
+
+const core::Property DeleteS3Object::Version(
+  core::PropertyBuilder::createProperty("Version")
+->withDescription("The Version of the Object to delete")
+->supportsExpressionLanguage(true)
+->build());
+
+const core::Relationship DeleteS3Object::Success("success", "FlowFiles are 
routed to success relationship");
+const core::Relationship DeleteS3Object::Failure("failure", "FlowFiles are 
routed to failure relationship");
+
+void DeleteS3Object::initialize() {
+  // Set the supported properties
+  std::set<core::Property> properties(S3Processor::getSupportedProperties());
+  properties.insert(Version);
+  setSupportedProperties(properties);
+  // Set the supported relationships
+  std::set<core::Relationship> relationships;
+  relationships.insert(Failure);
+  relationships.insert(Success);
+  setSupportedRelationships(relationships);

Review comment:
   Done in 
[a2ba9ac7](https://github.com/apache/nifi-minifi-cpp/pull/931/commits/a2ba9ac73f38d032a678d546ed343afd63092150)
 for PutS3Object as well

##
File path: extensions/aws/processors/DeleteS3Object.cpp
##
@@ -0,0 +1,92 @@
+/**
+ * @file DeleteS3Object.cpp
+ * DeleteS3Object class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DeleteS3Object.h"
+
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace aws {
+namespace processors {
+
+const core::Property DeleteS3Object::Version(
+  core::PropertyBuilder::createProperty("Version")
+->withDescription("The Version of the Object to delete")
+->supportsExpressionLanguage(true)
+->build());
+
+const core::Relationship DeleteS3Object::Success("success", "FlowFiles are 
routed to success relationship");
+const core::Relationship DeleteS3Object::Failure("failure", "FlowFiles are 
routed to failure relationship");
+
+void DeleteS3Object::initialize() {
+  // Set the supported properties
+  std::set<core::Property> properties(S3Processor::getSupportedProperties());
+  properties.insert(Version);
+  setSupportedProperties(properties);
+  // Set the supported relationships
+  std::set<core::Relationship> relationships;
+  relationships.insert(Failure);
+  relationships.insert(Success);
+  setSupportedRelationships(relationships);
+}
+
+bool DeleteS3Object::getExpressionLanguageSupportedProperties(
+    const std::shared_ptr<core::ProcessContext> &context,
+    const std::shared_ptr<core::FlowFile> &flow_file) {
+  if (!S3Processor::getExpressionLanguageSupportedProperties(context, 
flow_file)) {
+return false;
+  }
+
+  context->getProperty(Version, version_, flow_file);
+  logger_->log_debug("DeleteS3Object: Version [%s]", version_);
+  return true;
+}
+
+void DeleteS3Object::onTrigger(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSession> &session) {
+  logger_->log_debug("DeleteS3Object onTrigger");
+  std::shared_ptr<core::FlowFile> flow_file = session->get();
+  if (!flow_file) {
+return;

Review comment:
   Done in 

[jira] [Commented] (NIFI-7995) Requesting create Parameter Context via API with Parameters null generates an NPE

2020-11-11 Thread Daniel Chaffelson (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17230062#comment-17230062
 ] 

Daniel Chaffelson commented on NIFI-7995:
-

Use the Next branch of NiPyAPI:

[https://github.com/Chaffelson/nipyapi/blob/27002493d6f4fbe85d2c0f62d4d69f2825883ac9/nipyapi/parameters.py#L91]
Modify this line to be 'else None' to submit the JSON null instead of an empty 
list, and it should produce the NPE.

> Requesting create Parameter Context via API with Parameters null generates an 
> NPE
> -
>
> Key: NIFI-7995
> URL: https://issues.apache.org/jira/browse/NIFI-7995
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.12.1
>Reporter: Daniel Chaffelson
>Assignee: Otto Fowler
>Priority: Minor
>
> org.apache.nifi.web.api.ParameterContextResource.validateParameterNames(ParameterContextResource.java:403)
> gets very upset if you submit a JSON with null for the Parameter listing 
> instead of an empty or populated list. It is a minor thing, but probably 
> worth tidying up in the validator.
> Discovered because NiPyAPI defaults to Python 'None' for unpopulated 
> properties.
>  
> Full logged error is:
> 2020-11-09 13:20:28,612 ERROR [NiFi Web Server-22] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.web.api.ParameterContextResource.validateParameterNames(ParameterContextResource.java:403)
>   at 
> org.apache.nifi.web.api.ParameterContextResource.createParameterContext(ParameterContextResource.java:199)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
>   at 
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
>   at 
> org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
>   at 
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
>   at 
> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
>   at 
> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
>   at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
>   at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>   at 
> org.eclipse.jetty.servlet.ServletHolder$NotAsyncServlet.service(ServletHolder.java:1395)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:755)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
>   at 
> org.apache.nifi.web.filter.RequestLogger.doFilter(RequestLogger.java:66)
>   at 
> 

[jira] [Assigned] (NIFI-7995) Requesting create Parameter Context via API with Parameters null generates an NPE

2020-11-11 Thread Otto Fowler (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otto Fowler reassigned NIFI-7995:
-

Assignee: Otto Fowler

> Requesting create Parameter Context via API with Parameters null generates an 
> NPE
> -
>
> Key: NIFI-7995
> URL: https://issues.apache.org/jira/browse/NIFI-7995
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.12.1
>Reporter: Daniel Chaffelson
>Assignee: Otto Fowler
>Priority: Minor
>
> org.apache.nifi.web.api.ParameterContextResource.validateParameterNames(ParameterContextResource.java:403)
> gets very upset if you submit a JSON with null for the Parameter listing 
> instead of an empty or populated list. It is a minor thing, but probably 
> worth tidying up in the validator.
> Discovered because NiPyAPI defaults to Python 'None' for unpopulated 
> properties.
>  
> Full logged error is:
> 2020-11-09 13:20:28,612 ERROR [NiFi Web Server-22] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.web.api.ParameterContextResource.validateParameterNames(ParameterContextResource.java:403)
>   at 
> org.apache.nifi.web.api.ParameterContextResource.createParameterContext(ParameterContextResource.java:199)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
>   at 
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
>   at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
>   at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
>   at 
> org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
>   at 
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
>   at 
> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
>   at 
> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
>   at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)
>   at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>   at 
> org.eclipse.jetty.servlet.ServletHolder$NotAsyncServlet.service(ServletHolder.java:1395)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:755)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
>   at 
> org.apache.nifi.web.filter.RequestLogger.doFilter(RequestLogger.java:66)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
>   at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)
>   at 
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
> 

[jira] [Commented] (NIFI-5548) Memory leak in graph user interface

2020-11-11 Thread Andrew Buddenberg (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17230218#comment-17230218
 ] 

Andrew Buddenberg commented on NIFI-5548:
-

+1

> Memory leak in graph user interface
> ---
>
> Key: NIFI-5548
> URL: https://issues.apache.org/jira/browse/NIFI-5548
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.6.0, 1.7.0
> Environment: Chrome, Firefox
>Reporter: Alex A.
>Priority: Minor
>
> When the graph interface is left up on a browser tab for an extended period 
> of time (several hours) the client RAM usage continues to grow to the point 
> where the page no longer responds. This behavior has been observed in both 
> versions of Firefox and Chrome on Windows 7 clients and throws an 
> unresponsive script error attributable to the d3.min.js library.





[jira] [Updated] (NIFI-7988) Prometheus Remote Write Processor

2020-11-11 Thread Javi Roman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Javi Roman updated NIFI-7988:
-
Status: Patch Available  (was: In Progress)

> Prometheus Remote Write Processor 
> --
>
> Key: NIFI-7988
> URL: https://issues.apache.org/jira/browse/NIFI-7988
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Javi Roman
>Assignee: Javi Roman
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A new processor that allows NiFi to store Prometheus metrics as a Remote 
> Write Adapter.  
> Prometheus's local storage is limited by single nodes in its scalability and 
> durability. Instead of trying to solve clustered storage in Prometheus 
> itself, Prometheus has a set of interfaces that allow integrating with remote 
> storage systems.
> The Remote Write feature in Prometheus allows sending samples to a third 
> party storage system. There is a list of specialized remote endpoints here 
> [1].
> With this NiFi Prometheus Remote Write Processor you can store the metrics in 
> any storage supported by NiFi, even in several storages at the 
> same time, with the advantages of NiFi's routing capabilities.
> This processor has two user-defined working modes:
>  # One FlowFile per Prometheus sample.
>  # One FlowFile per N samples, where N is defined by the user. This mode allows 
> storing the samples in batches easily, without needing other NiFi processors 
> for aggregations.
> The user decides the operation mode.
> The content of the FlowFiles is in JSON format.
> [1] 
> [https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage]
>  





[jira] [Updated] (NIFI-7988) Prometheus Remote Write Processor

2020-11-11 Thread Javi Roman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Javi Roman updated NIFI-7988:
-
Status: In Progress  (was: Patch Available)

> Prometheus Remote Write Processor 
> --
>
> Key: NIFI-7988
> URL: https://issues.apache.org/jira/browse/NIFI-7988
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Javi Roman
>Assignee: Javi Roman
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A new processor that allows NiFi to store Prometheus metrics as a Remote 
> Write Adapter.  
> Prometheus's local storage is limited by single nodes in its scalability and 
> durability. Instead of trying to solve clustered storage in Prometheus 
> itself, Prometheus has a set of interfaces that allow integrating with remote 
> storage systems.
> The Remote Write feature in Prometheus allows sending samples to a third 
> party storage system. There is a list of specialized remote endpoints here 
> [1].
> With this NiFi Prometheus Remote Write Processor you can store the metrics in 
> any storage supported by NiFi, even in several storages at the 
> same time, with the advantages of NiFi's routing capabilities.
> This processor has two user-defined working modes:
>  # One FlowFile per Prometheus sample.
>  # One FlowFile per N samples, where N is defined by the user. This mode allows 
> storing the samples in batches easily, without needing other NiFi processors 
> for aggregations.
> The user decides the operation mode.
> The content of the FlowFiles is in JSON format.
> [1] 
> [https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage]
>  





[GitHub] [nifi] ottobackwards opened a new pull request #4656: NIFI-7995 add null check before validating ParameterContexts

2020-11-11 Thread GitBox


ottobackwards opened a new pull request #4656:
URL: https://github.com/apache/nifi/pull/4656


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Created] (NIFI-7998) Parquet file reader service throws Nullpointer Exception

2020-11-11 Thread Dennis Kliche (Jira)
Dennis Kliche created NIFI-7998:
---

 Summary: Parquet file reader service throws Nullpointer Exception
 Key: NIFI-7998
 URL: https://issues.apache.org/jira/browse/NIFI-7998
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.12.1
 Environment: Docker build on Kubernetes
Reporter: Dennis Kliche
 Attachments: Jira_Parquet_Error.json, 
image-2020-11-11-22-33-31-019.png, image-2020-11-11-22-41-13-099.png, 
image-2020-11-11-22-41-44-188.png

We have a pipeline that gets a Parquet file and writes it into a database via 
PutDatabaseRecord. The content of the flow file is a Parquet file. When the flow 
file is processed we get a NullPointerException. 

 

!image-2020-11-11-22-33-31-019.png!

After some tests I found out that the processor works as expected, but if the 
flowfile is a Parquet file and the reader is a Parquet reader it throws this 
error.

To confirm that it has to do with the Parquet reader, I used a very simple 
approach:
Generate a flow file with JSON, transform it to Parquet, and then back to JSON. 
This resulted in the same error message.

In the attached Jira_Parquet_Error.json you can see my simple construction.
Here you can see the settings of the Parquet writer:
!image-2020-11-11-22-41-13-099.png!




 

Here you can see the settings of the Parquet reader:

 

!image-2020-11-11-22-41-44-188.png!

 

It worked until the update from 1.11.4 to 1.12.1. Since 1.12.1 we have been 
getting this issue.


