[jira] [Commented] (NIFI-2835) GetAzureEventHub processor should leverage partition offset to better handle restarts

2017-11-07 Thread Koji Kawamura (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243257#comment-16243257
 ] 

Koji Kawamura commented on NIFI-2835:
-

Hi [~josephxsxn] [~Eulicny], are you guys still working on this? How is the 
implementation going? (NIFI-3681 is probably blocking this from making progress.)

I just wondered whether this JIRA could be done by using EventProcessor Host 
instead of the low-level PartitionReceiver API that GetAzureEventHub uses now. 
Reading these Azure Event Hub docs/blogs, I think the EventProcessor Host 
approach can make things simpler: NiFi would not have to implement leader 
election or partition/offset management itself.
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-java-get-started-receive-eph
https://blogs.biztalk360.com/understanding-consumer-side-of-azure-event-hubs-checkpoint-initialoffset-eventprocessorhost/

It would also look like ConsumeKafka: EventProcessor Host stores consumer 
group info in Azure Blob Storage rather than in NiFi managed state, just as 
Kafka stores it in a special topic (or previously in ZooKeeper). Since Event 
Hub and Kafka are similar in architecture, storing consumer information on the 
broker side might work better.

I'm going to test using EventProcessor Host from a NiFi processor. It will be 
hugely different from the current GetAzureEventHub implementation, so it should 
be a different processor, such as ConsumeAzureEventHub. I will share my 
findings later.
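
For reference, a minimal sketch of the EventProcessor Host consumption flow being 
considered (assuming the azure-eventhubs-eph Java client of that era; the host 
name, connection strings, container name, and checkpoint cadence below are 
placeholders, not the eventual ConsumeAzureEventHub implementation):

{code:java}
import com.microsoft.azure.eventhubs.EventData;
import com.microsoft.azure.eventprocessorhost.CloseReason;
import com.microsoft.azure.eventprocessorhost.EventProcessorHost;
import com.microsoft.azure.eventprocessorhost.IEventProcessor;
import com.microsoft.azure.eventprocessorhost.PartitionContext;

public class EphSketch {

    // Receives events for one partition; leases and checkpoints are managed by the host.
    public static class SimpleProcessor implements IEventProcessor {
        @Override
        public void onOpen(PartitionContext context) { }

        @Override
        public void onClose(PartitionContext context, CloseReason reason) { }

        @Override
        public void onEvents(PartitionContext context, Iterable<EventData> events) throws Exception {
            for (EventData event : events) {
                // In a processor this is where FlowFiles would be created from each event's body.
            }
            // Persist the offset to Azure Blob Storage so a restarted consumer resumes from here.
            context.checkpoint();
        }

        @Override
        public void onError(PartitionContext context, Throwable error) { }
    }

    public static void main(String[] args) throws Exception {
        // All values below are placeholders.
        EventProcessorHost host = new EventProcessorHost(
                "nifi-host-1",                    // unique host name for lease ownership
                "my-event-hub",                   // event hub path
                "$Default",                       // consumer group
                "<event-hub-connection-string>",
                "<storage-connection-string>",    // blob storage holding leases/checkpoints
                "eph-checkpoints");               // blob container name
        host.registerEventProcessor(SimpleProcessor.class).get();
        // ... consume until shutdown ...
        host.unregisterEventProcessor();
    }
}
{code}

The key difference versus PartitionReceiver is that lease management across hosts 
and offset checkpointing both live in the Azure Blob Storage container, so a 
restarted consumer resumes from the last checkpoint without NiFi tracking offsets 
itself.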

> GetAzureEventHub processor should leverage partition offset to better handle 
> restarts
> -
>
> Key: NIFI-2835
> URL: https://issues.apache.org/jira/browse/NIFI-2835
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Percivall
>Assignee: Eric Ulicny
>
> The GetAzureEventHub processor utilizes the Azure client that consists of 
> receivers for each partition. The processor stores them in a map[1] that gets 
> cleared every time the processor is stopped[2]. Each receiver has a 
> partition offset which keeps track of which message it is currently on and 
> which it should receive next. So currently, when the processor is 
> stopped/restarted, any tracking of which message is next to be received is 
> lost.
> If, instead of clearing the map each time, we held onto the receivers or kept 
> track of the partitionId/offsets when stopping, then (barring any relevant 
> configuration changes) the processor would restart exactly where it left off 
> with no loss of data.
> This would work very well with NIFI-2826.
> [1]https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/eventhub/GetAzureEventHub.java#L122
> [2] 
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/eventhub/GetAzureEventHub.java#L229
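
As a rough sketch of the "kept track of the partitionId/offsets when stopping" 
option, NiFi's managed state API could persist the offsets across restarts. The 
scope, key layout, and wiring below are illustrative assumptions, not code from 
GetAzureEventHub:

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.nifi.components.state.Scope;
import org.apache.nifi.components.state.StateManager;
import org.apache.nifi.components.state.StateMap;

public class PartitionOffsetState {

    // Save the latest offset seen for each partition, e.g. from an @OnStopped method.
    public static void storeOffsets(StateManager stateManager, Map<String, String> partitionToOffset)
            throws IOException {
        stateManager.setState(new HashMap<>(partitionToOffset), Scope.CLUSTER);
    }

    // Restore offsets on start so receivers can be created "from offset" instead of "from now".
    public static Map<String, String> loadOffsets(StateManager stateManager) throws IOException {
        StateMap stateMap = stateManager.getState(Scope.CLUSTER);
        return new HashMap<>(stateMap.toMap());
    }
}
{code}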



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (MINIFICPP-33) Create GetGPS processor for acquiring GPS coordinates

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242974#comment-16242974
 ] 

ASF GitHub Bot commented on MINIFICPP-33:
-

Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/52
  
@jdye64 I think this PR has been superseded. I've taken your PR in 172 and 
updated it. This can be closed. 


> Create GetGPS processor for acquiring GPS coordinates
> -
>
> Key: MINIFICPP-33
> URL: https://issues.apache.org/jira/browse/MINIFICPP-33
> Project: NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>
> GPSD is a popular framework for interacting with a multitude of GPS devices. 
> It drastically simplifies the interaction with vendor specific GPS devices by 
> providing a daemon service which communicates with the device, converts the 
> raw NMEA 0183 sentences into JSON objects, and then emits those JSON objects 
> over a socket for 0-N downstream devices to consume.
> This feature would create a GetGPS processor that would listen to a running 
> instance of GPSD as one of those downstream consumers. The processor would 
> provide integration with the GPSD daemon to accept the JSON objects and 
> create new flowfiles for each of the JSON objects received.
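
For reference, a short sketch of talking to gpsd over its default TCP endpoint. 
It is written in Java purely to illustrate the protocol (the MiNiFi processor 
itself is C++), and the host, port, and WATCH command reflect gpsd defaults 
rather than anything from this ticket:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class GpsdClientSketch {

    public static void main(String[] args) throws Exception {
        // gpsd listens on TCP 2947 by default; host/port here are assumptions.
        try (Socket socket = new Socket("localhost", 2947);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {

            // Ask gpsd to start streaming reports as JSON.
            out.println("?WATCH={\"enable\":true,\"json\":true}");

            String line;
            while ((line = in.readLine()) != null) {
                // Each line is one JSON object (TPV, SKY, ...); a processor would
                // turn each of these into a new flowfile.
                System.out.println(line);
            }
        }
    }
}
{code}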



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp issue #52: MINIFI-218: Support for GPSD integration

2017-11-07 Thread phrocker
Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/52
  
@jdye64 I think this PR has been superseded. I've taken your PR in 172 and 
updated it. This can be closed. 


---


[jira] [Commented] (NIFI-3472) Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase processors

2017-11-07 Thread Jeff Storck (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242754#comment-16242754
 ] 

Jeff Storck commented on NIFI-3472:
---

[~jomach] Rebooting is a workaround, but disrupts processing and is certainly 
not an ideal solution. The fix I'm investigating would allow the processor to 
acquire a new TGT without restarting NiFi.

> Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase 
> processors
> --
>
> Key: NIFI-3472
> URL: https://issues.apache.org/jira/browse/NIFI-3472
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 1.1.0, 1.1.1, 1.0.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> PutHDFS is not able to relogin if the ticket expires.
> NiFi, running locally as standalone, was sending files to HDFS.  After 
> suspending the system for the weekend, when the flow attempted to continue to 
> process flowfiles, the following exception occurred:
> {code}2017-02-13 11:59:53,460 WARN [Timer-Driven Process Thread-10] 
> org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
> server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2017-02-13 11:59:53,463 INFO [Timer-Driven Process Thread-10] 
> o.a.h.io.retry.RetryInvocationHandler Exception while invoking getFileInfo of 
> class ClientNamenodeProtocolTranslatorPB over [host:port] after 3 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]; Host Details : local host is: "[host:port]"; destination 
> host is: [host:port];
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1479) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1412) 
> ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source) ~[na:na]
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>  ~[hadoop-hdfs-2.7.3.jar:na]
>   at sun.reflect.GeneratedMethodAccessor386.invoke(Unknown Source) 
> ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>  ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy137.getFileInfo(Unknown Source) [na:na]
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) 
> [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:262) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.8.0_102]
>   at javax.security.auth.Subject.doAs(Subject.java:360) [na:1.8.0_102]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  [nifi-api-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)

[jira] [Commented] (NIFI-3472) Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase processors

2017-11-07 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242669#comment-16242669
 ] 

Jorge Machado commented on NIFI-3472:
-

Hey, so after a lot of debugging I found out that this could be a keytab cache 
problem. I just rebooted NiFi and then it works.

> Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase 
> processors
> --
>
> Key: NIFI-3472
> URL: https://issues.apache.org/jira/browse/NIFI-3472
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 1.1.0, 1.1.1, 1.0.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> PutHDFS is not able to relogin if the ticket expires.
> NiFi, running locally as standalone, was sending files to HDFS.  After 
> suspending the system for the weekend, when the flow attempted to continue to 
> process flowfiles, the following exception occurred:
> {code}2017-02-13 11:59:53,460 WARN [Timer-Driven Process Thread-10] 
> org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
> server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2017-02-13 11:59:53,463 INFO [Timer-Driven Process Thread-10] 
> o.a.h.io.retry.RetryInvocationHandler Exception while invoking getFileInfo of 
> class ClientNamenodeProtocolTranslatorPB over [host:port] after 3 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]; Host Details : local host is: "[host:port]"; destination 
> host is: [host:port];
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1479) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1412) 
> ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source) ~[na:na]
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>  ~[hadoop-hdfs-2.7.3.jar:na]
>   at sun.reflect.GeneratedMethodAccessor386.invoke(Unknown Source) 
> ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>  ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy137.getFileInfo(Unknown Source) [na:na]
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) 
> [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:262) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.8.0_102]
>   at javax.security.auth.Subject.doAs(Subject.java:360) [na:1.8.0_102]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  [nifi-api-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>   at 
> 

[jira] [Updated] (NIFI-4579) When Strict Type Checking property is set to "false", ValidateRecord does not coerce fields into the correct type.

2017-11-07 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim updated NIFI-4579:
-
Description: 
The description of the Strict Type Checking property for the ValidateRecord 
processor states:

_If false, the Record will be considered valid and the field will be coerced 
into the correct type (if possible, according to the type coercion supported by 
the Record Writer)._

In my testing I've confirmed that in this scenario, the records are considered 
valid.  But, none of the record fields are coerced into the correct type.

We should either correct the documentation or implement the promised coercion 
functionality.

  was:
The description of the Strict Type Checking property for the ValidateRecord 
processor states:

{{If false, the Record will be considered valid and the field will be coerced 
into the correct type (if possible, according to the type coercion supported by 
the Record Writer).
}}

In my testing I've confirmed that in this scenario, the records are considered 
valid.  But, none of the record fields are coerced into the correct type.

We should either correct the documentation or implement the promised coercion 
functionality.


> When Strict Type Checking property is set to "false", ValidateRecord does not 
> coerce fields into the correct type.
> --
>
> Key: NIFI-4579
> URL: https://issues.apache.org/jira/browse/NIFI-4579
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website, Extensions
>Affects Versions: 1.4.0
>Reporter: Andrew Lim
>
> The description of the Strict Type Checking property for the ValidateRecord 
> processor states:
> _If false, the Record will be considered valid and the field will be coerced 
> into the correct type (if possible, according to the type coercion supported 
> by the Record Writer)._
> In my testing I've confirmed that in this scenario, the records are 
> considered valid.  But, none of the record fields are coerced into the 
> correct type.
> We should either correct the documentation or implement the promised coercion 
> functionality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3472) Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase processors

2017-11-07 Thread Jeff Storck (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242653#comment-16242653
 ] 

Jeff Storck commented on NIFI-3472:
---

Investigating switching from UGI#checkTGTAndReloginFromKeytab to 
UGI#reloginFromKeytab to allow hadoop-client code to automatically acquire a new 
TGT in the case that the TGT has expired before a Hadoop processor has an 
opportunity to do a relogin within the ticket renew lifetime.
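
For context, a minimal sketch of the two hadoop-common calls being compared; the 
principal, keytab path, and surrounding file-system action are placeholders 
rather than the actual NiFi fix:

{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosReloginSketch {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                "nifi@EXAMPLE.COM", "/etc/security/keytabs/nifi.keytab");

        // checkTGTAndReloginFromKeytab relies on being called within the ticket renew
        // lifetime; if the TGT has already expired it may not help. reloginFromKeytab
        // performs a fresh login from the keytab regardless.
        ugi.checkTGTAndReloginFromKeytab();
        // ugi.reloginFromKeytab();

        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            FileSystem fs = FileSystem.get(conf);
            fs.getFileStatus(new Path("/tmp"));
            return null;
        });
    }
}
{code}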

> Kerberos relogin not working (tgt) after ticket expires for HDFS/Hive/HBase 
> processors
> --
>
> Key: NIFI-3472
> URL: https://issues.apache.org/jira/browse/NIFI-3472
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 1.1.0, 1.1.1, 1.0.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> PutHDFS is not able to relogin if the ticket expires.
> NiFi, running locally as standalone, was sending files to HDFS.  After 
> suspending the system for the weekend, when the flow attempted to continue to 
> process flowfiles, the following exception occurred:
> {code}2017-02-13 11:59:53,460 WARN [Timer-Driven Process Thread-10] 
> org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
> server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 2017-02-13 11:59:53,463 INFO [Timer-Driven Process Thread-10] 
> o.a.h.io.retry.RetryInvocationHandler Exception while invoking getFileInfo of 
> class ClientNamenodeProtocolTranslatorPB over [host:port] after 3 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]; Host Details : local host is: "[host:port]"; destination 
> host is: [host:port];
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1479) 
> ~[hadoop-common-2.7.3.jar:na]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1412) 
> ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source) ~[na:na]
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>  ~[hadoop-hdfs-2.7.3.jar:na]
>   at sun.reflect.GeneratedMethodAccessor386.invoke(Unknown Source) 
> ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_102]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>  ~[hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  ~[hadoop-common-2.7.3.jar:na]
>   at com.sun.proxy.$Proxy137.getFileInfo(Unknown Source) [na:na]
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) 
> [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
>  [hadoop-hdfs-2.7.3.jar:na]
>   at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:262) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.8.0_102]
>   at javax.security.auth.Subject.doAs(Subject.java:360) [na:1.8.0_102]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>  [hadoop-common-2.7.3.jar:na]
>   at 
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
> [nifi-hdfs-processors-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  [nifi-api-1.1.1.jar:1.1.1]
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  [nifi-framework-core-1.1.1.jar:1.1.1]
>   at 
> 

[jira] [Updated] (NIFI-4579) When Strict Type Checking property is set to "false", ValidateRecord does not coerce fields into the correct type.

2017-11-07 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim updated NIFI-4579:
-
Description: 
The description of the Strict Type Checking property for the ValidateRecord 
processor states:

{{If false, the Record will be considered valid and the field will be coerced 
into the correct type (if possible, according to the type coercion supported by 
the Record Writer).
}}

In my testing I've confirmed that in this scenario, the records are considered 
valid.  But, none of the record fields are coerced into the correct type.

We should either correct the documentation or implement the promised coercion 
functionality.

  was:
The description of the Strict Type Checking property for the ValidateRecord 
processor states:

If false, the Record will be considered valid and the field will be coerced 
into the correct type (if possible, according to the type coercion supported by 
the Record Writer).

In my testing I've confirmed that in this scenario, the records are considered 
valid.  But, none of the record fields are coerced into the correct type.

We should either correct the documentation or implement the promised coercion 
functionality.


> When Strict Type Checking property is set to "false", ValidateRecord does not 
> coerce fields into the correct type.
> --
>
> Key: NIFI-4579
> URL: https://issues.apache.org/jira/browse/NIFI-4579
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website, Extensions
>Affects Versions: 1.4.0
>Reporter: Andrew Lim
>
> The description of the Strict Type Checking property for the ValidateRecord 
> processor states:
> {{If false, the Record will be considered valid and the field will be coerced 
> into the correct type (if possible, according to the type coercion supported 
> by the Record Writer).
> }}
> In my testing I've confirmed that in this scenario, the records are 
> considered valid.  But, none of the record fields are coerced into the 
> correct type.
> We should either correct the documentation or implement the promised coercion 
> functionality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4578) PutHBaseRecord fails on null record field values

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242654#comment-16242654
 ] 

ASF GitHub Bot commented on NIFI-4578:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2256
  
@bbende @ijokarumawak I discovered this nasty edge case this morning while 
building a NiFi flow for one of our projects. 


> PutHBaseRecord fails on null record field values
> 
>
> Key: NIFI-4578
> URL: https://issues.apache.org/jira/browse/NIFI-4578
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>
> A NullPointerException is thrown when a nullable field has a null value. A 
> strategy for handling this needs to be added.
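
A hedged sketch of what such a strategy could look like when turning record 
fields into column values; the strategy names and byte conversion below are 
hypothetical and not taken from the PR:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class NullFieldStrategySketch {

    // Hypothetical strategies; the names used in the actual PR may differ.
    enum NullStrategy { SKIP_FIELD, EMPTY_BYTES }

    // Convert record field values to column bytes, guarding against nulls
    // instead of serializing a null value and throwing a NullPointerException.
    static Map<String, byte[]> toColumns(Map<String, Object> recordFields, NullStrategy strategy) {
        Map<String, byte[]> columns = new LinkedHashMap<>();
        for (Map.Entry<String, Object> field : recordFields.entrySet()) {
            Object value = field.getValue();
            if (value == null) {
                if (strategy == NullStrategy.SKIP_FIELD) {
                    continue;                                 // drop the column entirely
                }
                columns.put(field.getKey(), new byte[0]);     // write an empty value
            } else {
                columns.put(field.getKey(), value.toString().getBytes(StandardCharsets.UTF_8));
            }
        }
        return columns;
    }
}
{code}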



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2256: NIFI-4578 Added strategy for dealing with nullable fields ...

2017-11-07 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2256
  
@bbende @ijokarumawak I discovered this nasty edge case this morning while 
building a NiFi flow for one of our projects. 


---


[jira] [Commented] (NIFI-4578) PutHBaseRecord fails on null record field values

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242647#comment-16242647
 ] 

ASF GitHub Bot commented on NIFI-4578:
--

GitHub user MikeThomsen opened a pull request:

https://github.com/apache/nifi/pull/2256

NIFI-4578 Added strategy for dealing with nullable fields in PutHBase…

…Record.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MikeThomsen/nifi NIFI-4578

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2256.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2256


commit d71ca4cd0be94e286b8b063d679444663d061668
Author: Mike Thomsen 
Date:   2017-11-07T16:24:46Z

NIFI-4578 Added strategy for dealing with nullable fields in PutHBaseRecord.




> PutHBaseRecord fails on null record field values
> 
>
> Key: NIFI-4578
> URL: https://issues.apache.org/jira/browse/NIFI-4578
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>
> A NullPointerException is thrown when a nullable field has a null value. A 
> strategy for handling this needs to be added.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4579) When Strict Type Checking property is set to "false", ValidateRecord does not coerce fields into the correct type.

2017-11-07 Thread Andrew Lim (JIRA)
Andrew Lim created NIFI-4579:


 Summary: When Strict Type Checking property is set to "false", 
ValidateRecord does not coerce fields into the correct type.
 Key: NIFI-4579
 URL: https://issues.apache.org/jira/browse/NIFI-4579
 Project: Apache NiFi
  Issue Type: Bug
  Components: Documentation & Website, Extensions
Affects Versions: 1.4.0
Reporter: Andrew Lim


The description of the Strict Type Checking property for the ValidateRecord 
processor states:

If false, the Record will be considered valid and the field will be coerced 
into the correct type (if possible, according to the type coercion supported by 
the Record Writer).

In my testing I've confirmed that in this scenario, the records are considered 
valid.  But, none of the record fields are coerced into the correct type.

We should either correct the documentation or implement the promised coercion 
functionality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2256: NIFI-4578 Added strategy for dealing with nullable ...

2017-11-07 Thread MikeThomsen
GitHub user MikeThomsen opened a pull request:

https://github.com/apache/nifi/pull/2256

NIFI-4578 Added strategy for dealing with nullable fields in PutHBase…

…Record.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MikeThomsen/nifi NIFI-4578

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2256.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2256


commit d71ca4cd0be94e286b8b063d679444663d061668
Author: Mike Thomsen 
Date:   2017-11-07T16:24:46Z

NIFI-4578 Added strategy for dealing with nullable fields in PutHBaseRecord.




---


[GitHub] nifi-minifi pull request #98: MINIFI-314: Adds NoOpProvenanceRepository.

2017-11-07 Thread jzonthemtn
GitHub user jzonthemtn opened a pull request:

https://github.com/apache/nifi-minifi/pull/98

MINIFI-314: Adds NoOpProvenanceRepository.

Thank you for submitting a contribution to Apache NiFi - MiNiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi-minifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under minifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under minifi-assembly?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jzonthemtn/nifi-minifi MINIFI-314

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi/pull/98.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #98


commit fd74d4b5ff6d3c870157a707b31d3b6f681c55c2
Author: jzonthemtn 
Date:   2017-11-07T18:46:57Z

MINIFI-314: Adds NoOpProvenanceRepository.




---


[jira] [Commented] (NIFIREG-33) Add LDAP and JWT identity providers NiFi Registry security framework

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242610#comment-16242610
 ] 

ASF GitHub Bot commented on NIFIREG-33:
---

Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/29
  
Looks good! Going to merge


> Add LDAP and JWT identity providers NiFi Registry security framework
> 
>
> Key: NIFIREG-33
> URL: https://issues.apache.org/jira/browse/NIFIREG-33
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>
> The initial addition of a security model to the NiFi Registry framework only 
> included support for certificates as a means of establishing client identity 
> for authentication.
> In order to support more flexible methods of client authentication, this 
> ticket is to provide two new identity providers:
> * LDAPProvider - will verify username/password for authentication and allow 
> JWT token generation via the REST API
> * JWTIdentityProvider - will authenticate tokens that were generated by the 
> registry on subsequent requests.
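
As a generic illustration of the two roles described above (issue a signed token 
once credentials are verified, then authenticate that token on later requests), 
here is a sketch using the jjwt library. Whether the registry uses jjwt, HMAC 
signing, or these exact claims is an assumption, not something stated in this 
ticket:

{code:java}
import java.util.Date;

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public class TokenSketch {

    // Placeholder secret; a real deployment would use a properly generated key.
    private static final byte[] SIGNING_KEY = "change-me-to-a-real-secret".getBytes();

    // Roughly what an LDAP-style provider does after the username/password check succeeds:
    // issue a signed token carrying the authenticated identity.
    static String issueToken(String authenticatedUser) {
        return Jwts.builder()
                .setSubject(authenticatedUser)
                .setIssuedAt(new Date())
                .setExpiration(new Date(System.currentTimeMillis() + 12 * 60 * 60 * 1000L))
                .signWith(SignatureAlgorithm.HS256, SIGNING_KEY)
                .compact();
    }

    // Roughly what a JWT provider does on subsequent requests: verify the signature
    // and recover the identity from the claims.
    static String authenticate(String token) {
        Claims claims = Jwts.parser()
                .setSigningKey(SIGNING_KEY)
                .parseClaimsJws(token)
                .getBody();
        return claims.getSubject();
    }
}
{code}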



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-registry issue #29: NIFIREG-33 Add LDAP and JWT auth support

2017-11-07 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/29
  
Looks good! Going to merge


---


[jira] [Commented] (NIFIREG-46) Make it easy to discover the buckets accessible by current user

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-46?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242532#comment-16242532
 ] 

ASF GitHub Bot commented on NIFIREG-46:
---

Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/30
  
As PR #29 is wrapping up, I will wait until that is merged and then rebase 
this branch on the new master.


> Make it easy to discover the buckets accessible by current user
> ---
>
> Key: NIFIREG-46
> URL: https://issues.apache.org/jira/browse/NIFIREG-46
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>
> As the NiFi UI will want to provide a selection for the user in the context 
> of "which bucket would you like to use", it would be nice if there were a 
> convenient way to discover, in a single call to registry, which buckets the 
> user has access to, and what actions they can perform (read, write, etc).
> This ticket is to add a new endpoint, or extend a current endpoint, to 
> provide this information in an easily consumable format.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-registry issue #30: NIFIREG-46 Add authorizedActions field to Bucket

2017-11-07 Thread kevdoran
Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/30
  
As PR #29 is wrapping up, I will wait until that is merged and then rebase 
this branch on the new master.


---


[jira] [Commented] (NIFIREG-33) Add LDAP and JWT identity providers NiFi Registry security framework

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242515#comment-16242515
 ] 

ASF GitHub Bot commented on NIFIREG-33:
---

Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/29
  
Yep, that makes sense. I updated my branch to add that.

Please squash everything into a single commit on merge. Thanks again!


> Add LDAP and JWT identity providers NiFi Registry security framework
> 
>
> Key: NIFIREG-33
> URL: https://issues.apache.org/jira/browse/NIFIREG-33
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>
> The initial addition of a security model to the NiFi Registry framework only 
> included support for certificates as a means of establishing client identity 
> for authentication.
> In order to support more flexible methods of client authentication, this 
> ticket is to provide two new identity providers:
> * LDAPProvider - will verify username/password for authentication and allow 
> JWT token generation via the REST API
> * JWTIdentityProvider - will authenticate tokens that were generated by the 
> registry on subsequent requests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-registry issue #29: NIFIREG-33 Add LDAP and JWT auth support

2017-11-07 Thread kevdoran
Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/29
  
Yep, that makes sense. I updated my branch to add that.

Please squash everything into a single commit on merge. Thanks again!


---


[jira] [Created] (MINIFICPP-296) Remove Flowfile reference count based on RAII

2017-11-07 Thread marco polo (JIRA)
marco polo created MINIFICPP-296:


 Summary: Remove Flowfile reference count based on RAII
 Key: MINIFICPP-296
 URL: https://issues.apache.org/jira/browse/MINIFICPP-296
 Project: NiFi MiNiFi C++
  Issue Type: Sub-task
Reporter: marco polo
Assignee: marco polo


Remove Flowfile reference count based on RAII



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (MINIFICPP-295) Import functions in ProcessSession should be moved to vector so we don't need to delete them

2017-11-07 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo reassigned MINIFICPP-295:


Assignee: marco polo

> Import functions in ProcessSession should be moved to vector so we don't need 
> to delete them
> 
>
> Key: MINIFICPP-295
> URL: https://issues.apache.org/jira/browse/MINIFICPP-295
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Assignee: marco polo
>
> Should use the vector's destructor when the function exits to reclaim that 
> memory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFIREG-33) Add LDAP and JWT identity providers NiFi Registry security framework

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242402#comment-16242402
 ] 

ASF GitHub Bot commented on NIFIREG-33:
---

Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/29
  
This looks good, was able to get a JWT for a user from the test LDAP server 
and then use it to make another request.

Can we add the DisposableBean approach to LoginIdentityProviderFactory as 
well? Just to ensure we're calling the preDestruction method for each identity 
provider.

After that I think this should be good to merge in, thanks!


> Add LDAP and JWT identity providers NiFi Registry security framework
> 
>
> Key: NIFIREG-33
> URL: https://issues.apache.org/jira/browse/NIFIREG-33
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>
> The initial addition of a security model to the NiFi Registry framework only 
> included support for certificates as a means of establishing client identity 
> for authentication.
> In order to support more flexible methods of client authentication, this 
> ticket is to provide two new identity providers:
> * LDAPProvider - will verify username/password for authentication and allow 
> JWT token generation via the REST API
> * JWTIdentityProvider - will authenticate tokens that were generated by the 
> registry on subsequent requests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-registry issue #29: NIFIREG-33 Add LDAP and JWT auth support

2017-11-07 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/29
  
This looks good, was able to get a JWT for a user from the test LDAP server 
and then use it to make another request.

Can we add the DisposableBean approach to LoginIdentityProviderFactory as 
well? Just to ensure we're calling the preDestruction method for each identity 
provider.

After that I think this should be good to merge in, thanks!


---


[GitHub] nifi-registry issue #29: NIFIREG-33 Add LDAP and JWT auth support

2017-11-07 Thread kevdoran
Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/29
  
@bbende thanks a lot for the thorough review! I fixed the integration test 
configuration so that they uncovered the runtime issues you ran into, and then 
fixed those. I made changes based on your other recommendations as well - 
thanks a lot for finding them and providing a solution for the clean shutdown 
of the authorizers. I agreed with all of those and added a commit with all 
changes. I rebased the branch off of the updated master as well.

Whenever you get a chance, you can re-review and let me know if there is 
anything else to be addressed. Thanks!


---


[jira] [Commented] (NIFIREG-33) Add LDAP and JWT identity providers NiFi Registry security framework

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242358#comment-16242358
 ] 

ASF GitHub Bot commented on NIFIREG-33:
---

Github user kevdoran commented on the issue:

https://github.com/apache/nifi-registry/pull/29
  
@bbende thanks a lot for the thorough review! I fixed the integration test 
configuration so that they uncovered the runtime issues you ran into, and then 
fixed those. I made changes based on your other recommendations as well - 
thanks a lot for finding them and providing a solution for the clean shutdown 
of the authorizers. I agreed with all of those and added a commit with all 
changes. I rebased the branch off of the updated master as well.

Whenever you get a chance, you can re-review and let me know if there is 
anything else to be addressed. Thanks!


> Add LDAP and JWT identity providers NiFi Registry security framework
> 
>
> Key: NIFIREG-33
> URL: https://issues.apache.org/jira/browse/NIFIREG-33
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>
> The initial addition of a security model to the NiFi Registry framework only 
> included support for certificates as a means of establishing client identity 
> for authentication.
> In order to support more flexible methods of client authentication, this 
> ticket is to provide two new identity providers:
> * LDAPProvider - will verify username/password for authentication and allow 
> JWT token generation via the REST API
> * JWTIdentityProvider - will authenticate tokens that were generated by the 
> registry on subsequent requests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4578) PutHBaseRecord fails on null record field values

2017-11-07 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-4578:
--

 Summary: PutHBaseRecord fails on null record field values
 Key: NIFI-4578
 URL: https://issues.apache.org/jira/browse/NIFI-4578
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mike Thomsen
Assignee: Mike Thomsen


A NullPointerException is thrown when a nullable field has a null value. A 
strategy for handling this needs to be added.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4577) ValidateCsv - add attributes to indicate number of valid/invalid lines

2017-11-07 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-4577:


 Summary: ValidateCsv - add attributes to indicate number of 
valid/invalid lines
 Key: NIFI-4577
 URL: https://issues.apache.org/jira/browse/NIFI-4577
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Pierre Villard
Priority: Minor


At the moment, when using the ValidateCsv processor, the number of 
valid/invalid lines in a flow file is indicated in the generated provenance 
event. It'd be useful to add this information as attributes to evaluate the 
quality of a file and allow routing based on this information.
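
A minimal sketch of what writing such counts as attributes could look like 
inside a processor; the attribute names below are hypothetical and not defined 
by this ticket:

{code:java}
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;

public class CsvCountAttributes {

    // Attribute names are hypothetical; the ticket does not fix the naming.
    static FlowFile addCounts(ProcessSession session, FlowFile flowFile, long validLines, long invalidLines) {
        flowFile = session.putAttribute(flowFile, "validatecsv.valid.count", String.valueOf(validLines));
        flowFile = session.putAttribute(flowFile, "validatecsv.invalid.count", String.valueOf(invalidLines));
        return flowFile;
    }
}
{code}

RouteOnAttribute (or Expression Language elsewhere in the flow) could then route 
on those values.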



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4576) "Available Processors" rewording

2017-11-07 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-4576:


 Summary: "Available Processors" rewording
 Key: NIFI-4576
 URL: https://issues.apache.org/jira/browse/NIFI-4576
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core UI
Reporter: Pierre Villard
Priority: Trivial


In a few places, we're using "available processors" to indicate how many cores 
are available to NiFi. I believe this can be confusing and should be reworded. 
"Available Cores" is an option, but maybe there is another wording that better 
reflects what the value would be in a hyper-threaded environment / Docker 
container / virtual machine, etc.

I believe there are at least two places to update:
- Summary / System Diagnostics / System tab
- Cluster view / System tab
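
For context, the number in question presumably comes from the JDK call below, 
which is also the most likely origin of the "available processors" wording; 
treating this call as the source of the UI value is an assumption:

{code:java}
public class AvailableCoresSketch {
    public static void main(String[] args) {
        // The JVM reports the logical processors visible to it, which in a container
        // or hyper-threaded VM is not the same thing as physical cores.
        int availableProcessors = Runtime.getRuntime().availableProcessors();
        System.out.println("Available processors (logical): " + availableProcessors);
    }
}
{code}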



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (MINIFICPP-295) Import functions in ProcessSession should be moved to vector so we don't need to delete them

2017-11-07 Thread marco polo (JIRA)
marco polo created MINIFICPP-295:


 Summary: Import functions in ProcessSession should be moved to 
vector so we don't need to delete them
 Key: MINIFICPP-295
 URL: https://issues.apache.org/jira/browse/MINIFICPP-295
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: marco polo


Should use the vector's destructor when the function exits to reclaim that 
memory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (MINIFICPP-292) GetFile uses non-portable d_type property to determine if path is a directory

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242260#comment-16242260
 ] 

ASF GitHub Bot commented on MINIFICPP-292:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/179


> GetFile uses non-portable d_type property to determine if path is a directory
> -
>
> Key: MINIFICPP-292
> URL: https://issues.apache.org/jira/browse/MINIFICPP-292
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>
> Using entry->d_type is not portable and will fail on some systems.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp pull request #179: MINIFICPP-292 Use portable method to dete...

2017-11-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/179


---


[jira] [Resolved] (NIFIREG-47) Add ability to specify proxied user in nifi-registry-client

2017-11-07 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-47?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFIREG-47.

   Resolution: Fixed
Fix Version/s: 0.0.1

> Add ability to specify proxied user in nifi-registry-client
> ---
>
> Key: NIFIREG-47
> URL: https://issues.apache.org/jira/browse/NIFIREG-47
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Trivial
> Fix For: 0.0.1
>
>
> We need to be able to specify an optional proxied user per operation in the 
> nifi-registry-client.
> For example, NiFi will make a call using the server certificate, but wants to 
> retrieve all of the buckets on behalf of an end user in NiFi.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFIREG-47) Add ability to specify proxied user in nifi-registry-client

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-47?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242244#comment-16242244
 ] 

ASF GitHub Bot commented on NIFIREG-47:
---

Github user bbende closed the pull request at:

https://github.com/apache/nifi-registry/pull/33


> Add ability to specify proxied user in nifi-registry-client
> ---
>
> Key: NIFIREG-47
> URL: https://issues.apache.org/jira/browse/NIFIREG-47
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Trivial
>
> We need to be able to specify an optional proxied user per operation in the 
> nifi-registry-client.
> For example, NiFi will make a call using the server certificate, but wants to 
> retrieve all of the buckets on behalf of an end user in NiFi.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFIREG-47) Add ability to specify proxied user in nifi-registry-client

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-47?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242243#comment-16242243
 ] 

ASF GitHub Bot commented on NIFIREG-47:
---

Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/33
  
This was merged by Mark, closing the PR


> Add ability to specify proxied user in nifi-registry-client
> ---
>
> Key: NIFIREG-47
> URL: https://issues.apache.org/jira/browse/NIFIREG-47
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Trivial
>
> We need to be able to specify an optional proxied user per operation in the 
> nifi-registry-client.
> For example, NiFi will make a call using the server certificate, but wants to 
> retrieve all of the buckets on behalf of an end user in NiFi.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-registry issue #33: NIFIREG-47 Improvements to nifi-registry-client

2017-11-07 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/33
  
This was merged by Mark, closing the PR


---


[GitHub] nifi-registry pull request #33: NIFIREG-47 Improvements to nifi-registry-cli...

2017-11-07 Thread bbende
Github user bbende closed the pull request at:

https://github.com/apache/nifi-registry/pull/33


---


[jira] [Commented] (NIFIREG-40) Create UI coasters for notifications

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242158#comment-16242158
 ] 

ASF GitHub Bot commented on NIFIREG-40:
---

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/32


> Create UI coasters for notifications
> 
>
> Key: NIFIREG-40
> URL: https://issues.apache.org/jira/browse/NIFIREG-40
> Project: NiFi Registry
>  Issue Type: Sub-task
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFIREG-40) Create UI coasters for notifications

2017-11-07 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-40?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFIREG-40.

   Resolution: Fixed
Fix Version/s: 0.0.1

> Create UI coasters for notifications
> 
>
> Key: NIFIREG-40
> URL: https://issues.apache.org/jira/browse/NIFIREG-40
> Project: NiFi Registry
>  Issue Type: Sub-task
>Reporter: Scott Aslan
>Assignee: Scott Aslan
> Fix For: 0.0.1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-registry pull request #32: [NIFIREG-40] Add coasters for notifications ...

2017-11-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/32


---


[jira] [Commented] (NIFIREG-40) Create UI coasters for notifications

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242154#comment-16242154
 ] 

ASF GitHub Bot commented on NIFIREG-40:
---

Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/32
  
+1. I rebased against master and resolved the conflict in the list viewer; 
everything looks good, and I verified that the coasters display on deleting a 
bucket or flow.


> Create UI coasters for notifications
> 
>
> Key: NIFIREG-40
> URL: https://issues.apache.org/jira/browse/NIFIREG-40
> Project: NiFi Registry
>  Issue Type: Sub-task
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-registry issue #32: [NIFIREG-40] Add coasters for notifications to the ...

2017-11-07 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/32
  
+1. I rebased against master and resolved the conflict in the list viewer; 
everything looks good, and I verified that the coasters display on deleting a 
bucket or flow.


---


[jira] [Commented] (MINIFICPP-46) Convert StreamFactory to a controller service

2017-11-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-46?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242101#comment-16242101
 ] 

ASF subversion and git services commented on MINIFICPP-46:
--

Commit b9ec9b6a28d14585b7e24e90ebab4dad00e3181d in nifi-minifi-cpp's branch 
refs/heads/master from Andrew I. Christianson
[ https://git-wip-us.apache.org/repos/asf?p=nifi-minifi-cpp.git;h=b9ec9b6 ]

MINIFI-291 Added standard Lua packages to scripting environment

This closes #178.

Signed-off-by: Marc Parisi 


> Convert StreamFactory to a controller service
> -
>
> Key: MINIFICPP-46
> URL: https://issues.apache.org/jira/browse/MINIFICPP-46
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>
> Once https://github.com/apache/nifi-minifi-cpp/pull/83/files is merged, we 
> can create more controller services. We can move the socket factory over to a 
> controller service which will allow us to better control socket 
> creation/reuse. Possibilities include limiting the number of sockets created, 
> controlling what happens when that max is reached, and more easily linking 
> services so that secure sockets are created without the need for a static 
> configuration in minifi.properties.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp pull request #178: MINIFI-291 Added standard Lua packages to...

2017-11-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/178


---


[jira] [Commented] (MINIFICPP-46) Convert StreamFactory to a controller service

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-46?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242102#comment-16242102
 ] 

ASF GitHub Bot commented on MINIFICPP-46:
-

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/178


> Convert StreamFactory to a controller service
> -
>
> Key: MINIFICPP-46
> URL: https://issues.apache.org/jira/browse/MINIFICPP-46
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>
> Once https://github.com/apache/nifi-minifi-cpp/pull/83/files is merged, we 
> can create more controller services. We can move the socket factory over to a 
> controller service, which will allow us to better control socket 
> creation/reuse. Possibilities include limiting the number of sockets created, 
> defining what happens when that maximum is reached, and more easily linking 
> services so that secure sockets are created without the need for a static 
> configuration in minifi.properties



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4496) Improve performance of CSVReader

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242089#comment-16242089
 ] 

ASF GitHub Bot commented on NIFI-4496:
--

Github user jdye64 commented on the issue:

https://github.com/apache/nifi/pull/2245
  
@mattyb149 I'm seeing invalid output when I run an existing flow with 
this PR. I had an existing flow that used ConvertRecord and Apache Commons CSV; 
it was working fine and giving me the output I expected. However, when I 
switched to the Jackson implementation, all of the output was empty. I have 
attached a screenshot from my debugger session in the hope that it will help 
shed some light on what is going on.

https://user-images.githubusercontent.com/2127235/32498256-32f8ffc6-c39d-11e7-86dd-cde8f7d3a758.png



> Improve performance of CSVReader
> 
>
> Key: NIFI-4496
> URL: https://issues.apache.org/jira/browse/NIFI-4496
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> During some throughput testing, it was noted that the CSVReader was not as 
> fast as desired, processing less than 50k records per second. A look at [this 
> benchmark|https://github.com/uniVocity/csv-parsers-comparison] implies that 
> the Apache Commons CSV parser (used by CSVReader) is quite slow compared to 
> others.
> From that benchmark it appears that CSVReader could be enhanced by using a 
> different CSV parser under the hood. Perhaps Jackson is the best choice, as 
> it is fast when values are quoted, and is a mature and maintained codebase.
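
As a side note on the parser swap proposed above, the following is a minimal, 
self-contained sketch (not NiFi code, and not the PR's JacksonCSVRecordReader) 
of reading CSV records as maps with jackson-dataformat-csv; the sample input 
and class name are illustrative only.

    import com.fasterxml.jackson.databind.MappingIterator;
    import com.fasterxml.jackson.dataformat.csv.CsvMapper;
    import com.fasterxml.jackson.dataformat.csv.CsvSchema;

    import java.io.Reader;
    import java.io.StringReader;
    import java.util.Map;

    public class JacksonCsvSketch {
        public static void main(String[] args) throws Exception {
            // Illustrative input; a real reader would stream the flow file content.
            Reader input = new StringReader("id,name\n1,foo\n2,bar\n");

            CsvMapper mapper = new CsvMapper();
            // Treat the first line as a header so each record maps column name -> value.
            CsvSchema schema = CsvSchema.emptySchema().withHeader();

            try (MappingIterator<Map<String, String>> records =
                         mapper.readerFor(Map.class).with(schema).readValues(input)) {
                while (records.hasNext()) {
                    System.out.println(records.next());
                }
            }
        }
    }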



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2245: NIFI-4496: Added JacksonCSVRecordReader to allow choice of...

2017-11-07 Thread jdye64
Github user jdye64 commented on the issue:

https://github.com/apache/nifi/pull/2245
  
@mattyb149 I'm seeing invalid output when I run an existing flow with 
this PR. I had an existing flow that used ConvertRecord and Apache Commons CSV; 
it was working fine and giving me the output I expected. However, when I 
switched to the Jackson implementation, all of the output was empty. I have 
attached a screenshot from my debugger session in the hope that it will help 
shed some light on what is going on.

https://user-images.githubusercontent.com/2127235/32498256-32f8ffc6-c39d-11e7-86dd-cde8f7d3a758.png



---


[jira] [Commented] (NIFIREG-33) Add LDAP and JWT identity providers to the NiFi Registry security framework

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242066#comment-16242066
 ] 

ASF GitHub Bot commented on NIFIREG-33:
---

Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/29#discussion_r149377495
  
--- Diff: 
nifi-registry-properties/src/main/java/org/apache/nifi/registry/properties/NiFiRegistryProperties.java
 ---
@@ -49,6 +49,8 @@
 public static final String SECURITY_NEED_CLIENT_AUTH = 
"nifi.registry.security.needClientAuth";
 public static final String SECURITY_AUTHORIZERS_CONFIGURATION_FILE = 
"nifi.registry.security.authorizers.configuration.file";
 public static final String SECURITY_AUTHORIZER = 
"nifi.registry.security.authorizer";
+public static final String 
SECURITY_IDENTITY_PROVIDER_CONFIGURATION_FILE = 
"nifi.registry.security.identity.provider.configuration.file";
--- End diff --

Yep, good catch. Will update this.


> Add LDAP and JWT identity providers to the NiFi Registry security framework
> 
>
> Key: NIFIREG-33
> URL: https://issues.apache.org/jira/browse/NIFIREG-33
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>
> The initial addition of a security model to the NiFi Registry framework only 
> included support for certificates as a means of establishing client identity 
> for authentication.
> In order to support more flexible methods of client authentication, this 
> ticket is to provide two new identity providers:
> * LDAPProvider - will verify username/password for authentication and allow 
> JWT token generation via the REST API
> * JWTIdentityProvider - will authenticate tokens that were generated by the 
> registry on subsequent requests.
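
To illustrate how the two providers described above are intended to work 
together, here is a small client-side sketch of the credential-to-token flow: 
the LDAP provider validates the username/password and issues a JWT, and the JWT 
provider authenticates that token on later requests. Everything here is an 
assumption for illustration — the base URL, endpoint paths, and credentials are 
hypothetical and not taken from the PR.

    import okhttp3.Credentials;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.RequestBody;
    import okhttp3.Response;

    public class RegistryTokenFlowSketch {
        public static void main(String[] args) throws Exception {
            OkHttpClient client = new OkHttpClient();
            String base = "https://localhost:18443/nifi-registry-api"; // hypothetical base URL

            // Step 1 (LDAPProvider): present username/password and receive a JWT.
            // The token endpoint path is hypothetical.
            Request login = new Request.Builder()
                    .url(base + "/access/token")
                    .header("Authorization", Credentials.basic("user", "password"))
                    .post(RequestBody.create(null, new byte[0]))
                    .build();

            String jwt;
            try (Response response = client.newCall(login).execute()) {
                jwt = response.body().string();
            }

            // Step 2 (JWTIdentityProvider): send the token as a Bearer credential
            // on subsequent requests; the registry authenticates the token itself.
            Request listBuckets = new Request.Builder()
                    .url(base + "/buckets") // hypothetical resource path
                    .header("Authorization", "Bearer " + jwt)
                    .build();

            try (Response response = client.newCall(listBuckets).execute()) {
                System.out.println("HTTP " + response.code());
            }
        }
    }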



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-registry pull request #29: NIFIREG-33 Add LDAP and JWT auth support

2017-11-07 Thread kevdoran
Github user kevdoran commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/29#discussion_r149377495
  
--- Diff: 
nifi-registry-properties/src/main/java/org/apache/nifi/registry/properties/NiFiRegistryProperties.java
 ---
@@ -49,6 +49,8 @@
 public static final String SECURITY_NEED_CLIENT_AUTH = 
"nifi.registry.security.needClientAuth";
 public static final String SECURITY_AUTHORIZERS_CONFIGURATION_FILE = 
"nifi.registry.security.authorizers.configuration.file";
 public static final String SECURITY_AUTHORIZER = 
"nifi.registry.security.authorizer";
+public static final String 
SECURITY_IDENTITY_PROVIDER_CONFIGURATION_FILE = 
"nifi.registry.security.identity.provider.configuration.file";
--- End diff --

Yep, good catch. Will update this.


---


[jira] [Commented] (NIFI-3402) Add ETag Support to InvokeHTTP

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241925#comment-16241925
 ] 

ASF GitHub Bot commented on NIFI-3402:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2150#discussion_r149351709
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java
 ---
@@ -1093,6 +1133,19 @@ private Charset getCharsetFromMediaType(MediaType 
contentType) {
 return contentType != null ? 
contentType.charset(StandardCharsets.UTF_8) : StandardCharsets.UTF_8;
 }
 
+/**
+ * Retrieve the directory in which OkHttp should cache responses. This 
method opts
+ * to use a temp directory to write the cache, which means that the 
cache will be written
+ * to a new location each time this processor is scheduled.
+ *
+ * Ref: https://github.com/square/okhttp/wiki/Recipes#response-caching
+ *
+ * @return the directory in which the ETag cache should be written
+ */
+private static File getETagCacheDir() {
+return Files.createTempDir();
--- End diff --

@MikeThomsen - true, but one temporary folder is created each time the 
processor is started, and we could assume that the processor is going to be 
stopped/started a lot. But I agree, this is an edge case, and I'm fine with the 
current implementation. If I really wanted to be meticulous, I could suggest 
adding this information to the property description :)


> Add ETag Support to InvokeHTTP
> --
>
> Key: NIFI-3402
> URL: https://issues.apache.org/jira/browse/NIFI-3402
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Trivial
>
> Unlike GetHTTP, when running in "source" mode InvokeHTTP doesn't support 
> ETags.  It will pull from a URL as often as it is scheduled to run.  When 
> running with an input relationship, it would potentially make sense to not 
> use the ETag.  But at least in "source" mode, it seems like it should at 
> least be an option.
> To maintain backwards compatibility and support the non-"source" usage, I'd 
> suggest creating a new "Use ETag" property that defaults to false...
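
For reference on the caching behavior discussed in the diff above, here is a 
minimal sketch (not the PR's code) of wiring an OkHttp response cache to a 
temporary directory so the client handles ETag/If-None-Match revalidation 
itself. The URL, directory prefix, and cache size are illustrative assumptions; 
the 10 MB value simply mirrors the default mentioned in the review.

    import okhttp3.Cache;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;

    import java.io.File;
    import java.nio.file.Files;

    public class ETagCacheSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical cache location; the diff above places it under a temp directory.
            File cacheDir = Files.createTempDirectory("etag-cache").toFile();
            long maxCacheBytes = 10L * 1024 * 1024; // 10 MB

            OkHttpClient client = new OkHttpClient.Builder()
                    .cache(new Cache(cacheDir, maxCacheBytes))
                    .build();

            Request request = new Request.Builder().url("https://example.com/data").build();

            // The first call populates the cache; on later calls OkHttp sends
            // If-None-Match with the stored ETag and serves the body from cache
            // when the server answers 304 Not Modified.
            try (Response response = client.newCall(request).execute()) {
                System.out.println("HTTP " + response.code()
                        + ", served from cache: " + (response.cacheResponse() != null));
            }
        }
    }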



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2150: NIFI-3402: Added etag support to InvokeHTTP

2017-11-07 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2150#discussion_r149351709
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java
 ---
@@ -1093,6 +1133,19 @@ private Charset getCharsetFromMediaType(MediaType 
contentType) {
 return contentType != null ? 
contentType.charset(StandardCharsets.UTF_8) : StandardCharsets.UTF_8;
 }
 
+/**
+ * Retrieve the directory in which OkHttp should cache responses. This 
method opts
+ * to use a temp directory to write the cache, which means that the 
cache will be written
+ * to a new location each time this processor is scheduled.
+ *
+ * Ref: https://github.com/square/okhttp/wiki/Recipes#response-caching
+ *
+ * @return the directory in which the ETag cache should be written
+ */
+private static File getETagCacheDir() {
+return Files.createTempDir();
--- End diff --

@MikeThomsen - true, but one temporary folder is created each time the 
processor is started, and we could assume that the processor is going to be 
stopped/started a lot. But I agree, this is an edge case, and I'm fine with the 
current implementation. If I really wanted to be meticulous, I could suggest 
adding this information to the property description :)


---


[jira] [Commented] (NIFI-3402) Add ETag Support to InvokeHTTP

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241839#comment-16241839
 ] 

ASF GitHub Bot commented on NIFI-3402:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2150#discussion_r149336584
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java
 ---
@@ -1093,6 +1133,19 @@ private Charset getCharsetFromMediaType(MediaType 
contentType) {
 return contentType != null ? 
contentType.charset(StandardCharsets.UTF_8) : StandardCharsets.UTF_8;
 }
 
+/**
+ * Retrieve the directory in which OkHttp should cache responses. This 
method opts
+ * to use a temp directory to write the cache, which means that the 
cache will be written
+ * to a new location each time this processor is scheduled.
+ *
+ * Ref: https://github.com/square/okhttp/wiki/Recipes#response-caching
+ *
+ * @return the directory in which the ETag cache should be written
+ */
+private static File getETagCacheDir() {
+return Files.createTempDir();
--- End diff --

@pvillard31 Since there is a configured size limit on the field, the cache 
should stay at a reasonable size, given the sane default of 10MB. If a user is 
foolish enough to set that to 10GB, that's not our problem.


> Add ETag Support to InvokeHTTP
> --
>
> Key: NIFI-3402
> URL: https://issues.apache.org/jira/browse/NIFI-3402
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Trivial
>
> Unlike GetHTTP, when running in "source" mode InvokeHTTP doesn't support 
> ETags.  It will pull from a URL as often as it is scheduled to run.  When 
> running with an input relationship, it would potentially make sense to not 
> use the ETag.  But at least in "source" mode, it seems like it should at 
> least be an option.
> To maintain backwards compatibility and support the non-"source" usage, I'd 
> suggest creating a new "Use ETag" property that defaults to false...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2150: NIFI-3402: Added etag support to InvokeHTTP

2017-11-07 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2150#discussion_r149336584
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java
 ---
@@ -1093,6 +1133,19 @@ private Charset getCharsetFromMediaType(MediaType 
contentType) {
 return contentType != null ? 
contentType.charset(StandardCharsets.UTF_8) : StandardCharsets.UTF_8;
 }
 
+/**
+ * Retrieve the directory in which OkHttp should cache responses. This 
method opts
+ * to use a temp directory to write the cache, which means that the 
cache will be written
+ * to a new location each time this processor is scheduled.
+ *
+ * Ref: https://github.com/square/okhttp/wiki/Recipes#response-caching
+ *
+ * @return the directory in which the ETag cache should be written
+ */
+private static File getETagCacheDir() {
+return Files.createTempDir();
--- End diff --

@pvillard31 Since there is a configured size limit on the field, the cache 
should stay at a reasonable size, given the sane default of 10MB. If a user is 
foolish enough to set that to 10GB, that's not our problem.


---


[jira] [Commented] (NIFI-3402) Add ETag Support to InvokeHTTP

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241657#comment-16241657
 ] 

ASF GitHub Bot commented on NIFI-3402:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2150#discussion_r149295676
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java
 ---
@@ -1093,6 +1133,19 @@ private Charset getCharsetFromMediaType(MediaType 
contentType) {
 return contentType != null ? 
contentType.charset(StandardCharsets.UTF_8) : StandardCharsets.UTF_8;
 }
 
+/**
+ * Retrieve the directory in which OkHttp should cache responses. This 
method opts
+ * to use a temp directory to write the cache, which means that the 
cache will be written
+ * to a new location each time this processor is scheduled.
+ *
+ * Ref: https://github.com/square/okhttp/wiki/Recipes#response-caching
+ *
+ * @return the directory in which the ETag cache should be written
+ */
+private static File getETagCacheDir() {
+return Files.createTempDir();
--- End diff --

Fair enough. I just thought that the OS-controlled temporary folder is cleaned 
at server reboot by default, and a NiFi server is probably not rebooted very 
often. 


> Add ETag Support to InvokeHTTP
> --
>
> Key: NIFI-3402
> URL: https://issues.apache.org/jira/browse/NIFI-3402
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Trivial
>
> Unlike GetHTTP, when running in "source" mode InvokeHTTP doesn't support 
> ETags.  It will pull from a URL as often as it is scheduled to run.  When 
> running with an input relationship, it would potentially make sense to not 
> use the ETag.  But at least in "source" mode, it seems like it should at 
> least be an option.
> To maintain backwards compatibility and support the non-"source" usage, I'd 
> suggest creating a new "Use ETag" property that defaults to false...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2150: NIFI-3402: Added etag support to InvokeHTTP

2017-11-07 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2150#discussion_r149295676
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java
 ---
@@ -1093,6 +1133,19 @@ private Charset getCharsetFromMediaType(MediaType 
contentType) {
 return contentType != null ? 
contentType.charset(StandardCharsets.UTF_8) : StandardCharsets.UTF_8;
 }
 
+/**
+ * Retrieve the directory in which OkHttp should cache responses. This 
method opts
+ * to use a temp directory to write the cache, which means that the 
cache will be written
+ * to a new location each time this processor is scheduled.
+ *
+ * Ref: https://github.com/square/okhttp/wiki/Recipes#response-caching
+ *
+ * @return the directory in which the ETag cache should be written
+ */
+private static File getETagCacheDir() {
+return Files.createTempDir();
--- End diff --

Fair enough. I just thought that the OS-controlled temporary folder is cleaned 
at server reboot by default, and a NiFi server is probably not rebooted very 
often. 


---