[jira] [Commented] (NIFI-4998) Update node and npm version.

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410841#comment-16410841
 ] 

ASF GitHub Bot commented on NIFI-4998:
--

Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/2571
  
That resolved my npm proxy issue and the build succeeded with no issues.
I did notice, though, that the `nifi-jolt-transform-json-ui` project also 
references the `frontend-maven-plugin` and still uses the old version 
numbers.


> Update node and npm version.
> 
>
> Key: NIFI-4998
> URL: https://issues.apache.org/jira/browse/NIFI-4998
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2571: [NIFI-4998] update node and npm version

2018-03-22 Thread patricker
Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/2571
  
That resolved my npm proxy issue and the build succeeded with no issues.
I did notice, though, that the `nifi-jolt-transform-json-ui` project also 
references the `frontend-maven-plugin` and still uses the old version 
numbers.


---


[jira] [Updated] (NIFI-5009) PutParquet processor requires "read filesystem" restricted component permission but should be "write filesystem" permission instead

2018-03-22 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim updated NIFI-5009:
-
Attachment: PutParquet_permission.jpg

> PutParquet processor requires "read filesystem" restricted component 
> permission but should be "write filesystem" permission instead
> ---
>
> Key: NIFI-5009
> URL: https://issues.apache.org/jira/browse/NIFI-5009
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 1.6.0
>Reporter: Andrew Lim
>Priority: Minor
> Attachments: PutParquet_permission.jpg
>
>
> Similar to the other Put*** restricted processors, this is a write 
> processor, so it should require "write filesystem" permissions.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5009) PutParquet processor requires "read filesystem" restricted component permission but should be "write filesystem" permission instead

2018-03-22 Thread Andrew Lim (JIRA)
Andrew Lim created NIFI-5009:


 Summary: PutParquet processor requires "read filesystem" 
restricted component permission but should be "write filesystem" permission 
instead
 Key: NIFI-5009
 URL: https://issues.apache.org/jira/browse/NIFI-5009
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework, Core UI
Affects Versions: 1.6.0
Reporter: Andrew Lim


Similar to the other Put*** restricted processors, this is a write 
processor, so it should require "write filesystem" permissions.
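
For illustration, a minimal sketch of how a granular restriction is declared on a processor, assuming the @Restricted/@Restriction annotation API in nifi-api (the class name, empty body, and explanation text here are hypothetical):

    import org.apache.nifi.annotation.behavior.Restricted;
    import org.apache.nifi.annotation.behavior.Restriction;
    import org.apache.nifi.components.RequiredPermission;

    // Sketch: declaring "write filesystem" rather than "read filesystem"
    // as the required permission for a Put-style processor.
    @Restricted(restrictions = {
        @Restriction(
            requiredPermission = RequiredPermission.WRITE_FILESYSTEM,
            explanation = "Provides operator the ability to write to any file that NiFi has access to.")
    })
    public class PutParquetSketch {
    }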
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5008) Components are marked with the "restricted" red shield icon, but are not tagged as "restricted".

2018-03-22 Thread Andrew Lim (JIRA)
Andrew Lim created NIFI-5008:


 Summary: Components are marked with the "restricted" red shield 
icon, but are not tagged as "restricted".
 Key: NIFI-5008
 URL: https://issues.apache.org/jira/browse/NIFI-5008
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Reporter: Andrew Lim


Without the "restricted" tag, the components will not show up when filtering by 
that tag.

These are the components that need to have the "restricted" tag added (a sketch of the change follows the list):

 
*Processors:*
 
ExecuteGroovyScript
GetHDFSSequenceFile
 
*Controller Services:*
 
KeytabCredentialsService
 
*Reporting Task:*
 
ScriptedReportingTask
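
For any of these components, the change is presumably a one-word addition to the @Tags annotation; a minimal sketch (the other tag values are illustrative, not taken from the actual component):

    import org.apache.nifi.annotation.documentation.Tags;

    // Sketch: adding "restricted" to the tag list so the component is
    // returned when the UI filters components by that tag.
    @Tags({"groovy", "script", "restricted"})
    public class ExecuteGroovyScriptSketch {
    }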
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5007) Upstream processor scheduled on primary node, downstream processor scheduled on all nodes; but in fact the downstream processor is only scheduled on the primary node

2018-03-22 Thread Ning Sheng (JIRA)
Ning Sheng created NIFI-5007:


 Summary: Upstream processor scheduled on primary node, downstream 
processor scheduled on all nodes; but in fact the downstream processor is only scheduled on the primary node
 Key: NIFI-5007
 URL: https://issues.apache.org/jira/browse/NIFI-5007
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.2.0
 Environment: Upstream processor scheduled on primary node, downstream 
processor scheduled on all nodes; but in fact the downstream processor is only scheduled on the primary node
Reporter: Ning Sheng






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-444) Remove warnings

2018-03-22 Thread marco polo (JIRA)
marco polo created MINIFICPP-444:


 Summary: Remove warnings
 Key: MINIFICPP-444
 URL: https://issues.apache.org/jira/browse/MINIFICPP-444
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: marco polo
Assignee: marco polo


A lot of warnings have crept back into the build, especially on CentOS 7. We 
should remove these.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5006) Update docs to reflect 2018 where applicable

2018-03-22 Thread Aldrin Piri (JIRA)
Aldrin Piri created NIFI-5006:
-

 Summary: Update docs to reflect 2018 where applicable
 Key: NIFI-5006
 URL: https://issues.apache.org/jira/browse/NIFI-5006
 Project: Apache NiFi
  Issue Type: Task
Reporter: Aldrin Piri
 Fix For: 1.7.0


While reviewing the RC1 for NiFi 1.6.0 I noticed that docs have not been 
updated to reflect the new year.  We should update these when handling our next 
release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5005) MergeRecord processor ignoring schema types for JsonRecordSetWriter output

2018-03-22 Thread Nick Pettyjohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Pettyjohn updated NIFI-5005:
-
Description: 
Issue noticed when using a MergeRecord processor with a Record Reader of 
CSVReader and a Record Writer of JsonRecordSetWriter.

The CSVReader is configured with a Schema Access Strategy of "Use String Fields 
From Header". The JsonRecordSetWriter is given an Avro schema in the Schema 
Text property that contains a mix of string, double, and long value types.

Running sample csv data through the MergeRecord processor produces JSON in 
which all values are quoted, despite the Avro schema specifying otherwise. 
However, when using the ConvertRecord processor with the same Reader/Writer 
config, the output JSON records use the typing given in the avro schema, 
keeping long and float values unquoted.

The attached template, with corresponding avro schema and sample input csv file 
will demonstrate the issue.

  was:
Issue noticed when using a MergeRecord processor with a Record Reader of 
CSVReader and a Record Writer of JsonRecordSetWriter.

The CSVReader is configured with a Schema Access Strategy of "Use String Fields 
From Header". The JsonRecordSetWriter is given an Avro schema in the Schema 
Text property that contains a mix of string, double, and long value types.

Running sample csv data through the MergeRecord processor produces JSON in 
which all values are quoted, despite the Avro schema specifying otherwise. 
However, when using the ConvertRecord processor with the same Reader/Writer 
config, the output JSON records use the typing given in the avro schema, 
keeping long and float values unquoted.


> MergeRecord processor ignoring schema types for JsonRecordSetWriter output
> --
>
> Key: NIFI-5005
> URL: https://issues.apache.org/jira/browse/NIFI-5005
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Nick Pettyjohn
>Priority: Major
> Attachments: MergeContent_JsonOutput.xml, test.csv, test_schema.avsc
>
>
> Issue noticed when using a MergeRecord processor with a Record Reader of 
> CSVReader and a Record Writer of JsonRecordSetWriter.
> The CSVReader is configured with a Schema Access Strategy of "Use String 
> Fields From Header". The JsonRecordSetWriter is given an Avro schema in the 
> Schema Text property that contains a mix of string, double, and long value 
> types.
> Running sample csv data through the MergeRecord processor produces JSON in 
> which all values are quoted, despite the Avro schema specifying otherwise. 
> However, when using the ConvertRecord processor with the same Reader/Writer 
> config, the output JSON records use the typing given in the avro schema, 
> keeping long and float values unquoted.
> The attached template, with corresponding avro schema and sample input csv 
> file will demonstrate the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5005) MergeRecord processor ignoring schema types for JsonRecordSetWriter output

2018-03-22 Thread Nick Pettyjohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Pettyjohn updated NIFI-5005:
-
Attachment: test_schema.avsc

> MergeRecord processor ignoring schema types for JsonRecordSetWriter output
> --
>
> Key: NIFI-5005
> URL: https://issues.apache.org/jira/browse/NIFI-5005
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Nick Pettyjohn
>Priority: Major
> Attachments: MergeContent_JsonOutput.xml, test.csv, test_schema.avsc
>
>
> Issue noticed when using a MergeRecord processor with a Record Reader of 
> CSVReader and a Record Writer of JsonRecordSetWriter.
> The CSVReader is configured with a Schema Access Strategy of "Use String 
> Fields From Header". The JsonRecordSetWriter is given an Avro schema in the 
> Schema Text property that contains a mix of string, double, and long value 
> types.
> Running sample csv data through the MergeRecord processor produces JSON in 
> which all values are quoted, despite the Avro schema specifying otherwise. 
> However, when using the ConvertRecord processor with the same Reader/Writer 
> config, the output JSON records use the typing given in the avro schema, 
> keeping long and float values unquoted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5005) MergeRecord processor ignoring schema types for JsonRecordSetWriter output

2018-03-22 Thread Nick Pettyjohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Pettyjohn updated NIFI-5005:
-
Attachment: MergeContent_JsonOutput.xml

> MergeRecord processor ignoring schema types for JsonRecordSetWriter output
> --
>
> Key: NIFI-5005
> URL: https://issues.apache.org/jira/browse/NIFI-5005
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Nick Pettyjohn
>Priority: Major
> Attachments: MergeContent_JsonOutput.xml, test.csv, test_schema.avsc
>
>
> Issue noticed when using a MergeRecord processor with a Record Reader of 
> CSVReader and a Record Writer of JsonRecordSetWriter.
> The CSVReader is configured with a Schema Access Strategy of "Use String 
> Fields From Header". The JsonRecordSetWriter is given an Avro schema in the 
> Schema Text property that contains a mix of string, double, and long value 
> types.
> Running sample csv data through the MergeRecord processor produces JSON in 
> which all values are quoted, despite the Avro schema specifying otherwise. 
> However, when using the ConvertRecord processor with the same Reader/Writer 
> config, the output JSON records use the typing given in the avro schema, 
> keeping long and float values unquoted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5005) MergeRecord processor ignoring schema types for JsonRecordSetWriter output

2018-03-22 Thread Nick Pettyjohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Pettyjohn updated NIFI-5005:
-
Attachment: test.csv

> MergeRecord processor ignoring schema types for JsonRecordSetWriter output
> --
>
> Key: NIFI-5005
> URL: https://issues.apache.org/jira/browse/NIFI-5005
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Nick Pettyjohn
>Priority: Major
> Attachments: MergeContent_JsonOutput.xml, test.csv, test_schema.avsc
>
>
> Issue noticed when using a MergeRecord processor with a Record Reader of 
> CSVReader and a Record Writer of JsonRecordSetWriter.
> The CSVReader is configured with a Schema Access Strategy of "Use String 
> Fields From Header". The JsonRecordSetWriter is given an Avro schema in the 
> Schema Text property that contains a mix of string, double, and long value 
> types.
> Running sample csv data through the MergeRecord processor produces JSON in 
> which all values are quoted, despite the Avro schema specifying otherwise. 
> However, when using the ConvertRecord processor with the same Reader/Writer 
> config, the output JSON records use the typing given in the avro schema, 
> keeping long and float values unquoted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-2575) HiveQL Processors Fail due to invalid JDBC URI resolution when using Zookeeper URI

2018-03-22 Thread Matt Burgess (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410485#comment-16410485
 ] 

Matt Burgess commented on NIFI-2575:


I'm working on a set of Hive 3 processors, where this should be fixed. Unless 
we have a contribution for Hive 2 components, I think NIFI-4963 should 
encompass this Jira as well.

> HiveQL Processors Fail due to invalid JDBC URI resolution when using 
> Zookeeper URI
> --
>
> Key: NIFI-2575
> URL: https://issues.apache.org/jira/browse/NIFI-2575
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Yolanda M. Davis
>Priority: Major
>
> When configuring a HiveQL processor using the Zookeeper URL (e.g. 
> jdbc:hive2://ydavis-hdp-nifi-test-3.openstacklocal:2181,ydavis-hdp-nifi-test-1.openstacklocal:2181,ydavis-hdp-nifi-test-2.openstacklocal:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2),
>  it appears that the JDBC driver does not properly build the URI in the 
> expected format.  This is because HS2 is storing JDBC parameters in ZK 
> (https://issues.apache.org/jira/browse/HIVE-11581) and is expecting the 
> driver to be able to parse and use those values to configure the connection. 
> However, it appears the driver expects ZooKeeper to simply return the 
> host:port, and it subsequently builds an invalid URI.
> This problem has resulted in two variations of errors. The following was 
> experienced by [~mattyb149]
> {noformat}
> 2016-08-15 12:45:12,918 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.Utils Resolved authority: 
> hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com
> 2016-08-15 12:45:13,835 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.HiveConnection Will try to open client transport with 
> JDBC Uri: 
> jdbc:hive2://hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com/default;principal=hive/_h...@hdf.com;serviceDiscoveryMode=zookeeper;zooKeeperNamespace=hiveserver2
> 2016-08-15 12:45:13,835 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.HiveConnection Could not open client transport with JDBC 
> Uri: 
> jdbc:hive2://hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com/default;principal=hive/_h...@hdf.com;serviceDiscoveryMode=zookeeper;zooKeeperNamespace=hiveserver2
> 2016-08-15 12:45:13,836 INFO [Timer-Driven Process Thread-2] 
> o.a.c.f.imps.CuratorFrameworkImpl Starting
> 2016-08-15 12:45:14,064 INFO [Timer-Driven Process Thread-2-EventThread] 
> o.a.c.f.state.ConnectionStateManager State change: CONNECTED
> 2016-08-15 12:45:14,182 INFO [Curator-Framework-0] 
> o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
> 2016-08-15 12:45:14,337 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.hive.SelectHiveQL 
> SelectHiveQL[id=7aaffd71-0156-1000-d962-8102c06b23df] 
> SelectHiveQL[id=7aaffd71-0156-1000-d962-8102c06b23df] failed to process due 
> to java.lang.reflect.UndeclaredThrowableException; rolling back session: 
> java.lang.reflect.UndeclaredThrowableException
> 2016-08-15 12:45:14,346 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.hive.SelectHiveQL
> java.lang.reflect.UndeclaredThrowableException: null
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>  ~[na:na]
>   at 
> org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:255)
>  ~[na:na]
>   at sun.reflect.GeneratedMethodAccessor331.invoke(Unknown 
> Source) ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_65]
>   at java.lang.reflect.Method.invoke(Method.java:497) 
> ~[na:1.8.0_65]
>   at 
> org.apache.nifi.controller.service.StandardControllerServiceProvider$1.invoke(StandardControllerServiceProvider.java:174)
>  ~[nifi-framework-core-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
>   at com.sun.proxy.$Proxy81.getConnection(Unknown Source) ~[na:na]

[jira] [Created] (NIFI-5005) MergeRecord processor ignoring schema types for JsonRecordSetWriter output

2018-03-22 Thread Nick Pettyjohn (JIRA)
Nick Pettyjohn created NIFI-5005:


 Summary: MergeRecord processor ignoring schema types for 
JsonRecordSetWriter output
 Key: NIFI-5005
 URL: https://issues.apache.org/jira/browse/NIFI-5005
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.5.0
Reporter: Nick Pettyjohn


Issue noticed when using a MergeRecord processor with a Record Reader of 
CSVReader and a Record Writer of JsonRecordSetWriter.

The CSVReader is configured with a Schema Access Strategy of "Use String Fields 
From Header". The JsonRecordSetWriter is given an Avro schema in the Schema 
Text property that contains a mix of string, double, and long value types.

Running sample csv data through the MergeRecord processor produces JSON in 
which all values are quoted, despite the Avro schema specifying otherwise. 
However, when using the ConvertRecord processor with the same Reader/Writer 
config, the output JSON records use the typing given in the avro schema, 
keeping long and float values unquoted.
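
For reference, a schema of the kind described can be built and inspected with the Avro Java API; a minimal sketch (field names are hypothetical, not taken from the attached test_schema.avsc):

    import org.apache.avro.Schema;

    public class MixedTypeSchemaSketch {
        // A record schema mixing string, long, and double fields. Per this
        // schema, "count" and "price" should be emitted unquoted in JSON.
        private static final String SCHEMA_JSON =
            "{\"type\":\"record\",\"name\":\"test\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"string\"},"
            + "{\"name\":\"count\",\"type\":\"long\"},"
            + "{\"name\":\"price\",\"type\":\"double\"}]}";

        public static void main(String[] args) {
            Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
            System.out.println(schema.getField("count").schema().getType()); // LONG
            System.out.println(schema.getField("price").schema().getType()); // DOUBLE
        }
    }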



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5004) Ability to Execute File (FTP/CIFS/SFTP) Copy jobs on Mapreduce From Nifi

2018-03-22 Thread Greg Senia (JIRA)
Greg Senia created NIFI-5004:


 Summary: Ability to Execute File (FTP/CIFS/SFTP) Copy jobs on 
Mapreduce From Nifi
 Key: NIFI-5004
 URL: https://issues.apache.org/jira/browse/NIFI-5004
 Project: Apache NiFi
  Issue Type: Wish
Reporter: Greg Senia


Would like to see NiFi run programs on MapReduce, examples of these being 
FTP2HDFS [https://github.com/gss2002/ftp2hdfs] and CIFS2HDFS 
[https://github.com/gss2002/cifs2hdfs]: a MapReduce application where the 
final resting place is HDFS, without any type of data transform on the way in. 
This would reduce overhead on the NiFi node and move the incoming data directly 
to the datanode via short-circuit reads/writes. I currently have these two 
applications running as MR jobs now, and would like to be able to do this from 
within NiFi pointing at HDFS/YARN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5003) nifi.sh install is broken in systemd operating systems

2018-03-22 Thread Aldrin Piri (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410207#comment-16410207
 ] 

Aldrin Piri commented on NIFI-5003:
---

exemplary work to support this: 
https://gist.github.com/ddewaele/54f67b0e9afaa538c31723cd2f609e14

> nifi.sh install is broken in systemd operating systems 
> ---
>
> Key: NIFI-5003
> URL: https://issues.apache.org/jira/browse/NIFI-5003
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Samuel Miller
>Priority: Major
>
> I've been unable to use systemctl after executing
> `bin/nifi.sh install nifi`
>  
> Using it generates the following errors when attempting to use either 
> systemctl commands or apt-get dist-upgrade commands:
> insserv: Starting nifi depends on plymouth and therefore on system facility 
> `$all' which can not be true!
> [this message is repeated 15 times in total]
> insserv: Max recursions depth 99 reached
>  
>  
> My system: 
> $ java -version
> java version "1.8.0_161"
> Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
> samuel@nifi-1:~$ mvn -version
> The program 'mvn' is currently not installed. To run 'mvn' please ask your 
> administrator to install the package 'maven'
> samuel@nifi-1:~$ uname -a
> Linux nifi-1.west.usermind.com 4.4.0-1052-aws #61-Ubuntu SMP Mon Feb 12 
> 23:05:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5003) nifi.sh install is broken in systemd operating systems

2018-03-22 Thread Aldrin Piri (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410194#comment-16410194
 ] 

Aldrin Piri commented on NIFI-5003:
---

From chat, this was Ubuntu 16.04.4

> nifi.sh install is broken in systemd operating systems 
> ---
>
> Key: NIFI-5003
> URL: https://issues.apache.org/jira/browse/NIFI-5003
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Samuel Miller
>Priority: Major
>
> I've been unable to use systemctl after executing
> `bin/nifi.sh install nifi`
>  
> Using it generates the following errors when attempting to use either 
> systemctl commands or apt-get dist-upgrade commands:
> insserv: Starting nifi depends on plymouth and therefore on system facility 
> `$all' which can not be true!
> [this message is repeated 15 times in total]
> insserv: Max recursions depth 99 reached
>  
>  
> My system: 
> $ java -version
> java version "1.8.0_161"
> Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
> samuel@nifi-1:~$ mvn -version
> The program 'mvn' is currently not installed. To run 'mvn' please ask your 
> administrator to install the package 'maven'
> samuel@nifi-1:~$ uname -a
> Linux nifi-1.west.usermind.com 4.4.0-1052-aws #61-Ubuntu SMP Mon Feb 12 
> 23:05:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5003) nifi.sh install is broken in systemd operating systems

2018-03-22 Thread Samuel Miller (JIRA)
Samuel Miller created NIFI-5003:
---

 Summary: nifi.sh install is broken in systemd operating systems 
 Key: NIFI-5003
 URL: https://issues.apache.org/jira/browse/NIFI-5003
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.5.0, 1.4.0
Reporter: Samuel Miller


I've been unable to use systemctl after executing

`bin/nifi.sh install nifi`

 

Using it generates the following errors when attempting to use either systemctl 
commands or apt-get dist-upgrade commands:

insserv: Starting nifi depends on plymouth and therefore on system facility 
`$all' which can not be true!
[this message is repeated 15 times in total]
insserv: Max recursions depth 99 reached

 

 

My system: 

$ java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
samuel@nifi-1:~$ mvn -version
The program 'mvn' is currently not installed. To run 'mvn' please ask your 
administrator to install the package 'maven'
samuel@nifi-1:~$ uname -a
Linux nifi-1.west.usermind.com 4.4.0-1052-aws #61-Ubuntu SMP Mon Feb 12 
23:05:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410096#comment-16410096
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176531476
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToBytes.java
 ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToBytes extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToBytes(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toBytes", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+
+if (!(fv.getValue() instanceof String)) {
+return fv;
--- End diff --

Agreed, the top-level caller is expecting that... but in this case, we 
cannot give the caller the correct type of data. So I think it's best to just 
throw instead of giving the caller the wrong data... if that's happening in 
ToDate then it's either a bug there as well, or perhaps there's some 
undocumented assumption being made about what else the type could be??
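
A standalone sketch of the behavior being suggested here (the method shape and exception type are illustrative, not taken from the PR):

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    final class ToBytesSketch {
        // Throw on a non-String value instead of silently returning the
        // caller's value unchanged with the wrong type.
        static byte[] toBytes(final Object value, final Charset charset) {
            if (!(value instanceof String)) {
                throw new IllegalArgumentException("toBytes() expects a String but got "
                    + (value == null ? "null" : value.getClass().getSimpleName()));
            }
            return ((String) value).getBytes(charset);
        }

        public static void main(String[] args) {
            System.out.println(toBytes("hello", StandardCharsets.UTF_8).length); // 5
        }
    }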


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reporting (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values, 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176531476
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToBytes.java
 ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToBytes extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToBytes(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toBytes", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+
+if (!(fv.getValue() instanceof String)) {
+return fv;
--- End diff --

Agreed, the top-level caller is expecting that... but in this case, we 
cannot give the caller the correct type of data. So I think it's best to just 
throw instead of giving the caller the wrong data... if that's happening in 
ToDate then it's either a bug there as well, or perhaps there's some 
undocumented assumption being made about what else the type could be??


---


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410092#comment-16410092
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176530763
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -609,6 +623,9 @@ private static Object convertToAvroObject(final Object 
rawValue, final Schema fi
 if (rawValue instanceof byte[]) {
 return ByteBuffer.wrap((byte[]) rawValue);
 }
+if (rawValue instanceof String) {
+return ByteBuffer.wrap(((String) 
rawValue).getBytes(charset));
--- End diff --

Whoops - my bad on this one. This is #convertToAvroObject, and I was 
thinking of #normalizeValue. In this case, we are converting into the object 
that Avro wants, so a ByteBuffer is the correct thing to do.
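
For context, the conversion being discussed reduces to encoding the String with the configured charset and wrapping the result; a minimal standalone illustration:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    final class AvroBytesSketch {
        public static void main(String[] args) {
            // Avro represents "bytes" fields as ByteBuffer, so a String
            // value is encoded with the charset and then wrapped.
            ByteBuffer buf = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));
            System.out.println(buf.remaining()); // 5
        }
    }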


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reporting (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values, 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176530763
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -609,6 +623,9 @@ private static Object convertToAvroObject(final Object 
rawValue, final Schema fi
 if (rawValue instanceof byte[]) {
 return ByteBuffer.wrap((byte[]) rawValue);
 }
+if (rawValue instanceof String) {
+return ByteBuffer.wrap(((String) 
rawValue).getBytes(charset));
--- End diff --

Whoops - my bad on this one. This is #convertToAvroObject, and I was 
thinking of #normalizeValue. In this case, we are converting into the object 
that Avro wants, so a ByteBuffer is the correct thing to do.


---


[jira] [Commented] (NIFI-4927) Create InfluxDB Query Processor

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410077#comment-16410077
 ] 

ASF GitHub Bot commented on NIFI-4927:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176526551
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute InfluxDB query from the 
content of a FlowFile.  Please check details of the supported queries in 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
+})
+public class ExecuteInfluxDBQuery extends AbstractInfluxDBProcessor {
+
+public static final String INFLUX_DB_EXECUTED_QUERY = 
"influxdb.executed.query";
+
+public static final PropertyDescriptor INFLUX_DB_QUERY_RESULT_TIMEUNIT 
= new PropertyDescriptor.Builder()
+.name("influxdb-query-result-time-unit")
+.displayName("Query Result Time Units")
+.description("The time unit of query results from the 
InfluxDB")
+.defaultValue(TimeUnit.NANOSECONDS.name())
+.required(true)
+.expressionLanguageSupported(true)
+.allowableValues(Arrays.stream(TimeUnit.values()).map( v -> 
v.name()).collect(Collectors.toSet()))
+.sensitive(false)
+.build();
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("Falied InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_RETRY = new 

[GitHub] nifi pull request #2562: NIFI-4927 - InfluxDB Query Processor

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176524758
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
--- End diff --

Making input optional could be quite useful. See ExecuteSQL and GetMongo 
for examples of how to support both timer-driven and event-driven operation.
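
A sketch of the optional-input pattern those processors use, assuming the ProcessContext methods hasIncomingConnection() and hasNonLoopConnection() (the helper name is hypothetical):

    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;

    final class OptionalInputSketch {
        // Only demand an input FlowFile when an incoming connection exists;
        // otherwise the processor can run timer-driven with no input at all.
        static FlowFile fetchOptionalInput(ProcessContext context, ProcessSession session) {
            FlowFile flowFile = null;
            if (context.hasIncomingConnection()) {
                flowFile = session.get();
                // A connection exists but nothing has arrived yet: return
                // null so onTrigger() can simply yield and retry later.
                if (flowFile == null && context.hasNonLoopConnection()) {
                    return null;
                }
            }
            return flowFile;
        }
    }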


---


[jira] [Commented] (NIFI-4927) Create InfluxDB Query Processor

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410076#comment-16410076
 ] 

ASF GitHub Bot commented on NIFI-4927:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176528433
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/test/java/org/apache/nifi/processors/influxdb/AbstractITInfluxDB.java
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.util.TestRunner;
+import org.influxdb.InfluxDB;
+import org.influxdb.InfluxDBFactory;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import org.junit.After;
+
+/**
+ * Base integration test class for InfluxDB processors
+ */
+public class AbstractITInfluxDB {
+protected TestRunner runner;
+protected InfluxDB influxDB;
+protected String dbName = "test";
+protected String dbUrl = "http://localhost:8086";
+protected String user = "admin";
+protected String password = "admin";
+protected static final String DEFAULT_RETENTION_POLICY = "autogen";
+
+protected void initInfluxDB() throws InterruptedException, Exception {
+influxDB = InfluxDBFactory.connect(dbUrl,user,password);
+if ( influxDB.databaseExists(dbName) ) {
--- End diff --

I would strongly suggest moving this to the `@After` section because it'll 
make everything behave in one clean arc of setup -> test -> cleanup.
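
A sketch of the suggested cleanup, assuming the influxdb-java client API used elsewhere in the PR (the class shape is hypothetical):

    import org.influxdb.InfluxDB;
    import org.influxdb.dto.Query;
    import org.junit.After;

    public abstract class InfluxDBCleanupSketch {
        protected InfluxDB influxDB;
        protected String dbName = "test";

        // Drop the test database after each test so setup -> test -> cleanup
        // forms one arc and no state leaks into the next test run.
        @After
        public void tearDown() {
            if (influxDB != null) {
                influxDB.query(new Query("DROP DATABASE " + dbName, dbName));
                influxDB.close();
            }
        }
    }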


> Create InfluxDB Query Processor
> ---
>
> Key: NIFI-4927
> URL: https://issues.apache.org/jira/browse/NIFI-4927
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: measurements, query, realtime, timeseries
>
> Create InfluxDB Query processor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4927) Create InfluxDB Query Processor

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410070#comment-16410070
 ] 

ASF GitHub Bot commented on NIFI-4927:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176524914
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute InfluxDB query from the 
content of a FlowFile.  Please check details of the supported queries in 
InfluxDB documentation (https://www.influxdb.com/).")
--- End diff --

Based on the wording, should a user assume that all influxdb queries are 
supported or only some?


> Create InfluxDB Query Processor
> ---
>
> Key: NIFI-4927
> URL: https://issues.apache.org/jira/browse/NIFI-4927
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: measurements, query, realtime, timeseries
>
> Create InfluxDB Query processor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4927) Create InfluxDB Query Processor

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410074#comment-16410074
 ] 

ASF GitHub Bot commented on NIFI-4927:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176527325
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute InfluxDB query from the 
content of a FlowFile.  Please check details of the supported queries in 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
+})
+public class ExecuteInfluxDBQuery extends AbstractInfluxDBProcessor {
+
+public static final String INFLUX_DB_EXECUTED_QUERY = 
"influxdb.executed.query";
+
+public static final PropertyDescriptor INFLUX_DB_QUERY_RESULT_TIMEUNIT 
= new PropertyDescriptor.Builder()
+.name("influxdb-query-result-time-unit")
+.displayName("Query Result Time Units")
+.description("The time unit of query results from the 
InfluxDB")
+.defaultValue(TimeUnit.NANOSECONDS.name())
+.required(true)
+.expressionLanguageSupported(true)
+.allowableValues(Arrays.stream(TimeUnit.values()).map( v -> 
v.name()).collect(Collectors.toSet()))
+.sensitive(false)
+.build();
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("Falied InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_RETRY = new 

[jira] [Commented] (NIFI-4927) Create InfluxDB Query Processor

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410068#comment-16410068
 ] 

ASF GitHub Bot commented on NIFI-4927:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176524758
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
--- End diff --

Making input optional could be quite useful. See ExecuteSQL and GetMongo 
for an example of how to support timer and event driving.
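
For reference, a minimal sketch of that pattern (assuming the standard NiFi 
processor API; the class name is illustrative, and a real processor would 
keep its own properties and relationships):

    import org.apache.nifi.annotation.behavior.InputRequirement;
    import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.exception.ProcessException;

    @InputRequirement(Requirement.INPUT_ALLOWED)
    public class OptionalInputSketch extends AbstractProcessor {

        @Override
        public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
            FlowFile flowFile = null;
            if (context.hasIncomingConnection()) {
                flowFile = session.get();
                // Something is connected but nothing is queued yet; wait for
                // the next trigger rather than running a query from nothing.
                if (flowFile == null && context.hasNonLoopConnection()) {
                    return;
                }
            }
            // At this point either a FlowFile supplies the query content, or
            // there is no incoming connection and the query would come from a
            // processor property (timer-driven mode), as ExecuteSQL does.
        }
    }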


> Create InfluxDB Query Processor
> ---
>
> Key: NIFI-4927
> URL: https://issues.apache.org/jira/browse/NIFI-4927
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: measurements,, query, realtime, timeseries
>
> Create InfluxDB Query processor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4927) Create InfluxDB Query Processor

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410073#comment-16410073
 ] 

ASF GitHub Bot commented on NIFI-4927:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176526847
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check details of the supported queries in the 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
+})
+public class ExecuteInfluxDBQuery extends AbstractInfluxDBProcessor {
+
+public static final String INFLUX_DB_EXECUTED_QUERY = 
"influxdb.executed.query";
+
+public static final PropertyDescriptor INFLUX_DB_QUERY_RESULT_TIMEUNIT 
= new PropertyDescriptor.Builder()
+.name("influxdb-query-result-time-unit")
+.displayName("Query Result Time Units")
+.description("The time unit of query results from the 
InfluxDB")
+.defaultValue(TimeUnit.NANOSECONDS.name())
+.required(true)
+.expressionLanguageSupported(true)
+.allowableValues(Arrays.stream(TimeUnit.values()).map( v -> 
v.name()).collect(Collectors.toSet()))
+.sensitive(false)
+.build();
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("Falied InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_RETRY = new 

[jira] [Commented] (NIFI-4927) Create InfluxDB Query Processor

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410071#comment-16410071
 ] 

ASF GitHub Bot commented on NIFI-4927:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176525442
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check details of the supported queries in the 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
--- End diff --

We've started making this user-defined in some other processors.
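
For illustration, the usual shape of such a user-defined value is an extra 
property evaluated against the FlowFile; a hedged sketch with an invented 
property name (not part of this PR):

    import org.apache.nifi.components.PropertyDescriptor;
    import org.apache.nifi.processor.util.StandardValidators;

    // Hypothetical property letting the user choose the attribute name that
    // receives the executed query; illustrative only.
    public static final PropertyDescriptor RESULT_QUERY_ATTRIBUTE = new PropertyDescriptor.Builder()
            .name("influxdb-executed-query-attribute")
            .displayName("Executed Query Attribute")
            .description("Name of the FlowFile attribute in which to record the executed query")
            .defaultValue("influxdb.executed.query")
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .expressionLanguageSupported(true)
            .required(true)
            .build();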


> Create InfluxDB Query Processor
> ---
>
> Key: NIFI-4927
> URL: https://issues.apache.org/jira/browse/NIFI-4927
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: measurements,, query, realtime, timeseries
>
> Create InfluxDB Query processor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4927) Create InfluxDB Query Processor

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410069#comment-16410069
 ] 

ASF GitHub Bot commented on NIFI-4927:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176525929
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check details of the supported queries in the 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
+})
+public class ExecuteInfluxDBQuery extends AbstractInfluxDBProcessor {
+
+public static final String INFLUX_DB_EXECUTED_QUERY = 
"influxdb.executed.query";
+
+public static final PropertyDescriptor INFLUX_DB_QUERY_RESULT_TIMEUNIT 
= new PropertyDescriptor.Builder()
+.name("influxdb-query-result-time-unit")
+.displayName("Query Result Time Units")
+.description("The time unit of query results from the 
InfluxDB")
+.defaultValue(TimeUnit.NANOSECONDS.name())
+.required(true)
+.expressionLanguageSupported(true)
+.allowableValues(Arrays.stream(TimeUnit.values()).map( v -> 
v.name()).collect(Collectors.toSet()))
+.sensitive(false)
+.build();
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("Falied InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_RETRY = new 

[GitHub] nifi pull request #2562: NIFI-4927 - InfluxDB Query Processor

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176524914
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check details of the supported queries in the 
InfluxDB documentation (https://www.influxdb.com/).")
--- End diff --

Based on the wording, should a user assume that all influxdb queries are 
supported or only some?


---


[GitHub] nifi pull request #2562: NIFI-4927 - InfluxDB Query Processor

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176527325
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check details of the supported queries in the 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
+})
+public class ExecuteInfluxDBQuery extends AbstractInfluxDBProcessor {
+
+public static final String INFLUX_DB_EXECUTED_QUERY = 
"influxdb.executed.query";
+
+public static final PropertyDescriptor INFLUX_DB_QUERY_RESULT_TIMEUNIT 
= new PropertyDescriptor.Builder()
+.name("influxdb-query-result-time-unit")
+.displayName("Query Result Time Units")
+.description("The time unit of query results from the 
InfluxDB")
+.defaultValue(TimeUnit.NANOSECONDS.name())
+.required(true)
+.expressionLanguageSupported(true)
+.allowableValues(Arrays.stream(TimeUnit.values()).map( v -> 
v.name()).collect(Collectors.toSet()))
+.sensitive(false)
+.build();
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("Falied InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_RETRY = new 
Relationship.Builder().name("retry")
+.description("Failed queries that are retryable exception are 
routed to this relationship").build();
+
+private static final Set relationships;
+private static final List 

[GitHub] nifi pull request #2562: NIFI-4927 - InfluxDB Query Processor

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176525442
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check details of the supported queries in the 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
--- End diff --

We've started making this user-defined in some other processors.


---


[jira] [Commented] (NIFI-4927) Create InfluxDB Query Processor

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410072#comment-16410072
 ] 

ASF GitHub Bot commented on NIFI-4927:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176529115
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/test/java/org/apache/nifi/processors/influxdb/ITExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import static org.junit.Assert.assertEquals;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunners;
+import org.influxdb.InfluxDB;
+import org.influxdb.dto.QueryResult;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Integration test for executing InfluxDB queries. Please ensure that 
InfluxDB is running
+ * on localhost with the default port and has a database named test with a 
measurement named test. 
Please set user
+ * and password if applicable before running the integration tests.
+ */
+public class ITExecuteInfluxDBQuery extends AbstractITInfluxDB {
+
+@Before
+public void setUp() throws Exception {
+runner = TestRunners.newTestRunner(ExecuteInfluxDBQuery.class);
+initializeRunner();
--- End diff --

This can be merged into the other init function. It should at least come 
after the database init code so that you don't spend any time spinning up 
testing infra on the NiFi side if the database isn't working.
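
A hedged sketch of what that merged ordering could look like, as a fragment 
of the test class reusing the method names from the quoted code:

    @Before
    public void setUp() throws Exception {
        // Connect to the database first so the test aborts immediately if
        // InfluxDB is unreachable, before any NiFi test harness is built.
        initInfluxDB();
        runner = TestRunners.newTestRunner(ExecuteInfluxDBQuery.class);
        initializeRunner();
    }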


> Create InfluxDB Query Processor
> ---
>
> Key: NIFI-4927
> URL: https://issues.apache.org/jira/browse/NIFI-4927
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: measurements,, query, realtime, timeseries
>
> Create InfluxDB Query processor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4927) Create InfluxDB Query Processor

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410075#comment-16410075
 ] 

ASF GitHub Bot commented on NIFI-4927:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176526183
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check details of the supported queries in the 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
+})
+public class ExecuteInfluxDBQuery extends AbstractInfluxDBProcessor {
+
+public static final String INFLUX_DB_EXECUTED_QUERY = 
"influxdb.executed.query";
+
+public static final PropertyDescriptor INFLUX_DB_QUERY_RESULT_TIMEUNIT 
= new PropertyDescriptor.Builder()
+.name("influxdb-query-result-time-unit")
+.displayName("Query Result Time Units")
+.description("The time unit of query results from the 
InfluxDB")
+.defaultValue(TimeUnit.NANOSECONDS.name())
+.required(true)
+.expressionLanguageSupported(true)
+.allowableValues(Arrays.stream(TimeUnit.values()).map( v -> 
v.name()).collect(Collectors.toSet()))
+.sensitive(false)
+.build();
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("Falied InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_RETRY = new 

[GitHub] nifi pull request #2562: NIFI-4927 - InfluxDB Query Processor

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176528433
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/test/java/org/apache/nifi/processors/influxdb/AbstractITInfluxDB.java
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.util.TestRunner;
+import org.influxdb.InfluxDB;
+import org.influxdb.InfluxDBFactory;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import org.junit.After;
+
+/**
+ * Base integration test class for InfluxDB processors
+ */
+public class AbstractITInfluxDB {
+protected TestRunner runner;
+protected InfluxDB influxDB;
+protected String dbName = "test";
protected String dbUrl = "http://localhost:8086";
+protected String user = "admin";
+protected String password = "admin";
+protected static final String DEFAULT_RETENTION_POLICY = "autogen";
+
+protected void initInfluxDB() throws InterruptedException, Exception {
+influxDB = InfluxDBFactory.connect(dbUrl,user,password);
+if ( influxDB.databaseExists(dbName) ) {
--- End diff --

I would strongly suggest moving this to the `@After` section because it'll 
make everything behave in one clean arc of setup -> test -> cleanup.
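
A sketch of that arc, assuming the fields from the quoted base class and the 
influxdb-java client API:

    @After
    public void tearDown() {
        // Cleanup lives here so every test runs setup -> test -> cleanup,
        // even when an assertion fails partway through.
        if (influxDB != null) {
            influxDB.deleteDatabase(dbName);
            influxDB.close();
        }
    }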


---


[GitHub] nifi pull request #2562: NIFI-4927 - InfluxDB Query Processor

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176525929
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check details of the supported queries in the 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
+})
+public class ExecuteInfluxDBQuery extends AbstractInfluxDBProcessor {
+
+public static final String INFLUX_DB_EXECUTED_QUERY = 
"influxdb.executed.query";
+
+public static final PropertyDescriptor INFLUX_DB_QUERY_RESULT_TIMEUNIT 
= new PropertyDescriptor.Builder()
+.name("influxdb-query-result-time-unit")
+.displayName("Query Result Time Units")
+.description("The time unit of query results from the 
InfluxDB")
+.defaultValue(TimeUnit.NANOSECONDS.name())
+.required(true)
+.expressionLanguageSupported(true)
+.allowableValues(Arrays.stream(TimeUnit.values()).map( v -> 
v.name()).collect(Collectors.toSet()))
+.sensitive(false)
+.build();
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("Falied InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_RETRY = new 
Relationship.Builder().name("retry")
+.description("Failed queries that are retryable exception are 
routed to this relationship").build();
+
+private static final Set relationships;
+private static final List 

[GitHub] nifi pull request #2562: NIFI-4927 - InfluxDB Query Processor

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176526551
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check details of the supported queries in the 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
+})
+public class ExecuteInfluxDBQuery extends AbstractInfluxDBProcessor {
+
+public static final String INFLUX_DB_EXECUTED_QUERY = 
"influxdb.executed.query";
+
+public static final PropertyDescriptor INFLUX_DB_QUERY_RESULT_TIMEUNIT 
= new PropertyDescriptor.Builder()
+.name("influxdb-query-result-time-unit")
+.displayName("Query Result Time Units")
+.description("The time unit of query results from the 
InfluxDB")
+.defaultValue(TimeUnit.NANOSECONDS.name())
+.required(true)
+.expressionLanguageSupported(true)
+.allowableValues(Arrays.stream(TimeUnit.values()).map( v -> 
v.name()).collect(Collectors.toSet()))
+.sensitive(false)
+.build();
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("Falied InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_RETRY = new 
Relationship.Builder().name("retry")
+.description("Failed queries that are retryable exception are 
routed to this relationship").build();
+
+private static final Set relationships;
+private static final List 

[GitHub] nifi pull request #2562: NIFI-4927 - InfluxDB Query Processor

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176526847
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check details of the supported queries in the 
InfluxDB documentation (https://www.influxdb.com/).")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
+})
+public class ExecuteInfluxDBQuery extends AbstractInfluxDBProcessor {
+
+public static final String INFLUX_DB_EXECUTED_QUERY = 
"influxdb.executed.query";
+
+public static final PropertyDescriptor INFLUX_DB_QUERY_RESULT_TIMEUNIT 
= new PropertyDescriptor.Builder()
+.name("influxdb-query-result-time-unit")
+.displayName("Query Result Time Units")
+.description("The time unit of query results from the 
InfluxDB")
+.defaultValue(TimeUnit.NANOSECONDS.name())
+.required(true)
+.expressionLanguageSupported(true)
+.allowableValues(Arrays.stream(TimeUnit.values()).map( v -> 
v.name()).collect(Collectors.toSet()))
+.sensitive(false)
+.build();
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("Falied InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_RETRY = new 
Relationship.Builder().name("retry")
+.description("Failed queries that are retryable exception are 
routed to this relationship").build();
+
+private static final Set relationships;
+private static final List 

[GitHub] nifi pull request #2562: NIFI-4927 - InfluxDB Query Processor

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176529115
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/test/java/org/apache/nifi/processors/influxdb/ITExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import static org.junit.Assert.assertEquals;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunners;
+import org.influxdb.InfluxDB;
+import org.influxdb.dto.QueryResult;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Integration test for executing InfluxDB queries. Please ensure that 
InfluxDB is running
+ * on localhost with the default port and has a database named test with a 
measurement named test. 
Please set user
+ * and password if applicable before running the integration tests.
+ */
+public class ITExecuteInfluxDBQuery extends AbstractITInfluxDB {
+
+@Before
+public void setUp() throws Exception {
+runner = TestRunners.newTestRunner(ExecuteInfluxDBQuery.class);
+initializeRunner();
--- End diff --

This can be merged into the other init function. It should at least come 
after the database init code so that you don't spend any time spinning up 
testing infra on the NiFi side if the database isn't working.


---


[GitHub] nifi pull request #2562: NIFI-4927 - InfluxDB Query Processor

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2562#discussion_r176526183
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.influxdb.dto.Query;
+import org.influxdb.dto.QueryResult;
+import com.google.gson.Gson;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.net.SocketTimeoutException;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","get", "read", "query", "timeseries"})
+@CapabilityDescription("Processor to execute an InfluxDB query from the 
content of a FlowFile. Please check the InfluxDB documentation 
(https://www.influxdb.com/) for details of the supported queries.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+@WritesAttribute(attribute = 
ExecuteInfluxDBQuery.INFLUX_DB_EXECUTED_QUERY, description = "InfluxDB executed 
query"),
+})
+public class ExecuteInfluxDBQuery extends AbstractInfluxDBProcessor {
+
+public static final String INFLUX_DB_EXECUTED_QUERY = 
"influxdb.executed.query";
+
+public static final PropertyDescriptor INFLUX_DB_QUERY_RESULT_TIMEUNIT 
= new PropertyDescriptor.Builder()
+.name("influxdb-query-result-time-unit")
+.displayName("Query Result Time Units")
+.description("The time unit of query results from the 
InfluxDB")
+.defaultValue(TimeUnit.NANOSECONDS.name())
+.required(true)
+.expressionLanguageSupported(true)
+.allowableValues(Arrays.stream(TimeUnit.values()).map( v -> 
v.name()).collect(Collectors.toSet()))
+.sensitive(false)
+.build();
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("Falied InfluxDB queries are routed to this 
relationship").build();
+
+static final Relationship REL_RETRY = new 
Relationship.Builder().name("retry")
+.description("Failed queries that are retryable exception are 
routed to this relationship").build();
+
+private static final Set<Relationship> relationships;
+private static final List<PropertyDescriptor> propertyDescriptors;

[jira] [Commented] (NIFIREG-147) Add Keycloak authentication method

2018-03-22 Thread Kevin Doran (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409976#comment-16409976
 ] 

Kevin Doran commented on NIFIREG-147:
-

[~ror6ax] if you or someone you know is interested in working on this idea, it 
could probably be added by providing custom implementations of some of the 
providers in the nifi-registry-security-api module [1]. These are interfaces 
defined to allow extensions to be added to NiFi Registry. In particular, if 
you implemented an IdentityProvider [2] and a UserGroupProvider [3] backed by 
Keycloak, you could add that jar and configure NiFi Registry to use your 
extension.

[1] 
https://github.com/apache/nifi-registry/tree/master/nifi-registry-security-api/src/main/java/org/apache/nifi/registry/security

[2] 
https://github.com/apache/nifi-registry/blob/master/nifi-registry-security-api/src/main/java/org/apache/nifi/registry/security/authentication/IdentityProvider.java

[3] 
https://github.com/apache/nifi-registry/blob/master/nifi-registry-security-api/src/main/java/org/apache/nifi/registry/security/authorization/UserGroupProvider.java
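
For anyone picking this up, a bare-bones skeleton might look like the 
following. This is a sketch only: the method names are paraphrased from the 
IdentityProvider interface linked in [2], and all of the Keycloak delegation 
is left as comments since it depends on the client library chosen.

    public class KeycloakIdentityProvider implements IdentityProvider {

        @Override
        public void onConfigured(ProviderConfigurationContext configurationContext) {
            // Read the Keycloak server URL, realm, and client credentials
            // from the provider configuration (identity-providers.xml).
        }

        @Override
        public AuthenticationRequest extractCredentials(HttpServletRequest request) {
            // e.g. pull a bearer token or basic-auth credentials off the request;
            // returning null tells the framework no credentials were found.
            return null;
        }

        @Override
        public AuthenticationResponse authenticate(AuthenticationRequest request)
                throws InvalidCredentialsException {
            // Delegate to Keycloak's token endpoint and map the token subject
            // to an AuthenticationResponse identity.
            throw new InvalidCredentialsException("Keycloak delegation not implemented in this sketch");
        }

        @Override
        public IdentityProviderUsage getUsageInstructions() {
            return null; // describe the expected credentials in a real implementation
        }

        @Override
        public void preDestruction() {
            // release any Keycloak client resources here
        }
    }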

> Add Keycloak authentication method
> --
>
> Key: NIFIREG-147
> URL: https://issues.apache.org/jira/browse/NIFIREG-147
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Gregory Reshetniak
>Priority: Major
>
> Keycloak does implement a lot of related functionality, including groups, 
> users and such. It would be great to have first-class integration available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-2853) Improve ListHDFS state tracking

2018-03-22 Thread Sivaprasanna Sethuraman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409944#comment-16409944
 ] 

Sivaprasanna Sethuraman commented on NIFI-2853:
---

It's a bug. Created a ticket NIFI-5000 and also raised a PR.

> Improve ListHDFS state tracking
> ---
>
> Key: NIFI-2853
> URL: https://issues.apache.org/jira/browse/NIFI-2853
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Bryan Bende
>Priority: Minor
>
> Currently ListHDFS tracks two properties in state management, 
> "listing.timestamp" and "emitted.timestamp". In the 1.0.0 release, the 
> directory property now supports expression language which means the directory 
> being listed could dynamically change on any execution of the processor. 
> The processor should be changed to store state specific to the directory that 
> was listed, for example "listing.timestamp.dir1" and "emitted.timestamp.dir1".
> This would also help in a clustered scenario... currently ListHDFS has to be 
> run on the primary node only, otherwise each node will overwrite the others' 
> state and produce unexpected results. With the above improvement, if the 
> directory evaluated to a unique path for each node, the processor would store 
> the state of each of those paths separately.
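
To illustrate the proposed keying (a sketch only; the DIRECTORY property 
reference and StateManager usage are assumed, and the tracked timestamp fields 
are named illustratively):

    // Key the saved timestamps by the directory that was actually listed
    final String directory = context.getProperty(DIRECTORY)
            .evaluateAttributeExpressions().getValue();

    final Map<String, String> state = new HashMap<>();
    state.put("listing.timestamp." + directory, String.valueOf(latestListingTimestamp));
    state.put("emitted.timestamp." + directory, String.valueOf(latestEmittedTimestamp));
    context.getStateManager().setState(state, Scope.CLUSTER);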



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5000) ListHDFS doesn't list files from updated 'directory'

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409933#comment-16409933
 ] 

ASF GitHub Bot commented on NIFI-5000:
--

GitHub user zenfenan opened a pull request:

https://github.com/apache/nifi/pull/2576

NIFI-5000: ListHDFS properly lists files from updated directory path

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zenfenan/nifi NIFI-5000

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2576.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2576


commit 84d8e2e6f9af2ca59f05990197a7a13e648ebf2a
Author: zenfenan 
Date:   2018-03-22T17:13:52Z

NIFI-5000: ListHDFS properly lists files from updated directory path




> ListHDFS doesn't list files from updated 'directory'
> 
>
> Key: NIFI-5000
> URL: https://issues.apache.org/jira/browse/NIFI-5000
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
> Attachments: 
> 0001-SNTDA-5000-ListHDFS-properly-lists-files-from-update.patch
>
>
> ListHDFS lists files and saves the latest listed files' modified time - 
> latestTimestampListed and latestTimestampEmitted in the `StateMap`. It is 
> overriding `onPropertyModified` to check if the `Directory` or the `File 
> Filter` has been modified and if they are indeed modified, it will reset the 
> statemap variables to `-1L` so as to list all the files from the updated 
> `Directory` or according to the updated `File Filter`. However it is not 
> working as intended.
> *Scenario:*
>  # Create two directories in HDFS
>  ## > hdfs dfs -mkdir /test1
>  ## > hdfs dfs -mkdir /test2
>  # Write files to the above directories in the following order:
>  ## > hdfs dfs -put sample.txt */test1/t1_1.txt* 
>  ## > hdfs dfs -put sample.txt */test2/t2_1.txt*
>  ## > hdfs dfs -put sample.txt */test1/t1_2.txt*
>  # Configure ListHDFS and set *Directory* to */test1* and start the 
> processor. It will produce two flowfiles: *t1_1.txt* and *t1_2.txt*
>  # Stop the processor. Configure and set *Directory* to */test2*. Ideally the 
> state variables (listed and emitted timestamps) should be reset and the 
> processor should list the file *t2_1.txt*, but it does not.
>  # Now put one more file to test2:
>  ## > hdfs dfs -put sample.txt */test2/t2_2.txt*
>  # This lists only the file *t2_2.txt*; file *t2_1.txt* is missed.
> A little debugging helped me find that `onPropertyModified` indeed works 
> as intended, but somewhere else the code still reads the last saved state, i.e. 
> the modified time of */test1/t1_2.txt*
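
For reference, the reset described above would look roughly like this (a 
sketch, not the actual ListHDFS code; the field names follow the description 
in this issue):

    @Override
    public void onPropertyModified(final PropertyDescriptor descriptor,
                                   final String oldValue, final String newValue) {
        if (DIRECTORY.equals(descriptor) || FILE_FILTER.equals(descriptor)) {
            // Forget what was listed so the next run starts from scratch
            latestTimestampListed = -1L;
            latestTimestampEmitted = -1L;
            // The bug: the persisted StateMap must be cleared too, otherwise
            // onTrigger reads the old timestamps back on the next run.
        }
    }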



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2576: NIFI-5000: ListHDFS properly lists files from updat...

2018-03-22 Thread zenfenan
GitHub user zenfenan opened a pull request:

https://github.com/apache/nifi/pull/2576

NIFI-5000: ListHDFS properly lists files from updated directory path

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zenfenan/nifi NIFI-5000

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2576.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2576


commit 84d8e2e6f9af2ca59f05990197a7a13e648ebf2a
Author: zenfenan 
Date:   2018-03-22T17:13:52Z

NIFI-5000: ListHDFS properly lists files from updated directory path




---


[jira] [Updated] (NIFI-5000) ListHDFS doesn't list files from updated 'directory'

2018-03-22 Thread Sivaprasanna Sethuraman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaprasanna Sethuraman updated NIFI-5000:
--
Status: Patch Available  (was: In Progress)

> ListHDFS doesn't list files from updated 'directory'
> 
>
> Key: NIFI-5000
> URL: https://issues.apache.org/jira/browse/NIFI-5000
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0, 1.4.0, 1.3.0, 1.2.0
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
> Attachments: 
> 0001-SNTDA-5000-ListHDFS-properly-lists-files-from-update.patch
>
>
> ListHDFS lists files and saves the latest listed files' modified time - 
> latestTimestampListed and latestTimestampEmitted in the `StateMap`. It is 
> overriding `onPropertyModified` to check if the `Directory` or the `File 
> Filter` has been modified and if they are indeed modified, it will reset the 
> statemap variables to `-1L` so as to list all the files from the updated 
> `Directory` or according to the updated `File Filter`. However it is not 
> working as intended.
> *Scenario:*
>  # Create two directories in HDFS
>  ## > hdfs dfs -mkdir /test1
>  ## > hdfs dfs -mkdir /test2
>  # Write files to the above directories in the following order:
>  ## > hdfs dfs -put sample.txt */test1/t1_1.txt* 
>  ## > hdfs dfs -put sample.txt */test2/t2_1.txt*
>  ## > hdfs dfs -put sample.txt */test1/t1_2.txt*
>  # Configure ListHDFS and set *Directory* to */test1* and start the 
> processor. It will produce two flowfiles: *t1_1.txt* and *t1_2.txt*
>  # Stop the processor. Configure and set *Directory* to */test2*. Ideally the 
> state variables (listed and emitted timestamps) should be reset and the 
> processor should list the file *t2_1.txt*, but it does not.
>  # Now put one more file to test2:
>  ## > hdfs dfs -put sample.txt */test2/t2_2.txt*
>  # This lists only the file *t2_2.txt*; file *t2_1.txt* is missed.
> A little debugging helped me find that `onPropertyModified` indeed works 
> as intended, but somewhere else the code still reads the last saved state, i.e. 
> the modified time of */test1/t1_2.txt*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5000) ListHDFS doesn't list files from updated 'directory'

2018-03-22 Thread Sivaprasanna Sethuraman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaprasanna Sethuraman updated NIFI-5000:
--
Attachment: 0001-SNTDA-5000-ListHDFS-properly-lists-files-from-update.patch

> ListHDFS doesn't list files from updated 'directory'
> 
>
> Key: NIFI-5000
> URL: https://issues.apache.org/jira/browse/NIFI-5000
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
> Attachments: 
> 0001-SNTDA-5000-ListHDFS-properly-lists-files-from-update.patch
>
>
> ListHDFS lists files and saves the latest listed files' modified time - 
> latestTimestampListed and latestTimestampEmitted in the `StateMap`. It is 
> overriding `onPropertyModified` to check if the `Directory` or the `File 
> Filter` has been modified and if they are indeed modified, it will reset the 
> statemap variables to `-1L` so as to list all the files from the updated 
> `Directory` or according to the updated `File Filter`. However it is not 
> working as intended.
> *Scenario:*
>  # Create two directories in HDFS
>  ## > hdfs dfs -mkdir /test1
>  ## > hdfs dfs -mkdir /test2
>  # Write files to the above directories in the following order:
>  ## > hdfs dfs -put sample.txt */test1/t1_1.txt* 
>  ## > hdfs dfs -put sample.txt */test2/t2_1.txt*
>  ## > hdfs dfs -put sample.txt */test1/t1_2.txt*
>  # Configure ListHDFS and set *Directory* to */test1* and start the 
> processor. It will produce two flowfiles: *t1_1.txt* and *t1_2.txt*
>  # Stop the processor. Configure and set *Directory* to */test2*. Ideally the 
> state variables (listed and emitted timestamps) should be reset and the 
> processor should list the file *t2_1.txt*, but it does not.
>  # Now put one more file to test2:
>  ## > hdfs dfs -put sample.txt */test2/t2_2.txt*
>  # This lists only the file *t2_2.txt*; file *t2_1.txt* is missed.
> A little debugging helped me find that `onPropertyModified` indeed works 
> as intended, but somewhere else the code still reads the last saved state, i.e. 
> the modified time of */test1/t1_2.txt*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4149) Indicate if EL is evaluated against FFs or not

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409875#comment-16409875
 ] 

ASF GitHub Bot commented on NIFI-4149:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2205
  
Hey @pvillard31 I definitely like where this is going. I agree that another 
PR/JIRA can be issued for the more elaborate indication of processor-specific 
variables in the EL. The only issue that I have with this as-is is that if we 
have a processor that still indicates `.expressionLanguageSupported(true)`, 
all the UI says is "Expression Language Scope: NONE", and that leads me to 
believe that Expression Language is not supported. I think if the processor 
still indicates `true` instead of the Scope, we should still show in the UI 
"Expression Language Supported: true" or whatever it is that we show 
currently.

The only other note, which is very minor, is that in the UI I would avoid 
showing the Scope as NONE, VARIABLE_REGISTRY, FLOWFILE_ATTRIBUTES and instead 
use human-friendly syntax: "Not Supported", "Variable Registry Only" and 
"Variable Registry and FlowFile Attributes" - perhaps just update 
ExpressionLanguageScope enum to contain a "description" or something and then 
populate the DTO with that? Thoughts on that?
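
For what it's worth, the enum change being floated here could be as simple as 
(a sketch of the suggestion, not existing code):

    public enum ExpressionLanguageScope {
        NONE("Not Supported"),
        VARIABLE_REGISTRY("Variable Registry Only"),
        FLOWFILE_ATTRIBUTES("Variable Registry and FlowFile Attributes");

        private final String description;

        ExpressionLanguageScope(final String description) {
            this.description = description;
        }

        public String getDescription() {
            return description;
        }
    }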


> Indicate if EL is evaluated against FFs or not
> --
>
> Key: NIFI-4149
> URL: https://issues.apache.org/jira/browse/NIFI-4149
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> With the addition of EL in a lot of places to improve SDLC and workflow 
> staging, it becomes important to indicate to users if the expression language 
> enabled on a property will be evaluated against the attributes of incoming 
> flow files or if it will only be evaluated against various variable stores 
> (env variables, variable registry, etc).
> Actually, the expression language (without evaluation against flow files) 
> could be allowed on any property by default, and evaluation against flow 
> files would be what is actually indicated in the UI as we are doing today. 
> Adopting this approach could solve a lot of JIRA/PRs we are seeing to add EL 
> on some specific properties (without evaluation against FFs).
> Having expression language to access external values could make sense on any 
> property for any user. However evaluating the expression language against FFs 
> is clearly a more complex challenge when it comes to session management and 
> such.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFIREG-156) ContextPath for NiFi Registry UI

2018-03-22 Thread Damian Czaja (JIRA)
Damian Czaja created NIFIREG-156:


 Summary: ContextPath for NiFi Registry UI
 Key: NIFIREG-156
 URL: https://issues.apache.org/jira/browse/NIFIREG-156
 Project: NiFi Registry
  Issue Type: Improvement
Affects Versions: 0.1.0
 Environment: NiFi registry 0.1.0 + HAProxy on Docker
Reporter: Damian Czaja


I am trying to deploy NiFi Registry behind a reverse proxy, behind a context 
path, i.e.:
 /my-nifi-registry

I added the X-ProxyContextPath header to my HAProxy configuration:
 {{http-request set-header X-ProxyContextPath /my-nifi-registry}}
 but when accessing the UI I get 404 errors for all js, css etc. files and the 
UI isn't loading, because the contextPath isn't used.

From the code I saw that only the API is using the X-ProxyContextPath or 
X-Forwarded-For headers, but the UI isn't. I think it would be useful to add 
support for the UI as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2205: NIFI-4149 - [WIP] - Indicate if EL is evaluated against FF...

2018-03-22 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2205
  
Hey @pvillard31 I definitely like where this is going. I agree that another 
PR/JIRA can be issued for the more elaborate indication of processor-specific 
variables in the EL. The only issue that I have with this as-is is that if we 
have a processor that still indicates `.expressionLanguageSupported(true)`, 
all the UI says is "Expression Language Scope: NONE", and that leads me to 
believe that Expression Language is not supported. I think if the processor 
still indicates `true` instead of the Scope, we should still show in the UI 
"Expression Language Supported: true" or whatever it is that we show 
currently.

The only other note, which is very minor, is that in the UI I would avoid 
showing the Scope as NONE, VARIABLE_REGISTRY, FLOWFILE_ATTRIBUTES and instead 
use human-friendly syntax: "Not Supported", "Variable Registry Only" and 
"Variable Registry and FlowFile Attributes" - perhaps just update 
ExpressionLanguageScope enum to contain a "description" or something and then 
populate the DTO with that? Thoughts on that?


---


[jira] [Closed] (NIFIREG-118) Create a NiFi Registry Docker Image

2018-03-22 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri closed NIFIREG-118.
---
Resolution: Duplicate

> Create a NiFi Registry Docker Image
> ---
>
> Key: NIFIREG-118
> URL: https://issues.apache.org/jira/browse/NIFIREG-118
> Project: NiFi Registry
>  Issue Type: Task
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>Priority: Minor
>
> Adding a supporting Dockerfile for Registry would help many users work 
> through some of the quick testing and evaluation of Registry in conjunction 
> with the NiFi image with the assistance of config scripts and/or 
> docker-compose.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (MINIFICPP-439) Allow cmake3 to take precedence over cmake in bootstrap

2018-03-22 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-439.
---
   Resolution: Done
Fix Version/s: 0.5.0

> Allow cmake3 to take precedence over cmake in bootstrap
> ---
>
> Key: MINIFICPP-439
> URL: https://issues.apache.org/jira/browse/MINIFICPP-439
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>Priority: Major
> Fix For: 0.5.0
>
>
> For yum-based distros, CMake 3+ is provided under the command cmake3.  Currently, 
> bootstrap checks for and prefers the command cmake.  This can be problematic 
> in yum environments where both cmake and cmake3 are installed.  Inverting 
> these to prefer cmake3 would be helpful for such cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4995) Release Apache NiFi 1.6.0

2018-03-22 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409817#comment-16409817
 ] 

Joseph Witt commented on NIFI-4995:
---

RC1 Vote is out.  

> Release Apache NiFi 1.6.0
> -
>
> Key: NIFI-4995
> URL: https://issues.apache.org/jira/browse/NIFI-4995
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Affects Versions: 1.6.0
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Blocker
> Fix For: 1.6.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409812#comment-16409812
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176484111
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToBytes.java
 ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToBytes extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToBytes(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toBytes", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+
+if (!(fv.getValue() instanceof String)) {
+return fv;
--- End diff --

Makes sense to me, but I copied that from ToDate; it seemed like, since it's a 
Stream, the top-level caller is expecting a new object back rather than 
catching an exception?


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reporting (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values, 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176484111
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToBytes.java
 ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToBytes extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToBytes(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toBytes", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+
+if (!(fv.getValue() instanceof String)) {
+return fv;
--- End diff --

Makes sense to me, but I copied that from ToDate; it seemed like, since it's a 
Stream, the top-level caller is expecting a new object back rather than 
catching an exception?


---


[jira] [Updated] (NIFI-4833) Add ScanHBase processor

2018-03-22 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4833:
--
Summary: Add ScanHBase processor  (was: NIFI-4833 Add ScanHBase processor)

> Add ScanHBase processor
> ---
>
> Key: NIFI-4833
> URL: https://issues.apache.org/jira/browse/NIFI-4833
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
> Fix For: 1.6.0
>
>
> Add ScanHBase (new) processor to retrieve records from HBase tables.
> Today there are GetHBase and FetchHBaseRow. GetHBase can pull an entire table or 
> only new rows after the processor started; it also must be scheduled and doesn't 
> support incoming connections. FetchHBaseRow can pull rows with known rowkeys only.
> This processor could provide functionality similar to what can be achieved with 
> the hbase shell, by defining the following properties:
> -scan based on range of row key IDs 
> -scan based on range of time stamps
> -limit number of records pulled
> -use filters
> -reverse rows



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi pull request #118: MINIFI-444 C2 Data Model and REST API

2018-03-22 Thread kevdoran
Github user kevdoran closed the pull request at:

https://github.com/apache/nifi-minifi/pull/118


---


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409799#comment-16409799
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176482056
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -609,6 +623,9 @@ private static Object convertToAvroObject(final Object 
rawValue, final Schema fi
 if (rawValue instanceof byte[]) {
 return ByteBuffer.wrap((byte[]) rawValue);
 }
+if (rawValue instanceof String) {
+return ByteBuffer.wrap(((String) 
rawValue).getBytes(charset));
--- End diff --

In the clause above, a byte[] is wrapped in a ByteBuffer as well (that's 
where I got the code from), won't we be returning two different objects in that 
case?


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reporting (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values, 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176482056
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -609,6 +623,9 @@ private static Object convertToAvroObject(final Object 
rawValue, final Schema fi
 if (rawValue instanceof byte[]) {
 return ByteBuffer.wrap((byte[]) rawValue);
 }
+if (rawValue instanceof String) {
+return ByteBuffer.wrap(((String) 
rawValue).getBytes(charset));
--- End diff --

In the clause above, a byte[] is wrapped in a ByteBuffer as well (that's 
where I got the code from), won't we be returning two different objects in that 
case?


---


[jira] [Commented] (NIFI-4995) Release Apache NiFi 1.6.0

2018-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409736#comment-16409736
 ] 

ASF subversion and git services commented on NIFI-4995:
---

Commit 49a71f4740c9fac38958961f78dd3cde874b0e45 in nifi's branch 
refs/heads/NIFI-4995-RC1 from [~joewitt]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=49a71f4 ]

NIFI-4995-RC1 prepare for next development iteration


> Release Apache NiFi 1.6.0
> -
>
> Key: NIFI-4995
> URL: https://issues.apache.org/jira/browse/NIFI-4995
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Affects Versions: 1.6.0
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Blocker
> Fix For: 1.6.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4995) Release Apache NiFi 1.6.0

2018-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409735#comment-16409735
 ] 

ASF subversion and git services commented on NIFI-4995:
---

Commit 99bc762a181892aa9ac50b0c6c81e8159b052137 in nifi's branch 
refs/heads/NIFI-4995-RC1 from [~joewitt]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=99bc762 ]

NIFI-4995-RC1 prepare release nifi-1.6.0-RC1


> Release Apache NiFi 1.6.0
> -
>
> Key: NIFI-4995
> URL: https://issues.apache.org/jira/browse/NIFI-4995
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Affects Versions: 1.6.0
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Blocker
> Fix For: 1.6.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2113: NIFI-4325 Added new processor that uses the JSON DSL.

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2113
  
@JPercivall FYI, part of the reason I want to get this done is I'm planning 
on doing a whole new set of ES processors based on bringing the full 
CRUD functionality over to the official Elastic-provided APIs. The current set 
of processors use the (deprecated) transport API and make manual REST calls 
that AFAIK don't do master detection and the other things that come baked in 
with the Elastic APIs.


---


[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409650#comment-16409650
 ] 

ASF GitHub Bot commented on NIFI-4325:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2113
  
@JPercivall FYI, part of the reason I want to get this done is I'm planning 
on doing a whole new set of ES processors based on bringing the full 
CRUD functionality over to the official Elastic-provided APIs. The current set 
of processors use the (deprecated) transport API and make manual REST calls 
that AFAIK don't do master detection and the other things that come baked in 
with the Elastic APIs.


> Create a new ElasticSearch processor that supports the JSON DSL
> ---
>
> Key: NIFI-4325
> URL: https://issues.apache.org/jira/browse/NIFI-4325
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> The existing ElasticSearch processors use the Lucene-style syntax for 
> querying, not the JSON DSL. A new processor is needed that can take a full 
> JSON query and execute it. It should also support aggregation queries in this 
> syntax. A user needs to be able to take a query as-is from Kibana and drop it 
> into NiFi and have it just run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409648#comment-16409648
 ] 

ASF GitHub Bot commented on NIFI-4325:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2113
  
Thanks.


> Create a new ElasticSearch processor that supports the JSON DSL
> ---
>
> Key: NIFI-4325
> URL: https://issues.apache.org/jira/browse/NIFI-4325
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> The existing ElasticSearch processors use the Lucene-style syntax for 
> querying, not the JSON DSL. A new processor is needed that can take a full 
> JSON query and execute it. It should also support aggregation queries in this 
> syntax. A user needs to be able to take a query as-is from Kibana and drop it 
> into NiFi and have it just run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2113: NIFI-4325 Added new processor that uses the JSON DSL.

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2113
  
Thanks.


---


[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409627#comment-16409627
 ] 

ASF GitHub Bot commented on NIFI-4325:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/2113
  
Hey @MikeThomsen, I'm planning on reviewing this tomorrow evening


> Create a new ElasticSearch processor that supports the JSON DSL
> ---
>
> Key: NIFI-4325
> URL: https://issues.apache.org/jira/browse/NIFI-4325
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> The existing ElasticSearch processors use the Lucene-style syntax for 
> querying, not the JSON DSL. A new processor is needed that can take a full 
> JSON query and execute it. It should also support aggregation queries in this 
> syntax. A user needs to be able to take a query as-is from Kibana and drop it 
> into NiFi and have it just run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2113: NIFI-4325 Added new processor that uses the JSON DSL.

2018-03-22 Thread JPercivall
Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/2113
  
Hey @MikeThomsen, I'm planning on reviewing this tomorrow evening


---


[jira] [Commented] (NIFI-4035) Implement record-based Solr processors

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409625#comment-16409625
 ] 

ASF GitHub Bot commented on NIFI-4035:
--

Github user abhinavrohatgi30 commented on the issue:

https://github.com/apache/nifi/pull/2561
  
I'm done with the changes that @bbende  and @MikeThomsen have suggested


> Implement record-based Solr processors
> --
>
> Key: NIFI-4035
> URL: https://issues.apache.org/jira/browse/NIFI-4035
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Bryan Bende
>Priority: Minor
>
> Now that we have record readers and writers, we should implement variants of 
> the existing Solr processors that are record-based...
> Processors to consider:
> * PutSolrRecord - uses a configured record reader to read an incoming flow 
> file and insert records into Solr
> * GetSolrRecord - extracts records from Solr and uses a configured record 
> writer to write them to a flow file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409618#comment-16409618
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176438047
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToString.java
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToString extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToString(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toString", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+final Charset charset = 
getCharset(this.charsetSegment, context);
+Object value = fv.getValue();
+final String stringValue;
+
+if (value instanceof Object[]) {
+Object[] o = (Object[]) value;
+if (o.length > 0) {
+
+byte[] dest = new byte[o.length];
+for (int i = 0; i < o.length; i++) {
+dest[i] = (byte) o[i];
+}
+stringValue = new String(dest, charset);
+} else {
+stringValue = ""; // Empty array = empty string
+}
+} else if (!(fv.getValue() instanceof byte[])) {
+return fv;
--- End diff --

This probably warrants throwing an Exception. It seems wrong to me to have 
the user explicitly indicate that they want a conversion to a String and then 
return something different, like an Integer.
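
Something like the following would make the failure explicit (a sketch; 
IllegalTypeConversionException is the exception already thrown elsewhere in 
DataTypeUtils):

    // Instead of silently returning the unconverted FieldValue:
    throw new IllegalTypeConversionException("Cannot convert value of type "
            + fv.getValue().getClass() + " to String for field "
            + fv.getField().getFieldName());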


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reporting (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values, 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409615#comment-16409615
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176436923
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java
 ---
@@ -270,11 +290,33 @@ public static boolean isRecordTypeCompatible(final 
Object value) {
 return (Object[]) value;
 }
 
+if (value instanceof String && 
RecordFieldType.BYTE.getDataType().equals(elementDataType)) {
+byte[] src = ((String) value).getBytes(charset);
+Byte[] dest = new Byte[src.length];
+for (int i = 0; i < src.length; i++) {
+dest[i] = src[i];
+}
+return dest;
+}
+
+if (value instanceof byte[]) {
+byte[] src = (byte[]) value;
+Byte[] dest = new Byte[src.length];
+for (int i = 0; i < src.length; i++) {
+dest[i] = src[i];
+}
+return dest;
+}
+
 throw new IllegalTypeConversionException("Cannot convert value [" 
+ value + "] of type " + value.getClass() + " to Object Array for field " + 
fieldName);
 }
 
-public static boolean isArrayTypeCompatible(final Object value) {
-return value != null && value instanceof Object[];
+public static boolean isArrayTypeCompatible(final Object value, final 
DataType elementDataType) {
+return value != null
+// Either an object array or a String to be converted to 
byte[] or a ByteBuffer (from Avro, e.g.)
+&& (value instanceof Object[]
+|| (value instanceof String && 
RecordFieldType.BYTE.getDataType().equals(elementDataType))
+|| value instanceof ByteBuffer);
--- End diff --

I don't think we should be supporting ByteBuffer here, just byte[]. The 
more we allow for, the more complex this gets and the more error-prone and less 
consistent it will become. While Avro may use ByteBuffers, when we use an Avro 
Reader to create a Record, we should be doing the conversion there from 
ByteBuffer to byte[].
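
For illustration, the normalization at the reader boundary could be as simple 
as (a sketch; assumes the raw Avro value arrives as a java.nio.ByteBuffer):

    // Copy Avro's ByteBuffer out into a byte[] when building the Record,
    // so downstream record code only ever sees byte[].
    final ByteBuffer buffer = (ByteBuffer) value;
    final byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);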


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reporting (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values, 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409620#comment-16409620
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176439185
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToBytes.java
 ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToBytes extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToBytes(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toBytes", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+
+if (!(fv.getValue() instanceof String)) {
+return fv;
--- End diff --

We should probably be throwing an Exception in this case? The user is 
attempting to coerce a type that is not valid to coerce. Or we should otherwise 
filter it out from the results. It seems wrong to me to just ignore the 
conversion.


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reporting (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values, 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409616#comment-16409616
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176437278
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java
 ---
@@ -474,6 +555,14 @@ public static String toString(final Object value, 
final String format) {
 return Arrays.toString((Object[]) value);
 }
 
+if (value instanceof byte[]) {
+return new String((byte[]) value, charset);
+}
+if (value instanceof ByteBuffer) {
--- End diff --

Same as above.


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reporting (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values, 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2561: NIFI-4035 Implement record-based Solr processors

2018-03-22 Thread abhinavrohatgi30
Github user abhinavrohatgi30 commented on the issue:

https://github.com/apache/nifi/pull/2561
  
I'm done with the changes that @bbende  and @MikeThomsen have suggested


---


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409623#comment-16409623
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176439422
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToBytes.java
 ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToBytes extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToBytes(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toBytes", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+
+if (!(fv.getValue() instanceof String)) {
+return fv;
+}
+
+final Charset charset = 
getCharset(this.charsetSegment, context);
+
+final byte[] bytesValue;
+try {
+Byte[] src = (Byte[]) 
DataTypeUtils.toArray(fv.getValue(), fv.getField().getFieldName(), 
RecordFieldType.BYTE.getDataType(), charset);
+bytesValue = new byte[src.length];
+for(int i=0;i<src.length;i++) {
+bytesValue[i] = src[i];
+}

> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reported (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values; 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409622#comment-16409622
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176437061
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java
 ---
@@ -432,6 +478,37 @@ public static String toString(final Object value, 
final Supplier<DateFormat> format) {
 return formatDate((java.util.Date) value, format);
 }
 
+if (value instanceof byte[]) {
+return new String((byte[])value, charset);
+}
+
+if (value instanceof ByteBuffer) {
--- End diff --

Same as above, I think we should avoid the use of ByteBuffer here


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reported (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values; 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409619#comment-16409619
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176440529
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -609,6 +623,9 @@ private static Object convertToAvroObject(final Object 
rawValue, final Schema fi
 if (rawValue instanceof byte[]) {
 return ByteBuffer.wrap((byte[]) rawValue);
 }
+if (rawValue instanceof String) {
+return ByteBuffer.wrap(((String) 
rawValue).getBytes(charset));
--- End diff --

I would prefer to avoid ByteBuffer here and instead use just byte[]


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reported (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values; 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409617#comment-16409617
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176437638
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java
 ---
@@ -1100,4 +1189,16 @@ public static boolean isScalarValue(final DataType 
dataType, final Object value)
 
 return true;
 }
+
+public static Charset getCharset(String charsetName) {
+if(charsetName == null) {
+return StandardCharsets.UTF_8;
+} else {
+try {
+return Charset.forName(charsetName);
+} catch(Exception e) {
--- End diff --

If given an invalid character set, I think I would prefer to just throw the 
Exception. If there is a typo somewhere, this can lead to some very unexpected 
results that are difficult to track down.
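
A minimal sketch of the fail-fast variant suggested above (the wrapping class 
name is invented for illustration):

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class CharsetUtil {
        // Default to UTF-8 only when no charset is named; let Charset.forName's
        // IllegalCharsetNameException / UnsupportedCharsetException propagate,
        // so a typo surfaces immediately instead of being silently swallowed.
        public static Charset getCharset(final String charsetName) {
            if (charsetName == null) {
                return StandardCharsets.UTF_8;
            }
            return Charset.forName(charsetName);
        }
    }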


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reported (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values; 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4857) Record components do not support String <-> byte[] conversions

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409621#comment-16409621
 ] 

ASF GitHub Bot commented on NIFI-4857:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176438643
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToString.java
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToString extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToString(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toString", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+final Charset charset = 
getCharset(this.charsetSegment, context);
+Object value = fv.getValue();
+final String stringValue;
+
+if (value instanceof Object[]) {
+Object[] o = (Object[]) value;
+if (o.length > 0) {
+
+byte[] dest = new byte[o.length];
+for (int i = 0; i < o.length; i++) {
+dest[i] = (byte) o[i];
+}
+stringValue = new String(dest, charset);
+} else {
+stringValue = ""; // Empty array = empty string
+}
+} else if (!(fv.getValue() instanceof byte[])) {
+return fv;
+} else {
+try {
+stringValue = 
DataTypeUtils.toString(fv.getValue(), (String) null, charset);
+} catch (final Exception e) {
+return fv;
--- End diff --

If any RuntimeException is thrown here, I don't think we want to silently 
ignore it. Should probably let it fly.


> Record components do not support String <-> byte[] conversions
> --
>
> Key: NIFI-4857
> URL: https://issues.apache.org/jira/browse/NIFI-4857
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> When trying to perform a conversion of a field between a String and a byte 
> array, various errors are reported (depending on where the conversion is 
> taking place). Here are some examples:
> 1) CSVReader, if a column with String values is specified in the schema as 
> "bytes"
> 2) ConvertRecord, if an input field is of type String and the output field is 
> of type "bytes"
> 3) ConvertRecord, if an input field is of type "bytes" and the output field 
> is of type "String"
> Many/most/all of the record components use utility methods to convert values; 
> I believe these methods need to be updated to support conversion between 
> String and byte[] values.
>  

[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176437278
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java
 ---
@@ -474,6 +555,14 @@ public static String toString(final Object value, 
final String format) {
 return Arrays.toString((Object[]) value);
 }
 
+if (value instanceof byte[]) {
+return new String((byte[]) value, charset);
+}
+if (value instanceof ByteBuffer) {
--- End diff --

Same as above.


---


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176436923
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java
 ---
@@ -270,11 +290,33 @@ public static boolean isRecordTypeCompatible(final 
Object value) {
 return (Object[]) value;
 }
 
+if (value instanceof String && 
RecordFieldType.BYTE.getDataType().equals(elementDataType)) {
+byte[] src = ((String) value).getBytes(charset);
+Byte[] dest = new Byte[src.length];
+for (int i = 0; i < src.length; i++) {
+dest[i] = src[i];
+}
+return dest;
+}
+
+if (value instanceof byte[]) {
+byte[] src = (byte[]) value;
+Byte[] dest = new Byte[src.length];
+for (int i = 0; i < src.length; i++) {
+dest[i] = src[i];
+}
+return dest;
+}
+
 throw new IllegalTypeConversionException("Cannot convert value [" 
+ value + "] of type " + value.getClass() + " to Object Array for field " + 
fieldName);
 }
 
-public static boolean isArrayTypeCompatible(final Object value) {
-return value != null && value instanceof Object[];
+public static boolean isArrayTypeCompatible(final Object value, final 
DataType elementDataType) {
+return value != null
+// Either an object array or a String to be converted to 
byte[] or a ByteBuffer (from Avro, e.g.)
+&& (value instanceof Object[]
+|| (value instanceof String && 
RecordFieldType.BYTE.getDataType().equals(elementDataType))
+|| value instanceof ByteBuffer);
--- End diff --

I don't think we should be supporting ByteBuffer here, just byte[]. The 
more we allow for, the more complex this gets and the more error-prone and less 
consistent it will become. While Avro may use ByteBuffers, when we use an Avro 
Reader to create a Record, we should be doing the conversion there from 
ByteBuffer to byte[].
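
A minimal sketch of that read-time normalization, assuming a hypothetical 
helper (the class and method names are invented; only the ByteBuffer handling 
is the point):

    import java.nio.ByteBuffer;

    public class AvroByteConversion {
        // Copy only the remaining bytes: the buffer may wrap a larger shared
        // array, so relying on buffer.array() alone would be unsafe.
        public static byte[] toByteArray(final ByteBuffer buffer) {
            final byte[] bytes = new byte[buffer.remaining()];
            buffer.duplicate().get(bytes); // duplicate() leaves the caller's position untouched
            return bytes;
        }
    }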


---


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176440529
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java
 ---
@@ -609,6 +623,9 @@ private static Object convertToAvroObject(final Object 
rawValue, final Schema fi
 if (rawValue instanceof byte[]) {
 return ByteBuffer.wrap((byte[]) rawValue);
 }
+if (rawValue instanceof String) {
+return ByteBuffer.wrap(((String) 
rawValue).getBytes(charset));
--- End diff --

I would prefer to avoid ByteBuffer here and instead use just byte[]
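
A sketch of the byte[]-only handling being suggested, assuming the charset 
variable from the diff above (the helper name is hypothetical):

    // Return a plain byte[] rather than wrapping the encoded String in a ByteBuffer.
    static Object convertStringToBytes(final Object rawValue, final java.nio.charset.Charset charset) {
        if (rawValue instanceof String) {
            return ((String) rawValue).getBytes(charset);
        }
        return rawValue;
    }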


---


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176439185
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToBytes.java
 ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToBytes extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToBytes(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toBytes", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+
+if (!(fv.getValue() instanceof String)) {
+return fv;
--- End diff --

We should probably be throwing an Exception in this case? The user is 
attempting to coerce a type that is not valid to coerce. Or otherwise filter 
it out from the results. It seems wrong to me to just silently ignore the 
conversion.
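
For context, a hedged sketch of how the new function would be invoked once 
merged; RecordPath.compile and evaluate come from nifi-record-path, while the 
/payload field and the record variable are assumptions for illustration:

    import org.apache.nifi.record.path.RecordPath;
    import org.apache.nifi.record.path.RecordPathResult;
    import org.apache.nifi.serialization.record.Record;

    public class ToBytesExample {
        public static void printConverted(final Record record) {
            // Compile once, evaluate per record; 'UTF-8' is the optional charset argument.
            final RecordPath path = RecordPath.compile("toBytes(/payload, 'UTF-8')");
            final RecordPathResult result = path.evaluate(record);
            result.getSelectedFields()
                  .forEach(fv -> System.out.println(fv.getField().getFieldName() + " -> " + fv.getValue()));
        }
    }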


---


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176438643
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToString.java
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToString extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToString(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toString", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+final Charset charset = 
getCharset(this.charsetSegment, context);
+Object value = fv.getValue();
+final String stringValue;
+
+if (value instanceof Object[]) {
+Object[] o = (Object[]) value;
+if (o.length > 0) {
+
+byte[] dest = new byte[o.length];
+for (int i = 0; i < o.length; i++) {
+dest[i] = (byte) o[i];
+}
+stringValue = new String(dest, charset);
+} else {
+stringValue = ""; // Empty array = empty string
+}
+} else if (!(fv.getValue() instanceof byte[])) {
+return fv;
+} else {
+try {
+stringValue = 
DataTypeUtils.toString(fv.getValue(), (String) null, charset);
+} catch (final Exception e) {
+return fv;
--- End diff --

If any RuntimeException is thrown here, I don't think we want to silently 
ignore it. Should probably let it fly.


---


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176439422
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToBytes.java
 ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToBytes extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToBytes(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toBytes", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+
+if (!(fv.getValue() instanceof String)) {
+return fv;
+}
+
+final Charset charset = 
getCharset(this.charsetSegment, context);
+
+final byte[] bytesValue;
+try {
+Byte[] src = (Byte[]) 
DataTypeUtils.toArray(fv.getValue(), fv.getField().getFieldName(), 
RecordFieldType.BYTE.getDataType(), charset);
+bytesValue = new byte[src.length];
+for(int i=0;i<src.length;i++) {
+bytesValue[i] = src[i];
+}

[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176438047
  
--- Diff: 
nifi-commons/nifi-record-path/src/main/java/org/apache/nifi/record/path/functions/ToString.java
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.record.path.functions;
+
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPathEvaluationContext;
+import org.apache.nifi.record.path.StandardFieldValue;
+import org.apache.nifi.record.path.paths.RecordPathSegment;
+import org.apache.nifi.record.path.util.RecordPathUtils;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.nio.charset.Charset;
+import java.util.stream.Stream;
+
+public class ToString extends RecordPathSegment {
+
+private final RecordPathSegment recordPath;
+private final RecordPathSegment charsetSegment;
+
+public ToString(final RecordPathSegment recordPath, final 
RecordPathSegment charsetSegment, final boolean absolute) {
+super("toString", null, absolute);
+this.recordPath = recordPath;
+this.charsetSegment = charsetSegment;
+}
+
+@Override
+public Stream<FieldValue> evaluate(RecordPathEvaluationContext 
context) {
+final Stream<FieldValue> fieldValues = 
recordPath.evaluate(context);
+return fieldValues.filter(fv -> fv.getValue() != null)
+.map(fv -> {
+final Charset charset = 
getCharset(this.charsetSegment, context);
+Object value = fv.getValue();
+final String stringValue;
+
+if (value instanceof Object[]) {
+Object[] o = (Object[]) value;
+if (o.length > 0) {
+
+byte[] dest = new byte[o.length];
+for (int i = 0; i < o.length; i++) {
+dest[i] = (byte) o[i];
+}
+stringValue = new String(dest, charset);
+} else {
+stringValue = ""; // Empty array = empty string
+}
+} else if (!(fv.getValue() instanceof byte[])) {
+return fv;
--- End diff --

This probably warrants throwing an Exception. It seems wrong to me to have 
the user explicitly indicate that they want a conversion to a String and then 
return something different, like an Integer.
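
A sketch of the stricter behavior proposed here, reusing the 
IllegalTypeConversionException already used elsewhere in this PR (the helper 
name is invented):

    import org.apache.nifi.serialization.record.util.IllegalTypeConversionException;

    import java.nio.charset.Charset;

    public class StrictToString {
        // Reject values that cannot sensibly be coerced, instead of silently
        // returning the untouched field value.
        static String coerceToString(final Object value, final String fieldName, final Charset charset) {
            if (value instanceof byte[]) {
                return new String((byte[]) value, charset);
            }
            throw new IllegalTypeConversionException(
                    "Cannot convert value of type " + value.getClass() + " to String for field " + fieldName);
        }
    }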


---


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176437061
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java
 ---
@@ -432,6 +478,37 @@ public static String toString(final Object value, 
final Supplier<DateFormat> format) {
 return formatDate((java.util.Date) value, format);
 }
 
+if (value instanceof byte[]) {
+return new String((byte[])value, charset);
+}
+
+if (value instanceof ByteBuffer) {
--- End diff --

Same as above, I think we should avoid the use of ByteBuffer here


---


[GitHub] nifi pull request #2570: NIFI-4857: Support String<->byte[] conversion

2018-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2570#discussion_r176437638
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java
 ---
@@ -1100,4 +1189,16 @@ public static boolean isScalarValue(final DataType 
dataType, final Object value)
 
 return true;
 }
+
+public static Charset getCharset(String charsetName) {
+if(charsetName == null) {
+return StandardCharsets.UTF_8;
+} else {
+try {
+return Charset.forName(charsetName);
+} catch(Exception e) {
--- End diff --

If given an invalid character set, I think I would prefer to just throw the 
Exception. If there is a typo somewhere, this can lead to some very unexpected 
results that are difficult to track down.


---


[GitHub] nifi issue #2113: NIFI-4325 Added new processor that uses the JSON DSL.

2018-03-22 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2113
  
@JPercivall Any chance we can close this out?


---


[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409605#comment-16409605
 ] 

ASF GitHub Bot commented on NIFI-4325:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2113
  
@JPercivall Any chance we can close this out?


> Create a new ElasticSearch processor that supports the JSON DSL
> ---
>
> Key: NIFI-4325
> URL: https://issues.apache.org/jira/browse/NIFI-4325
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>
> The existing ElasticSearch processors use the Lucene-style syntax for 
> querying, not the JSON DSL. A new processor is needed that can take a full 
> JSON query and execute it. It should also support aggregation queries in this 
> syntax. A user needs to be able to take a query as-is from Kibana and drop it 
> into NiFi and have it just run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5002) LdapUserGroupProvider Support

2018-03-22 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-5002:
--
Description: 
In the time since LDAP support was added to the NiFi Docker image, the user 
group sync feature via LdapUserGroupProvider was added to NiFi (NIFI-4059). We 
should update the Docker startup configuration scripts to configure the 
authorizers.xml file to use the LdapUserGroupProvider when LDAP authentication 
is configured.

We might want to use the ConfigurableCompositeUserGroupProvider so that 
certificate identities are still supported (sometimes used for initial admin 
identity or server identities, even when LDAP is used for end user identities).

  was:Since LDAP support was added to the NiFi Docker image, the user-group 
sync feature was added to NiFi (). We should update the Docker startup scripts 
to configure the authorizers.xml to use the LDAP user group provider when using 
LDAP.


> LdapUserGroupProvider Support
> -
>
> Key: NIFI-5002
> URL: https://issues.apache.org/jira/browse/NIFI-5002
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Kevin Doran
>Priority: Major
>
> In the time since LDAP support was added to the NiFi Docker image, the user 
> group sync feature via LdapUserGroupProvider was added to NiFi (NIFI-4059). 
> We should update the Docker startup configuration scripts to configure the 
> authorizers.xml file to use the LdapUserGroupProvider when LDAP 
> authentication is configured.
> We might want to use the ConfigurableCompositeUserGroupProvider so that 
> certificate identities are still supported (sometimes used for initial admin 
> identity or server identities, even when LDAP is used for end user 
> identities).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5002) LdapUserGroupProvider Support

2018-03-22 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-5002:
--
Description: Since LDAP support was added to the NiFi Docker image, the 
user-group sync feature was added to NiFi (). We should update the Docker 
startup scripts to configure the authorizers.xml to use the LDAP user group 
provider when using LDAP.  (was: Since LDAP support was added to Docker, the 
user-group sync feature was added to NiFi. We should update the Docker startup 
scripts to configure the authorizers.xml to use the LDAP user group provider 
when using LDAP.)

> LdapUserGroupProvider Support
> -
>
> Key: NIFI-5002
> URL: https://issues.apache.org/jira/browse/NIFI-5002
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Kevin Doran
>Priority: Major
>
> Since LDAP support was added to the NiFi Docker image, the user-group sync 
> feature was added to NiFi (). We should update the Docker startup scripts to 
> configure the authorizers.xml to use the LDAP user group provider when using 
> LDAP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-155) Cannot access nifi once secure

2018-03-22 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409534#comment-16409534
 ] 

Bryan Bende commented on NIFIREG-155:
-

[~scottdhowell3] based on the mailing list discussions, I believe you got this 
working; can this Jira be closed?

> Cannot access nifi once secure
> --
>
> Key: NIFIREG-155
> URL: https://issues.apache.org/jira/browse/NIFIREG-155
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0
> Environment: Amazon Linux 2 LTS
> OpenJDK 1.8.0_161-b14
> T2.small 
>Reporter: Scott Howell 
>Priority: Minor
>
> I have set up nifi-registry unsecured and was able to access the UI. When I 
> switched to a secured instance of nifi-registry, the UI was no longer 
> available. There are no errors in the nifi-registry-app.log. 
>  
> I am running this through an ELB on AWS. I was able to use this when it was 
> unsecured but not when running securely. I am also not able to see the 
> healthcheck hitting nifi-registry like I do when looking at my NiFi 
> instances. 
>  
> Since this project is so new, I figured I would reach out and see if this 
> capability doesn't work yet or if I am doing something wrong.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-152) Support storing flows in Object Store/S3

2018-03-22 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409532#comment-16409532
 ] 

Bryan Bende commented on NIFIREG-152:
-

[~spotty] if you are interested in working on this idea, the 
FlowPersistenceProvider is a pluggable extension point:

[https://github.com/apache/nifi-registry/blob/master/nifi-registry-provider-api/src/main/java/org/apache/nifi/registry/flow/FlowPersistenceProvider.java]

You could implement an S3FlowPersistenceProvider.
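
For anyone picking this up, a rough, hypothetical skeleton: the method names 
follow the FlowPersistenceProvider interface linked above, while the AWS SDK 
v1 client usage, the "S3 Bucket" property name, and the key layout are 
assumptions for illustration, not a tested implementation:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.ObjectMetadata;
    import com.amazonaws.services.s3.model.S3Object;
    import org.apache.commons.io.IOUtils;
    import org.apache.nifi.registry.flow.FlowPersistenceException;
    import org.apache.nifi.registry.flow.FlowPersistenceProvider;
    import org.apache.nifi.registry.flow.FlowSnapshotContext;
    import org.apache.nifi.registry.provider.ProviderConfigurationContext;

    import java.io.ByteArrayInputStream;
    import java.io.IOException;

    public class S3FlowPersistenceProvider implements FlowPersistenceProvider {

        private AmazonS3 s3;
        private String s3Bucket;

        @Override
        public void onConfigured(final ProviderConfigurationContext context) {
            // Bucket name (and, in a real provider, region/credentials) would
            // come from the provider's properties in providers.xml.
            s3Bucket = context.getProperties().get("S3 Bucket");
            s3 = AmazonS3ClientBuilder.defaultClient();
        }

        // One immutable object per snapshot: bucketId/flowId/version
        private String key(final String bucketId, final String flowId, final int version) {
            return bucketId + "/" + flowId + "/" + version;
        }

        @Override
        public void saveFlowContent(final FlowSnapshotContext context, final byte[] content) {
            final ObjectMetadata meta = new ObjectMetadata();
            meta.setContentLength(content.length);
            s3.putObject(s3Bucket, key(context.getBucketId(), context.getFlowId(), context.getVersion()),
                    new ByteArrayInputStream(content), meta);
        }

        @Override
        public byte[] getFlowContent(final String bucketId, final String flowId, final int version) {
            try (final S3Object object = s3.getObject(s3Bucket, key(bucketId, flowId, version))) {
                return IOUtils.toByteArray(object.getObjectContent());
            } catch (final IOException e) {
                throw new FlowPersistenceException("Unable to read flow content from S3", e);
            }
        }

        @Override
        public void deleteAllFlowContent(final String bucketId, final String flowId) {
            // Deletes one page of listing results; a complete implementation would paginate.
            s3.listObjects(s3Bucket, bucketId + "/" + flowId + "/").getObjectSummaries()
                    .forEach(summary -> s3.deleteObject(s3Bucket, summary.getKey()));
        }

        @Override
        public void deleteFlowContent(final String bucketId, final String flowId, final int version) {
            s3.deleteObject(s3Bucket, key(bucketId, flowId, version));
        }
    }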

> Support storing flows in Object Store/S3
> 
>
> Key: NIFIREG-152
> URL: https://issues.apache.org/jira/browse/NIFIREG-152
> Project: NiFi Registry
>  Issue Type: Improvement
>Affects Versions: 0.2.0
>Reporter: Daniel Oakley
>Priority: Major
>
> It would be nice if the registry had an option to store saved flow versions 
> as immutable objects in an S3 object store or similar. It would mean less 
> local storage configuration is needed, which is useful when running under 
> Docker/Kubernetes, for example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-153) The angular router module not properly injected.

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409528#comment-16409528
 ] 

ASF GitHub Bot commented on NIFIREG-153:


Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/106


> The angular router module not properly injected.
> 
>
> Key: NIFIREG-153
> URL: https://issues.apache.org/jira/browse/NIFIREG-153
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Major
> Fix For: 0.2.0
>
>
> The angular router module is not properly injected in the 
> nf-registry-page-not-found and
> nf-registry-users-administration
> components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFIREG-153) The angular router module not properly injected.

2018-03-22 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFIREG-153.
-
   Resolution: Fixed
Fix Version/s: 0.2.0

> The angular router module not properly injected.
> 
>
> Key: NIFIREG-153
> URL: https://issues.apache.org/jira/browse/NIFIREG-153
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Major
> Fix For: 0.2.0
>
>
> The angular router module is not properly injected in the 
> nf-registry-page-not-found and
> nf-registry-users-administration
> components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #106: [NIFIREG-153] inject angular router module ...

2018-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/106


---


[jira] [Commented] (NIFIREG-153) The angular router module not properly injected.

2018-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409526#comment-16409526
 ] 

ASF GitHub Bot commented on NIFIREG-153:


Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/106
  
Looks good, will merge


> The angular router module not properly injected.
> 
>
> Key: NIFIREG-153
> URL: https://issues.apache.org/jira/browse/NIFIREG-153
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Major
>
> The angular router module is not properly injected in the 
> nf-registry-page-not-found and
> nf-registry-users-administration
> components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry issue #106: [NIFIREG-153] inject angular router module into re...

2018-03-22 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/106
  
Looks good, will merge


---


[jira] [Created] (NIFI-5002) LdapUserGroupProvider Support

2018-03-22 Thread Kevin Doran (JIRA)
Kevin Doran created NIFI-5002:
-

 Summary: LdapUserGroupProvider Support
 Key: NIFI-5002
 URL: https://issues.apache.org/jira/browse/NIFI-5002
 Project: Apache NiFi
  Issue Type: Sub-task
Reporter: Kevin Doran


Since LDAP support was added to Docker, the user-group sync feature was added 
to NiFi. We should update the Docker startup scripts to configure the 
authorizers.xml to use the LDAP user group provider when using LDAP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

