[jira] [Updated] (NIFI-4928) Upgrade to current version of BouncyCastle (1.55 -> 1.59)

2018-03-02 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4928:

Description: BouncyCastle 1.59 is now available. Currently we are using 
1.55. See [Section 2.1.1 - 
2.4.5|https://www.bouncycastle.org/releasenotes.html] for details of new 
features and bug fixes.   (was: BouncyCastle 1.59 is now available. Currently 
we are using 1.55. See [https://www.bouncycastle.org/releasenotes.html|Section 
2.1.1 - 2.4.5] for details of new features and bug fixes. )

> Upgrade to current version of BouncyCastle (1.55 -> 1.59)
> -
>
> Key: NIFI-4928
> URL: https://issues.apache.org/jira/browse/NIFI-4928
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: dependencies, security
>
> BouncyCastle 1.59 is now available. Currently we are using 1.55. See [Section 
> 2.1.1 - 2.4.5|https://www.bouncycastle.org/releasenotes.html] for details of 
> new features and bug fixes. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4928) Upgrade to current version of BouncyCastle (1.55 -> 1.59)

2018-03-02 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4928:

Description: BouncyCastle 1.59 is now available. Currently we are using 
1.55. See [https://www.bouncycastle.org/releasenotes.html|Section 2.1.1 - 
2.4.5] for details of new features and bug fixes.   (was: The existing Maven 
dependencies are for {{org.bouncycastle:bcprov-jdk16:1.46}} and 
{{org.bouncycastle:bcpg-jdk16:1.46}}. While {{jdk16}} looks "newer" than 
{{jdk15on}}, this was actually a legacy mistake on the part of BouncyCastle 
versioning. The correct and current version of BouncyCastle is {{jdk15on}}, as 
evidenced by the age of the releases:

* jdk15on: 03/2012 - 10/2015 "The Bouncy Castle Crypto package is a Java 
implementation of cryptographic algorithms. This jar contains JCE provider and 
lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 
1.8." (http://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk15on)
* jdk16: 11/2007 - 02/2011 "The Bouncy Castle Crypto package is a Java 
implementation of cryptographic algorithms. This jar contains JCE provider and 
lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.6." 
(http://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16))

> Upgrade to current version of BouncyCastle (1.55 -> 1.59)
> -
>
> Key: NIFI-4928
> URL: https://issues.apache.org/jira/browse/NIFI-4928
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: dependencies, security
>
> BouncyCastle 1.59 is now available. Currently we are using 1.55. See 
> [https://www.bouncycastle.org/releasenotes.html|Section 2.1.1 - 2.4.5] for 
> details of new features and bug fixes. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4928) Upgrade to current version of BouncyCastle (1.55 -> 1.59)

2018-03-02 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4928:

Fix Version/s: (was: 0.5.0)

> Upgrade to current version of BouncyCastle (1.55 -> 1.59)
> -
>
> Key: NIFI-4928
> URL: https://issues.apache.org/jira/browse/NIFI-4928
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: dependencies, security
>
> The existing Maven dependencies are for 
> {{org.bouncycastle:bcprov-jdk16:1.46}} and 
> {{org.bouncycastle:bcpg-jdk16:1.46}}. While {{jdk16}} looks "newer" than 
> {{jdk15on}}, this was actually a legacy mistake on the part of BouncyCastle 
> versioning. The correct and current version of BouncyCastle is {{jdk15on}}, 
> as evidenced by the age of the releases:
> * jdk15on: 03/2012 - 10/2015 "The Bouncy Castle Crypto package is a Java 
> implementation of cryptographic algorithms. This jar contains JCE provider 
> and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to 
> JDK 1.8." (http://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk15on)
> * jdk16: 11/2007 - 02/2011 "The Bouncy Castle Crypto package is a Java 
> implementation of cryptographic algorithms. This jar contains JCE provider 
> and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.6." 
> (http://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4928) Upgrade to current version of BouncyCastle (1.55 -> 1.59)

2018-03-02 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4928:

Affects Version/s: (was: 0.4.1)
   1.5.0

> Upgrade to current version of BouncyCastle (1.55 -> 1.59)
> -
>
> Key: NIFI-4928
> URL: https://issues.apache.org/jira/browse/NIFI-4928
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: dependencies, security
>
> The existing Maven dependencies are for 
> {{org.bouncycastle:bcprov-jdk16:1.46}} and 
> {{org.bouncycastle:bcpg-jdk16:1.46}}. While {{jdk16}} looks "newer" than 
> {{jdk15on}}, this was actually a legacy mistake on the part of BouncyCastle 
> versioning. The correct and current version of BouncyCastle is {{jdk15on}}, 
> as evidenced by the age of the releases:
> * jdk15on: 03/2012 - 10/2015 "The Bouncy Castle Crypto package is a Java 
> implementation of cryptographic algorithms. This jar contains JCE provider 
> and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to 
> JDK 1.8." (http://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk15on)
> * jdk16: 11/2007 - 02/2011 "The Bouncy Castle Crypto package is a Java 
> implementation of cryptographic algorithms. This jar contains JCE provider 
> and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.6." 
> (http://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4928) Upgrade to current version of BouncyCastle (1.55 -> 1.59)

2018-03-02 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4928:

Summary: Upgrade to current version of BouncyCastle (1.55 -> 1.59)  (was: 
Upgrade to correct version of BouncyCastle (1.55 -> 1.59))

> Upgrade to current version of BouncyCastle (1.55 -> 1.59)
> -
>
> Key: NIFI-4928
> URL: https://issues.apache.org/jira/browse/NIFI-4928
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Affects Versions: 0.4.1
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: dependencies, security
> Fix For: 0.5.0
>
>
> The existing Maven dependencies are for 
> {{org.bouncycastle:bcprov-jdk16:1.46}} and 
> {{org.bouncycastle:bcpg-jdk16:1.46}}. While {{jdk16}} looks "newer" than 
> {{jdk15on}}, this was actually a legacy mistake on the part of BouncyCastle 
> versioning. The correct and current version of BouncyCastle is {{jdk15on}}, 
> as evidenced by the age of the releases:
> * jdk15on: 03/2012 - 10/2015 "The Bouncy Castle Crypto package is a Java 
> implementation of cryptographic algorithms. This jar contains JCE provider 
> and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to 
> JDK 1.8." (http://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk15on)
> * jdk16: 11/2007 - 02/2011 "The Bouncy Castle Crypto package is a Java 
> implementation of cryptographic algorithms. This jar contains JCE provider 
> and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.6." 
> (http://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4928) Upgrade to correct version of BouncyCastle (1.55 -> 1.59)

2018-03-02 Thread Andy LoPresto (JIRA)
Andy LoPresto created NIFI-4928:
---

 Summary: Upgrade to correct version of BouncyCastle (1.55 -> 1.59)
 Key: NIFI-4928
 URL: https://issues.apache.org/jira/browse/NIFI-4928
 Project: Apache NiFi
  Issue Type: Task
  Components: Core Framework
Affects Versions: 0.4.1
Reporter: Andy LoPresto
Assignee: Andy LoPresto
 Fix For: 0.5.0


The existing Maven dependencies are for {{org.bouncycastle:bcprov-jdk16:1.46}} 
and {{org.bouncycastle:bcpg-jdk16:1.46}}. While {{jdk16}} looks "newer" than 
{{jdk15on}}, this was actually a legacy mistake on the part of BouncyCastle 
versioning. The correct and current version of BouncyCastle is {{jdk15on}}, as 
evidenced by the age of the releases:

* jdk15on: 03/2012 - 10/2015 "The Bouncy Castle Crypto package is a Java 
implementation of cryptographic algorithms. This jar contains JCE provider and 
lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 
1.8." (http://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk15on)
* jdk16: 11/2007 - 02/2011 "The Bouncy Castle Crypto package is a Java 
implementation of cryptographic algorithms. This jar contains JCE provider and 
lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.6." 
(http://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16)
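
As a concrete illustration of the upgrade this ticket describes, here is a minimal sketch of the Maven coordinates it implies; the {{bouncycastle.version}} property name is an assumption for illustration only, not necessarily how the NiFi poms actually manage the version:

{code:xml}
<!-- Sketch only: moves from the legacy jdk16 artifacts to the maintained jdk15on line. -->
<!-- The bouncycastle.version property name is hypothetical. -->
<properties>
    <bouncycastle.version>1.59</bouncycastle.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.bouncycastle</groupId>
        <artifactId>bcprov-jdk15on</artifactId>
        <version>${bouncycastle.version}</version>
    </dependency>
    <dependency>
        <groupId>org.bouncycastle</groupId>
        <artifactId>bcpg-jdk15on</artifactId>
        <version>${bouncycastle.version}</version>
    </dependency>
</dependencies>
{code}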



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Summary: QueryDatabaseTable throws SqlException after reading from DB2 
table  (was: QueryDatabaseTable throws SqlException after reading entire DB2 
table)

> QueryDatabaseTable throws SqlException after reading from DB2 table
> ---
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
> Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
> at com.ibm.db2.jcc.am.kd.a(Unknown Source)
> at com.ibm.db2.jcc.am.kd.a(Unknown Source)
> at com.ibm.db2.jcc.am.kd.a(Unknown Source)
> at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
> at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
> at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
> at 

[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Description: 
I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table: 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
According to [DB2's 
documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670], this 
particular exception could be avoided by adding this setting (semicolon 
included) to the JDBC connection URL:
{code:java}
allowNextOnExhaustedResultSet=1;{code}
But it didn't make a difference in my case.
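
For reference, a sketch of where that property sits in a type-4 DB2 JDBC URL; the host, port, and database name below are placeholders, not values taken from this report:

{code:java}
// Hypothetical example: db2host, 50000 and MYDB are placeholders.
// With the IBM type-4 driver, properties follow the database name,
// separated from it by a colon, and each property ends with a semicolon.
String url = "jdbc:db2://db2host:50000/MYDB:allowNextOnExhaustedResultSet=1;";
{code}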

I also 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/3/18 12:13 AM:
-

In {{QueryDatabaseTable.java}}, method {{onTrigger}}, line 278, a JDBC result set is created but not used to control the _while_ loop two lines below. Instead, the {{resultSet}} is handed off to another method, {{JdbcCommon.convertToAvroStream}}, which does its job and returns the number of rows it used to populate the output Avro file.

Because only the number of rows is returned, {{QueryDatabaseTable}} doesn't know the last {{rs.next()}} returned false, which means (at least for the DB2 JDBC driver) the result set is now closed. Instead of breaking out of the while loop, the processor calls {{convertToAvroStream}} once again, and a {{SqlException}} is thrown soon after.

_Notes:_
 * Perhaps this logic works fine for other databases, but considering that {{resultSet}} was not created with try-with-resources and I couldn't find any explicit {{resultSet.close()}}, it stands to reason we could also have a resource leak here.
 * Using {{resultSet.isAfterLast()}} may not be a good idea because support for that method is optional for {{TYPE_FORWARD_ONLY}} result sets.
 * Checking whether the {{resultSet}} is closed right after entering {{JdbcCommon.convertToAvroStream}} could work as a quick fix, but it would make the whole thing even harder to understand and maintain. Maybe some refactoring would be in order?
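
To make the failure mode concrete, here is a simplified, self-contained sketch of the pattern described above. It is not the actual {{QueryDatabaseTable}} source; the method and variable names ({{fetchAndConvert}}, {{drain}}) are illustrative only.

{code:java}
import java.sql.ResultSet;
import java.sql.SQLException;

public class ResultSetLoopSketch {

    // Stand-in for JdbcCommon.convertToAvroStream: consumes the ResultSet and
    // returns only how many rows it wrote.
    static long fetchAndConvert(final ResultSet rs) throws SQLException {
        // Reading the metadata is what fails on a second pass if the driver
        // has already closed the exhausted result set ("result set is closed").
        rs.getMetaData();
        long rows = 0;
        while (rs.next()) {
            rows++;
        }
        return rows;
    }

    // Stand-in for the onTrigger loop: it only sees a row count, so it has no
    // signal that rs.next() already returned false.
    static void drain(final ResultSet rs) throws SQLException {
        boolean mayHaveMore = true;
        while (mayHaveMore) {
            final long rows = fetchAndConvert(rs);
            // Treating "some rows were written" as "there may be more" triggers a
            // second call, which hits the closed ResultSet on drivers (such as
            // DB2's) that close a forward-only result set once it is exhausted.
            mayHaveMore = rows > 0;
        }
    }
}
{code}

Any real fix would need either an explicit end-of-data signal coming back from the conversion step or a safe check on the result set before reusing it, which is essentially what the notes above suggest.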


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which does its job and returns the number of rows it used to populate the output Avro file.

Because no indication that the last resultSet.next() returned false (and that, as a consequence, the resultSet is now closed) is passed back to QueryDatabaseTable, it cannot know it should break out of the while loop, so it calls convertToAvroStream once again. This time the method throws an exception when trying to create the Schema (first line of JdbcCommon.convertToAvroStream, line 256).

Perhaps this logic works fine with other databases, but considering the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 11:36 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which does its job and returns the number of rows it used to populate the output Avro file.

Because no indication that the last resultSet.next() returned false (and that, as a consequence, the resultSet is now closed) is passed back to QueryDatabaseTable, it cannot know it should break out of the while loop, so it calls convertToAvroStream once again. This time the method throws an exception when trying to create the Schema (first line of JdbcCommon.convertToAvroStream, line 256).

Perhaps this logic works fine with other databases, but considering the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which does its job and returns the number of rows it used to populate the output Avro file.

Because no indication that the last resultSet.next() returned false (and that it is now closed) is passed back to QueryDatabaseTable, it cannot know it should break out of the while loop, so it calls convertToAvroStream once again. This time the method throws an exception when trying to create the Schema (first line of JdbcCommon.convertToAvroStream, line 256).

Perhaps this logic works fine with other databases, but considering the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 11:35 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which does its job and returns the number of rows it used to populate the output Avro file.

Because no indication that the last resultSet.next() returned false (and that it is now closed) is passed back to QueryDatabaseTable, it cannot know it should break out of the while loop, so it calls convertToAvroStream once again. This time the method throws an exception when trying to create the Schema (first line of JdbcCommon.convertToAvroStream, line 256).

Perhaps this logic works fine with other databases, but considering the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which does its job and returns the number of rows it used to populate the output Avro file, but gives no other indication that resultSet.next() returned false. That means QueryDatabaseTable cannot know it should break out of the while loop, so it calls convertToAvroStream once again. This time the method throws an exception when trying to create the Schema (first line of JdbcCommon.convertToAvroStream, line 256), which makes sense considering that rs.next() returned false the last time.

Perhaps this logic works fine with other databases, but since the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 11:31 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which does its job and returns the number of rows it used to populate the output Avro file, but gives no other indication that resultSet.next() returned false. That means QueryDatabaseTable cannot know it should break out of the while loop, so it calls convertToAvroStream once again. This time the method throws an exception when trying to create the Schema (first line of JdbcCommon.convertToAvroStream, line 256), which makes sense considering that rs.next() returned false the last time.

Perhaps this logic works fine with other databases, but since the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which does its job and returns the number of rows it used to populate the output Avro file. QueryDatabaseTable doesn't use that number to decide when it should break out of the while loop and calls convertToAvroStream once again. This time the method throws an exception when trying to create the Schema (first line of JdbcCommon.convertToAvroStream, line 256), which makes sense considering that rs.next() returned false the last time.

Perhaps this logic works fine with other databases, but since the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> 

[jira] [Commented] (NIFI-3093) HIVE Support for ExecuteSQL/QueryDatabaseTable/GenerateTableFetch

2018-03-02 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384308#comment-16384308
 ] 

Gardella Juan Pablo commented on NIFI-3093:
---

[~mattyb149]/[~markap14] I've created a PR at [https://github.com/apache/nifi/pull/2507] and tested it against HIVE; it worked fine. Could you please review it?

> HIVE Support for ExecuteSQL/QueryDatabaseTable/GenerateTableFetch
> -
>
> Key: NIFI-3093
> URL: https://issues.apache.org/jira/browse/NIFI-3093
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Major
>
> Update Query Database Table so that it can pull data from HIVE tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2507: Nifi 3093

2018-03-02 Thread gardellajuanpablo
GitHub user gardellajuanpablo opened a pull request:

https://github.com/apache/nifi/pull/2507

Nifi 3093

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gardellajuanpablo/nifi NIFI-3093

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2507.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2507


commit a50d6a35b78ba6d08e4c04efdb22ce6ee7c7af0d
Author: Gardella Juan Pablo 
Date:   2018-03-02T23:15:18Z

NIFI-3093 HIVE Support for ExecuteSQL/QueryDatabaseTable/GenerateTableFetch

Applied a simple approach to catch driver methods that are not supported. It is a 
different approach than https://github.com/apache/nifi/pull/1281, but that pull was 
very useful.

The fix does not change any API, so it does not break any classes that use 
them.

Tested all components against HIVE.
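
A rough, hypothetical illustration of the approach the commit message describes (tolerating optional JDBC methods that the Hive driver does not implement); the helper below is not taken from the PR, and the guarded call in the usage comment is only an example:

{code:java}
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;

final class DriverCompat {

    @FunctionalInterface
    interface JdbcCall<T> {
        T run() throws SQLException;
    }

    // Run an optional JDBC call; if the driver does not implement it,
    // fall back to a default instead of failing the whole processor.
    static <T> T callOrDefault(final JdbcCall<T> call, final T fallback) throws SQLException {
        try {
            return call.run();
        } catch (SQLFeatureNotSupportedException e) {
            return fallback;
        }
    }
}

// Example usage (illustrative only):
//   boolean readOnly = DriverCompat.callOrDefault(connection::isReadOnly, false);
{code}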

commit 2621a7df3a9731879ac003c3320af21712db2c78
Author: Gardella Juan Pablo 
Date:   2018-03-02T23:22:39Z

Merge remote-tracking branch 'upstream/master' into NIFI-3093




---


[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 11:14 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which does its job and returns the number of rows it used to populate the output Avro file. QueryDatabaseTable doesn't use that number to decide when it should break out of the while loop and calls convertToAvroStream once again. This time the method throws an exception when trying to create the Schema (first line of JdbcCommon.convertToAvroStream, line 256), which makes sense considering that rs.next() returned false the last time.

Perhaps this logic works fine with other databases, but since the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which is called inside a lambda. convertToAvroStream does its job but returns only the number of rows it used to populate the output Avro file. QueryDatabaseTable doesn't use that number to decide when it should break out of the while loop, and convertToAvroStream is called once again. This time it throws an exception when trying to create the Schema (first line of JdbcCommon.convertToAvroStream), which makes sense considering that the last rs.next() returned false.

Perhaps this logic works fine with other databases, but since the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 11:11 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which is called inside a lambda. convertToAvroStream does its job but returns only the number of rows it used to populate the output Avro file. QueryDatabaseTable doesn't use that number to decide when it should break out of the while loop, and convertToAvroStream is called once again. This time it throws an exception when trying to create the Schema (first line of JdbcCommon.convertToAvroStream), which makes sense considering that the last rs.next() returned false.

Perhaps this logic works fine with other databases, but since the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which is called inside a lambda. convertToAvroStream does its job but returns only the number of rows it used to populate the output Avro file. QueryDatabaseTable doesn't use that number to decide when it should break out of the while loop.

Perhaps this logic works fine with other databases, but since the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.
> at 
> 

[jira] [Created] (NIFI-4927) Create InfluxDB Query Processor

2018-03-02 Thread Mans Singh (JIRA)
Mans Singh created NIFI-4927:


 Summary: Create InfluxDB Query Processor
 Key: NIFI-4927
 URL: https://issues.apache.org/jira/browse/NIFI-4927
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Affects Versions: 1.5.0
Reporter: Mans Singh
Assignee: Mans Singh


Create InfluxDB Query processor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 10:57 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which is called inside a lambda. convertToAvroStream does its job but returns only the number of rows it used to populate the output Avro file. QueryDatabaseTable doesn't use that number to decide when it should break out of the while loop.

Perhaps this logic works fine with other databases, but since the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created but not used to control the while loop two lines below. The resultSet is handed off to another method, JdbcCommon.convertToAvroStream, which is called inside a lambda. convertToAvroStream does its job but returns only the number of rows it used to populate the output Avro file. QueryDatabaseTable doesn't use that number to decide when it should break out of the while loop.

Perhaps this logic works fine with other databases, but since the resultSet was created without using try-with-resources and I couldn't find any explicit resultSet.close(), I'm wondering whether it would be left open or not.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)

[jira] [Commented] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-03-02 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384282#comment-16384282
 ] 

Gardella Juan Pablo commented on NIFI-4893:
---

[~markap14] let me know if you see any problem.

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: issue1.zip
>
>
> Given an Avro schema that defines a default value for an array field, it cannot 
> be converted to a NiFi Record Schema.
> To reproduce the bug, try to convert the following Avro schema to Record 
> Schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> The conversion uses the org.apache.nifi.avro.AvroTypeUtil class. Attached is a 
> Maven project that reproduces the issue and also contains the fix.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  
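
For context, a hypothetical reproduction sketch of the conversion the attached 
project presumably exercises (the schema string is the one quoted above; whether 
Schema.Parser accepts the array default may vary with the Avro version bundled in 
NiFi):
{code:java}
import org.apache.avro.Schema;
import org.apache.nifi.avro.AvroTypeUtil;
import org.apache.nifi.serialization.record.RecordSchema;

public class AvroDefaultArrayRepro {
    public static void main(String[] args) {
        final String avroJson = "{"
                + "\"type\":\"record\",\"name\":\"Foo1\",\"namespace\":\"foo.namespace\","
                + "\"fields\":[{\"name\":\"listOfInt\","
                + "\"type\":{\"type\":\"array\",\"items\":\"int\"},"
                + "\"doc\":\"array of ints\",\"default\":0}]}";

        final Schema avroSchema = new Schema.Parser().parse(avroJson);
        // Before the fix, this conversion is the call expected to fail for the
        // array field's default value.
        final RecordSchema recordSchema = AvroTypeUtil.createSchema(avroSchema);
        System.out.println(recordSchema.getFieldNames());
    }
}
{code}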



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 10:54 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which is called 
inside a lambda. convertToAvroStream does its job but returns only the number 
of rows it used to populate the output Avro file. QueryDatabaseTable doesn't 
use that number to decide when it should break out of the while loop.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether it is left open.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
consumed by another method, JdbcCommon.convertToAvroStream, which is called 
inside a lambda. convertToAvroStream does its job but returns only the number 
of rows it consumed to populate the Avro file; QueryDatabaseTable doesn't use 
that number to decide whether it should break out of the while loop.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether the resultSet is left open.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
> at 
> 

[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException when there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Environment: 
ubuntu 16.04
nifi 1.5.0
db2 v10.5.0.7
JDBC driver db2jcc4-10.5.0.6

  was:
ubuntu 16.04
nifi 1.5.0
db2 v10.5.0.7


> PutDatabaseRecord throws ArrayIndexOutOfBoundsException when there is more 
> than one table with the same name on the database (in different schemas)
> ---
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to copy data from one table on DB2 database "A" to the same table 
> on another DB2 database "B". Schemas are identical.
> My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
> PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
> Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
> different instances.
> I get errors like this when my AvroReader is set to use the 'Schema Text' 
> property: 
> {code:java}
> PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
> org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
>  failed to process due to 
> org.apache.nifi.processor.exception.ProcessException: Failed to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed 
> to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Description: 
I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table: 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
According to [DB2's 
documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670], this 
particular exception could be avoided by adding this setting (semicolon 
included) to the JDBC connection URL:
{code:java}
allowNextOnExhaustedResultSet=1;{code}
But it didn't make a difference.
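
For reference, a minimal sketch of what that looks like in a DB2 JCC connection URL 
(host, port, database and credentials are placeholders; this only illustrates the 
form of URL that was tried, not a confirmed fix):
{code:java}
// In the DB2 JCC URL format, extra driver properties follow the database name
// after a colon, and each property is terminated with a semicolon.
import java.sql.Connection;
import java.sql.DriverManager;

public class Db2UrlSketch {
    public static void main(String[] args) throws Exception {
        final String url = "jdbc:db2://db2host.example.com:50000/MYDB"
                + ":allowNextOnExhaustedResultSet=1;";
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("connected: " + !con.isClosed());
        }
    }
}
{code}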

I also tried to set 

[jira] [Created] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)
Marcio Sugar created NIFI-4926:
--

 Summary: QueryDatabaseTable throws SqlException after reading 
entire DB2 table
 Key: NIFI-4926
 URL: https://issues.apache.org/jira/browse/NIFI-4926
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.5.0
 Environment: ubuntu 16.04
nifi 1.5.0
db2 v10.5.0.7
JDBC driver db2jcc4-10.5.0.6
Reporter: Marcio Sugar


I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table:

 

 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
 


[jira] [Created] (NIFI-4925) Ranger Authorizer - Memory Leak

2018-03-02 Thread Matt Gilman (JIRA)
Matt Gilman created NIFI-4925:
-

 Summary: Ranger Authorizer - Memory Leak
 Key: NIFI-4925
 URL: https://issues.apache.org/jira/browse/NIFI-4925
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Matt Gilman
Assignee: Matt Gilman


Authorization requests/results are now explicitly audited. This change was made 
because Ranger was previously auditing a lot of false positives. That is partly 
because NiFi uses authorization checks to determine which features a user has 
permission to use, and those checks enable/disable various parts of the UI. The 
remainder of the false positives came from the authorizer not knowing the entire 
context of the request; for instance, when a Processor has no policy, we check 
its parent, and so on.

The memory leak is due to the authorizer holding onto authorization results 
that are never destined for auditing. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException when there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Summary: PutDatabaseRecord throws ArrayIndexOutOfBoundsException when there 
is more than one table with the same name on the database (in different 
schemas)  (was: PutDatabaseRecord throws ArrayIndexOutOfBoundsException where 
there is more than one table with the same name on the database (in different 
schemas))

> PutDatabaseRecord throws ArrayIndexOutOfBoundsException when there is more 
> than one table with the same name on the database (in different schemas)
> ---
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to copy data from one table on DB2 database "A" to the same table 
> on another DB2 database "B". Schemas are identical.
> My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
> PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
> Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
> different instances.
> I get errors like this when my AvroReader is set to use the 'Schema Text' 
> property: 
> {code:java}
> PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
> org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
>  failed to process due to 
> org.apache.nifi.processor.exception.ProcessException: Failed to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed 
> to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383989#comment-16383989
 ] 

Marcio Sugar edited comment on NIFI-4924 at 3/2/18 7:14 PM:


Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.

However, even after fixing the typo I'm still getting a failure, so it seems 
this is not the root cause of my problem.
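
As background for the null-schema behaviour described above: per the JDBC javadoc 
for DatabaseMetaData.getPrimaryKeys, passing null for the schema means the schema 
name is not used to narrow the search, so on a database with same-named tables in 
several schemas the result set can mix primary-key rows from all of them. A minimal 
sketch (placeholder connection details and table name) that makes this visible:
{code:java}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class PrimaryKeysSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://db2host.example.com:50000/MYDB", "user", "password")) {
            final DatabaseMetaData dmd = con.getMetaData();
            // null schema: rows may come back for every schema containing "USER"
            try (ResultSet pkrs = dmd.getPrimaryKeys(null, null, "USER")) {
                while (pkrs.next()) {
                    System.out.println(pkrs.getString("TABLE_SCHEM") + "."
                            + pkrs.getString("TABLE_NAME") + " -> "
                            + pkrs.getString("COLUMN_NAME"));
                }
            }
        }
    }
}
{code}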


was (Author: msugar):
Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.

However, even after fixing the typo I'm still getting the 
ArrayIndexOutOfBoundsexception. So it seems this is not the root cause of my 
problem.

> PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more 
> than one table with the same name on the database (in different schemas)
> 
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to copy data from one table on DB2 database "A" to the same table 
> on another DB2 database "B". Schemas are identical.
> My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
> PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
> Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
> different instances.
> I get errors like this when my AvroReader is set to use the 'Schema Text' 
> property: 
> {code:java}
> PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
> org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
>  failed to process due to 
> org.apache.nifi.processor.exception.ProcessException: Failed to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed 
> to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Description: 
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

I get errors like this when my AvroReader is set to use the 'Schema Text' 
property: 
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}

  was:
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
 


> PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more 
> than one table with the same name on the database (in different schemas)
> 
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to copy data from one table on DB2 database "A" to the same table 
> on another DB2 database "B". Schemas are identical.
> My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
> PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
> Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
> different instances.
> I get errors like this when my AvroReader is set to use the 'Schema Text' 
> property: 
> {code:java}
> PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
> org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
>  failed to process due to 
> org.apache.nifi.processor.exception.ProcessException: Failed to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed 
> to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> 

[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Description: 
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
 

  was:
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.


> PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more 
> than one table with the same name on the database (in different schemas)
> 
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to copy data from one table on DB2 database "A" to the same table 
> on another DB2 database "B". Schemas are identical.
> My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
> PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
> Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
> different instances.
> When I set the AvroReader to use the 'Schema Text' property, I get errors 
> like this:
> {code:java}
> PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
> org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
>  

[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Description: 
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.

  was:
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
 

Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
 So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

 

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
 

This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.


> PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more 
> than one table with the same name on the database (in different schemas)
> 
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>

[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Description: 
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
 

Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
 So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

 

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
 

This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.

  was:
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:

 
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter:

 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
 

So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:

 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
 

This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.


> PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more 
> than one table with the same name on the database (in different schemas)
> 
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>  

[jira] [Created] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)
Marcio Sugar created NIFI-4924:
--

 Summary: PutDatabaseRecord throws ArrayIndexOutOfBoundsException 
where there is more than one table with the same name on the database (in 
different schemas)
 Key: NIFI-4924
 URL: https://issues.apache.org/jira/browse/NIFI-4924
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.5.0
 Environment: ubuntu 16.04
nifi 1.5.0
db2 v10.5.0.7
Reporter: Marcio Sugar


I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:

 
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter:

 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
 

So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:

 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
 

This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4923) Create new bundle for HDFS processors to support Hadoop 3.x

2018-03-02 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-4923:
--
Description: The current HDFS processors use hadoop-client 2.7.3, which cannot 
be used with a Hadoop 3 cluster using the default client configs. At a minimum, 
new compression codecs were added that are specified in core-site.xml and do 
not exist in hadoop-client 2.7.3.

> Create new bundle for HDFS processors to support Hadoop 3.x
> ---
>
> Key: NIFI-4923
> URL: https://issues.apache.org/jira/browse/NIFI-4923
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> The current HDFS processors use hadoop-client 2.7.3, which cannot be used with 
> a Hadoop 3 cluster using the default client configs. At a minimum, new 
> compression codecs were added that are specified in core-site.xml and do not 
> exist in hadoop-client 2.7.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4923) Create new bundle for HDFS processors to support Hadoop 3.x

2018-03-02 Thread Jeff Storck (JIRA)
Jeff Storck created NIFI-4923:
-

 Summary: Create new bundle for HDFS processors to support Hadoop 
3.x
 Key: NIFI-4923
 URL: https://issues.apache.org/jira/browse/NIFI-4923
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Jeff Storck
Assignee: Jeff Storck






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4922) Add badges to the README file

2018-03-02 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4922:
-
Status: Patch Available  (was: Open)

> Add badges to the README file
> -
>
> Key: NIFI-4922
> URL: https://issues.apache.org/jira/browse/NIFI-4922
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
> Attachments: Screen Shot 2018-03-02 at 6.10.20 PM.png
>
>
> Add the following badges to the README file:
>  * Number of docker pulls and link to docker hub
>  * Latest version available in Maven Central
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4922) Add badges to the README file

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383837#comment-16383837
 ] 

ASF GitHub Bot commented on NIFI-4922:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2506

NIFI-4922 - Add badges to the README file

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi badges

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2506.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2506


commit 39a1e7ecafaedae054dcf39f8e96d928b46dff34
Author: Pierre Villard 
Date:   2018-03-02T17:08:29Z

NIFI-4922 - Add badges to the README file

Signed-off-by: Pierre Villard 




> Add badges to the README file
> -
>
> Key: NIFI-4922
> URL: https://issues.apache.org/jira/browse/NIFI-4922
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
> Attachments: Screen Shot 2018-03-02 at 6.10.20 PM.png
>
>
> Add the following badges to the README file:
>  * Number of docker pulls and link to docker hub
>  * Latest version available in Maven Central
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2506: NIFI-4922 - Add badges to the README file

2018-03-02 Thread pvillard31
GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2506

NIFI-4922 - Add badges to the README file

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi badges

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2506.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2506


commit 39a1e7ecafaedae054dcf39f8e96d928b46dff34
Author: Pierre Villard 
Date:   2018-03-02T17:08:29Z

NIFI-4922 - Add badges to the README file

Signed-off-by: Pierre Villard 




---


[jira] [Created] (NIFI-4922) Add badges to the README file

2018-03-02 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-4922:


 Summary: Add badges to the README file
 Key: NIFI-4922
 URL: https://issues.apache.org/jira/browse/NIFI-4922
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Reporter: Pierre Villard
Assignee: Pierre Villard
 Attachments: Screen Shot 2018-03-02 at 6.10.20 PM.png

Add the following badges to the README file:
 * Number of docker pulls and link to docker hub
 * Latest version available in Maven Central

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4773) Database Fetch processor setup is incorrect

2018-03-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383783#comment-16383783
 ] 

ASF subversion and git services commented on NIFI-4773:
---

Commit dd58a376c9050bdb280e29125cce4c55701b29df in nifi's branch 
refs/heads/master from [~ca9mbu]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=dd58a37 ]

NIFI-4773 - Fixed column type map initialization in QueryDatabaseTable

Signed-off-by: Pierre Villard 

This closes #2504.


> Database Fetch processor setup is incorrect
> ---
>
> Key: NIFI-4773
> URL: https://issues.apache.org/jira/browse/NIFI-4773
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Wynner
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The QueryDatabaseTable processor attempts to make a database connection 
> during setup (OnScheduled), this can cause issues with the flow when errors 
> occur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4773) Database Fetch processor setup is incorrect

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383784#comment-16383784
 ] 

ASF GitHub Bot commented on NIFI-4773:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2504


> Database Fetch processor setup is incorrect
> ---
>
> Key: NIFI-4773
> URL: https://issues.apache.org/jira/browse/NIFI-4773
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Wynner
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The QueryDatabaseTable processor attempts to make a database connection 
> during setup (OnScheduled), this can cause issues with the flow when errors 
> occur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4773) Database Fetch processor setup is incorrect

2018-03-02 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4773:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Database Fetch processor setup is incorrect
> ---
>
> Key: NIFI-4773
> URL: https://issues.apache.org/jira/browse/NIFI-4773
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Wynner
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The QueryDatabaseTable processor attempts to make a database connection 
> during setup (OnScheduled), this can cause issues with the flow when errors 
> occur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2504: NIFI-4773: Fixed column type map initialization in ...

2018-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2504


---


[jira] [Commented] (NIFI-4773) Database Fetch processor setup is incorrect

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383780#comment-16383780
 ] 

ASF GitHub Bot commented on NIFI-4773:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2504
  
+1, merging to master, thanks @mattyb149 !


> Database Fetch processor setup is incorrect
> ---
>
> Key: NIFI-4773
> URL: https://issues.apache.org/jira/browse/NIFI-4773
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Wynner
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The QueryDatabaseTable processor attempts to make a database connection 
> during setup (OnScheduled), this can cause issues with the flow when errors 
> occur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2504: NIFI-4773: Fixed column type map initialization in QueryDa...

2018-03-02 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2504
  
+1, merging to master, thanks @mattyb149 !


---


[jira] [Commented] (NIFI-3380) Multiple Versions of the Same Component

2018-03-02 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383776#comment-16383776
 ] 

Jorge Machado commented on NIFI-3380:
-

Hi guys, I saw this and I have multiple versions of the same custom processor. 
When I try to check it out from the registry, it says: 

Multiple versions of processorName exist. No exact match for 
default:processorName:unversioned.

How do I set a default version?

> Multiple Versions of the Same Component
> ---
>
> Key: NIFI-3380
> URL: https://issues.apache.org/jira/browse/NIFI-3380
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Bryan Bende
>Assignee: Matt Gilman
>Priority: Major
> Fix For: 1.2.0
>
> Attachments: nifi-example-processors-nar-1.0.nar, 
> nifi-example-processors-nar-2.0.nar, nifi-example-service-api-nar-1.0.nar, 
> nifi-example-service-api-nar-2.0.nar, nifi-example-service-nar-1.0.nar, 
> nifi-example-service-nar-1.1.nar, nifi-example-service-nar-2.0.nar
>
>
> This ticket is to track the work for supporting multiple versions of the same 
> component within NiFi. The overall design for this feature is described in 
> detail at the following wiki page:
> https://cwiki.apache.org/confluence/display/NIFI/Multiple+Versions+of+the+Same+Extension
> This ticket will track only the core NiFi work, and a separate ticket will be 
> created to track enhancements for the NAR Maven Plugin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4920) Change Version & Revert Local Changes incorrectly reset sensitive properties

2018-03-02 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4920:
-
   Resolution: Fixed
Fix Version/s: 1.6.0
   Status: Resolved  (was: Patch Available)

> Change Version & Revert Local Changes incorrectly reset sensitive properties
> 
>
> Key: NIFI-4920
> URL: https://issues.apache.org/jira/browse/NIFI-4920
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.6.0
>
>
> Create a flow with a processor that has a sensitive property and set a value, 
> save v1 of the flow to registry.
> Make some local changes to the flow and then revert local changes and notice 
> the sensitive property is no longer set.
> Set the sensitive property again, make some other change to the flow, and 
> save v2.
> Change version back to v1 and notice the sensitive property is no longer set 
> again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4920) Change Version & Revert Local Changes incorrectly reset sensitive properties

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383769#comment-16383769
 ] 

ASF GitHub Bot commented on NIFI-4920:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2505


> Change Version & Revert Local Changes incorrectly reset sensitive properties
> 
>
> Key: NIFI-4920
> URL: https://issues.apache.org/jira/browse/NIFI-4920
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.6.0
>
>
> Create a flow with a processor that has a sensitive property and set a value, 
> save v1 of the flow to registry.
> Make some local changes to the flow and then revert local changes and notice 
> the sensitive property is no longer set.
> Set the sensitive property again, make some other change to the flow, and 
> save v2.
> Change version back to v1 and notice the sensitive property is no longer set 
> again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4920) Change Version & Revert Local Changes incorrectly reset sensitive properties

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383768#comment-16383768
 ] 

ASF GitHub Bot commented on NIFI-4920:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2505
  
Thanks @bbende! Code looks good. Verified that I was able to recreate the issue 
and that it went away with the patch. +1 merged to master.


> Change Version & Revert Local Changes incorrectly reset sensitive properties
> 
>
> Key: NIFI-4920
> URL: https://issues.apache.org/jira/browse/NIFI-4920
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.6.0
>
>
> Create a flow with a processor that has a sensitive property and set a value, 
> save v1 of the flow to registry.
> Make some local changes to the flow and then revert local changes and notice 
> the sensitive property is no longer set.
> Set the sensitive property again, make some other change to the flow, and 
> save v2.
> Change version back to v1 and notice the sensitive property is no longer set 
> again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4920) Change Version & Revert Local Changes incorrectly reset sensitive properties

2018-03-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383767#comment-16383767
 ] 

ASF subversion and git services commented on NIFI-4920:
---

Commit 179e967b47920173c013d81411c6086ac1bff326 in nifi's branch 
refs/heads/master from [~bbende]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=179e967 ]

NIFI-4920 Skipping sensitive properties when updating component properties from 
versioned component. This closes #2505.

Signed-off-by: Mark Payne 
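
As a rough illustration of the approach named in that commit message, the sketch 
below filters out sensitive entries before versioned property values are applied, 
so locally configured secrets are left untouched. It is plain Java, not the actual 
framework code, and the class and parameter names are hypothetical.

{code}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hedged sketch only: drop sensitive properties from the versioned values so the
// component keeps whatever secret values are already configured locally.
final class VersionedPropertyFilter {

    static Map<String, String> withoutSensitive(final Map<String, String> versionedProperties,
                                                final Set<String> sensitivePropertyNames) {
        final Map<String, String> toApply = new LinkedHashMap<>(versionedProperties);
        // Sensitive entries are removed before the update, so "Change Version" and
        // "Revert Local Changes" never overwrite the locally configured values.
        toApply.keySet().removeAll(sensitivePropertyNames);
        return toApply;
    }
}
{code}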


> Change Version & Revert Local Changes incorrectly reset sensitive properties
> 
>
> Key: NIFI-4920
> URL: https://issues.apache.org/jira/browse/NIFI-4920
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.6.0
>
>
> Create a flow with a processor that has a sensitive property and set a value, 
> save v1 of the flow to registry.
> Make some local changes to the flow and then revert local changes and notice 
> the sensitive property is no longer set.
> Set the sensitive property again, make some other change to the flow, and 
> save v2.
> Change version back to v1 and notice the sensitive property is no longer set 
> again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2505: NIFI-4920 Skipping sensitive properties when updating comp...

2018-03-02 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2505
  
Thanks @bbende! Code looks good. Verified that I was able to recreate the issue 
and that it went away with the patch. +1 merged to master.


---


[GitHub] nifi pull request #2505: NIFI-4920 Skipping sensitive properties when updati...

2018-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2505


---


[GitHub] nifi pull request #2504: NIFI-4773: Fixed column type map initialization in ...

2018-03-02 Thread mgaido91
Github user mgaido91 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2504#discussion_r171887381
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -197,6 +198,12 @@ public void setup(final ProcessContext context) {
 maxValueProperties = 
getDefaultMaxValueProperties(context.getProperties());
 }
 
+@OnStopped
+public void stop() {
+// Reset the column type map in case properties change
+setupComplete.set(false);
--- End diff --

I see, thanks for your explanation


---


[jira] [Commented] (NIFI-4773) Database Fetch processor setup is incorrect

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383746#comment-16383746
 ] 

ASF GitHub Bot commented on NIFI-4773:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2504#discussion_r171886997
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -197,6 +198,12 @@ public void setup(final ProcessContext context) {
 maxValueProperties = 
getDefaultMaxValueProperties(context.getProperties());
 }
 
+@OnStopped
+public void stop() {
+// Reset the column type map in case properties change
+setupComplete.set(false);
--- End diff --

That's what it used to do; NIFI-4773 was to do the opposite. It's not 
recommended to try to connect to external systems in `@OnScheduled` due to 
timeouts and other possible errors. Instead we moved it into onTrigger, but only 
to be done once.
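
A minimal sketch of that pattern, assuming a hypothetical processor and a 
hypothetical setupColumnTypeMap() helper in place of the real database logic: 
the connection-dependent setup runs once from onTrigger, and the guard is reset 
in @OnStopped so property changes are picked up on the next start.

{code}
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.nifi.annotation.lifecycle.OnStopped;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

public class LazySetupProcessor extends AbstractProcessor {

    // Guards the one-time, connection-dependent setup work.
    private final AtomicBoolean setupComplete = new AtomicBoolean(false);

    @OnStopped
    public void stop() {
        // Reset so the next start re-reads external metadata, picking up any
        // property changes made while the processor was stopped.
        setupComplete.set(false);
    }

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
        if (!setupComplete.get()) {
            // Hypothetical helper standing in for the connection-dependent work
            // (e.g. building the column type map). Failures here surface as normal
            // onTrigger errors instead of blocking scheduling in @OnScheduled.
            setupColumnTypeMap(context);
            setupComplete.set(true);
        }
        // ... regular per-trigger processing would follow here ...
    }

    private void setupColumnTypeMap(final ProcessContext context) {
        // placeholder for the external-system call
    }
}
{code}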


> Database Fetch processor setup is incorrect
> ---
>
> Key: NIFI-4773
> URL: https://issues.apache.org/jira/browse/NIFI-4773
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Wynner
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The QueryDatabaseTable processor attempts to make a database connection 
> during setup (OnScheduled), this can cause issues with the flow when errors 
> occur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4773) Database Fetch processor setup is incorrect

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383751#comment-16383751
 ] 

ASF GitHub Bot commented on NIFI-4773:
--

Github user mgaido91 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2504#discussion_r171887381
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -197,6 +198,12 @@ public void setup(final ProcessContext context) {
 maxValueProperties = 
getDefaultMaxValueProperties(context.getProperties());
 }
 
+@OnStopped
+public void stop() {
+// Reset the column type map in case properties change
+setupComplete.set(false);
--- End diff --

I see, thanks for your explanation


> Database Fetch processor setup is incorrect
> ---
>
> Key: NIFI-4773
> URL: https://issues.apache.org/jira/browse/NIFI-4773
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Wynner
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The QueryDatabaseTable processor attempts to make a database connection 
> during setup (OnScheduled), this can cause issues with the flow when errors 
> occur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2504: NIFI-4773: Fixed column type map initialization in ...

2018-03-02 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2504#discussion_r171886997
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -197,6 +198,12 @@ public void setup(final ProcessContext context) {
 maxValueProperties = 
getDefaultMaxValueProperties(context.getProperties());
 }
 
+@OnStopped
+public void stop() {
+// Reset the column type map in case properties change
+setupComplete.set(false);
--- End diff --

That's what it used to do; NIFI-4773 was to do the opposite. It's not 
recommended to try to connect to external systems in `@OnScheduled` due to 
timeouts and other possible errors. Instead we moved it into onTrigger, but only 
to be done once.


---


[jira] [Updated] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-03-02 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-2630:
-
   Resolution: Fixed
Fix Version/s: 1.6.0
   Status: Resolved  (was: Patch Available)

> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Fix For: 1.6.0
>
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessages as well as BytesMessage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-4920) Change Version & Revert Local Changes incorrectly reset sensitive properties

2018-03-02 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende reassigned NIFI-4920:
-

Assignee: Bryan Bende

> Change Version & Revert Local Changes incorrectly reset sensitive properties
> 
>
> Key: NIFI-4920
> URL: https://issues.apache.org/jira/browse/NIFI-4920
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>
> Create a flow with a processor that has a sensitive property and set a value, 
> save v1 of the flow to registry.
> Make some local changes to the flow and then revert local changes and notice 
> the sensitive property is no longer set.
> Set the sensitive property again, make some other change to the flow, and 
> save v2.
> Change version back to v1 and notice the sensitive property is no longer set 
> again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4920) Change Version & Revert Local Changes incorrectly reset sensitive properties

2018-03-02 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4920:
--
Status: Patch Available  (was: Open)

> Change Version & Revert Local Changes incorrectly reset sensitive properties
> 
>
> Key: NIFI-4920
> URL: https://issues.apache.org/jira/browse/NIFI-4920
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Bryan Bende
>Priority: Major
>
> Create a flow with a processor that has a sensitive property and set a value, 
> save v1 of the flow to registry.
> Make some local changes to the flow and then revert local changes and notice 
> the sensitive property is no longer set.
> Set the sensitive property again, make some other change to the flow, and 
> save v2.
> Change version back to v1 and notice the sensitive property is no longer set 
> again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4920) Change Version & Revert Local Changes incorrectly reset sensitive properties

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383728#comment-16383728
 ] 

ASF GitHub Bot commented on NIFI-4920:
--

GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/2505

NIFI-4920 Skipping sensitive properties when updating component prope…

…rties from versioned component

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-4920

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2505.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2505


commit 0b23d961ae3ff089f469a1e8768c854d10777ce8
Author: Bryan Bende 
Date:   2018-03-02T15:52:22Z

NIFI-4920 Skipping sensitive properties when updating component properties 
from versioned component




> Change Version & Revert Local Changes incorrectly reset sensitive properties
> 
>
> Key: NIFI-4920
> URL: https://issues.apache.org/jira/browse/NIFI-4920
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Bryan Bende
>Priority: Major
>
> Create a flow with a processor that has a sensitive property and set a value, 
> save v1 of the flow to registry.
> Make some local changes to the flow and then revert local changes and notice 
> the sensitive property is no longer set.
> Set the sensitive property again, make some other change to the flow, and 
> save v2.
> Change version back to v1 and notice the sensitive property is no longer set 
> again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2505: NIFI-4920 Skipping sensitive properties when updati...

2018-03-02 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/2505

NIFI-4920 Skipping sensitive properties when updating component prope…

…rties from versioned component

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-4920

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2505.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2505


commit 0b23d961ae3ff089f469a1e8768c854d10777ce8
Author: Bryan Bende 
Date:   2018-03-02T15:52:22Z

NIFI-4920 Skipping sensitive properties when updating component properties 
from versioned component




---


[jira] [Commented] (NIFI-4773) Database Fetch processor setup is incorrect

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383712#comment-16383712
 ] 

ASF GitHub Bot commented on NIFI-4773:
--

Github user mgaido91 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2504#discussion_r171882320
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -197,6 +198,12 @@ public void setup(final ProcessContext context) {
 maxValueProperties = 
getDefaultMaxValueProperties(context.getProperties());
 }
 
+@OnStopped
+public void stop() {
+// Reset the column type map in case properties change
+setupComplete.set(false);
--- End diff --

can't we just do the setup in `@OnScheduled` and move the setupComplete 
flag only to `GenerateTableFetch` or remove it? I think the code would be more 
straightforward like this. What do you think?


> Database Fetch processor setup is incorrect
> ---
>
> Key: NIFI-4773
> URL: https://issues.apache.org/jira/browse/NIFI-4773
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Wynner
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The QueryDatabaseTable processor attempts to make a database connection 
> during setup (OnScheduled), this can cause issues with the flow when errors 
> occur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2504: NIFI-4773: Fixed column type map initialization in ...

2018-03-02 Thread mgaido91
Github user mgaido91 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2504#discussion_r171882320
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -197,6 +198,12 @@ public void setup(final ProcessContext context) {
 maxValueProperties = 
getDefaultMaxValueProperties(context.getProperties());
 }
 
+@OnStopped
+public void stop() {
+// Reset the column type map in case properties change
+setupComplete.set(false);
--- End diff --

can't we just do the setup in `@OnScheduled` and move the setupComplete 
flag only to `GenerateTableFetch` or remove it? I think the code would be more 
straightforward like this. What do you think?


---


[jira] [Commented] (NIFI-4882) CSVRecordReader should utilize specified date/time/timestamp format at its convertSimpleIfPossible method

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383700#comment-16383700
 ] 

ASF GitHub Bot commented on NIFI-4882:
--

Github user derekstraka commented on the issue:

https://github.com/apache/nifi/pull/2473
  
@ijokarumawak - I believe I have addressed all of your comments.


> CSVRecordReader should utilize specified date/time/timestamp format at its 
> convertSimpleIfPossible method
> -
>
> Key: NIFI-4882
> URL: https://issues.apache.org/jira/browse/NIFI-4882
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Koji Kawamura
>Assignee: Derek Straka
>Priority: Major
>
> CSVRecordReader.convertSimpleIfPossible method is used by ValidateRecord. The 
> method does not coerce values to the target schema field type if the raw 
> string representation in the input CSV file is not compatible.
> The type compatibility check is implemented as follows. But it does not use 
> user specified date/time/timestamp format:
> {code}
> // This will return 'false' for input '01/01/1900' when user 
> specified custom format 'MM/dd/yyyy'
> if (DataTypeUtils.isCompatibleDataType(trimmed, dataType)) {
> // The LAZY_DATE_FORMAT should be used to check 
> compatibility, too.
> return DataTypeUtils.convertType(trimmed, dataType, 
> LAZY_DATE_FORMAT, LAZY_TIME_FORMAT, LAZY_TIMESTAMP_FORMAT, fieldName);
> } else {
> return value;
> }
> {code}
> If input date strings have a different format than the default format 
> 'yyyy-MM-dd', then the ValidateRecord processor cannot validate input records.
> JacksonCSVRecordReader has methods identical to those in CSVRecordReader; the 
> two classes should share an abstract base class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2473: NIFI-4882: Resolve issue with parsing custom date, time, a...

2018-03-02 Thread derekstraka
Github user derekstraka commented on the issue:

https://github.com/apache/nifi/pull/2473
  
@ijokarumawak - I believe I have addressed all of your comments.


---


[GitHub] nifi pull request #2499: Nifi-4918 JMS Connection Factory setting the dynami...

2018-03-02 Thread jugi92
GitHub user jugi92 reopened a pull request:

https://github.com/apache/nifi/pull/2499

Nifi-4918 JMS Connection Factory setting the dynamic Properties wrong

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jugi92/nifi NIFI-4918

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2499.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2499


commit d3e62ed82acc856681d94da559060b5ce823e961
Author: Julian Gimbel 
Date:   2018-02-28T12:39:32Z

looping over several methods to try and fit the dynamic attribute.
If failed, use first method and throw error if not working.

commit 255c8478fb2b4957bcc49c41ca069149387f3b32
Author: Julian Gimbel 
Date:   2018-02-28T12:39:32Z

NIFI-4918 JMS Connection Factory setting the dynamic Properties wrong.
Now looping over several methods to try and fit the dynamic attribute.
If failed, use first method and throw error if not working.

commit 7446c9e5f447a4669a89bb87ec509fd70606b91f
Author: Julian Gimbel 
Date:   2018-02-28T16:12:27Z

Merge branch 'NIFI-4918' of https://github.com/jugi92/nifi into NIFI-4918




---


[GitHub] nifi pull request #2499: Nifi-4918 JMS Connection Factory setting the dynami...

2018-03-02 Thread jugi92
Github user jugi92 closed the pull request at:

https://github.com/apache/nifi/pull/2499


---


[GitHub] nifi pull request #2499: Nifi-4918 JMS Connection Factory setting the dynami...

2018-03-02 Thread jugi92
Github user jugi92 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2499#discussion_r171875056
  
--- Diff: 
nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java
 ---
@@ -257,21 +256,26 @@ private void 
setConnectionFactoryProperties(ConfigurationContext context) {
  */
 private void setProperty(String propertyName, Object propertyValue) {
 String methodName = this.toMethodName(propertyName);
-Method method = Utils.findMethod(methodName, 
this.connectionFactory.getClass());
-if (method != null) {
+Method[] methods = Utils.findMethods(methodName, 
this.connectionFactory.getClass());
+if (methods != null && methods.length < 0) {
--- End diff --

Yes, sorry, it was supposed to be methods.length > 0. 
I will change and open a new pull request.
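
For reference, a hedged sketch of the corrected pattern being discussed, written 
against plain java.lang.reflect rather than the provider's internal Utils helper 
(class and method names here are illustrative): gather every public single-argument 
method with the requested name, then try each overload until one accepts the value.

{code}
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Sketch only, not the actual JMSConnectionFactoryProvider code.
public final class ReflectiveSetter {

    public static void setProperty(final Object target, final String methodName, final Object value) {
        final List<Method> candidates = new ArrayList<>();
        for (final Method method : target.getClass().getMethods()) {
            if (method.getName().equals(methodName) && method.getParameterCount() == 1) {
                candidates.add(method);
            }
        }
        if (candidates.isEmpty()) {
            // the condition the review comment is about: require > 0 candidates, not < 0
            throw new IllegalArgumentException("No setter named " + methodName + " on " + target.getClass());
        }
        for (final Method method : candidates) {
            try {
                method.invoke(target, value);
                return; // first overload that accepts the value wins
            } catch (final ReflectiveOperationException | IllegalArgumentException e) {
                // incompatible overload; try the next one
            }
        }
        throw new IllegalArgumentException("No overload of " + methodName + " accepted " + value);
    }
}
{code}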


---


[jira] [Commented] (NIFI-4921) better support for promoting NiFi processor parameters between dev and prod environments

2018-03-02 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383684#comment-16383684
 ] 

Joseph Witt commented on NIFI-4921:
---

+1 to bende's suggested model.

The gist is we want to allow the parameters that can be heavily 
instance/deployment environment specific to be decoupled from the flow 
configuration itself.  The approach he suggests would allow that.

> better support for promoting NiFi processor parameters between dev and prod 
> environments
> 
>
> Key: NIFI-4921
> URL: https://issues.apache.org/jira/browse/NIFI-4921
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Flow Versioning, SDLC
>Affects Versions: 1.5.0
>Reporter: Boris Tyukin
>Priority: Minor
>
> Need a better way to promote processor parameters, like "Concurrent tasks" 
> from development to production environments. 
> Bryan Bende suggested:
> I think we may want to consider making the concurrent tasks work
> similar to variables, in that we capture the concurrent tasks that the
> flow was developed with and would use it initially, but then if you
> have modified this value in the target environment it would not
> trigger a local change and would be retained across upgrades so that
> you don't have to reset it.
>  
> I would extend this Jira to similar parameters that need to be changed 
> manually when promoting flows to production from dev/test environments 
> and that cannot use expression language or variables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4921) better support for promoting NiFi processor parameters between dev and prod environments

2018-03-02 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4921:
--
Affects Version/s: (was: 1.6.0)
   1.5.0

> better support for promoting NiFi processor parameters between dev and prod 
> environments
> 
>
> Key: NIFI-4921
> URL: https://issues.apache.org/jira/browse/NIFI-4921
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Flow Versioning, SDLC
>Affects Versions: 1.5.0
>Reporter: Boris Tyukin
>Priority: Minor
>
> Need a better way to promote processor parameters, like "Concurrent tasks" 
> from development to production environments. 
> Bryan Bende suggested:
> I think we may want to consider making the concurrent tasks work
> similar to variables, in that we capture the concurrent tasks that the
> flow was developed with and would use it initially, but then if you
> have modified this value in the target environment it would not
> trigger a local change and would be retained across upgrades so that
> you don't have to reset it.
>  
> I would extend this Jira to similar parameters that need to be changed 
> manually when promoting flows to production from dev/test environments 
> and that cannot use expression language or variables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4773) Database Fetch processor setup is incorrect

2018-03-02 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4773:
---
Status: Patch Available  (was: Reopened)

> Database Fetch processor setup is incorrect
> ---
>
> Key: NIFI-4773
> URL: https://issues.apache.org/jira/browse/NIFI-4773
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Wynner
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The QueryDatabaseTable processor attempts to make a database connection 
> during setup (OnScheduled), this can cause issues with the flow when errors 
> occur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4773) Database Fetch processor setup is incorrect

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383662#comment-16383662
 ] 

ASF GitHub Bot commented on NIFI-4773:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2504

NIFI-4773: Fixed column type map initialization in QueryDatabaseTable

I added unit tests to both QDT and GTF, the former to show the issue is 
fixed (the test fails without the fix), and the latter to show that the fix 
does not introduce a regression to GTF (since there was a change to the shared 
base class).

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-4773

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2504.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2504


commit e11e2f841b005836a16b54bfff9d2f112d978efe
Author: Matthew Burgess 
Date:   2018-03-02T14:49:25Z

NIFI-4773: Fixed column type map initialization in QueryDatabaseTable




> Database Fetch processor setup is incorrect
> ---
>
> Key: NIFI-4773
> URL: https://issues.apache.org/jira/browse/NIFI-4773
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Wynner
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The QueryDatabaseTable processor attempts to make a database connection 
> during setup (OnScheduled), this can cause issues with the flow when errors 
> occur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2504: NIFI-4773: Fixed column type map initialization in ...

2018-03-02 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2504

NIFI-4773: Fixed column type map initialization in QueryDatabaseTable

I added unit tests to both QDT and GTF, the former to show the issue is 
fixed (the test fails without the fix), and the latter to show that the fix 
does not introduce a regression to GTF (since there was a change to the shared 
base class).

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-4773

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2504.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2504


commit e11e2f841b005836a16b54bfff9d2f112d978efe
Author: Matthew Burgess 
Date:   2018-03-02T14:49:25Z

NIFI-4773: Fixed column type map initialization in QueryDatabaseTable




---


[GitHub] nifi pull request #2499: Nifi-4918 JMS Connection Factory setting the dynami...

2018-03-02 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2499#discussion_r171863515
  
--- Diff: 
nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java
 ---
@@ -257,21 +256,26 @@ private void 
setConnectionFactoryProperties(ConfigurationContext context) {
  */
 private void setProperty(String propertyName, Object propertyValue) {
 String methodName = this.toMethodName(propertyName);
-Method method = Utils.findMethod(methodName, 
this.connectionFactory.getClass());
-if (method != null) {
+Method[] methods = Utils.findMethods(methodName, 
this.connectionFactory.getClass());
+if (methods != null && methods.length < 0) {
--- End diff --

methods.length can never be less than 0 here... was that supposed to be > 0 
perhaps?


---


[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-03-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383646#comment-16383646
 ] 

ASF subversion and git services commented on NIFI-2630:
---

Commit 42e6fa42a38b8208e5aebc22968b62b4a2856e2e in nifi's branch 
refs/heads/master from [~boardm26]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=42e6fa4 ]

NIFI-2630 Allow PublishJMS to send TextMessages
- Added configurable character set encoding for JMS TextMessages
- Improved PublishJMS/ConsumeJMS documentation
- Validate character set in property validator instead of OnScheduled
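
A small sketch of that last point, assuming an illustrative property rather than 
the processor's actual descriptor: attaching the standard character-set validator 
to the PropertyDescriptor rejects invalid names at configuration time instead of 
failing later in @OnScheduled.

{code}
import java.nio.charset.Charset;

import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// Hedged sketch only; name, description and default are illustrative.
public final class CharacterSetProperty {

    public static final PropertyDescriptor CHARACTER_SET = new PropertyDescriptor.Builder()
            .name("character-set")
            .displayName("Character Set")
            .description("Character set used when writing JMS TextMessages.")
            .defaultValue(Charset.defaultCharset().name())
            .addValidator(StandardValidators.CHARACTER_SET_VALIDATOR) // invalid names fail validation up front
            .required(true)
            .build();
}
{code}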


> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessages as well as BytesMessage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383650#comment-16383650
 ] 

ASF GitHub Bot commented on NIFI-2630:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2458
  
Thanks @mosermw! All looks good. Was able to verify functionality of both 
TextMessage and BytesMessage with existing queues and new queues via ActiveMQ. 
+1 merged to master.


> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessages as well as BytesMessage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383649#comment-16383649
 ] 

ASF GitHub Bot commented on NIFI-2630:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2458


> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessages as well as BytesMessage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-03-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383647#comment-16383647
 ] 

ASF subversion and git services commented on NIFI-2630:
---

Commit 74bb341abce78d3e4823f47f894ea6122db5213f in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=74bb341 ]

NIFI-2630: Changed name of queue in unit test to be unique in order to avoid 
getting messages from another test if the other tests fails to properly 
shutdown the connection. This closes #2458.


> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessages as well as BytesMessage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2458: NIFI-2630 Allow PublishJMS to send TextMessages

2018-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2458


---


[GitHub] nifi issue #2458: NIFI-2630 Allow PublishJMS to send TextMessages

2018-03-02 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2458
  
Thanks @mosermw! All looks good. Was able to verify functionality of both 
TextMessage and BytesMessage with existing queues and new queues via ActiveMQ. 
+1 merged to master.


---


[jira] [Updated] (NIFI-4916) Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes

2018-03-02 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4916:
-
Fix Version/s: 1.6.0

> Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes
> -
>
> Key: NIFI-4916
> URL: https://issues.apache.org/jira/browse/NIFI-4916
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: NiFi 1.5.0
>Reporter: Fabio Coutinho
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: ProvenanceCsvFlowFile.png, ProvenanceXlsFlowFile.png
>
>
> When converting a flowfile containing an XLS file to CSV, the newly generated 
> flowfiles do not inherit the attributes from the original one.
> Without the original flowfile's attributes, important information retrieved 
> before conversion (for example, file metadata) cannot be used after the file 
> is converted. I have attached 2 image files showing the attributes before and 
> after conversion. Please note that the input file has a lot of metadata 
> retrieved from Amazon S3 that does not exist on the new flowfile.
> I believe that like most other NiFi processors, the original attributes 
> should be copied to new flowfiles.
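
A hedged sketch of one way to get that behaviour, not the actual 
ConvertExcelToCSVProcessor code: creating each output FlowFile as a child of the 
incoming one via session.create(original) makes the framework copy the parent's 
attributes and record the lineage; convertSheetToCsv() is a hypothetical 
placeholder for the real conversion.

{code}
import java.io.IOException;
import java.io.OutputStream;

import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;

// Sketch only: illustrates attribute inheritance through parent/child FlowFiles.
final class SheetEmitter {

    FlowFile emitSheet(final ProcessSession session, final FlowFile original, final Object sheet) {
        FlowFile csvFlowFile = session.create(original); // child inherits the parent's attributes
        csvFlowFile = session.write(csvFlowFile, (final OutputStream out) -> convertSheetToCsv(sheet, out));
        return csvFlowFile;
    }

    private void convertSheetToCsv(final Object sheet, final OutputStream out) throws IOException {
        // placeholder for the real Excel-to-CSV conversion
    }
}
{code}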



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3599) Add nifi.properties value to globally set the default backpressure size threshold for each connection

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383596#comment-16383596
 ] 

ASF GitHub Bot commented on NIFI-3599:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2497
  
@mosermw I think this is a pretty reasonable change. The only thing that 
gives me pause is putting this information into AboutDTO. To me, it doesn't 
feel like the right place for this kind of information. Admittedly, I don't 
know off the top of my head where would be the best place for it. @mcgilman do 
you have any thoughts on where something like that should go?


> Add nifi.properties value to globally set the default backpressure size 
> threshold for each connection
> -
>
> Key: NIFI-3599
> URL: https://issues.apache.org/jira/browse/NIFI-3599
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Dyer
>Assignee: Michael Moser
>Priority: Major
>
> By default, each new connection added to the workflow canvas has a backpressure 
> threshold of 10,000 objects. While the threshold can be changed at the connection 
> level, it would be convenient to have a global mechanism for setting that value 
> to something other than 10,000. This enhancement would add a property to 
> nifi.properties that allows this threshold to be set globally unless otherwise 
> overridden at the connection level.
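
For illustration, the end result might look like the following nifi.properties 
entries; the property names shown are assumptions for the sake of example rather 
than the names actually chosen in the PR:

    # Hypothetical global defaults applied to newly created connections
    nifi.queue.backpressure.count=20000
    nifi.queue.backpressure.size=2 GB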



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2497: NIFI-3599 Allowed back pressure object count and data size...

2018-03-02 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2497
  
@mosermw I think this is a pretty reasonable change. The only thing that 
gives me pause is putting this information into AboutDTO. To me, it doesn't 
feel like the right place for this kind of information. Admittedly, I don't 
know off the top of my head where would be the best place for it. @mcgilman do 
you have any thoughts on where something like that should go?


---


[jira] [Created] (NIFI-4921) better support for promoting NiFi processor parameters between dev and prod environments

2018-03-02 Thread Boris Tyukin (JIRA)
Boris Tyukin created NIFI-4921:
--

 Summary: better support for promoting NiFi processor parameters 
between dev and prod environments
 Key: NIFI-4921
 URL: https://issues.apache.org/jira/browse/NIFI-4921
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Flow Versioning, SDLC
Affects Versions: 1.6.0
Reporter: Boris Tyukin


Need a better way to promote processor parameters, like "Concurrent tasks", from 
development to production environments. 

Bryan Bende suggested:

I think we may want to consider making the concurrent tasks work
similar to variables, in that we capture the concurrent tasks that the
flow was developed with and would use it initially, but then if you
have modified this value in the target environment it would not
trigger a local change and would be retained across upgrades so that
you don't have to reset it.

 

I would extend this Jira to cover similar parameters that currently have to be 
changed manually when promoting flows from dev/test environments to production 
and that cannot use expression language or variables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (NIFI-4773) Database Fetch processor setup is incorrect

2018-03-02 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reopened NIFI-4773:


Reopening due to a regression introduced by the previous PR. The logic to 
re-populate the column type map is only executed when the Max Value Columns 
property is modified. The map should also be repopulated when the connection 
pool, table name, etc. are modified (any time the processor would be pointing at 
a different set of columns).
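
A minimal sketch of the kind of fix described, assuming the processor caches 
column types in a columnTypeMap field and exposes descriptors named 
DBCP_SERVICE, TABLE_NAME and MAX_VALUE_COLUMN_NAMES (these names are assumptions 
for illustration, not necessarily the actual fields):

    // Sketch of a method inside the fetch processor class
    // (descriptor is org.apache.nifi.components.PropertyDescriptor).
    // Invalidate the cached column types whenever any property that changes which
    // columns are tracked is modified, not only Max Value Columns.
    @Override
    public void onPropertyModified(final PropertyDescriptor descriptor, final String oldValue, final String newValue) {
        if (DBCP_SERVICE.equals(descriptor)
                || TABLE_NAME.equals(descriptor)
                || MAX_VALUE_COLUMN_NAMES.equals(descriptor)) {
            columnTypeMap.clear(); // forces re-population on the next run
        }
    }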

> Database Fetch processor setup is incorrect
> ---
>
> Key: NIFI-4773
> URL: https://issues.apache.org/jira/browse/NIFI-4773
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Wynner
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The QueryDatabaseTable processor attempts to make a database connection 
> during setup (OnScheduled); this can cause issues with the flow when errors 
> occur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2502: NIFI-4165: Added RemoveFlowFilesWithMissingContent.java an...

2018-03-02 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2502
  
@alopresto This is not really for a corrupted flowfile repository but 
rather for a flowfile repo that points to content that no longer exists. So the 
easiest thing would be to create a GenerateFlowFile that generates at least 1 
byte of data, then stop NiFi with data queued up and blow away the content 
repo, or change your nifi.properties to look at a different location for 
the content repo.
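
For the second option, relocating the content repository is a one-line 
nifi.properties change; the path below is just an example:

    # Point the default content repository at a different, empty location
    nifi.content.repository.directory.default=/tmp/empty_content_repository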


---


[jira] [Commented] (NIFI-4165) Update NiFi FlowFile Repository Toolkit to provide ability to remove FlowFiles whose content is missing

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16383582#comment-16383582
 ] 

ASF GitHub Bot commented on NIFI-4165:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2502
  
@alopresto This is not really for a corrupted flowfile repository but 
rather for a flowfile repo that points to content that no longer exists. So the 
easiest thing would be to create a GenerateFlowFile that generates at least 1 
byte of data, then stop NiFi with data queued up and blow away the content 
repo, or change your nifi.properties to look at a different location for 
the content repo.


> Update NiFi FlowFile Repository Toolkit to provide ability to remove 
> FlowFiles whose content is missing
> ---
>
> Key: NIFI-4165
> URL: https://issues.apache.org/jira/browse/NIFI-4165
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> The FlowFile Repo toolkit has the ability to address issues with flowfile 
> repo corruption due to sudden power loss. Another problem that has been known 
> to occur is that, if content goes missing from the content repository for whatever 
> reason (say some process deletes some of the files), then the FlowFile Repo 
> can contain a lot of FlowFiles whose content is missing. This causes a lot of 
> problems with stack traces being dumped to logs and the flow taking a really 
> long time to get back to normal. We should update the toolkit to provide a 
> mechanism for pointing to a FlowFile Repo and Content Repo, then writing out 
> a new FlowFile Repo that removes any FlowFile whose content is missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4916) Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes

2018-03-02 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4916:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes
> -
>
> Key: NIFI-4916
> URL: https://issues.apache.org/jira/browse/NIFI-4916
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: NiFi 1.5.0
>Reporter: Fabio Coutinho
>Assignee: Pierre Villard
>Priority: Major
> Attachments: ProvenanceCsvFlowFile.png, ProvenanceXlsFlowFile.png
>
>
> When converting a flowfile containing an XLS file to CSV, the newly generated 
> flowfiles do not inherit the attributes from the original one.
> Without the original flowfile's attributes, important information retrieved 
> before conversion (for example, file metadata) cannot be used after the file 
> is converted. I have attached 2 image files showing the attributes before and 
> after conversion. Please note that the input file has a lot of metadata 
> retrieved from Amazon S3 that does not exist on the new flowfile.
> I believe that, as with most other NiFi processors, the original attributes 
> should be copied to the new flowfiles.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4916) Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16383579#comment-16383579
 ] 

ASF GitHub Bot commented on NIFI-4916:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2500


> Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes
> -
>
> Key: NIFI-4916
> URL: https://issues.apache.org/jira/browse/NIFI-4916
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: NiFi 1.5.0
>Reporter: Fabio Coutinho
>Assignee: Pierre Villard
>Priority: Major
> Attachments: ProvenanceCsvFlowFile.png, ProvenanceXlsFlowFile.png
>
>
> When converting a flowfile containing an XLS file to CSV, the newly generated 
> flowfiles do not inherit the attributes from the original one.
> Without the original flowfile's attributes, important information retrieved 
> before conversion (for example, file metadata) cannot be used after the file 
> is converted. I have attached 2 image files showing the attributes before and 
> after conversion. Please note that the input file has a lot of metadata 
> retrieved from Amazon S3 that does not exist on the new flowfile.
> I believe that, as with most other NiFi processors, the original attributes 
> should be copied to the new flowfiles.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2500: NIFI-4916 - ConvertExcelToCSVProcessor inherit pare...

2018-03-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2500


---


[jira] [Commented] (NIFI-4916) Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes

2018-03-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16383578#comment-16383578
 ] 

ASF GitHub Bot commented on NIFI-4916:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2500
  
Agreed, looks good to me too. +1 merged to master. Thanks @pvillard31 !


> Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes
> -
>
> Key: NIFI-4916
> URL: https://issues.apache.org/jira/browse/NIFI-4916
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: NiFi 1.5.0
>Reporter: Fabio Coutinho
>Assignee: Pierre Villard
>Priority: Major
> Attachments: ProvenanceCsvFlowFile.png, ProvenanceXlsFlowFile.png
>
>
> When converting a flowfile containing an XLS file to CSV, the newly generated 
> flowfiles do not inherit the attributes from the original one.
> Without the original flowfile's attributes, important information retrieved 
> before conversion (for example, file metadata) cannot be used after the file 
> is converted. I have attached 2 image files showing the attributes before and 
> after conversion. Please note that the input file has a lot of metadata 
> retrieved from Amazon S3 that does not exist on the new flowfile.
> I believe that, as with most other NiFi processors, the original attributes 
> should be copied to the new flowfiles.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2500: NIFI-4916 - ConvertExcelToCSVProcessor inherit parent attr...

2018-03-02 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2500
  
Agreed, looks good to me too. +1 merged to master. Thanks @pvillard31 !


---


[jira] [Commented] (NIFI-4916) Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes

2018-03-02 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16383577#comment-16383577
 ] 

ASF subversion and git services commented on NIFI-4916:
---

Commit c58b02518699c68bf73b77c70e563712db3fe12c in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=c58b025 ]

NIFI-4916 - ConvertExcelToCSVProcessor inherit parent attributes. This closes 
#2500.

Signed-off-by: Mark Payne 


> Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes
> -
>
> Key: NIFI-4916
> URL: https://issues.apache.org/jira/browse/NIFI-4916
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: NiFi 1.5.0
>Reporter: Fabio Coutinho
>Assignee: Pierre Villard
>Priority: Major
> Attachments: ProvenanceCsvFlowFile.png, ProvenanceXlsFlowFile.png
>
>
> When converting a flowfile containing an XLS file to CSV, the newly generated 
> flowfiles do not inherit the attributes from the original one.
> Without the original flowfile's attributes, important information retrieved 
> before conversion (for example, file metadata) cannot be used after the file 
> is converted. I have attached 2 image files showing the attributes before and 
> after conversion. Please note that the input file has a lot of metadata 
> retrieved from Amazon S3 that does not exist on the new flowfile.
> I believe that, as with most other NiFi processors, the original attributes 
> should be copied to the new flowfiles.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)