[jira] [Updated] (NIFI-9225) Some JMS unit tests are failing since nifi 1.14.0

2021-09-18 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-9225:
--
Description: 
{noformat}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.636 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] Tests run: 10, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 21.21 
s <<< FAILURE! - in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] 
validateMessageRedeliveryWhenNotAcked(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)
  Time elapsed: 0.038 s  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[2]> but was:<[1]>
at 
org.apache.nifi.jms.processors.JMSPublisherConsumerTest.validateMessageRedeliveryWhenNotAcked(JMSPublisherConsumerTest.java:458)

[ERROR] 
testMultipleThreads(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)  
Time elapsed: 20.003 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 2 
milliseconds
at 
app//org.apache.nifi.jms.processors.JMSPublisherConsumerTest.testMultipleThreads(JMSPublisherConsumerTest.java:374)
{noformat}
NiFi 1.14.0 fails after renaming JMSPublisherConsumerIT to JMSPublisherConsumerTest so that it is covered by the mvn test command.

NiFi 1.13.2 succeeds:
{noformat}
[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ 
nifi-jms-processors ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.512 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.781 s 
- in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Running org.apache.nifi.jms.processors.ConsumeJMSManualTest
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0 s - 
in org.apache.nifi.jms.processors.ConsumeJMSManualTest
[INFO] Running 
org.apache.nifi.jms.processors.ConnectionFactoryConfigValidatorTest
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.022 s 
- in org.apache.nifi.jms.processors.ConnectionFactoryConfigValidatorTest
[INFO] 
[INFO] Results:
{noformat}
Be aware that JMS is not safe to use in NiFi 1.14.0 (at least judging by the unit tests).

Slack channel: https://apachenifi.slack.com/archives/C0L9S92JY/p1631974074041400

  was:
{noformat}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.636 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] Tests run: 10, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 21.21 
s <<< FAILURE! - in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] 
validateMessageRedeliveryWhenNotAcked(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)
  Time elapsed: 0.038 s  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[2]> but was:<[1]>
at 
org.apache.nifi.jms.processors.JMSPublisherConsumerTest.validateMessageRedeliveryWhenNotAcked(JMSPublisherConsumerTest.java:458)

[ERROR] 
testMultipleThreads(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)  
Time elapsed: 20.003 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 2 
milliseconds
at 
app//org.apache.nifi.jms.processors.JMSPublisherConsumerTest.testMultipleThreads(JMSPublisherConsumerTest.java:374)
{noformat}
NiFi 1.14.0 fails after renaming JMSPublisherConsumerIT to JMSPublisherConsumerTest so that it is covered by the mvn test command.

NiFi 1.13.2 succeeds:
{noformat}
[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ 
nifi-jms-processors ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.512 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.781 s 
- in 

[jira] [Updated] (NIFI-9225) Some JMS unit tests are failing since nifi 1.14.0

2021-09-18 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-9225:
--
Description: 
{noformat}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.636 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] Tests run: 10, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 21.21 
s <<< FAILURE! - in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] 
validateMessageRedeliveryWhenNotAcked(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)
  Time elapsed: 0.038 s  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[2]> but was:<[1]>
at 
org.apache.nifi.jms.processors.JMSPublisherConsumerTest.validateMessageRedeliveryWhenNotAcked(JMSPublisherConsumerTest.java:458)

[ERROR] 
testMultipleThreads(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)  
Time elapsed: 20.003 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 2 
milliseconds
at 
app//org.apache.nifi.jms.processors.JMSPublisherConsumerTest.testMultipleThreads(JMSPublisherConsumerTest.java:374)
{noformat}
NiFi 1.14.0 fails after renaming JMSPublisherConsumerIT to JMSPublisherConsumerTest so that it is covered by the mvn test command.

NiFi 1.13.2 succeeds:
{noformat}
[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ 
nifi-jms-processors ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.512 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.781 s 
- in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Running org.apache.nifi.jms.processors.ConsumeJMSManualTest
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0 s - 
in org.apache.nifi.jms.processors.ConsumeJMSManualTest
[INFO] Running 
org.apache.nifi.jms.processors.ConnectionFactoryConfigValidatorTest
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.022 s 
- in org.apache.nifi.jms.processors.ConnectionFactoryConfigValidatorTest
[INFO] 
[INFO] Results:
{noformat}
Be aware that JMS is not safe to use in NiFi 1.14.0 (at least judging by the unit tests).

  was:

[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.636 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] Tests run: 10, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 21.21 
s <<< FAILURE! - in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] 
validateMessageRedeliveryWhenNotAcked(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)
  Time elapsed: 0.038 s  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[2]> but was:<[1]>
at 
org.apache.nifi.jms.processors.JMSPublisherConsumerTest.validateMessageRedeliveryWhenNotAcked(JMSPublisherConsumerTest.java:458)

[ERROR] 
testMultipleThreads(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)  
Time elapsed: 20.003 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 2 
milliseconds
at 
app//org.apache.nifi.jms.processors.JMSPublisherConsumerTest.testMultipleThreads(JMSPublisherConsumerTest.java:374)
NiFi 1.14.0 fails after renaming JMSPublisherConsumerIT to JMSPublisherConsumerTest so that it is covered by the mvn test command.

NiFi 1.13.2 succeeds:

[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ 
nifi-jms-processors ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.512 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.781 s 
- in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Running org.apache.nifi.jms.processors.ConsumeJMSManualTest
[WARNING] Tests 

[jira] [Updated] (NIFI-9225) Some JMS unit tests are failing since nifi 1.14.0

2021-09-18 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-9225:
--
Description: 
To reproduce the issue, rename or create a copy of {{JMSPublisherConsumerIT}} as {{JMSPublisherConsumerTest}}, then execute from the repository root:

{noformat}
git checkout rel/nifi-1.14.0
mvn test -pl :nifi-jms-processors  
git checkout rel/nifi-1.13.2
mvn test -pl :nifi-jms-processors 
{noformat}

I had to fix the groovy-eclipse-batch version on 1.13.2, but after using a valid version the issue can be reproduced.
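
For context, {{validateMessageRedeliveryWhenNotAcked}} relies on standard JMS redelivery semantics: a message received in CLIENT_ACKNOWLEDGE mode but never acknowledged must be redelivered once the session is closed, which bumps {{JMSXDeliveryCount}} to 2 (the value the failing assertion expects). A minimal standalone sketch of that behaviour against an embedded ActiveMQ broker (broker URL, queue name and timeouts are illustrative, this is not the NiFi test itself):

{noformat}
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RedeliverySketch {
    public static void main(String[] args) throws Exception {
        // Embedded, non-persistent broker; URL is illustrative.
        ConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection connection = cf.createConnection();
        connection.start();

        // Publish one message.
        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = producerSession.createQueue("redelivery-test");
        producerSession.createProducer(queue).send(producerSession.createTextMessage("hello"));

        // Receive it but never acknowledge, then close the session.
        Session firstSession = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Message first = firstSession.createConsumer(queue).receive(2000);
        firstSession.close();   // no acknowledge(): the broker must redeliver

        // The second receive gets the same message, now flagged as redelivered.
        Session secondSession = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Message second = secondSession.createConsumer(queue).receive(2000);
        System.out.println("redelivered=" + second.getJMSRedelivered());                   // expected: true
        System.out.println("deliveryCount=" + second.getIntProperty("JMSXDeliveryCount")); // expected: 2
        second.acknowledge();
        connection.close();
    }
}
{noformat}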

{noformat}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.636 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] Tests run: 10, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 21.21 
s <<< FAILURE! - in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] 
validateMessageRedeliveryWhenNotAcked(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)
  Time elapsed: 0.038 s  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[2]> but was:<[1]>
at 
org.apache.nifi.jms.processors.JMSPublisherConsumerTest.validateMessageRedeliveryWhenNotAcked(JMSPublisherConsumerTest.java:458)

[ERROR] 
testMultipleThreads(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)  
Time elapsed: 20.003 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 2 
milliseconds
at 
app//org.apache.nifi.jms.processors.JMSPublisherConsumerTest.testMultipleThreads(JMSPublisherConsumerTest.java:374)
{noformat}
NiFi 1.14.0 fails after renaming JMSPublisherConsumerIT to JMSPublisherConsumerTest so that it is covered by the mvn test command.

NiFi 1.13.2 succeeds:
{noformat}
[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ 
nifi-jms-processors ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.512 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.781 s 
- in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Running org.apache.nifi.jms.processors.ConsumeJMSManualTest
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0 s - 
in org.apache.nifi.jms.processors.ConsumeJMSManualTest
[INFO] Running 
org.apache.nifi.jms.processors.ConnectionFactoryConfigValidatorTest
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.022 s 
- in org.apache.nifi.jms.processors.ConnectionFactoryConfigValidatorTest
[INFO] 
[INFO] Results:
{noformat}
Be aware that JMS is not safe to use in NiFi 1.14.0 (at least judging by the unit tests).

Slack channel: https://apachenifi.slack.com/archives/C0L9S92JY/p1631974074041400

  was:
{noformat}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.636 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] Tests run: 10, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 21.21 
s <<< FAILURE! - in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] 
validateMessageRedeliveryWhenNotAcked(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)
  Time elapsed: 0.038 s  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[2]> but was:<[1]>
at 
org.apache.nifi.jms.processors.JMSPublisherConsumerTest.validateMessageRedeliveryWhenNotAcked(JMSPublisherConsumerTest.java:458)

[ERROR] 
testMultipleThreads(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)  
Time elapsed: 20.003 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 2 
milliseconds
at 
app//org.apache.nifi.jms.processors.JMSPublisherConsumerTest.testMultipleThreads(JMSPublisherConsumerTest.java:374)
{noformat}
NiFi 1.14.0 fails after renaming JMSPublisherConsumerIT to JMSPublisherConsumerTest so that it is covered by the mvn test command.

NiFi 1.13.2 succeeds:
{noformat}
[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ 
nifi-jms-processors ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---

[jira] [Created] (NIFI-9225) Some JMS unit tests are failing since nifi 1.14.0

2021-09-18 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-9225:
-

 Summary: Some JMS unit tests are failing since nifi 1.14.0
 Key: NIFI-9225
 URL: https://issues.apache.org/jira/browse/NIFI-9225
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.14.0
Reporter: Gardella Juan Pablo



[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.636 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] Tests run: 10, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 21.21 
s <<< FAILURE! - in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[ERROR] 
validateMessageRedeliveryWhenNotAcked(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)
  Time elapsed: 0.038 s  <<< FAILURE!
org.junit.ComparisonFailure: expected:<[2]> but was:<[1]>
at 
org.apache.nifi.jms.processors.JMSPublisherConsumerTest.validateMessageRedeliveryWhenNotAcked(JMSPublisherConsumerTest.java:458)

[ERROR] 
testMultipleThreads(org.apache.nifi.jms.processors.JMSPublisherConsumerTest)  
Time elapsed: 20.003 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 2 
milliseconds
at 
app//org.apache.nifi.jms.processors.JMSPublisherConsumerTest.testMultipleThreads(JMSPublisherConsumerTest.java:374)
NiFi 1.14.0 fails after renaming JMSPublisherConsumerIT to JMSPublisherConsumerTest so that it is covered by the mvn test command.

NiFi 1.13.2 succeeds:

[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ 
nifi-jms-processors ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.512 s 
- in org.apache.nifi.jms.cf.JMSConnectionFactoryProviderTest
[INFO] Running org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.781 s 
- in org.apache.nifi.jms.processors.JMSPublisherConsumerTest
[INFO] Running org.apache.nifi.jms.processors.ConsumeJMSManualTest
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0 s - 
in org.apache.nifi.jms.processors.ConsumeJMSManualTest
[INFO] Running 
org.apache.nifi.jms.processors.ConnectionFactoryConfigValidatorTest
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.022 s 
- in org.apache.nifi.jms.processors.ConnectionFactoryConfigValidatorTest
[INFO] 
[INFO] Results:

Be aware that JMS is not safe to use in NiFi 1.14.0 (at least judging by the unit tests).





[jira] [Updated] (NIFI-5070) java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed

2021-05-01 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-5070:
--

The issue is real, keep it open Github Bot!! :)

> java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed
> --
>
> Key: NIFI-5070
> URL: https://issues.apache.org/jira/browse/NIFI-5070
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Gardella Juan Pablo
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Discovered during NIFI-5049. According to the [ResultSet.next() 
> javadoc|https://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html#next--]:
> _When a call to the {{next}} method returns {{false}}, the cursor is 
> positioned after the last row. Any invocation of a {{ResultSet}} method which 
> requires a current row will result in a {{SQLException}} being thrown. If the 
> result set type is {{TYPE_FORWARD_ONLY}}, it is vendor specified whether 
> their JDBC driver implementation will return {{false}} or throw an 
> {{SQLException}} on a subsequent call to {{next}}._
> With Phoenix Database and QueryDatabaseTable the exception 
> {{java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed}} is raised.
>  
>  
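
A minimal sketch of the JDBC pattern behind this report (the Phoenix JDBC URL and table name are illustrative): once {{next()}} has returned {{false}}, a further call is vendor-specific for a {{TYPE_FORWARD_ONLY}} result set, and Phoenix reacts with the error above:

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ResultSetClosedSketch {
    public static void main(String[] args) throws SQLException {
        // Illustrative Phoenix connection; the pattern applies to any TYPE_FORWARD_ONLY result set.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id FROM example_table")) {
            while (rs.next()) {
                System.out.println(rs.getLong("id"));
            }
            // The cursor is now after the last row. Per the javadoc quoted above, the driver
            // may either return false again or throw; Phoenix throws
            // "ERROR 1101 (XCL01): ResultSet is closed" here.
            rs.next();
        }
    }
}
{noformat}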





[jira] [Updated] (NIFI-8446) PutDatabaseRecord is not mapping the columns properly

2021-04-20 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-8446:
--
Description: 
I have a simple flow which queries a Mongo collection defined as:

{noformat}
clients={_id: ObjectID, id: integer}
{noformat}
I want to store that output in a Postgres table {{clients}} defined as {{_id: varchar(24), id: integer}}.

Exception:
{noformat}
2021-04-20 08:33:44,546 ERROR [Timer-Driven Process Thread-4] 
o.a.n.p.standard.PutDatabaseRecord 
PutDatabaseRecord[id=a7b8d415-ac9b-36f0-bf0e-56296bfa5b30] Failed to put 
Records to database for 
StandardFlowFileRecord[uuid=f17fdeca-ef7e-41ad-ab40-bedba3981b75,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1618918416018-3, container=default, 
section=3], offset=2, 
length=435013],offset=0,name=8e0b55c5-cab3-4eab-bf7f-40c9ef0442d0,size=435013]. 
Routing to failure.: java.lang.NumberFormatException: For input string: 
"6052e6f8ee02d84b96ba6633"
java.lang.NumberFormatException: For input string: "6052e6f8ee02d84b96ba6633"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at 
org.apache.nifi.serialization.record.util.DataTypeUtils.toInteger(DataTypeUtils.java:1594)
at 
org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:200)
at 
org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:153)
at 
org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:149)
at 
org.apache.nifi.processors.standard.PutDatabaseRecord.executeDML(PutDatabaseRecord.java:709)
at 
org.apache.nifi.processors.standard.PutDatabaseRecord.putToDatabase(PutDatabaseRecord.java:841)
at 
org.apache.nifi.processors.standard.PutDatabaseRecord.onTrigger(PutDatabaseRecord.java:487)
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}
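
The root cause visible in that stack trace is a plain integer parse of the 24-character Mongo ObjectID hex string; a one-line sketch of the failing conversion (the value is taken from the log above, the class is illustrative):

{noformat}
public class IdMappingSketch {
    public static void main(String[] args) {
        // ObjectID value copied from the log above. The target column is _id varchar(24),
        // but the value ends up being converted against the integer "id" column instead.
        String objectId = "6052e6f8ee02d84b96ba6633";
        Integer.parseInt(objectId); // throws java.lang.NumberFormatException, as in the stack trace
    }
}
{noformat}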

The flow simply uses {{GetMongoRecord}} -> {{PutDatabaseRecord}}. This flow worked fine up to NiFi 1.12.1 but does not work on 1.13.2: it tries to convert {{_id}} to an integer.

Tickets that touch {{PutDatabaseRecord}} since 1.12.1 are: NIFI-8237, NIFI-8223, NIFI-8172, NIFI-8142, NIFI-8023, NIFI-8146, NIFI-8031.

  was:
I have a simple flow which queries a Mongo collection defined as:

{noformat}
clients={_id: ObjectID, id: integer}
{noformat}
I want to store that output in a Postgres table {{clients}} defined as {{_id: varchar(24), id: integer}}.

The flow simply uses {{GetMongoRecord}} -> {{PutDatabaseRecord}}. This flow worked fine up to NiFi 1.12.1 but does not work on 1.13.2: it tries to convert {{_id}} to an integer.

Tickets that touch {{PutDatabaseRecord}} since 1.12.1 are: NIFI-8237, NIFI-8223, NIFI-8172, NIFI-8142, NIFI-8023, NIFI-8146, NIFI-8031.


> PutDatabaseRecord is not mapping the columns properly
> ---
>
> Key: NIFI-8446
> URL: https://issues.apache.org/jira/browse/NIFI-8446
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.13.2
>Reporter: Gardella Juan Pablo
>Priority: Major
>
> I have a simple flow which queries a Mongo collection defined as:
> {noformat}
> clients={_id: ObjectID, id: integer}
> {noformat}
> I want to store that output in a Postgres table {{clients}} defined as 
> {{_id: varchar(24), id: integer}}.
> Exception:
> {noformat}
> 2021-04-20 08:33:44,546 ERROR [Timer-Driven Process Thread-4] 
> o.a.n.p.standard.PutDatabaseRecord 
> PutDatabaseRecord[id=a7b8d415-ac9b-36f0-bf0e-56296bfa5b30] Failed to put 
> Records to database 

[jira] [Created] (NIFI-8446) PutDatabaseRecord is not mapping the columns properly

2021-04-20 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-8446:
-

 Summary: PutDatabaseRecord is not mapping the columns properly
 Key: NIFI-8446
 URL: https://issues.apache.org/jira/browse/NIFI-8446
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.13.2
Reporter: Gardella Juan Pablo


I have a simple flow which queries a Mongo collection defined as:

{noformat}
clients={_id: ObjectID, id: integer}
{noformat}
I want to store that output in a Postgres table {{clients}} defined as {{_id: varchar(24), id: integer}}.

The flow simply uses {{GetMongoRecord}} -> {{PutDatabaseRecord}}. This flow worked fine up to NiFi 1.12.1 but does not work on 1.13.2: it tries to convert {{_id}} to an integer.

Tickets that touch {{PutDatabaseRecord}} since 1.12.1 are: NIFI-8237, NIFI-8223, NIFI-8172, NIFI-8142, NIFI-8023, NIFI-8146, NIFI-8031.





[jira] [Updated] (NIFI-8024) EncryptedFileSystemRepository EOFException on null ContentClaim

2020-11-18 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-8024:
--
Description: 
The following thread on the NiFi Users list describes issues 
with the EncryptedFileSystemRepository throwing an EOFException when attempting 
to process a null ContentClaim:

[https://lists.apache.org/thread.html/raad9f257ab16dc5b533f89a41d44a05baaf5fe94375ba463c2d8407b%40%3Cusers.nifi.apache.org%3E]

The following log message indicates a null ContentClaim being passed to 
getRecordId():

o.a.n.c.r.c.EncryptedFileSystemRepository Cannot determine record ID from null 
content claim or claim with missing/empty resource claim ID; using 
timestamp-generated ID: nifi-ecr-ts-34280100226468680+0

The standard FileSystemRepository read() method checks for the presence of a 
null ContentClaim and returns an empty ByteArrayInputStream, but the 
EncryptedFileSystemRepository read() method attempts to decrypt the empty 
contents, resulting in the EOFException.

Updating the EncryptedFileSystemRepository read() method to return an empty 
ByteArrayInputStream for a null ContentClaim should resolve the problem.
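
A minimal sketch of that guard (the method shape follows the description above, not the actual repository code; the decrypting read path is a placeholder):

{noformat}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.nifi.controller.repository.claim.ContentClaim;

// Sketch only: mirrors the fix described above, not the real EncryptedFileSystemRepository.
public abstract class EncryptedReadSketch {

    public InputStream read(final ContentClaim claim) throws IOException {
        if (claim == null) {
            // Same behaviour as the standard FileSystemRepository: nothing to decrypt.
            return new ByteArrayInputStream(new byte[0]);
        }
        return decryptingRead(claim);
    }

    // Placeholder for the existing decrypting read path.
    protected abstract InputStream decryptingRead(ContentClaim claim) throws IOException;
}
{noformat}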

  was:
The following thread on the NiFi Users list describes issues with the 
EncryptedFileSystemRepository throwing an EOFException when attempting to 
process a null ContentClaim:

[https://lists.apache.org/thread.html/raad9f257ab16dc5b533f89a41d44a05baaf5fe94375ba463c2d8407b%40%3Cusers.nifi.apache.org%3E]

The following log message indicates a null ContentClaim being passed to 
getRecordId():

o.a.n.c.r.c.EncryptedFileSystemRepository Cannot determine record ID from null 
content claim or claim with missing/empty resource claim ID; using 
timestamp-generated ID: nifi-ecr-ts-34280100226468680+0

The standard FileSystemRepository read() method checks for the presence of a 
null ContentClaim and returns an empty ByteArrayInputStream, but the 
EncryptedFileSystemRepository read() method attempts to decrypt the empty 
contents, resulting in the EOFException.

Updating the EncryptedFileSystemRepository read() method to return an empty 
ByteArrayInputStream for a null ContentClaim should resolve the problem.


> EncryptedFileSystemRepository EOFException on null ContentClaim
> ---
>
> Key: NIFI-8024
> URL: https://issues.apache.org/jira/browse/NIFI-8024
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.12.1
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
>
> The following thread on the NiFi Users list describes issues 
> with the EncryptedFileSystemRepository throwing an EOFException when 
> attempting to process a null ContentClaim:
> [https://lists.apache.org/thread.html/raad9f257ab16dc5b533f89a41d44a05baaf5fe94375ba463c2d8407b%40%3Cusers.nifi.apache.org%3E]
> The following log message indicates a null ContentClaim being passed to 
> getRecordId():
> o.a.n.c.r.c.EncryptedFileSystemRepository Cannot determine record ID from 
> null content claim or claim with missing/empty resource claim ID; using 
> timestamp-generated ID: nifi-ecr-ts-34280100226468680+0
> The standard FileSystemRepository read() method checks for the presence of a 
> null ContentClaim and returns an empty ByteArrayInputStream, but the 
> EncryptedFileSystemRepository read() method attempts to decrypt the empty 
> contents, resulting in the EOFException.
> Updating the EncryptedFileSystemRepository read() method to return an empty 
> ByteArrayInputStream for a null ContentClaim should resolve the problem.





[jira] [Created] (NIFI-7755) RedisStateProvider is not mentioned in docs

2020-08-20 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-7755:
-

 Summary: RedisStateProvider is not mentioned in docs
 Key: NIFI-7755
 URL: https://issues.apache.org/jira/browse/NIFI-7755
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Reporter: Gardella Juan Pablo


I am reviewing the release notes and the amount of effort in this release is impressive. I just saw that the RedisStateProvider class is mentioned at https://issues.apache.org/jira/browse/NIFI-7471. According to the docs, https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#embedded_zookeeper only 





[jira] [Updated] (NIFI-7755) RedisStateProvider is not mentioned in docs

2020-08-20 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7755:
--
Description: RedisStateProvider class mentioned at https://issues.apache.org/jira/browse/NIFI-7471. According to the docs, https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#embedded_zookeeper only   (was: I am reviewing the release notes and the amount of effort in this release is impressive. I just saw that the RedisStateProvider class is mentioned at https://issues.apache.org/jira/browse/NIFI-7471. According to the docs, https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#embedded_zookeeper only )

> RedisStateProvider is not mentioned in docs
> ---
>
> Key: NIFI-7755
> URL: https://issues.apache.org/jira/browse/NIFI-7755
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Gardella Juan Pablo
>Priority: Major
>
> RedisStateProvider class mentioned at 
> https://issues.apache.org/jira/browse/NIFI-7471. According to the docs, 
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#embedded_zookeeper
>  only 





[jira] [Commented] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-07-01 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149325#comment-17149325
 ] 

Gardella Juan Pablo commented on NIFI-7563:
---

[~jfrazee] all your suggestions were included at 
https://github.com/apache/nifi/pull/4378. Sorry about using another PR. I had 
problems with the rebase.

> Optimize the usage of JMS sessions and message producers
> 
>
> Key: NIFI-7563
> URL: https://issues.apache.org/jira/browse/NIFI-7563
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0, 1.8.0, 1.7.1, 1.10.0, 1.9.2, 1.11.4
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>  Time Spent: 27h 10m
>  Remaining Estimate: 45h
>
> Below is a scenario that reproduces the non-optimal usage of JMS resources. 
> Suppose it is required to publish 1 message to the destination {{D}} using 
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
>  The message is a flow file in the processor input queue.
> It is important to know that internally the processor is using 
> [CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
>  to reuse objects and a 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  to be able to use in thread safe manner. For JMS publishers, the default 
> configuration is to cache connections, sessions (only 1) and message 
> producers.
> *Preconditions*
>  # Flowfile has either {{jms_destination}} or {{jms_replyTo}} attribute 
> defined. Due to NIFI-7561, it should contain the word {{queue}} or {{topic}}. 
> Also notice {{jms_destination}} should be ignored, as suggested at NIFI-7564. 
> That will limit the scenario only when {{jms_replyTo}} attribute is defined.
>  # For simplicity, the processor is the first time it processes messages.
> *Scenario*
>  # Processor picks the message. The 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  is created.
>  # Connection {{C1}} and session {{S1}} are created. The 
> [Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] 
> {{M1_S1}} is created and 
> [MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
>  {{MP_S1}} created too. Required to deliver first message at 
> [JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
>  # S1 and C1 are stored in {{CachingConnectionFactory}}. The caching 
> connection factory is created at 
> [AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
>  # An attempt to create a new connection and a new session are requested to 
> the connection factory to create destination defined in the header 
> {{jms_destination}} at 
> [JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
>  Notice the connection {{C1}} is reused although *{{S1}} is not reused* (it 
> is required to check internal logic in CachingConnectionFactory to understand 
> why not). A new session {{S2}} is created and stored in the 
> {{CachingConnectionFactory}} as the new cached session.
>  # Message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is not 
> in the cache, it is physically closed and {{MP_S1}}.
>  # At this point of time, the cached objects are {{C1}}, {{S2}}. *Ideally*, 
> all resources should be reused.
> If this scenario is applied to N consecutive messages, it creates a lot of 
> sessions and message producers. 
> We found this issue by adding an 
> [Interceptor|https://activemq.apache.org/interceptors] to an [Apache ActiveMQ 
> v5.x|http://activemq.apache.org/components/classic/] broker to detect the 
> optimal usage of resources. For example, only one message producer per 
> connection. In the scenario below, N producers will be created for the same 
> connection. Also in a Nifi flow that connects a 
> 

[jira] [Updated] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-07-01 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7563:
--

Latest PR at https://github.com/apache/nifi/pull/4378

> Optimize the usage of JMS sessions and message producers
> 
>
> Key: NIFI-7563
> URL: https://issues.apache.org/jira/browse/NIFI-7563
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0, 1.8.0, 1.7.1, 1.10.0, 1.9.2, 1.11.4
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>  Time Spent: 25h 10m
>  Remaining Estimate: 47h
>
> Below is a scenario that reproduces the non-optimal usage of JMS resources. 
> Suppose it is required to publish 1 message to the destination {{D}} using 
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
>  The message is a flow file in the processor input queue.
> It is important to know that internally the processor is using 
> [CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
>  to reuse objects and a 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  to be able to use in thread safe manner. For JMS publishers, the default 
> configuration is to cache connections, sessions (only 1) and message 
> producers.
> *Preconditions*
>  # Flowfile has either {{jms_destination}} or {{jms_replyTo}} attribute 
> defined. Due to NIFI-7561, it should contain the word {{queue}} or {{topic}}. 
> Also notice {{jms_destination}} should be ignored, as suggested at NIFI-7564. 
> That will limit the scenario only when {{jms_replyTo}} attribute is defined.
>  # For simplicity, the processor is the first time it processes messages.
> *Scenario*
>  # Processor picks the message. The 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  is created.
>  # Connection {{C1}} and session {{S1}} are created. The 
> [Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] 
> {{M1_S1}} is created and 
> [MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
>  {{MP_S1}} created too. Required to deliver first message at 
> [JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
>  # S1 and C1 are stored in {{CachingConnectionFactory}}. The caching 
> connection factory is created at 
> [AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
>  # An attempt to create a new connection and a new session are requested to 
> the connection factory to create destination defined in the header 
> {{jms_destination}} at 
> [JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
>  Notice the connection {{C1}} is reused although *{{S1}} is not reused* (it 
> is required to check internal logic in CachingConnectionFactory to understand 
> why not). A new session {{S2}} is created and stored in the 
> {{CachingConnectionFactory}} as the new cached session.
>  # Message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is not 
> in the cache, it is physically closed and {{MP_S1}}.
>  # At this point of time, the cached objects are {{C1}}, {{S2}}. *Ideally*, 
> all resources should be reused.
> If this scenario is applied to N consecutive messages, it creates a lot of 
> sessions and message producers. 
> We found this issue by adding an 
> [Interceptor|https://activemq.apache.org/interceptors] to an [Apache ActiveMQ 
> v5.x|http://activemq.apache.org/components/classic/] broker to detect the 
> optimal usage of resources. For example, only one message producer per 
> connection. In the scenario below, N producers will be created for the same 
> connection. Also in a Nifi flow that connects a 
> [ConsumeJMS|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.ConsumeJMS/]
>  with a PublishJMS. Notice {{ConsumeJMS}} populates by default 

[jira] [Commented] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-06-25 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144860#comment-17144860
 ] 

Gardella Juan Pablo commented on NIFI-7563:
---

Validated against Apache ActiveMQ 5.x and 
[Solace|https://docs.solace.com/index.html].

> Optimize the usage of JMS sessions and message producers
> 
>
> Key: NIFI-7563
> URL: https://issues.apache.org/jira/browse/NIFI-7563
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0, 1.8.0, 1.7.1, 1.10.0, 1.9.2, 1.11.4
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>  Time Spent: 24h 10m
>  Remaining Estimate: 48h
>
> Below is a scenario that reproduces the non-optimal usage of JMS resources. 
> Suppose it is required to publish 1 message to the destination {{D}} using 
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
>  The message is a flow file in the processor input queue.
> It is important to know that internally the processor is using 
> [CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
>  to reuse objects and a 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  to be able to use in thread safe manner. For JMS publishers, the default 
> configuration is to cache connections, sessions (only 1) and message 
> producers.
> *Preconditions*
>  # Flowfile has either {{jms_destination}} or {{jms_replyTo}} attribute 
> defined. Due to NIFI-7561, it should contain the word {{queue}} or {{topic}}. 
> Also notice {{jms_destination}} should be ignored, as suggested at NIFI-7564. 
> That will limit the scenario only when {{jms_replyTo}} attribute is defined.
>  # For simplicity, the processor is the first time it processes messages.
> *Scenario*
>  # Processor picks the message. The 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  is created.
>  # Connection {{C1}} and session {{S1}} are created. The 
> [Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] 
> {{M1_S1}} is created and 
> [MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
>  {{MP_S1}} created too. Required to deliver first message at 
> [JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
>  # S1 and C1 are stored in {{CachingConnectionFactory}}. The caching 
> connection factory is created at 
> [AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
>  # An attempt to create a new connection and a new session are requested to 
> the connection factory to create destination defined in the header 
> {{jms_destination}} at 
> [JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
>  Notice the connection {{C1}} is reused although *{{S1}} is not reused* (it 
> is required to check internal logic in CachingConnectionFactory to understand 
> why not). A new session {{S2}} is created and stored in the 
> {{CachingConnectionFactory}} as the new cached session.
>  # Message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is not 
> in the cache, it is physically closed and {{MP_S1}}.
>  # At this point of time, the cached objects are {{C1}}, {{S2}}. *Ideally*, 
> all resources should be reused.
> If this scenario is applied to N consecutive messages, it creates a lot of 
> sessions and message producers. 
> We found this issue by adding an 
> [Interceptor|https://activemq.apache.org/interceptors] to an [Apache ActiveMQ 
> v5.x|http://activemq.apache.org/components/classic/] broker to detect the 
> optimal usage of resources. For example, only one message producer per 
> connection. In the scenario below, N producers will be created for the same 
> connection. Also in a Nifi flow that connects a 
> 

[jira] [Commented] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-06-19 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140739#comment-17140739
 ] 

Gardella Juan Pablo commented on NIFI-7563:
---

[~jfrazee] it should not affect any other part of the component. It does not 
modify current logic, it only reuse the same session for that particular 
scenario.

> Optimize the usage of JMS sessions and message producers
> 
>
> Key: NIFI-7563
> URL: https://issues.apache.org/jira/browse/NIFI-7563
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0, 1.8.0, 1.7.1, 1.10.0, 1.9.2, 1.11.4
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>  Time Spent: 24h 10m
>  Remaining Estimate: 48h
>
> Below is a scenario that reproduces the non-optimal usage of JMS resources. 
> Suppose it is required to publish 1 message to the destination {{D}} using 
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
>  The message is a flow file in the processor input queue.
> It is important to know that internally the processor is using 
> [CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
>  to reuse objects and a 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  to be able to use in thread safe manner. For JMS publishers, the default 
> configuration is to cache connections, sessions (only 1) and message 
> producers.
> *Preconditions*
>  # Flowfile has either {{jms_destination}} or {{jms_replyTo}} attribute 
> defined. Due to NIFI-7561, it should contain the word {{queue}} or {{topic}}. 
> Also notice {{jms_destination}} should be ignored, as suggested at NIFI-7564. 
> That will limit the scenario only when {{jms_replyTo}} attribute is defined.
>  # For simplicity, the processor is the first time it processes messages.
> *Scenario*
>  # Processor picks the message. The 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  is created.
>  # Connection {{C1}} and session {{S1}} are created. The 
> [Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] 
> {{M1_S1}} is created and 
> [MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
>  {{MP_S1}} created too. Required to deliver first message at 
> [JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
>  # S1 and C1 are stored in {{CachingConnectionFactory}}. The caching 
> connection factory is created at 
> [AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
>  # An attempt to create a new connection and a new session are requested to 
> the connection factory to create destination defined in the header 
> {{jms_destination}} at 
> [JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
>  Notice the connection {{C1}} is reused although *{{S1}} is not reused* (it 
> is required to check internal logic in CachingConnectionFactory to understand 
> why not). A new session {{S2}} is created and stored in the 
> {{CachingConnectionFactory}} as the new cached session.
>  # Message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is not 
> in the cache, it is physically closed and {{MP_S1}}.
>  # At this point of time, the cached objects are {{C1}}, {{S2}}. *Ideally*, 
> all resources should be reused.
> If this scenario is applied to N consecutive messages, it creates a lot of 
> sessions and message producers. 
> We found this issue by adding an 
> [Interceptor|https://activemq.apache.org/interceptors] to an [Apache ActiveMQ 
> v5.x|http://activemq.apache.org/components/classic/] broker to detect the 
> optimal usage of resources. For example, only one message producer per 
> connection. In the scenario below, N producers will be created for the same 
> connection. Also in a Nifi flow that connects a 
> 

[jira] [Comment Edited] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-06-19 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140739#comment-17140739
 ] 

Gardella Juan Pablo edited comment on NIFI-7563 at 6/19/20, 6:18 PM:
-

[~jfrazee] it should not affect any other part of the component. It does not 
modify current logic, it only reuses the same session for that particular 
scenario.


was (Author: gardellajuanpablo):
[~jfrazee] it should not affect any other part of the component. It does not 
modify current logic, it only reuse the same session for that particular 
scenario.

> Optimize the usage of JMS sessions and message producers
> 
>
> Key: NIFI-7563
> URL: https://issues.apache.org/jira/browse/NIFI-7563
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0, 1.8.0, 1.7.1, 1.10.0, 1.9.2, 1.11.4
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>  Time Spent: 24h 10m
>  Remaining Estimate: 48h
>
> Below is a scenario that reproduces the non-optimal usage of JMS resources. 
> Suppose it is required to publish 1 message to the destination {{D}} using 
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
>  The message is a flow file in the processor input queue.
> It is important to know that internally the processor is using 
> [CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
>  to reuse objects and a 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  to be able to use in thread safe manner. For JMS publishers, the default 
> configuration is to cache connections, sessions (only 1) and message 
> producers.
> *Preconditions*
>  # Flowfile has either {{jms_destination}} or {{jms_replyTo}} attribute 
> defined. Due to NIFI-7561, it should contain the word {{queue}} or {{topic}}. 
> Also notice {{jms_destination}} should be ignored, as suggested at NIFI-7564. 
> That will limit the scenario only when {{jms_replyTo}} attribute is defined.
>  # For simplicity, the processor is the first time it processes messages.
> *Scenario*
>  # Processor picks the message. The 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  is created.
>  # Connection {{C1}} and session {{S1}} are created. The 
> [Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] 
> {{M1_S1}} is created and 
> [MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
>  {{MP_S1}} created too. Required to deliver first message at 
> [JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
>  # S1 and C1 are stored in {{CachingConnectionFactory}}. The caching 
> connection factory is created at 
> [AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
>  # An attempt to create a new connection and a new session are requested to 
> the connection factory to create destination defined in the header 
> {{jms_destination}} at 
> [JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
>  Notice the connection {{C1}} is reused although *{{S1}} is not reused* (it 
> is required to check internal logic in CachingConnectionFactory to understand 
> why not). A new session {{S2}} is created and stored in the 
> {{CachingConnectionFactory}} as the new cached session.
>  # Message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is not 
> in the cache, it is physically closed and {{MP_S1}}.
>  # At this point of time, the cached objects are {{C1}}, {{S2}}. *Ideally*, 
> all resources should be reused.
> If this scenario is applied to N consecutive messages, it creates a lot of 
> sessions and message producers. 
> We found this issue by adding an 
> [Interceptor|https://activemq.apache.org/interceptors] to an [Apache ActiveMQ 
> v5.x|http://activemq.apache.org/components/classic/] broker to detect the 
> optimal usage of 

[jira] [Updated] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7563:
--
Status: Patch Available  (was: In Progress)

> Optimize the usage of JMS sessions and message producers
> 
>
> Key: NIFI-7563
> URL: https://issues.apache.org/jira/browse/NIFI-7563
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4, 1.9.2, 1.10.0, 1.7.1, 1.8.0, 1.6.0
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Below is a scenario that reproduces the non-optimal usage of JMS resources. 
> Suppose it is required to publish 1 message to the destination {{D}} using 
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
>  The message is a flow file in the processor input queue.
> It is important to know that internally the processor is using 
> [CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
>  to reuse objects and a 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  to be able to use in thread safe manner. For JMS publishers, the default 
> configuration is to cache connections, sessions (only 1) and message 
> producers.
> *Preconditions*
>  # Flowfile has either {{jms_destination}} or {{jms_replyTo}} attribute 
> defined. Due to NIFI-7561, it should contain the word {{queue}} or {{topic}}. 
> Also notice {{jms_destination}} should be ignored, as suggested at NIFI-7564. 
> That will limit the scenario only when {{jms_replyTo}} attribute is defined.
>  # For simplicity, the processor is the first time it processes messages.
> *Scenario*
>  # Processor picks the message. The 
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
>  is created.
>  # Connection {{C1}} and session {{S1}} are created. The 
> [Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] 
> {{M1_S1}} is created and 
> [MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
>  {{MP_S1}} created too. Required to deliver first message at 
> [JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
>  # S1 and C1 are stored in {{CachingConnectionFactory}}. The caching 
> connection factory is created at 
> [AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
>  # An attempt to create a new connection and a new session are requested to 
> the connection factory to create destination defined in the header 
> {{jms_destination}} at 
> [JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
>  Notice the connection {{C1}} is reused although *{{S1}} is not reused* (it 
> is required to check internal logic in CachingConnectionFactory to understand 
> why not). A new session {{S2}} is created and stored in the 
> {{CachingConnectionFactory}} as the new cached session.
>  # Message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is not 
> in the cache, it is physically closed and {{MP_S1}}.
>  # At this point of time, the cached objects are {{C1}}, {{S2}}. *Ideally*, 
> all resources should be reused.
> The scenario if it is applied to N consecutive messages create a lot of 
> sessions and message producers. 
> We found this issue by adding an 
> [Interceptor|https://activemq.apache.org/interceptors] to an [Apache ActiveMQ 
> v5.x|http://activemq.apache.org/components/classic/] broker to detect the 
> optimal usage of resources. For example, only one message producer per 
> connection. In below scenario we will be created N producers for the same 
> connection. Also in a Nifi flow that connects a 
> [ConsumeJMS|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.ConsumeJMS/]
>  with a PublishJMS. Notice {{ConsumeJMS}} populates by default 
> 

[jira] [Updated] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7563:
--
Description: 
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.
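
For reference, the Spring defaults described above can be sketched as follows.
This is only an illustrative configuration (it is not the NiFi code itself, and
the broker URL is a placeholder):

{noformat}
import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;

public class CachingFactoryDefaultsSketch {
    public static void main(String[] args) {
        // Any JMS ConnectionFactory can be wrapped; ActiveMQ is only an example here.
        ConnectionFactory target = new ActiveMQConnectionFactory("tcp://localhost:61616");

        CachingConnectionFactory caching = new CachingConnectionFactory(target);
        caching.setSessionCacheSize(1);   // default: a single cached Session per Connection
        caching.setCacheProducers(true);  // default: MessageProducers are cached per Session
        caching.setCacheConsumers(true);  // also the default, although a publisher only needs producers

        // JmsTemplate (and the NiFi worker) would use the wrapping factory.
        // caching.destroy() physically closes the shared connection and all cached resources.
    }
}
{noformat}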

*Preconditions*
 # The flow file has either the {{jms_destination}} or the {{jms_replyTo}}
attribute defined. Due to NIFI-7561, it must contain the word {{queue}} or
{{topic}}. Also notice that {{jms_destination}} should be ignored, as suggested
in NIFI-7564; that would limit the scenario to the case where only the
{{jms_replyTo}} attribute is defined.
 # For simplicity, this is the first time the processor processes messages.

*Scenario*
 # The processor picks up the message. The
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
is created.
 # Connection {{C1}} and session {{S1}} are created. The
[Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] {{M1_S1}}
is created and the
[MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
{{MP_S1}} is created too; they are required to deliver the first message at
[JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
 # {{S1}} and {{C1}} are stored in the {{CachingConnectionFactory}}. The caching
connection factory is created at
[AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
 # A new connection and a new session are requested from the connection factory
to create the destination defined in the {{jms_destination}} header at
[JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
Notice that connection {{C1}} is reused although *{{S1}} is not reused*
(checking the internal logic of CachingConnectionFactory is required to
understand why). A new session {{S2}} is created and stored in the
{{CachingConnectionFactory}} as the new cached session.
 # The message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is
not in the cache, both {{S1}} and {{MP_S1}} are physically closed.
 # At this point, the cached objects are {{C1}} and {{S2}}. *Ideally*, all
resources should be reused.

If this scenario is applied to N consecutive messages, it creates many sessions
and message producers, as sketched below.
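
As a sketch of the optimization direction only (this is not the actual NiFi
patch), the destination lookup and the send can be performed against the same
session obtained from the caching factory, so no extra session or producer is
created per message:

{noformat}
import javax.jms.BytesMessage;
import javax.jms.Destination;

import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.ProducerCallback;

public class SingleSessionPublishSketch {

    // jmsTemplate is assumed to wrap the CachingConnectionFactory shown earlier.
    static void publish(JmsTemplate jmsTemplate, String destinationName,
                        String replyToName, byte[] payload) {
        ProducerCallback<Object> sendWithinOneSession = (session, producer) -> {
            // The same cached session resolves the reply-to destination...
            Destination replyTo = session.createQueue(replyToName);
            BytesMessage message = session.createBytesMessage();
            message.writeBytes(payload);
            message.setJMSReplyTo(replyTo);
            // ...and the cached producer sends the message, so nothing new is
            // created per message.
            producer.send(message);
            return null;
        };
        jmsTemplate.execute(destinationName, sendWithinOneSession);
    }
}
{noformat}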

We found this issue by adding an
[Interceptor|https://activemq.apache.org/interceptors] to an [Apache ActiveMQ
v5.x|http://activemq.apache.org/components/classic/] broker to check for the
optimal usage of resources, for example, only one message producer per
connection. In this scenario, N producers are created for the same connection.
The same happens in a NiFi flow that connects a
[ConsumeJMS|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.ConsumeJMS/]
with a PublishJMS: notice that {{ConsumeJMS}} populates the {{jms_destination}}
flow file attribute by default, which, if not removed, is processed by the
{{PublishJMS}} processor (once NIFI-7564 is solved, this should no longer
happen).
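
The broker-side check was along these lines; the plugin below is only a hedged
sketch of an ActiveMQ 5.x {{BrokerFilter}} that counts producers per connection
(not the exact interceptor we used), which makes the N-producers-per-connection
pattern visible:

{noformat}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.BrokerPlugin;
import org.apache.activemq.broker.ConnectionContext;
import org.apache.activemq.command.ProducerInfo;

public class ProducerCountingPlugin implements BrokerPlugin {

    @Override
    public Broker installPlugin(Broker next) {
        return new BrokerFilter(next) {
            private final ConcurrentMap<String, AtomicInteger> producersPerConnection =
                    new ConcurrentHashMap<>();

            @Override
            public void addProducer(ConnectionContext context, ProducerInfo info) throws Exception {
                String connectionId = context.getConnectionId().getValue();
                int count = producersPerConnection
                        .computeIfAbsent(connectionId, id -> new AtomicInteger())
                        .incrementAndGet();
                if (count > 1) {
                    // More than one producer on the same connection suggests sessions and
                    // producers are not being reused as expected.
                    System.out.println("Connection " + connectionId + " has " + count + " producers");
                }
                super.addProducer(context, info);
            }
        };
    }
}
{noformat}

Such a plugin would be registered under the {{}} element of the
broker's {{activemq.xml}}.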

  was:
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor is using 

[jira] [Updated] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7563:
--
Description: 
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # The flow file has either the {{jms_destination}} or the {{jms_replyTo}}
attribute defined. Due to NIFI-7561, it must contain the word {{queue}} or
{{topic}}. Also notice that {{jms_destination}} should be ignored, as suggested
in NIFI-7564; that would limit the scenario to the case where only the
{{jms_replyTo}} attribute is defined.
 # For simplicity, this is the first time the processor processes messages.

*Scenario*
 # The processor picks up the message. The
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
is created.
 # Connection {{C1}} and session {{S1}} are created. The
[Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] {{M1_S1}}
is created and the
[MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
{{MP_S1}} is created too; they are required to deliver the first message at
[JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
 # {{S1}} and {{C1}} are stored in the {{CachingConnectionFactory}}. The caching
connection factory is created at
[AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
 # A new connection and a new session are requested from the connection factory
to create the destination defined in the {{jms_destination}} header at
[JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
Notice that connection {{C1}} is reused although *{{S1}} is not reused*
(checking the internal logic of CachingConnectionFactory is required to
understand why). A new session {{S2}} is created and stored in the
{{CachingConnectionFactory}} as the new cached session.
 # The message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is
not in the cache, both {{S1}} and {{MP_S1}} are physically closed.
 # At this point, the cached objects are {{C1}} and {{S2}}. *Ideally*, all
resources should be reused.

If this scenario is applied to N consecutive messages, it creates many sessions
and message producers.

We found this issue by adding an
[Interceptor|https://activemq.apache.org/interceptors] to an [Apache ActiveMQ
v5.x|http://activemq.apache.org/components/classic/] broker to check for the
optimal usage of resources, for example, only one message producer per
connection. In this scenario, N producers are created for the same connection.
The same happens in a NiFi flow that connects a
[ConsumeJMS|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.ConsumeJMS/]
with a PublishJMS: notice that {{ConsumeJMS}} populates the {{jms_destination}}
flow file attribute by default.

  was:
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor is using 
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
 to reuse objects and a 

[jira] [Updated] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7563:
--
Description: 
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # The flow file has either the {{jms_destination}} or the {{jms_replyTo}}
attribute defined. Due to NIFI-7561, it must contain the word {{queue}} or
{{topic}}. Also notice that {{jms_destination}} should be ignored, as suggested
in NIFI-7564; that would limit the scenario to the case where only the
{{jms_replyTo}} attribute is defined.
 # For simplicity, this is the first time the processor processes messages.

*Scenario*
 # The processor picks up the message. The
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
is created.
 # Connection {{C1}} and session {{S1}} are created. The
[Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] {{M1_S1}}
is created and the
[MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
{{MP_S1}} is created too; they are required to deliver the first message at
[JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
 # {{S1}} and {{C1}} are stored in the {{CachingConnectionFactory}}. The caching
connection factory is created at
[AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
 # A new connection and a new session are requested from the connection factory
to create the destination defined in the {{jms_destination}} header at
[JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
Notice that connection {{C1}} is reused although *{{S1}} is not reused*
(checking the internal logic of CachingConnectionFactory is required to
understand why). A new session {{S2}} is created and stored in the
{{CachingConnectionFactory}} as the new cached session.
 # The message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is
not in the cache, both {{S1}} and {{MP_S1}} are physically closed.
 # At this point, the cached objects are {{C1}} and {{S2}}. *Ideally*, all
resources should be reused.

If this scenario is applied to N consecutive messages, it creates many sessions
and message producers. We found this issue by adding an
[Interceptor|https://activemq.apache.org/interceptors] to an [Apache ActiveMQ
v5.x|http://activemq.apache.org/components/classic/] broker to check for the
optimal usage of resources, for example, only one message producer per
connection. In this scenario, N producers are created for the same connection.

  was:
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to

[jira] [Updated] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7563:
--
Description: 
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # The flow file has either the {{jms_destination}} or the {{jms_replyTo}}
attribute defined. Due to NIFI-7561, it must contain the word {{queue}} or
{{topic}}. Also notice that {{jms_destination}} should be ignored, as suggested
in NIFI-7564.
 # For simplicity, this is the first time the processor processes messages.

*Scenario*
 # The processor picks up the message. The
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
is created.
 # Connection {{C1}} and session {{S1}} are created. The
[Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] {{M1_S1}}
is created and the
[MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
{{MP_S1}} is created too; they are required to deliver the first message at
[JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
 # {{S1}} and {{C1}} are stored in the {{CachingConnectionFactory}}. The caching
connection factory is created at
[AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
 # A new connection and a new session are requested from the connection factory
to create the destination defined in the {{jms_destination}} header at
[JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
Notice that connection {{C1}} is reused although *{{S1}} is not reused*
(checking the internal logic of CachingConnectionFactory is required to
understand why). A new session {{S2}} is created and stored in the
{{CachingConnectionFactory}} as the new cached session.
 # The message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is
not in the cache, both {{S1}} and {{MP_S1}} are physically closed.
 # At this point, the cached objects are {{C1}} and {{S2}}. *Ideally*, all
resources should be reused.

If this scenario is applied to N consecutive messages, it creates many sessions
and message producers. We found this issue by adding an
[Interceptor|https://activemq.apache.org/interceptors] to an [Apache ActiveMQ
v5.x|http://activemq.apache.org/components/classic/] broker to check for the
optimal usage of resources, for example, only one message producer per
connection. In this scenario, N producers are created for the same connection.

  was:
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # 

[jira] [Updated] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7563:
--
Description: 
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # The flow file has either the {{jms_destination}} or the {{jms_replyTo}}
attribute defined. Due to NIFI-7561, it must contain the word {{queue}} or
{{topic}}. Also notice that {{jms_destination}} should be ignored, as suggested
in NIFI-7564.
 # For simplicity, this is the first time the processor processes messages.

*Scenario*
 # The processor picks up the message. The
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
is created.
 # Connection {{C1}} and session {{S1}} are created. The
[Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] {{M1_S1}}
is created and the
[MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
{{MP_S1}} is created too; they are required to deliver the first message at
[JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
 # {{S1}} and {{C1}} are stored in the {{CachingConnectionFactory}}. The caching
connection factory is created at
[AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
 # A new connection and a new session are requested from the connection factory
to create the destination defined in the {{jms_destination}} header at
[JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
Notice that connection {{C1}} is reused although *{{S1}} is not reused*
(checking the internal logic of CachingConnectionFactory is required to
understand why). A new session {{S2}} is created and stored in the
{{CachingConnectionFactory}} as the new cached session.
 # The message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is
not in the cache, both {{S1}} and {{MP_S1}} are physically closed.
 # At this point, the cached objects are {{C1}} and {{S2}}. *Ideally*, all
resources should be reused.

If this scenario is applied to N consecutive messages, it creates many sessions
and message producers. We found this issue by adding an
[Interceptor|https://activemq.apache.org/interceptors] to an Apache ActiveMQ
broker to check for the optimal usage of resources, for example, only one
message producer per connection. In this scenario, N producers are created for
the same connection.

  was:
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # Flowfile has either {{jms_destination}} or {{jms_replyTo}} 

[jira] [Updated] (NIFI-7563) Optimize the usage of JMS sessions and message producers

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7563:
--
Summary: Optimize the usage of JMS sessions and message producers  (was: 
Optimize the usage of JMS sessions)

> Optimize the usage of JMS sessions and message producers
> 
>
> Key: NIFI-7563
> URL: https://issues.apache.org/jira/browse/NIFI-7563
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0, 1.8.0, 1.7.1, 1.10.0, 1.9.2, 1.11.4
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>
> Below is a scenario that reproduces the non-optimal usage of JMS resources.
> Suppose it is required to publish one message to the destination {{D}} using
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
> The message is a flow file in the processor input queue.
> It is important to know that internally the processor uses a
> [CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
> to reuse objects and a
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
> so it can be used in a thread-safe manner. For JMS publishers, the default
> configuration is to cache connections, sessions (only 1) and message
> producers.
> *Preconditions*
>  # The flow file has either the {{jms_destination}} or the {{jms_replyTo}}
> attribute defined. Due to NIFI-7561, it must contain the word {{queue}} or
> {{topic}}. Also notice that {{jms_destination}} should be ignored, as
> suggested in NIFI-7564.
>  # For simplicity, this is the first time the processor processes messages.
> *Scenario*
>  # The processor picks up the message. The
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
> is created.
>  # Connection {{C1}} and session {{S1}} are created. The
> [Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html]
> {{M1_S1}} is created and the
> [MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
> {{MP_S1}} is created too; they are required to deliver the first message at
> [JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
>  # {{S1}} and {{C1}} are stored in the {{CachingConnectionFactory}}. The
> caching connection factory is created at
> [AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
>  # A new connection and a new session are requested from the connection
> factory to create the destination defined in the {{jms_destination}} header at
> [JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
> Notice that connection {{C1}} is reused although *{{S1}} is not reused*
> (checking the internal logic of CachingConnectionFactory is required to
> understand why). A new session {{S2}} is created and stored in the
> {{CachingConnectionFactory}} as the new cached session.
>  # The message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is
> not in the cache, both {{S1}} and {{MP_S1}} are physically closed.
>  # At this point, the cached objects are {{C1}} and {{S2}}. *Ideally*, all
> resources should be reused.
> If this scenario is applied to N consecutive messages, it creates many
> sessions and message producers. We found this issue by adding an
> [Interceptor|https://activemq.apache.org/interceptors] to check for the
> optimal usage of resources, for example, only one message producer per
> connection. In this scenario, N producers are created for the same connection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7561) Allow using replyTo with destination names that does not contain "queue" or "topic"

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7561:
--
Description: 
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
does not allow using the [Request-Reply
pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] via
[Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]
for destinations whose name does not contain the word {{queue}} or {{topic}}.

This limitation is tied to the [current
implementation|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131],
which does not allow specifying the reply-to destination type in the flow file
attributes.
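
A minimal sketch of what resolving the reply-to destination from an explicit
type could look like; the value passed as {{replyToType}} (e.g. from a
hypothetical {{jms_replyToDestinationType}} flow file attribute) is only an
assumption for illustration, not an existing NiFi attribute:

{noformat}
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Session;

public class ReplyToResolverSketch {

    // Resolves the reply-to destination from an explicit type instead of guessing
    // from whether the destination name contains "queue" or "topic".
    static Destination resolveReplyTo(Session session, String replyToName, String replyToType)
            throws JMSException {
        if ("topic".equalsIgnoreCase(replyToType)) {
            return session.createTopic(replyToName);
        }
        // Default to a queue when the (hypothetical) type attribute is absent or unknown.
        return session.createQueue(replyToName);
    }
}
{noformat}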


  was:
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
does not allow using the [Request-Reply
pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] via
[Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]
for destinations whose name does not contain the word {{queue}} or {{topic}}.

This limitation is tied to the current implementation, which does not allow
specifying the reply-to destination type in the flow file attributes.



> Allow using replyTo with destination names that does not contain "queue" or 
> "topic"
> ---
>
> Key: NIFI-7561
> URL: https://issues.apache.org/jira/browse/NIFI-7561
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
> does not allow using the [Request-Reply
> pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] via
> [Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]
> for destinations whose name does not contain the word {{queue}} or {{topic}}.
> This limitation is tied to the [current
> implementation|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131],
> which does not allow specifying the reply-to destination type in the flow file
> attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7564) Do not call Message setDestination on PublishJMS

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7564:
--
Issue Type: Improvement  (was: Bug)

> Do not call Message setDestination on PublishJMS
> 
>
> Key: NIFI-7564
> URL: https://issues.apache.org/jira/browse/NIFI-7564
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Gardella Juan Pablo
>Priority: Minor
>
> According to 
> [Message#setJMSDestination|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSDestination-javax.jms.Destination-]
>  documentation, this method should not be called by clients, only by the 
> providers:
> ??This method is for use by JMS providers only to set this field when a 
> message is sent. This message cannot be used by clients to configure the 
> destination of the message. This method is public to allow a JMS provider to 
> set this field when sending a message whose implementation is not its own.??
> Notice that with a basic flow ConsumeJMS(destination=A) to
> PublishJMS(destination=B), the destination in the message will be set to A,
> even though that is not the actual destination.
> It is better not to set that value in PublishJMS, as it is actually handled
> by the JMS provider driver.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7563) Optimize the usage of JMS sessions

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7563:
--
Description: 
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # The flow file has either the {{jms_destination}} or the {{jms_replyTo}}
attribute defined. Due to NIFI-7561, it must contain the word {{queue}} or
{{topic}}. Also notice that {{jms_destination}} should be ignored, as suggested
in NIFI-7564.
 # For simplicity, this is the first time the processor processes messages.

*Scenario*
 # The processor picks up the message. The
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
is created.
 # Connection {{C1}} and session {{S1}} are created. The
[Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] {{M1_S1}}
is created and the
[MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
{{MP_S1}} is created too; they are required to deliver the first message at
[JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
 # {{S1}} and {{C1}} are stored in the {{CachingConnectionFactory}}. The caching
connection factory is created at
[AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
 # A new connection and a new session are requested from the connection factory
to create the destination defined in the {{jms_destination}} header at
[JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
Notice that connection {{C1}} is reused although *{{S1}} is not reused*
(checking the internal logic of CachingConnectionFactory is required to
understand why). A new session {{S2}} is created and stored in the
{{CachingConnectionFactory}} as the new cached session.
 # The message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is
not in the cache, both {{S1}} and {{MP_S1}} are physically closed.
 # At this point, the cached objects are {{C1}} and {{S2}}. *Ideally*, all
resources should be reused.

If this scenario is applied to N consecutive messages, it creates many sessions
and message producers. We found this issue by adding an
[Interceptor|https://activemq.apache.org/interceptors] to check for the optimal
usage of resources, for example, only one message producer per connection. In
this scenario, N producers are created for the same connection.

  was:
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # Flowfile has either {{jms_destination}} or {{jms_replyTo}} attribute 
defined. Due to 

[jira] [Created] (NIFI-7564) Do not call Message setDestination on PublishJMS

2020-06-19 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-7564:
-

 Summary: Do not call Message setDestination on PublishJMS
 Key: NIFI-7564
 URL: https://issues.apache.org/jira/browse/NIFI-7564
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Gardella Juan Pablo


According to 
[Message#setJMSDestination|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSDestination-javax.jms.Destination-]
 documentation, this method should not be called by clients, only by the 
providers:

??This method is for use by JMS providers only to set this field when a message 
is sent. This message cannot be used by clients to configure the destination of 
the message. This method is public to allow a JMS provider to set this field 
when sending a message whose implementation is not its own.??

Notice that with a basic flow ConsumeJMS(destination=A) to
PublishJMS(destination=B), the destination in the message will be set to A,
even though that is not the actual destination.
It is better not to set that value in PublishJMS, as it is actually handled by
the JMS provider driver.
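
An illustrative sketch of the intended usage per the JMS API: the publisher
hands the destination to the {{MessageProducer}} and never calls
{{setJMSDestination}}; the provider fills in {{JMSDestination}} itself during
{{send}}:

{noformat}
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class PublishWithoutSetDestinationSketch {

    static void publish(Session session, Destination destination, String text) throws JMSException {
        Message message = session.createTextMessage(text);
        // Do NOT call message.setJMSDestination(destination); that field is owned by the provider.
        MessageProducer producer = session.createProducer(destination);
        producer.send(message);
        // The provider sets JMSDestination during send, so this reflects the real destination.
        System.out.println("Sent to " + message.getJMSDestination());
        producer.close();
    }
}
{noformat}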



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7563) Optimize the usage of JMS sessions

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7563:
--
Description: 
Below is a scenario that reproduces the non-optimal usage of JMS resources.
Suppose it is required to publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # The flow file has either the {{jms_destination}} or the {{jms_replyTo}}
attribute defined. Due to NIFI-7561, it must contain the word {{queue}} or
{{topic}}.
 # For simplicity, this is the first time the processor processes messages.

*Scenario*
 # The processor picks up the message. The
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
is created.
 # Connection {{C1}} and session {{S1}} are created. The
[Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] {{M1_S1}}
is created and the
[MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
{{MP_S1}} is created too; they are required to deliver the first message at
[JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
 # {{S1}} and {{C1}} are stored in the {{CachingConnectionFactory}}. The caching
connection factory is created at
[AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
 # A new connection and a new session are requested from the connection factory
to create the destination defined in the {{jms_destination}} header at
[JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
Notice that connection {{C1}} is reused although *{{S1}} is not reused*
(checking the internal logic of CachingConnectionFactory is required to
understand why). A new session {{S2}} is created and stored in the
{{CachingConnectionFactory}} as the new cached session.
 # The message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is
not in the cache, both {{S1}} and {{MP_S1}} are physically closed.
 # At this point, the cached objects are {{C1}} and {{S2}}. *Ideally*, all
resources should be reused.

If this scenario is applied to N consecutive messages, it creates many sessions
and message producers. We found this issue by adding an
[Interceptor|https://activemq.apache.org/interceptors] to check for the optimal
usage of resources, for example, only one message producer per connection. In
this scenario, N producers are created for the same connection.

  was:
[MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
 objects are created by 
[Session|https://docs.oracle.com/javaee/7/api/javax/jms/Session.html].

Below is a scenario that reproduces the problem. Suppose it is required to
publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # Flowfile 

[jira] [Commented] (NIFI-7563) Optimize the usage of JMS sessions

2020-06-19 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140484#comment-17140484
 ] 

Gardella Juan Pablo commented on NIFI-7563:
---

I have the patch; I will provide it soon.

> Optimize the usage of JMS sessions
> --
>
> Key: NIFI-7563
> URL: https://issues.apache.org/jira/browse/NIFI-7563
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0, 1.8.0, 1.7.1, 1.10.0, 1.9.2, 1.11.4
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>
> [MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
>  objects are created by 
> [Session|https://docs.oracle.com/javaee/7/api/javax/jms/Session.html].
> Below is a scenario that reproduces the problem. Suppose it is required to
> publish one message to the destination {{D}} using
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
> The message is a flow file in the processor input queue.
> It is important to know that internally the processor uses a
> [CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
> to reuse objects and a
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
> so it can be used in a thread-safe manner. For JMS publishers, the default
> configuration is to cache connections, sessions (only 1) and message
> producers.
> *Preconditions*
>  # The flow file has either the {{jms_destination}} or the {{jms_replyTo}}
> attribute defined. Due to NIFI-7561, it must contain the word {{queue}} or
> {{topic}}.
>  # For simplicity, this is the first time the processor processes messages.
> *Scenario*
>  # The processor picks up the message. The
> [worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
> is created.
>  # Connection {{C1}} and session {{S1}} are created. The
> [Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html]
> {{M1_S1}} is created and the
> [MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
> {{MP_S1}} is created too; they are required to deliver the first message at
> [JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
>  # {{S1}} and {{C1}} are stored in the {{CachingConnectionFactory}}. The
> caching connection factory is created at
> [AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
>  # A new connection and a new session are requested from the connection
> factory to create the destination defined in the {{jms_destination}} header at
> [JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
> Notice that connection {{C1}} is reused although *{{S1}} is not reused*
> (checking the internal logic of CachingConnectionFactory is required to
> understand why). A new session {{S2}} is created and stored in the
> {{CachingConnectionFactory}} as the new cached session.
>  # The message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is
> not in the cache, both {{S1}} and {{MP_S1}} are physically closed.
>  # At this point, the cached objects are {{C1}} and {{S2}}. *Ideally*, all
> resources should be reused.
> If this scenario is applied to N consecutive messages, it creates many
> sessions and message producers. We found this issue by adding an
> [Interceptor|https://activemq.apache.org/interceptors] to check for the
> optimal usage of resources, for example, only one message producer per
> connection. In this scenario, N producers are created for the same connection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7563) Optimize the usage of JMS sessions

2020-06-19 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-7563:
-

 Summary: Optimize the usage of JMS sessions
 Key: NIFI-7563
 URL: https://issues.apache.org/jira/browse/NIFI-7563
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.11.4, 1.9.2, 1.10.0, 1.7.1, 1.8.0, 1.6.0
Reporter: Gardella Juan Pablo
Assignee: Gardella Juan Pablo


[MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
 objects are created by 
[Session|https://docs.oracle.com/javaee/7/api/javax/jms/Session.html].

Below is a scenario that reproduces the problem. Suppose it is required to
publish one message to the destination {{D}} using
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html].
The message is a flow file in the processor input queue.

It is important to know that internally the processor uses a
[CachingConnectionFactory|https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/connection/CachingConnectionFactory.java]
to reuse objects and a
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
so it can be used in a thread-safe manner. For JMS publishers, the default
configuration is to cache connections, sessions (only 1) and message producers.

*Preconditions*
 # The flow file has either the {{jms_destination}} or the {{jms_replyTo}}
attribute defined. Due to NIFI-7561, it must contain the word {{queue}} or
{{topic}}.
 # For simplicity, this is the first time the processor processes messages.

*Scenario*
 # The processor picks up the message. The
[worker|https://github.com/apache/nifi/blob/a1b245e051245bb6c65e7b5ffc6ee982669b7ab7/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L180]
is created.
 # Connection {{C1}} and session {{S1}} are created. The
[Message|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html] {{M1_S1}}
is created and the
[MessageProducer|https://docs.oracle.com/javaee/7/api/javax/jms/MessageProducer.html]
{{MP_S1}} is created too; they are required to deliver the first message at
[JMSPublisher#publish|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L65].
 # {{S1}} and {{C1}} are stored in the {{CachingConnectionFactory}}. The caching
connection factory is created at
[AbstractJMSProcessor.java#L208|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/AbstractJMSProcessor.java#L208].
 # A new connection and a new session are requested from the connection factory
to create the destination defined in the {{jms_destination}} header at
[JMSPublisher.java#L131|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSPublisher.java#L131].
Notice that connection {{C1}} is reused although *{{S1}} is not reused*
(checking the internal logic of CachingConnectionFactory is required to
understand why). A new session {{S2}} is created and stored in the
{{CachingConnectionFactory}} as the new cached session.
 # The message is published and {{S1}} and {{MP_S1}} are closed. As {{S1}} is
not in the cache, both {{S1}} and {{MP_S1}} are physically closed.
 # At this point, the cached objects are {{C1}} and {{S2}}. *Ideally*, all
resources should be reused.

If this scenario is applied to N consecutive messages, it creates many sessions
and message producers. We found this issue by adding an
[Interceptor|https://activemq.apache.org/interceptors] to check for the optimal
usage of resources, for example, only one message producer per connection. In
this scenario, N producers are created for the same connection.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7561) Allow using replyTo with destination names that does not contain "queue" or "topic"

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo reassigned NIFI-7561:
-

Assignee: Gardella Juan Pablo

> Allow using replyTo with destination names that does not contain "queue" or 
> "topic"
> ---
>
> Key: NIFI-7561
> URL: https://issues.apache.org/jira/browse/NIFI-7561
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
>  does not allow using [Request-Reply 
> pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
> the usage of 
> [Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]
>  for destinations where their name does not contain the word {{queue}} or 
> {{topic}}.
> This limitation is tied to current implementation which does not allow 
> specify the reply to destination type in the flow file attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7561) Allow using replyTo with destination names that does not contain "queue" or "topic"

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7561:
--
Description: 
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
 does not allow using [Request-Reply 
pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
the usage of 
[Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]
 for destinations where their name does not contain the word {{queue}} or 
{{topic}}.

This limitation is tied to current implementation which does not allow specify 
the reply to destination type in the flow file attributes.
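
A minimal sketch of the kind of name-based type guessing that causes this 
limitation (hypothetical helper, not the actual NiFi code):
{code:java}
// Sketch only: illustrates destination-type guessing from the name and why it fails
// when the name contains neither "queue" nor "topic". Hypothetical helper.
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Session;

final class ReplyToResolverSketch {
    static Destination resolve(Session session, String name) throws JMSException {
        String lower = name.toLowerCase();
        if (lower.contains("queue")) {
            return session.createQueue(name);
        }
        if (lower.contains("topic")) {
            return session.createTopic(name);
        }
        // e.g. "orders.replies" matches neither keyword, so the type cannot be inferred.
        // Carrying the type in a dedicated flow file attribute would remove the ambiguity.
        throw new IllegalArgumentException("Cannot infer destination type from name: " + name);
    }
}
{code}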


  was:
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
 does not allow using [Request-Reply 
pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
the usage of 
[Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]
 for destinations where their name does not contain the word {{queue}} or 
{{topic}}.

This limitation is tied to current implementation which does not allow specify 
the reply to destination type in the flow file attributes.


> Allow using replyTo with destination names that does not contain "queue" or 
> "topic"
> ---
>
> Key: NIFI-7561
> URL: https://issues.apache.org/jira/browse/NIFI-7561
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Minor
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
>  does not allow using [Request-Reply 
> pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
> the usage of 
> [Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]
>  for destinations where their name does not contain the word {{queue}} or 
> {{topic}}.
> This limitation is tied to current implementation which does not allow 
> specify the reply to destination type in the flow file attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7561) Allow using replyTo with destination names that does not contain "queue" or "topic"

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7561:
--
Description: 
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
 does not allow using [Request-Reply 
pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
the usage of 
[Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]
 for destinations where their name does not contain the word {{queue}} or 
{{topic}}.

This limitation is tied to current implementation which does not allow specify 
the reply to destination type in the flow file attributes.

  was:
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
 does not allow using [Request-Reply 
pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
the usage of 
[Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]

This limitation is tied to current implementation which does not allow specify 
the reply to destination type in the flow file attributes.


> Allow using replyTo with destination names that does not contain "queue" or 
> "topic"
> ---
>
> Key: NIFI-7561
> URL: https://issues.apache.org/jira/browse/NIFI-7561
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Minor
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
>  does not allow using [Request-Reply 
> pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
> the usage of 
> [Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]
>  for destinations where their name does not contain the word {{queue}} or 
> {{topic}}.
> This limitation is tied to current implementation which does not allow 
> specify the reply to destination type in the flow file attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7561) Allow using replyTo with destination names that does not contain "queue" or "topic"

2020-06-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7561:
--
Description: 
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
 does not allow using [Request-Reply 
pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
the usage of 
[Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]

This limitation is tied to current implementation which does not allow specify 
the reply to destination type in the flow file attributes.

  was:
[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
 does not allow using [Request-Reply 
pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
the usage of 
[Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-

This limitation is tied to current implementation which does not allow specify 
the reply to destination type in the flow file attributes.


> Allow using replyTo with destination names that does not contain "queue" or 
> "topic"
> ---
>
> Key: NIFI-7561
> URL: https://issues.apache.org/jira/browse/NIFI-7561
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Minor
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> [PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
>  does not allow using [Request-Reply 
> pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
> the usage of 
> [Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-]
> This limitation is tied to current implementation which does not allow 
> specify the reply to destination type in the flow file attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7561) Allow using replyTo with destination names that does not contain "queue" or "topic"

2020-06-19 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-7561:
-

 Summary: Allow using replyTo with destination names that does not 
contain "queue" or "topic"
 Key: NIFI-7561
 URL: https://issues.apache.org/jira/browse/NIFI-7561
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.11.4
 Environment: ALL
Reporter: Gardella Juan Pablo


[PublishJMS|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-jms-processors-nar/1.11.4/org.apache.nifi.jms.processors.PublishJMS/index.html]
 does not allow using [Request-Reply 
pattern|https://docs.oracle.com/cd/E19316-01/820-6424/aerby/index.html] with  
the usage of 
[Message#setJMSReplyTo|https://docs.oracle.com/javaee/7/api/javax/jms/Message.html#setJMSReplyTo-javax.jms.Destination-

This limitation is tied to current implementation which does not allow specify 
the reply to destination type in the flow file attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2020-05-26 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo reassigned NIFI-4893:
-

Assignee: Gardella Juan Pablo

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: issue1.zip
>
>
> Given an Avro Schema that has a default array defined, it is not possible to 
> be converted to a Nifi Record Schema.
> To reproduce the bug, try to convert the following Avro schema to Record 
> Schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> Using org.apache.nifi.avro.AvroTypeUtil class. Attached a maven project to 
> reproduce the issue and also the fix.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  
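
A minimal reproduction sketch of the conversion step (assuming Avro 1.8.x and the 
{{nifi-avro-record-utils}} module on the classpath; the schema string is the one 
quoted above, and the class name is made up):
{code:java}
// Sketch only: parses the Avro schema above and converts it to a NiFi RecordSchema,
// which fails on the array default before the fix.
import org.apache.avro.Schema;
import org.apache.nifi.avro.AvroTypeUtil;
import org.apache.nifi.serialization.record.RecordSchema;

public class AvroDefaultArrayRepro {
    public static void main(String[] args) {
        String avro = "{\"type\":\"record\",\"name\":\"Foo1\",\"namespace\":\"foo.namespace\","
                + "\"fields\":[{\"name\":\"listOfInt\","
                + "\"type\":{\"type\":\"array\",\"items\":\"int\"},"
                + "\"doc\":\"array of ints\",\"default\":0}]}";
        // With Avro 1.8.x the parser accepts this default without validating it.
        Schema avroSchema = new Schema.Parser().parse(avro);
        RecordSchema recordSchema = AvroTypeUtil.createSchema(avroSchema); // fails before the fix
        System.out.println(recordSchema.getFieldNames());
    }
}
{code}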



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2020-01-21 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17020538#comment-17020538
 ] 

Gardella Juan Pablo commented on NIFI-6908:
---

[~jzahner] Are you able to attach the memory dump to check memory leaks in 
somewhere? Or any simple template to try to reproduce the problem?

> PutKudu 1.10.0 Memory Leak
> --
>
> Key: NIFI-6908
> URL: https://issues.apache.org/jira/browse/NIFI-6908
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: NiFi 1.10.0 8-Node Cluster; Kudu 1.10.0
>Reporter: Josef Zahner
>Assignee: Grant Henke
>Priority: Blocker
>  Labels: heap, kudu, oom
> Attachments: PutKudu_Properties.png, PutKudu_Scheduling.png, 
> PutKudu_Settings.png, memory_leak.png
>
>
> PutKudu 1.10.0 eats up all the heap memory and garbage collection can't 
> anymore free up memory after a few hours.
> We have an NiFi 8-Node cluster (31GB java max memory configured) with a 
> streaming source which generates constantly about 2'500 flowfiles/2.5GB data 
> in 5 minutes. In our example the streaming source was running on "nifi-05" 
> (green line). As you can see between 00:00 and 04:00 the memory grows and 
> grows and at the end the node became instable and the dreaded 
> "java.lang.OutOfMemoryError: Java heap space" message appeared. We tried to 
> do a manual garbage collection with visualvm profiler, but it didn't helped.  
> !memory_leak.png!
> We are sure that the PutKudu is the culprit, as we have now taken the 
> codebase from PutKudu 1.9.2 and use it now in NiFi 1.10.0 without any leaks 
> at all.
> With the official PutKudu 1.10.0 processor our cluster crashed within 5-6 
> hours with our current load as the memory was completely full.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7050) ConsumeJMS is not yielded in case of exception

2020-01-21 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7050:
--
Status: Patch Available  (was: In Progress)

> ConsumeJMS is not yielded in case of exception
> --
>
> Key: NIFI-7050
> URL: https://issues.apache.org/jira/browse/NIFI-7050
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If any exception happens when ConsumerJMS tries to read messages, the process 
> tries again immediately. 
> {code:java}
>   try {
> consumer.consume(destinationName, errorQueueName, durable, 
> shared, subscriptionName, charset, new ConsumerCallback() {
> @Override
> public void accept(final JMSResponse response) {
> if (response == null) {
> return;
> }
> FlowFile flowFile = processSession.create();
> flowFile = processSession.write(flowFile, out -> 
> out.write(response.getMessageBody()));
> final Map jmsHeaders = 
> response.getMessageHeaders();
> final Map jmsProperties = 
> response.getMessageProperties();
> flowFile = 
> ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsHeaders, 
> flowFile, processSession);
> flowFile = 
> ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsProperties, 
> flowFile, processSession);
> flowFile = processSession.putAttribute(flowFile, 
> JMS_SOURCE_DESTINATION_NAME, destinationName);
> processSession.getProvenanceReporter().receive(flowFile, 
> destinationName);
> processSession.putAttribute(flowFile, JMS_MESSAGETYPE, 
> response.getMessageType());
> processSession.transfer(flowFile, REL_SUCCESS);
> processSession.commit();
> }
> });
> } catch(Exception e) {
> consumer.setValid(false);
> throw e; // for backward compatibility with exception handling in 
> flows
> }
> }
> {code}
> It should call {{context.yield}} in exception block. Notice 
> [PublishJMS|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/PublishJMS.java#L166]
>  is yielded in the same scenario. It is requires to do in the ConsumeJMS 
> processor only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7050) ConsumeJMS is not yielded in case of exception

2020-01-21 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo reassigned NIFI-7050:
-

Assignee: Gardella Juan Pablo

> ConsumeJMS is not yielded in case of exception
> --
>
> Key: NIFI-7050
> URL: https://issues.apache.org/jira/browse/NIFI-7050
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Minor
>
> If any exception happens when ConsumerJMS tries to read messages, the process 
> tries again immediately. 
> {code:java}
>   try {
> consumer.consume(destinationName, errorQueueName, durable, 
> shared, subscriptionName, charset, new ConsumerCallback() {
> @Override
> public void accept(final JMSResponse response) {
> if (response == null) {
> return;
> }
> FlowFile flowFile = processSession.create();
> flowFile = processSession.write(flowFile, out -> 
> out.write(response.getMessageBody()));
> final Map jmsHeaders = 
> response.getMessageHeaders();
> final Map jmsProperties = 
> response.getMessageProperties();
> flowFile = 
> ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsHeaders, 
> flowFile, processSession);
> flowFile = 
> ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsProperties, 
> flowFile, processSession);
> flowFile = processSession.putAttribute(flowFile, 
> JMS_SOURCE_DESTINATION_NAME, destinationName);
> processSession.getProvenanceReporter().receive(flowFile, 
> destinationName);
> processSession.putAttribute(flowFile, JMS_MESSAGETYPE, 
> response.getMessageType());
> processSession.transfer(flowFile, REL_SUCCESS);
> processSession.commit();
> }
> });
> } catch(Exception e) {
> consumer.setValid(false);
> throw e; // for backward compatibility with exception handling in 
> flows
> }
> }
> {code}
> It should call {{context.yield}} in exception block. Notice 
> [PublishJMS|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/PublishJMS.java#L166]
>  is yielded in the same scenario. It is requires to do in the ConsumeJMS 
> processor only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7050) ConsumeJMS is not yielded in case of exception

2020-01-21 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-7050:
-

 Summary: ConsumeJMS is not yielded in case of exception
 Key: NIFI-7050
 URL: https://issues.apache.org/jira/browse/NIFI-7050
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.10.0
Reporter: Gardella Juan Pablo


If any exception happens when ConsumeJMS tries to read messages, the processor 
tries again immediately. 

{code:java}
  try {
consumer.consume(destinationName, errorQueueName, durable, shared, 
subscriptionName, charset, new ConsumerCallback() {
@Override
public void accept(final JMSResponse response) {
if (response == null) {
return;
}

FlowFile flowFile = processSession.create();
flowFile = processSession.write(flowFile, out -> 
out.write(response.getMessageBody()));

final Map jmsHeaders = 
response.getMessageHeaders();
final Map jmsProperties = 
response.getMessageProperties();

flowFile = 
ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsHeaders, flowFile, 
processSession);
flowFile = 
ConsumeJMS.this.updateFlowFileAttributesWithJMSAttributes(jmsProperties, 
flowFile, processSession);
flowFile = processSession.putAttribute(flowFile, 
JMS_SOURCE_DESTINATION_NAME, destinationName);

processSession.getProvenanceReporter().receive(flowFile, 
destinationName);
processSession.putAttribute(flowFile, JMS_MESSAGETYPE, 
response.getMessageType());
processSession.transfer(flowFile, REL_SUCCESS);
processSession.commit();
}
});
} catch(Exception e) {
consumer.setValid(false);
throw e; // for backward compatibility with exception handling in 
flows
}
}
{code}

It should call {{context.yield}} in the exception block. Notice that 
[PublishJMS|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/PublishJMS.java#L166]
 already yields in the same scenario; the change is only required in the ConsumeJMS 
processor. A sketch of the proposed change is shown below.
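
A minimal sketch of the proposed handling, replacing only the catch block of the 
snippet above (fragment; identifiers are the ones used in that snippet):
{code:java}
// Sketch only: yield the processor before rethrowing so it backs off instead of
// immediately retrying against the broker, mirroring what PublishJMS already does.
} catch (Exception e) {
    consumer.setValid(false);
    context.yield();  // back off until the next scheduled onTrigger invocation
    throw e;          // keep backward-compatible exception handling in flows
}
{code}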



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7034) Connection leak with JMSConsumer and JMSPublisher

2020-01-19 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7034:
--
Status: Patch Available  (was: Open)

> Connection leak with JMSConsumer and JMSPublisher
> -
>
> Key: NIFI-7034
> URL: https://issues.apache.org/jira/browse/NIFI-7034
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: Discovered against ActiveMQ.
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Critical
> Fix For: 1.11.0
>
>   Original Estimate: 2h
>  Time Spent: 1h
>  Remaining Estimate: 1h
>
> JMS connections are not closed in case of a failure. Discovered against 
> ActiveMQ, but it applies to other JMS servers. 
> The problem happens when an exception is raised and the worker is marked as 
> invalid. The current code discards the worker before closing it properly. 
> Below the details.
> h3. Details
> Any exception happening to a ConsumerJMS or PublisherJMS marks the worker as 
> invalid. After that, the worker is discarded (the worker object reference is 
> never cleaned). Below the snipped code of the issue:
> {code:java|title=AbstractJMSProcessor}
> } finally {
> //in case of exception during worker's connection (consumer or 
> publisher),
> //an appropriate service is responsible to invalidate the worker.
> //if worker is not valid anymore, don't put it back into a pool, 
> try to rebuild it first, or discard.
> //this will be helpful in a situation, when JNDI has changed, or 
> JMS server is not available
> //and reconnection is required.
> if (worker == null || !worker.isValid()){
> getLogger().debug("Worker is invalid. Will try re-create... 
> ");
> final JMSConnectionFactoryProviderDefinition cfProvider = 
> context.getProperty(CF_SERVICE).asControllerService(JMSConnectionFactoryProviderDefinition.class);
> try {
> // Safe to cast. Method 
> #buildTargetResource(ProcessContext context) sets only 
> CachingConnectionFactory
> CachingConnectionFactory currentCF = 
> (CachingConnectionFactory)worker.jmsTemplate.getConnectionFactory();
> 
> cfProvider.resetConnectionFactory(currentCF.getTargetConnectionFactory());
> worker = buildTargetResource(context);
> }catch(Exception e) {
> getLogger().error("Failed to rebuild:  " + cfProvider);
> worker = null;
> }
> }
> {code}
> Before discard the worker, it should be cleaned all resources associated with 
> it. The proper solution is to call {{worker.shutdown()}} and then discard it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7034) Connection leak with JMSConsumer and JMSPublisher

2020-01-16 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo reassigned NIFI-7034:
-

Assignee: Gardella Juan Pablo

> Connection leak with JMSConsumer and JMSPublisher
> -
>
> Key: NIFI-7034
> URL: https://issues.apache.org/jira/browse/NIFI-7034
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: Discovered against ActiveMQ.
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Critical
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> JMS connections are not closed in case of a failure. Discovered against 
> ActiveMQ, but it applies to other JMS servers. 
> The problem happens when an exception is raised and the worker is marked as 
> invalid. The current code discards the worker before closing it properly. 
> Below the details.
> h3. Details
> Any exception happening to a ConsumerJMS or PublisherJMS marks the worker as 
> invalid. After that, the worker is discarded (the worker object reference is 
> never cleaned). Below the snipped code of the issue:
> {code:java|title=AbstractJMSProcessor}
> } finally {
> //in case of exception during worker's connection (consumer or 
> publisher),
> //an appropriate service is responsible to invalidate the worker.
> //if worker is not valid anymore, don't put it back into a pool, 
> try to rebuild it first, or discard.
> //this will be helpful in a situation, when JNDI has changed, or 
> JMS server is not available
> //and reconnection is required.
> if (worker == null || !worker.isValid()){
> getLogger().debug("Worker is invalid. Will try re-create... 
> ");
> final JMSConnectionFactoryProviderDefinition cfProvider = 
> context.getProperty(CF_SERVICE).asControllerService(JMSConnectionFactoryProviderDefinition.class);
> try {
> // Safe to cast. Method 
> #buildTargetResource(ProcessContext context) sets only 
> CachingConnectionFactory
> CachingConnectionFactory currentCF = 
> (CachingConnectionFactory)worker.jmsTemplate.getConnectionFactory();
> 
> cfProvider.resetConnectionFactory(currentCF.getTargetConnectionFactory());
> worker = buildTargetResource(context);
> }catch(Exception e) {
> getLogger().error("Failed to rebuild:  " + cfProvider);
> worker = null;
> }
> }
> {code}
> Before discard the worker, it should be cleaned all resources associated with 
> it. The proper solution is to call {{worker.shutdown()}} and then discard it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7034) Connection leak with JMSConsumer and JMSPublisher

2020-01-16 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7034:
--
Affects Version/s: 1.10.0

> Connection leak with JMSConsumer and JMSPublisher
> -
>
> Key: NIFI-7034
> URL: https://issues.apache.org/jira/browse/NIFI-7034
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: Discovered against ActiveMQ.
>Reporter: Gardella Juan Pablo
>Priority: Critical
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> JMS connections are not closed in case of a failure. Discovered against 
> ActiveMQ, but it applies to other JMS servers. 
> The problem happens when an exception is raised and the worker is marked as 
> invalid. The current code discards the worker before closing it properly. 
> Below the details.
> h3. Details
> Any exception happening to a ConsumerJMS or PublisherJMS marks the worker as 
> invalid. After that, the worker is discarded (the worker object reference is 
> never cleaned). Below the snipped code of the issue:
> {code:java|title=AbstractJMSProcessor}
> } finally {
> //in case of exception during worker's connection (consumer or 
> publisher),
> //an appropriate service is responsible to invalidate the worker.
> //if worker is not valid anymore, don't put it back into a pool, 
> try to rebuild it first, or discard.
> //this will be helpful in a situation, when JNDI has changed, or 
> JMS server is not available
> //and reconnection is required.
> if (worker == null || !worker.isValid()){
> getLogger().debug("Worker is invalid. Will try re-create... 
> ");
> final JMSConnectionFactoryProviderDefinition cfProvider = 
> context.getProperty(CF_SERVICE).asControllerService(JMSConnectionFactoryProviderDefinition.class);
> try {
> // Safe to cast. Method 
> #buildTargetResource(ProcessContext context) sets only 
> CachingConnectionFactory
> CachingConnectionFactory currentCF = 
> (CachingConnectionFactory)worker.jmsTemplate.getConnectionFactory();
> 
> cfProvider.resetConnectionFactory(currentCF.getTargetConnectionFactory());
> worker = buildTargetResource(context);
> }catch(Exception e) {
> getLogger().error("Failed to rebuild:  " + cfProvider);
> worker = null;
> }
> }
> {code}
> Before discard the worker, it should be cleaned all resources associated with 
> it. The proper solution is to call {{worker.shutdown()}} and then discard it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7034) Connection leak with JMSConsumer and JMSPublisher

2020-01-16 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-7034:
--
Description: 
JMS connections are not closed in case of a failure. Discovered against 
ActiveMQ, but it applies to other JMS servers. 

The problem happens when an exception is raised and the worker is marked as 
invalid. The current code discards the worker before closing it properly. Details 
below.

h3. Details
Any exception happening in a JMSConsumer or JMSPublisher marks the worker as 
invalid. After that, the worker is discarded (the worker object reference is 
never cleaned up). Below is the relevant code snippet:

{code:java|title=AbstractJMSProcessor}
} finally {
//in case of exception during worker's connection (consumer or 
publisher),
//an appropriate service is responsible to invalidate the worker.
//if worker is not valid anymore, don't put it back into a pool, 
try to rebuild it first, or discard.
//this will be helpful in a situation, when JNDI has changed, or 
JMS server is not available
//and reconnection is required.
if (worker == null || !worker.isValid()){
getLogger().debug("Worker is invalid. Will try re-create... ");
final JMSConnectionFactoryProviderDefinition cfProvider = 
context.getProperty(CF_SERVICE).asControllerService(JMSConnectionFactoryProviderDefinition.class);
try {
// Safe to cast. Method #buildTargetResource(ProcessContext 
context) sets only CachingConnectionFactory
CachingConnectionFactory currentCF = 
(CachingConnectionFactory)worker.jmsTemplate.getConnectionFactory();

cfProvider.resetConnectionFactory(currentCF.getTargetConnectionFactory());
worker = buildTargetResource(context);
}catch(Exception e) {
getLogger().error("Failed to rebuild:  " + cfProvider);
worker = null;
}
}
{code}
Before discarding the worker, all resources associated with it should be cleaned 
up. The proper solution is to call {{worker.shutdown()}} and then discard it.

  was:
JMS connections are not closed in case of a failure. Discovered against 
ActiveMQ, but it applies to other JMS servers. 

The problem happens when an exception happen and the worker is discarded and 
never closed properly. Below the details.

h3. Details
Any exception happening to a ConsumerJMS or PublisherJMS marks the worker as 
invalid. After that, the worker is discarded (the worker object reference is 
never cleaned). Below the snipped code of the issue:

{code:java|title=AbstractJMSProcessor}
} finally {
//in case of exception during worker's connection (consumer or 
publisher),
//an appropriate service is responsible to invalidate the worker.
//if worker is not valid anymore, don't put it back into a pool, 
try to rebuild it first, or discard.
//this will be helpful in a situation, when JNDI has changed, or 
JMS server is not available
//and reconnection is required.
if (worker == null || !worker.isValid()){
getLogger().debug("Worker is invalid. Will try re-create... ");
final JMSConnectionFactoryProviderDefinition cfProvider = 
context.getProperty(CF_SERVICE).asControllerService(JMSConnectionFactoryProviderDefinition.class);
try {
// Safe to cast. Method #buildTargetResource(ProcessContext 
context) sets only CachingConnectionFactory
CachingConnectionFactory currentCF = 
(CachingConnectionFactory)worker.jmsTemplate.getConnectionFactory();

cfProvider.resetConnectionFactory(currentCF.getTargetConnectionFactory());
worker = buildTargetResource(context);
}catch(Exception e) {
getLogger().error("Failed to rebuild:  " + cfProvider);
worker = null;
}
}
{code}
Before discard the worker, it should be cleaned all resources associated with 
it. The proper solution is to call {{worker.shutdown()}} and then discard it.


> Connection leak with JMSConsumer and JMSPublisher
> -
>
> Key: NIFI-7034
> URL: https://issues.apache.org/jira/browse/NIFI-7034
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
> Environment: Discovered against ActiveMQ.
>Reporter: Gardella Juan Pablo
>Priority: Critical
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> JMS connections are not closed in case of a failure. Discovered against 
> ActiveMQ, but it applies to other JMS servers. 
> The problem happens when 

[jira] [Created] (NIFI-7034) Connection leak with JMSConsumer and JMSPublisher

2020-01-16 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-7034:
-

 Summary: Connection leak with JMSConsumer and JMSPublisher
 Key: NIFI-7034
 URL: https://issues.apache.org/jira/browse/NIFI-7034
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
 Environment: Discovered against ActiveMQ.
Reporter: Gardella Juan Pablo


JMS connections are not closed in case of a failure. Discovered against 
ActiveMQ, but it applies to other JMS servers. 

The problem happens when an exception occurs and the worker is discarded without 
ever being closed properly. Details below.

h3. Details
Any exception happening in a JMSConsumer or JMSPublisher marks the worker as 
invalid. After that, the worker is discarded (the worker object reference is 
never cleaned up). Below is the relevant code snippet:

{code:java|title=AbstractJMSProcessor}
} finally {
//in case of exception during worker's connection (consumer or 
publisher),
//an appropriate service is responsible to invalidate the worker.
//if worker is not valid anymore, don't put it back into a pool, 
try to rebuild it first, or discard.
//this will be helpful in a situation, when JNDI has changed, or 
JMS server is not available
//and reconnection is required.
if (worker == null || !worker.isValid()){
getLogger().debug("Worker is invalid. Will try re-create... ");
final JMSConnectionFactoryProviderDefinition cfProvider = 
context.getProperty(CF_SERVICE).asControllerService(JMSConnectionFactoryProviderDefinition.class);
try {
// Safe to cast. Method #buildTargetResource(ProcessContext 
context) sets only CachingConnectionFactory
CachingConnectionFactory currentCF = 
(CachingConnectionFactory)worker.jmsTemplate.getConnectionFactory();

cfProvider.resetConnectionFactory(currentCF.getTargetConnectionFactory());
worker = buildTargetResource(context);
}catch(Exception e) {
getLogger().error("Failed to rebuild:  " + cfProvider);
worker = null;
}
}
{code}
Before discarding the worker, all resources associated with it should be cleaned 
up. The proper solution is to call {{worker.shutdown()}} and then discard it. A 
sketch of this change is shown below.
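
A minimal sketch of how the finally block above could release the worker before 
rebuilding it (fragment; {{worker.shutdown()}} is the cleanup call proposed in this 
ticket, the remaining identifiers are as in the snippet above):
{code:java}
// Sketch only: close the invalid worker's cached JMS resources before dropping
// the reference, instead of leaking the underlying connection.
if (worker == null || !worker.isValid()) {
    getLogger().debug("Worker is invalid. Will try re-create... ");
    final JMSConnectionFactoryProviderDefinition cfProvider =
            context.getProperty(CF_SERVICE).asControllerService(JMSConnectionFactoryProviderDefinition.class);
    try {
        if (worker != null) {
            CachingConnectionFactory currentCF =
                    (CachingConnectionFactory) worker.jmsTemplate.getConnectionFactory();
            worker.shutdown(); // release the cached connection/session/producers first
            cfProvider.resetConnectionFactory(currentCF.getTargetConnectionFactory());
        }
        worker = buildTargetResource(context);
    } catch (Exception e) {
        getLogger().error("Failed to rebuild:  " + cfProvider);
        worker = null;
    }
}
{code}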



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6915) Jms Durable non shared subscription is broken

2020-01-08 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010860#comment-17010860
 ] 

Gardella Juan Pablo commented on NIFI-6915:
---

Thanks [~m-hogue] ! Good catch, I was not aware of that ticket.

> Jms Durable non shared subscription is broken
> -
>
> Key: NIFI-6915
> URL: https://issues.apache.org/jira/browse/NIFI-6915
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.9.1, 1.9.2
> Environment: All
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Critical
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The enhancement NIFI-4834 broke JMS non shared Durable subscriptions. It may 
> lose messages after stopping and starting a {{ConsumeJMS}} processor as 
> [client ID is always 
> different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189]
>  since NIFI-4834.
> Using different client identifiers, makes the consumer missing messages after 
> it is restarted. The problem is when some messages were published to the 
> topic during the consumer is not running.
> A simple solution is to keep old behavior if it is a durable subscriber.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6915) Jms Durable non shared subscription is broken

2020-01-06 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-6915:
--
Status: Patch Available  (was: In Progress)

> Jms Durable non shared subscription is broken
> -
>
> Key: NIFI-6915
> URL: https://issues.apache.org/jira/browse/NIFI-6915
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.9.2, 1.9.1, 1.10.0, 1.9.0, 1.8.0, 1.7.0, 1.6.0
> Environment: All
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Critical
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The enhancement NIFI-4834 broke JMS non shared Durable subscriptions. It may 
> lose messages after stopping and starting a {{ConsumeJMS}} processor as 
> [client ID is always 
> different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189]
>  since NIFI-4834.
> Using different client identifiers, makes the consumer missing messages after 
> it is restarted. The problem is when some messages were published to the 
> topic during the consumer is not running.
> A simple solution is to keep old behavior if it is a durable subscriber.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6915) Jms Durable non shared subscription is broken

2020-01-06 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-6915:
--
Summary: Jms Durable non shared subscription is broken  (was: Jms Durable 
subscription is broken)

> Jms Durable non shared subscription is broken
> -
>
> Key: NIFI-6915
> URL: https://issues.apache.org/jira/browse/NIFI-6915
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.9.1, 1.9.2
> Environment: All
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Critical
>
> The enhancement NIFI-4834 broke JMS non shared Durable subscriptions. It may 
> lose messages after stopping and starting a {{ConsumeJMS}} processor as 
> [client ID is always 
> different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189]
>  since NIFI-4834.
> Using different client identifiers, makes the consumer missing messages after 
> it is restarted. The problem is when some messages were published to the 
> topic during the consumer is not running.
> A simple solution is to keep old behavior if it is a durable subscriber.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-6915) Jms Durable subscription is broken

2020-01-06 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo reassigned NIFI-6915:
-

Assignee: Gardella Juan Pablo

> Jms Durable subscription is broken
> --
>
> Key: NIFI-6915
> URL: https://issues.apache.org/jira/browse/NIFI-6915
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.9.1, 1.9.2
> Environment: All
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Critical
>
> The enhancement NIFI-4834 broke JMS non shared Durable subscriptions. It may 
> lose messages after stopping and starting a {{ConsumeJMS}} processor as 
> [client ID is always 
> different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189]
>  since NIFI-4834.
> Using different client identifiers, makes the consumer missing messages after 
> it is restarted. The problem is when some messages were published to the 
> topic during the consumer is not running.
> A simple solution is to keep old behavior if it is a durable subscriber.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6915) Jms Durable subscription is broken

2020-01-06 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-6915:
--
Description: 
The enhancement NIFI-4834 broke JMS non-shared durable subscriptions. Messages may 
be lost after stopping and starting a {{ConsumeJMS}} processor because the [client 
ID is always 
different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189]
 since NIFI-4834.

Using different client identifiers makes the consumer miss messages after it is 
restarted. The problem occurs when messages were published to the topic while the 
consumer was not running.

A simple solution is to keep the old behavior if it is a durable subscriber.

 

 

 

  was:
The enhancement NIFI-4834 broke JMS Durable subscriptions because after stop 
and start a {{ConsumeJMS}} processor the [client ID is always 
different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].
  Using different client identifiers make the consumer missing messages after 
it is stopped, some messages were published to the topic and then, the consumer 
is started.

A simple solution is to keep old behavior if it is a durable subscriber. 

 

 

 


> Jms Durable subscription is broken
> --
>
> Key: NIFI-6915
> URL: https://issues.apache.org/jira/browse/NIFI-6915
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.9.1, 1.9.2
> Environment: All
>Reporter: Gardella Juan Pablo
>Priority: Critical
>
> The enhancement NIFI-4834 broke JMS non shared Durable subscriptions. It may 
> lose messages after stopping and starting a {{ConsumeJMS}} processor as 
> [client ID is always 
> different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189]
>  since NIFI-4834.
> Using different client identifiers, makes the consumer missing messages after 
> it is restarted. The problem is when some messages were published to the 
> topic during the consumer is not running.
> A simple solution is to keep old behavior if it is a durable subscriber.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6916) Null messages cannot be acknowledge at JMSConsumer

2019-12-04 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987889#comment-16987889
 ] 

Gardella Juan Pablo commented on NIFI-6916:
---

The problem happens at 
[https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSConsumer.java#L99]
 and at 
[https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSConsumer.java#L101].

For a TextMessage, 
[https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/MessageBodyToBytesConverter.java#L52]
 is called and will throw an NPE. 

As a result, the acknowledge call 
([https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSConsumer.java#L124])
 is never made for a null message. Let me know if I am clear now. 
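
A minimal sketch of a null-safe conversion (hypothetical helper, not the actual 
NiFi code) that would let the consumer still acknowledge such messages:
{code:java}
// Sketch only: return an empty body instead of throwing an NPE when a TextMessage
// carries a null payload, so the caller can still acknowledge the message.
import java.nio.charset.Charset;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.TextMessage;

final class NullSafeBodySketch {
    static byte[] toBytes(Message message, Charset charset) throws JMSException {
        if (message instanceof TextMessage) {
            String text = ((TextMessage) message).getText(); // may be null
            return text == null ? new byte[0] : text.getBytes(charset);
        }
        return new byte[0]; // other message types handled elsewhere
    }
}
{code}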

> Null messages cannot be acknowledge at JMSConsumer
> --
>
> Key: NIFI-6916
> URL: https://issues.apache.org/jira/browse/NIFI-6916
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
>Reporter: Gardella Juan Pablo
>Priority: Major
>
> {{ConsumeJMS}} procesor does not handled null messages properly. Null 
> messages are never acknowledge.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6917) Support resolve variables/parameters at JMS processors at dynamic properties

2019-12-04 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987874#comment-16987874
 ] 

Gardella Juan Pablo commented on NIFI-6917:
---

You were right, it was partially solved by the changes introduced in NIFI-5929. 
I was looking at tag nifi-rel/0.10.0 and those changes were not merged there. 
Your changes are OK from my side, as one of the missing parts was the change 
introduced at 
[https://github.com/apache/nifi/pull/3914/commits/fd286e94adbd32cd15a0e1102af388ffa18527bb#diff-29268df987e7900c4750d833f3b0456dR138],
 and the other part of the solution (missing in the 1.10.0 tag) is at 
[https://github.com/apache/nifi/commit/1dfbc97c074a5fc5c8e68124e0984504cfa97813#diff-29268df987e7900c4750d833f3b0456dR261]
 (although that part did not mark the dynamic properties as supporting the 
variable registry via 
{{.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)}}).

Regarding: 
 ??the JMSConnectionFactoryProvider support dynamic properties and properties 
are evaluated against registry variables.??

This will work now with your changes; what was missing was 
{{expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)}} on the 
dynamic properties of {{JMSConnectionFactoryProvider}}.

 

Thanks a lot [~pvillard] for looking into this.
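
For reference, a minimal sketch of how a component can declare its dynamic 
properties with variable registry support (this shows the general NiFi API shape, 
not the exact code added by the fix):
{code:java}
// Sketch only: dynamic properties declared this way are evaluated against the
// variable registry, which is what broker-specific properties (e.g. Solace's)
// that differ per environment require.
@Override
protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
    return new PropertyDescriptor.Builder()
            .name(propertyDescriptorName)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .dynamic(true)
            .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
            .build();
}
{code}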

> Support resolve variables/parameters at JMS processors at dynamic properties
> 
>
> Key: NIFI-6917
> URL: https://issues.apache.org/jira/browse/NIFI-6917
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Gardella Juan Pablo
>Assignee: Pierre Villard
>Priority: Trivial
>  Labels: documentation
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Support resolve variables/parameters at JMS processors at dynamic properties. 
> Currently they are not supported. For example Solace requires special 
> properties and commonly are different between environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6915) Jms Durable subscription is broken

2019-12-02 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-6915:
--
Description: 
The enhancement NIFI-4834 broke JMS Durable subscriptions because after stop 
and start a {{ConsumeJMS}} processor the [client ID is always 
different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].
  Using different client identifiers make the consumer missing messages after 
it is stopped, some messages were published to the topic and then, the consumer 
is started.

A simple solution is to keep old behavior if it is a durable subscriber. 

 

 

 

  was:
The enhancement NIFI-4834 broke JMS Durable subscriptions because after stop 
and start a {{ConsumeJMS}} processor the [client ID is always 
different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].
 

A simple solution is to keep old behavior if it is a durable subscriber. 

 

 

 


> Jms Durable subscription is broken
> --
>
> Key: NIFI-6915
> URL: https://issues.apache.org/jira/browse/NIFI-6915
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.9.1, 1.9.2
> Environment: All
>Reporter: Gardella Juan Pablo
>Priority: Critical
>
> The enhancement NIFI-4834 broke JMS Durable subscriptions because after stop 
> and start a {{ConsumeJMS}} processor the [client ID is always 
> different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].
>   Using different client identifiers make the consumer missing messages after 
> it is stopped, some messages were published to the topic and then, the 
> consumer is started.
> A simple solution is to keep old behavior if it is a durable subscriber. 
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6915) Jms Durable subscription is broken

2019-11-30 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-6915:
--
Description: 
The enhancement NIFI-4834 broke JMS Durable subscriptions because after stop 
and start a {{ConsumeJMS}} processor the [client ID is always 
different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].
 

A simple solution is to keep old behavior if it is a durable subscriber. 

 

 

 

  was:
The enhancement NIFI-4834 broke JMS Durable subscriptions because after stop 
and start a {{ConsumeJMS}} processor the [client ID is always 
different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].
 

A simple solution is to keep old behavior if it is a durable subscriber. 
Ideally, for durable subscribers and to keep the scalabilty introduced at 
NIFI-4834, a possible solution may be is to hold in zookeeper the previous 
assigned client id values.

 

 

 


> Jms Durable subscription is broken
> --
>
> Key: NIFI-6915
> URL: https://issues.apache.org/jira/browse/NIFI-6915
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.9.1, 1.9.2
> Environment: All
>Reporter: Gardella Juan Pablo
>Priority: Critical
>
> The enhancement NIFI-4834 broke JMS Durable subscriptions because after stop 
> and start a {{ConsumeJMS}} processor the [client ID is always 
> different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].
>  
> A simple solution is to keep old behavior if it is a durable subscriber. 
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6915) Jms Durable subscription is broken

2019-11-29 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-6915:
--
Description: 
The enhancement NIFI-4834 broke JMS Durable subscriptions because after stop 
and start a {{ConsumeJMS}} processor the [client ID is always 
different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].
 

A simple solution is to keep old behavior if it is a durable subscriber. 
Ideally, for durable subscribers and to keep the scalabilty introduced at 
NIFI-4834, a possible solution may be is to hold in zookeeper the previous 
assigned client id values.

 

 

 

  was:
The enhancement NIFI-4834 broke JMS Durable subscriptions because after stop 
and start a {{ConsumeJMS}} processor the [client ID is always 
different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].
 

A simple solution is to keep old behavior if it is a durable subscriber. 
Ideally, for durable subscribers and keep the scalabilty introduced at 
NIFI-4834, it is required to hold in zookeeper the previous assigned client id 
values.

 

 

 


> Jms Durable subscription is broken
> --
>
> Key: NIFI-6915
> URL: https://issues.apache.org/jira/browse/NIFI-6915
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.9.1, 1.9.2
> Environment: All
>Reporter: Gardella Juan Pablo
>Priority: Critical
>
> The enhancement NIFI-4834 broke JMS Durable subscriptions because after stop 
> and start a {{ConsumeJMS}} processor the [client ID is always 
> different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].
>  
> A simple solution is to keep old behavior if it is a durable subscriber. 
> Ideally, for durable subscribers and to keep the scalabilty introduced at 
> NIFI-4834, a possible solution may be is to hold in zookeeper the previous 
> assigned client id values.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-6917) Support resolve variables/parameters at JMS processors at dynamic properties

2019-11-29 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-6917:
-

 Summary: Support resolve variables/parameters at JMS processors at 
dynamic properties
 Key: NIFI-6917
 URL: https://issues.apache.org/jira/browse/NIFI-6917
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Gardella Juan Pablo


Support resolving variables/parameters in the dynamic properties of the JMS 
processors. Currently they are not supported. For example, Solace requires special 
properties that commonly differ between environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-6916) Null messages cannot be acknowledge at JMSConsumer

2019-11-29 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-6916:
-

 Summary: Null messages cannot be acknowledge at JMSConsumer
 Key: NIFI-6916
 URL: https://issues.apache.org/jira/browse/NIFI-6916
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.10.0
Reporter: Gardella Juan Pablo


The {{ConsumeJMS}} processor does not handle null messages properly. Null messages 
are never acknowledged.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-4834) ConsumeJMS does not scale when given more than 1 thread

2019-11-29 Thread Gardella Juan Pablo (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-4834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16985182#comment-16985182
 ] 

Gardella Juan Pablo commented on NIFI-4834:
---

This enhancement introduced a critical bug, reported at 
https://issues.apache.org/jira/browse/NIFI-6915.

> ConsumeJMS does not scale when given more than 1 thread
> ---
>
> Key: NIFI-4834
> URL: https://issues.apache.org/jira/browse/NIFI-4834
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.6.0
>
>
> When I run ConsumeJMS against a local broker, the performance is great. 
> However, if I run against a broker that is running remotely with a 75 ms 
> round trip time (i.e., somewhat high latency), then the performance is pretty 
> poor, allowing me to receive only about 30-40 msgs/sec (1-2 MB/sec).
> Increasing the number of threads should result in multiple connections to the 
> JMS Broker, which would provide better throughput. However, when I increase 
> the number of Concurrent Tasks to 10, I see 10 consumers but only a single 
> connection being created, so the throughput is no better (in fact it's a bit 
> slower due to added lock contention).
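
As a rough illustration of the scaling point above (not the NiFi 
implementation), the javax.jms sketch below opens one connection per concurrent 
task instead of sharing a single connection; the ActiveMQ factory, broker URL 
and queue name are placeholders.
{code}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class OneConnectionPerTask {

    // Create one Connection (and therefore one socket to the broker) per
    // concurrent task, so consumers do not share a single high-latency link.
    public static void start(final int concurrentTasks) throws JMSException {
        final ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://broker-host:61616");
        for (int i = 0; i < concurrentTasks; i++) {
            final Connection connection = factory.createConnection();
            connection.start();
            final Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            final MessageConsumer consumer = session.createConsumer(session.createQueue("example.queue"));
            consumer.setMessageListener(message -> {
                // each listener processes messages over its own connection
            });
        }
    }
}
{code}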



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-6915) Jms Durable subscription is broken

2019-11-29 Thread Gardella Juan Pablo (Jira)
Gardella Juan Pablo created NIFI-6915:
-

 Summary: Jms Durable subscription is broken
 Key: NIFI-6915
 URL: https://issues.apache.org/jira/browse/NIFI-6915
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.9.2, 1.9.1, 1.10.0, 1.9.0, 1.8.0, 1.7.0, 1.6.0
 Environment: All
Reporter: Gardella Juan Pablo


The enhancement NIFI-4834 broke JMS Durable subscriptions because, after 
stopping and starting a {{ConsumeJMS}} processor, the [client ID is always 
different|https://github.com/apache/nifi/pull/2445/commits/cd091103e4a76e7b54e00257e5e18eaab3d389ec#diff-4ce7e53c92829e48b85959da41653f2bR189].

A simple solution is to keep the old behavior when the subscriber is durable. 
Ideally, to keep the scalability introduced in NIFI-4834 for durable 
subscribers as well, it is required to hold the previously assigned client ID 
values in ZooKeeper.
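
For context, a minimal javax.jms sketch (assuming a fixed, configured client 
ID; not the NiFi code) showing why a durable subscription depends on a stable 
client ID and subscription name across restarts; topic and names below are 
placeholders.
{code}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

public class DurableSubscription {

    // A durable subscription is identified by (client ID, subscription name).
    // Both must be stable across restarts; a freshly generated client ID on
    // every start effectively creates a brand-new subscriber each time.
    public static MessageConsumer subscribe(final ConnectionFactory factory) throws JMSException {
        final Connection connection = factory.createConnection();
        connection.setClientID("consume-jms-durable-client"); // fixed, configured value
        connection.start();
        final Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        final Topic topic = session.createTopic("example.topic");
        return session.createDurableSubscriber(topic, "example-durable-subscription");
    }
}
{code}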

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-5761) ReplaceText processor can stop processing data if it evaluates invalid expressions

2018-10-26 Thread Gardella Juan Pablo (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16665617#comment-16665617
 ] 

Gardella Juan Pablo commented on NIFI-5761:
---

Patch available at [https://github.com/apache/nifi/pull/3112]

 

> ReplaceText processor can stop processing data if it evaluates invalid 
> expressions
> --
>
> Key: NIFI-5761
> URL: https://issues.apache.org/jira/browse/NIFI-5761
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.7.1
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Given a flowfile body with a NiFi expression, when the _ReplaceText_ 
> processor evaluates it and the expression throws an exception, the processor 
> will roll back the flowfile and keep trying to evaluate it instead of sending 
> the flowfile to the _failure_ relationship.
> Discussion Thread: 
> http://apache-nifi-users-list.2361937.n4.nabble.com/ReplaceText-cannot-consume-messages-if-Regex-does-not-match-td5986.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5761) ReplaceText processor can stop processing data if it evaluates invalid expressions

2018-10-26 Thread Gardella Juan Pablo (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-5761:
--
Summary: ReplaceText processor can stop processing data if it evaluates 
invalid expressions  (was: ReplaceText processor can stop pulling data if the 
data contains invalid expressions)

> ReplaceText processor can stop processing data if it evaluates invalid 
> expressions
> --
>
> Key: NIFI-5761
> URL: https://issues.apache.org/jira/browse/NIFI-5761
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.7.1
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Given a flowfile body with a NiFi expression, when the _ReplaceText_ 
> processor evaluates it and the expression throws an exception, the processor 
> will roll back the flowfile and keep trying to evaluate it instead of sending 
> the flowfile to the _failure_ relationship.
> Discussion Thread: 
> http://apache-nifi-users-list.2361937.n4.nabble.com/ReplaceText-cannot-consume-messages-if-Regex-does-not-match-td5986.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5761) ReplaceText processor can stop pulling data if the data contains invalid expressions

2018-10-26 Thread Gardella Juan Pablo (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-5761:
--
Affects Version/s: 1.7.1

> ReplaceText processor can stop pulling data if the data contains invalid 
> expressions
> 
>
> Key: NIFI-5761
> URL: https://issues.apache.org/jira/browse/NIFI-5761
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.7.1
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Given a flowfile body with a NiFi expression, when the _ReplaceText_ 
> processor evaluates it and the expression throws an exception, the processor 
> will roll back the flowfile and keep trying to evaluate it instead of sending 
> the flowfile to the _failure_ relationship.
> Discussion Thread: 
> http://apache-nifi-users-list.2361937.n4.nabble.com/ReplaceText-cannot-consume-messages-if-Regex-does-not-match-td5986.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5761) ReplaceText processor can stop pulling data if the data contains invalid expressions

2018-10-26 Thread Gardella Juan Pablo (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16665461#comment-16665461
 ] 

Gardella Juan Pablo commented on NIFI-5761:
---

A similar type of issue was solved for the Kafka processors.

> ReplaceText processor can stop pulling data if the data contains invalid 
> expressions
> 
>
> Key: NIFI-5761
> URL: https://issues.apache.org/jira/browse/NIFI-5761
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Given a flowfile body with a NiFi expression, when the _ReplaceText_ 
> processor evaluates it and the expression throws an exception, the processor 
> will roll back the flowfile and keep trying to evaluate it instead of sending 
> the flowfile to the _failure_ relationship.
> Discussion Thread: 
> http://apache-nifi-users-list.2361937.n4.nabble.com/ReplaceText-cannot-consume-messages-if-Regex-does-not-match-td5986.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5761) ReplaceText processor can stop pulling data if the data contains invalid expressions

2018-10-26 Thread Gardella Juan Pablo (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16665460#comment-16665460
 ] 

Gardella Juan Pablo commented on NIFI-5761:
---

I will work on the patch.

> ReplaceText processor can stop pulling data if the data contains invalid 
> expressions
> 
>
> Key: NIFI-5761
> URL: https://issues.apache.org/jira/browse/NIFI-5761
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Given a flowfile body with a NiFi expression, when the _ReplaceText_ 
> processor evaluates it and the expression throws an exception, the processor 
> will roll back the flowfile and keep trying to evaluate it instead of sending 
> the flowfile to the _failure_ relationship.
> Discussion Thread: 
> http://apache-nifi-users-list.2361937.n4.nabble.com/ReplaceText-cannot-consume-messages-if-Regex-does-not-match-td5986.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5761) ReplaceText processor can stop pulling data if the data contains invalid expressions

2018-10-26 Thread Gardella Juan Pablo (JIRA)
Gardella Juan Pablo created NIFI-5761:
-

 Summary: ReplaceText processor can stop pulling data if the data 
contains invalid expressions
 Key: NIFI-5761
 URL: https://issues.apache.org/jira/browse/NIFI-5761
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.5.0
 Environment: ALL
Reporter: Gardella Juan Pablo


Given a flowfile body with a NiFi expression, when the _ReplaceText_ processor 
evaluates it and the expression throws an exception, the processor will roll 
back the flowfile and keep trying to evaluate it instead of sending the 
flowfile to the _failure_ relationship.

Discussion Thread: 
http://apache-nifi-users-list.2361937.n4.nabble.com/ReplaceText-cannot-consume-messages-if-Regex-does-not-match-td5986.html
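
As a hedged sketch of the behavior proposed here (not the actual ReplaceText 
change), a processor can catch the Expression Language failure and route the 
flowfile to a failure relationship instead of throwing, which would roll the 
session back and retry forever. The property and relationship names below are 
illustrative.
{code}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.expression.ExpressionLanguageScope;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;
import org.apache.nifi.processor.util.StandardValidators;

public class RouteOnEvaluationFailure extends AbstractProcessor {

    static final PropertyDescriptor REPLACEMENT_VALUE = new PropertyDescriptor.Builder()
            .name("Replacement Value")
            .required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
            .build();

    static final Relationship REL_SUCCESS = new Relationship.Builder().name("success").build();
    static final Relationship REL_FAILURE = new Relationship.Builder().name("failure").build();

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }
        try {
            final String evaluated = context.getProperty(REPLACEMENT_VALUE)
                    .evaluateAttributeExpressions(flowFile)
                    .getValue();
            flowFile = session.putAttribute(flowFile, "replacement.value", evaluated);
            session.transfer(flowFile, REL_SUCCESS);
        } catch (final ProcessException e) {
            // Route to failure instead of throwing, so the session is committed
            // and the same flowfile is not retried forever.
            getLogger().error("Expression evaluation failed; routing to failure", e);
            session.transfer(flowFile, REL_FAILURE);
        }
    }
}
{code}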



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5070) java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed

2018-04-11 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434740#comment-16434740
 ] 

Gardella Juan Pablo commented on NIFI-5070:
---

Patch available.

> java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed
> --
>
> Key: NIFI-5070
> URL: https://issues.apache.org/jira/browse/NIFI-5070
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Gardella Juan Pablo
>Priority: Major
>
> Discovered during NIFI-5049. According to the [ResultSet.next() 
> javadoc|https://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html#next%E2%80%93]:
> _When a call to the {{next}} method returns {{false}}, the cursor is 
> positioned after the last row. Any invocation of a {{ResultSet}} method which 
> requires a current row will result in a {{SQLException}} being thrown. If the 
> result set type is {{TYPE_FORWARD_ONLY}}, it is vendor specified whether 
> their JDBC driver implementation will return {{false}} or throw an 
> {{SQLException}} on a subsequent call to {{next}}._
> With Phoenix Database and QueryDatabaseTable the exception 
> {{java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed}} is raised.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5049) Fix handling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-10 Thread Gardella Juan Pablo (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-5049:
--
Summary: Fix handling of Phonenix datetime columns in QueryDatabaseTable 
and GenerateTableFetch  (was: Fixhandling of Phonenix datetime columns in 
QueryDatabaseTable and GenerateTableFetch)

> Fix handling of Phonenix datetime columns in QueryDatabaseTable and 
> GenerateTableFetch
> --
>
> Key: NIFI-5049
> URL: https://issues.apache.org/jira/browse/NIFI-5049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Gardella Juan Pablo
>Assignee: Matt Burgess
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> QueryDatabaseAdapter does not work against Phoenix DB if it should convert 
> TIMESTAMP. The error is described below:
> [https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase]
> Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work. 
> See 
> [https://lists.apache.org/thread.html/%3cca+kifscje8ay+uxt_d_vst4qgzf4jxwovboynjgztt4dsbs...@mail.gmail.com%3E]
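
As a rough JDBC sketch of the workaround quoted above (placeholder JDBC URL, 
table and column names; not code taken from NiFi), the comparison against the 
maximum-value column works once it is wrapped with Phoenix's TO_TIMESTAMP():
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PhoenixToTimestampExample {

    public static void main(final String[] args) throws SQLException {
        // EXAMPLE_TABLE / MAX_COLUMN are placeholders; the point is using
        // TO_TIMESTAMP(MAX_COLUMN) rather than comparing a bare string literal.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT * FROM EXAMPLE_TABLE "
                     + "WHERE TO_TIMESTAMP(MAX_COLUMN) > TO_TIMESTAMP('2018-04-01 00:00:00')")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
{code}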



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5049) Fixhandling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-10 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433392#comment-16433392
 ] 

Gardella Juan Pablo commented on NIFI-5049:
---

[~mattyb149] patch applied. Thanks for your help.

> Fixhandling of Phonenix datetime columns in QueryDatabaseTable and 
> GenerateTableFetch
> -
>
> Key: NIFI-5049
> URL: https://issues.apache.org/jira/browse/NIFI-5049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Gardella Juan Pablo
>Assignee: Matt Burgess
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> QueryDatabaseAdapter does not work against Phoenix DB if it should convert 
> TIMESTAMP. The error is described below:
> [https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase]
> Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work. 
> See 
> [https://lists.apache.org/thread.html/%3cca+kifscje8ay+uxt_d_vst4qgzf4jxwovboynjgztt4dsbs...@mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-5049) Fixhandling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-10 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432868#comment-16432868
 ] 

Gardella Juan Pablo edited comment on NIFI-5049 at 4/10/18 8:41 PM:


Patch complete, but there is an issue with the QueryDatabaseTable processor 
(not related to the change). The issue is:
 org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
     at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:328)
     at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
     at 
com.point72.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:322)
     at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
     at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
     at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
     at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
     at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
     at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
     at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
     at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:748)
 Caused by: java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed.
     at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
     at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
     at 
org.apache.phoenix.jdbc.PhoenixResultSet.checkOpen(PhoenixResultSet.java:215)
     at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:772)
     at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
     at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
     at 
org.apache.nifi.processors.standard.JdbcCommon.convertToAvroStream(JdbcCommon.java:292)
     at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:326)
     ... 13 common frames omitted
  
 [~mattyb149] Do you agree to file another issue to track the issue below?


was (Author: gardellajuanpablo):
Patch complete, but there is an issue with QueryDatabaseTable processor (not 
related with the change). The issue is:
 org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
     at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:328)
     at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
     at 
com.point72.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:322)
     at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
     at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
     at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
     at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
     at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
     at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
     at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
     at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:748)
 Caused by: 

[jira] [Comment Edited] (NIFI-5049) Fixhandling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-10 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432868#comment-16432868
 ] 

Gardella Juan Pablo edited comment on NIFI-5049 at 4/10/18 8:40 PM:


Patch complete, but there is an issue with the QueryDatabaseTable processor 
(not related to the change). The issue is:
 org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
     at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:328)
     at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
     at 
com.point72.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:322)
     at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
     at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
     at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
     at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
     at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
     at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
     at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
     at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:748)
 Caused by: java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed.
     at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
     at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
     at 
org.apache.phoenix.jdbc.PhoenixResultSet.checkOpen(PhoenixResultSet.java:215)
     at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:772)
     at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
     at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
     at 
org.apache.nifi.nifi.processors.JdbcCommon.convertToAvroStream(JdbcCommon.java:292)
     at 
org.apache.nifi.nifi.processors.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:326)
     ... 13 common frames omitted
  
 [~mattyb149] Do you agree to file another issue to track the issue below?


was (Author: gardellajuanpablo):
Patch complete, but there is an issue with QueryDatabaseTable processor (not 
related with the change). The issue is:
 org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
     at 
org.apache.nifi.processors.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:328)
     at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
     at 
com.point72.nifi.processors.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:322)
     at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
     at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
     at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
     at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
     at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
     at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
     at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
     at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:748)
 Caused by: java.sql.SQLException: ERROR 1101 

[jira] [Created] (NIFI-5070) java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed

2018-04-10 Thread Gardella Juan Pablo (JIRA)
Gardella Juan Pablo created NIFI-5070:
-

 Summary: java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is 
closed
 Key: NIFI-5070
 URL: https://issues.apache.org/jira/browse/NIFI-5070
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.6.0
Reporter: Gardella Juan Pablo


Discovered during NIFI-5049. According to the [ResultSet.next() 
javadoc|https://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html#next%E2%80%93]:

_When a call to the {{next}} method returns {{false}}, the cursor is positioned 
after the last row. Any invocation of a {{ResultSet}} method which requires a 
current row will result in a {{SQLException}} being thrown. If the result set 
type is {{TYPE_FORWARD_ONLY}}, it is vendor specified whether their JDBC driver 
implementation will return {{false}} or throw an {{SQLException}} on a 
subsequent call to {{next}}._

With Phoenix Database and QueryDatabaseTable the exception 
{{java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed}} is raised.
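
A small JDBC sketch of the defensive pattern implied by that javadoc (an 
assumed example, not NiFi code): stop iterating on the first {{false}} returned 
by {{next()}} and never probe the ResultSet again afterwards.
{code}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ResultSetIteration {

    // Count rows without calling next() again after it has returned false;
    // a second call after exhaustion is vendor-specific, and Phoenix reports
    // "ResultSet is closed".
    public static int countRows(final Connection conn, final String query) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(query)) {
            int rows = 0;
            while (rs.next()) {
                rows++;
            }
            return rows;
        }
    }
}
{code}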

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5049) Fixhandling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-10 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432909#comment-16432909
 ] 

Gardella Juan Pablo commented on NIFI-5049:
---

Filed https://issues.apache.org/jira/browse/NIFI-5070 to track it. Thanks!

> Fixhandling of Phonenix datetime columns in QueryDatabaseTable and 
> GenerateTableFetch
> -
>
> Key: NIFI-5049
> URL: https://issues.apache.org/jira/browse/NIFI-5049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Gardella Juan Pablo
>Assignee: Matt Burgess
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> QueryDatabaseAdapter does not work against Phoenix DB if it should convert 
> TIMESTAMP. The error is described below:
> [https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase]
> Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work. 
> See 
> [https://lists.apache.org/thread.html/%3cca+kifscje8ay+uxt_d_vst4qgzf4jxwovboynjgztt4dsbs...@mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-5049) Fixhandling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-10 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432868#comment-16432868
 ] 

Gardella Juan Pablo edited comment on NIFI-5049 at 4/10/18 7:57 PM:


Patch complete, but there is an issue with the QueryDatabaseTable processor 
(not related to the change). The issue is:
 org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
     at 
org.apache.nifi.processors.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:328)
     at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
     at 
com.point72.nifi.processors.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:322)
     at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
     at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
     at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
     at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
     at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
     at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
     at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
     at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
     at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:748)
 Caused by: java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed.
     at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
     at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
     at 
org.apache.phoenix.jdbc.PhoenixResultSet.checkOpen(PhoenixResultSet.java:215)
     at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:772)
     at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
     at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
     at 
org.apache.nifi.nifi.processors.JdbcCommon.convertToAvroStream(JdbcCommon.java:292)
     at 
org.apache.nifi.nifi.processors.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:326)
     ... 13 common frames omitted
  
 [~mattyb149] Do you agree to file another issue to track the issue below?


was (Author: gardellajuanpablo):
Patch complete, but there is an issue with QueryDatabaseTable processor (not 
related with the change). The issue is:
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
    at 
com.point72.nifi.processors.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:328)
    at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
    at 
com.point72.nifi.processors.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:322)
    at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
    at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
    at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
    at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed.
  

[jira] [Commented] (NIFI-5049) Fixhandling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-10 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432868#comment-16432868
 ] 

Gardella Juan Pablo commented on NIFI-5049:
---

Patch complete, but there is an issue with the QueryDatabaseTable processor 
(not related to the change). The issue is:
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
    at 
com.point72.nifi.processors.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:328)
    at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
    at 
com.point72.nifi.processors.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:322)
    at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
    at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
    at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
    at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed.
    at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
    at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
    at 
org.apache.phoenix.jdbc.PhoenixResultSet.checkOpen(PhoenixResultSet.java:215)
    at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:772)
    at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
    at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
    at 
com.point72.nifi.processors.JdbcCommon.convertToAvroStream(JdbcCommon.java:292)
    at 
com.point72.nifi.processors.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:326)
    ... 13 common frames omitted
 
[~mattyb149] Do you agree to file another issue to track the issue below?

> Fixhandling of Phonenix datetime columns in QueryDatabaseTable and 
> GenerateTableFetch
> -
>
> Key: NIFI-5049
> URL: https://issues.apache.org/jira/browse/NIFI-5049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Gardella Juan Pablo
>Assignee: Matt Burgess
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> QueryDatabaseAdapter does not work against Phoenix DB if it should convert 
> TIMESTAMP. The error is described below:
> [https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase]
> Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work. 
> See 
> [https://lists.apache.org/thread.html/%3cca+kifscje8ay+uxt_d_vst4qgzf4jxwovboynjgztt4dsbs...@mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5049) Fixhandling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-06 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428789#comment-16428789
 ] 

Gardella Juan Pablo commented on NIFI-5049:
---

I will work on a patch

> Fixhandling of Phonenix datetime columns in QueryDatabaseTable and 
> GenerateTableFetch
> -
>
> Key: NIFI-5049
> URL: https://issues.apache.org/jira/browse/NIFI-5049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Gardella Juan Pablo
>Assignee: Matt Burgess
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> QueryDatabaseAdapter does not work against Phoenix DB if it should convert 
> TIMESTAMP. The error is described below:
> [https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase]
> Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work. 
> See 
> [https://lists.apache.org/thread.html/%3cca+kifscje8ay+uxt_d_vst4qgzf4jxwovboynjgztt4dsbs...@mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5049) Fixhandling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-06 Thread Gardella Juan Pablo (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-5049:
--
 Affects Version/s: (was: 0.6.0)
1.5.0
Remaining Estimate: 24h
 Original Estimate: 24h
 Fix Version/s: (was: 1.2.0)
   Description: 
QueryDatabaseAdapter does not work against Phoenix DB if it should convert 
TIMESTAMP. The error is described below:

[https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase]

Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work. 

See 
[https://lists.apache.org/thread.html/%3cca+kifscje8ay+uxt_d_vst4qgzf4jxwovboynjgztt4dsbs...@mail.gmail.com%3E]

  was:
Due to default handling of Oracle columns returned as java.sql.Date types, the 
string literals used to compare against the column values must be in the same 
format as the NLS_DATE_FORMAT setting of the database (often -MM-DD).

I believe when "Oracle" is provided as the database type (formerly known as 
pre-processing strategy), Oracle's Datetime Functions (such as TO_DATE or 
TO_TIMESTAMP) could be leveraged to give more fine-grained maximum-value 
information.

   Component/s: Core Framework
Issue Type: Bug  (was: Improvement)

> Fixhandling of Phonenix datetime columns in QueryDatabaseTable and 
> GenerateTableFetch
> -
>
> Key: NIFI-5049
> URL: https://issues.apache.org/jira/browse/NIFI-5049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Gardella Juan Pablo
>Assignee: Matt Burgess
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> QueryDatabaseAdapter does not work against Phoenix DB if it should convert 
> TIMESTAMP. The error is described below:
> [https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase]
> Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work. 
> See 
> [https://lists.apache.org/thread.html/%3cca+kifscje8ay+uxt_d_vst4qgzf4jxwovboynjgztt4dsbs...@mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5049) Fixhandling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-06 Thread Gardella Juan Pablo (JIRA)
Gardella Juan Pablo created NIFI-5049:
-

 Summary: Fixhandling of Phonenix datetime columns in 
QueryDatabaseTable and GenerateTableFetch
 Key: NIFI-5049
 URL: https://issues.apache.org/jira/browse/NIFI-5049
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 0.6.0
Reporter: Gardella Juan Pablo
Assignee: Matt Burgess
 Fix For: 1.2.0


Due to default handling of Oracle columns returned as java.sql.Date types, the 
string literals used to compare against the column values must be in the same 
format as the NLS_DATE_FORMAT setting of the database (often -MM-DD).

I believe when "Oracle" is provided as the database type (formerly known as 
pre-processing strategy), Oracle's Datetime Functions (such as TO_DATE or 
TO_TIMESTAMP) could be leveraged to give more fine-grained maximum-value 
information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3093) HIVE Support for ExecuteSQL/QueryDatabaseTable/GenerateTableFetch

2018-03-02 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384308#comment-16384308
 ] 

Gardella Juan Pablo commented on NIFI-3093:
---

[~mattyb149]/[~markap14] I've created a PR at 
[https://github.com/apache/nifi/pull/2507] and tested it against Hive; it 
worked fine. Could you review it, please?

> HIVE Support for ExecuteSQL/QueryDatabaseTable/GenerateTableFetch
> -
>
> Key: NIFI-3093
> URL: https://issues.apache.org/jira/browse/NIFI-3093
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Major
>
> Update Query Database Table so that it can pull data from HIVE tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-03-02 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384282#comment-16384282
 ] 

Gardella Juan Pablo commented on NIFI-4893:
---

[~markap14] let me know if you see any problem.

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: issue1.zip
>
>
> Given an Avro Schema that has a default array defined, it is not possible to 
> be converted to a Nifi Record Schema.
> To reproduce the bug, try to convert the following Avro schema to Record 
> Schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> Using org.apache.nifi.avro.AvroTypeUtil class. Attached a maven project to 
> reproduce the issue and also the fix.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  
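
A minimal sketch of the conversion step this ticket describes (assuming the 
NiFi Avro utilities on an affected version; the class name below is 
illustrative): parse the schema shown above and hand it to 
{{AvroTypeUtil.createSchema}}, which is where the conversion to a Record schema 
fails.
{code}
import org.apache.avro.Schema;
import org.apache.nifi.avro.AvroTypeUtil;
import org.apache.nifi.serialization.record.RecordSchema;

public class AvroDefaultArrayRepro {

    public static void main(final String[] args) {
        // Same schema as in the ticket: an array field declared with a default.
        final String avroSchemaJson = "{"
                + "\"type\":\"record\",\"name\":\"Foo1\",\"namespace\":\"foo.namespace\","
                + "\"fields\":[{\"name\":\"listOfInt\","
                + "\"type\":{\"type\":\"array\",\"items\":\"int\"},"
                + "\"doc\":\"array of ints\",\"default\":0}]}";
        final Schema avroSchema = new Schema.Parser().parse(avroSchemaJson);
        // On affected versions this conversion is where the failure occurs.
        final RecordSchema recordSchema = AvroTypeUtil.createSchema(avroSchema);
        System.out.println(recordSchema.getFields());
    }
}
{code}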



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-03-01 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382441#comment-16382441
 ] 

Gardella Juan Pablo edited comment on NIFI-4893 at 3/1/18 7:38 PM:
---

-A test failed :(, I will check it tomorrow.- Ignore this. The tests pass.


was (Author: gardellajuanpablo):
A test failed :(, I will check it tomorrow.

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: issue1.zip
>
>
> Given an Avro Schema that has a default array defined, it is not possible to 
> be converted to a Nifi Record Schema.
> To reproduce the bug, try to convert the following Avro schema to Record 
> Schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> Using org.apache.nifi.avro.AvroTypeUtil class. Attached a maven project to 
> reproduce the issue and also the fix.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-03-01 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382441#comment-16382441
 ] 

Gardella Juan Pablo edited comment on NIFI-4893 at 3/1/18 7:38 PM:
---

-A test failed :(, I will check it tomorrow.- Ignore this. The tests passed.


was (Author: gardellajuanpablo):
-A test failed :(, I will check it tomorrow.- Ignore this. The tests pass.

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: issue1.zip
>
>
> Given an Avro Schema that has a default array defined, it is not possible to 
> be converted to a Nifi Record Schema.
> To reproduce the bug, try to convert the following Avro schema to Record 
> Schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> Using org.apache.nifi.avro.AvroTypeUtil class. Attached a maven project to 
> reproduce the issue and also the fix.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-03-01 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382441#comment-16382441
 ] 

Gardella Juan Pablo commented on NIFI-4893:
---

A test failed :(, I will check it tomorrow.

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: issue1.zip
>
>
> Given an Avro Schema that has a default array defined, it is not possible to 
> be converted to a Nifi Record Schema.
> To reproduce the bug, try to convert the following Avro schema to Record 
> Schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> Using org.apache.nifi.avro.AvroTypeUtil class. Attached a maven project to 
> reproduce the issue and also the fix.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-03-01 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382433#comment-16382433
 ] 

Gardella Juan Pablo commented on NIFI-4893:
---

[~markap14] applied the suggested fix. Ready to be reviewed. Thanks for the 
suggestion and your time.

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: issue1.zip
>
>
> Given an Avro Schema that has a default array defined, it is not possible to 
> be converted to a Nifi Record Schema.
> To reproduce the bug, try to convert the following Avro schema to Record 
> Schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> Using org.apache.nifi.avro.AvroTypeUtil class. Attached a maven project to 
> reproduce the issue and also the fix.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-2575) HiveQL Processors Fail due to invalid JDBC URI resolution when using Zookeeper URI

2018-03-01 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382049#comment-16382049
 ] 

Gardella Juan Pablo commented on NIFI-2575:
---

[~markap14], to solve this issue it's required to update the Hive driver. The 
approach used with Kafka is to use different processors per version. Do you 
agree to create another NAR for Hive 2.0.0?

Thanks,
Juan 

> HiveQL Processors Fail due to invalid JDBC URI resolution when using 
> Zookeeper URI
> --
>
> Key: NIFI-2575
> URL: https://issues.apache.org/jira/browse/NIFI-2575
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Yolanda M. Davis
>Priority: Major
>
> When configuring a HiveQL processor using the Zookeeper URL (e.g. 
> jdbc:hive2://ydavis-hdp-nifi-test-3.openstacklocal:2181,ydavis-hdp-nifi-test-1.openstacklocal:2181,ydavis-hdp-nifi-test-2.openstacklocal:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2),
>  it appears that the JDBC driver does not properly build the the uri in the 
> expected format.  This is because HS2 is storing JDBC parameters in ZK 
> (https://issues.apache.org/jira/browse/HIVE-11581) and it is expecting the 
> driver to be able to parse and use those values to configure the connection. 
> However it appears the driver is expecting zookeeper to simply return the 
> host:port and subsequently building an invalid URI.
> This problem has result in two variation of errors. The following was 
> experienced by [~mattyb149]
> {noformat}
> 2016-08-15 12:45:12,918 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.Utils Resolved authority: 
> hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com
> 2016-08-15 12:45:13,835 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.HiveConnection Will try to open client transport with 
> JDBC Uri: 
> jdbc:hive2://hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com/default;principal=hive/_h...@hdf.com;serviceDiscoveryMode=zookeeper;zooKeeperNamespace=hiveserver2
> 2016-08-15 12:45:13,835 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.HiveConnection Could not open client transport with JDBC 
> Uri: 
> jdbc:hive2://hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com/default;principal=hive/_h...@hdf.com;serviceDiscoveryMode=zookeeper;zooKeeperNamespace=hiveserver2
> 2016-08-15 12:45:13,836 INFO [Timer-Driven Process Thread-2] 
> o.a.c.f.imps.CuratorFrameworkImpl Starting
> 2016-08-15 12:45:14,064 INFO [Timer-Driven Process Thread-2-EventThread] 
> o.a.c.f.state.ConnectionStateManager State change: CONNECTED
> 2016-08-15 12:45:14,182 INFO [Curator-Framework-0] 
> o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
> 2016-08-15 12:45:14,337 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.hive.SelectHiveQL 
> SelectHiveQL[id=7aaffd71-0156-1000-d962-8102c06b23df] 
> SelectHiveQL[id=7aaffd71-0156-1000-d962-8102c06b23df] failed to process due 
> to java.lang.reflect.UndeclaredThrowableException; rolling back session: 
> java.lang.reflect.UndeclaredThrowableException
> 2016-08-15 12:45:14,346 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.hive.SelectHiveQL
> java.lang.reflect.UndeclaredThrowableException: null
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>  ~[na:na]
>   at 
> org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:255)
>  ~[na:na]
>   at sun.reflect.GeneratedMethodAccessor331.invoke(Unknown 
> Source) ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_65]
>   at java.lang.reflect.Method.invoke(Method.java:497) 
> ~[na:1.8.0_65]
>   at 
> org.apache.nifi.controller.service.StandardControllerServiceProvider$1.invoke(StandardControllerServiceProvider.java:174)
>  ~[nifi-framework-core-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
>   at com.sun.proxy.$Proxy81.getConnection(Unknown Source) ~[na:na]
>   at 

[jira] [Closed] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean

2018-02-28 Thread Gardella Juan Pablo (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo closed NIFI-4901.
-

> Json to Avro using Record framework does not support union types with boolean
> -
>
> Key: NIFI-4901
> URL: https://issues.apache.org/jira/browse/NIFI-4901
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: optiona-boolean.zip
>
>
> Given the following valid Avro Schema:
> {code}
> {
>"type":"record",
>"name":"foo",
>"fields":[
>   {
>  "name":"isSwap",
>  "type":[
> "boolean",
> "null"
>  ]
>   } 
>]
> }
> {code}
> And the following JSON:
> {code}
> {
>   "isSwap": {
> "boolean": true
>   }
> }
> {code}
> When it is trying to be converted to Avro using ConvertRecord fails with:
> {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed 
> a JSON object from input but failed to convert into a Record object with the 
> given schema}}
> Attached a repository to reproduce the issue and also included the fix:
> * Run {{mvn clean test}} to reproduce the issue.
> * Run {{mvn clean test -Ppatch}} to test the fix. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean

2018-02-28 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380617#comment-16380617
 ] 

Gardella Juan Pablo commented on NIFI-4901:
---

Yes, please close it. Thanks!

> Json to Avro using Record framework does not support union types with boolean
> -
>
> Key: NIFI-4901
> URL: https://issues.apache.org/jira/browse/NIFI-4901
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: optiona-boolean.zip
>
>
> Given the following valid Avro Schema:
> {code}
> {
>"type":"record",
>"name":"foo",
>"fields":[
>   {
>  "name":"isSwap",
>  "type":[
> "boolean",
> "null"
>  ]
>   } 
>]
> }
> {code}
> And the following JSON:
> {code}
> {
>   "isSwap": {
> "boolean": true
>   }
> }
> {code}
> When it is trying to be converted to Avro using ConvertRecord fails with:
> {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed 
> a JSON object from input but failed to convert into a Record object with the 
> given schema}}
> Attached a repository to reproduce the issue and also included the fix:
> * Run {{mvn clean test}} to reproduce the issue.
> * Run {{mvn clean test -Ppatch}} to test the fix. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean

2018-02-25 Thread Gardella Juan Pablo (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo resolved NIFI-4901.
---
Resolution: Invalid

[~markap14] you are right. My fault: I tried to use the JSON that is valid for 
the 
[JsonDecoder|https://avro.apache.org/docs/current/api/java/org/apache/avro/io/JsonDecoder.html]
 class as the input, and actually that is not required. Thanks for taking some 
time to answer the ticket.

> Json to Avro using Record framework does not support union types with boolean
> -
>
> Key: NIFI-4901
> URL: https://issues.apache.org/jira/browse/NIFI-4901
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: optiona-boolean.zip
>
>
> Given the following valid Avro Schema:
> {code}
> {
>"type":"record",
>"name":"foo",
>"fields":[
>   {
>  "name":"isSwap",
>  "type":[
> "boolean",
> "null"
>  ]
>   } 
>]
> }
> {code}
> And the following JSON:
> {code}
> {
>   "isSwap": {
> "boolean": true
>   }
> }
> {code}
> When it is trying to be converted to Avro using ConvertRecord fails with:
> {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed 
> a JSON object from input but failed to convert into a Record object with the 
> given schema}}
> Attached a repository to reproduce the issue and also included the fix:
> * Run {{mvn clean test}} to reproduce the issue.
> * Run {{mvn clean test -Ppatch}} to test the fix. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean

2018-02-21 Thread Gardella Juan Pablo (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-4901:
--
Description: 
Given the following valid Avro Schema:
{code}
{
   "type":"record",
   "name":"foo",
   "fields":[
  {
 "name":"isSwap",
 "type":[
"boolean",
"null"
 ]
  } 
   ]
}
{code}

And the following JSON:
{code}
{
  "isSwap": {
"boolean": true
  }
}
{code}
When it is converted to Avro using ConvertRecord, the conversion fails with:
{{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed a 
JSON object from input but failed to convert into a Record object with the 
given schema}}

Attached is a repository that reproduces the issue and also includes the fix:
* Run {{mvn clean test}} to reproduce the issue.
* Run {{mvn clean test -Ppatch}} to test the fix.   

  was:
Given the following valid Avro Schema:
{code}
{
   "type":"record",
   "name":"foo",
   "fields":[
  {
 "name":"isSwap",
 "type":[
"boolean",
"null"
 ]
  } 
   ]
}
{code}

And the following JSON:
{code}
{
  "isSwap": {
"boolean": true
  }
}
{code}
When it is trying to be converted to Avro using ConvertRecord fails with:
{{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed a 
JSON object from input but failed to convert into a Record object with the 
given schema}}

Attached a repository to reproduce the issue and also included the fix:
* Run mvn clean test to reproduce the issue.
* Run mvn clean test -Ppatch to test the fix.   


> Json to Avro using Record framework does not support union types with boolean
> -
>
> Key: NIFI-4901
> URL: https://issues.apache.org/jira/browse/NIFI-4901
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: optiona-boolean.zip
>
>
> Given the following valid Avro Schema:
> {code}
> {
>"type":"record",
>"name":"foo",
>"fields":[
>   {
>  "name":"isSwap",
>  "type":[
> "boolean",
> "null"
>  ]
>   } 
>]
> }
> {code}
> And the following JSON:
> {code}
> {
>   "isSwap": {
> "boolean": true
>   }
> }
> {code}
> When it is trying to be converted to Avro using ConvertRecord fails with:
> {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed 
> a JSON object from input but failed to convert into a Record object with the 
> given schema}}
> Attached a repository to reproduce the issue and also included the fix:
> * Run {{mvn clean test}} to reproduce the issue.
> * Run {{mvn clean test -Ppatch}} to test the fix. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean

2018-02-21 Thread Gardella Juan Pablo (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-4901:
--
Description: 
Given the following valid Avro Schema:
{code}
{
   "type":"record",
   "name":"foo",
   "fields":[
  {
 "name":"isSwap",
 "type":[
"boolean",
"null"
 ]
  } 
   ]
}
{code}

And the following JSON:
{code}
{
  "isSwap": {
"boolean": true
  }
}
{code}
When it is converted to Avro using ConvertRecord, the conversion fails with:
{{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed a 
JSON object from input but failed to convert into a Record object with the 
given schema}}

Attached is a repository that reproduces the issue and also includes the fix:
* Run mvn clean test to reproduce the issue.
* Run mvn clean test -Ppatch to test the fix.   

  was:
Given the following valid Avro Schema:

{
   "type":"record",
   "name":"foo",
   "fields":[
  {
 "name":"isSwap",
 "type":[
"boolean",
"null"
 ]
  } 
   ]
}

And the following JSON:
{
  "isSwap": {
"boolean": true
  }
}

When it is trying to be converted to Avro using ConvertRecord fails with:
{{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed a 
JSON object from input but failed to convert into a Record object with the 
given schema}}

Attached a repository to reproduce the issue and also included the fix:
* Run mvn clean test to reproduce the issue.
* Run mvn clean test -Ppatch to test the fix.   


> Json to Avro using Record framework does not support union types with boolean
> -
>
> Key: NIFI-4901
> URL: https://issues.apache.org/jira/browse/NIFI-4901
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: optiona-boolean.zip
>
>
> Given the following valid Avro Schema:
> {code}
> {
>"type":"record",
>"name":"foo",
>"fields":[
>   {
>  "name":"isSwap",
>  "type":[
> "boolean",
> "null"
>  ]
>   } 
>]
> }
> {code}
> And the following JSON:
> {code}
> {
>   "isSwap": {
> "boolean": true
>   }
> }
> {code}
> When it is trying to be converted to Avro using ConvertRecord fails with:
> {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed 
> a JSON object from input but failed to convert into a Record object with the 
> given schema}}
> Attached a repository to reproduce the issue and also included the fix:
> * Run mvn clean test to reproduce the issue.
> * Run mvn clean test -Ppatch to test the fix. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean

2018-02-21 Thread Gardella Juan Pablo (JIRA)
Gardella Juan Pablo created NIFI-4901:
-

 Summary: Json to Avro using Record framework does not support 
union types with boolean
 Key: NIFI-4901
 URL: https://issues.apache.org/jira/browse/NIFI-4901
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.5.0
 Environment: ALL
Reporter: Gardella Juan Pablo
 Attachments: optiona-boolean.zip

Given the following valid Avro Schema:

{
   "type":"record",
   "name":"foo",
   "fields":[
  {
 "name":"isSwap",
 "type":[
"boolean",
"null"
 ]
  } 
   ]
}

And the following JSON:
{
  "isSwap": {
"boolean": true
  }
}

When it is converted to Avro using ConvertRecord, the conversion fails with:
{{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed a 
JSON object from input but failed to convert into a Record object with the 
given schema}}

Attached is a repository that reproduces the issue and also includes the fix:
* Run mvn clean test to reproduce the issue.
* Run mvn clean test -Ppatch to test the fix.   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-02-20 Thread Gardella Juan Pablo (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-4893:
--
Summary: Cannot convert Avro schemas to Record schemas with default value 
in arrays  (was: Cannot convert Avro schemas to Record schemas with default 
arrays)

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: issue1.zip
>
>
> Given an Avro Schema that has a default array defined, it is not possible to 
> be converted to a Nifi Record Schema.
> To reproduce the bug, try to convert the following Avro schema to Record 
> Schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> Using org.apache.nifi.avro.AvroTypeUtil class. Attached a maven project to 
> reproduce the issue and also the fix.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default arrays

2018-02-19 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369431#comment-16369431
 ] 

Gardella Juan Pablo commented on NIFI-4893:
---

Pull request with the fix at: https://github.com/apache/nifi/pull/2480

> Cannot convert Avro schemas to Record schemas with default arrays
> -
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: issue1.zip
>
>
> Given an Avro Schema that has a default array defined, it is not possible to 
> be converted to a Nifi Record Schema.
> To reproduce the bug, try to convert the following Avro schema to Record 
> Schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> Using org.apache.nifi.avro.AvroTypeUtil class. Attached a maven project to 
> reproduce the issue and also the fix.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default arrays

2018-02-19 Thread Gardella Juan Pablo (JIRA)
Gardella Juan Pablo created NIFI-4893:
-

 Summary: Cannot convert Avro schemas to Record schemas with 
default arrays
 Key: NIFI-4893
 URL: https://issues.apache.org/jira/browse/NIFI-4893
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.5.0, 1.6.0
 Environment: ALL
Reporter: Gardella Juan Pablo
 Attachments: issue1.zip

Given an Avro schema that defines a default value for an array field, it cannot be 
converted to a Nifi Record Schema.

To reproduce the bug, try to convert the following Avro schema to Record Schema:

{code}
{
    "type": "record",
    "name": "Foo1",
    "namespace": "foo.namespace",
    "fields": [
        {
            "name": "listOfInt",
            "type": {
                "type": "array",
                "items": "int"
            },
            "doc": "array of ints",
            "default": 0
        }
    ]
}
{code}
 
The conversion uses the org.apache.nifi.avro.AvroTypeUtil class (a minimal sketch 
is shown after the steps below). Attached is a Maven project that reproduces the 
issue and also includes the fix.
* To reproduce the bug, run "mvn clean test"
* To test the fix, run "mvn clean test -Ppatch".
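
For orientation, a minimal sketch of the conversion the attached project exercises. 
This assumes NiFi 1.5.0's nifi-avro-record-utils and Avro 1.8.x are on the classpath; 
the class and variable names are illustrative only, not the code from the attachment:
{code}
import org.apache.avro.Schema;
import org.apache.nifi.avro.AvroTypeUtil;
import org.apache.nifi.serialization.record.RecordSchema;

public class DefaultArrayReproducer {
    public static void main(final String[] args) {
        // Same schema as above: an array field that declares a "default" value.
        final String avroSchemaJson = "{"
            + "\"type\":\"record\",\"name\":\"Foo1\",\"namespace\":\"foo.namespace\","
            + "\"fields\":[{\"name\":\"listOfInt\","
            + "\"type\":{\"type\":\"array\",\"items\":\"int\"},"
            + "\"doc\":\"array of ints\",\"default\":0}]}";
        final Schema avroSchema = new Schema.Parser().parse(avroSchemaJson);

        // On NiFi 1.5.0 this conversion fails for the schema above;
        // with the attached patch it is expected to succeed.
        final RecordSchema recordSchema = AvroTypeUtil.createSchema(avroSchema);
        System.out.println(recordSchema.getFieldNames());
    }
}
{code}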



 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-2575) HiveQL Processors Fail due to invalid JDBC URI resolution when using Zookeeper URI

2018-02-09 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356883#comment-16356883
 ] 

Gardella Juan Pablo edited comment on NIFI-2575 at 2/9/18 12:06 PM:


I had the same issue. It happens in Nifi 1.3.0 and also in Nifi 1.5.0. The problem 
is the driver; I updated it to 2.1.0 and it works. Another simple workaround is 
using Hortonworks' 
[nifi-hive-nar|http://nexus-private.hortonworks.com/nexus/content/groups/public/org/apache/nifi/nifi-hive-nar/1.5.0.3.2.0.0-4/];
 it seems Hortonworks is using a patched version.

I believe the root cause is https://issues.apache.org/jira/browse/HIVE-11875, 
so solving the issue requires upgrading the Hive driver to a version that includes 
that fix.

Notice that this affects any processor that uses HiveConnectionPool.

 


was (Author: gardellajuanpablo):
I had the same issue. It happens in version Nifi 1.3.0 and also in Nifi 1.5.0. 
The problem is the driver, I've updated to 2.1.0 and it works. Another simple 
workaround is using  Hortonworks' 
[nifi-hive-nar|http://nexus-private.hortonworks.com/nexus/content/groups/public/org/apache/nifi/nifi-hive-nar/1.5.0.3.2.0.0-4/],
 it seems Hortonworks is using a patched version.

The root cause I believe is https://issues.apache.org/jira/browse/HIVE-11875, 
so to solve the issue it is required to upgrade Hive driver which include that 
fix.

> HiveQL Processors Fail due to invalid JDBC URI resolution when using 
> Zookeeper URI
> --
>
> Key: NIFI-2575
> URL: https://issues.apache.org/jira/browse/NIFI-2575
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Yolanda M. Davis
>Priority: Major
>
> When configuring a HiveQL processor using the Zookeeper URL (e.g. 
> jdbc:hive2://ydavis-hdp-nifi-test-3.openstacklocal:2181,ydavis-hdp-nifi-test-1.openstacklocal:2181,ydavis-hdp-nifi-test-2.openstacklocal:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2),
>  it appears that the JDBC driver does not properly build the URI in the 
> expected format.  This is because HS2 is storing JDBC parameters in ZK 
> (https://issues.apache.org/jira/browse/HIVE-11581) and it is expecting the 
> driver to be able to parse and use those values to configure the connection. 
> However it appears the driver is expecting zookeeper to simply return the 
> host:port and subsequently building an invalid URI.
> This problem has resulted in two variations of errors. The following was 
> experienced by [~mattyb149]
> {noformat}
> 2016-08-15 12:45:12,918 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.Utils Resolved authority: 
> hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com
> 2016-08-15 12:45:13,835 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.HiveConnection Will try to open client transport with 
> JDBC Uri: 
> jdbc:hive2://hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com/default;principal=hive/_h...@hdf.com;serviceDiscoveryMode=zookeeper;zooKeeperNamespace=hiveserver2
> 2016-08-15 12:45:13,835 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.HiveConnection Could not open client transport with JDBC 
> Uri: 
> jdbc:hive2://hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com/default;principal=hive/_h...@hdf.com;serviceDiscoveryMode=zookeeper;zooKeeperNamespace=hiveserver2
> 2016-08-15 12:45:13,836 INFO [Timer-Driven Process Thread-2] 
> o.a.c.f.imps.CuratorFrameworkImpl Starting
> 2016-08-15 12:45:14,064 INFO [Timer-Driven Process Thread-2-EventThread] 
> o.a.c.f.state.ConnectionStateManager State change: CONNECTED
> 2016-08-15 12:45:14,182 INFO [Curator-Framework-0] 
> o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
> 2016-08-15 12:45:14,337 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.hive.SelectHiveQL 
> SelectHiveQL[id=7aaffd71-0156-1000-d962-8102c06b23df] 
> SelectHiveQL[id=7aaffd71-0156-1000-d962-8102c06b23df] failed to process due 
> to java.lang.reflect.UndeclaredThrowableException; rolling back session: 
> 

[jira] [Comment Edited] (NIFI-2575) HiveQL Processors Fail due to invalid JDBC URI resolution when using Zookeeper URI

2018-02-08 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356883#comment-16356883
 ] 

Gardella Juan Pablo edited comment on NIFI-2575 at 2/8/18 12:28 PM:


I had the same issue. It happens in Nifi 1.3.0 and also in Nifi 1.5.0. The problem 
is the driver; I updated it to 2.1.0 and it works. Another simple workaround is 
using Hortonworks' 
[nifi-hive-nar|http://nexus-private.hortonworks.com/nexus/content/groups/public/org/apache/nifi/nifi-hive-nar/1.5.0.3.2.0.0-4/];
 it seems Hortonworks is using a patched version.

I believe the root cause is https://issues.apache.org/jira/browse/HIVE-11875, 
so solving the issue requires upgrading the Hive driver to a version that includes 
that fix.


was (Author: gardellajuanpablo):
I had the same issue. It happens in version Nifi 1.3.0 and also in Nifi 1.5.0. 
The problem is the driver, I've updated to 2.1.0 and it works. Another simple 
workaround is using  [Hortonworks' 
nifi-hive-nar|[http://nexus-private.hortonworks.com/nexus/content/groups/public/org/apache/nifi/nifi-hive-nar/1.5.0.3.2.0.0-4/]],
 it seems Hortonworks is using a patched version.

The root cause I believe is https://issues.apache.org/jira/browse/HIVE-11875, 
so to solve the issue it is required to upgrade Hive driver which include that 
fix.

> HiveQL Processors Fail due to invalid JDBC URI resolution when using 
> Zookeeper URI
> --
>
> Key: NIFI-2575
> URL: https://issues.apache.org/jira/browse/NIFI-2575
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Yolanda M. Davis
>Priority: Major
>
> When configuring a HiveQL processor using the Zookeeper URL (e.g. 
> jdbc:hive2://ydavis-hdp-nifi-test-3.openstacklocal:2181,ydavis-hdp-nifi-test-1.openstacklocal:2181,ydavis-hdp-nifi-test-2.openstacklocal:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2),
>  it appears that the JDBC driver does not properly build the URI in the 
> expected format.  This is because HS2 is storing JDBC parameters in ZK 
> (https://issues.apache.org/jira/browse/HIVE-11581) and it is expecting the 
> driver to be able to parse and use those values to configure the connection. 
> However it appears the driver is expecting zookeeper to simply return the 
> host:port and subsequently building an invalid URI.
> This problem has resulted in two variations of errors. The following was 
> experienced by [~mattyb149]
> {noformat}
> 2016-08-15 12:45:12,918 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.Utils Resolved authority: 
> hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com
> 2016-08-15 12:45:13,835 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.HiveConnection Will try to open client transport with 
> JDBC Uri: 
> jdbc:hive2://hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com/default;principal=hive/_h...@hdf.com;serviceDiscoveryMode=zookeeper;zooKeeperNamespace=hiveserver2
> 2016-08-15 12:45:13,835 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.HiveConnection Could not open client transport with JDBC 
> Uri: 
> jdbc:hive2://hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com/default;principal=hive/_h...@hdf.com;serviceDiscoveryMode=zookeeper;zooKeeperNamespace=hiveserver2
> 2016-08-15 12:45:13,836 INFO [Timer-Driven Process Thread-2] 
> o.a.c.f.imps.CuratorFrameworkImpl Starting
> 2016-08-15 12:45:14,064 INFO [Timer-Driven Process Thread-2-EventThread] 
> o.a.c.f.state.ConnectionStateManager State change: CONNECTED
> 2016-08-15 12:45:14,182 INFO [Curator-Framework-0] 
> o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
> 2016-08-15 12:45:14,337 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.hive.SelectHiveQL 
> SelectHiveQL[id=7aaffd71-0156-1000-d962-8102c06b23df] 
> SelectHiveQL[id=7aaffd71-0156-1000-d962-8102c06b23df] failed to process due 
> to java.lang.reflect.UndeclaredThrowableException; rolling back session: 
> java.lang.reflect.UndeclaredThrowableException
> 2016-08-15 12:45:14,346 ERROR [Timer-Driven Process 

[jira] [Commented] (NIFI-2575) HiveQL Processors Fail due to invalid JDBC URI resolution when using Zookeeper URI

2018-02-08 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356883#comment-16356883
 ] 

Gardella Juan Pablo commented on NIFI-2575:
---

I had the same issue. It happens in Nifi 1.3.0 and also in Nifi 1.5.0. The problem 
is the driver; I updated it to 2.1.0 and it works. Another simple workaround is 
using Hortonworks' 
[nifi-hive-nar|http://nexus-private.hortonworks.com/nexus/content/groups/public/org/apache/nifi/nifi-hive-nar/1.5.0.3.2.0.0-4/];
 it seems Hortonworks is using a patched version.

I believe the root cause is https://issues.apache.org/jira/browse/HIVE-11875, 
so solving the issue requires upgrading the Hive driver to a version that includes 
that fix.

> HiveQL Processors Fail due to invalid JDBC URI resolution when using 
> Zookeeper URI
> --
>
> Key: NIFI-2575
> URL: https://issues.apache.org/jira/browse/NIFI-2575
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Yolanda M. Davis
>Priority: Major
>
> When configuring a HiveQL processor using the Zookeeper URL (e.g. 
> jdbc:hive2://ydavis-hdp-nifi-test-3.openstacklocal:2181,ydavis-hdp-nifi-test-1.openstacklocal:2181,ydavis-hdp-nifi-test-2.openstacklocal:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2),
>  it appears that the JDBC driver does not properly build the URI in the 
> expected format.  This is because HS2 is storing JDBC parameters in ZK 
> (https://issues.apache.org/jira/browse/HIVE-11581) and it is expecting the 
> driver to be able to parse and use those values to configure the connection. 
> However it appears the driver is expecting zookeeper to simply return the 
> host:port and subsequently building an invalid URI.
> This problem has resulted in two variations of errors. The following was 
> experienced by [~mattyb149]
> {noformat}
> 2016-08-15 12:45:12,918 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.Utils Resolved authority: 
> hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com
> 2016-08-15 12:45:13,835 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.HiveConnection Will try to open client transport with 
> JDBC Uri: 
> jdbc:hive2://hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com/default;principal=hive/_h...@hdf.com;serviceDiscoveryMode=zookeeper;zooKeeperNamespace=hiveserver2
> 2016-08-15 12:45:13,835 INFO [Timer-Driven Process Thread-2] 
> org.apache.hive.jdbc.HiveConnection Could not open client transport with JDBC 
> Uri: 
> jdbc:hive2://hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=hdp-cluster-2-2.novalocal;hive.server2.thrift.port=1;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=hive/_h...@hdf.com/default;principal=hive/_h...@hdf.com;serviceDiscoveryMode=zookeeper;zooKeeperNamespace=hiveserver2
> 2016-08-15 12:45:13,836 INFO [Timer-Driven Process Thread-2] 
> o.a.c.f.imps.CuratorFrameworkImpl Starting
> 2016-08-15 12:45:14,064 INFO [Timer-Driven Process Thread-2-EventThread] 
> o.a.c.f.state.ConnectionStateManager State change: CONNECTED
> 2016-08-15 12:45:14,182 INFO [Curator-Framework-0] 
> o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
> 2016-08-15 12:45:14,337 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.hive.SelectHiveQL 
> SelectHiveQL[id=7aaffd71-0156-1000-d962-8102c06b23df] 
> SelectHiveQL[id=7aaffd71-0156-1000-d962-8102c06b23df] failed to process due 
> to java.lang.reflect.UndeclaredThrowableException; rolling back session: 
> java.lang.reflect.UndeclaredThrowableException
> 2016-08-15 12:45:14,346 ERROR [Timer-Driven Process Thread-2] 
> o.a.nifi.processors.hive.SelectHiveQL
> java.lang.reflect.UndeclaredThrowableException: null
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>  ~[na:na]
>   at 
> org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:255)
>  ~[na:na]
>   at sun.reflect.GeneratedMethodAccessor331.invoke(Unknown 
> Source) ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_65]
>   at 

[jira] [Commented] (NIFI-4008) ConsumeKafkaRecord_0_10 assumes there is always one Record in a message

2017-10-03 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189754#comment-16189754
 ] 

Gardella Juan Pablo commented on NIFI-4008:
---

It breaks a scenario of NIFI-4330; it does not handle null values properly.

> ConsumeKafkaRecord_0_10 assumes there is always one Record in a message
> ---
>
> Key: NIFI-4008
> URL: https://issues.apache.org/jira/browse/NIFI-4008
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.2.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> ConsumeKafkaRecord_0_10 uses ConsumerLease underneath, and it [assumes there 
> is one Record available in a consumed 
> message|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-0-10-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/ConsumerLease.java#L434]
>  retrieved from a Kafka topic.
> But in fact, a message can contain 0 or more records in it. For example, with 
> a record schema shown below:
> {code}
> {
>   "type": "record",
>   "name": "temp",
>   "fields" : [
> {"name": "value", "type": "string"}
>   ]
> }
> {code}
> Multiple records can be sent within a single message, e.g. using JSON:
> {code}
> [{"value": "a"}, {"value": "b"}, {"value": "c"}]
> {code}
> But ConsumeKafkaRecord only outputs the first record:
> {code}
> [{"value": "a"}]
> {code}
> Also, if a message doesn't contain any record in it, the processor fails with 
> NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4330) ConsumeKafka* throw NullPointerException if Kafka message has a null value

2017-09-29 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185865#comment-16185865
 ] 

Gardella Juan Pablo commented on NIFI-4330:
---

Done at: https://github.com/apache/nifi/pull/2185

It is my first PR, maybe something is missing.

> ConsumeKafka* throw NullPointerException if Kafka message has a null value
> --
>
> Key: NIFI-4330
> URL: https://issues.apache.org/jira/browse/NIFI-4330
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (NIFI-4330) ConsumeKafka* throw NullPointerException if Kafka message has a null value

2017-09-28 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184702#comment-16184702
 ] 

Gardella Juan Pablo edited comment on NIFI-4330 at 9/28/17 7:26 PM:


I found it in v1.3.0, so no, it is not fixed in that ticket. I've fixed it locally 
(in the ConsumerLease.java file):

!screenshot-1.png!

I didn't have enough time to push the changes. I also saw that a consumer for v0.11 
was added, and I suppose it has the same issue. A rough sketch of the change follows.
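
For reference, a rough, hypothetical sketch of the kind of null guard described above. 
The actual ConsumerLease code and surrounding method differ, and the class and method 
names below are illustrative only:
{code}
import org.apache.kafka.clients.consumer.ConsumerRecord;

final class NullValueGuardSketch {
    // A Kafka message may legally carry a null value (for example a tombstone).
    // Passing it straight to a record reader causes the NullPointerException
    // reported here, so such messages should be skipped or routed separately.
    static boolean shouldSkip(final ConsumerRecord<byte[], byte[]> consumerRecord) {
        return consumerRecord.value() == null;
    }
}
{code}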


was (Author: gardellajuanpablo):
I found it in v1.3.0, so no. It is not fixed in that ticket. I've fixed locally:

!screenshot-1.png!

I didn't have enough time to push the changes. Also I saw Consumer for v0.11 
was added and I suppose it has the same issue.

> ConsumeKafka* throw NullPointerException if Kafka message has a null value
> --
>
> Key: NIFI-4330
> URL: https://issues.apache.org/jira/browse/NIFI-4330
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4330) ConsumeKafka* throw NullPointerException if Kafka message has a null value

2017-09-28 Thread Gardella Juan Pablo (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-4330:
--
Attachment: screenshot-1.png

> ConsumeKafka* throw NullPointerException if Kafka message has a null value
> --
>
> Key: NIFI-4330
> URL: https://issues.apache.org/jira/browse/NIFI-4330
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4330) ConsumeKafka* throw NullPointerException if Kafka message has a null value

2017-09-28 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184702#comment-16184702
 ] 

Gardella Juan Pablo commented on NIFI-4330:
---

I found it in v1.3.0, so no, it is not fixed in that ticket. I've fixed it locally:

!screenshot-1.png!

I didn't have enough time to push the changes. I also saw that a consumer for v0.11 
was added, and I suppose it has the same issue.

> ConsumeKafka* throw NullPointerException if Kafka message has a null value
> --
>
> Key: NIFI-4330
> URL: https://issues.apache.org/jira/browse/NIFI-4330
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

