[jira] [Updated] (NIFI-5585) Prepare Nodes to be Decommissioned from Cluster

2018-09-25 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5585:
--
Summary: Prepare Nodes to be Decommissioned from Cluster  (was: Decommision 
Nodes from Cluster)

> Prepare Nodes to be Decommissioned from Cluster
> ---
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing the FlowFiles 
> on that node to the other active nodes. This work depends on NIFI-5516.
> Similar to the client sending a DISCONNECTING message via a PUT request to 
> cluster/nodes/\{id}, an OFFLOADING message can be sent as a PUT request to 
> the same URI to initiate an OFFLOAD of a DISCONNECTED node. The OFFLOADING 
> request will be idempotent.
> Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state. 
>  After the node completes offloading, it will transition to the OFFLOADED 
> state.
> OFFLOADED nodes can be reconnected to the cluster by issuing a connection 
> request via the UI/CLI, or restarting NiFi on the node.
> The steps to decommission a node and remove it from the cluster are:
>  # Send a request to disconnect the node.
>  # Once the disconnect completes, send a request to offload the node.
>  # Once the offload completes, send a request to delete the node.
>  # Once the delete request has finished, the NiFi service on the host can be 
> stopped/removed.
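The steps above can be sketched as an ordered series of REST calls against the cluster-node URI. This is a minimal sketch only: the base URL is hypothetical, the request-body shape is illustrative rather than NiFi's exact DTO, and the helper merely constructs the calls instead of sending them.

```python
import json

# Hypothetical NiFi host; adjust for a real cluster.
BASE = "https://nifi.example.com:8443/nifi-api/controller/cluster/nodes"

def decommission_requests(node_id):
    """Build the ordered (method, url, body) calls that decommission a node:
    disconnect, then offload, then delete."""
    url = f"{BASE}/{node_id}"

    def body(status):
        # Illustrative payload shape, not the exact NodeDTO.
        return json.dumps({"node": {"nodeId": node_id, "status": status}})

    return [
        ("PUT", url, body("DISCONNECTING")),  # 1. disconnect the node
        ("PUT", url, body("OFFLOADING")),     # 2. offload once DISCONNECTED
        ("DELETE", url, None),                # 3. delete once OFFLOADED
    ]
```

Each call would be issued only after the previous state transition completes (DISCONNECTED, then OFFLOADED), matching the ordering described above.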
> When an error occurs and the node cannot complete offloading, the user can:
>  # Send a request to delete the node from the cluster
>  # Diagnose why the node had issues with the offload (out of memory, no 
> network connection, etc.) and address the issue
>  # Restart NiFi on the node so that it reconnects to the cluster
>  # Go through the decommissioning steps again
> Toolkit CLI commands for retrieving a list of nodes and 
> connecting/disconnecting/offloading/deleting nodes have been added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5636) Use LDAP objectGUID/entryUUID for Binding Users/Groups

2018-09-25 Thread Jim Williams (JIRA)
Jim Williams created NIFI-5636:
--

 Summary: Use LDAP objectGUID/entryUUID for Binding Users/Groups
 Key: NIFI-5636
 URL: https://issues.apache.org/jira/browse/NIFI-5636
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Security
Affects Versions: 1.7.1
 Environment: N/A
Reporter: Jim Williams


With respect to the ‘Identity Strategy’, there is room for improvement in NiFi.

When the strategy “USE_DN” is chosen, things work fine until the directory 
structure is changed and users get new DNs. At that point the mappings are 
broken.

When the strategy “USE_USERNAME” is chosen, the issue with changing DNs is 
avoided, but two issues are introduced:
– The directory must be guaranteed to be free of duplicate usernames, or one 
mapping may refer to more than one user.
– If a user is deleted and another person is added with the same username, the 
new user may unintentionally be granted the old user's access.

It might be worthwhile to introduce a third strategy (let’s call it 
“USE_UUID”). In addition to the “USE_UUID” setting itself, this strategy 
should provide a setting naming an LDAP user attribute that is unique, 
immutable, and never re-used. For instance, Active Directory has the 
‘objectGUID’ attribute and OpenLDAP the ‘entryUUID’ attribute.

Microsoft has some information about the objectGUID attribute here:
[https://docs.microsoft.com/en-us/windows/desktop/ad/using-objectguid-to-bind-to-an-object]
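One practical wrinkle with such a strategy: Active Directory's objectGUID is a 16-byte binary value in a mixed-endian layout (what Python's uuid module calls bytes_le), while OpenLDAP's entryUUID is already a canonical string. A small sketch of normalizing both to one stable string key; the helper name and the sample GUID are made up for illustration.

```python
import uuid

def ldap_uuid_to_key(value) -> str:
    """Normalize an LDAP unique-id attribute to a canonical UUID string.

    bytes -> treated as AD objectGUID (mixed-endian "bytes_le" layout)
    str   -> treated as OpenLDAP entryUUID (validated and lower-cased)
    """
    if isinstance(value, bytes):
        return str(uuid.UUID(bytes_le=value))
    return str(uuid.UUID(value))

# Round trip with a made-up GUID: the canonical form is stable, which is
# what makes it usable as an immutable mapping key.
guid = uuid.UUID("01234567-89ab-cdef-0123-456789abcdef")
assert ldap_uuid_to_key(guid.bytes_le) == str(guid)
assert ldap_uuid_to_key(str(guid).upper()) == str(guid)
```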





[jira] [Commented] (MINIFICPP-618) Add C2 triggers for local updates

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628082#comment-16628082
 ] 

ASF GitHub Bot commented on MINIFICPP-618:
--

Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/405
  
Committing test soon -- added PR to have someone evaluate this early


> Add C2 triggers for local updates
> -
>
> Key: MINIFICPP-618
> URL: https://issues.apache.org/jira/browse/MINIFICPP-618
> Project: NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>






[jira] [Commented] (MINIFICPP-618) Add C2 triggers for local updates

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628081#comment-16628081
 ] 

ASF GitHub Bot commented on MINIFICPP-618:
--

GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/405

MINIFICPP-618: Add C2 triggers, first of which monitors a local file for changes


Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-618

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/405.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #405


commit 44f8fceaa0dbf6d8961ad08af05ae0efb52f9af0
Author: Marc Parisi 
Date:   2018-09-25T21:45:07Z

MINIFICPP-618: Add C2 triggers, first of which monitors a local file for 
changes




> Add C2 triggers for local updates
> -
>
> Key: MINIFICPP-618
> URL: https://issues.apache.org/jira/browse/MINIFICPP-618
> Project: NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>








[jira] [Created] (MINIFICPP-618) Add C2 triggers for local updates

2018-09-25 Thread Mr TheSegfault (JIRA)
Mr TheSegfault created MINIFICPP-618:


 Summary: Add C2 triggers for local updates
 Key: MINIFICPP-618
 URL: https://issues.apache.org/jira/browse/MINIFICPP-618
 Project: NiFi MiNiFi C++
  Issue Type: New Feature
Reporter: Mr TheSegfault
Assignee: Mr TheSegfault








[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628064#comment-16628064
 ] 

ASF GitHub Bot commented on NIFI-5612:
--

Github user colindean commented on the issue:

https://github.com/apache/nifi/pull/3032
  
The test failure seems to be a timeout from a test unrelated to what I 
changed but still within the nifi-standard-processors maven component:

```
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
10.089 s <<< FAILURE! - in 
org.apache.nifi.processors.standard.TestHandleHttpRequest
[ERROR] 
testRequestAddedToService(org.apache.nifi.processors.standard.TestHandleHttpRequest)
  Time elapsed: 10.01 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 1 
milliseconds
at 
org.apache.nifi.processors.standard.TestHandleHttpRequest.testRequestAddedToService(TestHandleHttpRequest.java:138)
```

All other tests passed. Is this an intermittent failure?


> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code:java}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
> ["null","int"]: 0
>   at 
> org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
>   at 
> org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
>   ... 15 common frames omitted
> {code}
> I don't know if I can share the database schema – still 
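For context on the failure above: Avro resolves the union by the runtime Java class, so a java.lang.Long is rejected by ["null","int"] even when its value is 0. Drivers return a Long for INT UNSIGNED columns because the column's range exceeds what Avro's signed 32-bit "int" can hold. A small illustration of that range mismatch, in plain Python with no Avro library:

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1   # range of Avro "int"
UINT32_MAX = 2**32 - 1                     # max of a MySQL INT UNSIGNED column

def fits_avro_int(value: int) -> bool:
    """Could this value be stored in an Avro union ["null","int"]?"""
    return INT32_MIN <= value <= INT32_MAX

# The unsigned column's upper half lies outside the signed 32-bit range,
# so the driver conservatively boxes every value as a 64-bit Long.
assert fits_avro_int(INT32_MAX)
assert not fits_avro_int(UINT32_MAX)
```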


[jira] [Commented] (NIFI-5519) Allow ListDatabaseTables to accept incoming connections

2018-09-25 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628061#comment-16628061
 ] 

Colin Dean commented on NIFI-5519:
--

I was eventually able to get this to work. The linked gist has my latest 
revision.

> Allow ListDatabaseTables to accept incoming connections
> ---
>
> Key: NIFI-5519
> URL: https://issues.apache.org/jira/browse/NIFI-5519
> Project: Apache NiFi
>  Issue Type: Wish
>Reporter: Matt Burgess
>Priority: Major
>
> As of [NIFI-5229|https://issues.apache.org/jira/browse/NIFI-5229], 
> DBCPConnectionPoolLookup allows the dynamic selection of a DBCPConnectionPool 
> by name. This allows processors that perform the same work on multiple 
> databases to do so, with individual upstream flow files supplying the 
> database.name attribute.
> However, ListDatabaseTables does not accept incoming connections, so you 
> currently need one DBCPConnectionPool per database plus one ListDatabaseTables 
> per database, each using a corresponding DBCPConnectionPool. It would be nice 
> if ListDatabaseTables could accept incoming connection(s), if only to provide 
> attributes for selecting the DBCPConnectionPool.
> I propose the behavior be like other processors that can generate data with 
> or without an incoming connection (such as GenerateTableFetch; see 
> [NIFI-2881|https://issues.apache.org/jira/browse/NIFI-2881] for details). In 
> general that means that with an incoming non-loop connection, the processor 
> becomes more "event-driven" in the sense that it will not execute unless 
> there is a FlowFile on which to work. With no incoming connection, it would 
> run as it always has, on its Run Schedule and with State Management, so as 
> not to re-list the same tables every time it executes.
> However, with an incoming connection and an available FlowFile, the behavior 
> could be that all tables for that database are listed, meaning processor 
> state would be neither updated nor queried, making the processor fully 
> "event-driven". If the tables for a database are not to be re-listed, the 
> onus would be on the upstream flow not to send a flow file for that database. 
> This is not a requirement, just a suggestion; it could be made more flexible 
> by honoring processor state when the Refresh Interval is non-zero, but I 
> think that adds too much complexity for the user, for little payoff.
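The proposed dual behavior can be sketched as a small dispatch. The names and structure below are illustrative only, not NiFi's actual processor API:

```python
def on_trigger(has_incoming_connection, flowfile, state, list_tables):
    """Sketch of the proposed ListDatabaseTables trigger behavior.

    - With an incoming connection: run only when a FlowFile is available,
      and list every table (processor state is neither read nor updated).
    - Without one: run on the schedule and use state to avoid re-listing.
    """
    if has_incoming_connection:
        if flowfile is None:
            return []                      # event-driven: nothing to do
        return list_tables(seen=None)      # full listing, state ignored
    tables = list_tables(seen=state)       # scheduled: skip already-listed
    state.update(tables)
    return tables
```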





[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627988#comment-16627988
 ] 

ASF GitHub Bot commented on NIFI-5612:
--

GitHub user colindean opened a pull request:

https://github.com/apache/nifi/pull/3032

NIFI-5612: Support JDBC drivers that return Long for unsigned ints

Refactors tests in order to share code repeated in tests and to enable some 
parameterized testing.

MySQL Connector/J 5.1.x in conjunction with MySQL 5.0.x returns a Long from 
ResultSet#getObject when the SQL type is an unsigned integer. This change 
prevents that error from occurring and also implements a more informative 
exception that reports the failing object's POJO type in addition to its 
string value.

---

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [ ] ~~If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? ~~
- [ ] ~~If applicable, have you updated the LICENSE file, including the 
main LICENSE file under nifi-assembly?~~
- [ ] ~~If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?~~
- [ ] ~~If adding new Properties, have you added .displayName in addition 
to .name (programmatic access) for each of the new properties?~~

### For documentation related changes:
- [ ] ~~Have you ensured that format looks appropriate for the output in 
which it is rendered?~~

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/colindean/nifi 
colindean/nifi-5612-unresolvedunionint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3032.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3032


commit a017ff49aadde6dbddc0b89a0f1a47a9803a
Author: Colin Dean 
Date:   2018-09-20T00:27:47Z

NIFI-5612: Support JDBC drivers that return Long for unsigned ints

Refactors tests in order to share code repeated in tests and to enable
some parameterized testing.

MySQL Connector/J 5.1.x in conjunction with MySQL 5.0.x will return
a Long for ResultSet#getObject when the SQL type is an unsigned integer.
This change prevents that error from occurring while implementing a more
informational exception describing what the failing object's POJO type
is in addition to its string value.




> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code:java}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at 


[jira] [Commented] (NIFI-2624) JDBC-to-Avro processors handle BigDecimals as Strings

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-2624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627967#comment-16627967
 ] 

ASF GitHub Bot commented on NIFI-2624:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1142


> JDBC-to-Avro processors handle BigDecimals as Strings
> -
>
> Key: NIFI-2624
> URL: https://issues.apache.org/jira/browse/NIFI-2624
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Koji Kawamura
>Priority: Major
> Fix For: 1.3.0
>
>
> The original SQL processors implemented BigDecimal values as Strings for 
> Avro, as the version of Avro it used (1.7.6) did not support DECIMAL type.
> As of Avro 1.7.7 (AVRO-1402), this type is supported and so the SQL -/HiveQL- 
> processors should be updated to handle BigDecimals correctly if possible.
> UPDATED: This JIRA improved only ExecuteSQL and QueryDatabaseTable 
> processors. SelectHiveQL is removed from target. Hive tables can be queried 
> by ExecuteSQL/QueryDatabaseTable once NIFI-3093 is resolved (and logical 
> types will also be supported with those processors).
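For reference on what the decimal logical type entails: Avro stores a decimal's unscaled integer as big-endian two's-complement bytes, with precision and scale carried in the schema. A library-free sketch of that byte encoding (the helper name is mine, not from the NiFi code):

```python
from decimal import Decimal

def decimal_to_avro_bytes(value: Decimal, scale: int) -> bytes:
    """Encode a Decimal as Avro decimal-logical-type bytes.

    The unscaled integer (the value shifted left by `scale` digits) is
    written as big-endian two's complement, per the Avro specification.
    """
    unscaled = int(value.scaleb(scale))
    length = max(1, (unscaled.bit_length() + 8) // 8)  # room for a sign bit
    return unscaled.to_bytes(length, "big", signed=True)

# 12.34 with scale 2 has unscaled value 1234 -> two bytes 0x04 0xD2.
assert decimal_to_avro_bytes(Decimal("12.34"), 2) == b"\x04\xd2"
```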






[jira] [Commented] (NIFI-2624) JDBC-to-Avro processors handle BigDecimals as Strings

2018-09-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-2624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627962#comment-16627962
 ] 

ASF subversion and git services commented on NIFI-2624:
---

Commit 4d6de9663087ebb30041341c8d55de01c4af3875 in nifi's branch 
refs/remotes/github/pr/1142 from [~Toivo Adams]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=4d6de96 ]

NIFI-2624 JdbcCommon treats BigDecimals now as Avro Logical type using bytes to 
hold data (not String as it was before).


> JDBC-to-Avro processors handle BigDecimals as Strings
> -
>
> Key: NIFI-2624
> URL: https://issues.apache.org/jira/browse/NIFI-2624
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Koji Kawamura
>Priority: Major
> Fix For: 1.3.0
>
>
> The original SQL processors implemented BigDecimal values as Strings for 
> Avro, as the version of Avro it used (1.7.6) did not support DECIMAL type.
> As of Avro 1.7.7 (AVRO-1402), this type is supported and so the SQL -/HiveQL- 
> processors should be updated to handle BigDecimals correctly if possible.
> UPDATED: This JIRA improved only ExecuteSQL and QueryDatabaseTable 
> processors. SelectHiveQL is removed from target. Hive tables can be queried 
> by ExecuteSQL/QueryDatabaseTable once NIFI-3093 is resolved (and logical 
> types will also be supported with those processors).





[jira] [Commented] (NIFI-2767) Periodically reload properties from file-based variable registry properties files

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-2767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627938#comment-16627938
 ] 

ASF GitHub Bot commented on NIFI-2767:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1015
  
Hey @jfrazee - I think we can close this PR given the new feature to define 
variables at the Process Group level through the UI.


> Periodically reload properties from file-based variable registry properties 
> files
> -
>
> Key: NIFI-2767
> URL: https://issues.apache.org/jira/browse/NIFI-2767
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joey Frazee
>Priority: Minor
>
> Currently FileBasedVariableRegistry only loads properties when it is injected 
> into the flow controller, so making updates to the properties requires a 
> restart.
> Management of data flows would be much easier if these changes could be 
> picked up without doing a (rolling) restart. It'd be helpful from an 
> administrative standpoint if the FileBasedVariableRegistry reloaded 
> properties from the properties files periodically.






[jira] [Commented] (NIFI-4517) Allow SQL results to be output as records in any supported format

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627935#comment-16627935
 ] 

ASF GitHub Bot commented on NIFI-4517:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2945
  
I've been trying the two processors in my usual test workflows, replacing 
ExecuteSQL with the record version (and similarly with QueryDatabaseTable), 
and it works fine. Once Mike's comments are addressed and the PR rebased, I 
can do a final review before merging.


> Allow SQL results to be output as records in any supported format
> -
>
> Key: NIFI-4517
> URL: https://issues.apache.org/jira/browse/NIFI-4517
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> ExecuteSQL and QueryDatabaseTable currently only output Avro, and the schema 
> is only available embedded within the flow file, not as an attribute that 
> record-aware processors can handle.
> ExecuteSQL and QueryDatabaseTable processors should be updated with a 
> RecordSetWriter implementation. This will allow output using any writer 
> format (Avro, JSON, CSV, free-form text, etc.), as well as all the other 
> features therein (such as writing the schema to an attribute), and will avoid 
> the need for a ConvertAvroToXYZ or ConvertRecord processor downstream.
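
The pluggable-writer idea can be sketched as follows. RecordWriter and CsvWriter 
here are hypothetical stand-ins for NiFi's real RecordSetWriter controller 
service; the point is only that the processor hands result rows to an interface 
instead of hard-coding Avro serialization.

```java
import java.util.List;
import java.util.Map;

// Stand-in for NiFi's RecordSetWriter: any format can implement this.
interface RecordWriter {
    String write(List<Map<String, Object>> rows);
}

// One illustrative implementation; a JSON or Avro writer would plug in the
// same way, which is what lets ExecuteSQLRecord emit any supported format.
class CsvWriter implements RecordWriter {
    public String write(List<Map<String, Object>> rows) {
        if (rows.isEmpty()) {
            return "";
        }
        StringBuilder sb = new StringBuilder();
        // header row from the first record's column names
        sb.append(String.join(",", rows.get(0).keySet())).append("\n");
        for (Map<String, Object> row : rows) {
            sb.append(String.join(",",
                    row.values().stream().map(String::valueOf).toList())).append("\n");
        }
        return sb.toString();
    }
}
```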



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-3344) Enable JoltTransformJSON processor to have option to pretty print

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627926#comment-16627926
 ] 

ASF GitHub Bot commented on NIFI-3344:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2987


> Enable JoltTransformJSON processor to have option to pretty print
> -
>
> Key: NIFI-3344
> URL: https://issues.apache.org/jira/browse/NIFI-3344
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.1.1
>Reporter: Elli Schwarz
>Priority: Major
> Fix For: 1.8.0
>
>
> It would be nice if a NiFi property could be set on the JoltTransformJSON 
> processor to enable/disable pretty printing of the JSON output. (For 
> performance reasons, I assume that some users might want it off.)
> Currently, the code uses the Jolt library's method JsonUtils.toJsonString(), 
> but there's also a toPrettyJsonString() method that can be used.
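
The toggle merged for this ticket can be illustrated with the sketch below. The 
two lambdas passed in are stand-ins for Jolt's real JsonUtils.toJsonString() and 
JsonUtils.toPrettyJsonString() calls; the class itself is hypothetical and only 
shows a boolean property selecting between the two serializers.

```java
import java.util.function.Function;

// Sketch of a property-driven pretty-print switch: the processor evaluates a
// boolean "Pretty Print" property once, then routes output through the
// matching serializer (compact by default).
public class PrettyPrintToggle {
    private final Function<Object, String> compact;
    private final Function<Object, String> pretty;

    public PrettyPrintToggle(Function<Object, String> compact,
                             Function<Object, String> pretty) {
        this.compact = compact;
        this.pretty = pretty;
    }

    // prettyPrint would come from the processor's configured property value
    public String write(Object transformed, boolean prettyPrint) {
        return (prettyPrint ? pretty : compact).apply(transformed);
    }
}
```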



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-3344) Enable JoltTransformJSON processor to have option to pretty print

2018-09-25 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-3344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-3344:
-
Component/s: Extensions

> Enable JoltTransformJSON processor to have option to pretty print
> -
>
> Key: NIFI-3344
> URL: https://issues.apache.org/jira/browse/NIFI-3344
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.1.1
>Reporter: Elli Schwarz
>Priority: Major
> Fix For: 1.8.0
>
>
> It would be nice if a NiFi property could be set on the JoltTransformJSON 
> processor to enable/disable pretty printing of the JSON output. (For 
> performance reasons, I assume that some users might want it off.)
> Currently, the code uses the Jolt library's method JsonUtils.toJsonString(), 
> but there's also a toPrettyJsonString() method that can be used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-3344) Enable JoltTransformJSON processor to have option to pretty print

2018-09-25 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-3344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-3344.
--
   Resolution: Fixed
Fix Version/s: 1.8.0

> Enable JoltTransformJSON processor to have option to pretty print
> -
>
> Key: NIFI-3344
> URL: https://issues.apache.org/jira/browse/NIFI-3344
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.1.1
>Reporter: Elli Schwarz
>Priority: Major
> Fix For: 1.8.0
>
>
> It would be nice if a NiFi property could be set on the JoltTransformJSON 
> processor to enable/disable pretty printing of the JSON output. (For 
> performance reasons, I assume that some users might want it off.)
> Currently, the code uses the Jolt library's method JsonUtils.toJsonString(), 
> but there's also a toPrettyJsonString() method that can be used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-3344) Enable JoltTransformJSON processor to have option to pretty print

2018-09-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627925#comment-16627925
 ] 

ASF subversion and git services commented on NIFI-3344:
---

Commit db645ec475416b77018fcae0d25e5b5dfd801b15 in nifi's branch 
refs/heads/master from Nick Lewis
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=db645ec ]

NIFI-3344 Added property to JoltTransformJSON allowing the user to specify 
pretty print, defaults to false

Signed-off-by: Pierre Villard 

This closes #2987.


> Enable JoltTransformJSON processor to have option to pretty print
> -
>
> Key: NIFI-3344
> URL: https://issues.apache.org/jira/browse/NIFI-3344
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.1.1
>Reporter: Elli Schwarz
>Priority: Major
>
> It would be nice if a NiFi property could be set on the JoltTransformJSON 
> processor to enable/disable pretty printing of the JSON output. (For 
> performance reasons, I assume that some users might want it off.)
> Currently, the code uses the Jolt library's method JsonUtils.toJsonString(), 
> but there's also a toPrettyJsonString() method that can be used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3344) Enable JoltTransformJSON processor to have option to pretty print

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627922#comment-16627922
 ] 

ASF GitHub Bot commented on NIFI-3344:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2987
  
Code LGTM, merging to master, thanks @nalewis 


> Enable JoltTransformJSON processor to have option to pretty print
> -
>
> Key: NIFI-3344
> URL: https://issues.apache.org/jira/browse/NIFI-3344
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.1.1
>Reporter: Elli Schwarz
>Priority: Major
>
> It would be nice if a NiFi property could be set on the JoltTransformJSON 
> processor to enable/disable pretty printing of the JSON output. (For 
> performance reasons, I assume that some users might want it off.)
> Currently, the code uses the Jolt library's method JsonUtils.toJsonString(), 
> but there's also a toPrettyJsonString() method that can be used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Updated] (NIFI-5635) Description PutEmail - multiple senders/recipients

2018-09-25 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5635:
-
Status: Patch Available  (was: Open)

> Description PutEmail - multiple senders/recipients
> --
>
> Key: NIFI-5635
> URL: https://issues.apache.org/jira/browse/NIFI-5635
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
>
> Improve the description of properties in the PutEmail processor: multiple 
> senders/recipients can be set when configuring the processor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5635) Description PutEmail - multiple senders/recipients

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627915#comment-16627915
 ] 

ASF GitHub Bot commented on NIFI-5635:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/3031

NIFI-5635 - Description PutEmail properties with multiple senders/rec…

…ipients

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-5635

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3031.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3031


commit 1c76f7826d60cb3dc644e9863c671aad1e1f76a9
Author: Pierre Villard 
Date:   2018-09-25T20:53:28Z

NIFI-5635 - Description PutEmail properties with multiple senders/recipients




> Description PutEmail - multiple senders/recipients
> --
>
> Key: NIFI-5635
> URL: https://issues.apache.org/jira/browse/NIFI-5635
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
>
> Improve the description of properties in the PutEmail processor: multiple 
> senders/recipients can be set when configuring the processor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Created] (NIFI-5635) Description PutEmail - multiple senders/recipients

2018-09-25 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-5635:


 Summary: Description PutEmail - multiple senders/recipients
 Key: NIFI-5635
 URL: https://issues.apache.org/jira/browse/NIFI-5635
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Reporter: Pierre Villard
Assignee: Pierre Villard


Improve the description of properties in the PutEmail processor: multiple 
senders/recipients can be set when configuring the processor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3028: Nifi 4806

2018-09-25 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/3028
  
and i'll address the checkstyle finding

[WARNING] 
src/test/java/org/apache/nifi/atlas/emulator/AtlasAPIV2ServerEmulator.java:[175,21]
 (blocks) LeftCurly: '{' at column 21 should be on the previous line.
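
For reference, checkstyle's LeftCurly rule wants the opening brace to end the 
preceding statement's line rather than start its own. A minimal conforming 
example (the class and method here are illustrative, not the flagged test code):

```java
// LeftCurly-conforming brace placement: '{' stays on the same line as the
// if statement, which is what the flagged line in AtlasAPIV2ServerEmulator
// needs to be changed to.
public class LeftCurlyExample {
    static int value(boolean ready) {
        if (ready) {
            return 1;
        }
        return 0;
    }
}
```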



---


[GitHub] nifi issue #3028: Nifi 4806

2018-09-25 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/3028
  
new commons lang came out.  grabbing that.  will squash too to make 
reviewing easier from here


---


[jira] [Updated] (NIFI-5619) Node can join the cluster with a processor running when cluster state for that processor is stopped

2018-09-25 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5619:
--
Description: 
When the cluster state for a processor is stopped, it is possible for a node to 
join the cluster with that particular processor running.  The UI will show the 
processor as stopped, but going to the cluster summary and viewing the 
node-specific status of that processor will show it's running on the node that 
joined.

To reproduce this:
# Build the Node Decommissioning [PR 
3010|https://github.com/apache/nifi/pull/3010]
# Create a two-node cluster, does not need to be secure
# Create a flow with a GenerateFlowFile processor connected to an 
UpdateAttribute processor
# Start the GenerateFlowFile processor to let flowfiles queue in the connection 
to UpdateAttribute
# Using node 1's UI, disconnect node 2 from the cluster
# Using node 1's UI, offload node 2
# Using node 1's UI, delete node 2 from the cluster
# Using node 1's UI, stop GenerateFlowFile
# Restart node 2
# Once node 2 is reconnected, in the UI the GenerateFlowFile processor will 
look like it is stopped
# Go to the Cluster Summary UI and click on the View Processor Details icon 
which will show the processor running on node 2



  was:
When the cluster state for a processor is stopped, it is possible for a node to 
join the cluster with that particular processor running.  The UI will show the 
processor as stopped, but going to the cluster summary and viewing the 
node-specific status of that processor will show it's running on the node that 
joined.

To reproduce this:
# Build the Node Decommissioning [PR 
3010|https://github.com/apache/nifi/pull/3010]
# Create a two-node cluster, does not need to be secure
# Create a flow with a GenerateFlowFile processor connected to an 
UpdateAttribute processor
# Start the GenerateFlowFile processor to let flowfiles queue in the connection 
to UpdateAttribute
# Using node 1's UI, disconnect node 2 from the cluster
# Using node 1's UI, offload node 2
# Using node 1's UI, delete node 2 from the cluster
# Restart node 2
# Once node 2 is reconnected, in the UI the GenerateFlowFile processor will 
look like it is stopped
# Go to the Cluster Summary UI and click on the View Processor Details icon 
which will show the processor running on node 2




> Node can join the cluster with a processor running when cluster state for 
> that processor is stopped
> ---
>
> Key: NIFI-5619
> URL: https://issues.apache.org/jira/browse/NIFI-5619
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Jeff Storck
>Priority: Major
> Attachments: Screen Shot 2018-09-20 at 4.53.44 PM.png, Screen Shot 
> 2018-09-20 at 4.54.07 PM.png, Screen Shot 2018-09-20 at 4.54.15 PM.png
>
>
> When the cluster state for a processor is stopped, it is possible for a node 
> to join the cluster with that particular processor running.  The UI will show 
> the processor as stopped, but going to the cluster summary and viewing the 
> node-specific status of that processor will show it's running on the node 
> that joined.
> To reproduce this:
> # Build the Node Decommissioning [PR 
> 3010|https://github.com/apache/nifi/pull/3010]
> # Create a two-node cluster, does not need to be secure
> # Create a flow with a GenerateFlowFile processor connected to an 
> UpdateAttribute processor
> # Start the GenerateFlowFile processor to let flowfiles queue in the 
> connection to UpdateAttribute
> # Using node 1's UI, disconnect node 2 from the cluster
> # Using node 1's UI, offload node 2
> # Using node 1's UI, delete node 2 from the cluster
> # Using node 1's UI, stop GenerateFlowFile
> # Restart node 2
> # Once node 2 is reconnected, in the UI the GenerateFlowFile processor will 
> look like it is stopped
> # Go to the Cluster Summary UI and click on the View Processor Details icon 
> which will show the processor running on node 2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5619) Node can join the cluster with a processor running when cluster state for that processor is stopped

2018-09-25 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5619:
--
Description: 
When the cluster state for a processor is stopped, it is possible for a node to 
join the cluster with that particular processor running.  The UI will show the 
processor as stopped, but going to the cluster summary and viewing the 
node-specific status of that processor will show it's running on the node that 
joined.

To reproduce this:
# Build the Node Decommissioning [PR 
3010|https://github.com/apache/nifi/pull/3010]
# Create a two-node cluster, does not need to be secure
# Create a flow with a GenerateFlowFile processor connected to an 
UpdateAttribute processor
# Start the GenerateFlowFile processor to let flowfiles queue in the connection 
to UpdateAttribute
# Using node 1's UI, disconnect node 2 from the cluster
# Using node 1's UI, offload node 2
# Using node 1's UI, delete node 2 from the cluster
# Restart node 2
# Once node 2 is reconnected, in the UI the GenerateFlowFile processor will 
look like it is stopped
# Go to the Cluster Summary UI and click on the View Processor Details icon 
which will show the processor running on node 2



  was:
When the cluster state for a processor is stopped, it is possible for a node to 
join the cluster with that particular processor running.  The UI will show the 
processor as stopped, but going to the cluster summary and viewing the 
node-specific status of that processor will show it's running on the node that 
joined.

To reproduce this:
# Build the Node Decommissioning [PR 
3010|https://github.com/apache/nifi/pull/3010]
# Create a two-node cluster, does not need to be secure
# Create a flow with a GenerateFlowFile processor connected to an 
UpdateAttribute processor
# Start the GenerateFlowFile processor to let flowfiles queue in the connection 
to UpdateAttribute
# Disconnect the second node from the cluster
# Decommission the second node from the cluster
# Restart the node that was decommissioned from the cluster
# Once the node is reconnected, in the UI the GenerateFlowFile processor will 
look like it is stopped
# Go to the Cluster Summary UI and click on the View Processor Details icon 
which will show the processor still running on the node that was reconnected




> Node can join the cluster with a processor running when cluster state for 
> that processor is stopped
> ---
>
> Key: NIFI-5619
> URL: https://issues.apache.org/jira/browse/NIFI-5619
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Jeff Storck
>Priority: Major
> Attachments: Screen Shot 2018-09-20 at 4.53.44 PM.png, Screen Shot 
> 2018-09-20 at 4.54.07 PM.png, Screen Shot 2018-09-20 at 4.54.15 PM.png
>
>
> When the cluster state for a processor is stopped, it is possible for a node 
> to join the cluster with that particular processor running.  The UI will show 
> the processor as stopped, but going to the cluster summary and viewing the 
> node-specific status of that processor will show it's running on the node 
> that joined.
> To reproduce this:
> # Build the Node Decommissioning [PR 
> 3010|https://github.com/apache/nifi/pull/3010]
> # Create a two-node cluster, does not need to be secure
> # Create a flow with a GenerateFlowFile processor connected to an 
> UpdateAttribute processor
> # Start the GenerateFlowFile processor to let flowfiles queue in the 
> connection to UpdateAttribute
> # Using node 1's UI, disconnect node 2 from the cluster
> # Using node 1's UI, offload node 2
> # Using node 1's UI, delete node 2 from the cluster
> # Restart node 2
> # Once node 2 is reconnected, in the UI the GenerateFlowFile processor will 
> look like it is stopped
> # Go to the Cluster Summary UI and click on the View Processor Details icon 
> which will show the processor running on node 2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5585) Decommission Nodes from Cluster

2018-09-25 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5585:
--
Description: 
Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the 
node to be decommissioned to the other active nodes.  This work depends on 
NIFI-5516.

Similar to the client sending a DISCONNECTING message as a PUT request to 
cluster/nodes/\{id}, an OFFLOADING message can be sent as a PUT request to the 
same URI to initiate an OFFLOAD for a DISCONNECTED node. The OFFLOADING request 
will be idempotent.

Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state.  
After the node completes offloading, it will transition to the OFFLOADED state.
OFFLOADED nodes can be reconnected to the cluster by issuing a connection 
request via the UI/CLI, or restarting NiFi on the node.

The steps to decommission a node and remove it from the cluster are:
 # Send request to disconnect the node
 # Once disconnect completes, send request to offload the node.
 # Once offload completes, send request to delete node.
 # Once the delete request has finished, the NiFi service on the host can be 
stopped/removed.

When an error occurs and the node can not complete offloading, the user can:
 # Send request to delete the node from the cluster
 # Diagnose why the node had issues with the offload (out of memory, no network 
connection, etc) and address the issue
 # Restart NiFi on the node so that it will reconnect to the cluster
 # Go through the steps to decommission a node

Toolkit CLI commands for retrieving a list of nodes and 
connecting/disconnecting/offloading/deleting nodes have been added.
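
The disconnect/offload/delete sequence above can be sketched as an ordered plan 
of REST calls. The cluster/nodes/\{id} URI and the DISCONNECTING/OFFLOADING 
states come from this ticket; the /nifi-api/controller prefix, the base URL, and 
the payload placement are assumptions for illustration, not NiFi's exact API.

```java
import java.util.List;

// Hypothetical plan of the decommission sequence: each entry is
// {HTTP method, URI, node state to request}. A real client would send these
// with an HTTP library and wait for each transition to complete before the
// next step (disconnect -> offload -> delete).
public class DecommissionSequence {
    public static List<String[]> plan(String baseUrl, String nodeId) {
        String uri = baseUrl + "/nifi-api/controller/cluster/nodes/" + nodeId;
        return List.of(
            new String[] {"PUT", uri, "DISCONNECTING"}, // 1. disconnect the node
            new String[] {"PUT", uri, "OFFLOADING"},    // 2. offload its flowfiles
            new String[] {"DELETE", uri, null}          // 3. remove it from the cluster
        );
    }
}
```

After the DELETE completes, the NiFi service on the host can be stopped or removed.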

  was:
Allow a node in the cluster to be decommissioned, rebalancing flowfiles on the 
node to be decommissioned to the other active nodes.  This work depends on 
NIFI-5516.

Similar to the client sending a DISCONNECTING message as a PUT request to 
cluster/nodes/\{id}, a DECOMMISSIONING message can be sent as a PUT request to 
the same URI to initiate a DECOMMISSION for a DISCONNECTED node. The 
DECOMMISSIONING request will be idempotent.

The steps to decommission a node and remove it from the cluster are:
 # Send request to disconnect the node
 # Once disconnect completes, send request to decommission the node.
 # Once decommission completes, send request to delete node.

When an error occurs and the node can not complete decommissioning, the user 
can:
 # Send request to delete the node from the cluster
 # Diagnose why the node had issues with the decommission (out of memory, no 
network connection, etc) and address the issue
 # Restart NiFi on the node so that it will reconnect to the cluster
 # Go through the steps to decommission and remove a node

Toolkit CLI commands for retrieving a list of nodes and 
disconnecting/decommissioning/deleting nodes have been added.


> Decommission Nodes from Cluster
> --
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on 
> the node to be decommissioned to the other active nodes.  This work depends 
> on NIFI-5516.
> Similar to the client sending a DISCONNECTING message as a PUT request to 
> cluster/nodes/\{id}, an OFFLOADING message can be sent as a PUT request to 
> the same URI to initiate an OFFLOAD for a DISCONNECTED node. The OFFLOADING 
> request will be idempotent.
> Only nodes that are DISCONNECTED can be transitioned to the OFFLOADING state. 
>  After the node completes offloading, it will transition to the OFFLOADED 
> state.
> OFFLOADED nodes can be reconnected to the cluster by issuing a connection 
> request via the UI/CLI, or restarting NiFi on the node.
> The steps to decommission a node and remove it from the cluster are:
>  # Send request to disconnect the node
>  # Once disconnect completes, send request to offload the node.
>  # Once offload completes, send request to delete node.
>  # Once the delete request has finished, the NiFi service on the host can be 
> stopped/removed.
> When an error occurs and the node can not complete offloading, the user can:
>  # Send request to delete the node from the cluster
>  # Diagnose why the node had issues with the offload (out of memory, no 
> network connection, etc) and address the issue
>  # Restart NiFi on the node so that it will reconnect to the cluster
>  # Go through the steps to decommission a node
> Toolkit CLI commands for retrieving a list of nodes and 
> connecting/disconnecting/offloading/deleting nodes have been added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-616) Update appveyor.yml to remove restricted building on a branch

2018-09-25 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated MINIFICPP-616:
--
   Resolution: Fixed
Fix Version/s: 0.6.0
   Status: Resolved  (was: Patch Available)

> Update appveyor.yml to remove restricted building on a branch
> -
>
> Key: MINIFICPP-616
> URL: https://issues.apache.org/jira/browse/MINIFICPP-616
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>Priority: Major
> Fix For: 0.6.0
>
>
> Appveyor is currently configured to build only from a certain named branch, a 
> restriction left over from when the associated functionality was introduced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5633) EL - allDelineatedValues can throw ArrayIndexOutOfBoundsException

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627736#comment-16627736
 ] 

ASF GitHub Bot commented on NIFI-5633:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3029


> EL - allDelineatedValues can throw ArrayIndexOutOfBoundsException
> -
>
> Key: NIFI-5633
> URL: https://issues.apache.org/jira/browse/NIFI-5633
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.8.0
>
>
> When running this unit test:
> {code:java}
> @Test
> public void testAllDelineatedValuesCount() {
> final Map<String, String> attributes = new HashMap<>();
> final String query = "${allDelineatedValues('${test}', '/'):count()}";
> attributes.put("test", "/");
> assertEquals(ResultType.WHOLE_NUMBER, Query.getResultType(query));
> assertEquals("0", Query.evaluateExpressions(query, attributes, null));
> }
> {code}
> This will throw:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 0
>   at 
> org.apache.nifi.attribute.expression.language.evaluation.selection.DelineatedAttributeEvaluator.evaluate(DelineatedAttributeEvaluator.java:65)
>   at 
> org.apache.nifi.attribute.expression.language.evaluation.reduce.CountEvaluator.evaluate(CountEvaluator.java:38)
>   at 
> org.apache.nifi.attribute.expression.language.evaluation.selection.MappingEvaluator.evaluate(MappingEvaluator.java:38)
>   at 
> org.apache.nifi.attribute.expression.language.Query.evaluate(Query.java:363)
>   at 
> org.apache.nifi.attribute.expression.language.Query.evaluateExpression(Query.java:204)
>   at 
> org.apache.nifi.attribute.expression.language.CompiledExpression.evaluate(CompiledExpression.java:58)
>   at 
> org.apache.nifi.attribute.expression.language.StandardPreparedQuery.evaluateExpressions(StandardPreparedQuery.java:51)
>   at 
> org.apache.nifi.attribute.expression.language.StandardPreparedQuery.evaluateExpressions(StandardPreparedQuery.java:64)
>   at 
> org.apache.nifi.attribute.expression.language.Query.evaluateExpressions(Query.java:223)
>   at 
> org.apache.nifi.attribute.expression.language.TestQuery.testAllDelineatedValuesCount(TestQuery.java:1033)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
> {noformat}
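A plausible root cause, sketched outside NiFi: Java's `String.split` drops trailing empty strings from its result, so splitting a string that consists only of the delimiter yields a zero-length array, and reading element 0 of that array throws exactly this `ArrayIndexOutOfBoundsException: 0`. A minimal standalone sketch (this is illustrative JDK behavior, not the NiFi evaluator code itself):

```java
public class SplitDemo {
    public static void main(String[] args) {
        // String.split removes trailing empty strings, so splitting the
        // delimiter itself produces an empty array.
        String[] parts = "/".split("/");
        System.out.println(parts.length); // prints 0

        // Reading parts[0] here would throw ArrayIndexOutOfBoundsException: 0,
        // mirroring the failure reported in DelineatedAttributeEvaluator.

        // A string with real tokens splits as expected.
        String[] nonEmpty = "a/b".split("/");
        System.out.println(nonEmpty.length); // prints 2
    }
}
```

Guarding the evaluator against an empty split result (e.g. treating it as a count of 0, as the test above expects) would be one way to address this; the merged PR presumably handles this case.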



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5633) EL - allDelineatedValues can throw ArrayIndexOutOfBoundsException

2018-09-25 Thread Mark Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-5633:
-
   Resolution: Fixed
Fix Version/s: 1.8.0
   Status: Resolved  (was: Patch Available)



[GitHub] nifi issue #3029: NIFI-5633 - allDelineatedValues can throw ArrayIndexOutOfB...

2018-09-25 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/3029
  
@pvillard31 thanks for the fix and for providing great test cases! +1 
merged to master.


---


[jira] [Commented] (NIFI-5633) EL - allDelineatedValues can throw ArrayIndexOutOfBoundsException

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627735#comment-16627735
 ] 

ASF GitHub Bot commented on NIFI-5633:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/3029
  
@pvillard31 thanks for the fix and for providing great test cases! +1 
merged to master.





[jira] [Commented] (NIFI-5633) EL - allDelineatedValues can throw ArrayIndexOutOfBoundsException

2018-09-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627734#comment-16627734
 ] 

ASF subversion and git services commented on NIFI-5633:
---

Commit 5d3558a79d83defb4dd7484da3303cb04013cc6c in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=5d3558a ]

NIFI-5633 - allDelineatedValues can throw ArrayIndexOutOfBoundsException

This closes #3029.




[jira] [Commented] (NIFI-5318) Implement NiFi test harness

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627597#comment-16627597
 ] 

ASF GitHub Bot commented on NIFI-5318:
--

Github user peter-gergely-horvath commented on the issue:

https://github.com/apache/nifi/pull/2872
  
That makes sense: I'll look into that... once I have a tiny bit of time...


> Implement NiFi test harness
> ---
>
> Key: NIFI-5318
> URL: https://issues.apache.org/jira/browse/NIFI-5318
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Peter Horvath
>Priority: Major
>
> Currently, it is not really possible to automatically test the behaviour of a 
> specific NiFi flow and make unit-test-type asserts that it works as expected. 
> For example, if the expected behaviour of a NiFi flow is that a file placed 
> in a specific directory will trigger some operation, after which some output 
> file will appear in another directory, one can currently do only one thing: 
> test the NiFi flow manually. 
> Manual testing is especially hard to manage if a NiFi flow is being actively 
> developed: any change to a complex, existing NiFi flow might require a lot of 
> manual testing just to ensure no regressions are introduced. 
> Some kind of Java API that allows managing a NiFi instance and manipulating 
> flow deployments, like for example [Codehaus 
> Cargo|https://codehaus-cargo.github.io/], would be of great help. 
>  
>  
>  
>  
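As a rough illustration of the kind of assert the ticket asks for, a harness-agnostic JUnit-style check might poll the output directory until the flow produces the expected file. This is a hypothetical sketch using only the JDK; the names (`FlowOutputPoller`, `waitForFile`) are illustrative and not part of any NiFi API:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class FlowOutputPoller {

    // Polls until the expected output file appears or the timeout elapses.
    static boolean waitForFile(Path dir, String name, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        Path expected = dir.resolve(name);
        while (System.currentTimeMillis() < deadline) {
            if (Files.exists(expected)) {
                return true;
            }
            Thread.sleep(50);
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        Path outputDir = Files.createTempDirectory("flow-out");
        // Stand-in for the file a running flow would eventually write.
        Files.createFile(outputDir.resolve("result.txt"));
        System.out.println(waitForFile(outputDir, "result.txt", 5_000)); // prints true
    }
}
```

A real harness would additionally start NiFi, deploy the flow, and drop the input file before making such an assertion.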







[jira] [Commented] (NIFI-5318) Implement NiFi test harness

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627590#comment-16627590
 ] 

ASF GitHub Bot commented on NIFI-5318:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2872
  
I think there should be a Maven module for the test harness which is 
disabled by default and can be activated with a flag like `mvn clean test 
-Ptest-harness`. 
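One way to realize that suggestion, sketched as a hypothetical `pom.xml` fragment: Surefire's `skipTests` parameter defaults to the `${skipTests}` property, so the module can set it to `true` by default and flip it back in a profile. The profile id and property layout here are assumptions, not the actual module configuration:

```xml
<!-- In the test-harness module's pom.xml: skip tests unless the profile is active. -->
<properties>
    <skipTests>true</skipTests>
</properties>

<profiles>
    <!-- Activated via: mvn clean test -Ptest-harness -->
    <profile>
        <id>test-harness</id>
        <properties>
            <skipTests>false</skipTests>
        </properties>
    </profile>
</profiles>
```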






[jira] [Commented] (NIFI-5318) Implement NiFi test harness

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627588#comment-16627588
 ] 

ASF GitHub Bot commented on NIFI-5318:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2872
  
@peter-gergely-horvath ok. I'm pretty sure that the surefire plugin can be 
disabled in the POM, but manually activated, so we'll need to look at that 
because those tests should be runnable if someone wants to modify the test 
harness and not roll their own test case.






[jira] [Commented] (NIFI-5318) Implement NiFi test harness

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627563#comment-16627563
 ] 

ASF GitHub Bot commented on NIFI-5318:
--

Github user peter-gergely-horvath commented on the issue:

https://github.com/apache/nifi/pull/2872
  
Hi @MikeThomsen I _intentionally_ have no test configuration in the project 
(at least for now): the tests are merely *samples* of what can be done, and they 
should not be executed as part of the core NiFi build. 
NiFi is a beast; starting and stopping it takes some time, and I do not want to 
add that to each NiFi build. 

Please create a new Maven quickstart project (one that has the testing 
configuration enabled) and add the following to the dependencies (replacing 
`${nifi version}` with the current version):

```
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-testharness</artifactId>
    <version>${nifi version}</version>
</dependency>
```
Once done, you can take the samples into your own project, where you can 
experiment with the test harness. 
I understand that piggybacking on the tests directory is maybe not the perfect 
place to deliver samples, but given the circumstances I think it is acceptable 
and could be improved in the future.  





[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-25 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627564#comment-16627564
 ] 

Colin Dean commented on NIFI-5612:
--

I did a quick test during an unexpected gap in my schedule and it works. I'll 
clean up my patch and add a test and submit the PR this afternoon.

> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code:java}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
> ["null","int"]: 0
>   at 
> org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
>   at 
> org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
>   ... 15 common frames omitted
> {code}
> I don't know if I can share the database schema – still working with my team 
> on that – but looking at it, I think it has something to do with the 
> signedness of int(1) or tinyint(1), because those two are the only numerical 
> types common to all of the tables.
>  
> *Edit 2018-09-24, so that my update doesn't get buried:*
> I am able to reproduce the exception using
>  * Vagrant 2.1.1
>  * Virtualbox 5.2.18 r124319
>  * Ubuntu 18.04
>  * MySQL 5.0.81 (as close as I can get to the 5.0.80 Enterprise Edition in 
> use on the system where I observed this failure first)
>  * MySQL Connector/J 5.1.46
>  * NiFi 1.7.1
> With this table definition and data:
> {code:sql}
> create table fails ( 
>   fails int(1) unsigned NOT NULL default '0' 
> ) ENGINE=InnoDB 
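The signedness hypothesis above is consistent with the value ranges involved: MySQL's `INT UNSIGNED` can hold values up to 4294967295, which exceeds Java's signed `int` range, so a JDBC driver may surface such a column as a `Long`, a type an Avro `["null","int"]` union cannot resolve. A standalone illustration of the range mismatch (the driver's type-mapping behavior is an assumption here, not demonstrated by this snippet):

```java
public class UnsignedIntDemo {
    public static void main(String[] args) {
        // Largest value a MySQL INT UNSIGNED column can hold.
        long maxUnsignedInt = 4294967295L;

        // It exceeds Java's signed int range, which is why a JDBC driver
        // may hand the column back as a Long rather than an Integer.
        System.out.println(maxUnsignedInt > Integer.MAX_VALUE); // prints true

        // The same 32 bits reinterpreted: all-ones as unsigned is 4294967295.
        System.out.println(Integer.toUnsignedLong(-1)); // prints 4294967295
    }
}
```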



[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-25 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627543#comment-16627543
 ] 

Colin Dean commented on NIFI-5612:
--

I'll give [~bende] 's code a try and if it works, I'll put up a PR so others 
can take a look and we can let CI do its thing. I expect to complete this in 
the next 5-6 hours – I've got a few things to do beforehand.

> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
>   ... 15 common frames omitted
> {code}
> I don't know if I can share the database schema – still working with my team 
> on that – but looking at it, I think it has something to do with the 
> signedness of int(1) or tinyint(1) because those two are the only numerical 
> types common to all of the tables.
>  
> *Edit 2018-09-24, so that my update doesn't get buried:*
> I am able to reproduce the exception using
>  * Vagrant 2.1.1
>  * Virtualbox 5.2.18 r124319
>  * Ubuntu 18.04
>  * MySQL 5.0.81 (as close as I can get to the 5.0.80 Enterprise Edition in 
> use on the system where I observed this failure first)
>  * MySQL Connector/J 5.1.46
>  * NiFi 1.7.1
> With this table definition and data:
> {code:sql}
> create table fails ( 
> 

[jira] [Commented] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627501#comment-16627501
 ] 

ASF GitHub Bot commented on NIFI-5557:
--

Github user ekovacs commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2971#discussion_r220236629
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java
 ---
@@ -389,16 +380,24 @@ public void process(InputStream in) throws 
IOException {
 session.transfer(putFlowFile, REL_SUCCESS);
 
 } catch (final Throwable t) {
-if (tempDotCopyFile != null) {
-try {
-hdfs.delete(tempDotCopyFile, false);
-} catch (Exception e) {
-getLogger().error("Unable to remove temporary 
file {} due to {}", new Object[]{tempDotCopyFile, e});
-}
+   Optional causeOptional = findCause(t, 
GSSException.class, gsse -> GSSException.NO_CRED == gsse.getMajor());
--- End diff --

yes. it makes sense.


> PutHDFS "GSSException: No valid credentials provided" when krb ticket expires
> -
>
> Key: NIFI-5557
> URL: https://issues.apache.org/jira/browse/NIFI-5557
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Endre Kovacs
>Assignee: Endre Kovacs
>Priority: Major
>
> when using the *PutHDFS* processor in a kerberized environment, with flow 
> "traffic" that approximately matches or is less frequent than the lifetime of 
> the principal's ticket, we see this in the log:
> {code:java}
> INFO [Timer-Driven Process Thread-4] o.a.h.io.retry.RetryInvocationHandler 
> Exception while invoking getFileInfo of class 
> ClientNamenodeProtocolTranslatorPB over host2/ip2:8020 after 13 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: Couldn't 
> setup connection for princi...@example.com to host2.example.com/ip2:8020; 
> Host Details : local host is: "host1.example.com/ip1"; destination host is: 
> "host2.example.com":8020; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
> at org.apache.hadoop.ipc.Client.call(Client.java:1479)
> at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy134.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> at sun.reflect.GeneratedMethodAccessor344.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy135.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
> at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:254)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
> at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:222)
> {code}
> and the flowfile is routed to failure relationship.
> *To reproduce:*
> Create a principal in your KDC with two minutes ticket lifetime,
> and set up a similar flow:
> {code:java}
> GetFile => putHDFS - success- -> logAttributes
> \
>  fail
>\
>  -> logAttributes
> {code}
>  copy a file to the input directory of the GetFile processor. If the influx 
> of flowfiles is much more frequent than the expiration time of the ticket:
> {code:java}
> watch -n 5 "cp book.txt /path/to/input"
> {code}
> then the flow will run successfully without issue.
> If we adjust this to:
> {code:java}
> watch -n 121 "cp book.txt 

[jira] [Commented] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627502#comment-16627502
 ] 

ASF GitHub Bot commented on NIFI-5557:
--

Github user ekovacs commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2971#discussion_r220236670
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java
 ---
@@ -389,16 +380,24 @@ public void process(InputStream in) throws 
IOException {
 session.transfer(putFlowFile, REL_SUCCESS);
 
 } catch (final Throwable t) {
-if (tempDotCopyFile != null) {
-try {
-hdfs.delete(tempDotCopyFile, false);
-} catch (Exception e) {
-getLogger().error("Unable to remove temporary 
file {} due to {}", new Object[]{tempDotCopyFile, e});
-}
+   Optional causeOptional = findCause(t, 
GSSException.class, gsse -> GSSException.NO_CRED == gsse.getMajor());
+if (causeOptional.isPresent()) {
+  getLogger().warn(String.format("An error occured 
while connecting to HDFS. "
--- End diff --

indeed.


> PutHDFS "GSSException: No valid credentials provided" when krb ticket expires
> -
>
> Key: NIFI-5557
> URL: https://issues.apache.org/jira/browse/NIFI-5557
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Endre Kovacs
>Assignee: Endre Kovacs
>Priority: Major
>
> when using the *PutHDFS* processor in a kerberized environment, with flow 
> "traffic" that approximately matches or is less frequent than the lifetime of 
> the principal's ticket, we see this in the log:
> {code:java}
> INFO [Timer-Driven Process Thread-4] o.a.h.io.retry.RetryInvocationHandler 
> Exception while invoking getFileInfo of class 
> ClientNamenodeProtocolTranslatorPB over host2/ip2:8020 after 13 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: Couldn't 
> setup connection for princi...@example.com to host2.example.com/ip2:8020; 
> Host Details : local host is: "host1.example.com/ip1"; destination host is: 
> "host2.example.com":8020; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
> at org.apache.hadoop.ipc.Client.call(Client.java:1479)
> at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy134.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> at sun.reflect.GeneratedMethodAccessor344.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy135.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
> at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:254)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
> at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:222)
> {code}
> and the flowfile is routed to failure relationship.
> *To reproduce:*
> Create a principal in your KDC with two minutes ticket lifetime,
> and set up a similar flow:
> {code:java}
> GetFile => putHDFS - success- -> logAttributes
> \
>  fail
>\
>  -> logAttributes
> {code}
>  copy a file to the input directory of the GetFile processor. If the influx 
> of flowfiles is much more frequent than the expiration time of the ticket:
> {code:java}
> watch -n 5 "cp book.txt 


[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-25 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627480#comment-16627480
 ] 

Colin Dean commented on NIFI-5612:
--

One thought I had is to compare with newer MySQL and maybe SQLite and Derby, 
too. 

Unfortunately, I'm stuck on older MySQL with the older driver for now, so it 
sucks to need a workaround for what might be a quirk of the driver's behavior. 
Worse, that driver behavior is unlikely ever to change.

> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code:java}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
> ["null","int"]: 0
>   at 
> org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
>   at 
> org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
>   ... 15 common frames omitted
> {code}
> I don't know if I can share the database schema – still working with my team 
> on that – but looking at it, I think it has something to do with the 
> signedness of int(1) or tinyint(1) because those two are the only numerical 
> types common to all of the tables.
>  
> *Edit 2018-09-24, so that my update doesn't get buried:*
> I am able to reproduce the exception using
>  * Vagrant 2.1.1
>  * Virtualbox 5.2.18 r124319
>  * Ubuntu 18.04
>  * MySQL 5.0.81 (as close as I can get to the 5.0.80 Enterprise Edition in 
> use on the system where I observed this failure first)
>  * MySQL Connector/J 5.1.46
>  * NiFi 

[jira] [Commented] (NIFI-5224) Add SolrClientService

2018-09-25 Thread Mike Thomsen (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627468#comment-16627468
 ] 

Mike Thomsen commented on NIFI-5224:


Taking over after several months of no activity.

> Add SolrClientService
> -
>
> Key: NIFI-5224
> URL: https://issues.apache.org/jira/browse/NIFI-5224
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Johannes Peter
>Assignee: Mike Thomsen
>Priority: Major
>
> The Solr CRUD functions that are currently included in SolrUtils should be 
> moved to a controller service. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-5224) Add SolrClientService

2018-09-25 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen reassigned NIFI-5224:
--

Assignee: Mike Thomsen  (was: Johannes Peter)

> Add SolrClientService
> -
>
> Key: NIFI-5224
> URL: https://issues.apache.org/jira/browse/NIFI-5224
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Johannes Peter
>Assignee: Mike Thomsen
>Priority: Major
>
> The Solr CRUD functions that are currently included in SolrUtils should be 
> moved to a controller service. 





[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-25 Thread Matt Burgess (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627462#comment-16627462
 ] 

Matt Burgess commented on NIFI-5612:


I haven't looked too closely at this, but if what Bryan says about handling 
schema types vs values is true, then it definitely needs to be made consistent. 
The whole point of figuring out the right schema is so we know how to correctly 
handle the actual value that gets stored in the row.

> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code:java}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.avro.UnresolvedUnionException: Not in union 
> ["null","int"]: 0
>   at 
> org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:709)
>   at 
> org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:192)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:110)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:153)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143)
>   at 
> org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73)
>   at 
> org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60)
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:302)
>   ... 15 common frames omitted
> {code}
> I don't know if I can share the database schema – still working with my team 
> on that – but looking at it, I think it has something to do with the 
> signedness of int(1) or tinyint(1) because those two are the only numerical 
> types common to all of the tables.
>  
> *Edit 2018-09-24, so that my update doesn't get buried:*
> I am able to reproduce the exception using
>  * Vagrant 2.1.1
>  * Virtualbox 5.2.18 r124319
>  * Ubuntu 18.04
>  * MySQL 5.0.81 (as close as I can get to the 5.0.80 Enterprise Edition in 
> use on the system where I observed this failure first)
>  * MySQL Connector/J 5.1.46
>  * NiFi 

[jira] [Commented] (NIFI-5606) UpdateRecord doesn't allow population of nested fields if input parent is null

2018-09-25 Thread Mark Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627443#comment-16627443
 ] 

Mark Payne commented on NIFI-5606:
--

I am not opposed to allowing the creation of that parent element, but it will 
likely be non-trivial.

I also think that UpdateRecord's usage will largely be replaced by the new 
JoltTransformRecord processor, which makes a lot of things much easier :)

> UpdateRecord doesn't allow population of nested fields if input parent is null
> --
>
> Key: NIFI-5606
> URL: https://issues.apache.org/jira/browse/NIFI-5606
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.7.1
>Reporter: Joseph Percivall
>Priority: Major
>
> To reproduce, open the TestUpdateRecord.java processor and change the dynamic 
> properties in testFieldValuesInEL[1] to the following:
> {noformat}
> runner.setProperty("/name/last", "NiFi");
> runner.setProperty("/name/first", "Apache");{noformat}
>  
> Also, change person.json[2] to have no name field:
>  
> {noformat}
> {
>"id": 485,
>"mother": {
>"last": "Doe",
>"first": "Jane"
>}
> }{noformat}
>  
> After running, the output is:
>  
> {noformat}
> [ { "id" : 485, "name" : null } ]{noformat}
>  
> Where the expected output would be:
> {noformat}
> {
>"id": 485,
>"name": {
>"last": "NiFi",
>"first": "Apache"
>}
> }
> {noformat}
>  
> [1][https://github.com/apache/nifi/blob/4c787799ff7d971eb924df1e496da8338e6ab192/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestUpdateRecord.java#L303]
> [2] 
> [https://github.com/apache/nifi/blob/9ebf2cfaf1fdb1a28427aed5a8168004071efd12/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/TestUpdateRecord/input/person.json#L3]
>  





[jira] [Commented] (NIFI-5612) org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0

2018-09-25 Thread Bryan Bende (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627434#comment-16627434
 ] 

Bryan Bende commented on NIFI-5612:
---

[~colindean] thanks for digging in here... From quickly looking at your 
findings, it looks like we conditionally decide to use int or long in the union 
based on the precision:
{code:java}
case INTEGER:
  if (meta.isSigned(i) || (meta.getPrecision(i) > 0 && meta.getPrecision(i) < 
MAX_DIGITS_IN_INT)) {

builder.name(columnName).type().unionOf().nullBuilder().endNull().and().intType().endUnion().noDefault();
  } else {

builder.name(columnName).type().unionOf().nullBuilder().endNull().and().longType().endUnion().noDefault();
  }
  break;{code}
So when creating the schema it must be going into the first if statement based 
on precision being > 0 and < 11, so the schema now has [null, int].

Then later the JDBC driver returns a Long value, which seems like it could be an 
issue with the driver, but since we are deciding between int and long when 
creating the schema, maybe we should be doing the same when handling the value?

It looks like a Long would fall into this block of code...
{code:java}
else if (value instanceof Number || value instanceof Boolean) {
  if (javaSqlType == BIGINT) {
int precision = meta.getPrecision(i);
if (precision < 0 || precision > MAX_DIGITS_IN_BIGINT) {
  rec.put(i - 1, value.toString());
} else {
  rec.put(i - 1, value);
}
 } else {
rec.put(i - 1, value);
 }
}{code}
[https://github.com/apache/nifi/blob/e959630c22c9a52ec717141f6cf9f018830a38bf/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/JdbcCommon.java#L442]

The value would be a Long, which extends Number, and since it's not BIGINT it 
would fall into the second else block, where it just writes the value to the record.

Maybe in that second else block we should be doing something similar to what 
was done when creating the schema, and do something like...
{code:java}
if ((value instanceof Long) && precision < MAX_DIGITS_IN_INT) {
  int intValue = ((Long)value).intValue();
  rec.put(i-1, intValue);
} else {
  rec.put(i-1, value);
}{code}
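To make the proposed fix concrete, here is a standalone sketch of the value-side narrowing described above. This is hypothetical, not the actual JdbcCommon patch; `MAX_DIGITS_IN_INT` is assumed here to be the digit count of `Integer.MAX_VALUE`, mirroring the schema-creation logic quoted earlier.

```java
// Hypothetical sketch: if the JDBC driver hands back a Long but the Avro
// schema was built with "int" (because the column's precision fits an int),
// narrow the value to Integer before writing it to the Avro record.
// MAX_DIGITS_IN_INT is an assumption mirroring the schema-creation branch.
public class NarrowLongSketch {
    static final int MAX_DIGITS_IN_INT = String.valueOf(Integer.MAX_VALUE).length();

    static Object narrowForAvro(Object value, int precision) {
        if (value instanceof Long && precision > 0 && precision < MAX_DIGITS_IN_INT) {
            // schema says [null, int], so emit an Integer instead of a Long
            return ((Long) value).intValue();
        }
        // anything else (genuine longs, ints, strings, ...) passes through
        return value;
    }

    public static void main(String[] args) {
        // tinyint(1) case from this issue: precision 1, driver returns Long 0
        System.out.println(narrowForAvro(Long.valueOf(0L), 1).getClass().getSimpleName());
        // a value whose precision exceeds an int stays a Long
        System.out.println(narrowForAvro(Long.valueOf(5_000_000_000L), 19).getClass().getSimpleName());
    }
}
```

With this in place, the `rec.put(i - 1, ...)` call would receive an Integer for the `["null","int"]` union, avoiding the UnresolvedUnionException.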
 

> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> 
>
> Key: NIFI-5612
> URL: https://issues.apache.org/jira/browse/NIFI-5612
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0, 1.7.1
> Environment: Microsoft Windows, MySQL Enterprise 5.0.80
>Reporter: Colin Dean
>Priority: Major
>  Labels: ExecuteSQL, avro, nifi
>
> I'm seeing this when I execute {{SELECT * FROM }} on a few tables 
> but not on dozens of others in the same database.
> {code:java}
> 2018-09-13 01:11:31,434 WARN [Timer-Driven Process Thread-8] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ExecuteSQL[id=cf5c0996-eddf-3e05-25a3-c407c5edf990] due to uncaught 
> Exception: org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.UnresolvedUnionException: Not in union ["null","int"]: 0
>   at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308)
>   at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:462)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.lambda$onTrigger$1(ExecuteSQL.java:252)
>   at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2625)
>   at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:242)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> {code}

[jira] [Commented] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627418#comment-16627418
 ] 

ASF GitHub Bot commented on NIFI-5557:
--

Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2971#discussion_r220206853
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java
 ---
@@ -389,16 +380,24 @@ public void process(InputStream in) throws 
IOException {
 session.transfer(putFlowFile, REL_SUCCESS);
 
 } catch (final Throwable t) {
-if (tempDotCopyFile != null) {
-try {
-hdfs.delete(tempDotCopyFile, false);
-} catch (Exception e) {
-getLogger().error("Unable to remove temporary 
file {} due to {}", new Object[]{tempDotCopyFile, e});
-}
+   Optional<GSSException> causeOptional = findCause(t, 
GSSException.class, gsse -> GSSException.NO_CRED == gsse.getMajor());
+if (causeOptional.isPresent()) {
+  getLogger().warn(String.format("An error occured 
while connecting to HDFS. "
--- End diff --

This could be changed to:
```java
getLogger().warn("An error occurred while connecting to HDFS. Rolling back 
session, and penalizing flow file {}",
new Object[] {putFlowFile.getAttribute(CoreAttributes.UUID.key()),
causeOptional.get()});
```


> PutHDFS "GSSException: No valid credentials provided" when krb ticket expires
> -
>
> Key: NIFI-5557
> URL: https://issues.apache.org/jira/browse/NIFI-5557
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Endre Kovacs
>Assignee: Endre Kovacs
>Priority: Major
>
> When using the *PutHDFS* processor in a kerberized environment, with flow 
> traffic that is about as frequent as, or less frequent than, the lifetime of 
> the principal's ticket, we see this in the log:
> {code:java}
> INFO [Timer-Driven Process Thread-4] o.a.h.io.retry.RetryInvocationHandler 
> Exception while invoking getFileInfo of class 
> ClientNamenodeProtocolTranslatorPB over host2/ip2:8020 after 13 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: Couldn't 
> setup connection for princi...@example.com to host2.example.com/ip2:8020; 
> Host Details : local host is: "host1.example.com/ip1"; destination host is: 
> "host2.example.com":8020; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
> at org.apache.hadoop.ipc.Client.call(Client.java:1479)
> at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy134.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> at sun.reflect.GeneratedMethodAccessor344.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy135.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
> at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:254)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
> at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:222)
> {code}
> and the flowfile is routed to failure relationship.
> *To reproduce:*
> Create a principal in your KDC with two minutes ticket lifetime,
> and set up a similar flow:
> {code:java}
> GetFile => putHDFS --success--> logAttributes
>               \
>                fail
>                  \
>                   --> logAttributes
> {code}

[jira] [Commented] (NIFI-5557) PutHDFS "GSSException: No valid credentials provided" when krb ticket expires

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627419#comment-16627419
 ] 

ASF GitHub Bot commented on NIFI-5557:
--

Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2971#discussion_r220204503
  
--- Diff: 
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java
 ---
@@ -389,16 +380,24 @@ public void process(InputStream in) throws 
IOException {
 session.transfer(putFlowFile, REL_SUCCESS);
 
 } catch (final Throwable t) {
-if (tempDotCopyFile != null) {
-try {
-hdfs.delete(tempDotCopyFile, false);
-} catch (Exception e) {
-getLogger().error("Unable to remove temporary 
file {} due to {}", new Object[]{tempDotCopyFile, e});
-}
+   Optional<GSSException> causeOptional = findCause(t, 
GSSException.class, gsse -> GSSException.NO_CRED == gsse.getMajor());
--- End diff --

My previous comment was a bit ambiguous; I apologize.  Having this logic in 
this catch for all Throwables is fine, but you could move this bit into a 
separate catch (IOException e) block of this try/catch.
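For context, a minimal, self-contained sketch of the cause-walking helper under discussion. The `findCause` signature and the demo `main` are assumptions reconstructed from this thread, not the code merged into NiFi:

```java
import org.ietf.jgss.GSSException;

import java.io.IOException;
import java.util.Optional;
import java.util.function.Predicate;

public class FindCauseSketch {

    // Walk the cause chain of t, returning the first cause that is an
    // instance of the given type and also satisfies the predicate.
    static <T extends Throwable> Optional<T> findCause(Throwable t,
                                                       Class<T> type,
                                                       Predicate<T> test) {
        Throwable cause = t;
        while (cause != null) {
            if (type.isInstance(cause) && test.test(type.cast(cause))) {
                return Optional.of(type.cast(cause));
            }
            cause = cause.getCause();
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Simulate the failure mode in this ticket: an IOException from the
        // Hadoop client wrapping a "no credentials" GSSException.
        GSSException gsse = new GSSException(GSSException.NO_CRED);
        IOException io = new IOException("Couldn't setup connection", gsse);

        Optional<GSSException> cause = findCause(io, GSSException.class,
                g -> g.getMajor() == GSSException.NO_CRED);
        System.out.println(cause.isPresent()); // true: the GSSException is found
    }
}
```

With a helper like this, the expired-ticket case can be detected and the session rolled back instead of routing the flowfile to failure.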


> PutHDFS "GSSException: No valid credentials provided" when krb ticket expires
> -
>
> Key: NIFI-5557
> URL: https://issues.apache.org/jira/browse/NIFI-5557
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Endre Kovacs
>Assignee: Endre Kovacs
>Priority: Major
>
> When using the *PutHDFS* processor in a kerberized environment, with flow 
> traffic that is about as frequent as, or less frequent than, the lifetime of 
> the principal's ticket, we see this in the log:
> {code:java}
> INFO [Timer-Driven Process Thread-4] o.a.h.io.retry.RetryInvocationHandler 
> Exception while invoking getFileInfo of class 
> ClientNamenodeProtocolTranslatorPB over host2/ip2:8020 after 13 fail over 
> attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: Couldn't 
> setup connection for princi...@example.com to host2.example.com/ip2:8020; 
> Host Details : local host is: "host1.example.com/ip1"; destination host is: 
> "host2.example.com":8020; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
> at org.apache.hadoop.ipc.Client.call(Client.java:1479)
> at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy134.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> at sun.reflect.GeneratedMethodAccessor344.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy135.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
> at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:254)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:360)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
> at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:222)
> {code}
> and the flowfile is routed to failure relationship.
> *To reproduce:*
> Create a principal in your KDC with two minutes ticket lifetime,
> and set up a similar flow:
> {code:java}
> GetFile => putHDFS --success--> logAttributes
>               \
>                fail
>                  \
>                   --> logAttributes
> {code}
>  Copy a file to the input directory of the GetFile processor. If the influx 
> of flowfiles is much more frequent than the expiration time of the ticket:
> 



[jira] [Updated] (NIFI-5591) Enable compression of Avro in ExecuteSQL

2018-09-25 Thread Mike Thomsen (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-5591:
---
   Resolution: Fixed
Fix Version/s: 1.8.0
   Status: Resolved  (was: Patch Available)

> Enable compression of Avro in ExecuteSQL
> 
>
> Key: NIFI-5591
> URL: https://issues.apache.org/jira/browse/NIFI-5591
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
>  Labels: ExecuteSQL, avro, compression
> Fix For: 1.8.0
>
>
> The Avro stream that comes out of the ExecuteSQL processor is uncompressed. 
> It's possible to rewrite it compressed using a combination of ConvertRecord 
> processor with AvroReader and AvroRecordSetWriter, but that's a lot of extra 
> I/O that could be handled transparently at the moment that the Avro data is 
> created.
> For implementation, it looks like ExecuteSQL builds a set of 
> {{JdbcCommon.AvroConversionOptions}} [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExecuteSQL.java#L246].
>  That options object would need to gain a compression flag. Then, within 
> {{JdbcCommon#convertToAvroStream}} 
> [here|https://github.com/apache/nifi/blob/0dd4a91a6741eec04965a260c8aff38b72b3828d/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/JdbcCommon.java#L281],
>  the {{dataFileWriter}} would get a codec set by {{setCodec}}, with the codec 
> having been created shortly before.
> For an example of creating the codec, I looked at how the AvroRecordSetWriter 
> does it. The {{setCodec()}} call is performed 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/WriteAvroResultWithSchema.java#L44]
>  after the codec is created from a configured option 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L104]
>  using a factory method 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L137].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5591) Enable compression of Avro in ExecuteSQL

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627409#comment-16627409
 ] 

ASF GitHub Bot commented on NIFI-5591:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3023


> Enable compression of Avro in ExecuteSQL
> 
>
> Key: NIFI-5591
> URL: https://issues.apache.org/jira/browse/NIFI-5591
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
>  Labels: ExecuteSQL, avro, compression
>
> The Avro stream that comes out of the ExecuteSQL processor is uncompressed. 
> It's possible to rewrite it compressed using a combination of ConvertRecord 
> processor with AvroReader and AvroRecordSetWriter, but that's a lot of extra 
> I/O that could be handled transparently at the moment that the Avro data is 
> created.
> For implementation, it looks like ExecuteSQL builds a set of 
> {{JdbcCommon.AvroConversionOptions}} [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExecuteSQL.java#L246].
>  That options object would need to gain a compression flag. Then, within 
> {{JdbcCommon#convertToAvroStream}} 
> [here|https://github.com/apache/nifi/blob/0dd4a91a6741eec04965a260c8aff38b72b3828d/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/JdbcCommon.java#L281],
>  the {{dataFileWriter}} would get a codec set by {{setCodec}}, with the codec 
> having been created shortly before.
> For an example of creating the codec, I looked at how the AvroRecordSetWriter 
> does it. The {{setCodec()}} call is performed 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/WriteAvroResultWithSchema.java#L44]
>  after the codec is created from a configured option 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L104]
>  using a factory method 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L137].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-5591) Enable compression of Avro in ExecuteSQL

2018-09-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627407#comment-16627407
 ] 

ASF subversion and git services commented on NIFI-5591:
---

Commit 78c4e223fcaf78a819e04f8b6fa6541bfff2782f in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=78c4e22 ]

NIFI-5591 - Added avro compression format to ExecuteSQL

This closes #3023

Signed-off-by: Mike Thomsen 


> Enable compression of Avro in ExecuteSQL
> 
>
> Key: NIFI-5591
> URL: https://issues.apache.org/jira/browse/NIFI-5591
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
>  Labels: ExecuteSQL, avro, compression
>
> The Avro stream that comes out of the ExecuteSQL processor is uncompressed. 
> It's possible to rewrite it compressed using a combination of ConvertRecord 
> processor with AvroReader and AvroRecordSetWriter, but that's a lot of extra 
> I/O that could be handled transparently at the moment that the Avro data is 
> created.
> For implementation, it looks like ExecuteSQL builds a set of 
> {{JdbcCommon.AvroConversionOptions}} [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExecuteSQL.java#L246].
>  That options object would need to gain a compression flag. Then, within 
> {{JdbcCommon#convertToAvroStream}} 
> [here|https://github.com/apache/nifi/blob/0dd4a91a6741eec04965a260c8aff38b72b3828d/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/JdbcCommon.java#L281],
>  the {{dataFileWriter}} would get a codec set by {{setCodec}}, with the codec 
> having been created shortly before.
> For an example of creating the codec, I looked at how the AvroRecordSetWriter 
> does it. The {{setCodec()}} call is performed 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/WriteAvroResultWithSchema.java#L44]
>  after the codec is created from a configured option 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L104]
>  using a factory method 
> [here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L137].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5588) Unable to set indefinite max wait time on DBCPConnectionPool

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627368#comment-16627368
 ] 

ASF GitHub Bot commented on NIFI-5588:
--

Github user colindean commented on the issue:

https://github.com/apache/nifi/pull/3022
  
:tada:



> Unable to set indefinite max wait time on DBCPConnectionPool
> 
>
> Key: NIFI-5588
> URL: https://issues.apache.org/jira/browse/NIFI-5588
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.8.0
>
>
> The DBCPConnectionPool controller service accepts a "Max Wait Time" that 
> configures 
> bq. The maximum amount of time that the pool will wait (when there are no 
> available connections) for a connection to be returned before failing, or -1 
> to wait indefinitely. 
> This value must validate as a time period. *There is no valid way to set 
> {{-1}}* with the current validator.
> The validator [in 
> use|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java#L110]
>  is {{StandardValidators.TIME_PERIOD_VALIDATOR}}. The 
> [TIME_PERIOD_VALIDATOR|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/processor/util/StandardValidators.java#L443]
>  uses [a regex built in 
> FormatUtils|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java#L44]
>  that must have a time unit:
> {code:java}
> public static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + 
> VALID_TIME_UNITS + ")";
> {code}
> The regex does not allow for a value such as {{-1}} or {{-1 secs}}, etc.
> The obvious workaround is to set that _very_ high.
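The mismatch is easy to see against a simplified stand-in for that regex (the unit alternatives are reduced to a few for illustration; the real {{VALID_TIME_UNITS}} list in FormatUtils is longer):

```java
import java.util.regex.Pattern;

public class TimePeriodRegexDemo {
    public static void main(String[] args) {
        // Simplified stand-in for FormatUtils.TIME_DURATION_REGEX:
        // a non-negative integer followed by a time unit.
        String validTimeUnits = "secs?|mins?|hrs?";
        Pattern p = Pattern.compile("(\\d+)\\s*(" + validTimeUnits + ")");

        System.out.println(p.matcher("30 secs").matches()); // true
        System.out.println(p.matcher("-1").matches());      // false: no unit, and \d+ rejects '-'
        System.out.println(p.matcher("-1 secs").matches()); // false: \d+ rejects '-'
    }
}
```

Because {{\d+}} matches only unsigned digits and a unit is mandatory, there is no spelling of "wait indefinitely" that passes validation.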



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-5634) When retrieving RPG from REST API in a cluster, ports can be returned that are not available on all nodes

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627325#comment-16627325
 ] 

ASF GitHub Bot commented on NIFI-5634:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/3030

NIFI-5634: When merging RPG entities, ensure that we only send back t…

…he ports that are common to all nodes - even if that means sending back no 
ports

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-5634

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3030.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3030


commit 5185d4407593946521bfde049c0cc2609b13ec20
Author: Mark Payne 
Date:   2018-09-25T13:05:06Z

NIFI-5634: When merging RPG entities, ensure that we only send back the 
ports that are common to all nodes - even if that means sending back no ports




> When retrieving RPG from REST API in a cluster, ports can be returned that 
> are not available on all nodes
> -
>
> Key: NIFI-5634
> URL: https://issues.apache.org/jira/browse/NIFI-5634
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> When retrieving a specific Remote Process Group, it is possible to get back 
> an RPG that shows that a port is available, even when it is not available on 
> all nodes. The merging logic appears to be flawed.
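The fix described in this PR amounts to intersecting the port sets reported by each node. A toy sketch of that merge rule (node and port names here are invented for illustration):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CommonPortsSketch {
    // Keep only the ports that every node reported; if any node is missing
    // a port, it must not be shown as available on the merged RPG, even if
    // that means returning no ports at all.
    static Set<String> mergePorts(List<Set<String>> portsPerNode) {
        Set<String> common = new HashSet<>(portsPerNode.get(0));
        for (Set<String> nodePorts : portsPerNode) {
            common.retainAll(nodePorts);
        }
        return common;
    }

    public static void main(String[] args) {
        Set<String> node1 = new HashSet<>(Arrays.asList("input", "errors"));
        Set<String> node2 = new HashSet<>(Arrays.asList("input"));
        // "errors" exists only on node1, so the merged view drops it.
        System.out.println(mergePorts(Arrays.asList(node1, node2)));
    }
}
```

The buggy behavior was the opposite: a port present on some nodes could survive the merge and appear available cluster-wide.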



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5634) When retrieving RPG from REST API in a cluster, ports can be returned that are not available on all nodes

2018-09-25 Thread Mark Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-5634:
-
Fix Version/s: 1.8.0
   Status: Patch Available  (was: Open)

> When retrieving RPG from REST API in a cluster, ports can be returned that 
> are not available on all nodes
> -
>
> Key: NIFI-5634
> URL: https://issues.apache.org/jira/browse/NIFI-5634
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> When retrieving a specific Remote Process Group, it is possible to get back 
> an RPG that shows that a port is available, even when it is not available on 
> all nodes. The merging logic appears to be flawed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Updated] (NIFI-5633) EL - allDelineatedValues can throw ArrayIndexOutOfBoundsException

2018-09-25 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5633:
-
Status: Patch Available  (was: Open)

> EL - allDelineatedValues can throw ArrayIndexOutOfBoundsException
> -
>
> Key: NIFI-5633
> URL: https://issues.apache.org/jira/browse/NIFI-5633
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> When running this unit test:
> {code:java}
> @Test
> public void testAllDelineatedValuesCount() {
> final Map<String, String> attributes = new HashMap<>();
> final String query = "${allDelineatedValues('${test}', '/'):count()}";
> attributes.put("test", "/");
> assertEquals(ResultType.WHOLE_NUMBER, Query.getResultType(query));
> assertEquals("0", Query.evaluateExpressions(query, attributes, null));
> }
> {code}
> This will throw:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 0
>   at 
> org.apache.nifi.attribute.expression.language.evaluation.selection.DelineatedAttributeEvaluator.evaluate(DelineatedAttributeEvaluator.java:65)
>   at 
> org.apache.nifi.attribute.expression.language.evaluation.reduce.CountEvaluator.evaluate(CountEvaluator.java:38)
>   at 
> org.apache.nifi.attribute.expression.language.evaluation.selection.MappingEvaluator.evaluate(MappingEvaluator.java:38)
>   at 
> org.apache.nifi.attribute.expression.language.Query.evaluate(Query.java:363)
>   at 
> org.apache.nifi.attribute.expression.language.Query.evaluateExpression(Query.java:204)
>   at 
> org.apache.nifi.attribute.expression.language.CompiledExpression.evaluate(CompiledExpression.java:58)
>   at 
> org.apache.nifi.attribute.expression.language.StandardPreparedQuery.evaluateExpressions(StandardPreparedQuery.java:51)
>   at 
> org.apache.nifi.attribute.expression.language.StandardPreparedQuery.evaluateExpressions(StandardPreparedQuery.java:64)
>   at 
> org.apache.nifi.attribute.expression.language.Query.evaluateExpressions(Query.java:223)
>   at 
> org.apache.nifi.attribute.expression.language.TestQuery.testAllDelineatedValuesCount(TestQuery.java:1033)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
> {noformat}
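The ArrayIndexOutOfBoundsException above is consistent with Java's String.split semantics: splitting "/" on "/" yields a zero-length array because trailing empty strings are discarded, so any unconditional access to element 0 fails. A minimal standalone sketch (class name is hypothetical, not NiFi code) of that underlying behavior:

```java
public class SplitDemo {
    public static void main(String[] args) {
        // String.split(regex) removes trailing empty strings by default,
        // so splitting "/" on "/" produces a zero-length array.
        String[] parts = "/".split("/");
        System.out.println(parts.length); // prints 0

        // Accessing parts[0] here would throw
        // java.lang.ArrayIndexOutOfBoundsException: 0,
        // matching the exception reported in the stack trace above.
    }
}
```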



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5633) EL - allDelineatedValues can throw ArrayIndexOutOfBoundsException

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627300#comment-16627300
 ] 

ASF GitHub Bot commented on NIFI-5633:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/3029

NIFI-5633 - allDelineatedValues can throw ArrayIndexOutOfBoundsException

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-5633

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3029.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3029


commit b9cbc23114a0078c47384a84e91c0c0d9b24c1b2
Author: Pierre Villard 
Date:   2018-09-25T12:57:49Z

NIFI-5633 - allDelineatedValues can throw ArrayIndexOutOfBoundsException





[GitHub] nifi pull request #3029: NIFI-5633 - allDelineatedValues can throw ArrayInde...

2018-09-25 Thread pvillard31
GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/3029

NIFI-5633 - allDelineatedValues can throw ArrayIndexOutOfBoundsException

NIFI-5633 - allDelineatedValues can throw ArrayIndexOutOfBoundsException

---


[jira] [Created] (NIFI-5634) When retrieving RPG from REST API in a cluster, ports can be returned that are not available on all nodes

2018-09-25 Thread Mark Payne (JIRA)
Mark Payne created NIFI-5634:


 Summary: When retrieving RPG from REST API in a cluster, ports can 
be returned that are not available on all nodes
 Key: NIFI-5634
 URL: https://issues.apache.org/jira/browse/NIFI-5634
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne


When retrieving a specific Remote Process Group, it is possible to get back an 
RPG that shows that a port is available, even when it is not available on all 
nodes. The merging logic appears to be flawed.
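The commit message for the fix ("only send back the ports that are common to all nodes - even if that means sending back no ports") describes a set intersection across node responses. A hedged sketch of that merge idea, using hypothetical names rather than the actual NiFi merging code:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical illustration of the merge described in NIFI-5634:
// an RPG port is only reported as available if every node reports it.
public class PortMergeDemo {
    public static Set<String> mergeCommonPorts(List<Set<String>> portsPerNode) {
        if (portsPerNode.isEmpty()) {
            return Collections.emptySet();
        }
        // Start from the first node's ports, then intersect with each node.
        Set<String> common = new HashSet<>(portsPerNode.get(0));
        for (Set<String> nodePorts : portsPerNode) {
            common.retainAll(nodePorts); // drop any port missing from this node
        }
        return common; // may legitimately be empty
    }

    public static void main(String[] args) {
        List<Set<String>> responses = Arrays.asList(
                new HashSet<>(Arrays.asList("input-a", "input-b")),
                new HashSet<>(Arrays.asList("input-a")));
        System.out.println(mergeCommonPorts(responses)); // prints [input-a]
    }
}
```

The pre-fix behavior would correspond to reporting "input-b" as available even though only one node has it.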





[jira] [Created] (NIFI-5633) EL - allDelineatedValues can throw ArrayIndexOutOfBoundsException

2018-09-25 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-5633:


 Summary: EL - allDelineatedValues can throw 
ArrayIndexOutOfBoundsException
 Key: NIFI-5633
 URL: https://issues.apache.org/jira/browse/NIFI-5633
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Pierre Villard
Assignee: Pierre Villard


When running this unit test:

{code:java}
@Test
public void testAllDelineatedValuesCount() {
final Map<String, String> attributes = new HashMap<>();

final String query = "${allDelineatedValues('${test}', '/'):count()}";

attributes.put("test", "/");
assertEquals(ResultType.WHOLE_NUMBER, Query.getResultType(query));
assertEquals("0", Query.evaluateExpressions(query, attributes, null));
}
{code}

This will throw:

{noformat}
java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.nifi.attribute.expression.language.evaluation.selection.DelineatedAttributeEvaluator.evaluate(DelineatedAttributeEvaluator.java:65)
at 
org.apache.nifi.attribute.expression.language.evaluation.reduce.CountEvaluator.evaluate(CountEvaluator.java:38)
at 
org.apache.nifi.attribute.expression.language.evaluation.selection.MappingEvaluator.evaluate(MappingEvaluator.java:38)
at 
org.apache.nifi.attribute.expression.language.Query.evaluate(Query.java:363)
at 
org.apache.nifi.attribute.expression.language.Query.evaluateExpression(Query.java:204)
at 
org.apache.nifi.attribute.expression.language.CompiledExpression.evaluate(CompiledExpression.java:58)
at 
org.apache.nifi.attribute.expression.language.StandardPreparedQuery.evaluateExpressions(StandardPreparedQuery.java:51)
at 
org.apache.nifi.attribute.expression.language.StandardPreparedQuery.evaluateExpressions(StandardPreparedQuery.java:64)
at 
org.apache.nifi.attribute.expression.language.Query.evaluateExpressions(Query.java:223)
at 
org.apache.nifi.attribute.expression.language.TestQuery.testAllDelineatedValuesCount(TestQuery.java:1033)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
{noformat}






[jira] [Commented] (NIFI-5618) NullPointerException is thrown if attempting to view details of a Provenance Event on a node that is disconnected from cluster

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627141#comment-16627141
 ] 

ASF GitHub Bot commented on NIFI-5618:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3027


> NullPointerException is thrown if attempting to view details of a Provenance 
> Event on a node that is disconnected from cluster
> --
>
> Key: NIFI-5618
> URL: https://issues.apache.org/jira/browse/NIFI-5618
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> I have a cluster of 2 nodes. I disconnected one of the nodes, then did a 
> Provenance Query. This returned the results correctly. However, when I tried 
> to view the details of the provenance event, I got an error in the UI 
> indicating that I should check my logs. User log has the following (partial) 
> stack trace:
> {code:java}
> 2018-09-20 15:16:36,049 ERROR [NiFi Web Server-177] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
> at 
> org.apache.nifi.web.api.ProvenanceEventResource.getProvenanceEvent(ProvenanceEventResource.java:299)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
> at 
> org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
> at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
> at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
> at 
> org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
> at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:268){code}
>  
> It appears to be due to the fact that since the node is disconnected, the 
> clusterNodeId is not provided in the REST API call. So the following block of 
> code:
> {code:java}
> final ClusterCoordinator coordinator = getClusterCoordinator();
> if (coordinator != null) {
> final NodeIdentifier nodeId = 
> coordinator.getNodeIdentifier(clusterNodeId);
> event.setClusterNodeAddress(nodeId.getApiAddress() + ":" + 
> nodeId.getApiPort());
> }{code}
> results in calling coordinator.getNodeIdentifier(null), which returns null 
> for the nodeId. We then call nodeId.getApiAddress(), throwing a NPE.





[jira] [Commented] (NIFI-5618) NullPointerException is thrown if attempting to view details of a Provenance Event on a node that is disconnected from cluster

2018-09-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627139#comment-16627139
 ] 

ASF subversion and git services commented on NIFI-5618:
---

Commit 030129c7ceacfd4e32db5dc1d00d6c86c8c3aaa7 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=030129c ]

NIFI-5618: Avoid NPE when viewing Provenance Event details on a disconnected 
node

Signed-off-by: Pierre Villard 

This closes #3027.







[jira] [Updated] (NIFI-5618) NullPointerException is thrown if attempting to view details of a Provenance Event on a node that is disconnected from cluster

2018-09-25 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5618:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)






[jira] [Commented] (NIFI-5618) NullPointerException is thrown if attempting to view details of a Provenance Event on a node that is disconnected from cluster

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627140#comment-16627140
 ] 

ASF GitHub Bot commented on NIFI-5618:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/3027
  
Code change LGTM, taking care of the check style issue while merging, +1, 
merging to master, thanks @markap14 







[GitHub] nifi pull request #3027: NIFI-5618: Avoid NPE when viewing Provenance Event ...

2018-09-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3027


---


[GitHub] nifi issue #3027: NIFI-5618: Avoid NPE when viewing Provenance Event details...

2018-09-25 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/3027
  
Code change LGTM, taking care of the check style issue while merging, +1, 
merging to master, thanks @markap14 


---


[jira] [Commented] (NIFI-5630) Status History no longer showing counter values

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627135#comment-16627135
 ] 

ASF GitHub Bot commented on NIFI-5630:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3026


> Status History no longer showing counter values
> ---
>
> Key: NIFI-5630
> URL: https://issues.apache.org/jira/browse/NIFI-5630
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> When viewing Status History for a Processor, if that processor has any 
> counters, they should be shown in the Status History. This was added a few 
> releases ago, but on master (1.8.0-SNAPSHOT) the counter values no longer 
> appear. Appears to be OK in 1.7.1.





[GitHub] nifi pull request #3026: NIFI-5630: Ensure that we include counters in Statu...

2018-09-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3026


---


[jira] [Commented] (NIFI-5630) Status History no longer showing counter values

2018-09-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627134#comment-16627134
 ] 

ASF subversion and git services commented on NIFI-5630:
---

Commit 4b4c9e14cb06928fb72cc883fd7c8551e6e8f01c in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=4b4c9e1 ]

NIFI-5630: Ensure that we include counters in Status History when present

Signed-off-by: Pierre Villard 

This closes #3026.







[jira] [Updated] (NIFI-5630) Status History no longer showing counter values

2018-09-25 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5630:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)






[jira] [Commented] (NIFI-5630) Status History no longer showing counter values

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627131#comment-16627131
 ] 

ASF GitHub Bot commented on NIFI-5630:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/3026
  
Build ok, tested with the UpdateCounter processor to confirm that the counter 
is available in Status History. +1, merging to master, thanks @markap14 







[GitHub] nifi issue #3026: NIFI-5630: Ensure that we include counters in Status Histo...

2018-09-25 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/3026
  
Build ok, tested with the UpdateCounter processor to confirm that the counter 
is available in Status History. +1, merging to master, thanks @markap14 


---


[jira] [Commented] (NIFI-5588) Unable to set indefinite max wait time on DBCPConnectionPool

2018-09-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626872#comment-16626872
 ] 

ASF GitHub Bot commented on NIFI-5588:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3022


> Unable to set indefinite max wait time on DBCPConnectionPool
> 
>
> Key: NIFI-5588
> URL: https://issues.apache.org/jira/browse/NIFI-5588
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: macOS, Java 8
>Reporter: Colin Dean
>Assignee: Pierre Villard
>Priority: Major
>
> The DBCPConnectionPool controller service accepts a "Max Wait Time" that 
> configures 
> bq. The maximum amount of time that the pool will wait (when there are no 
> available connections) for a connection to be returned before failing, or -1 
> to wait indefinitely. 
> This value must validate as a time period. *There is no valid way to set 
> {{-1}}* with the current validator.
> The validator [in 
> use|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java#L110]
>  is {{StandardValidators.TIME_PERIOD_VALIDATOR}}. The 
> [TIME_PERIOD_VALIDATOR|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/processor/util/StandardValidators.java#L443]
> uses [a regex built in 
> FormatUtils|https://github.com/apache/nifi/blob/0274bd4ff3f4199838ff1307c9c01d98fcc9150b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java#L44]
>  that must have a time unit:
> {code:java}
> public static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + VALID_TIME_UNITS + ")";
> {code}
> The regex does not allow a value such as {{-1}} or {{-1 secs}}, etc.
> The obvious workaround is to set the value _very_ high.
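The validation failure is easy to reproduce from the pattern shape quoted above. The sketch below uses a reduced, hypothetical VALID_TIME_UNITS alternation for illustration only; the real constant in FormatUtils lists more units:

```java
import java.util.regex.Pattern;

// Demonstrates why the duration regex rejects "-1": \d+ only matches an
// unsigned integer, and a time unit is mandatory.
public class TimeDurationRegexExample {

    // Reduced stand-in for FormatUtils.VALID_TIME_UNITS (illustrative only).
    static final String VALID_TIME_UNITS = "nanos|millis|ms|secs|sec|s|mins|min|hrs|hours";
    static final Pattern TIME_DURATION =
            Pattern.compile("(\\d+)\\s*(" + VALID_TIME_UNITS + ")");

    static boolean isValid(String input) {
        return TIME_DURATION.matcher(input.trim().toLowerCase()).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("30 secs")); // matches: digits plus a unit
        System.out.println(isValid("-1"));      // rejected: sign and no unit
        System.out.println(isValid("-1 secs")); // rejected: \d+ excludes '-'
    }
}
```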





[jira] [Updated] (NIFI-5588) Unable to set indefinite max wait time on DBCPConnectionPool

2018-09-25 Thread Sivaprasanna Sethuraman (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaprasanna Sethuraman updated NIFI-5588:
--
   Resolution: Fixed
Fix Version/s: 1.8.0
   Status: Resolved  (was: Patch Available)






[GitHub] nifi pull request #3022: NIFI-5588 - Fix max wait time in DBCP Connection Po...

2018-09-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3022


---


[jira] [Commented] (NIFI-5588) Unable to set indefinite max wait time on DBCPConnectionPool

2018-09-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626871#comment-16626871
 ] 

ASF subversion and git services commented on NIFI-5588:
---

Commit c4d3b5e94f80b3973e18a49007a5e51728c62d74 in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=c4d3b5e ]

NIFI-5588 - Fix max wait time in DBCP Connection Pool

This closes #3022

Signed-off-by: zenfenan 




