[GitHub] SatwikBhandiwad commented on issue #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
SatwikBhandiwad commented on issue #3257: NIFI-5435 Prometheus /metrics http 
endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#issuecomment-463516038
 
 
   Thanks @pepov  and @kevdoran for showing interest and reviewing the code. 
I'll address all the comments and come back soon.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rkarthik29 opened a new pull request #3306: nifi oracle cdc changes using xstream

2019-02-13 Thread GitBox
rkarthik29 opened a new pull request #3306: nifi oracle cdc changes using 
xstream
URL: https://github.com/apache/nifi/pull/3306
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
   - [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
   - [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-6029) Conduct Apache NiFi 1.9.0 Release

2019-02-13 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767813#comment-16767813
 ] 

ASF subversion and git services commented on NIFI-6029:
---

Commit 3470114e44825b6671a2af0bc00c734a08c3e80a in nifi's branch 
refs/heads/NIFI-6029-RC1 from Joe Witt
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3470114 ]

NIFI-6029-RC1 prepare for next development iteration


> Conduct Apache NiFi 1.9.0 Release
> -
>
> Key: NIFI-6029
> URL: https://issues.apache.org/jira/browse/NIFI-6029
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-6029) Conduct Apache NiFi 1.9.0 Release

2019-02-13 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767812#comment-16767812
 ] 

ASF subversion and git services commented on NIFI-6029:
---

Commit 75233d0c05275a587bcf4b7496f4bb797f5026d6 in nifi's branch 
refs/heads/NIFI-6029-RC1 from Joe Witt
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=75233d0 ]

NIFI-6029-RC1 prepare release nifi-1.9.0-RC1


> Conduct Apache NiFi 1.9.0 Release
> -
>
> Key: NIFI-6029
> URL: https://issues.apache.org/jira/browse/NIFI-6029
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5673) Support auto loading of new NARs

2019-02-13 Thread Joseph Witt (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-5673:
--
Issue Type: New Feature  (was: Improvement)

> Support auto loading of new NARs
> 
>
> Key: NIFI-5673
> URL: https://issues.apache.org/jira/browse/NIFI-5673
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.9.0
>
>
> We should be able to detect when new NARs have been added to any of the NAR 
> directories and automatically load them and make the components available for 
> use without restarting the whole NiFi instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4975) Add support for MongoDB GridFS

2019-02-13 Thread Joseph Witt (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4975:
--
Issue Type: New Feature  (was: Improvement)

> Add support for MongoDB GridFS
> --
>
> Key: NIFI-4975
> URL: https://issues.apache.org/jira/browse/NIFI-4975
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
> Fix For: 1.9.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> [An overview|https://docs.mongodb.com/manual/core/gridfs/] of what GridFS is.
> Basic CRUD processors for handling GridFS should be added to the MongoDB NAR.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4975) Add support for MongoDB GridFS

2019-02-13 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767654#comment-16767654
 ] 

ASF subversion and git services commented on NIFI-4975:
---

Commit 033b2a1940bf97bb213347a53d2d2a6c0b4cd12b in nifi's branch 
refs/heads/master from Mike Thomsen
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=033b2a1 ]

NIFI-4975 Add GridFS processors
NIFI-4975 Added changes requested in a code review.
NIFI-4975 Reverted some base Mongo changes.
NIFI-4975 Moved connection configuration to using Mongo client service.
NIFI-4975 Fixed a lot of style issues.
NIFI-4975 Removed an EL statement that was causing problems with the UI.
NIFI-4975 Added changes from code review.
NIFI-4975 Added additional details for FetchGridFS.
NIFI-4975 Added documentation for DeleteGridFS.
NIFI-4975 Added documentation for PutGridFS.

Signed-off-by: Matthew Burgess 

This closes #2546


> Add support for MongoDB GridFS
> --
>
> Key: NIFI-4975
> URL: https://issues.apache.org/jira/browse/NIFI-4975
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> [An overview|https://docs.mongodb.com/manual/core/gridfs/] of what GridFS is.
> Basic CRUD processors for handling GridFS should be added to the MongoDB NAR.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4975) Add support for MongoDB GridFS

2019-02-13 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4975:
---
   Resolution: Fixed
Fix Version/s: 1.9.0
   Status: Resolved  (was: Patch Available)

> Add support for MongoDB GridFS
> --
>
> Key: NIFI-4975
> URL: https://issues.apache.org/jira/browse/NIFI-4975
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
> Fix For: 1.9.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> [An overview|https://docs.mongodb.com/manual/core/gridfs/] of what GridFS is.
> Basic CRUD processors for handling GridFS should be added to the MongoDB NAR.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] asfgit closed pull request #2546: NIFI-4975 Add GridFS processors

2019-02-13 Thread GitBox
asfgit closed pull request #2546: NIFI-4975 Add GridFS processors
URL: https://github.com/apache/nifi/pull/2546
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (NIFI-6029) Conduct Apache NiFi 1.9.0 Release

2019-02-13 Thread Joseph Witt (JIRA)
Joseph Witt created NIFI-6029:
-

 Summary: Conduct Apache NiFi 1.9.0 Release
 Key: NIFI-6029
 URL: https://issues.apache.org/jira/browse/NIFI-6029
 Project: Apache NiFi
  Issue Type: Task
  Components: Tools and Build
Reporter: Joseph Witt
Assignee: Joseph Witt
 Fix For: 1.9.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-6028) Upgrading NiFi can put versioned flows into a conflict state

2019-02-13 Thread Bryan Bende (JIRA)
Bryan Bende created NIFI-6028:
-

 Summary: Upgrading NiFi can put versioned flows into a conflict 
state
 Key: NIFI-6028
 URL: https://issues.apache.org/jira/browse/NIFI-6028
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.7.1, 1.8.0, 1.7.0, 1.6.0, 1.5.0
Reporter: Bryan Bende


When you upgrade NiFi, existing processors may have new properties and 
relationships. If any of those processors are part of a versioned flow then 
these changes will trigger a local modification indicating that a new version 
needs to be saved to registry to track the new properties or relationships.

The issue is when you have multiple environments...
 * Start in dev with NiFi version X
 * Upgrade dev to NiFi version Y
 * Now commit versioned PGs in dev that had local changes
 * Go to staging and upgrade NiFi to version Y
 * The versioned PGs are now in conflict because there is an upgrade available, 
but there are local changes detected from upgrading NiFi, even though these are 
the same changes available in the upgrade



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] mattyb149 commented on issue #2546: NIFI-4975 Add GridFS processors

2019-02-13 Thread GitBox
mattyb149 commented on issue #2546: NIFI-4975 Add GridFS processors
URL: https://github.com/apache/nifi/pull/2546#issuecomment-463389308
 
 
   +1 LGTM, thanks for the new features and for adding the extra documentation! 
Waiting for Travis then merging to master


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (NIFI-5575) PutHDFS does not use fs.permissions.umask-mode from hdfs-site.xml

2019-02-13 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck resolved NIFI-5575.
---
Resolution: Fixed

> PutHDFS does not use fs.permissions.umask-mode from hdfs-site.xml
> -
>
> Key: NIFI-5575
> URL: https://issues.apache.org/jira/browse/NIFI-5575
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.8.0, 1.7.1
>Reporter: Jeff Storck
>Assignee: Kei Miyauchi
>Priority: Major
> Fix For: 1.9.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> PutHDFS does not use the value of "fs.permissions.umask-mode" in 
> hdfs-site.xml.  If the user does not provide a umask in the "Permissions 
> umask" property, PutHDFS will use FsPermission.DEFAULT_UMASK and set that in 
> the config, which will overwrite the value from hdfs-site.xml.
> The code below shows that without the "Permissions umask" property being set, 
> it will force a umask of '18' (the decimal value of octal 022), the value of 
> FsPermission.DEFAULT_UMASK.
> Instead, PutHDFS should first check the Configuration instance to see if 
> "fs.permissions.umask-mode" is set and use that value.  
> FsPermission.DEFAULT_UMASK should be used only in the case when 
> "fs.permissions.umask-mode" is not set.
> {code:java}
> protected void preProcessConfiguration(final Configuration config, final ProcessContext context) {
>     // Set umask once, to avoid thread safety issues doing it in onTrigger
>     final PropertyValue umaskProp = context.getProperty(UMASK);
>     final short dfsUmask;
>     if (umaskProp.isSet()) {
>         dfsUmask = Short.parseShort(umaskProp.getValue(), 8);
>     } else {
>         dfsUmask = FsPermission.DEFAULT_UMASK;
>     }
>     FsPermission.setUMask(config, new FsPermission(dfsUmask));
> }
> {code}
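The fallback order proposed above (processor property first, then the hdfs-site.xml "fs.permissions.umask-mode" entry, then Hadoop's default) can be sketched in isolation. The class and method names below are illustrative only: a plain Map stands in for Hadoop's Configuration and a String for NiFi's PropertyValue, so this is a sketch of the resolution logic, not the actual patch:

```java
import java.util.Map;

// Hypothetical sketch of the umask resolution order PutHDFS should use.
// A Map stands in for Hadoop's Configuration; the real fix operates on
// Configuration and NiFi's PropertyValue.
public class UmaskFallback {
    // Hadoop's default umask: octal 022, i.e. decimal 18.
    static final short DEFAULT_UMASK = 022;

    static short resolveUmask(String processorProp, Map<String, String> siteConfig) {
        // 1) The processor's "Permissions umask" property wins if set.
        if (processorProp != null && !processorProp.isEmpty()) {
            return Short.parseShort(processorProp, 8);
        }
        // 2) Otherwise honor fs.permissions.umask-mode from hdfs-site.xml.
        String siteValue = siteConfig.get("fs.permissions.umask-mode");
        if (siteValue != null && !siteValue.isEmpty()) {
            return Short.parseShort(siteValue, 8);
        }
        // 3) Only then fall back to the Hadoop default.
        return DEFAULT_UMASK;
    }

    public static void main(String[] args) {
        // Processor property overrides the site config: octal 077 -> 63
        System.out.println(resolveUmask("077", Map.of("fs.permissions.umask-mode", "027")));
        // Site config used when the property is empty: octal 027 -> 23
        System.out.println(resolveUmask(null, Map.of("fs.permissions.umask-mode", "027")));
        // Neither set: default octal 022 -> 18
        System.out.println(resolveUmask(null, Map.of()));
    }
}
```

This matches the behavior jtstorck later verified against a real HDFS cluster: property set, property empty with a site value, and neither set.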



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] jtstorck commented on issue #3252: NIFI-5575 Make PutHDFS check fs.permissions.umask-mode if property "Permission umask" is empty.

2019-02-13 Thread GitBox
jtstorck commented on issue #3252: NIFI-5575 Make PutHDFS check 
fs.permissions.umask-mode if property "Permission umask" is empty.
URL: https://github.com/apache/nifi/pull/3252#issuecomment-463376874
 
 
   Tests against real HDFS cluster worked:
   * umask configured in the processor
   * no umask in the processor, custom `fs.permissions.umask-mode` property 
defined in hdfs-site.xml
   * no umask in the processor or hdfs-site.xml `fs.permissions.umask-mode` 
property, fallback to the Hadoop default 022
   
   +1, merged to master!
   @kei-miyauchi Thanks for the contribution!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-5575) PutHDFS does not use fs.permissions.umask-mode from hdfs-site.xml

2019-02-13 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767600#comment-16767600
 ] 

ASF subversion and git services commented on NIFI-5575:
---

Commit 7a763d3c496049725524a0022e86775b3d0fd760 in nifi's branch 
refs/heads/master from Key Miyauchi
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7a763d3 ]

NIFI-5575 Make PutHDFS check fs.permissions.umask-mode if property "Permission 
umask" is empty.


> PutHDFS does not use fs.permissions.umask-mode from hdfs-site.xml
> -
>
> Key: NIFI-5575
> URL: https://issues.apache.org/jira/browse/NIFI-5575
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.8.0, 1.7.1
>Reporter: Jeff Storck
>Assignee: Kei Miyauchi
>Priority: Major
> Fix For: 1.9.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> PutHDFS does not use the value of "fs.permissions.umask-mode" in 
> hdfs-site.xml.  If the user does not provide a umask in the "Permissions 
> umask" property, PutHDFS will use FsPermission.DEFAULT_UMASK and set that in 
> the config, which will overwrite the value from hdfs-site.xml.
> The code below shows that without the "Permissions umask" property being set, 
> it will force a umask of '18' (the decimal value of octal 022), the value of 
> FsPermission.DEFAULT_UMASK.
> Instead, PutHDFS should first check the Configuration instance to see if 
> "fs.permissions.umask-mode" is set and use that value.  
> FsPermission.DEFAULT_UMASK should be used only in the case when 
> "fs.permissions.umask-mode" is not set.
> {code:java}
> protected void preProcessConfiguration(final Configuration config, final ProcessContext context) {
>     // Set umask once, to avoid thread safety issues doing it in onTrigger
>     final PropertyValue umaskProp = context.getProperty(UMASK);
>     final short dfsUmask;
>     if (umaskProp.isSet()) {
>         dfsUmask = Short.parseShort(umaskProp.getValue(), 8);
>     } else {
>         dfsUmask = FsPermission.DEFAULT_UMASK;
>     }
>     FsPermission.setUMask(config, new FsPermission(dfsUmask));
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] jtstorck closed pull request #3252: NIFI-5575 Make PutHDFS check fs.permissions.umask-mode if property "Permission umask" is empty.

2019-02-13 Thread GitBox
jtstorck closed pull request #3252: NIFI-5575 Make PutHDFS check 
fs.permissions.umask-mode if property "Permission umask" is empty.
URL: https://github.com/apache/nifi/pull/3252
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] alopresto commented on issue #3305: NIFI-5950 Refactored shared logic for input and output port update

2019-02-13 Thread GitBox
alopresto commented on issue #3305: NIFI-5950 Refactored shared logic for input 
and output port update
URL: https://github.com/apache/nifi/pull/3305#issuecomment-463371867
 
 
   Recommend hiding whitespace changes for this PR (setting under "Diff 
Settings" or via [link 
here](https://github.com/apache/nifi/pull/3305/files?utf8=%E2%9C%93&diff=unified&w=1)).
 




[jira] [Commented] (NIFI-6020) Cannot list users or policies when an access policy contains a group that is deleted

2019-02-13 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767592#comment-16767592
 ] 

ASF subversion and git services commented on NIFI-6020:
---

Commit 2938454ae4fc200fd5f6aeffe23bc4b33c83e783 in nifi's branch 
refs/heads/master from Kevin Doran
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2938454 ]

NIFI-6020: Fix NPE in getAccessPoliciesForUser

This closes #3304


> Cannot list users or policies when an access policy contains a group that is 
> deleted
> 
>
> Key: NIFI-6020
> URL: https://issues.apache.org/jira/browse/NIFI-6020
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.7.1
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Originally reported on the Apache NiFi Slack.
> when groups are removed in ldap, it impacts access policies that had the 
> group id.
> [https://apachenifi.slack.com/archives/C0L9UPWJZ/p1549493200163800]
> This relates to NIFI-5948.
> Steps to reproduce:
>  # Configure NiFi to use the LDAP UserGroupProvider.
>  # Then in Nifi, using the UI, create some access policies that contain the 
> LDAP groups.
>  # Delete groups from LDAP or change the NiFi LdapUserGroupProvider to use a 
> different group search base/filter such that a subset of groups are returned, 
> and at least one group that belongs to an access policy is no longer synced 
> from ldap.
>  # Go to the burger menu -> Users and observe an NPE. Stack trace below.
> The only way to fix this problem is to delete the association of the access 
> policy -> group in the file: authorizations.xml.
> +*Stack trace:*+
>  
> {noformat}
> 2019-02-06 22:42:46,373 ERROR [NiFi Web Server-41682] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$getAccessPoliciesForUser$3(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.getAccessPoliciesForUser(StandardPolicyBasedAuthorizerDAO.java:287)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$FastClassBySpringCGLIB$$ea190383.invoke()
> at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
> at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$EnhancerBySpringCGLIB$$9bc4b502.getAccessPoliciesForUser()
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.createUserEntity(StandardNiFiServiceFacade.java:3285)
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.lambda$getUsers$163(StandardNiFiServiceFacade.java:3276)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at 
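The NPE at lambda$null$2 in the trace above is triggered when the authorizer looks up a group id that no longer resolves to a group. A minimal, framework-free sketch of a null-safe lookup (hypothetical names; not the StandardPolicyBasedAuthorizerDAO code — a Map stands in for the UserGroupProvider):

```java
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch: ids of groups deleted outside NiFi (e.g. in LDAP)
// resolve to null and must be skipped instead of dereferenced.
public class PolicyGroupLookup {
    static Set<String> resolveGroupNames(Set<String> policyGroupIds, Map<String, String> knownGroups) {
        return policyGroupIds.stream()
                .map(knownGroups::get)      // null for a group deleted in LDAP
                .filter(Objects::nonNull)   // skip it instead of NPE-ing downstream
                .collect(Collectors.toSet());
    }
}
```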

[GitHub] asfgit closed pull request #3304: NIFI-6020: Fix NPE in getAccessPoliciesForUser

2019-02-13 Thread GitBox
asfgit closed pull request #3304: NIFI-6020: Fix NPE in getAccessPoliciesForUser
URL: https://github.com/apache/nifi/pull/3304
 
 
   




[jira] [Updated] (NIFI-6020) Cannot list users or policies when an access policy contains a group that is deleted

2019-02-13 Thread Matt Gilman (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-6020:
--
   Resolution: Fixed
Fix Version/s: 1.9.0
   Status: Resolved  (was: Patch Available)

> Cannot list users or policies when an access policy contains a group that is 
> deleted
> 
>
> Key: NIFI-6020
> URL: https://issues.apache.org/jira/browse/NIFI-6020
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.7.1
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 1.9.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Originally reported on the Apache NiFi Slack.
> when groups are removed in ldap, it impacts access policies that had the 
> group id.
> [https://apachenifi.slack.com/archives/C0L9UPWJZ/p1549493200163800]
> This relates to NIFI-5948.
> Steps to reproduce:
>  # Configure NiFi to use the LDAP UserGroupProvider.
>  # Then in Nifi, using the UI, create some access policies that contain the 
> LDAP groups.
>  # Delete groups from LDAP or change the NiFi LdapUserGroupProvider to use a 
> different group search base/filter such that a subset of groups are returned, 
> and at least one group that belongs to an access policy is no longer synced 
> from ldap.
>  # Go to the burger menu -> Users and observe an NPE. Stack trace below.
> The only way to fix this problem is to delete the association of the access 
> policy -> group in the file: authorizations.xml.
> +*Stack trace:*+
>  
> {noformat}
> 2019-02-06 22:42:46,373 ERROR [NiFi Web Server-41682] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$getAccessPoliciesForUser$3(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.getAccessPoliciesForUser(StandardPolicyBasedAuthorizerDAO.java:287)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$FastClassBySpringCGLIB$$ea190383.invoke()
> at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
> at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$EnhancerBySpringCGLIB$$9bc4b502.getAccessPoliciesForUser()
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.createUserEntity(StandardNiFiServiceFacade.java:3285)
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.lambda$getUsers$163(StandardNiFiServiceFacade.java:3276)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 

[GitHub] mcgilman commented on issue #3304: NIFI-6020: Fix NPE in getAccessPoliciesForUser

2019-02-13 Thread GitBox
mcgilman commented on issue #3304: NIFI-6020: Fix NPE in 
getAccessPoliciesForUser
URL: https://github.com/apache/nifi/pull/3304#issuecomment-463369306
 
 
   Verified this fix. Thanks @kevdoran! Will merge to master.
   
   Hit a similar issue when attempting to view the policy under these same 
conditions. Filed a JIRA for addressing it [1].
   
   [1] https://issues.apache.org/jira/browse/NIFI-6027




[jira] [Created] (NIFI-6027) Cannot list policies when a user/group has been removed

2019-02-13 Thread Matt Gilman (JIRA)
Matt Gilman created NIFI-6027:
-

 Summary: Cannot list policies when a user/group has been removed
 Key: NIFI-6027
 URL: https://issues.apache.org/jira/browse/NIFI-6027
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman


When attempting to view a policy where a user or group has been removed outside
of NiFi (e.g. when NiFi is syncing to an LDAP instance), the endpoint
incorrectly returns a 404. The 404 happens because the endpoint is unable to
find the user or group even though the policy does exist.





[jira] [Commented] (NIFI-5859) Update NAR maven plugin to include information about Extensions

2019-02-13 Thread Kevin Doran (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767579#comment-16767579
 ] 

Kevin Doran commented on NIFI-5859:
---

[~joewitt] thanks for catching. updated the fix version

> Update NAR maven plugin to include information about Extensions
> ---
>
> Key: NIFI-5859
> URL: https://issues.apache.org/jira/browse/NIFI-5859
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: nifi-nar-maven-plugin-1.2.1
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In order to have the NiFi Registry host any extensions, the registry will 
> need a way to know what extensions exist in a given NAR. Currently, that 
> information is not available directly.
> The NAR maven plugin should be updated to provide a list of extensions and 
> for each one, provide at least the following minimal information:
>  * Extension Type
>  * Extension Name
>  * Capability Description
>  * Whether or not the component is Restricted, "sub-restrictions" it has, and 
> explanations of both
>  * Any Tags that the component has
>  * If the component is a Controller Service, any Controller Service API's 
> that it provides
> Additionally, it would be ideal to provide all documentation for the 
> component within the NAR. It is best, though, not to write the documentation 
> in HTML as is done now but rather in XML or some sort of form that provides 
> the information in a structured way without any styling. This would allow the 
> documentation to be rendered consistently, even if the styling changes from 1 
> version to the next.
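A manifest carrying the fields listed above might look roughly like the following. The element names are purely hypothetical, since the actual schema is defined by the nifi-nar-maven-plugin, not by this sketch:

```xml
<!-- Hypothetical sketch only: the real extension-manifest schema is owned
     by the nifi-nar-maven-plugin. -->
<extensionManifest>
  <extensions>
    <extension>
      <name>org.apache.nifi.processors.standard.GetFile</name>
      <type>PROCESSOR</type>
      <capabilityDescription>Reads files from a local directory.</capabilityDescription>
      <tags>
        <tag>files</tag>
        <tag>ingest</tag>
      </tags>
      <restricted>false</restricted>
      <providedServiceApis/>
    </extension>
  </extensions>
</extensionManifest>
```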





[jira] [Updated] (NIFI-5859) Update NAR maven plugin to include information about Extensions

2019-02-13 Thread Kevin Doran (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-5859:
--
Fix Version/s: nifi-nar-maven-plugin-1.2.1

> Update NAR maven plugin to include information about Extensions
> ---
>
> Key: NIFI-5859
> URL: https://issues.apache.org/jira/browse/NIFI-5859
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: nifi-nar-maven-plugin-1.2.1
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In order to have the NiFi Registry host any extensions, the registry will 
> need a way to know what extensions exist in a given NAR. Currently, that 
> information is not available directly.
> The NAR maven plugin should be updated to provide a list of extensions and 
> for each one, provide at least the following minimal information:
>  * Extension Type
>  * Extension Name
>  * Capability Description
>  * Whether or not the component is Restricted, "sub-restrictions" it has, and 
> explanations of both
>  * Any Tags that the component has
>  * If the component is a Controller Service, any Controller Service API's 
> that it provides
> Additionally, it would be ideal to provide all documentation for the 
> component within the NAR. It is best, though, not to write the documentation 
> in HTML as is done now but rather in XML or some sort of form that provides 
> the information in a structured way without any styling. This would allow the 
> documentation to be rendered consistently, even if the styling changes from 1 
> version to the next.





[jira] [Commented] (NIFI-5859) Update NAR maven plugin to include information about Extensions

2019-02-13 Thread Joseph Witt (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767577#comment-16767577
 ] 

Joseph Witt commented on NIFI-5859:
---

[~kdoran] Any JIRA we resolve through code commits etc. should specify the fix 
version so that when the release happens the release notes will include it. If 
there isn't a fix version for this, we can create one.


> Update NAR maven plugin to include information about Extensions
> ---
>
> Key: NIFI-5859
> URL: https://issues.apache.org/jira/browse/NIFI-5859
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In order to have the NiFi Registry host any extensions, the registry will 
> need a way to know what extensions exist in a given NAR. Currently, that 
> information is not available directly.
> The NAR maven plugin should be updated to provide a list of extensions and 
> for each one, provide at least the following minimal information:
>  * Extension Type
>  * Extension Name
>  * Capability Description
>  * Whether or not the component is Restricted, "sub-restrictions" it has, and 
> explanations of both
>  * Any Tags that the component has
>  * If the component is a Controller Service, any Controller Service API's 
> that it provides
> Additionally, it would be ideal to provide all documentation for the 
> component within the NAR. It is best, though, not to write the documentation 
> in HTML as is done now but rather in XML or some sort of form that provides 
> the information in a structured way without any styling. This would allow the 
> documentation to be rendered consistently, even if the styling changes from 1 
> version to the next.





[GitHub] MikeThomsen commented on issue #3286: NIFI-5988 Fixed validation issues on database and collection names.

2019-02-13 Thread GitBox
MikeThomsen commented on issue #3286: NIFI-5988 Fixed validation issues on 
database and collection names.
URL: https://github.com/apache/nifi/pull/3286#issuecomment-463362350
 
 
   @zenfenan made the change.




[jira] [Created] (NIFI-6026) TLS-Toolkit should breakout JKS to individual certs and keys.

2019-02-13 Thread Nathan Gough (JIRA)
Nathan Gough created NIFI-6026:
--

 Summary: TLS-Toolkit should breakout JKS to individual certs and 
keys.
 Key: NIFI-6026
 URL: https://issues.apache.org/jira/browse/NIFI-6026
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Nathan Gough
Assignee: Nathan Gough


Ingest a JKS and output the unencrypted public certificate and private key as 
two separate files in PEM format.
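The PEM framing for those two output files can be sketched with the JDK alone. Assumptions: the certificate is written as X.509 DER and the key as unencrypted PKCS#8 DER, which is what the standard BEGIN CERTIFICATE / BEGIN PRIVATE KEY labels imply; this is not the actual TLS-Toolkit code:

```java
import java.util.Base64;

// Sketch of PEM framing only: wrap DER bytes in 64-column Base64
// between BEGIN/END markers.
public class PemWriter {
    static String toPem(String type, byte[] der) {
        Base64.Encoder enc = Base64.getMimeEncoder(64, "\n".getBytes());
        return "-----BEGIN " + type + "-----\n"
                + enc.encodeToString(der) + "\n"
                + "-----END " + type + "-----\n";
    }
}
```

The toolkit would obtain the DER bytes from KeyStore.getCertificate(alias).getEncoded() and KeyStore.getKey(alias, password).getEncoded() respectively.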





[GitHub] MikeThomsen commented on issue #3285: NIFI-5987 Fixed issue where an invalid query pulled from an attribute…

2019-02-13 Thread GitBox
MikeThomsen commented on issue #3285: NIFI-5987 Fixed issue where an invalid 
query pulled from an attribute…
URL: https://github.com/apache/nifi/pull/3285#issuecomment-463361119
 
 
   @zenfenan addressed your comment.




[GitHub] MikeThomsen commented on a change in pull request #3285: NIFI-5987 Fixed issue where an invalid query pulled from an attribute…

2019-02-13 Thread GitBox
MikeThomsen commented on a change in pull request #3285: NIFI-5987 Fixed issue 
where an invalid query pulled from an attribute…
URL: https://github.com/apache/nifi/pull/3285#discussion_r256580250
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ##
 @@ -149,10 +149,16 @@ public void onTrigger(final ProcessContext context, 
final ProcessSession session
 }
 }
 
-final Document query = getQuery(context, session, input );
+final Document query;
+try {
+query = getQuery(context, session, input);
+} catch (Exception ex) {
+getLogger().error("Error parsing query.", ex);
+if (input != null) {
+session.transfer(input, REL_FAILURE);
+}
 
-if (query == null) {
-return;
+return; //We need to stop immediately.
 
 Review comment:
   Yeah, we want onTrigger to stop immediately and commit things to REL_FAILURE.




[GitHub] MikeThomsen commented on issue #2546: NIFI-4975 Add GridFS processors

2019-02-13 Thread GitBox
MikeThomsen commented on issue #2546: NIFI-4975 Add GridFS processors
URL: https://github.com/apache/nifi/pull/2546#issuecomment-463359883
 
 
   @mattyb149 Wrote up documentation for all three new processors that should 
set things straight.




[jira] [Updated] (NIFI-5859) Update NAR maven plugin to include information about Extensions

2019-02-13 Thread Kevin Doran (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-5859:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Update NAR maven plugin to include information about Extensions
> ---
>
> Key: NIFI-5859
> URL: https://issues.apache.org/jira/browse/NIFI-5859
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In order to have the NiFi Registry host any extensions, the registry will 
> need a way to know what extensions exist in a given NAR. Currently, that 
> information is not available directly.
> The NAR maven plugin should be updated to provide a list of extensions and 
> for each one, provide at least the following minimal information:
>  * Extension Type
>  * Extension Name
>  * Capability Description
>  * Whether or not the component is Restricted, "sub-restrictions" it has, and 
> explanations of both
>  * Any Tags that the component has
>  * If the component is a Controller Service, any Controller Service API's 
> that it provides
> Additionally, it would be ideal to provide all documentation for the 
> component within the NAR. It is best, though, not to write the documentation 
> in HTML as is done now but rather in XML or some sort of form that provides 
> the information in a structured way without any styling. This would allow the 
> documentation to be rendered consistently, even if the styling changes from 1 
> version to the next.





[GitHub] jtstorck commented on issue #3252: NIFI-5575 Make PutHDFS check fs.permissions.umask-mode if property "Permission umask" is empty.

2019-02-13 Thread GitBox
jtstorck commented on issue #3252: NIFI-5575 Make PutHDFS check 
fs.permissions.umask-mode if property "Permission umask" is empty.
URL: https://github.com/apache/nifi/pull/3252#issuecomment-463355367
 
 
   @kei-miyauchi I've updated the permission tests in PutHDFSTest so that the 
mock filesystem has the correct Hadoop configuration, which fixes the tests.  
I'm going to do some final live HDFS testing, and then I'll merge your PR with 
my updates.




[GitHub] MikeThomsen commented on a change in pull request #2546: NIFI-4975 Add GridFS processors

2019-02-13 Thread GitBox
MikeThomsen commented on a change in pull request #2546: NIFI-4975 Add GridFS 
processors
URL: https://github.com/apache/nifi/pull/2546#discussion_r256571266
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/gridfs/FetchGridFS.java
 ##
 @@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.nifi.processors.mongodb.gridfs;
+
+import com.mongodb.client.MongoCursor;
+import com.mongodb.client.gridfs.GridFSBucket;
+import com.mongodb.client.gridfs.model.GridFSFile;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.mongodb.MongoDBClientService;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.JsonValidator;
+import org.apache.nifi.processors.mongodb.QueryHelper;
+import org.apache.nifi.util.StringUtils;
+import org.bson.Document;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@WritesAttributes(
+@WritesAttribute(attribute = "gridfs.file.metadata", description = "The 
custom metadata stored with a file is attached to this property if it exists.")
+)
+@Tags({"fetch", "gridfs", "mongo"})
+@CapabilityDescription("Retrieves one or more files from a GridFS bucket by 
file name or by a user-defined query.")
+public class FetchGridFS extends AbstractGridFSProcessor implements 
QueryHelper {
+
+static final String METADATA_ATTRIBUTE = "gridfs.file.metadata";
+
+static final PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
 
 Review comment:
   There's a third mode, which is retrieving the query from the body, similar to 
GetMongo. I'll update the docs; if they're confusing you, they're not 
sufficient for regular users.




[jira] [Created] (NIFI-6025) Track Enabled/Disabled State in Registry

2019-02-13 Thread Alan Jackoway (JIRA)
Alan Jackoway created NIFI-6025:
---

 Summary: Track Enabled/Disabled State in Registry
 Key: NIFI-6025
 URL: https://issues.apache.org/jira/browse/NIFI-6025
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 1.8.0
Reporter: Alan Jackoway


We often have little chunks of pipelines that are used for debugging/fixing 
things that went wrong/etc. that we want to disable. I would love for 
disabled/enabled to be a thing that gets committed to registry, but my little 
test says it isn't. It would be nice for disabled state to be persisted in 
registry so that we can differentiate between things that should usually be 
disabled and things that are just stopped.





[jira] [Created] (NIFI-6024) Registry Buckets Inconsistently Sorted

2019-02-13 Thread Alan Jackoway (JIRA)
Alan Jackoway created NIFI-6024:
---

 Summary: Registry Buckets Inconsistently Sorted
 Key: NIFI-6024
 URL: https://issues.apache.org/jira/browse/NIFI-6024
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.8.0
Reporter: Alan Jackoway


When importing a process group from a registry with multiple buckets, the buckets 
should always be listed in alphanumeric order. Currently the order in which they 
come back is inconsistent.





[GitHub] bbende commented on issue #7: NIFI-5859: Build NAR Extension Definitions/docs at build time

2019-02-13 Thread GitBox
bbende commented on issue #7: NIFI-5859: Build NAR Extension Definitions/docs 
at build time
URL: https://github.com/apache/nifi-maven/pull/7#issuecomment-463350679
 
 
   @markap14 going to close this since @kevdoran was able to review/merge the 
other PR that included this work, thanks!




[GitHub] bbende closed pull request #7: NIFI-5859: Build NAR Extension Definitions/docs at build time

2019-02-13 Thread GitBox
bbende closed pull request #7: NIFI-5859: Build NAR Extension Definitions/docs 
at build time
URL: https://github.com/apache/nifi-maven/pull/7
 
 
   




[GitHub] kevdoran closed pull request #8: NIFI-5859 - Build NAR Extension Definitions/docs at build time

2019-02-13 Thread GitBox
kevdoran closed pull request #8: NIFI-5859 - Build NAR Extension 
Definitions/docs at build time
URL: https://github.com/apache/nifi-maven/pull/8
 
 
   




[jira] [Updated] (NIFI-5869) JMS Connection Fails After JMS servers Change behind JNDI

2019-02-13 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5869:
---
   Resolution: Fixed
Fix Version/s: 1.9.0
   Status: Resolved  (was: Patch Available)

> JMS Connection Fails After JMS servers Change behind JNDI
> -
>
> Key: NIFI-5869
> URL: https://issues.apache.org/jira/browse/NIFI-5869
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
> Fix For: 1.9.0
>
> Attachments: 3261.patch.txt, JNDI_JMS_Exception.txt
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> JMS Connection Fails After JMS servers Change behind JNDI.
> Reproduce:
>  # Define and enable JNDI Controller Service
>  # Create a flow with ConsumeJMS or PublishJMS processors with controller 
> service defined in #1.
>  # Consume and publish at least one message to ensure the connectivity can be 
> established.
>  # Change JNDI configuration for the same connection factory to point to new 
> JMS servers.
>  # Stop JMS service on previous servers
>  # Observe failure in ConsumeJMS/PublishJMS (Caused by: 
> javax.jms.JMSException: Failed to connect to any server at: 
> tcp://jms_server1:12345)
>  
> Work Around:
>  # Disable JNDI Controller Service
>  # Enable JNDI Controller Service and dependent processors.
>  
> Possible Issue/Fix:
>  * AbstractJMSProcessor has a method "buildTargetResource", in which 
> connection factory is instantiated and then cached in workerPool in onTrigger 
> .
>  * Issue: Once cached, it will be reused forever.
>  * Fix: on connectivity failure there should be an attempt to rebuild the 
> worker. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5869) JMS Connection Fails After JMS servers Change behind JNDI

2019-02-13 Thread ASF subversion and git services (JIRA)


[ https://issues.apache.org/jira/browse/NIFI-5869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767543#comment-16767543 ]

ASF subversion and git services commented on NIFI-5869:
---

Commit 3492313d0b3436cdd0f7390d46d403fed9d65b77 in nifi's branch 
refs/heads/master from Ed
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3492313 ]

NIFI-5869 Support Reconnection for JMS

resets worker if it doesn't work anymore for any reason. this will add 
"reconnect" capabilities. Will solve issues for following use cases:
- authentication changed after successful connection
- JNDI mapping changed and requires recaching.
- JMS server isn't available anymore or restarted.

improved controller reset on exception

Signed-off-by: Matthew Burgess 

This closes #3261
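The reconnect behavior this commit message describes can be sketched as a small, self-contained pattern; the class and method names below are illustrative stand-ins, not the actual NiFi JMS worker code:

```java
import java.util.function.Supplier;

// Sketch of the fix's idea: a cached worker is discarded and rebuilt when an
// operation fails, instead of being reused forever. Illustrative names only.
class ReconnectingPublisher {
    interface Worker { void publish(String msg); }   // stand-in for a JMS worker

    private final Supplier<Worker> factory;          // plays the role of buildTargetResource
    private Worker cached;                           // plays the role of the workerPool entry
    int rebuilds = 0;

    ReconnectingPublisher(Supplier<Worker> factory) {
        this.factory = factory;
    }

    void publish(String msg) {
        if (cached == null) cached = factory.get();
        try {
            cached.publish(msg);
        } catch (RuntimeException e) {
            cached = factory.get();                  // drop the stale worker, rebuild once
            rebuilds++;
            cached.publish(msg);                     // retry with the fresh worker
        }
    }
}
```

On failure the stale worker is dropped and rebuilt from the factory, which is the same idea as resetting the cached entry so authentication changes, JNDI remapping, or a restarted JMS server no longer require disabling and re-enabling the controller service.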


> JMS Connection Fails After JMS servers Change behind JNDI
> -
>
> Key: NIFI-5869
> URL: https://issues.apache.org/jira/browse/NIFI-5869
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
> Attachments: 3261.patch.txt, JNDI_JMS_Exception.txt
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> JMS Connection Fails After JMS servers Change behind JNDI.
> Reproduce:
>  # Define and enable JNDI Controller Service
>  # Create a flow with ConsumeJMS or PublishJMS processors with controller 
> service defined in #1.
>  # Consume and publish at least one message to ensure the connectivity can be 
> established.
>  # Change JNDI configuration for the same connection factory to point to new 
> JMS servers.
>  # Stop JMS service on previous servers
>  # Observe failure in ConsumeJMS/PublishJMS (Caused by: 
> javax.jms.JMSException: Failed to connect to any server at: 
> tcp://jms_server1:12345)
>  
> Work Around:
>  # Disable JNDI Controller Service
>  # Enable JNDI Controller Service and dependent processors.
>  
> Possible Issue/Fix:
>  * AbstractJMSProcessor has a method "buildTargetResource", in which 
> connection factory is instantiated and then cached in workerPool in onTrigger 
> .
>  * Issue: Once cached, it will be reused forever.
>  * Fix: on connectivity failure there should be an attempt to rebuild the 
> worker. 





[GitHub] asfgit closed pull request #3261: NIFI-5869 Support Reconnection for JMS

2019-02-13 Thread GitBox
asfgit closed pull request #3261: NIFI-5869 Support Reconnection for JMS
URL: https://github.com/apache/nifi/pull/3261
 
 
   




[GitHub] mattyb149 commented on issue #3261: NIFI-5869 Support Reconnection for JMS

2019-02-13 Thread GitBox
mattyb149 commented on issue #3261: NIFI-5869 Support Reconnection for JMS
URL: https://github.com/apache/nifi/pull/3261#issuecomment-463346783
 
 
   +1 LGTM, thanks for the improvement! Merging to master




[GitHub] aeaversa commented on issue #3281: NIFI-5986 Adding "Stop & Configure" button functionality to Processor…

2019-02-13 Thread GitBox
aeaversa commented on issue #3281: NIFI-5986 Adding "Stop & Configure" button 
functionality to Processor…
URL: https://github.com/apache/nifi/pull/3281#issuecomment-463341664
 
 
   Attaching screenshots...
   
   
![p-config](https://user-images.githubusercontent.com/36886905/52739181-59d54c80-2f9e-11e9-917a-6c768f5c5e1d.png)
   
![p-detail](https://user-images.githubusercontent.com/36886905/52739186-5b9f1000-2f9e-11e9-9e34-42332b5ab038.png)
   
![p-config-terminate](https://user-images.githubusercontent.com/36886905/52739191-5e9a0080-2f9e-11e9-9319-ded9450f6aec.png)
   




[GitHub] pepov commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
pepov commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256559891
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/PrometheusServer.java
 ##
 @@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting.prometheus;
+
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.net.HttpURLConnection;
+import java.net.InetSocketAddress;
+
+import javax.servlet.ServletException;
+import javax.servlet.ServletOutputStream;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.nifi.controller.status.ProcessGroupStatus;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.reporting.ReportingContext;
+import org.apache.nifi.reporting.prometheus.api.PrometheusMetricsFactory;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.servlet.ServletContextHandler;
+import org.eclipse.jetty.servlet.ServletHolder;
+
+import com.yammer.metrics.core.VirtualMachineMetrics;
+
+import io.prometheus.client.CollectorRegistry;
+import io.prometheus.client.exporter.common.TextFormat;
+
+public class PrometheusServer {
+    private static ComponentLog logger;
+    private Server server;
+    private ServletContextHandler handler;
+
+    public static ReportingContext context;
+    public static boolean sendJvmMetrics;
+    public static String applicationId;
+
+    // set context, SEND_JVM_METRICS, applicationId values are set in onTrigger
+
+    static class MetricsServlet extends HttpServlet {
+        private CollectorRegistry nifiRegistry, jvmRegistry;
+        private ProcessGroupStatus rootGroupStatus;
+
+        @Override
+        protected void doGet(final HttpServletRequest req, final HttpServletResponse resp) throws ServletException, IOException {
+            logger.info("Do get called");
+
+            rootGroupStatus = context.getEventAccess().getControllerStatus();
+            ServletOutputStream response = resp.getOutputStream();
+            OutputStreamWriter osw = new OutputStreamWriter(response);
+
+            nifiRegistry = PrometheusMetricsFactory.createNifiMetrics(rootGroupStatus, applicationId);
+
+            TextFormat.write004(osw, nifiRegistry.metricFamilySamples());
+
+            if (sendJvmMetrics == true) {
+                jvmRegistry = PrometheusMetricsFactory.createJvmMetrics(VirtualMachineMetrics.getInstance());
+                TextFormat.write004(osw, jvmRegistry.metricFamilySamples());
+            }
+
+            osw.flush();
+            osw.close();
+            response.flush();
+            response.close();
+            resp.setHeader("Content-Type", TextFormat.CONTENT_TYPE_004);
+            resp.setStatus(HttpURLConnection.HTTP_OK);
+            resp.flushBuffer();
+        }
+    }
+
+    public PrometheusServer(InetSocketAddress addr, ComponentLog logger) throws Exception {
+        PrometheusServer.logger = logger;
+        this.server = new Server(addr);
+
+        this.handler = new ServletContextHandler(server, "/metrics");
 
 Review comment:
   That is the default and every exporter uses the same, so I think it's ok:
   
https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
   https://prometheus.io/docs/instrumenting/writing_exporters/#landing-page




[GitHub] alopresto opened a new pull request #3305: NIFI-5950 Refactored shared logic for input and output port update

2019-02-13 Thread GitBox
alopresto opened a new pull request #3305: NIFI-5950 Refactored shared logic 
for input and output port update
URL: https://github.com/apache/nifi/pull/3305
 
 
   @kevdoran fixed a bug in port name updating logic for NIFI-5950 in PR 3301, 
and I did some refactoring which wasn't reviewed & merged at that time. Just 
capturing those changes here for review when time is available. 
   
   Thank you for submitting a contribution to Apache NiFi.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [x] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
   - [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
   - [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[GitHub] kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256550903
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/PrometheusServer.java
 ##
 @@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting.prometheus;
+
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.net.HttpURLConnection;
+import java.net.InetSocketAddress;
+
+import javax.servlet.ServletException;
+import javax.servlet.ServletOutputStream;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.nifi.controller.status.ProcessGroupStatus;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.reporting.ReportingContext;
+import org.apache.nifi.reporting.prometheus.api.PrometheusMetricsFactory;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.servlet.ServletContextHandler;
+import org.eclipse.jetty.servlet.ServletHolder;
+
+import com.yammer.metrics.core.VirtualMachineMetrics;
+
+import io.prometheus.client.CollectorRegistry;
+import io.prometheus.client.exporter.common.TextFormat;
+
+public class PrometheusServer {
+    private static ComponentLog logger;
+    private Server server;
+    private ServletContextHandler handler;
+
+    public static ReportingContext context;
+    public static boolean sendJvmMetrics;
+    public static String applicationId;
+
+    // set context, SEND_JVM_METRICS, applicationId values are set in onTrigger
+
+    static class MetricsServlet extends HttpServlet {
+        private CollectorRegistry nifiRegistry, jvmRegistry;
+        private ProcessGroupStatus rootGroupStatus;
 
 Review comment:
   It looks like these class variables are only used in the `doGet` method 
below. Can they be local variables to that method?
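The point behind this suggestion can be shown with a self-contained toy; `SharedServlet` is a hypothetical stand-in, not the reviewed MetricsServlet. A servlet instance is shared across requests, so request-scoped data held in a field can be overwritten by an interleaving request, while a local variable cannot:

```java
// Toy illustration: per-request state in a shared field vs. in a local.
class SharedServlet {
    private String rootStatus;                       // request data in a shared field

    void beginRequest(String status) { rootStatus = status; }   // step 1 of a request
    String finishRequest() { return "handled:" + rootStatus; }  // step 2 of a request

    String handleWithLocal(String status) {          // request data stays on the stack
        String local = status;
        return "handled:" + local;
    }
}
```

With the field, a second request that begins between the two steps of the first one corrupts the first response; with the local, each request's data is isolated.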




[GitHub] kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256552799
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/api/PrometheusMetricsFactory.java
 ##
 @@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting.prometheus.api;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.nifi.controller.status.ProcessGroupStatus;
+import org.apache.nifi.controller.status.ProcessorStatus;
+
+import com.yammer.metrics.core.VirtualMachineMetrics;
+
+import io.prometheus.client.CollectorRegistry;
+import io.prometheus.client.Gauge;
+
+public class PrometheusMetricsFactory {
 
 Review comment:
   `Factory` is a bit misleading class name. Perhaps `PrometheusMetricsUtil` 
would be better? If possible, I would add some unit tests for the results of 
the static methods as well.
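A unit test along the suggested lines might look like the following sketch; `MetricsUtil`, its method, and the sample names are hypothetical stand-ins for the static methods under review, not the actual PrometheusMetricsFactory API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: feed a known status into the static metric builder, then assert
// on the produced samples. All names here are illustrative.
class MetricsUtil {
    static Map<String, Double> fromStatus(String groupId, long flowFilesSent, long bytesSent) {
        Map<String, Double> samples = new LinkedHashMap<>();
        samples.put("nifi_flowfiles_sent{id=\"" + groupId + "\"}", (double) flowFilesSent);
        samples.put("nifi_bytes_sent{id=\"" + groupId + "\"}", (double) bytesSent);
        return samples;
    }
}
```

The value of such a test is that it pins the metric names, labels, and values for a known input, so a refactor of the static methods cannot silently change the scrape output.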




[GitHub] kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256548864
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/PrometheusServer.java
 ##
 @@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting.prometheus;
+
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.net.HttpURLConnection;
+import java.net.InetSocketAddress;
+
+import javax.servlet.ServletException;
+import javax.servlet.ServletOutputStream;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.nifi.controller.status.ProcessGroupStatus;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.reporting.ReportingContext;
+import org.apache.nifi.reporting.prometheus.api.PrometheusMetricsFactory;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.servlet.ServletContextHandler;
+import org.eclipse.jetty.servlet.ServletHolder;
+
+import com.yammer.metrics.core.VirtualMachineMetrics;
+
+import io.prometheus.client.CollectorRegistry;
+import io.prometheus.client.exporter.common.TextFormat;
+
+public class PrometheusServer {
+    private static ComponentLog logger;
+    private Server server;
+    private ServletContextHandler handler;
+
+    public static ReportingContext context;
+    public static boolean sendJvmMetrics;
+    public static String applicationId;
+
+    // set context, SEND_JVM_METRICS, applicationId values are set in onTrigger
+
+    static class MetricsServlet extends HttpServlet {
+        private CollectorRegistry nifiRegistry, jvmRegistry;
+        private ProcessGroupStatus rootGroupStatus;
+
+        @Override
+        protected void doGet(final HttpServletRequest req, final HttpServletResponse resp) throws ServletException, IOException {
+            logger.info("Do get called");
+
+            rootGroupStatus = context.getEventAccess().getControllerStatus();
+            ServletOutputStream response = resp.getOutputStream();
+            OutputStreamWriter osw = new OutputStreamWriter(response);
+
+            nifiRegistry = PrometheusMetricsFactory.createNifiMetrics(rootGroupStatus, applicationId);
+
+            TextFormat.write004(osw, nifiRegistry.metricFamilySamples());
+
+            if (sendJvmMetrics == true) {
+                jvmRegistry = PrometheusMetricsFactory.createJvmMetrics(VirtualMachineMetrics.getInstance());
+                TextFormat.write004(osw, jvmRegistry.metricFamilySamples());
+            }
+
+            osw.flush();
+            osw.close();
+            response.flush();
+            response.close();
+            resp.setHeader("Content-Type", TextFormat.CONTENT_TYPE_004);
+            resp.setStatus(HttpURLConnection.HTTP_OK);
+            resp.flushBuffer();
+        }
+    }
+
+    public PrometheusServer(InetSocketAddress addr, ComponentLog logger) throws Exception {
+        PrometheusServer.logger = logger;
+        this.server = new Server(addr);
+
+        this.handler = new ServletContextHandler(server, "/metrics");
 
 Review comment:
   Not a huge deal as this runs on its own port, but should this `/metrics` 
path be configurable?
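If the path were made configurable, one minimal way to keep `/metrics` as the Prometheus-conventional default is sketched below; the helper and its normalization rules are assumptions for illustration, not the reviewed code:

```java
// Sketch: resolve a user-configured endpoint path, defaulting to /metrics.
class MetricsPathConfig {
    static final String DEFAULT_PATH = "/metrics";

    static String resolve(String configured) {
        if (configured == null || configured.trim().isEmpty()) {
            return DEFAULT_PATH;                       // Prometheus convention
        }
        // Jetty context paths must start with '/', so normalize if needed
        return configured.startsWith("/") ? configured : "/" + configured;
    }
}
```

The resolved value could then be passed where the hard-coded string is used today, e.g. as the context-path argument when the handler is constructed.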




[GitHub] kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256553858
 
 

 ##
 File path: nifi-nar-bundles/nifi-prometheus-bundle/pom.xml
 ##
 @@ -0,0 +1,44 @@
+
+<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://maven.apache.org/POM/4.0.0"
+    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-nar-bundles</artifactId>
+        <version>1.9.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-prometheus-bundle</artifactId>
+    <version>1.9.0-SNAPSHOT</version>
+    <packaging>pom</packaging>
+
+    <modules>
+        <module>nifi-prometheus-reporting-task</module>
+        <module>nifi-prometheus-nar</module>
+    </modules>
+
+    <dependencyManagement>
+        <dependencies>
+            <dependency>
+                <groupId>org.glassfish.jersey.core</groupId>
+                <artifactId>jersey-client</artifactId>
 
 Review comment:
   I don't see jersey-client used in this extension unless I missed it. Is it 
brought in transitively by a dependency?




[GitHub] kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256556227
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/test/java/org/apache/nifi/reporting/prometheus/TestPrometheusReportingTask.java
 ##
 @@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.reporting.prometheus;
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.Collections;
+
+import org.apache.nifi.controller.status.ProcessGroupStatus;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.state.MockStateManager;
+import org.apache.nifi.util.MockComponentLog;
+import org.apache.nifi.util.MockConfigurationContext;
+import org.apache.nifi.util.MockReportingContext;
+import org.apache.nifi.util.MockReportingInitializationContext;
+import org.apache.nifi.util.MockVariableRegistry;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+public class TestPrometheusReportingTask {
+    private static final String TEST_INIT_CONTEXT_ID = "test-init-context-id";
+    private static final String TEST_INIT_CONTEXT_NAME = "test-init-context-name";
+    private static final String TEST_TASK_ID = "test-task-id";
+    private MockReportingInitializationContext reportingInitContextStub;
+    private MockReportingContext reportingContextStub;
+    private MockConfigurationContext configurationContextStub; // new
+    private PrometheusReportingTask testedReportingTask;
+    private ProcessGroupStatus rootGroupStatus;
+
+    @Before
+    public void setup() {
+        testedReportingTask = new PrometheusReportingTask();
+        rootGroupStatus = new ProcessGroupStatus();
+        reportingInitContextStub = new MockReportingInitializationContext(TEST_INIT_CONTEXT_ID, TEST_INIT_CONTEXT_NAME, new MockComponentLog(TEST_TASK_ID, testedReportingTask));
+
+        reportingContextStub = new MockReportingContext(Collections.emptyMap(), new MockStateManager(testedReportingTask), new MockVariableRegistry());
+
+        reportingContextStub.setProperty(PrometheusReportingTask.INSTANCE_ID.getName(), "localhost");
+
+        configurationContextStub = new MockConfigurationContext(reportingContextStub.getProperties(), reportingContextStub.getControllerServiceLookup());
+
+        rootGroupStatus.setId("1234");
+        rootGroupStatus.setFlowFilesReceived(5);
+        rootGroupStatus.setBytesReceived(1);
+        rootGroupStatus.setFlowFilesSent(10);
+        rootGroupStatus.setBytesSent(2);
+        rootGroupStatus.setQueuedCount(100);
+        rootGroupStatus.setQueuedContentSize(1024L);
+        rootGroupStatus.setBytesRead(6L);
+        rootGroupStatus.setBytesWritten(8L);
+        rootGroupStatus.setActiveThreadCount(5);
+        rootGroupStatus.setName("root");
+        rootGroupStatus.setFlowFilesTransferred(5);
+        rootGroupStatus.setBytesTransferred(1);
+        rootGroupStatus.setOutputContentSize(1000L);
+        rootGroupStatus.setInputContentSize(1000L);
+        rootGroupStatus.setOutputCount(100);
+        rootGroupStatus.setInputCount(1000);
+    }
+
+    @Test
+    public void testOnTrigger() throws IOException, InterruptedException, InitializationException {
+        testedReportingTask.initialize(reportingInitContextStub);
+        testedReportingTask.onScheduled(configurationContextStub);
+        reportingContextStub.getEventAccess().setProcessGroupStatus(rootGroupStatus);
+        testedReportingTask.onTrigger(reportingContextStub);
+
+        URL url = new URL("http://localhost:9092/metrics");
+        HttpURLConnection con = (HttpURLConnection) url.openConnection();
+        con.setRequestMethod("GET");
+        int status = con.getResponseCode();
+        Assert.assertEquals(HttpURLConnection.HTTP_OK, status);
 
 Review comment:
   It would be nice to have some assertions on the results as well.
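A body assertion along these lines could parse the scraped Prometheus text format and check that an expected sample is present; the helper and the metric name below are illustrative assumptions, not the task's actual output names:

```java
// Sketch: minimal parser for Prometheus text exposition lines of the form
//   metric_name{labels} value
// so a test can assert on the response body, not just the status code.
class ExpositionAsserts {
    static boolean hasSample(String body, String metric, double value) {
        for (String line : body.split("\n")) {
            if (line.startsWith("#")) continue;            // skip HELP/TYPE comments
            String[] parts = line.trim().split("\\s+");
            if (parts.length == 2 && parts[0].startsWith(metric)
                    && Double.parseDouble(parts[1]) == value) {
                return true;
            }
        }
        return false;
    }
}
```

In the test above, the body read from the connection could then be checked with such a helper against the values set on the ProcessGroupStatus fixture.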



[GitHub] kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
kevdoran commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256547924
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-nar/src/main/resources/META-INF/NOTICE
 ##
 @@ -0,0 +1,46 @@
+nifi-prometheus-nar
 
 Review comment:
   At a glance it looks like this NOTICE file will need to be updated.




[jira] [Created] (NIFI-6023) ListHDFS throws misleading exception regarding Distributed Cache Service

2019-02-13 Thread Jeff Storck (JIRA)
Jeff Storck created NIFI-6023:
-

 Summary: ListHDFS throws misleading exception regarding 
Distributed Cache Service
 Key: NIFI-6023
 URL: https://issues.apache.org/jira/browse/NIFI-6023
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.8.0
Reporter: Jeff Storck


In the `ListHDFS.onTrigger` method, the processor attempts to retrieve the 
state for the processor which can throw an IOException.  The catch block for 
that code logs the exception with a message of `Failed to retrieve timestamp of 
last listing from Distributed Cache Service. Will not perform listing until 
this is accomplished`.  This is misleading, since the DistributedMapCacheClient 
is not used by ListHDFS.

The error message should be updated to inform the user of the actual error, 
that the state manager could not be reached, or state could not be retrieved 
from the state manager.  It'd also help to include the exception (stack trace) 
itself in the log.

The `Distributed Cache Service` property should be deprecated and its 
description should be updated, since the property will be ignored.  State is 
stored by the state manager locally, or in ZK if NiFi is clustered.
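The clearer failure handling the ticket asks for can be sketched as follows; the class and method are stand-ins for the catch block in ListHDFS.onTrigger, not the actual code:

```java
import java.io.IOException;

// Sketch: name the real failure source (the state manager, not a
// Distributed Cache Service) and keep the exception so callers can log
// the stack trace alongside the message. Illustrative names only.
class ListingStateErrors {
    static String describe(IOException e) {
        return "Failed to retrieve the timestamp of the last listing from the "
             + "state manager; will not perform a listing until the state can be read: " + e;
    }
}
```

Passing the exception to the logger (rather than only the message) is what preserves the stack trace the ticket says is missing today.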





[GitHub] markap14 commented on issue #3131: NIFI-3229 When a queue contains only Penalized FlowFile's the next pr…

2019-02-13 Thread GitBox
markap14 commented on issue #3131: NIFI-3229 When a queue contains only 
Penalized FlowFile's the next pr…
URL: https://github.com/apache/nifi/pull/3131#issuecomment-46448
 
 
   Hey @patricker sorry for the delay in getting back to this. I looked into 
the unit tests, and did some quick profiling. It looks like the reason that it 
seemed to "go out to lunch" as you say was because of Mockito objects being 
used. They are super useful but don't provide great performance. I updated the 
test so that instead of using Mockito it just created simple objects like 
Funnels or implemented interfaces directly without much of anything happening. 
That resulted in much better performance.
   
   I updated the test above to run in multiple threads, as well, because this 
is where we are going to see the heavy contention and therefore the performance 
concerns. When I run with a single thread, we see pretty comparable results 
between the existing implementation and the new implementation that checks for 
penalization:
   
   > 1M checks for FlowFiles, non-empty queue: 57 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 101 millis
   > 1M checks for FlowFiles, non-empty queue: 47 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 67 millis
   > 1M checks for FlowFiles, non-empty queue: 60 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 77 millis
   > 1M checks for FlowFiles, non-empty queue: 38 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 66 millis
   > 1M checks for FlowFiles, non-empty queue: 34 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 54 millis
   > 1M checks for FlowFiles, non-empty queue: 28 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 91 millis
   > 1M checks for FlowFiles, non-empty queue: 67 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 96 millis
   > 1M checks for FlowFiles, non-empty queue: 35 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 59 millis
   > 1M checks for FlowFiles, non-empty queue: 29 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 55 millis
   > 1M checks for FlowFiles, non-empty queue: 36 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 50 millis
   
   The difference is measurable but not necessarily concerning.
   At 2 threads, the performance is more concerning but still not necessarily a 
deal breaker:
   
   > 1M checks for FlowFiles, non-empty queue: 314 millis
   > 1M checks for FlowFiles, non-empty queue: 315 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 4 seconds, 410 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 4 seconds, 412 millis
   > 1M checks for FlowFiles, non-empty queue: 240 millis
   > 1M checks for FlowFiles, non-empty queue: 240 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 4 seconds, 166 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 4 seconds, 167 millis
   > 1M checks for FlowFiles, non-empty queue: 235 millis
   > 1M checks for FlowFiles, non-empty queue: 236 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 4 seconds, 510 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 4 seconds, 510 millis
   > 1M checks for FlowFiles, non-empty queue: 225 millis
   > 1M checks for FlowFiles, non-empty queue: 227 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 3 seconds, 994 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 3 seconds, 995 millis
   > 1M checks for FlowFiles, non-empty queue: 239 millis
   > 1M checks for FlowFiles, non-empty queue: 239 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 4 seconds, 102 millis
   > 1M checks for FlowFiles, non-penalized method, non-empty queue: 4 seconds, 102 millis
   > 1M checks for FlowFiles, non-empty queue: 229 millis
   > 1M checks for FlowFiles, non-empty queue: 231 millis
   > 
   > 1M 
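   The benchmarking approach described above — timing many availability checks 
across several threads against plain objects rather than Mockito stubs — might 
be sketched like this (class and method names are illustrative, not NiFi's 
actual test code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hedged sketch of a multi-threaded micro-benchmark: a trivial direct
// implementation of the checked interface avoids Mockito's per-call overhead.
public class QueueCheckBenchmark {
    interface SimpleQueue { boolean isActiveQueueEmpty(); }

    static long timeChecks(SimpleQueue queue, int threads, int checksPerThread) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        List<Future<?>> futures = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            futures.add(pool.submit(() -> {
                boolean sink = false;
                for (int i = 0; i < checksPerThread; i++) {
                    sink ^= queue.isActiveQueueEmpty(); // keep the call from being optimized away
                }
                if (sink) System.out.print("");         // consume the sink
            }));
        }
        for (Future<?> f : futures) {
            try { f.get(); } catch (Exception e) { throw new RuntimeException(e); }
        }
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000; // elapsed millis
    }

    public static void main(String[] args) {
        SimpleQueue nonEmpty = () -> false; // plain object instead of a mock
        System.out.println("1M checks, 2 threads: "
                + timeChecks(nonEmpty, 2, 1_000_000) + " millis");
    }
}
```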

[jira] [Created] (NIFI-6022) ConsumeJMS

2019-02-13 Thread Steven Youtsey (JIRA)
Steven Youtsey created NIFI-6022:


 Summary: ConsumeJMS
 Key: NIFI-6022
 URL: https://issues.apache.org/jira/browse/NIFI-6022
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.7.1
Reporter: Steven Youtsey


The processor administratively yields when the session with the JMS provider is 
closed.

When an exception occurs (it is unclear which one, as it was stepped on by a 
later exception) and the session is closed, the exception handler attempts to 
use the session, and a second exception is thrown that is not caught by the 
processor. See JMSConsumer, line 129; that call needs to be wrapped in a 
try/catch.
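A hedged sketch of the suggested try/catch guard, using a toy Session class 
rather than the real JMS API (the actual JMSConsumer internals differ):

```java
// Hypothetical illustration of the fix: guard the session use in the
// exception-handler path so a closed session cannot throw a second,
// uncaught exception out of the handler.
public class SessionRecoveryExample {
    static class Session {
        private final boolean closed;
        Session(boolean closed) { this.closed = closed; }
        // Stand-in for a JMS call that fails once the session is closed.
        void recover() {
            if (closed) throw new IllegalStateException("session closed");
        }
    }

    static String handleFailure(Session session) {
        try {
            session.recover();                           // may itself fail if the session is closed
            return "recovered";
        } catch (final RuntimeException e) {
            return "recovery failed: " + e.getMessage(); // contain it instead of letting it escape
        }
    }

    public static void main(String[] args) {
        System.out.println(handleFailure(new Session(true))); // prints "recovery failed: session closed"
    }
}
```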





[jira] [Updated] (NIFI-6022) ConsumeJMS - admin yielding

2019-02-13 Thread Steven Youtsey (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Youtsey updated NIFI-6022:
-
Summary: ConsumeJMS - admin yielding  (was: ConsumeJMS)

> ConsumeJMS - admin yielding
> ---
>
> Key: NIFI-6022
> URL: https://issues.apache.org/jira/browse/NIFI-6022
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.7.1
>Reporter: Steven Youtsey
>Priority: Major
>  Labels: easyfix
>
> The processor administratively yields when the session with the JMS provider 
> is closed.
> When an exception occurs (it is unclear which one, as it was stepped on by a 
> later exception) and the session is closed, the exception handler attempts to 
> use the session, and a second exception is thrown that is not caught by the 
> processor. See JMSConsumer, line 129; that call needs to be wrapped in a 
> try/catch.





[jira] [Assigned] (NIFI-5460) TLS Toolkit should allow custom CAs to be added to generated truststores

2019-02-13 Thread Andy LoPresto (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto reassigned NIFI-5460:
---

Assignee: Andy LoPresto

> TLS Toolkit should allow custom CAs to be added to generated truststores
> 
>
> Key: NIFI-5460
> URL: https://issues.apache.org/jira/browse/NIFI-5460
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions, Security, Tools and Build
>Affects Versions: 1.7.1
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: certificate, pki, security, tls, tls-toolkit, truststore
>
> The TLS Toolkit should allow a command-line flag to accept custom signing 
> authorities so that a generated truststore can contain multiple trusted 
> certificates rather than requiring manual joining of separate truststores. 





[GitHub] kevdoran edited a comment on issue #156: NIFIREG-225: Fix NPE in getAccessPolicesForUser

2019-02-13 Thread GitBox
kevdoran edited a comment on issue #156: NIFIREG-225: Fix NPE in 
getAccessPolicesForUser
URL: https://github.com/apache/nifi-registry/pull/156#issuecomment-463310399
 
 
   @mcgilman this is the corresponding bug fix in NiFi Registry for 
apache/nifi#3304
   
   cc @bbende 






[GitHub] kevdoran opened a new pull request #156: NIFIREG-225: Fix NPE in getAccessPolicesForUser

2019-02-13 Thread GitBox
kevdoran opened a new pull request #156: NIFIREG-225: Fix NPE in 
getAccessPolicesForUser
URL: https://github.com/apache/nifi-registry/pull/156
 
 
   




[jira] [Created] (NIFIREG-225) NPE when an access policy contains a deleted group

2019-02-13 Thread Kevin Doran (JIRA)
Kevin Doran created NIFIREG-225:
---

 Summary: NPE when an access policy contains a deleted group
 Key: NIFIREG-225
 URL: https://issues.apache.org/jira/browse/NIFIREG-225
 Project: NiFi Registry
  Issue Type: Bug
Reporter: Kevin Doran
Assignee: Kevin Doran
 Fix For: 0.4.0


This relates to NIFI-6020. It is the corresponding NiFi Registry Bug.
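The NPE pattern behind these tickets — a policy's group id resolving to null 
after the group was deleted upstream, then being dereferenced inside a stream — 
can be sketched, with illustrative names (not the actual 
StandardPolicyBasedAuthorizerDAO code), as:

```java
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch: filtering out unresolved groups avoids the NPE
// seen in the stack traces below when a policy references a deleted group.
public class DeletedGroupExample {
    static final class Group {
        final String id;
        final String name;
        Group(String id, String name) { this.id = id; this.name = name; }
    }

    static Set<String> resolveGroupNames(Set<String> policyGroupIds,
                                         Map<String, Group> knownGroups) {
        return policyGroupIds.stream()
                .map(knownGroups::get)     // null for a group deleted upstream (e.g. removed from LDAP)
                .filter(Objects::nonNull)  // drop unresolved groups instead of hitting an NPE later
                .map(g -> g.name)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Map<String, Group> known = Map.of("g1", new Group("g1", "admins"));
        // "g2" was deleted upstream; it is silently dropped rather than NPE-ing.
        System.out.println(resolveGroupNames(Set.of("g1", "g2"), known));
    }
}
```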





[jira] [Updated] (NIFI-6021) NPE when an access policy contains a deleted group

2019-02-13 Thread Kevin Doran (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-6021:
--
Description: 
Dupe of NIFI-6020

 

  was:
Originally reported on the Apache NiFi Slack.

when groups are removed in ldap, it impacts access policies that had the group 
id.

[https://apachenifi.slack.com/archives/C0L9UPWJZ/p1549493200163800]

This relates to NIFI-5948.

Steps to reproduce:
 # Configure NiFi to use the LDAP UserGroupProvider.
 # Then in NiFi, using the UI, create some access policies that contain the 
LDAP groups.
 # Delete groups from LDAP or change the NiFi LdapUserGroupProvider to use a 
different group search base/filter such that a subset of groups are returned, 
and at least one group that belongs to an access policy is no longer synced 
from ldap.
 # Go to the burger menu -> Users and observe an NPE. Stack trace below.

The only way to fix this problem is to delete the association of the access 
policy -> group in the file: authorizations.xml.

+*Stack trace:*+

 
{noformat}
2019-02-06 22:42:46,373 ERROR [NiFi Web Server-41682] 
o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
java.lang.NullPointerException. Returning Internal Server Error response.
java.lang.NullPointerException: null
at 
org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$getAccessPoliciesForUser$3(StandardPolicyBasedAuthorizerDAO.java:285)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.getAccessPoliciesForUser(StandardPolicyBasedAuthorizerDAO.java:287)
at 
org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$FastClassBySpringCGLIB$$ea190383.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at 
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
at 
org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$EnhancerBySpringCGLIB$$9bc4b502.getAccessPoliciesForUser()
at 
org.apache.nifi.web.StandardNiFiServiceFacade.createUserEntity(StandardNiFiServiceFacade.java:3285)
at 
org.apache.nifi.web.StandardNiFiServiceFacade.lambda$getUsers$163(StandardNiFiServiceFacade.java:3276)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.nifi.web.StandardNiFiServiceFacade.getUsers(StandardNiFiServiceFacade.java:3277)
at 
org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at 
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at 

[jira] [Resolved] (NIFI-6021) NPE when an access policy contains a deleted group

2019-02-13 Thread Kevin Doran (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran resolved NIFI-6021.
---
Resolution: Duplicate

> NPE when an access policy contains a deleted group
> --
>
> Key: NIFI-6021
> URL: https://issues.apache.org/jira/browse/NIFI-6021
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.7.1
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Major
>
> Originally reported on the Apache NiFi Slack.
> when groups are removed in ldap, it impacts access policies that had the 
> group id.
> [https://apachenifi.slack.com/archives/C0L9UPWJZ/p1549493200163800]
> This relates to NIFI-5948.
> Steps to reproduce:
>  # Configure NiFi to use the LDAP UserGroupProvider.
>  # Then in NiFi, using the UI, create some access policies that contain the 
> LDAP groups.
>  # Delete groups from LDAP or change the NiFi LdapUserGroupProvider to use a 
> different group search base/filter such that a subset of groups are returned, 
> and at least one group that belongs to an access policy is no longer synced 
> from ldap.
>  # Go to the burger menu -> Users and observe an NPE. Stack trace below.
> The only way to fix this problem is to delete the association of the access 
> policy -> group in the file: authorizations.xml.
> +*Stack trace:*+
>  
> {noformat}
> 2019-02-06 22:42:46,373 ERROR [NiFi Web Server-41682] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$getAccessPoliciesForUser$3(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.getAccessPoliciesForUser(StandardPolicyBasedAuthorizerDAO.java:287)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$FastClassBySpringCGLIB$$ea190383.invoke()
> at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
> at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$EnhancerBySpringCGLIB$$9bc4b502.getAccessPoliciesForUser()
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.createUserEntity(StandardNiFiServiceFacade.java:3285)
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.lambda$getUsers$163(StandardNiFiServiceFacade.java:3276)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> 

[jira] [Updated] (NIFI-6021) NPE when an access policy contains a deleted group

2019-02-13 Thread Kevin Doran (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-6021:
--
Affects Version/s: (was: 1.7.1)
   (was: 1.8.0)
   (was: 1.7.0)
   (was: 1.6.0)
   (was: 1.5.0)

> NPE when an access policy contains a deleted group
> --
>
> Key: NIFI-6021
> URL: https://issues.apache.org/jira/browse/NIFI-6021
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Major
>
> Originally reported on the Apache NiFi Slack.
> when groups are removed in ldap, it impacts access policies that had the 
> group id.
> [https://apachenifi.slack.com/archives/C0L9UPWJZ/p1549493200163800]
> This relates to NIFI-5948.
> Steps to reproduce:
>  # Configure NiFi to use the LDAP UserGroupProvider.
>  # Then in NiFi, using the UI, create some access policies that contain the 
> LDAP groups.
>  # Delete groups from LDAP or change the NiFi LdapUserGroupProvider to use a 
> different group search base/filter such that a subset of groups are returned, 
> and at least one group that belongs to an access policy is no longer synced 
> from ldap.
>  # Go to the burger menu -> Users and observe an NPE. Stack trace below.
> The only way to fix this problem is to delete the association of the access 
> policy -> group in the file: authorizations.xml.
> +*Stack trace:*+
>  
> {noformat}
> 2019-02-06 22:42:46,373 ERROR [NiFi Web Server-41682] 
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
> java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$getAccessPoliciesForUser$3(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.getAccessPoliciesForUser(StandardPolicyBasedAuthorizerDAO.java:287)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$FastClassBySpringCGLIB$$ea190383.invoke()
> at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
> at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
> at 
> org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$EnhancerBySpringCGLIB$$9bc4b502.getAccessPoliciesForUser()
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.createUserEntity(StandardNiFiServiceFacade.java:3285)
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.lambda$getUsers$163(StandardNiFiServiceFacade.java:3276)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 

[jira] [Created] (NIFI-6021) NPE when an access policy contains a deleted group

2019-02-13 Thread Kevin Doran (JIRA)
Kevin Doran created NIFI-6021:
-

 Summary: NPE when an access policy contains a deleted group
 Key: NIFI-6021
 URL: https://issues.apache.org/jira/browse/NIFI-6021
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.7.1
Reporter: Kevin Doran
Assignee: Kevin Doran


Originally reported on the Apache NiFi Slack.

when groups are removed in ldap, it impacts access policies that had the group 
id.

[https://apachenifi.slack.com/archives/C0L9UPWJZ/p1549493200163800]

This relates to NIFI-5948.

Steps to reproduce:
 # Configure NiFi to use the LDAP UserGroupProvider.
 # Then in NiFi, using the UI, create some access policies that contain the 
LDAP groups.
 # Delete groups from LDAP or change the NiFi LdapUserGroupProvider to use a 
different group search base/filter such that a subset of groups are returned, 
and at least one group that belongs to an access policy is no longer synced 
from ldap.
 # Go to the burger menu -> Users and observe an NPE. Stack trace below.

The only way to fix this problem is to delete the association of the access 
policy -> group in the file: authorizations.xml.

+*Stack trace:*+

 
{noformat}
2019-02-06 22:42:46,373 ERROR [NiFi Web Server-41682] 
o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: 
java.lang.NullPointerException. Returning Internal Server Error response.
java.lang.NullPointerException: null
at 
org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$getAccessPoliciesForUser$3(StandardPolicyBasedAuthorizerDAO.java:285)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.getAccessPoliciesForUser(StandardPolicyBasedAuthorizerDAO.java:287)
at 
org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$FastClassBySpringCGLIB$$ea190383.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at 
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
at 
org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$EnhancerBySpringCGLIB$$9bc4b502.getAccessPoliciesForUser()
at 
org.apache.nifi.web.StandardNiFiServiceFacade.createUserEntity(StandardNiFiServiceFacade.java:3285)
at 
org.apache.nifi.web.StandardNiFiServiceFacade.lambda$getUsers$163(StandardNiFiServiceFacade.java:3276)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.nifi.web.StandardNiFiServiceFacade.getUsers(StandardNiFiServiceFacade.java:3277)
at 
org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at 

[jira] [Updated] (NIFI-6020) Cannot list users or policies when an access policy contains a group that is deleted

2019-02-13 Thread Kevin Doran (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-6020:
--
Description: 
Originally reported on the Apache NiFi Slack.

when groups are removed in ldap, it impacts access policies that had the group 
id.

[https://apachenifi.slack.com/archives/C0L9UPWJZ/p1549493200163800]

This relates to NIFI-5948.

Steps to reproduce:
 # Configure NiFi to use the LDAP UserGroupProvider.
 # Then in NiFi, using the UI, create some access policies that contain the 
LDAP groups.
 # Delete groups from LDAP or change the NiFi LdapUserGroupProvider to use a 
different group search base/filter such that a subset of groups are returned, 
and at least one group that belongs to an access policy is no longer synced 
from ldap.
 # Go to the burger menu -> Users and observe an NPE. Stack trace below.

The only way to fix this problem is to delete the association of the access 
policy -> group in the file: authorizations.xml.

+*Stack trace:*+

 
{noformat}
2019-02-06 22:42:46,373 ERROR [NiFi Web Server-41682] 
o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: java.lang.NullPointerException. Returning Internal Server Error response.
java.lang.NullPointerException: null
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$getAccessPoliciesForUser$3(StandardPolicyBasedAuthorizerDAO.java:285)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.getAccessPoliciesForUser(StandardPolicyBasedAuthorizerDAO.java:287)
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$FastClassBySpringCGLIB$$ea190383.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$EnhancerBySpringCGLIB$$9bc4b502.getAccessPoliciesForUser()
at org.apache.nifi.web.StandardNiFiServiceFacade.createUserEntity(StandardNiFiServiceFacade.java:3285)
at org.apache.nifi.web.StandardNiFiServiceFacade.lambda$getUsers$163(StandardNiFiServiceFacade.java:3276)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.apache.nifi.web.StandardNiFiServiceFacade.getUsers(StandardNiFiServiceFacade.java:3277)
at org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
at 

[GitHub] bbende closed pull request #155: NIFIREG-213 Second phase of extension registry work

2019-02-13 Thread GitBox
bbende closed pull request #155: NIFIREG-213 Second phase of extension registry 
work
URL: https://github.com/apache/nifi-registry/pull/155
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] bbende commented on issue #155: NIFIREG-213 Second phase of extension registry work

2019-02-13 Thread GitBox
bbende commented on issue #155: NIFIREG-213 Second phase of extension registry 
work
URL: https://github.com/apache/nifi-registry/pull/155#issuecomment-463296162
 
 
   Working on some more changes I want to include with this, so going to close 
this PR for now and re-open later.




[GitHub] mcgilman commented on issue #3304: NIFI-6020: Fix NPE in getAccessPoliciesForUser

2019-02-13 Thread GitBox
mcgilman commented on issue #3304: NIFI-6020: Fix NPE in 
getAccessPoliciesForUser
URL: https://github.com/apache/nifi/pull/3304#issuecomment-463270545
 
 
   Will review...




[jira] [Updated] (NIFI-6020) Cannot list users or policies when an access policy contains a group that is deleted

2019-02-13 Thread Kevin Doran (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-6020:
--
Status: Patch Available  (was: In Progress)

> Cannot list users or policies when an access policy contains a group that is 
> deleted
> 
>
> Key: NIFI-6020
> URL: https://issues.apache.org/jira/browse/NIFI-6020
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.7.1, 1.8.0, 1.7.0, 1.6.0, 1.5.0
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This relates to NIFI-5948.
> Steps to reproduce:
>   
>  # Configure NiFi to use the LDAP UserGroupProvider.
>  # Then in NiFi, using the UI, create some access policies that contain the 
> LDAP groups.
>  # Delete groups from LDAP or change the NiFi LdapUserGroupProvider to use a 
> different group search base/filter such that a subset of groups are returned, 
> and at least one group that belongs to an access policy is no longer synced 
> from LDAP.
>  # Go to burger menu -> Users and observe an NPE; stack trace below.
> The only way to fix this problem is to delete the association of the access 
> policy -> group in the file: authorizations.xml.
> +*Stack trace:*+
>  
> {noformat}
> 2019-02-06 22:42:46,373 ERROR [NiFi Web Server-41682] o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: java.lang.NullPointerException. Returning Internal Server Error response.
> java.lang.NullPointerException: null
> at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$getAccessPoliciesForUser$3(StandardPolicyBasedAuthorizerDAO.java:285)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.getAccessPoliciesForUser(StandardPolicyBasedAuthorizerDAO.java:287)
> at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$FastClassBySpringCGLIB$$ea190383.invoke()
> at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
> at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
> at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$EnhancerBySpringCGLIB$$9bc4b502.getAccessPoliciesForUser()
> at org.apache.nifi.web.StandardNiFiServiceFacade.createUserEntity(StandardNiFiServiceFacade.java:3285)
> at org.apache.nifi.web.StandardNiFiServiceFacade.lambda$getUsers$163(StandardNiFiServiceFacade.java:3276)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
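The trace above dies inside a stream lambda at StandardPolicyBasedAuthorizerDAO.java:285: a policy's group id is looked up against the currently synced groups, and the lookup returns null once that group has been deleted upstream. A minimal, self-contained sketch of the defensive pattern (the method and type names here are illustrative, not NiFi's actual API):

```java
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;

public class NullSafePolicyLookup {

    // Resolve the group names a policy references, skipping ids whose group
    // no longer exists in the currently synced snapshot (lookup yields null).
    static Set<String> resolveGroupNames(Set<String> policyGroupIds,
                                         Map<String, String> groupNamesById) {
        return policyGroupIds.stream()
                .map(groupNamesById::get)   // null when the group was deleted upstream
                .filter(Objects::nonNull)   // defensive filter avoids the NPE
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        // "g2" is still referenced by a policy but was deleted in LDAP.
        Map<String, String> synced = Map.of("g1", "admins");
        System.out.println(resolveGroupNames(Set.of("g1", "g2"), synced)); // [admins]
    }
}
```

The actual change in NIFI-6020 may differ; the point is that any id-to-object lookup fed through a stream needs a null filter when the referenced entity can disappear between syncs.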

[GitHub] kevdoran opened a new pull request #3304: NIFI-6020: Fix NPE in getAccessPoliciesForUser

2019-02-13 Thread GitBox
kevdoran opened a new pull request #3304: NIFI-6020: Fix NPE in 
getAccessPoliciesForUser
URL: https://github.com/apache/nifi/pull/3304
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
   - [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
   - [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Assigned] (NIFI-5267) Add Kafka record timestamp to flowfile attributes

2019-02-13 Thread Sandish Kumar HN (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandish Kumar HN reassigned NIFI-5267:
--

Assignee: Sandish Kumar HN

> Add Kafka record timestamp to flowfile attributes
> -
>
> Key: NIFI-5267
> URL: https://issues.apache.org/jira/browse/NIFI-5267
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Jasper Knulst
>Assignee: Sandish Kumar HN
>Priority: Minor
>  Labels: newbie
> Fix For: 2.0.0
>
>
> The ConsumeKafkaRecord and ConsumeKafka processors (0_10, 0_11 and 1_0) can 
> yield 1 flowfile holding many Kafka records. For ConsumeKafka this is 
> optional (using demarcator). 
> Currently the resulting flowfile already gets an attribute 'kafka.offset' 
> which indicates the starting offset (lowest) of any Kafka record within that 
> bundle. 
> It would be valuable to also have a 'kafka.timestamp' attribute there (also 
> only related to the first record of that bundle) to be able to relate all the 
> records in the flowfile to the kafka timestamp and be able to replay some 
> kafka records based on this timestamp (feature in Kafka > 0.9 where replay by 
> offset and by timestamp is now a possibility)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
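The requested improvement amounts to copying the first record's timestamp into the bundle's flowfile attributes alongside the existing kafka.offset. A sketch under the assumption of a minimal stand-in for Kafka's ConsumerRecord (which exposes offset() and timestamp()); the helper name and record type are hypothetical, not NiFi's processor code:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KafkaTimestampAttribute {

    // Stand-in for org.apache.kafka.clients.consumer.ConsumerRecord; only the
    // offset() and timestamp() accessors matter for this sketch.
    record Rec(long offset, long timestamp) {}

    // Build the flowfile attributes for a bundle of records, keyed off the
    // first (lowest-offset) record, mirroring the existing kafka.offset rule.
    static Map<String, String> bundleAttributes(List<Rec> bundle) {
        Rec first = bundle.get(0);
        Map<String, String> attrs = new HashMap<>();
        attrs.put("kafka.offset", Long.toString(first.offset()));
        attrs.put("kafka.timestamp", Long.toString(first.timestamp())); // proposed attribute
        return attrs;
    }

    public static void main(String[] args) {
        List<Rec> bundle = List.of(new Rec(42L, 1550000000000L),
                                   new Rec(43L, 1550000000500L));
        System.out.println(bundleAttributes(bundle));
    }
}
```

With the timestamp exposed this way, downstream flows can replay from Kafka by timestamp (supported since Kafka 0.10) using the value of the bundle's first record.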


[jira] [Assigned] (NIFI-5948) Cannot list users when a ldap user that belongs to a group is deleted

2019-02-13 Thread Kevin Doran (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran reassigned NIFI-5948:
-

Assignee: (was: Kevin Doran)

> Cannot list users when a ldap user that belongs to a group is deleted
> -
>
> Key: NIFI-5948
> URL: https://issues.apache.org/jira/browse/NIFI-5948
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.7.1
>Reporter: Juan Pablo Converso
>Priority: Major
> Attachments: authorizers.xml, error.png, 
> login-identity-providers.xml, nifi.properties
>
>
> I configured NiFi to use LDAP to provide users. Then in NiFi, using the UI, I 
> created groups and associated some LDAP users with the groups.
> This worked fine until a user was deleted in LDAP; after that, when I tried 
> to list users using the "burger menu -> Users" option, the error "Unable 
> to find user with id 'ae1523b4-1336-3a1c-8283-0a784f5cb017'." was displayed.
> The only way to fix this problem was to delete the association of the user to 
> the group in the file users.xml.
> I think that when a user is deleted from LDAP and NiFi refreshes the user 
> list, it should also check for deleted users and remove them from 
> user/group associations to prevent this problem.
> I've provided a screenshot of the error in this report.



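The cleanup the reporter suggests (dropping members that no longer resolve to a synced user on each LDAP refresh) can be sketched as a simple set intersection; the names below are illustrative and do not reflect NiFi's actual users.xml handling:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class PruneStaleMembers {

    // On each user sync, drop group member ids that no longer resolve to a
    // known user, so stale associations never reach the listing code.
    static Map<String, Set<String>> prune(Map<String, Set<String>> groupMembers,
                                          Set<String> knownUserIds) {
        Map<String, Set<String>> pruned = new HashMap<>();
        groupMembers.forEach((group, members) -> {
            Set<String> kept = new HashSet<>(members);
            kept.retainAll(knownUserIds);   // remove users deleted upstream
            pruned.put(group, kept);
        });
        return pruned;
    }

    public static void main(String[] args) {
        // "u2" was removed from LDAP but is still listed as a group member.
        Map<String, Set<String>> groups = Map.of("ops", Set.of("u1", "u2"));
        System.out.println(prune(groups, Set.of("u1"))); // {ops=[u1]}
    }
}
```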


[jira] [Commented] (NIFI-5948) Cannot list users when a ldap user that belongs to a group is deleted

2019-02-13 Thread Kevin Doran (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767339#comment-16767339
 ] 

Kevin Doran commented on NIFI-5948:
---

Created NIFI-6020 for the related issue, and submitting a patch to fix that as 
the two bugs require a bit of different scope to fix.

> Cannot list users when a ldap user that belongs to a group is deleted
> -
>
> Key: NIFI-5948
> URL: https://issues.apache.org/jira/browse/NIFI-5948
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.7.1
>Reporter: Juan Pablo Converso
>Assignee: Kevin Doran
>Priority: Major
> Attachments: authorizers.xml, error.png, 
> login-identity-providers.xml, nifi.properties
>
>
> I configured NiFi to use LDAP to provide users. Then in NiFi, using the UI, I 
> created groups and associated some LDAP users with the groups.
> This worked fine until a user was deleted in LDAP; after that, when I tried 
> to list users using the "burger menu -> Users" option, the error "Unable 
> to find user with id 'ae1523b4-1336-3a1c-8283-0a784f5cb017'." was displayed.
> The only way to fix this problem was to delete the association of the user to 
> the group in the file users.xml.
> I think that when a user is deleted from LDAP and NiFi refreshes the user 
> list, it should also check for deleted users and remove them from 
> user/group associations to prevent this problem.
> I've provided a screenshot of the error in this report.





[jira] [Updated] (NIFI-6020) Cannot list users or policies when an access policy contains a group that is deleted

2019-02-13 Thread Kevin Doran (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFI-6020:
--
Description: 
This relates to NIFI-5948.

Steps to reproduce:
  
 # Configure NiFi to use the LDAP UserGroupProvider.
 # Then in NiFi, using the UI, create some access policies that contain the 
LDAP groups.
 # Delete groups from LDAP or change the NiFi LdapUserGroupProvider to use a 
different group search base/filter such that a subset of groups are returned, 
and at least one group that belongs to an access policy is no longer synced 
from LDAP.
 # Go to burger menu -> Users and observe an NPE; stack trace below.

The only way to fix this problem is to delete the association of the access 
policy -> group in the file: authorizations.xml.

+*Stack trace:*+

 
{noformat}
2019-02-06 22:42:46,373 ERROR [NiFi Web Server-41682] o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: java.lang.NullPointerException. Returning Internal Server Error response.
java.lang.NullPointerException: null
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$null$2(StandardPolicyBasedAuthorizerDAO.java:285)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.lambda$getAccessPoliciesForUser$3(StandardPolicyBasedAuthorizerDAO.java:285)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.getAccessPoliciesForUser(StandardPolicyBasedAuthorizerDAO.java:287)
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$FastClassBySpringCGLIB$$ea190383.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$EnhancerBySpringCGLIB$$9bc4b502.getAccessPoliciesForUser()
at org.apache.nifi.web.StandardNiFiServiceFacade.createUserEntity(StandardNiFiServiceFacade.java:3285)
at org.apache.nifi.web.StandardNiFiServiceFacade.lambda$getUsers$163(StandardNiFiServiceFacade.java:3276)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.apache.nifi.web.StandardNiFiServiceFacade.getUsers(StandardNiFiServiceFacade.java:3277)
at org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
at org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithReadLock(NiFiServiceFacadeLock.java:155)
at org.apache.nifi.web.NiFiServiceFacadeLock.getLock(NiFiServiceFacadeLock.java:120)
at 

[GitHub] aeaversa commented on issue #3281: NIFI-5986 Adding "Stop & Configure" button functionality to Processor…

2019-02-13 Thread GitBox
aeaversa commented on issue #3281: NIFI-5986 Adding "Stop & Configure" button 
functionality to Processor…
URL: https://github.com/apache/nifi/pull/3281#issuecomment-463251894
 
 
   Saw that some tests were failing the continuous-integration checks and 
rebased onto current master.




[jira] [Created] (NIFI-6020) Cannot list users or policies when an access policy contains a group that is deleted

2019-02-13 Thread Kevin Doran (JIRA)
Kevin Doran created NIFI-6020:
-

 Summary: Cannot list users or policies when an access policy 
contains a group that is deleted
 Key: NIFI-6020
 URL: https://issues.apache.org/jira/browse/NIFI-6020
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 1.7.1, 1.8.0, 1.7.0, 1.6.0, 1.5.0
Reporter: Kevin Doran
Assignee: Kevin Doran


This relates to NIFI-5948.

 





[jira] [Assigned] (NIFI-5948) Cannot list users when a ldap user that belongs to a group is deleted

2019-02-13 Thread Kevin Doran (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran reassigned NIFI-5948:
-

Assignee: Kevin Doran

> Cannot list users when a ldap user that belongs to a group is deleted
> -
>
> Key: NIFI-5948
> URL: https://issues.apache.org/jira/browse/NIFI-5948
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.7.1
>Reporter: Juan Pablo Converso
>Assignee: Kevin Doran
>Priority: Major
> Attachments: authorizers.xml, error.png, 
> login-identity-providers.xml, nifi.properties
>
>
> I configured NiFi to use LDAP to provide users. Then in NiFi, using the UI, I 
> created groups and associated some LDAP users with the groups.
> This worked fine until a user was deleted in LDAP; after that, when I tried 
> to list users using the "burger menu -> Users" option, the error "Unable 
> to find user with id 'ae1523b4-1336-3a1c-8283-0a784f5cb017'." was displayed.
> The only way to fix this problem was to delete the association of the user to 
> the group in the file users.xml.
> I think that when a user is deleted from LDAP and NiFi refreshes the user 
> list, it should also check for deleted users and remove them from 
> user/group associations to prevent this problem.
> I've provided a screenshot of the error in this report.





[GitHub] pepov commented on issue #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
pepov commented on issue #3257: NIFI-5435 Prometheus /metrics http endpoint for 
monitoring integration
URL: https://github.com/apache/nifi/pull/3257#issuecomment-463223444
 
 
   Also, I tested the PR and can confirm it's working (with the above-mentioned 
caveats) on a two-node cluster.




[GitHub] arpadboda commented on issue #476: MINIFICPP-720 - Add UT containers

2019-02-13 Thread GitBox
arpadboda commented on issue #476: MINIFICPP-720 - Add UT containers
URL: https://github.com/apache/nifi-minifi-cpp/pull/476#issuecomment-463223415
 
 
   Closing this as it became part of a new PR.




[GitHub] arpadboda closed pull request #476: MINIFICPP-720 - Add UT containers

2019-02-13 Thread GitBox
arpadboda closed pull request #476: MINIFICPP-720 - Add UT containers
URL: https://github.com/apache/nifi-minifi-cpp/pull/476
 
 
   




[GitHub] pepov commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
pepov commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256414475
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/PrometheusReportingTask.java
 ##
 @@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting.prometheus;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.nifi.annotation.configuration.DefaultSchedule;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.AbstractReportingTask;
+import org.apache.nifi.reporting.ReportingContext;
+import org.apache.nifi.scheduling.SchedulingStrategy;
+import org.eclipse.jetty.server.Server;
+
+@Tags({ "reporting", "prometheus", "metrics" })
+@CapabilityDescription("")
+@DefaultSchedule(strategy = SchedulingStrategy.TIMER_DRIVEN, period = "1 sec")
+
+public class PrometheusReportingTask extends AbstractReportingTask {
+
+private PrometheusServer prometheusServer;
+
+static final PropertyDescriptor METRICS_ENDPOINT_PORT = new 
PropertyDescriptor.Builder().name("Prometheus Metrics Endpoint 
Port").description("The Port where prometheus metrics can be accessed")
+
.required(true).expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY).defaultValue("9092").addValidator(StandardValidators.INTEGER_VALIDATOR).build();
+
+static final PropertyDescriptor APPLICATION_ID = new 
PropertyDescriptor.Builder().name("Application ID").description("The 
Application ID to be included in the metrics sent to Prometheus")
+
.required(true).expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY).defaultValue("nifi").addValidator(StandardValidators.NON_EMPTY_VALIDATOR).build();
+
+static final PropertyDescriptor INSTANCE_ID = new 
PropertyDescriptor.Builder().name("Instance ID").description("Id of this NiFi 
instance to be included in the metrics sent to Prometheus")
+
.required(true).expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY).defaultValue("${hostname(true)}").addValidator(StandardValidators.NON_EMPTY_VALIDATOR).build();
+
+static final PropertyDescriptor SEND_JVM_METRICS = new 
PropertyDescriptor.Builder().name("Send JVM-metrics").description("Send 
JVM-metrics in addition to the Nifi-metrics")
+.allowableValues("true", 
"false").defaultValue("false").required(true).build();
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+final List<PropertyDescriptor> properties = new ArrayList<>();
+
+properties.add(METRICS_ENDPOINT_PORT);
+properties.add(APPLICATION_ID);
+properties.add(INSTANCE_ID);
+properties.add(SEND_JVM_METRICS);
+
+return properties;
+}
+
+@OnScheduled
+public void onScheduled(final ConfigurationContext context) {
+
+final String metricsEndpointPort = 
context.getProperty(METRICS_ENDPOINT_PORT).getValue();
+
+try {
+this.prometheusServer = new PrometheusServer(new 
InetSocketAddress(Integer.parseInt(metricsEndpointPort)), getLogger());
+getLogger().info("Started JETTY server");
+} catch (Exception e) {
+getLogger().error("Error: " + e);
+e.printStackTrace();
+}
+
+}
+
+@OnStopped
+public void OnStopped() throws Exception {
+Server server = prometheusServer.getServer();
+server.stop();
+}
+
+@OnShutdown
+public void onShutDown() throws Exception {
+Server server = 

[GitHub] pepov commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
pepov commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256412581
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/PrometheusReportingTask.java
 ##
 @@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting.prometheus;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.nifi.annotation.configuration.DefaultSchedule;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.AbstractReportingTask;
+import org.apache.nifi.reporting.ReportingContext;
+import org.apache.nifi.scheduling.SchedulingStrategy;
+import org.eclipse.jetty.server.Server;
+
+@Tags({ "reporting", "prometheus", "metrics" })
+@CapabilityDescription("")
+@DefaultSchedule(strategy = SchedulingStrategy.TIMER_DRIVEN, period = "1 sec")
+
+public class PrometheusReportingTask extends AbstractReportingTask {
+
+private PrometheusServer prometheusServer;
+
+static final PropertyDescriptor METRICS_ENDPOINT_PORT = new 
PropertyDescriptor.Builder().name("Prometheus Metrics Endpoint 
Port").description("The Port where prometheus metrics can be accessed")
+
.required(true).expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY).defaultValue("9092").addValidator(StandardValidators.INTEGER_VALIDATOR).build();
+
+static final PropertyDescriptor APPLICATION_ID = new 
PropertyDescriptor.Builder().name("Application ID").description("The 
Application ID to be included in the metrics sent to Prometheus")
+
.required(true).expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY).defaultValue("nifi").addValidator(StandardValidators.NON_EMPTY_VALIDATOR).build();
+
+static final PropertyDescriptor INSTANCE_ID = new 
PropertyDescriptor.Builder().name("Instance ID").description("Id of this NiFi 
instance to be included in the metrics sent to Prometheus")
+
.required(true).expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY).defaultValue("${hostname(true)}").addValidator(StandardValidators.NON_EMPTY_VALIDATOR).build();
+
+static final PropertyDescriptor SEND_JVM_METRICS = new 
PropertyDescriptor.Builder().name("Send JVM-metrics").description("Send 
JVM-metrics in addition to the Nifi-metrics")
+.allowableValues("true", 
"false").defaultValue("false").required(true).build();
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+final List<PropertyDescriptor> properties = new ArrayList<>();
+
+properties.add(METRICS_ENDPOINT_PORT);
+properties.add(APPLICATION_ID);
+properties.add(INSTANCE_ID);
+properties.add(SEND_JVM_METRICS);
+
+return properties;
+}
+
+@OnScheduled
+public void onScheduled(final ConfigurationContext context) {
+
+final String metricsEndpointPort = 
context.getProperty(METRICS_ENDPOINT_PORT).getValue();
+
+try {
+this.prometheusServer = new PrometheusServer(new 
InetSocketAddress(Integer.parseInt(metricsEndpointPort)), getLogger());
+getLogger().info("Started JETTY server");
+} catch (Exception e) {
+getLogger().error("Error: " + e);
 
 Review comment:
   Wouldn't it be more appropriate to have the whole exception context in a 
single place?
   ```suggestion
   getLogger().error("Failed to start Jetty server", e);
   ```
   And remove the e.printStackTrace() line? (since it goes to a different log 
file by default)


[GitHub] pepov commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
pepov commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256414673
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/PrometheusServer.java
 ##
 @@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting.prometheus;
+
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.net.HttpURLConnection;
+import java.net.InetSocketAddress;
+
+import javax.servlet.ServletException;
+import javax.servlet.ServletOutputStream;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.nifi.controller.status.ProcessGroupStatus;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.reporting.ReportingContext;
+import org.apache.nifi.reporting.prometheus.api.PrometheusMetricsFactory;
+import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.servlet.ServletContextHandler;
+import org.eclipse.jetty.servlet.ServletHolder;
+
+import com.yammer.metrics.core.VirtualMachineMetrics;
+
+import io.prometheus.client.CollectorRegistry;
+import io.prometheus.client.exporter.common.TextFormat;
+
+public class PrometheusServer {
+private static ComponentLog logger;
+private Server server;
+private ServletContextHandler handler;
+
+public static ReportingContext context;
+public static boolean sendJvmMetrics;
+public static String applicationId;
 
 Review comment:
   these shouldn't be static, see above
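A minimal sketch of the direction the reviewer suggests, assuming a constructor-injection shape (the field names follow the quoted code, but this is illustrative, not the PR's final API): hold the per-task configuration as instance state instead of mutable statics, so two reporting tasks cannot overwrite each other's settings.

```java
// Illustrative sketch only: per-instance configuration instead of
// mutable static fields shared across all PrometheusServer instances.
public class PrometheusServerSketch {
    private final boolean sendJvmMetrics;
    private final String applicationId;
    private final String instanceId;

    public PrometheusServerSketch(boolean sendJvmMetrics, String applicationId, String instanceId) {
        this.sendJvmMetrics = sendJvmMetrics;
        this.applicationId = applicationId;
        this.instanceId = instanceId;
    }

    public boolean isSendJvmMetrics() { return sendJvmMetrics; }
    public String getApplicationId() { return applicationId; }
    public String getInstanceId() { return instanceId; }
}
```

With instance fields, two concurrently scheduled reporting tasks each keep their own configuration.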




[GitHub] pepov commented on a change in pull request #3257: NIFI-5435 Prometheus /metrics http endpoint for monitoring integration

2019-02-13 Thread GitBox
pepov commented on a change in pull request #3257: NIFI-5435 Prometheus 
/metrics http endpoint for monitoring integration
URL: https://github.com/apache/nifi/pull/3257#discussion_r256410246
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-reporting-task/src/main/java/org/apache/nifi/reporting/prometheus/PrometheusReportingTask.java
 ##
 @@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting.prometheus;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.nifi.annotation.configuration.DefaultSchedule;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.AbstractReportingTask;
+import org.apache.nifi.reporting.ReportingContext;
+import org.apache.nifi.scheduling.SchedulingStrategy;
+import org.eclipse.jetty.server.Server;
+
+@Tags({ "reporting", "prometheus", "metrics" })
+@CapabilityDescription("")
+@DefaultSchedule(strategy = SchedulingStrategy.TIMER_DRIVEN, period = "1 sec")
 
 Review comment:
   To stay safe, I would make the default something between 10 sec and 1 min 
instead of 1 sec.




[jira] [Updated] (NIFI-6017) ArrayIndexOutOfBounds Load Balancer CorrelationAttributePartitioner

2019-02-13 Thread Mark Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-6017:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ArrayIndexOutOfBounds Load Balancer CorrelationAttributePartitioner
> ---
>
> Key: NIFI-6017
> URL: https://issues.apache.org/jira/browse/NIFI-6017
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Dorian Bugeja
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.9.0
>
> Attachments: 
> 0001-NIFI-6017-ArrayIndexOutOfBounds-Load-Balancer-Correl.patch
>
>
> When trying to perform load balancing on a queue using 'partition by 
> attribute', hashCode() can return a negative value; the '%' operation may 
> then also return a negative value, which is used as an array index and 
> throws an ArrayIndexOutOfBoundsException. 
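The failure mode described above is easy to reproduce in isolation. The sketch below is illustrative (it is not NiFi's actual partitioner code); `Math.floorMod` is one standard fix, since it always yields a result in `[0, n)`:

```java
public class PartitionIndexDemo {
    // Buggy pattern: Java's '%' truncates toward zero, so a negative
    // hashCode() yields a negative remainder, hence a negative index.
    static int naiveIndex(String attribute, int partitionCount) {
        return attribute.hashCode() % partitionCount;
    }

    // Fixed pattern: Math.floorMod always returns a value in [0, partitionCount).
    static int safeIndex(String attribute, int partitionCount) {
        return Math.floorMod(attribute.hashCode(), partitionCount);
    }

    public static void main(String[] args) {
        // "polygenelubricants".hashCode() == Integer.MIN_VALUE
        String key = "polygenelubricants";
        int naive = naiveIndex(key, 3); // -2: used as an array index this throws
        int safe = safeIndex(key, 3);   // 1: always a valid index
        System.out.println(naive + " vs " + safe);
    }
}
```

An alternative fix with the same effect is masking the hash with `& 0x7FFFFFFF` before applying `%`.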



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-6017) ArrayIndexOutOfBounds Load Balancer CorrelationAttributePartitioner

2019-02-13 Thread Mark Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767231#comment-16767231
 ] 

Mark Payne commented on NIFI-6017:
--

Thanks [~SunSatION] that worked perfectly! Have been able to verify behavior 
and reviewed code. +1 merged to master. Thanks again!

> ArrayIndexOutOfBounds Load Balancer CorrelationAttributePartitioner
> ---
>
> Key: NIFI-6017
> URL: https://issues.apache.org/jira/browse/NIFI-6017
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Dorian Bugeja
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.9.0
>
> Attachments: 
> 0001-NIFI-6017-ArrayIndexOutOfBounds-Load-Balancer-Correl.patch
>
>
> When trying to perform load balancing on a queue using 'partition by 
> attribute', hashCode() can return a negative value; the '%' operation may 
> then also return a negative value, which is used as an array index and 
> throws an ArrayIndexOutOfBoundsException. 





[jira] [Commented] (NIFI-6017) ArrayIndexOutOfBounds Load Balancer CorrelationAttributePartitioner

2019-02-13 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767229#comment-16767229
 ] 

ASF subversion and git services commented on NIFI-6017:
---

Commit da8c8a14a13e0f52ba6bdeaa34afdf3bf259d6b7 in nifi's branch 
refs/heads/master from Dorian Bugeja
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=da8c8a1 ]

NIFI-6017 - ArrayIndexOutOfBounds Load Balancer CorrelationAttributePartitioner

Signed-off-by: Mark Payne 


> ArrayIndexOutOfBounds Load Balancer CorrelationAttributePartitioner
> ---
>
> Key: NIFI-6017
> URL: https://issues.apache.org/jira/browse/NIFI-6017
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Dorian Bugeja
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.9.0
>
> Attachments: 
> 0001-NIFI-6017-ArrayIndexOutOfBounds-Load-Balancer-Correl.patch
>
>
> When trying to perform load balancing on a queue using 'partition by 
> attribute', hashCode() can return a negative value; the '%' operation may 
> then also return a negative value, which is used as an array index and 
> throws an ArrayIndexOutOfBoundsException. 





[jira] [Commented] (NIFI-4970) EOF Exception in InvokeHttp when body's response is empty

2019-02-13 Thread Shaun Longworth (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767224#comment-16767224
 ] 

Shaun Longworth commented on NIFI-4970:
---

I have been troubleshooting this recently.  The existing code only checks 
whether the variable responseBody is null, but given how responseBody is 
assigned in the program, it will never be null.  To rectify this, I suggest 
the following update:

 
{code:java}
// 
Nifi\nifi-nar-bundles\nifi-standard-bundle\nifi-standard-processors\src\main\java\org\apache\nifi\processors\standard\InvokeHTTP.java
// Line #824 (v1.8.0)

// Original code
boolean bodyExists = responseBody != null; 

// Suggested update
boolean bodyExists = (responseBody != null ? (responseBody.contentLength() > 0) 
: false);
{code}
 

This ensures the object is not null and, if it is not, checks the length of 
the body.  Note that contentLength() returns -1 when the length is unknown.

Assuming the bodyExists variable gets set accordingly, the remaining portion of 
the code appears to work as expected.
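The suggested check can be exercised in isolation. The helper below models okhttp's ResponseBody with a plain length value (the real class is not needed to show the logic). One caveat worth weighing: contentLength() also returns -1 for responses whose length is simply unknown (e.g. chunked transfer), which this check would treat the same as an empty body.

```java
public class BodyExistsDemo {
    // Mirrors the suggested check: a body "exists" only when the response
    // object is present and reports a positive content length.
    // Caveat: a length of -1 can also mean "unknown" (chunked responses),
    // which this predicate treats the same as empty.
    static boolean bodyExists(Long contentLength) {
        return contentLength != null && contentLength > 0;
    }
}
```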

 

> EOF Exception in InvokeHttp when body's response is empty 
> --
>
> Key: NIFI-4970
> URL: https://issues.apache.org/jira/browse/NIFI-4970
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: Apache NiFi - Version 1.5.0.3.1.1.0-35
>Reporter: Francois Brillon
>Priority: Major
>
> *Description*
> A POST to an API that returns an empty body on success (status code 200) will 
> generate an EOF Exception, causing the processor to always remain in error 
> and routing all flow files to failures, even if the API call succeeded.
> An example of such API is the Streaming API of PowerBI: 
> [https://docs.microsoft.com/en-us/power-bi/service-real-time-streaming]
> *Exception Stack Traces*
> When the +property "Put Response Body In Attribute" is not set+, the 
> exception is as follows:
> {code:java}
> rocessor.exception.FlowFileAccessException: Unable to create ContentClaim due 
> to java.io.EOFException: org.apache.nifi.pro
> cessor.exception.FlowFileAccessException: Failed to import data from 
> buffer(okio.GzipSource@159311b9).inputStream() for St
> andardFlowFileRecord[uuid=05a89e7b-d500-4d48-b034-52c7324fa6e6,claim=,offset=0,name=rtm-vehicle-position-20180313-182039.p
> b,size=0] due to org.apache.nifi.processor.exception.FlowFileAccessException: 
> Unable to create ContentClaim due to java.io
> .EOFException
> org.apache.nifi.processor.exception.FlowFileAccessException: Failed to import 
> data from buffer(okio.GzipSource@159311b9).i
> nputStream() for 
> StandardFlowFileRecord[uuid=05a89e7b-d500-4d48-b034-52c7324fa6e6,claim=,offset=0,name=rtm-vehicle-positio
> n-20180313-182039.pb,size=0] due to 
> org.apache.nifi.processor.exception.FlowFileAccessException: Unable to create 
> ContentC
> laim due to java.io.EOFException
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.importFrom(StandardProcessSession.java:2942)
> at 
> org.apache.nifi.processors.standard.InvokeHTTP.onTrigger(InvokeHTTP.java:817)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.nifi.processor.exception.FlowFileAccessException: 
> Unable to create ContentClaim due to java.io.EOFException
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.importFrom(StandardProcessSession.java:2935)
> ... 13 common frames omitted
> Caused by: java.io.EOFException: null
> at okio.RealBufferedSource.require(RealBufferedSource.java:59)
> at okio.GzipSource.consumeHeader(GzipSource.java:114)
> at okio.GzipSource.read(GzipSource.java:73)
> at 

[GitHub] Salatich commented on a change in pull request #3302: NIFI-6000 Catch also IllegalArgumentException in ConvertAvroToORC hiv…

2019-02-13 Thread GitBox
Salatich commented on a change in pull request #3302: NIFI-6000 Catch also 
IllegalArgumentException in ConvertAvroToORC hiv…
URL: https://github.com/apache/nifi/pull/3302#discussion_r256310662
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/ConvertAvroToORC.java
 ##
 @@ -283,8 +283,8 @@ public void onTrigger(final ProcessContext context, final 
ProcessSession session
 session.transfer(flowFile, REL_SUCCESS);
 session.getProvenanceReporter().modifyContent(flowFile, "Converted 
"+totalRecordCount.get()+" records", System.currentTimeMillis() - startTime);
 
-} catch (final ProcessException pe) {
-getLogger().error("Failed to convert {} from Avro to ORC due to 
{}; transferring to failure", new Object[]{flowFile, pe});
+} catch (ProcessException | IllegalArgumentException e) {
+getLogger().error("Failed to convert {} from Avro to ORC due to 
{}; transferring to failure", new Object[]{flowFile, e});
 
 Review comment:
   @mattyb149 Hi! Thx for valuable remarks. I added your commit to my PR and 
squashed it. Could you review it again? 




[jira] [Updated] (NIFI-4886) Slack processor - allow expression language in Webhook Url property

2019-02-13 Thread Pierre Villard (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4886:
-
   Resolution: Fixed
Fix Version/s: 1.9.0
   Status: Resolved  (was: Patch Available)

> Slack processor - allow expression language in Webhook Url property
> ---
>
> Key: NIFI-4886
> URL: https://issues.apache.org/jira/browse/NIFI-4886
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Eugeny Kolpakov
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.9.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Webhook URL in PutSlack processor does not allow expression language.
> This makes it somewhat problematic to use, especially across multiple 
> NiFi environments (staging/production), not to mention that it is quite 
> tedious to change.





  1   2   >