[GitHub] nifi pull request #2990: NIFI-375: Added operation policy

2018-09-12 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217233852
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/ControllerServiceEntityMerger.java
 ---
@@ -137,7 +138,9 @@ public static void 
mergeControllerServiceReferences(final Set

[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612914#comment-16612914
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217233852
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/ControllerServiceEntityMerger.java
 ---
@@ -137,7 +138,9 @@ public static void 
mergeControllerServiceReferences(final Set

> New user role: Operator who can start and stop components
> -
>
> Key: NIFI-375
> URL: https://issues.apache.org/jira/browse/NIFI-375
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Daniel Ueberfluss
>Assignee: Koji Kawamura
>Priority: Major
>
> Would like to have a user role that allows a user to stop/start processors 
> but perform no other changes to the dataflow.
> This would allow users to address simple problems without providing full 
> access to modifying a data flow. 
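The requested role boils down to a permission set that includes start/stop but excludes flow modification. A minimal sketch of that semantics (role and action names here are illustrative only, not NiFi's actual authorization API):

```python
# Hypothetical sketch of an "operator" role: may start/stop components,
# but may not modify the dataflow. Names are illustrative.
ROLE_PERMISSIONS = {
    "operator": {"start", "stop"},
    "editor": {"start", "stop", "modify"},
}

def is_authorized(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("operator", "stop"))    # True
print(is_authorized("operator", "modify"))  # False
```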



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5534) Create a Nifi Processor using Boilerpipe Article Extractor

2018-09-12 Thread Paul Vidal (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612879#comment-16612879
 ] 

Paul Vidal commented on NIFI-5534:
--

That makes sense. Let me see if the boilerpipe library has something that could 
read the content of GetHTTP / InvokeHTTP without passing a URL. Not sure that 
is the case.
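For context, the pattern under discussion is extracting text from HTML that has already been fetched (e.g. the FlowFile content produced by GetHTTP/InvokeHTTP) rather than fetching from a URL inside the processor. The sketch below uses only the Python standard library as a stand-in; whether boilerpipe's extractors accept a raw HTML string should be confirmed against its API.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML string, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def extract_text(html: str) -> str:
    """Extract visible text from already-fetched HTML content."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

html = "<html><head><style>p{}</style></head><body><p>Hello article</p></body></html>"
print(extract_text(html))  # Hello article
```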

> Create a Nifi Processor using Boilerpipe Article Extractor
> --
>
> Key: NIFI-5534
> URL: https://issues.apache.org/jira/browse/NIFI-5534
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Paul Vidal
>Priority: Minor
>  Labels: github-import
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Using the boilerpipe library ([https://code.google.com/archive/p/boilerpipe/] 
> ), I created a simple processor that reads the content of a URL and extracts 
> its text into a flowfile.
> I think it is a good complement to the HTML nar bundle.
>  
> Link to my implementation: 
> https://github.com/paulvid/nifi/tree/NIFI-5534/nifi-nar-bundles/nifi-html-bundle/nifi-html-processors/





[jira] [Commented] (NIFI-5569) Add tags to Route* processors for discoverability

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612783#comment-16612783
 ] 

ASF GitHub Bot commented on NIFI-5569:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2984


> Add tags to Route* processors for discoverability
> -
>
> Key: NIFI-5569
> URL: https://issues.apache.org/jira/browse/NIFI-5569
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Affects Versions: 1.7.1
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Trivial
>  Labels: beginner, documentation, keyword, label
> Fix For: 1.8.0
>
>
> In a fit of forgetfulness, I could not remember the {{RouteOn*}} processors 
> when trying to detect the presence of some bytes in flowfile content. I 
> propose adding the keywords "find", "search", "scan", and "detect" to the 
> following processors, as they are used for those functions but do not come up 
> in a search for those keywords. 
> * {{RouteOnAttribute}}
> * {{RouteOnContent}}
> * {{RouteText}}
> Additionally, {{ScanContent}} and {{ReplaceText}} can have additional 
> keywords added to improve discoverability. 
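The rationale above is how processor discovery works: a keyword search only surfaces a processor if the term appears in its tags (or name/description). A toy sketch of tag-based lookup; the tag sets below are illustrative, not the processors' actual annotations:

```python
# Toy model of keyword-based processor discovery. Tag sets are
# illustrative examples, not the real @Tags annotations.
PROCESSOR_TAGS = {
    "RouteOnAttribute": {"route", "attribute", "find", "search"},
    "RouteOnContent": {"route", "content", "find", "search", "detect", "scan"},
    "RouteText": {"route", "text", "find", "search"},
}

def find_processors(keyword: str):
    """Return processor names whose tag set contains the keyword."""
    return sorted(name for name, tags in PROCESSOR_TAGS.items()
                  if keyword in tags)

print(find_processors("detect"))  # ['RouteOnContent']
```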





[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612784#comment-16612784
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2999


> Clarify putMongo documentation
> --
>
> Key: NIFI-5589
> URL: https://issues.apache.org/jira/browse/NIFI-5589
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Niemiec
>Assignee: Vijeta Hingorani
>Priority: Minor
>
> Today the PutMongo documentation is very vague and leads to a lot of playing 
> around to understand exactly how it works. We should improve the documentation 
> so that others can immediately start to use this processor successfully. 
>  
> My largest issues were around understanding how the UpdateQuery works, and 
> the expected content and operators that can be used when performing the update 
> with operators rather than just replacing the entire document. 
>  
>  
> Here is a misc note I made on my experience doing this.
> With the PutMongo processor, the UpdateQuery is like a find() in the mongo 
> CLI: all documents that match the find will be replaced with the FlowFile 
> content. The update mode has 2 choices: whole document or with operators. If 
> you're updating the entire document, it expects the JSON to be properly 
> formatted. The UpdateQuery will return to this processor the documents which 
> need to be completely replaced with the incoming FlowFile content. If you're 
> using it with operators, the FlowFile content is expected to be ONLY the 
> operator part, e.g. {$set: {"f1": "val1"}, $inc: {"count": 10}}; it does 
> not support the find() portion that you would expect in the CLI, as that 
> part is the 'UpdateQuery'.
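To make the two modes concrete, here is a sketch of the payloads involved, expressed as Python dicts serialized to JSON (field names and values are illustrative only):

```python
import json

# UpdateQuery: selects the documents to update, like find() in the mongo CLI.
update_query = {"_id": "doc1"}

# Whole-document mode: FlowFile content is the complete replacement document.
whole_document = {"_id": "doc1", "f1": "val1", "count": 10}

# Operators mode: FlowFile content is ONLY the operator part,
# with no embedded find()/query portion.
with_operators = {"$set": {"f1": "val1"}, "$inc": {"count": 10}}

print(json.dumps(with_operators, sort_keys=True))
```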





[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612782#comment-16612782
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2999
  
Merged. Thanks for the contribution.


> Clarify putMongo documentation
> --
>
> Key: NIFI-5589
> URL: https://issues.apache.org/jira/browse/NIFI-5589
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Niemiec
>Assignee: Vijeta Hingorani
>Priority: Minor
>
> Today the PutMongo documentation is very vague and leads to a lot of playing 
> around to understand exactly how it works. We should improve the documentation 
> so that others can immediately start to use this processor successfully. 
>  
> My largest issues were around understanding how the UpdateQuery works, and 
> the expected content and operators that can be used when performing the update 
> with operators rather than just replacing the entire document. 
>  
>  
> Here is a misc note I made on my experience doing this.
> With the PutMongo processor, the UpdateQuery is like a find() in the mongo 
> CLI: all documents that match the find will be replaced with the FlowFile 
> content. The update mode has 2 choices: whole document or with operators. If 
> you're updating the entire document, it expects the JSON to be properly 
> formatted. The UpdateQuery will return to this processor the documents which 
> need to be completely replaced with the incoming FlowFile content. If you're 
> using it with operators, the FlowFile content is expected to be ONLY the 
> operator part, e.g. {$set: {"f1": "val1"}, $inc: {"count": 10}}; it does 
> not support the find() portion that you would expect in the CLI, as that 
> part is the 'UpdateQuery'.





[GitHub] nifi pull request #2999: NIFI-5589 : Clarify PutMongo documentation

2018-09-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2999


---


[GitHub] nifi pull request #2984: NIFI-5569 Added keywords to Route* and ScanAttribut...

2018-09-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2984


---


[GitHub] nifi issue #2999: NIFI-5589 : Clarify PutMongo documentation

2018-09-12 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2999
  
Merged. Thanks for the contribution.


---


[jira] [Commented] (NIFI-5560) Sub directory(symbolic link to directory) files are not getting listed in ListSFTP(ListSFTP does not Follow symbolic links)

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612758#comment-16612758
 ] 

ASF GitHub Bot commented on NIFI-5560:
--

GitHub user hemantha-kumara opened a pull request:

https://github.com/apache/nifi/pull/3000

NIFI-5560 Added Follow SYMLINK support for ListFTP & ListSFTP and GetFTP & 
GetSFTP Processors



Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hemantha-kumara/nifi nifi-5560

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3000.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3000


commit 4990c3f862746370dc93bdd3ee556cd17ee60eff
Author: Kumara M S Hemantha 
Date:   2018-09-12T20:49:32Z

NIFI-5560 Added Follow SYMLINK support for listFTP & listSFTP and getFTP & 
getSFTP processors




> Sub directory(symbolic link to directory) files are not getting listed in 
> ListSFTP(ListSFTP does not Follow symbolic links)
> ---
>
> Key: NIFI-5560
> URL: https://issues.apache.org/jira/browse/NIFI-5560
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.7.1
>Reporter: Hemantha kumara M S
>Assignee: Hemantha kumara M S
>Priority: Major
>
> *Here is the configuration*
> 
> *SFTP Server side:*
> -bash-4.2$ mkdir -p /tmp/testData
>  -bash-4.2$
>  -bash-4.2$ mkdir -p /tmp/toRead
>  -bash-4.2$ ln -s /tmp/testData /tmp/toRead/data1
>  -bash-4.2$ touch /tmp/testData/1.txt
>  -bash-4.2$ touch /tmp/testData/2.txt
>  -bash-4.2$ touch /tmp/toRead/t.txt
> -bash-4.2$ mkdir /tmp/toRead/data2
> -bash-4.2$ touch /tmp/toRead/data2/22.txt
> -bash-4.2$ cd /tmp/toRead/
> -bash-4.2$ tree
> .
> ├── data1 -> /tmp/testData
> ├── data2
> │   └── 22.txt
> └── t.txt
> 2 directories, 2 files
> -bash-4.2$ pwd
> /tmp/toRead
> -bash-4.2$ tree
> .
> ├── data1 -> /tmp/testData
> ├── data2
> │   └── 22.txt
> └── t.txt
> 2 directories, 2 files
> -bash-4.2$ touch data
> data1/ data2/
> -bash-4.2$ touch data2/22.txt
> -bash-4.2$ touch t.txt
> -bash-4.2$ tree /tmp/testData
> /tmp/testData
> ├── 1.txt
> └── 2.txt
> 0 directories, 2 files
>  
> *Nifi:*
> Configured ListSFTP  +Remote Path+ to +/tmp/toRead/+ and +Search Recursively+ 
> to +true+
>   
> *+Expected result:+*
> Should list 4 files(1.txt, 2.txt, t.txt, data2/22.txt)
> *+Actual result:+*
> listed only two files(t.txt, data2/22.txt)
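The requested behavior corresponds to the followlinks option of a recursive directory walk. A local-filesystem sketch that reproduces the report's layout and shows the difference (requires a platform where symlinks are permitted):

```python
import os
import tempfile

def list_files(root: str, follow_symlinks: bool):
    """Recursively list files under root, optionally following dir symlinks."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root, followlinks=follow_symlinks):
        found.extend(os.path.join(dirpath, f) for f in filenames)
    return sorted(os.path.relpath(p, root) for p in found)

# Recreate the reported layout: toRead/data1 is a symlink to testData.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "testData"))
os.makedirs(os.path.join(base, "toRead", "data2"))
for p in ("testData/1.txt", "testData/2.txt", "toRead/t.txt", "toRead/data2/22.txt"):
    open(os.path.join(base, p), "w").close()
os.symlink(os.path.join(base, "testData"), os.path.join(base, "toRead", "data1"))

root = os.path.join(base, "toRead")
print(list_files(root, follow_symlinks=False))  # ['data2/22.txt', 't.txt']
print(list_files(root, follow_symlinks=True))
# ['data1/1.txt', 'data1/2.txt', 'data2/22.txt', 't.txt']
```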





[GitHub] nifi pull request #3000: NIFI-5560 Added Follow SYMLINK support for ListFTP ...

2018-09-12 Thread hemantha-kumara
GitHub user hemantha-kumara opened a pull request:

https://github.com/apache/nifi/pull/3000

NIFI-5560 Added Follow SYMLINK support for ListFTP & ListSFTP and GetFTP & 
GetSFTP Processors



Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hemantha-kumara/nifi nifi-5560

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3000.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3000


commit 4990c3f862746370dc93bdd3ee556cd17ee60eff
Author: Kumara M S Hemantha 
Date:   2018-09-12T20:49:32Z

NIFI-5560 Added Follow SYMLINK support for listFTP & listSFTP and getFTP & 
getSFTP processors




---


[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612756#comment-16612756
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user josephxsxn commented on the issue:

https://github.com/apache/nifi/pull/2999
  
Thanks for helping with this @MikeThomsen. Our dev team spent a long time 
trying to understand exactly how to use the processor, so we hope these 
clarifications will save someone else in the future :) Once you know, it's 
very straightforward.


> Clarify putMongo documentation
> --
>
> Key: NIFI-5589
> URL: https://issues.apache.org/jira/browse/NIFI-5589
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Niemiec
>Assignee: Vijeta Hingorani
>Priority: Minor
>
> Today the PutMongo documentation is very vague and leads to a lot of playing 
> around to understand exactly how it works. We should improve the documentation 
> so that others can immediately start to use this processor successfully. 
>  
> My largest issues were around understanding how the UpdateQuery works, and 
> the expected content and operators that can be used when performing the update 
> with operators rather than just replacing the entire document. 
>  
>  
> Here is a misc note I made on my experience doing this.
> With the PutMongo processor, the UpdateQuery is like a find() in the mongo 
> CLI: all documents that match the find will be replaced with the FlowFile 
> content. The update mode has 2 choices: whole document or with operators. If 
> you're updating the entire document, it expects the JSON to be properly 
> formatted. The UpdateQuery will return to this processor the documents which 
> need to be completely replaced with the incoming FlowFile content. If you're 
> using it with operators, the FlowFile content is expected to be ONLY the 
> operator part, e.g. {$set: {"f1": "val1"}, $inc: {"count": 10}}; it does 
> not support the find() portion that you would expect in the CLI, as that 
> part is the 'UpdateQuery'.





[GitHub] nifi issue #2999: NIFI-5589 : Clarify PutMongo documentation

2018-09-12 Thread josephxsxn
Github user josephxsxn commented on the issue:

https://github.com/apache/nifi/pull/2999
  
Thanks for helping with this @MikeThomsen. Our dev team spent a long time 
trying to understand exactly how to use the processor, so we hope these 
clarifications will save someone else in the future :) Once you know, it's 
very straightforward.


---


[jira] [Updated] (MINIFICPP-604) Convert C++ namespace operator to Java packing to keep responses aligned.

2018-09-12 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated MINIFICPP-604:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Convert C++ namespace operator to Java packing to keep responses aligned. 
> --
>
> Key: MINIFICPP-604
> URL: https://issues.apache.org/jira/browse/MINIFICPP-604
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> This ensures that we have a uniform response from agents. 
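The conversion itself is a small string transformation: C++ namespace separators ("::") become Java-style package dots, so agent responses use one uniform notation. A sketch (the function name is hypothetical, not the project's actual API):

```python
def to_java_style(identifier: str) -> str:
    """Convert a C++ namespace path such as 'org::apache::nifi::minifi'
    to Java package notation 'org.apache.nifi.minifi'."""
    return identifier.replace("::", ".").strip(".")

print(to_java_style("org::apache::nifi::minifi::processors"))
# org.apache.nifi.minifi.processors
```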





[jira] [Commented] (MINIFICPP-604) Convert C++ namespace operator to Java packing to keep responses aligned.

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612719#comment-16612719
 ] 

ASF GitHub Bot commented on MINIFICPP-604:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/396


> Convert C++ namespace operator to Java packing to keep responses aligned. 
> --
>
> Key: MINIFICPP-604
> URL: https://issues.apache.org/jira/browse/MINIFICPP-604
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> This ensures that we have a uniform response from agents. 





[GitHub] nifi-minifi-cpp pull request #396: MINIFICPP-604: Convert C++ namespace oper...

2018-09-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/396


---


[jira] [Updated] (MINIFICPP-604) Convert C++ namespace operator to Java packing to keep responses aligned.

2018-09-12 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated MINIFICPP-604:
--
Fix Version/s: 0.6.0

> Convert C++ namespace operator to Java packing to keep responses aligned. 
> --
>
> Key: MINIFICPP-604
> URL: https://issues.apache.org/jira/browse/MINIFICPP-604
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
> Fix For: 0.6.0
>
>
> This ensures that we have a uniform response from agents. 





[jira] [Updated] (MINIFICPP-604) Convert C++ namespace operator to Java packing to keep responses aligned.

2018-09-12 Thread Aldrin Piri (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated MINIFICPP-604:
--
Status: Patch Available  (was: Open)

> Convert C++ namespace operator to Java packing to keep responses aligned. 
> --
>
> Key: MINIFICPP-604
> URL: https://issues.apache.org/jira/browse/MINIFICPP-604
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> This ensures that we have a uniform response from agents. 





[jira] [Commented] (MINIFICPP-604) Convert C++ namespace operator to Java packing to keep responses aligned.

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612716#comment-16612716
 ] 

ASF GitHub Bot commented on MINIFICPP-604:
--

Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/396
  
Changes look good. Verified build and tests, and saw the appropriate format 
change in generated output.

Will merge.


> Convert C++ namespace operator to Java packing to keep responses aligned. 
> --
>
> Key: MINIFICPP-604
> URL: https://issues.apache.org/jira/browse/MINIFICPP-604
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> This ensures that we have a uniform response from agents. 





[GitHub] nifi-minifi-cpp issue #396: MINIFICPP-604: Convert C++ namespace operators t...

2018-09-12 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/396
  
Changes look good. Verified build and tests, and saw the appropriate format 
change in generated output.

Will merge.


---


[jira] [Assigned] (NIFI-5560) Sub directory(symbolic link to directory) files are not getting listed in ListSFTP(ListSFTP does not Follow symbolic links)

2018-09-12 Thread Hemantha kumara M S (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemantha kumara M S reassigned NIFI-5560:
-

Assignee: Hemantha kumara M S

> Sub directory(symbolic link to directory) files are not getting listed in 
> ListSFTP(ListSFTP does not Follow symbolic links)
> ---
>
> Key: NIFI-5560
> URL: https://issues.apache.org/jira/browse/NIFI-5560
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.7.1
>Reporter: Hemantha kumara M S
>Assignee: Hemantha kumara M S
>Priority: Major
>
> *Here is the configuration*
> 
> *SFTP Server side:*
> -bash-4.2$ mkdir -p /tmp/testData
>  -bash-4.2$
>  -bash-4.2$ mkdir -p /tmp/toRead
>  -bash-4.2$ ln -s /tmp/testData /tmp/toRead/data1
>  -bash-4.2$ touch /tmp/testData/1.txt
>  -bash-4.2$ touch /tmp/testData/2.txt
>  -bash-4.2$ touch /tmp/toRead/t.txt
> -bash-4.2$ mkdir /tmp/toRead/data2
> -bash-4.2$ touch /tmp/toRead/data2/22.txt
> -bash-4.2$ cd /tmp/toRead/
> -bash-4.2$ tree
> .
> ├── data1 -> /tmp/testData
> ├── data2
> │   └── 22.txt
> └── t.txt
> 2 directories, 2 files
> -bash-4.2$ pwd
> /tmp/toRead
> -bash-4.2$ tree
> .
> ├── data1 -> /tmp/testData
> ├── data2
> │   └── 22.txt
> └── t.txt
> 2 directories, 2 files
> -bash-4.2$ touch data
> data1/ data2/
> -bash-4.2$ touch data2/22.txt
> -bash-4.2$ touch t.txt
> -bash-4.2$ tree /tmp/testData
> /tmp/testData
> ├── 1.txt
> └── 2.txt
> 0 directories, 2 files
>  
> *Nifi:*
> Configured ListSFTP  +Remote Path+ to +/tmp/toRead/+ and +Search Recursively+ 
> to +true+
>   
> *+Expected result:+*
> Should list 4 files(1.txt, 2.txt, t.txt, data2/22.txt)
> *+Actual result:+*
> listed only two files(t.txt, data2/22.txt)





[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612688#comment-16612688
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user VijetaH commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2999#discussion_r217175661
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/PutMongo.java
 ---
@@ -112,8 +112,10 @@
 .allowableValues(UPDATE_WITH_DOC, UPDATE_WITH_OPERATORS)
 .defaultValue(UPDATE_WITH_DOC.getValue())
 .description("Choose an update mode. You can either supply a JSON 
document to use as a direct replacement " +
-"or specify a document that contains update operators like 
$set and $unset")
-.build();
+"or specify a document that contains update operators like 
$set, $unset, and $inc."+
+"When Operators mode is enabled, the flowfile 
content is expected to be the operator part"+
+"for example: {$set:{\"key\": 
\"value\"},$inc:{\"count\":1234}} and Update query has to be the record to be 
updated")
--- End diff --

https://github.com/Effyis/nifi/tree/NIFI-5589


> Clarify putMongo documentation
> --
>
> Key: NIFI-5589
> URL: https://issues.apache.org/jira/browse/NIFI-5589
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Niemiec
>Assignee: Vijeta Hingorani
>Priority: Minor
>
> Today the PutMongo documentation is very vague and leads to a lot of playing 
> around to understand exactly how it works. We should improve the documentation 
> so that others can immediately start to use this processor successfully. 
>  
> My largest issues were around understanding how the UpdateQuery works, and 
> the expected content and operators that can be used when performing the update 
> with operators rather than just replacing the entire document. 
>  
>  
> Here is a misc note I made on my experience doing this.
> With the PutMongo processor, the UpdateQuery is like a find() in the mongo 
> CLI: all documents that match the find will be replaced with the FlowFile 
> content. The update mode has 2 choices: whole document or with operators. If 
> you're updating the entire document, it expects the JSON to be properly 
> formatted. The UpdateQuery will return to this processor the documents which 
> need to be completely replaced with the incoming FlowFile content. If you're 
> using it with operators, the FlowFile content is expected to be ONLY the 
> operator part, e.g. {$set: {"f1": "val1"}, $inc: {"count": 10}}; it does 
> not support the find() portion that you would expect in the CLI, as that 
> part is the 'UpdateQuery'.





[GitHub] nifi pull request #2999: NIFI-5589 : Clarify PutMongo documentation

2018-09-12 Thread VijetaH
Github user VijetaH commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2999#discussion_r217175661
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/PutMongo.java
 ---
@@ -112,8 +112,10 @@
 .allowableValues(UPDATE_WITH_DOC, UPDATE_WITH_OPERATORS)
 .defaultValue(UPDATE_WITH_DOC.getValue())
 .description("Choose an update mode. You can either supply a JSON 
document to use as a direct replacement " +
-"or specify a document that contains update operators like 
$set and $unset")
-.build();
+"or specify a document that contains update operators like 
$set, $unset, and $inc."+
+"When Operators mode is enabled, the flowfile 
content is expected to be the operator part"+
+"for example: {$set:{\"key\": 
\"value\"},$inc:{\"count\":1234}} and Update query has to be the record to be 
updated")
--- End diff --

https://github.com/Effyis/nifi/tree/NIFI-5589


---


[jira] [Commented] (MINIFICPP-604) Convert C++ namespace operator to Java packing to keep responses aligned.

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612685#comment-16612685
 ] 

ASF GitHub Bot commented on MINIFICPP-604:
--

Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/396
  
reviewing


> Convert C++ namespace operator to Java packing to keep responses aligned. 
> --
>
> Key: MINIFICPP-604
> URL: https://issues.apache.org/jira/browse/MINIFICPP-604
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> This ensures that we have a uniform response from agents. 





[GitHub] nifi-minifi-cpp issue #396: MINIFICPP-604: Convert C++ namespace operators t...

2018-09-12 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/396
  
reviewing


---


[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612674#comment-16612674
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2999#discussion_r217172110
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/PutMongo.java
 ---
@@ -112,8 +112,10 @@
 .allowableValues(UPDATE_WITH_DOC, UPDATE_WITH_OPERATORS)
 .defaultValue(UPDATE_WITH_DOC.getValue())
 .description("Choose an update mode. You can either supply a JSON 
document to use as a direct replacement " +
-"or specify a document that contains update operators like 
$set and $unset")
-.build();
+"or specify a document that contains update operators like 
$set, $unset, and $inc."+
+"When Operators mode is enabled, the flowfile 
content is expected to be the operator part"+
+"for example: {$set:{\"key\": 
\"value\"},$inc:{\"count\":1234}} and Update query has to be the record to be 
updated")
--- End diff --

(Just push that branch to GitHub, I'll fetch it and merge)


> Clarify putMongo documentation
> --
>
> Key: NIFI-5589
> URL: https://issues.apache.org/jira/browse/NIFI-5589
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Niemiec
>Assignee: Vijeta Hingorani
>Priority: Minor
>
> Today the putMongoDB documentation is very vague and leads to a lot of playing
> around to understand exactly how it works. We should improve the documentation
> so that others can immediately start to use this processor successfully.
>
> My largest issues were around understanding how the UpdateQuery works, and
> the expected content and operators that can be used when performing the update
> with operators rather than replacing the entire document.
>
> Here is a misc note I made on my experience doing this.
> With the putMongo processor the updateQuery is like a find() in the mongo
> CLI: all documents that match the find will be replaced with the FlowFile
> content. The update mode has 2 choices: whole document or with operators. If
> you're updating the entire document, it expects the JSON to be properly
> formatted. The UpdateQuery will return to this processor the documents which
> need to be completely replaced with the incoming FlowFile content. If you're
> using it with operators, it's expected that the FlowFile content ONLY be the
> operator part, e.g. {$set: {"f1": "val1"}, $inc: {"count": 10}}; it
> does not support the find() portion that you would expect in the CLI; that
> part is the 'UpdateQuery'.
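The operator-mode behaviour described above can be sketched with a small, self-contained simulation. This illustrates MongoDB's $set/$inc/$unset semantics as the description explains them; it is not NiFi or MongoDB driver code, and the function name and in-memory "collection" are hypothetical.

```python
# Pure-Python simulation of update-with-operators: the "Update Query"
# plays the role of find() and selects the record, while the FlowFile
# content supplies ONLY the operator document.

def apply_update_operators(document, operator_doc):
    """Apply a {$set: ..., $inc: ..., $unset: ...} document to a dict."""
    updated = dict(document)
    for field, value in operator_doc.get("$set", {}).items():
        updated[field] = value                            # $set overwrites the field
    for field, amount in operator_doc.get("$inc", {}).items():
        updated[field] = updated.get(field, 0) + amount   # $inc adds to it
    for field in operator_doc.get("$unset", {}):
        updated.pop(field, None)                          # $unset removes the field
    return updated

collection = [{"_id": 1, "f1": "old", "count": 5}]
update_query = {"_id": 1}  # corresponds to the Update Query property

# FlowFile content in operator mode is only the operator part:
flowfile_content = {"$set": {"f1": "val1"}, "$inc": {"count": 10}}

collection = [
    apply_update_operators(doc, flowfile_content)
    if all(doc.get(k) == v for k, v in update_query.items()) else doc
    for doc in collection
]
print(collection)  # [{'_id': 1, 'f1': 'val1', 'count': 15}]
```

In whole-document mode, by contrast, the matched document would simply be replaced by the FlowFile content as-is.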







[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612673#comment-16612673
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2999#discussion_r217171902
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/PutMongo.java
 ---
@@ -112,8 +112,10 @@
 .allowableValues(UPDATE_WITH_DOC, UPDATE_WITH_OPERATORS)
 .defaultValue(UPDATE_WITH_DOC.getValue())
 .description("Choose an update mode. You can either supply a JSON 
document to use as a direct replacement " +
-"or specify a document that contains update operators like 
$set and $unset")
-.build();
+"or specify a document that contains update operators like 
$set, $unset, and $inc."+
+"When Operators mode is enabled, the flowfile 
content is expected to be the operator part"+
+"for example: {$set:{\"key\": 
\"value\"},$inc:{\"count\":1234}} and Update query has to be the record to be 
updated")
--- End diff --

Can you git cherry-pick this onto a branch named NIFI-5589? It's currently 
sitting on your master.









[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612672#comment-16612672
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user VijetaH commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2999#discussion_r217171234
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/PutMongo.java
 ---
@@ -112,8 +112,10 @@
 .allowableValues(UPDATE_WITH_DOC, UPDATE_WITH_OPERATORS)
 .defaultValue(UPDATE_WITH_DOC.getValue())
 .description("Choose an update mode. You can either supply a JSON 
document to use as a direct replacement " +
-"or specify a document that contains update operators like 
$set and $unset")
-.build();
+"or specify a document that contains update operators like 
$set, $unset, and $inc."+
+"When Operators mode is enabled, the flowfile 
content is expected to be the operator part"+
+"for example: {$set:{\"key\": 
\"value\"},$inc:{\"count\":1234}} and Update query has to be the record to be 
updated")
--- End diff --

@MikeThomsen yes please go ahead with the changes.









[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612670#comment-16612670
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2999#discussion_r217170815
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/PutMongo.java
 ---
@@ -112,8 +112,10 @@
 .allowableValues(UPDATE_WITH_DOC, UPDATE_WITH_OPERATORS)
 .defaultValue(UPDATE_WITH_DOC.getValue())
 .description("Choose an update mode. You can either supply a JSON 
document to use as a direct replacement " +
-"or specify a document that contains update operators like 
$set and $unset")
-.build();
+"or specify a document that contains update operators like 
$set, $unset, and $inc."+
+"When Operators mode is enabled, the flowfile 
content is expected to be the operator part"+
+"for example: {$set:{\"key\": 
\"value\"},$inc:{\"count\":1234}} and Update query has to be the record to be 
updated")
--- End diff --

I'd suggest we strike that part and replace it with something like this:

> and the update query will come from the configured `Update Query` 
property.

If that works for you, I can make the change myself.









[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612666#comment-16612666
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2999#discussion_r217169836
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/PutMongo.java
 ---
@@ -112,8 +112,10 @@
 .allowableValues(UPDATE_WITH_DOC, UPDATE_WITH_OPERATORS)
 .defaultValue(UPDATE_WITH_DOC.getValue())
 .description("Choose an update mode. You can either supply a JSON 
document to use as a direct replacement " +
-"or specify a document that contains update operators like 
$set and $unset")
-.build();
+"or specify a document that contains update operators like 
$set, $unset, and $inc."+
+"When Operators mode is enabled, the flowfile 
content is expected to be the operator part"+
+"for example: {$set:{\"key\": 
\"value\"},$inc:{\"count\":1234}} and Update query has to be the record to be 
updated")
--- End diff --

The part here after `and` is somewhat ambiguous and could potentially 
confuse people doing upserts. Other than that, it looks fine.









[jira] [Commented] (NIFI-5534) Create a Nifi Processor using Boilerpipe Article Extractor

2018-09-12 Thread Brandon DeVries (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612604#comment-16612604
 ] 

Brandon DeVries commented on NIFI-5534:
---

[~paulvid3], this is interesting, but I think it might be more appropriate to 
limit your processor to simply the extraction portion.  In other words, leave 
getting the HTML to be parsed to GetHTTP / InvokeHTTP / whatever, and simply 
operate on the contents of the FlowFile coming in to your processor...
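The split Brandon suggests can be sketched as a pure content-in, text-out transform: fetching stays in GetHTTP / InvokeHTTP, and the extractor only operates on the content it is handed. A minimal sketch, assuming nothing about the actual Boilerpipe or NiFi APIs (the class and function names are illustrative, and the extractor is a naive stand-in for Boilerpipe):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Naive stand-in for Boilerpipe: collect visible text, skip script/style."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def extract_text(flowfile_content: str) -> str:
    # Operates purely on the incoming content -- no HTTP calls here.
    parser = TextExtractor()
    parser.feed(flowfile_content)
    return " ".join(parser.chunks)

html = "<html><head><script>x=1</script></head><body><p>Hello NiFi</p></body></html>"
print(extract_text(html))  # Hello NiFi
```

Keeping the transform free of network I/O is what lets it compose with any upstream fetch processor.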

> Create a Nifi Processor using Boilerpipe Article Extractor
> --
>
> Key: NIFI-5534
> URL: https://issues.apache.org/jira/browse/NIFI-5534
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Paul Vidal
>Priority: Minor
>  Labels: github-import
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Using the boilerpipe library ([https://code.google.com/archive/p/boilerpipe/] 
> ), I created a simple processor that reads the content of a URL and extracts
> its text into a FlowFile.
> I think it is a good complement to the HTML NAR bundle.
>  
> Link to my implementation: 
> https://github.com/paulvid/nifi/tree/NIFI-5534/nifi-nar-bundles/nifi-html-bundle/nifi-html-processors/





[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612596#comment-16612596
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user VijetaH commented on the issue:

https://github.com/apache/nifi/pull/2999
  
@MikeThomsen Could you please review MongoDB document changes?









[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612586#comment-16612586
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217135735
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/RemoteProcessGroupResource.java
 ---
@@ -557,6 +750,90 @@ public Response updateRemoteProcessGroup(
 );
 }
 
+/**
+ * Updates the operational status for the specified remote process 
group with the specified value.
+ *
+ * @param httpServletRequest   request
+ * @param id   The id of the remote process group 
to update.
+ * @param requestRemotePortRunStatusEntity A remotePortRunStatusEntity.
+ * @return A remoteProcessGroupEntity.
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("{id}/run-status")
+@ApiOperation(
+value = "Updates run status of a remote process group",
+response = RemoteProcessGroupEntity.class,
+authorizations = {
+@Authorization(value = "Write - 
/remote-process-groups/{uuid} or /operation/remote-process-groups/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRemoteProcessGroupRunStatus(
+@Context HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The remote process group id.",
+required = true
+)
+@PathParam("id") String id,
+@ApiParam(
+value = "The remote process group run status.",
+required = true
+) final RemotePortRunStatusEntity 
requestRemotePortRunStatusEntity) {
+
+if (requestRemotePortRunStatusEntity == null) {
+throw new IllegalArgumentException("Remote process group run 
status must be specified.");
+}
+
+if (requestRemotePortRunStatusEntity.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRemotePortRunStatusEntity.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, 
requestRemotePortRunStatusEntity);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRemotePortRunStatusEntity.isDisconnectedNodeAcknowledged());
+}
+
+// handle expects request (usually from the cluster manager)
+final Revision requestRevision = 
getRevision(requestRemotePortRunStatusEntity.getRevision(), id);
+final RemoteProcessGroupDTO remoteProcessGroupDTO = new 
RemoteProcessGroupDTO();
+remoteProcessGroupDTO.setId(id);
+
remoteProcessGroupDTO.setTransmitting(shouldTransmit(requestRemotePortRunStatusEntity));
+return withWriteLock(
+serviceFacade,
+requestRemotePortRunStatusEntity,
+requestRevision,
+lookup -> {
+Authorizable authorizable = 
lookup.getRemoteProcessGroup(id);
+OperationAuthorizable.authorize(authorizable, 
authorizer, RequestAction.WRITE, NiFiUserUtils.getNiFiUser());
+},
+() -> 
serviceFacade.verifyUpdateRemoteProcessGroup(remoteProcessGroupDTO),
+(revision, remoteProcessGroupEntity) -> {
+// update the specified remote process group
+final RemoteProcessGroupEntity entity = 
serviceFacade.updateRemoteProcessGroup(revision, remoteProcessGroupDTO);
--- End diff --

We need to recreate this `remoteProcessGroupDTO` using the `remoteProcessGroupEntity` due to how we authorize/cache requests during our two phase commit.

[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612591#comment-16612591
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217136120
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ReportingTaskResource.java
 ---
@@ -542,6 +547,88 @@ public Response removeReportingTask(
 );
 }
 
+/**
+ * Updates the operational status for the specified ReportingTask with 
the specified values.
+ *
+ * @param httpServletRequest  request
+ * @param id  The id of the reporting task to update.
+ * @param requestRunStatus A runStatusEntity.
+ * @return A reportingTaskEntity.
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("{id}/run-status")
+@ApiOperation(
+value = "Updates run status of a reporting task",
+response = ReportingTaskEntity.class,
+authorizations = {
+@Authorization(value = "Write - 
/reporting-tasks/{uuid} or /operation/reporting-tasks/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRunStatus(
+@Context final HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The reporting task id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The reporting task run status.",
+required = true
+) final ReportingTaskRunStatusEntity requestRunStatus) {
+
+if (requestRunStatus == null) {
+throw new IllegalArgumentException("Reporting task run status 
must be specified.");
+}
+
+if (requestRunStatus.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRunStatus.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, requestRunStatus);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRunStatus.isDisconnectedNodeAcknowledged());
+}
+
+// handle expects request (usually from the cluster manager)
+final Revision requestRevision = 
getRevision(requestRunStatus.getRevision(), id);
+// Create DTO to verify if it can be updated.
+final ReportingTaskDTO reportingTaskDTO = new ReportingTaskDTO();
+reportingTaskDTO.setId(id);
+reportingTaskDTO.setState(requestRunStatus.getState());
+return withWriteLock(
+serviceFacade,
+requestRunStatus,
+requestRevision,
+lookup -> {
+// authorize reporting task
+final Authorizable authorizable = 
lookup.getReportingTask(id).getAuthorizable();
+OperationAuthorizable.authorize(authorizable, 
authorizer, RequestAction.WRITE, NiFiUserUtils.getNiFiUser());
+},
+() -> 
serviceFacade.verifyUpdateReportingTask(reportingTaskDTO),
+(revision, reportingTaskEntity) -> {
+// update the reporting task
+final ReportingTaskEntity entity = 
serviceFacade.updateReportingTask(revision, reportingTaskDTO);
--- End diff --

We need to recreate this `reportingTaskDTO` using the `reportingTaskEntity` 
due to how we authorize/cache requests during our two phase commit.
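For context, the run-status endpoints in this diff accept a small entity whose fields mirror the accessors shown above (revision, state, disconnectedNodeAcknowledged). A hedged sketch of the request body a client might send; the host and component id are placeholders, and the exact wire format should be checked against the generated REST API docs:

```python
import json

def run_status_payload(version: int, state: str, client_id: str = None) -> dict:
    """Build a JSON body for PUT .../{id}/run-status (field names assumed)."""
    revision = {"version": version}        # revision is required, else HTTP 400 per the diff
    if client_id is not None:
        revision["clientId"] = client_id
    return {
        "revision": revision,
        "state": state,                    # checked server-side by validateState()
        "disconnectedNodeAcknowledged": False,
    }

body = json.dumps(run_status_payload(3, "RUNNING"))
print(body)
# A real request would then be, e.g. (placeholder host and id):
#   curl -X PUT -H 'Content-Type: application/json' -d "$body" \
#     https://nifi.example.com/nifi-api/reporting-tasks/<id>/run-status
```

Note how the endpoint carries only run state plus the optimistic-locking revision, which is what lets it be authorized separately via `/operation/...` policies.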


> New user role: Operator who can start and stop components
> -
>
>  

[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612583#comment-16612583
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217121570
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/ControllerServiceEntityMerger.java
 ---
@@ -137,7 +138,9 @@ public static void 
mergeControllerServiceReferences(final Set

> New user role: Operator who can start and stop components
> -
>
> Key: NIFI-375
> URL: https://issues.apache.org/jira/browse/NIFI-375
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Daniel Ueberfluss
>Assignee: Koji Kawamura
>Priority: Major
>
> Would like to have a user role that allows a user to stop/start processors 
> but perform no other changes to the dataflow.
> This would allow users to address simple problems without providing full 
> access to modifying a data flow. 





[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612585#comment-16612585
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217135024
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ProcessorResource.java
 ---
@@ -668,6 +670,91 @@ public Response deleteProcessor(
 );
 }
 
+/**
+ * Updates the operational status for the specified processor with the 
specified values.
+ *
+ * @param httpServletRequest request
+ * @param id The id of the processor to update.
+ * @param requestRunStatus A processorEntity.
+ * @return A processorEntity.
+ * @throws InterruptedException if interrupted
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("/{id}/run-status")
+@ApiOperation(
+value = "Updates run status of a processor",
+response = ProcessorEntity.class,
+authorizations = {
+@Authorization(value = "Write - /processors/{uuid} or 
/operation/processors/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRunStatus(
+@Context final HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The processor id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The processor run status.",
+required = true
+) final ProcessorRunStatusEntity requestRunStatus) {
+
+if (requestRunStatus == null) {
+throw new IllegalArgumentException("Processor run status must 
be specified.");
+}
+
+if (requestRunStatus.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRunStatus.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, requestRunStatus);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRunStatus.isDisconnectedNodeAcknowledged());
+}
+
+// handle expects request (usually from the cluster manager)
+final Revision requestRevision = 
getRevision(requestRunStatus.getRevision(), id);
+// Create processor DTO to verify if it can be updated.
+final ProcessorDTO requestProcessorDTO = new ProcessorDTO();
+requestProcessorDTO.setId(id);
+requestProcessorDTO.setState(requestRunStatus.getState());
+
+return withWriteLock(
+serviceFacade,
+requestRunStatus,
+requestRevision,
+lookup -> {
+final NiFiUser user = NiFiUserUtils.getNiFiUser();
+
+final Authorizable authorizable = 
lookup.getProcessor(id).getAuthorizable();
+OperationAuthorizable.authorize(authorizable, 
authorizer, RequestAction.WRITE, user);
+},
+() -> 
serviceFacade.verifyUpdateProcessor(requestProcessorDTO),
+(revision, runStatusEntity) -> {
+// update the processor
+final ProcessorEntity entity = 
serviceFacade.updateProcessor(revision, requestProcessorDTO);
--- End diff --

We need to recreate this `requestProcessorDTO` using the `runStatusEntity` 
due to how we authorize/cache requests during our two phase commit.
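
The pattern the reviewer is pointing at can be sketched in isolation. Everything below is a stand-in (class and method names are illustrative, not NiFi's real API): in a two-phase commit, the entity submitted in the verification phase may be cached and an equivalent copy handed back to the update callback, so the DTO must be rebuilt from that callback argument rather than captured from the enclosing scope.

```java
import java.util.Objects;
import java.util.function.BiFunction;

public class TwoPhaseCommitSketch {

    // Stand-in for the run-status entity posted by the client.
    static class RunStatusEntity {
        final String state;
        RunStatusEntity(String state) { this.state = state; }
    }

    // Stand-in for the processor DTO built server-side.
    static class ProcessorDTO {
        String id;
        String state;
    }

    // Simulates a withWriteLock-style helper: phase one verifies the request,
    // phase two invokes the update callback with a re-deserialized copy of
    // the entity, not the original instance.
    static ProcessorDTO withWriteLock(RunStatusEntity request,
                                      BiFunction<Long, RunStatusEntity, ProcessorDTO> update) {
        RunStatusEntity replayed = new RunStatusEntity(request.state);
        return update.apply(1L, replayed);
    }

    public static void main(String[] args) {
        String id = "processor-1";
        RunStatusEntity request = new RunStatusEntity("RUNNING");

        ProcessorDTO result = withWriteLock(request, (revision, runStatusEntity) -> {
            // Rebuild the DTO from the callback's entity ("runStatusEntity"),
            // not from the request captured in the enclosing scope.
            ProcessorDTO dto = new ProcessorDTO();
            dto.id = id;
            dto.state = runStatusEntity.state;
            return dto;
        });

        if (!Objects.equals(result.state, "RUNNING")) {
            throw new AssertionError("expected RUNNING, got " + result.state);
        }
        System.out.println(result.state);
    }
}
```

Run as a plain `main`; it completes without an `AssertionError` when the DTO is rebuilt from the callback argument.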


> New user role: Operator who can start and stop components
> -
>
>

[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612584#comment-16612584
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217134851
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/OutputPortResource.java
 ---
@@ -315,6 +319,90 @@ public Response removeOutputPort(
 );
 }
 
+
+/**
+ * Updates the operational status for the specified input port with 
the specified values.
+ *
+ * @param httpServletRequest request
+ * @param id The id of the port to update.
+ * @param requestRunStatus A portRunStatusEntity.
+ * @return A portEntity.
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("/{id}/run-status")
+@ApiOperation(
+value = "Updates run status of an output-port",
+response = ProcessorEntity.class,
+authorizations = {
+@Authorization(value = "Write - /output-ports/{uuid} 
or /operation/output-ports/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRunStatus(
+@Context final HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The port id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The port run status.",
+required = true
+) final PortRunStatusEntity requestRunStatus) {
+
+if (requestRunStatus == null) {
+throw new IllegalArgumentException("Port run status must be 
specified.");
+}
+
+if (requestRunStatus.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRunStatus.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, requestRunStatus);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRunStatus.isDisconnectedNodeAcknowledged());
+}
+
+// handle expects request (usually from the cluster manager)
+final Revision requestRevision = 
getRevision(requestRunStatus.getRevision(), id);
+// Create port DTO to verify if it can be updated.
+final PortDTO portDTO = new PortDTO();
+portDTO.setId(id);
+portDTO.setState(requestRunStatus.getState());
+
+return withWriteLock(
+serviceFacade,
+requestRunStatus,
+requestRevision,
+lookup -> {
+final NiFiUser user = NiFiUserUtils.getNiFiUser();
+
+final Authorizable authorizable = 
lookup.getOutputPort(id);
+OperationAuthorizable.authorize(authorizable, 
authorizer, RequestAction.WRITE, user);
+},
+() -> serviceFacade.verifyUpdateOutputPort(portDTO),
+(revision, runStatusEntity) -> {
+// update the input port
+final PortEntity entity = 
serviceFacade.updateOutputPort(revision, portDTO);
--- End diff --

We need to recreate this `portDTO` using the `runStatusEntity` due to how 
we authorize/cache requests during our two phase commit.


> New user role: Operator who can start and stop components
> -
>
> Key: NIFI-375
> URL: https://issues.apache.org/jira/browse/NIFI-375
> Project: Apache NiFi
>  Issue Type: New Feature
> 

[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612589#comment-16612589
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217135560
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/RemoteProcessGroupResource.java
 ---
@@ -434,6 +436,197 @@ public Response updateRemoteProcessGroupOutputPort(
 );
 }
 
+/**
+ * Updates the specified remote process group input port run status.
+ *
+ * @param httpServletRequest   request
+ * @param id   The id of the remote process 
group to update.
+ * @param portId   The id of the input port to 
update.
+ * @param requestRemotePortRunStatusEntity The 
remoteProcessGroupPortRunStatusEntity
+ * @return A remoteProcessGroupPortEntity
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("{id}/input-ports/{port-id}/run-status")
+@ApiOperation(
+value = "Updates run status of a remote port",
+notes = NON_GUARANTEED_ENDPOINT,
+response = RemoteProcessGroupPortEntity.class,
+authorizations = {
+@Authorization(value = "Write - 
/remote-process-groups/{uuid} or /operation/remote-process-groups/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRemoteProcessGroupInputPortRunStatus(
+@Context final HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The remote process group id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The remote process group port id.",
+required = true
+)
+@PathParam("port-id") final String portId,
+@ApiParam(
+value = "The remote process group port.",
+required = true
+) final RemotePortRunStatusEntity 
requestRemotePortRunStatusEntity) {
+
+if (requestRemotePortRunStatusEntity == null) {
+throw new IllegalArgumentException("Remote process group port 
run status must be specified.");
+}
+
+if (requestRemotePortRunStatusEntity.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRemotePortRunStatusEntity.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, 
requestRemotePortRunStatusEntity);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRemotePortRunStatusEntity.isDisconnectedNodeAcknowledged());
+}
+
+final Revision requestRevision = 
getRevision(requestRemotePortRunStatusEntity.getRevision(), id);
+final RemoteProcessGroupPortDTO remoteProcessGroupPort = new 
RemoteProcessGroupPortDTO();
+remoteProcessGroupPort.setId(portId);
+remoteProcessGroupPort.setGroupId(id);
+
remoteProcessGroupPort.setTransmitting(shouldTransmit(requestRemotePortRunStatusEntity));
+
+return withWriteLock(
+serviceFacade,
+requestRemotePortRunStatusEntity,
+requestRevision,
+lookup -> {
+final Authorizable remoteProcessGroup = 
lookup.getRemoteProcessGroup(id);
+OperationAuthorizable.isAuthorized(remoteProcessGroup, 
authorizer, RequestAction.WRITE, NiFiUserUtils.getNiFiUser());
+},
+() -> 

[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612590#comment-16612590
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217140684
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/ComponentEntityMerger.java
 ---
@@ -68,10 +75,13 @@ default void merge(final EntityType clientEntity, final Map…
            if (clientEntity.getBulletins().size() > MAX_BULLETINS_PER_COMPONENT) {
                clientEntity.setBulletins(clientEntity.getBulletins().subList(0, MAX_BULLETINS_PER_COMPONENT));
            }
+} else {
+clientEntity.setBulletins(null);
--- End diff --

I think we need to continue to set the component to null when `canRead` is 
false. It may have been changed to accommodate an earlier iteration of this PR.
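
The masking rule under discussion can be sketched with stand-in types (these are not NiFi's real classes): when the requesting user lacks read permission on a component, the merged entity should expose neither the component payload nor its bulletins.

```java
import java.util.List;

public class MergeMaskingSketch {

    // Minimal stand-in for a merged component entity.
    static class Entity {
        boolean canRead;
        Object component;
        List<String> bulletins;
    }

    // After merging per-node responses, hide details the caller may not read.
    static void applyReadMask(Entity clientEntity) {
        if (!clientEntity.canRead) {
            clientEntity.component = null;  // hide configuration details
            clientEntity.bulletins = null;  // hide bulletins as well
        }
    }

    public static void main(String[] args) {
        Entity e = new Entity();
        e.canRead = false;
        e.component = new Object();
        e.bulletins = List.of("WARN: queue pressure");

        applyReadMask(e);

        if (e.component != null || e.bulletins != null) {
            throw new AssertionError("unreadable entity must be fully masked");
        }
        System.out.println("masked");
    }
}
```

The point of keeping both fields together in one mask step is that a partially masked entity (component hidden, bulletins visible) leaks information the authorizer already denied.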


> New user role: Operator who can start and stop components
> -
>
> Key: NIFI-375
> URL: https://issues.apache.org/jira/browse/NIFI-375
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Daniel Ueberfluss
>Assignee: Koji Kawamura
>Priority: Major
>
> Would like to have a user role that allows a user to stop/start processors 
> but perform no other changes to the dataflow.
> This would allow users to address simple problems without providing full 
> access to modifying a data flow. 





[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612587#comment-16612587
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217135375
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/RemoteProcessGroupResource.java
 ---
@@ -434,6 +436,197 @@ public Response updateRemoteProcessGroupOutputPort(
 );
 }
 
+/**
+ * Updates the specified remote process group input port run status.
+ *
+ * @param httpServletRequest   request
+ * @param id   The id of the remote process 
group to update.
+ * @param portId   The id of the input port to 
update.
+ * @param requestRemotePortRunStatusEntity The 
remoteProcessGroupPortRunStatusEntity
+ * @return A remoteProcessGroupPortEntity
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("{id}/input-ports/{port-id}/run-status")
+@ApiOperation(
+value = "Updates run status of a remote port",
+notes = NON_GUARANTEED_ENDPOINT,
+response = RemoteProcessGroupPortEntity.class,
+authorizations = {
+@Authorization(value = "Write - 
/remote-process-groups/{uuid} or /operation/remote-process-groups/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRemoteProcessGroupInputPortRunStatus(
+@Context final HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The remote process group id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The remote process group port id.",
+required = true
+)
+@PathParam("port-id") final String portId,
+@ApiParam(
+value = "The remote process group port.",
+required = true
+) final RemotePortRunStatusEntity 
requestRemotePortRunStatusEntity) {
+
+if (requestRemotePortRunStatusEntity == null) {
+throw new IllegalArgumentException("Remote process group port 
run status must be specified.");
+}
+
+if (requestRemotePortRunStatusEntity.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRemotePortRunStatusEntity.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, 
requestRemotePortRunStatusEntity);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRemotePortRunStatusEntity.isDisconnectedNodeAcknowledged());
+}
+
+final Revision requestRevision = 
getRevision(requestRemotePortRunStatusEntity.getRevision(), id);
+final RemoteProcessGroupPortDTO remoteProcessGroupPort = new 
RemoteProcessGroupPortDTO();
+remoteProcessGroupPort.setId(portId);
+remoteProcessGroupPort.setGroupId(id);
+
remoteProcessGroupPort.setTransmitting(shouldTransmit(requestRemotePortRunStatusEntity));
+
+return withWriteLock(
+serviceFacade,
+requestRemotePortRunStatusEntity,
+requestRevision,
+lookup -> {
+final Authorizable remoteProcessGroup = 
lookup.getRemoteProcessGroup(id);
+OperationAuthorizable.isAuthorized(remoteProcessGroup, 
authorizer, RequestAction.WRITE, NiFiUserUtils.getNiFiUser());
+},
+() -> 

[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612588#comment-16612588
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217134368
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ControllerServiceResource.java
 ---
@@ -741,6 +743,88 @@ public Response removeControllerService(
 );
 }
 
+/**
+ * Updates the operational status for the specified controller service 
with the specified values.
+ *
+ * @param httpServletRequest  request
+ * @param id  The id of the controller service to 
update.
+ * @param requestRunStatus A runStatusEntity.
+ * @return A controllerServiceEntity.
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("{id}/run-status")
+@ApiOperation(
+value = "Updates run status of a controller service",
+response = ControllerServiceEntity.class,
+authorizations = {
+@Authorization(value = "Write - 
/controller-services/{uuid} or /operation/controller-services/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRunStatus(
+@Context HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The controller service id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The controller service run status.",
+required = true
+) final ControllerServiceRunStatusEntity requestRunStatus) {
+
+if (requestRunStatus == null) {
+throw new IllegalArgumentException("Controller service run 
status must be specified.");
+}
+
+if (requestRunStatus.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRunStatus.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, requestRunStatus);
+}  else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRunStatus.isDisconnectedNodeAcknowledged());
+}
+
+// handle expects request (usually from the cluster manager)
+final Revision requestRevision = 
getRevision(requestRunStatus.getRevision(), id);
+// Create DTO to verify if it can be updated.
+final ControllerServiceDTO controllerServiceDTO = new 
ControllerServiceDTO();
+controllerServiceDTO.setId(id);
+controllerServiceDTO.setState(requestRunStatus.getState());
+return withWriteLock(
+serviceFacade,
+requestRunStatus,
+requestRevision,
+lookup -> {
+// authorize the service
+final Authorizable authorizable = 
lookup.getControllerService(id).getAuthorizable();
+OperationAuthorizable.authorize(authorizable, 
authorizer, RequestAction.WRITE, NiFiUserUtils.getNiFiUser());
+},
+() -> 
serviceFacade.verifyUpdateControllerService(controllerServiceDTO),
+(revision, runStatusEntity) -> {
+// update the controller service
+final ControllerServiceEntity entity = 
serviceFacade.updateControllerService(revision, controllerServiceDTO);
--- End diff --

We need to recreate this `controllerServiceDTO` using the `runStatusEntity` 
due to how we authorize/cache requests during our two phase commit.


> New user role: Operator who 

[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612582#comment-16612582
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217134678
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/InputPortResource.java
 ---
@@ -315,6 +319,90 @@ public Response removeInputPort(
 );
 }
 
+/**
+ * Updates the operational status for the specified input port with 
the specified values.
+ *
+ * @param httpServletRequest request
+ * @param id The id of the port to update.
+ * @param requestRunStatus A portRunStatusEntity.
+ * @return A portEntity.
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("/{id}/run-status")
+@ApiOperation(
+value = "Updates run status of an input-port",
+response = ProcessorEntity.class,
+authorizations = {
+@Authorization(value = "Write - /input-ports/{uuid} or 
/operation/input-ports/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRunStatus(
+@Context final HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The port id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The port run status.",
+required = true
+) final PortRunStatusEntity requestRunStatus) {
+
+if (requestRunStatus == null) {
+throw new IllegalArgumentException("Port run status must be 
specified.");
+}
+
+if (requestRunStatus.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRunStatus.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, requestRunStatus);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRunStatus.isDisconnectedNodeAcknowledged());
+}
+
+// handle expects request (usually from the cluster manager)
+final Revision requestRevision = 
getRevision(requestRunStatus.getRevision(), id);
+// Create port DTO to verify if it can be updated.
+final PortDTO portDTO = new PortDTO();
+portDTO.setId(id);
+portDTO.setState(requestRunStatus.getState());
+
+return withWriteLock(
+serviceFacade,
+requestRunStatus,
+requestRevision,
+lookup -> {
+final NiFiUser user = NiFiUserUtils.getNiFiUser();
+
+final Authorizable authorizable = 
lookup.getInputPort(id);
+OperationAuthorizable.authorize(authorizable, 
authorizer, RequestAction.WRITE, user);
+},
+() -> serviceFacade.verifyUpdateInputPort(portDTO),
+(revision, runStatusEntity) -> {
+// update the input port
+final PortEntity entity = 
serviceFacade.updateInputPort(revision, portDTO);
--- End diff --

We need to recreate this `portDTO` using the `runStatusEntity` due to how 
we authorize/cache requests during our two phase commit.


> New user role: Operator who can start and stop components
> -
>
> Key: NIFI-375
> URL: https://issues.apache.org/jira/browse/NIFI-375
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: 

+OperationAuthorizable.authorize(authorizable, 
authorizer, RequestAction.WRITE, user);
+},
+() -> 
serviceFacade.verifyUpdateProcessor(requestProcessorDTO),
+(revision, runStatusEntity) -> {
+// update the processor
+final ProcessorEntity entity = 
serviceFacade.updateProcessor(revision, requestProcessorDTO);
--- End diff --

We need to recreate this `requestProcessorDTO` using the `runStatusEntity` 
due to how we authorize/cache requests during our two phase commit.


---


[GitHub] nifi pull request #2990: NIFI-375: Added operation policy

2018-09-12 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217140684
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/ComponentEntityMerger.java
 ---
@@ -68,10 +75,13 @@ default void merge(final EntityType clientEntity, final 
Map 
MAX_BULLETINS_PER_COMPONENT) {
 
clientEntity.setBulletins(clientEntity.getBulletins().subList(0, 
MAX_BULLETINS_PER_COMPONENT));
 }
+} else {
+clientEntity.setBulletins(null);
--- End diff --

I think we need to continue to set the component to null when `canRead` is 
false. It may have been changed to accommodate an earlier iteration of this PR.


---


[GitHub] nifi pull request #2990: NIFI-375: Added operation policy

2018-09-12 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217134678
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/InputPortResource.java
 ---
@@ -315,6 +319,90 @@ public Response removeInputPort(
 );
 }
 
+/**
+ * Updates the operational status for the specified input port with 
the specified values.
+ *
+ * @param httpServletRequest request
+ * @param id The id of the port to update.
+ * @param requestRunStatus A portRunStatusEntity.
+ * @return A portEntity.
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("/{id}/run-status")
+@ApiOperation(
+value = "Updates run status of an input-port",
+response = ProcessorEntity.class,
+authorizations = {
+@Authorization(value = "Write - /input-ports/{uuid} or 
/operation/input-ports/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRunStatus(
+@Context final HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The port id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The port run status.",
+required = true
+) final PortRunStatusEntity requestRunStatus) {
+
+if (requestRunStatus == null) {
+throw new IllegalArgumentException("Port run status must be 
specified.");
+}
+
+if (requestRunStatus.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRunStatus.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, requestRunStatus);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRunStatus.isDisconnectedNodeAcknowledged());
+}
+
+// handle expects request (usually from the cluster manager)
+final Revision requestRevision = 
getRevision(requestRunStatus.getRevision(), id);
+// Create port DTO to verify if it can be updated.
+final PortDTO portDTO = new PortDTO();
+portDTO.setId(id);
+portDTO.setState(requestRunStatus.getState());
+
+return withWriteLock(
+serviceFacade,
+requestRunStatus,
+requestRevision,
+lookup -> {
+final NiFiUser user = NiFiUserUtils.getNiFiUser();
+
+final Authorizable authorizable = 
lookup.getInputPort(id);
+OperationAuthorizable.authorize(authorizable, 
authorizer, RequestAction.WRITE, user);
+},
+() -> serviceFacade.verifyUpdateInputPort(portDTO),
+(revision, runStatusEntity) -> {
+// update the input port
+final PortEntity entity = 
serviceFacade.updateInputPort(revision, portDTO);
--- End diff --

We need to recreate this `portDTO` using the `runStatusEntity` due to how 
we authorize/cache requests during our two phase commit.


---


[GitHub] nifi pull request #2990: NIFI-375: Added operation policy

2018-09-12 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217135735
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/RemoteProcessGroupResource.java
 ---
@@ -557,6 +750,90 @@ public Response updateRemoteProcessGroup(
 );
 }
 
+/**
+ * Updates the operational status for the specified remote process 
group with the specified value.
+ *
+ * @param httpServletRequest   request
+ * @param id   The id of the remote process group 
to update.
+ * @param requestRemotePortRunStatusEntity A remotePortRunStatusEntity.
+ * @return A remoteProcessGroupEntity.
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("{id}/run-status")
+@ApiOperation(
+value = "Updates run status of a remote process group",
+response = RemoteProcessGroupEntity.class,
+authorizations = {
+@Authorization(value = "Write - 
/remote-process-groups/{uuid} or /operation/remote-process-groups/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRemoteProcessGroupRunStatus(
+@Context HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The remote process group id.",
+required = true
+)
+@PathParam("id") String id,
+@ApiParam(
+value = "The remote process group run status.",
+required = true
+) final RemotePortRunStatusEntity 
requestRemotePortRunStatusEntity) {
+
+if (requestRemotePortRunStatusEntity == null) {
+throw new IllegalArgumentException("Remote process group run 
status must be specified.");
+}
+
+if (requestRemotePortRunStatusEntity.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRemotePortRunStatusEntity.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, 
requestRemotePortRunStatusEntity);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRemotePortRunStatusEntity.isDisconnectedNodeAcknowledged());
+}
+
+// handle expects request (usually from the cluster manager)
+final Revision requestRevision = 
getRevision(requestRemotePortRunStatusEntity.getRevision(), id);
+final RemoteProcessGroupDTO remoteProcessGroupDTO = new 
RemoteProcessGroupDTO();
+remoteProcessGroupDTO.setId(id);
+
remoteProcessGroupDTO.setTransmitting(shouldTransmit(requestRemotePortRunStatusEntity));
+return withWriteLock(
+serviceFacade,
+requestRemotePortRunStatusEntity,
+requestRevision,
+lookup -> {
+Authorizable authorizable = 
lookup.getRemoteProcessGroup(id);
+OperationAuthorizable.authorize(authorizable, 
authorizer, RequestAction.WRITE, NiFiUserUtils.getNiFiUser());
+},
+() -> 
serviceFacade.verifyUpdateRemoteProcessGroup(remoteProcessGroupDTO),
+(revision, remoteProcessGroupEntity) -> {
+// update the specified remote process group
+final RemoteProcessGroupEntity entity = 
serviceFacade.updateRemoteProcessGroup(revision, remoteProcessGroupDTO);
--- End diff --

We need to recreate this `remoteProcessGroupDTO` using the 
`remoteProcessGroupEntity` due to how we authorize/cache requests during our 
two phase commit.


---


[GitHub] nifi pull request #2990: NIFI-375: Added operation policy

2018-09-12 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217135560
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/RemoteProcessGroupResource.java
 ---
@@ -434,6 +436,197 @@ public Response updateRemoteProcessGroupOutputPort(
 );
 }
 
+/**
+ * Updates the specified remote process group input port run status.
+ *
+ * @param httpServletRequest   request
+ * @param id   The id of the remote process 
group to update.
+ * @param portId   The id of the input port to 
update.
+ * @param requestRemotePortRunStatusEntity The 
remoteProcessGroupPortRunStatusEntity
+ * @return A remoteProcessGroupPortEntity
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("{id}/input-ports/{port-id}/run-status")
+@ApiOperation(
+value = "Updates run status of a remote port",
+notes = NON_GUARANTEED_ENDPOINT,
+response = RemoteProcessGroupPortEntity.class,
+authorizations = {
+@Authorization(value = "Write - 
/remote-process-groups/{uuid} or /operation/remote-process-groups/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRemoteProcessGroupInputPortRunStatus(
+@Context final HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The remote process group id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The remote process group port id.",
+required = true
+)
+@PathParam("port-id") final String portId,
+@ApiParam(
+value = "The remote process group port.",
+required = true
+) final RemotePortRunStatusEntity 
requestRemotePortRunStatusEntity) {
+
+if (requestRemotePortRunStatusEntity == null) {
+throw new IllegalArgumentException("Remote process group port 
run status must be specified.");
+}
+
+if (requestRemotePortRunStatusEntity.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRemotePortRunStatusEntity.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, 
requestRemotePortRunStatusEntity);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRemotePortRunStatusEntity.isDisconnectedNodeAcknowledged());
+}
+
+final Revision requestRevision = 
getRevision(requestRemotePortRunStatusEntity.getRevision(), id);
+final RemoteProcessGroupPortDTO remoteProcessGroupPort = new 
RemoteProcessGroupPortDTO();
+remoteProcessGroupPort.setId(portId);
+remoteProcessGroupPort.setGroupId(id);
+
remoteProcessGroupPort.setTransmitting(shouldTransmit(requestRemotePortRunStatusEntity));
+
+return withWriteLock(
+serviceFacade,
+requestRemotePortRunStatusEntity,
+requestRevision,
+lookup -> {
+final Authorizable remoteProcessGroup = 
lookup.getRemoteProcessGroup(id);
+OperationAuthorizable.isAuthorized(remoteProcessGroup, 
authorizer, RequestAction.WRITE, NiFiUserUtils.getNiFiUser());
+},
+() -> 
serviceFacade.verifyUpdateRemoteProcessGroupInputPort(id, 
remoteProcessGroupPort),
+(revision, remoteProcessGroupPortEntity) -> {
+// update the specified remote process group
+final 

[GitHub] nifi pull request #2990: NIFI-375: Added operation policy

2018-09-12 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217135375
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/RemoteProcessGroupResource.java
 ---
@@ -434,6 +436,197 @@ public Response updateRemoteProcessGroupOutputPort(
 );
 }
 
+/**
+ * Updates the specified remote process group input port run status.
+ *
+ * @param httpServletRequest   request
+ * @param id   The id of the remote process 
group to update.
+ * @param portId   The id of the input port to 
update.
+ * @param requestRemotePortRunStatusEntity The 
remoteProcessGroupPortRunStatusEntity
+ * @return A remoteProcessGroupPortEntity
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("{id}/input-ports/{port-id}/run-status")
+@ApiOperation(
+value = "Updates run status of a remote port",
+notes = NON_GUARANTEED_ENDPOINT,
+response = RemoteProcessGroupPortEntity.class,
+authorizations = {
+@Authorization(value = "Write - 
/remote-process-groups/{uuid} or /operation/remote-process-groups/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRemoteProcessGroupInputPortRunStatus(
+@Context final HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The remote process group id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The remote process group port id.",
+required = true
+)
+@PathParam("port-id") final String portId,
+@ApiParam(
+value = "The remote process group port.",
+required = true
+) final RemotePortRunStatusEntity 
requestRemotePortRunStatusEntity) {
+
+if (requestRemotePortRunStatusEntity == null) {
+throw new IllegalArgumentException("Remote process group port 
run status must be specified.");
+}
+
+if (requestRemotePortRunStatusEntity.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRemotePortRunStatusEntity.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, 
requestRemotePortRunStatusEntity);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRemotePortRunStatusEntity.isDisconnectedNodeAcknowledged());
+}
+
+final Revision requestRevision = 
getRevision(requestRemotePortRunStatusEntity.getRevision(), id);
+final RemoteProcessGroupPortDTO remoteProcessGroupPort = new 
RemoteProcessGroupPortDTO();
+remoteProcessGroupPort.setId(portId);
+remoteProcessGroupPort.setGroupId(id);
+
remoteProcessGroupPort.setTransmitting(shouldTransmit(requestRemotePortRunStatusEntity));
+
+return withWriteLock(
+serviceFacade,
+requestRemotePortRunStatusEntity,
+requestRevision,
+lookup -> {
+final Authorizable remoteProcessGroup = 
lookup.getRemoteProcessGroup(id);
+OperationAuthorizable.isAuthorized(remoteProcessGroup, 
authorizer, RequestAction.WRITE, NiFiUserUtils.getNiFiUser());
+},
+() -> 
serviceFacade.verifyUpdateRemoteProcessGroupInputPort(id, 
remoteProcessGroupPort),
+(revision, remoteProcessGroupPortEntity) -> {
+// update the specified remote process group
+final 

[GitHub] nifi pull request #2990: NIFI-375: Added operation policy

2018-09-12 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217136120
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/ReportingTaskResource.java
 ---
@@ -542,6 +547,88 @@ public Response removeReportingTask(
 );
 }
 
+/**
+ * Updates the operational status for the specified ReportingTask with 
the specified values.
+ *
+ * @param httpServletRequest  request
+ * @param id  The id of the reporting task to update.
+ * @param requestRunStatus A runStatusEntity.
+ * @return A reportingTaskEntity.
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("{id}/run-status")
+@ApiOperation(
+value = "Updates run status of a reporting task",
+response = ReportingTaskEntity.class,
+authorizations = {
+@Authorization(value = "Write - 
/reporting-tasks/{uuid} or /operation/reporting-tasks/{uuid}")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not 
be authenticated."),
+@ApiResponse(code = 403, message = "Client is not 
authorized to make this request."),
+@ApiResponse(code = 404, message = "The specified 
resource could not be found."),
+@ApiResponse(code = 409, message = "The request was 
valid but NiFi was not in the appropriate state to process it. Retrying the 
same request later may be successful.")
+}
+)
+public Response updateRunStatus(
+@Context final HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The reporting task id.",
+required = true
+)
+@PathParam("id") final String id,
+@ApiParam(
+value = "The reporting task run status.",
+required = true
+) final ReportingTaskRunStatusEntity requestRunStatus) {
+
+if (requestRunStatus == null) {
+throw new IllegalArgumentException("Reporting task run status 
must be specified.");
+}
+
+if (requestRunStatus.getRevision() == null) {
+throw new IllegalArgumentException("Revision must be 
specified.");
+}
+
+requestRunStatus.validateState();
+
+if (isReplicateRequest()) {
+return replicate(HttpMethod.PUT, requestRunStatus);
+} else if (isDisconnectedFromCluster()) {
+
verifyDisconnectedNodeModification(requestRunStatus.isDisconnectedNodeAcknowledged());
+}
+
+// handle expects request (usually from the cluster manager)
+final Revision requestRevision = 
getRevision(requestRunStatus.getRevision(), id);
+// Create DTO to verify if it can be updated.
+final ReportingTaskDTO reportingTaskDTO = new ReportingTaskDTO();
+reportingTaskDTO.setId(id);
+reportingTaskDTO.setState(requestRunStatus.getState());
+return withWriteLock(
+serviceFacade,
+requestRunStatus,
+requestRevision,
+lookup -> {
+// authorize reporting task
+final Authorizable authorizable = 
lookup.getReportingTask(id).getAuthorizable();
+OperationAuthorizable.authorize(authorizable, 
authorizer, RequestAction.WRITE, NiFiUserUtils.getNiFiUser());
+},
+() -> 
serviceFacade.verifyUpdateReportingTask(reportingTaskDTO),
+(revision, reportingTaskEntity) -> {
+// update the reporting task
+final ReportingTaskEntity entity = 
serviceFacade.updateReportingTask(revision, reportingTaskDTO);
--- End diff --

We need to recreate this `reportingTaskDTO` using the `reportingTaskEntity` 
due to how we authorize/cache requests during our two phase commit.


---


[GitHub] nifi pull request #2990: NIFI-375: Added operation policy

2018-09-12 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2990#discussion_r217121570
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/manager/ControllerServiceEntityMerger.java
 ---
@@ -137,7 +138,9 @@ public static void 
mergeControllerServiceReferences(final Set

[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612576#comment-16612576
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2999
  
I looked at the code, and the documentation change is syntactically valid, 
but I am not familiar enough with MongoDB to assert that the content is 
correct. You will need a reviewer who is a MongoDB user. There are a few in the 
community and they look for PRs with _Mongo_ in the title quite regularly. 


> Clarify putMongo documentation
> --
>
> Key: NIFI-5589
> URL: https://issues.apache.org/jira/browse/NIFI-5589
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Niemiec
>Assignee: Vijeta Hingorani
>Priority: Minor
>
> Today the PutMongo documentation is very vague and leads to a lot of playing 
> around to understand exactly how it works. We should improve the documentation 
> so that others can immediately start to use this processor successfully. 
>  
> My largest issues were around understanding how the UpdateQuery works, and 
> the expected content + operators that can be used when performing the update 
> with operators and not just replacing the entire document. 
>  
>  
> Here is a misc note I made on my experience doing this.
> With the PutMongo processor the UpdateQuery is like a find() in the mongo 
> CLI: all documents that match the find will be replaced with the FlowFile 
> content. The update mode has 2 choices: whole document or with operators. If 
> you're updating the entire document it expects the JSON to be properly 
> formatted. The UpdateQuery will return to this processor the documents which 
> need to be completely replaced with the incoming FlowFile content. If you're 
> using this with operators it's expected that the FlowFile content ONLY be the 
> operator part, e.g. {$set: {"f1": "val1"} , $inc :{ "count" : 10}}; it 
> does not support the find() portion that you would expect in the CLI, as that 
> part is the 'UpdateQuery'
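The split described in the note above can be sketched with plain Java maps. This is illustrative only: `PutMongoShapes` is a hypothetical helper, and the field names (`name`, `f1`, `count`) echo the example values from the note rather than any real flow.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Models how PutMongo splits an update into the "Update Query" property
// (the find() part) and the FlowFile content (either a whole replacement
// document or an operator-only document).
public class PutMongoShapes {
    // The Update Query property plays the role of find() in the mongo CLI:
    // every document matching this filter is targeted by the update.
    static Map<String, Object> updateQuery() {
        Map<String, Object> query = new LinkedHashMap<>();
        query.put("name", "doc1");
        return query;
    }

    // In "with operators" mode the FlowFile content must contain ONLY
    // operators such as $set / $inc -- no query portion.
    static Map<String, Object> operatorContent() {
        Map<String, Object> set = new LinkedHashMap<>();
        set.put("f1", "val1");
        Map<String, Object> inc = new LinkedHashMap<>();
        inc.put("count", 10);
        Map<String, Object> update = new LinkedHashMap<>();
        update.put("$set", set);
        update.put("$inc", inc);
        return update;
    }

    public static void main(String[] args) {
        System.out.println(updateQuery());
        System.out.println(operatorContent());
    }
}
```

In "whole document" mode the FlowFile content would instead be the complete replacement document, with no `$`-operators at all.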



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2999: NIFI-5589 : Clarify PutMongo documentation

2018-09-12 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2999
  
I looked at the code, and the documentation change is syntactically valid, 
but I am not familiar enough with MongoDB to assert that the content is 
correct. You will need a reviewer who is a MongoDB user. There are a few in the 
community and they look for PRs with _Mongo_ in the title quite regularly. 


---


[jira] [Created] (NIFI-5591) Enable compression of Avro in ExecuteSQL

2018-09-12 Thread Colin Dean (JIRA)
Colin Dean created NIFI-5591:


 Summary: Enable compression of Avro in ExecuteSQL
 Key: NIFI-5591
 URL: https://issues.apache.org/jira/browse/NIFI-5591
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.7.1
 Environment: macOS, Java 8
Reporter: Colin Dean


The Avro stream that comes out of the ExecuteSQL processor is uncompressed. 
It's possible to rewrite it compressed using a combination of ConvertRecord 
processor with AvroReader and AvroRecordSetWriter, but that's a lot of extra 
I/O that could be handled transparently at the moment that the Avro data is 
created.

For implementation, it looks like ExecuteSQL builds a set of 
{{JdbcCommon.AvroConversionOptions}} 
[here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExecuteSQL.java#L246].
 That options object would need to gain a compression flag. Then, within 
{{JdbcCommon#convertToAvroStream}} 
[here|https://github.com/apache/nifi/blob/0dd4a91a6741eec04965a260c8aff38b72b3828d/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/JdbcCommon.java#L281],
 the {{dataFileWriter}} would get a codec set by {{setCodec}}, with the codec 
having been created shortly before.

For an example of creating the codec, I looked at how AvroRecordSetWriter does 
it. The {{setCodec()}} call is made 
[here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/WriteAvroResultWithSchema.java#L44]
 after the codec is created from the configured compression option 
[here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L104]
 using a factory method 
[here|https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroRecordSetWriter.java#L137].
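A minimal sketch of the proposed change, assuming the Avro library's {{DataFileWriter}}/{{CodecFactory}} API. The schema and record here are placeholders, not the ExecuteSQL result schema, and the choice of deflate level 5 is arbitrary:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.file.CodecFactory;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class CompressedAvroSketch {
    public static void main(String[] args) throws IOException {
        // Placeholder schema standing in for the result-set schema.
        Schema schema = SchemaBuilder.record("row").fields()
                .requiredString("name").endRecord();

        GenericRecord record = new GenericData.Record(schema);
        record.put("name", "example");

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (DataFileWriter<GenericRecord> writer =
                     new DataFileWriter<>(new GenericDatumWriter<>(schema))) {
            // The one-line addition the ticket asks for: set a codec
            // (deflate/snappy/bzip2/...) before creating the container file.
            writer.setCodec(CodecFactory.deflateCodec(5));
            writer.create(schema, out);
            writer.append(record);
        }
    }
}
```

The same {{setCodec()}} call, driven by the new flag on {{AvroConversionOptions}}, would slot into {{JdbcCommon#convertToAvroStream}} before {{dataFileWriter.create(...)}}.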



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612561#comment-16612561
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user VijetaH commented on the issue:

https://github.com/apache/nifi/pull/2999
  
@alopresto Can you please review the changes? 


> Clarify putMongo documentation
> --
>
> Key: NIFI-5589
> URL: https://issues.apache.org/jira/browse/NIFI-5589
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Niemiec
>Assignee: Vijeta Hingorani
>Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2999: NIFI-5589 : Clarify PutMongo documentation

2018-09-12 Thread VijetaH
Github user VijetaH commented on the issue:

https://github.com/apache/nifi/pull/2999
  
@alopresto Can you please review the changes? 


---


[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612553#comment-16612553
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user josephxsxn commented on the issue:

https://github.com/apache/nifi/pull/2999
  
@alopresto  Thanks Andy :) nice to hear that changed, I always thought it 
was tedious to squash things. 


> Clarify putMongo documentation
> --
>
> Key: NIFI-5589
> URL: https://issues.apache.org/jira/browse/NIFI-5589
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Niemiec
>Assignee: Vijeta Hingorani
>Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612551#comment-16612551
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2999
  
Hi @josephxsxn I don't think we have to request that any more as GitHub 
shows the consolidated diff in one view, and rebasing & force-pushing to a 
branch that is used for a PR destroys history and can mess up the reviewer 
comments. The committer rebases & squashes the commits when they merge them. 






[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612549#comment-16612549
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user josephxsxn commented on the issue:

https://github.com/apache/nifi/pull/2999
  
Thanks for submitting this documentation change. Can you please squash all 
3 of your commits into 1?


https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit
 






[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612545#comment-16612545
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

GitHub user VijetaH opened a pull request:

https://github.com/apache/nifi/pull/2999

NIFI-5589 : Clarify PutMongo documentation

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Effyis/nifi master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2999.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2999


commit 73af67b793eb1c3248e9f2cd42ea40ec3d5bd041
Author: Vijeta 
Date:   2018-09-12T17:19:35Z

Clarifying upsert query documentation for PutMongo processor.

commit 063bf50f2b44610e9e66d9f2145d3f5cad7ff5f7
Author: VijetaH 
Date:   2018-09-12T17:22:35Z

Clarifying upsert query documentation for PutMongo processor.

commit f0d7e1d87514be1d743408cfdea4c629dafef743
Author: VijetaH 
Date:   2018-09-12T18:09:57Z

Clarifying upsert query documentation for PutMongo processor.








[jira] [Commented] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612528#comment-16612528
 ] 

ASF GitHub Bot commented on NIFI-5589:
--

Github user VijetaH closed the pull request at:

https://github.com/apache/nifi/pull/2998






[GitHub] nifi pull request #2998: NIFI-5589 : Clarifying PutMongo documentation

2018-09-12 Thread VijetaH
GitHub user VijetaH opened a pull request:

https://github.com/apache/nifi/pull/2998

NIFI-5589 : Clarifying PutMongo documentation

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Effyis/nifi master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2998.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2998


commit 73af67b793eb1c3248e9f2cd42ea40ec3d5bd041
Author: Vijeta 
Date:   2018-09-12T17:19:35Z

Clarifying upsert query documentation for PutMongo processor.

commit 063bf50f2b44610e9e66d9f2145d3f5cad7ff5f7
Author: VijetaH 
Date:   2018-09-12T17:22:35Z

Clarifying upsert query documentation for PutMongo processor.




---




[jira] [Created] (NIFI-5590) Allow database fetch processors to store state by database/catalog/schema

2018-09-12 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-5590:
--

 Summary: Allow database fetch processors to store state by 
database/catalog/schema
 Key: NIFI-5590
 URL: https://issues.apache.org/jira/browse/NIFI-5590
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Matt Burgess


The database fetch processors (e.g. GenerateTableFetch, QueryDatabaseTable) 
use a scheme of "table.column" to store the state of the maximum values for a 
column, as well as a similar entry for the column type.

Now that GenerateTableFetch can accept incoming flow files, and if it uses a 
DBCPConnectionPoolLookup service, the database can differ yet contain the 
same table/column name combinations (e.g. MySQL shards). The state values as 
currently stored would be shared in this situation, which could certainly lead 
to errors.

If possible/prudent, the fully-qualified state name should include 
database/catalog/schema information such that the state entries are unique at a 
database level. This may prove difficult, as different drivers may or may not 
make such information available; for example, sometimes the database name is the 
schema, sometimes it is the catalog, etc. As long as the names are unique at 
the database level, there should be no conflicts. This may lead to a more 
complicated naming scheme, and backwards compatibility should be maintained 
(unless this is implemented for a major release). 
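As a sketch of what such a fully-qualified naming scheme might look like (an assumption for illustration; `state_key` and its fallback behavior are not part of NiFi's current implementation):

```python
def state_key(catalog, schema, table, column):
    """Build a state-map key that is unique at the database level.

    Drivers differ in whether they expose a catalog, a schema, both, or
    neither, so empty parts are simply skipped (hypothetical fallback).
    """
    parts = [p for p in (catalog, schema, table, column) if p]
    return ".".join(p.lower() for p in parts)

# A MySQL shard exposing only a catalog:
state_key("shard1", None, "orders", "updated_at")  # "shard1.orders.updated_at"

# With no catalog/schema available, the key degrades to today's
# "table.column" shape, which would preserve backwards compatibility:
state_key(None, None, "orders", "updated_at")      # "orders.updated_at"
```

The degraded form is what makes the migration story workable: existing state entries keep their meaning while shard-aware flows gain unique keys.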





[jira] [Assigned] (NIFI-1961) Test Resource - nifi.properties

2018-09-12 Thread Joseph Niemiec (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Niemiec reassigned NIFI-1961:


Assignee: (was: Joseph Niemiec)

> Test Resource - nifi.properties
> ---
>
> Key: NIFI-1961
> URL: https://issues.apache.org/jira/browse/NIFI-1961
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework
>Reporter: Matt Gilman
>Priority: Trivial
>
> Many tests reference a copy of nifi.properties. These copies have become 
> increasingly stale as the application has evolved. With the changes in 
> 1.x we should at the very least update the copies or, better yet, update the 
> tests to mock out the necessary configuration to avoid the copies 
> altogether.





[jira] [Updated] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread Joseph Niemiec (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Niemiec updated NIFI-5589:
-
Issue Type: Improvement  (was: Bug)



[jira] [Created] (NIFI-5589) Clarify putMongo documentation

2018-09-12 Thread Joseph Niemiec (JIRA)
Joseph Niemiec created NIFI-5589:


 Summary: Clarify putMongo documentation
 Key: NIFI-5589
 URL: https://issues.apache.org/jira/browse/NIFI-5589
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Joseph Niemiec
Assignee: Vijeta Hingorani


Today the PutMongo documentation is very vague and requires a lot of 
experimentation to understand exactly how it works. We should improve the 
documentation so that others can immediately start to use this processor 
successfully. 

 

My largest issues were around understanding how the Update Query works, and the 
expected content and operators that can be used when performing an update with 
operators rather than replacing the entire document. 

 

 

Here is a note I made on my experience doing this.


With the PutMongo processor, the Update Query is like a find() in the mongo 
CLI: all documents that match it will be updated with the FlowFile content. The 
update mode has two choices: whole document, or with operators. When updating 
the entire document, the processor expects the FlowFile content to be properly 
formatted JSON, and every document matched by the Update Query is completely 
replaced with it. When updating with operators, the FlowFile content must be 
ONLY the operator part, e.g. {$set: {"f1": "val1"}, $inc: {"count": 10}}; it 
does not support the find() portion you would expect in the CLI. That part is 
the 'Update Query'.





[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612368#comment-16612368
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2990
  
@ijokarumawak Thanks for the update. I'm still in the process of reviewing, 
but one thing that concerns me is where we've identified Service Only in the 
scenarios above. Currently (before this PR), in the Enable case we allow the 
user to specify whether they want to enable just this service, or this service 
and all components that reference it (including other services and their 
referencing components). In the Disable case, we require that the user disable 
this service and all referencing components, because the referencing 
components require this service's availability to continue running.

The issue we're hitting now is that a user with the permissions outlined 
above with Service Only will be able to enable this service but will be unable 
to subsequently disable it. Because of this, I'm wondering if we need to be 
even stricter and prevent these cases via the UI. I don't think that's too 
restrictive, as this is more of a corner case. The more common use case here 
will be granting operators permissions to the read policies and operation 
policies for these components.

Thoughts?




> New user role: Operator who can start and stop components
> -
>
> Key: NIFI-375
> URL: https://issues.apache.org/jira/browse/NIFI-375
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Daniel Ueberfluss
>Assignee: Koji Kawamura
>Priority: Major
>
> Would like to have a user role that allows a user to stop/start processors 
> but perform no other changes to the dataflow.
> This would allow users to address simple problems without providing full 
> access to modifying a data flow. 







[GitHub] nifi issue #2983: NIFI-5566 Improve HashContent processor and standardize Ha...

2018-09-12 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2983
  
Thanks. I may have published the wrong version of the template. I had a 
process group with processors configured to work with the current behavior, as 
well as one with the properties configured as you described which will be the 
behavior after [NIFI-5582](https://issues.apache.org/jira/browse/NIFI-5582) is 
implemented. 


---


[jira] [Commented] (NIFI-5566) Bring HashContent inline with HashService and rename legacy components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612343#comment-16612343
 ] 

ASF GitHub Bot commented on NIFI-5566:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2983
  
Thanks. I may have published the wrong version of the template. I had a 
process group with processors configured to work with the current behavior, as 
well as one with the properties configured as you described which will be the 
behavior after [NIFI-5582](https://issues.apache.org/jira/browse/NIFI-5582) is 
implemented. 


> Bring HashContent inline with HashService and rename legacy components
> --
>
> Key: NIFI-5566
> URL: https://issues.apache.org/jira/browse/NIFI-5566
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: backwards-compatibility, hash, security
>
> As documented in [NIFI-5147|https://issues.apache.org/jira/browse/NIFI-5147] 
> and [PR 2980|https://github.com/apache/nifi/pull/2980], the {{HashAttribute}} 
> processor and {{HashContent}} processor are lacking some features, do not 
> offer consistent algorithms across platforms, etc. 
> I propose the following:
> * Rename {{HashAttribute}} (which does not provide the service of calculating 
> a hash over one or more attributes) to {{HashAttributeLegacy}}
> * Renamed {{CalculateAttributeHash}} to {{HashAttribute}} to make semantic 
> sense
> * Rename {{HashContent}} to {{HashContentLegacy}} for users who need obscure 
> digest algorithms which may or may not have been offered on their platform
> * Implement a processor {{HashContent}} with similar semantics to the 
> existing processor but with consistent algorithm offerings and using the 
> common {{HashService}} offering
> With the new component versioning features provided as part of the flow 
> versioning behavior, silently disrupting existing flows which use these 
> processors is no longer a concern. Rather, Any flow currently using the 
> existing processors will either:
> 1. continue normal operation
> 1. require flow manager interaction and provide documentation about the change
>   1. migration notes and upgrade instructions will be provided



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5566) Bring HashContent inline with HashService and rename legacy components

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612341#comment-16612341
 ] 

ASF GitHub Bot commented on NIFI-5566:
--

Github user thenatog commented on the issue:

https://github.com/apache/nifi/pull/2983
  
@alopresto Apologies, looks like I had things a bit backward. The template 
given was generating the dynamic content with attributes:

dynamic
generator_type

and the HashAttributeProcessor was looking for:

dynamic_sha256 -> dynamic
generator_type_sha256 -> generator_type
static_sha256 -> static

Changed this to "dynamic -> dynamic_sha256" and files are now moving to Success 
with the correct hashes for ISO-8859-1, US-ASCII, UTF-8, UTF-16, UTF-16BE 
and UTF-16LE.
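The charset sensitivity behind that list can be sketched directly. This is a standalone illustration, not the processor's code: the digest is computed over the encoded bytes, so the same logical string hashes differently under UTF-8 and UTF-16.

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: SHA-256 over the bytes of a string in a given character set.
class CharsetHashDemo {

    static String sha256Hex(String value, Charset charset) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest(value.getBytes(charset))) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is guaranteed on the JVM
        }
    }

    public static void main(String[] args) {
        // Same string, different encodings, different digests.
        System.out.println(sha256Hex("abc", StandardCharsets.UTF_8));
        System.out.println(sha256Hex("abc", StandardCharsets.UTF_16));
    }
}
```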


> Bring HashContent inline with HashService and rename legacy components
> --
>
> Key: NIFI-5566
> URL: https://issues.apache.org/jira/browse/NIFI-5566
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: backwards-compatibility, hash, security
>
> As documented in [NIFI-5147|https://issues.apache.org/jira/browse/NIFI-5147] 
> and [PR 2980|https://github.com/apache/nifi/pull/2980], the {{HashAttribute}} 
> processor and {{HashContent}} processor are lacking some features, do not 
> offer consistent algorithms across platforms, etc. 
> I propose the following:
> * Rename {{HashAttribute}} (which does not provide the service of calculating 
> a hash over one or more attributes) to {{HashAttributeLegacy}}
> * Renamed {{CalculateAttributeHash}} to {{HashAttribute}} to make semantic 
> sense
> * Rename {{HashContent}} to {{HashContentLegacy}} for users who need obscure 
> digest algorithms which may or may not have been offered on their platform
> * Implement a processor {{HashContent}} with similar semantics to the 
> existing processor but with consistent algorithm offerings and using the 
> common {{HashService}} offering
> With the new component versioning features provided as part of the flow 
> versioning behavior, silently disrupting existing flows which use these 
> processors is no longer a concern. Rather, Any flow currently using the 
> existing processors will either:
> 1. continue normal operation
> 1. require flow manager interaction and provide documentation about the change
>   1. migration notes and upgrade instructions will be provided



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612227#comment-16612227
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2947
  
@ijokarumawak The scenario that you laid out to drop flowfiles should work 
just fine. I have tested it several times and just tested again. All FlowFiles 
were successfully dropped for me. Can you verify that the destination of your 
connection was stopped and did not have any of the FlowFiles in its possession?
You might also want to try enabling DEBUG logging for 
org.apache.nifi.controller.queue.SwappablePriorityQueue - it does log quite a 
few debug statements when performing a drop request.
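For reference, enabling that logger is a one-line addition to NiFi's conf/logback.xml (a sketch; the logger name comes from the comment above, and stock NiFi rescans logback.xml periodically, so a restart may not be needed):

```xml
<!-- Add inside the <configuration> element of conf/logback.xml -->
<logger name="org.apache.nifi.controller.queue.SwappablePriorityQueue" level="DEBUG"/>
```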


> Allow data in a Connection to be Load-Balanced across cluster
> -
>
> Key: NIFI-5516
> URL: https://issues.apache.org/jira/browse/NIFI-5516
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> Allow user to configure a Connection to be load balanced across the cluster. 
> For more information, see Feature Proposal at 
> https://cwiki.apache.org/confluence/display/NIFI/Load-Balanced+Connections



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612188#comment-16612188
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r217047466
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-client-dto/src/main/java/org/apache/nifi/web/api/dto/ConnectionDTO.java
 ---
@@ -231,6 +239,46 @@ public void setPrioritizers(List<String> prioritizers) {
        this.prioritizers = prioritizers;
    }
 
+@ApiModelProperty(value = "How to load balance the data in this 
Connection across the nodes in the cluster.",
+allowableValues = "DO_NOT_LOAD_BALANCE, PARTITION_BY_ATTRIBUTE, 
ROUND_ROBIN, SINGLE_NODE")
+public String getLoadBalanceStrategy() {
+return loadBalanceStrategy;
+}
+
+public void setLoadBalanceStrategy(String loadBalanceStrategy) {
+this.loadBalanceStrategy = loadBalanceStrategy;
+}
+
+@ApiModelProperty(value = "The FlowFile Attribute to use for 
determining which node a FlowFile will go to if the Load Balancing Strategy is 
set to PARTITION_BY_ATTRIBUTE")
+public String getLoadBalancePartitionAttribute() {
+return loadBalancePartitionAttribute;
+}
+
+public void setLoadBalancePartitionAttribute(String 
partitionAttribute) {
+this.loadBalancePartitionAttribute = partitionAttribute;
+}
+
+@ApiModelProperty(value = "Whether or not data should be compressed 
when being transferred between nodes in the cluster.",
+allowableValues = "DO_NOT_COMPRESS, COMPRESS_ATTRIBUTES_ONLY, 
COMPRESS_ATTRIBUTES_AND_CONTENT")
+public String getLoadBalanceCompression() {
+return loadBalanceCompression;
+}
+
+public void setLoadBalanceCompression(String compression) {
+this.loadBalanceCompression = compression;
+}
+
+@ApiModelProperty(value = "The current status of the Connection's Load 
Balancing Activities. Status can indicate that Load Balancing is not configured 
for the connection, that Load Balancing " +
--- End diff --

Good catch. Will address.
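As an aside, the four allowableValues strings in the diff above map naturally onto an enum, so an unknown strategy fails fast instead of being stored as free text. A hypothetical sketch only (NiFi's framework has its own strategy type; this class is illustrative):

```java
// Illustrative only: the allowableValues from the DTO's @ApiModelProperty
// mapped onto an enum for validation.
class LoadBalanceStrategyDemo {

    enum Strategy { DO_NOT_LOAD_BALANCE, PARTITION_BY_ATTRIBUTE, ROUND_ROBIN, SINGLE_NODE }

    static Strategy parse(String value) {
        // valueOf throws IllegalArgumentException for anything outside the four values
        return Strategy.valueOf(value);
    }

    public static void main(String[] args) {
        System.out.println(parse("ROUND_ROBIN")); // prints ROUND_ROBIN
    }
}
```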


> Allow data in a Connection to be Load-Balanced across cluster
> -
>
> Key: NIFI-5516
> URL: https://issues.apache.org/jira/browse/NIFI-5516
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> Allow user to configure a Connection to be load balanced across the cluster. 
> For more information, see Feature Proposal at 
> https://cwiki.apache.org/confluence/display/NIFI/Load-Balanced+Connections



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Comment Edited] (NIFI-5448) Failed EL date parsing live-locks processors without a failure relationship

2018-09-12 Thread Mark Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612166#comment-16612166
 ] 

Mark Payne edited comment on NIFI-5448 at 9/12/18 1:59 PM:
---

[~mike.thomsen] [~mcgilman] [~pvillard] - I am re-opening this ticket, because 
I have some concerns about the changes here. The idea here was to add a new 
property and a new relationship, and the default will be to not use that new 
relationship and make it auto-terminated. This way, existing flows do not 
become invalid and continue to work as they have been working. What I think is 
being overlooked here is how this affects flows that are versioned in the Flow 
Registry.

For any flow that is stored in the Flow Registry, when upgrading to this new 
version, we will see that the local flow is Locally Modified because it now has 
a new Relationship that is auto-terminated. This is a valid "Local 
Modification." This is a bit annoying for users, as they have to now push a new 
version of the flow to the Flow Registry, but not a big deal.

However, where this does become a big deal is if a user has a flow in the Flow 
Registry that uses UpdateAttribute and is not currently on the Latest version 
of the flow. For example, My Flow has 8 versions, and I am using version 6 of 
My Flow. For valid reasons, I want to stick with version 6 (for instance, my 
downstream consumer cannot handle the data that is produced by version 8). I 
now update my NiFi to this latest version. What will now happen is that I will 
have a flow that is both Locally Modified AND Not the Latest Version. If I 
attempt to Revert Changes, then I still have Local Modifications because those 
Local Modifications are due to hardcoded changes in the Processor.

Now, all of this stems from the fact that our Version Conflict Management is 
very crude at the moment. This will certainly need to be improved in the 
future, so that even if we have local modifications and the versioned flow has 
changed, we will need to be able to resolve those conflicts. But right now, we 
are just not there yet. Given that, I consider this a change that does in fact 
break backward compatibility. I think we are going to need to roll back these 
changes for 1.8.0 and only introduce such changes after we have a more robust 
Conflict Management story. While I understand that it's not ideal, I would 
recommend for the specified use case, for now, using a RouteOnAttribute to 
check if {{test:matches('\d{4}-\d{2}-\d{2}')}} and if not, routing to an 
'invalid' relationship before sending to UpdateAttribute – or alternatively 
using a more complex Expression using if/then/else clauses so that the toDate() 
method is called only on valid values.


was (Author: markap14):
[~mike.thomsen] [~mcgilman] [~pvillard] - I am re-opening this ticket, because 
I have some concerns about the changes here. The idea here was to add a new 
property and a new relationship, and the default will be to not use that new 
relationship and make it auto-terminated. This way, existing flows do not 
become invalid and continue to work as they have been working. What I think is 
being overlooked here is how this affects flows that are versioned in the Flow 
Registry.

For any flow that is stored in the Flow Registry, when upgrading to this new 
version, we will see that the local flow is Locally Modified because it now has 
a new Relationship that is auto-terminated. This is a valid "Local 
Modification." This is a bit annoying for users, as they have to now push a new 
version of the flow to the Flow Registry, but not a big deal.

However, where this does become a big deal is if a user has a flow in the Flow 
Registry that uses UpdateAttribute and is not currently on the Latest version 
of the flow. For example, My Flow has 8 versions, and I am using version 6 of 
My Flow. For valid reasons, I want to stick with version 6 (for instance, my 
downstream consumer cannot handle the data that is produced by version 8). I 
now update my NiFi to this latest version. What will now happen is that I will 
have a flow that is both Locally Modified AND Not the Latest Version. If I 
attempt to Revert Changes, then I still have Local Modifications because those 
Local Modifications are due to hardcoded changes in the Processor.

Now, all of this stems from the fact that our Version Conflict Management is 
very crude at the moment. This will certainly need to be improved in the 
future, so that even if we have local modifications and the versioned flow has 
changed, we will need to be able to resolve those conflicts. But right now, we 
are just not there yet. Given that, I consider this a change that does in fact 
break backward compatibility. I think we are going to need to roll back these 
changes for 1.8.0 and only introduce such changes after we have a more robust 
Conflict Management story. While I understand that it's not ideal, 

[jira] [Commented] (NIFI-5448) Failed EL date parsing live-locks processors without a failure relationship

2018-09-12 Thread Mark Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612166#comment-16612166
 ] 

Mark Payne commented on NIFI-5448:
--

[~mike.thomsen] [~mcgilman] [~pvillard] - I am re-opening this ticket, because 
I have some concerns about the changes here. The idea here was to add a new 
property and a new relationship, and the default will be to not use that new 
relationship and make it auto-terminated. This way, existing flows do not 
become invalid and continue to work as they have been working. What I think is 
being overlooked here is how this affects flows that are versioned in the Flow 
Registry.

For any flow that is stored in the Flow Registry, when upgrading to this new 
version, we will see that the local flow is Locally Modified because it now has 
a new Relationship that is auto-terminated. This is a valid "Local 
Modification." This is a bit annoying for users, as they have to now push a new 
version of the flow to the Flow Registry, but not a big deal.

However, where this does become a big deal is if a user has a flow in the Flow 
Registry that uses UpdateAttribute and is not currently on the Latest version 
of the flow. For example, My Flow has 8 versions, and I am using version 6 of 
My Flow. For valid reasons, I want to stick with version 6 (for instance, my 
downstream consumer cannot handle the data that is produced by version 8). I 
now update my NiFi to this latest version. What will now happen is that I will 
have a flow that is both Locally Modified AND Not the Latest Version. If I 
attempt to Revert Changes, then I still have Local Modifications because those 
Local Modifications are due to hardcoded changes in the Processor.

Now, all of this stems from the fact that our Version Conflict Management is 
very crude at the moment. This will certainly need to be improved in the 
future, so that even if we have local modifications and the versioned flow has 
changed, we will need to be able to resolve those conflicts. But right now, we 
are just not there yet. Given that, I consider this a change that does in fact 
break backward compatibility. I think we are going to need to roll back these 
changes for 1.8.0 and only introduce such changes after we have a more robust 
Conflict Management story. While I understand that it's not ideal, I would 
recommend for the specified use case, for now, using a RouteOnAttribute to 
check if {{test:matches('\d{4}-\d{2}-\d{2}')}} and if not, routing to an 
'invalid' relationship before sending to UpdateAttribute – or alternatively 
using a more complex Expression using if/then/else clauses so that the toDate() 
method is called only on valid values.
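The RouteOnAttribute guard suggested here would look roughly like this (the property name 'valid-date' is illustrative; FlowFiles that don't match take the processor's 'unmatched' relationship, which plays the 'invalid' role):

```
RouteOnAttribute
  Routing Strategy : Route to Property name
  valid-date       : ${test:matches('\d{4}-\d{2}-\d{2}')}
```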

> Failed EL date parsing live-locks processors without a failure relationship
> ---
>
> Key: NIFI-5448
> URL: https://issues.apache.org/jira/browse/NIFI-5448
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: David Koster
>Assignee: Mike Thomsen
>Priority: Major
> Fix For: 1.8.0
>
>
> Processors that utilize the Expression Language need to always present a 
> failure relationship.
> If a processor with only a success relationship, for example UpdateAttribute, 
> utilizes the expression language to perform type coercion to a date and 
> fails, the processor will be unable to dispose of the FlowFile and remain 
> blocked indefinitely.
> Recreation flow:
> GenerateFlowFile -> Update Attribute #1 -> Update Attribute #2 -> Anything
> Update Attribute #1 - test = "Hello World"
> Update Attribute #2 - test = ${test:toDate('yyyy-MM-dd')}
>  
> Generates an IllegalAttributeException on UpdateAttribute.
>  
> The behavior should match numerical type coercion and silently skip the 
> processing or offer failure relationships on processors supporting EL



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (NIFI-5448) Failed EL date parsing live-locks processors without a failure relationship

2018-09-12 Thread Mark Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne reopened NIFI-5448:
--

> Failed EL date parsing live-locks processors without a failure relationship
> ---
>
> Key: NIFI-5448
> URL: https://issues.apache.org/jira/browse/NIFI-5448
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: David Koster
>Assignee: Mike Thomsen
>Priority: Major
> Fix For: 1.8.0
>
>
> Processors that utilize the Expression Language need to always present a 
> failure relationship.
> If a processor with only a success relationship, for example UpdateAttribute, 
> utilizes the expression language to perform type coercion to a date and 
> fails, the processor will be unable to dispose of the FlowFile and remain 
> blocked indefinitely.
> Recreation flow:
> GenerateFlowFile -> Update Attribute #1 -> Update Attribute #2 -> Anything
> Update Attribute #1 - test = "Hello World"
> Update Attribute #2 - test = ${test:toDate('yyyy-MM-dd')}
>  
> Generates an IllegalAttributeException on UpdateAttribute.
>  
> The behavior should match numerical type coercion and silently skip the 
> processing or offer failure relationships on processors supporting EL



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-4532) OpenID Connect User Authentication

2018-09-12 Thread Sarthak (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sarthak reassigned NIFI-4532:
-

Assignee: Sarthak

> OpenID Connect User Authentication
> --
>
> Key: NIFI-4532
> URL: https://issues.apache.org/jira/browse/NIFI-4532
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Docker
>Reporter: Aldrin Piri
>Assignee: Sarthak
>Priority: Major
>
> Provide configuration for OpenID Connect user authentication in Docker images



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4710) Kerberos

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16611770#comment-16611770
 ] 

ASF GitHub Bot commented on NIFI-4710:
--

Github user SarthakSahu commented on the issue:

https://github.com/apache/nifi/pull/2866
  
@pepov The PR is ready with the SPNEGO enhancement, and the last review 
comments have been addressed. Please review it. 


> Kerberos
> 
>
> Key: NIFI-4710
> URL: https://issues.apache.org/jira/browse/NIFI-4710
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Docker
>Reporter: Aldrin Piri
>Assignee: Sarthak
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (NIFI-5516) Allow data in a Connection to be Load-Balanced across cluster

2018-09-12 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16611741#comment-16611741
 ] 

ASF GitHub Bot commented on NIFI-5516:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2947#discussion_r216925801
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-client-dto/src/main/java/org/apache/nifi/web/api/dto/ConnectionDTO.java
 ---
@@ -231,6 +239,46 @@ public void setPrioritizers(List<String> prioritizers) {
        this.prioritizers = prioritizers;
    }
 
+@ApiModelProperty(value = "How to load balance the data in this 
Connection across the nodes in the cluster.",
+allowableValues = "DO_NOT_LOAD_BALANCE, PARTITION_BY_ATTRIBUTE, 
ROUND_ROBIN, SINGLE_NODE")
+public String getLoadBalanceStrategy() {
+return loadBalanceStrategy;
+}
+
+public void setLoadBalanceStrategy(String loadBalanceStrategy) {
+this.loadBalanceStrategy = loadBalanceStrategy;
+}
+
+@ApiModelProperty(value = "The FlowFile Attribute to use for 
determining which node a FlowFile will go to if the Load Balancing Strategy is 
set to PARTITION_BY_ATTRIBUTE")
+public String getLoadBalancePartitionAttribute() {
+return loadBalancePartitionAttribute;
+}
+
+public void setLoadBalancePartitionAttribute(String 
partitionAttribute) {
+this.loadBalancePartitionAttribute = partitionAttribute;
+}
+
+@ApiModelProperty(value = "Whether or not data should be compressed 
when being transferred between nodes in the cluster.",
+allowableValues = "DO_NOT_COMPRESS, COMPRESS_ATTRIBUTES_ONLY, 
COMPRESS_ATTRIBUTES_AND_CONTENT")
+public String getLoadBalanceCompression() {
+return loadBalanceCompression;
+}
+
+public void setLoadBalanceCompression(String compression) {
+this.loadBalanceCompression = compression;
+}
+
+@ApiModelProperty(value = "The current status of the Connection's Load 
Balancing Activities. Status can indicate that Load Balancing is not configured 
for the connection, that Load Balancing " +
--- End diff --

Better to annotate this method with readOnly = true.


> Allow data in a Connection to be Load-Balanced across cluster
> -
>
> Key: NIFI-5516
> URL: https://issues.apache.org/jira/browse/NIFI-5516
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> Allow user to configure a Connection to be load balanced across the cluster. 
> For more information, see Feature Proposal at 
> https://cwiki.apache.org/confluence/display/NIFI/Load-Balanced+Connections



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

