[jira] [Commented] (NIFI-3050) Restrict dangerous processors to special permission

2016-11-16 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15673059#comment-15673059
 ] 

Pierre Villard commented on NIFI-3050:
--

+1 great discussion

It'll need to be well documented, but it will provide better security management.

One question: how will backward compatibility be ensured? That is, what will 
the behavior be when NiFi is started with an existing workflow containing 
restricted processors? A message like "insufficient permissions, please contact 
the administrator"?

> Restrict dangerous processors to special permission
> ---
>
> Key: NIFI-3050
> URL: https://issues.apache.org/jira/browse/NIFI-3050
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Blocker
>  Labels: security
> Fix For: 1.1.0
>
>
> As evidenced by [NIFI-3045] and other discoveries (e.g. using an 
> {{ExecuteScript}} processor to iterate over a {{NiFiProperties}} instance 
> after the application has already decrypted the sensitive properties from the 
> {{nifi.properties}} file on disk, using a {{GetFile}} processor to retrieve 
> {{/etc/passwd}}, etc.), NiFi is a powerful tool which can allow unauthorized 
> users to perform malicious actions. While no tool as versatile as NiFi will 
> ever be completely immune to insider threat, to further restrict the 
> potential for abuse, certain processors should be designated as 
> {{restricted}}, and these processors can only be added to the canvas or 
> modified by users who, along with the proper permission to modify the canvas, 
> have a special permission to interact with these "dangerous" processors. 
> From the [Security Feature 
> Roadmap|https://cwiki.apache.org/confluence/display/NIFI/Security+Feature+Roadmap]:
> {quote}
> Dangerous Processors
> * Processors which can directly affect behavior/configuration of NiFi/other 
> services
> - {{GetFile}}
> - {{PutFile}}
> - {{ListFile}}
> - {{FetchFile}}
> - {{ExecuteScript}}
> - {{InvokeScriptedProcessor}}
> - {{ExecuteProcess}}
> - {{ExecuteStreamCommand}}
> * These processors should only be creatable/editable by users with special 
> access control policy
> * Marked by {{@Restricted}} annotation on processor class
> * All flowfiles originating/passing through these processors have special 
> attribute/protection
> * Perhaps *File processors can access a certain location by default but 
> cannot access the root filesystem without special user permission?
> {quote}
> [~mcgilman] and I should have a PR for this tomorrow. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3053) InvokeHTTP sending multiple requests to the remote URL instead of one

2016-11-16 Thread Tony (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tony  updated NIFI-3053:

Summary: InvokeHTTP sending multiple requests to the remote URL instead of 
one  (was: InvokeHTTP sending request multiple time a single request )

> InvokeHTTP sending multiple requests to the remote URL instead of one
> ---
>
> Key: NIFI-3053
> URL: https://issues.apache.org/jira/browse/NIFI-3053
> Project: Apache NiFi
>  Issue Type: Bug
> Environment: Linux OS
>Reporter: Tony 
> Fix For: 1.0.0
>
>
> Hi all,
> While performing a task, I observed that the InvokeHTTP processor sends 
> multiple requests to the remote URL instead of one in some cases. After 
> stopping that component and restarting it, it behaves normally again.
> In the configuration, all relationships (failure, no retry, retry, and 
> original) are auto-terminated, and the response is connected to LogAttribute.
> Please help to resolve the issue.





[jira] [Created] (NIFI-3053) InvokeHTTP sending request multiple time a single request

2016-11-16 Thread Tony (JIRA)
Tony  created NIFI-3053:
---

 Summary: InvokeHTTP sending request multiple time a single request 
 Key: NIFI-3053
 URL: https://issues.apache.org/jira/browse/NIFI-3053
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Tony 








[jira] [Assigned] (NIFI-3052) Update Admin Guide and Getting Started Guide for additional colors added to UI

2016-11-16 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim reassigned NIFI-3052:


Assignee: Andrew Lim

> Update Admin Guide and Getting Started Guide for additional colors added to 
> UI
> ---
>
> Key: NIFI-3052
> URL: https://issues.apache.org/jira/browse/NIFI-3052
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Affects Versions: 1.1.0
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
>
> Additional color was added in the UI via NIFI-2603.
> Some of the screenshots in the Getting Started Guide and Admin Guides need to 
> be updated for these changes, mostly around the processor Alert, Run and Stop 
> icons now having color.





[jira] [Commented] (NIFI-3050) Restrict dangerous processors to special permission

2016-11-16 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672749#comment-15672749
 ] 

Joseph Witt commented on NIFI-3050:
---

+1






[jira] [Comment Edited] (NIFI-3050) Restrict dangerous processors to special permission

2016-11-16 Thread Andy LoPresto (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672742#comment-15672742
 ] 

Andy LoPresto edited comment on NIFI-3050 at 11/17/16 5:00 AM:
---

Ok, building on that, my definition would be:

{quote}
A *Restricted* component is one that can be used to execute arbitrary 
unsanitized code provided by the operator through the NiFi REST API or can be 
used to obtain or alter data on the NiFi host system using the NiFi OS 
credentials. These components could be used by an otherwise authorized NiFi 
user to go beyond the intended use of the application, escalate privilege, or 
could expose data about the internals of the NiFi process or the host system. 
All of these capabilities should be considered privileged, and admins should be 
aware of these capabilities and explicitly enable them for a subset of trusted 
users.
{quote}

I concur with exposing reasoning for each classification in the documentation 
and with your determination on {{SSLContextService}} vs. 
{{SiteToSiteProvenanceReportingTask}}. 








[jira] [Commented] (NIFI-3050) Restrict dangerous processors to special permission

2016-11-16 Thread Andy LoPresto (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672742#comment-15672742
 ] 

Andy LoPresto commented on NIFI-3050:
-

Ok, building on that, my definition would be:

{quote}
A **Restricted** component is one that can be used to execute arbitrary 
unsanitized code provided by the operator through the NiFi REST API or can be 
used to obtain or alter data on the NiFi host system using the NiFi OS 
credentials. These components could be used by an otherwise authorized NiFi 
user to go beyond the intended use of the application, escalate privilege, or 
could expose data about the internals of the NiFi process or the host system. 
All of these capabilities should be considered privileged, and admins should be 
aware of these capabilities and explicitly enable them for a subset of trusted 
users.
{quote}

I concur with exposing reasoning for each classification in the documentation 
and with your determination on {{SSLContextService}} vs. 
{{SiteToSiteProvenanceReportingTask}}. 






[jira] [Commented] (NIFI-3051) Encrypt Config - XML Parse Exception Occurs on Login Identity Providers File

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672730#comment-15672730
 ] 

ASF GitHub Bot commented on NIFI-3051:
--

GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/1238

NIFI-3051 Fixed issue serializing commented or empty login-identity-p…

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.

…roviders.xml.

Updated and added unit tests. (+1 squashed commit)
Squashed commits:
[b187202] NIFI-3051 - checked in test demonstrating failure to serialize 
commented ldap-provider section.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alopresto/nifi NIFI-3051

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1238.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1238


commit decd5e4d6000a881ee867d9020a3b9e57673a03b
Author: Andy LoPresto 
Date:   2016-11-17T03:38:59Z

NIFI-3051 Fixed issue serializing commented or empty 
login-identity-providers.xml.
Updated and added unit tests. (+1 squashed commit)
Squashed commits:
[b187202] NIFI-3051 - checked in test demonstrating failure to serialize 
commented ldap-provider section.





[GitHub] nifi pull request #1238: NIFI-3051 Fixed issue serializing commented or empt...

2016-11-16 Thread alopresto
GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/1238

NIFI-3051 Fixed issue serializing commented or empty login-identity-p…







[jira] [Commented] (NIFI-3050) Restrict dangerous processors to special permission

2016-11-16 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672647#comment-15672647
 ] 

Joseph Witt commented on NIFI-3050:
---


Let's define what it means for a component to be considered "Restricted".  
Proposed definition:

A Restricted component is one that can be utilized to execute custom code 
provided by the operator through the NiFi REST API, or one which can be used to 
obtain or alter data on the system NiFi is executing on, using the credentials 
NiFi is executing as. These components could be used by an otherwise authorized 
user of the system to go beyond their intended use, and could expose data about 
the internals of the NiFi process or the system NiFi is operating on; such 
capabilities should be considered privileged tasks that need to be specially 
limited.

...maybe not a great definition. But with that viewpoint in mind:

I don't think SSLContextService meets those criteria.

I do think SiteToSiteProvenanceReportingTask does, though, as it would allow the 
user to access raw provenance data outside the normal REST-API-controlled 
authorization model.

For each component we tag as restricted, we should document the 'why' of its 
being restricted as part of the annotation, so that we can provide that 
information through documentation and the UI.

For example, GetFile should be @Restricted("It can be used to obtain the 
contents of any file accessible to the NiFi process on the system.")
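A hypothetical Java sketch of what such an annotation could look like. The 
annotation name comes from this discussion, but its package, attributes, and 
the GetFileExample class here are illustrative assumptions, not the committed 
design:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotation carrying the 'why' as its value, so it can
// be surfaced in generated documentation and the UI.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Restricted {
    String value();
}

// Illustrative stand-in for a processor class such as GetFile.
@Restricted("It can be used to obtain the contents of any file accessible "
        + "to the NiFi process on the system.")
class GetFileExample {
}

public class RestrictedDemo {
    public static void main(String[] args) {
        // The framework could read the annotation reflectively to decide
        // whether the current user also needs the special access policy.
        Restricted r = GetFileExample.class.getAnnotation(Restricted.class);
        System.out.println(r != null ? "restricted: " + r.value()
                                     : "not restricted");
    }
}
```

Keeping the rationale in the annotation value means documentation and the UI 
can both be generated from the same single source.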



> Restrict dangerous processors to special permission
> ---
>
> Key: NIFI-3050
> URL: https://issues.apache.org/jira/browse/NIFI-3050
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Blocker
>  Labels: security
> Fix For: 1.1.0
>
>
> As evidenced by [NIFI-3045] and other discoveries (e.g. using an 
> {{ExecuteScript}} processor to iterate over a {{NiFiProperties}} instance 
> after the application has already decrypted the sensitive properties from the 
> {{nifi.properties}} file on disk, using a {{GetFile}} processor to retrieve 
> {{/etc/passwd}}, etc.) NiFi is a powerful tool which can allow unauthorized 
> users to perform malicious actions. While no tool as versatile as NiFi will 
> ever be completely immune to insider threat, to further restrict the 
> potential for abuse, certain processors should be designated as 
> {{restricted}}, and these processors can only be added to the canvas or 
> modified by users who, along with the proper permission to modify the canvas, 
> have a special permission to interact with these "dangerous" processors. 
> From the [Security Feature 
> Roadmap|https://cwiki.apache.org/confluence/display/NIFI/Security+Feature+Roadmap]:
> {quote}
> Dangerous Processors
> * Processors which can directly affect behavior/configuration of NiFi/other 
> services
> - {{GetFile}}
> - {{PutFile}}
> - {{ListFile}}
> - {{FetchFile}}
> - {{ExecuteScript}}
> - {{InvokeScriptedProcessor}}
> - {{ExecuteProcess}}
> - {{ExecuteStreamCommand}}
> * These processors should only be creatable/editable by users with special 
> access control policy
> * Marked by {{@Restricted}} annotation on processor class
> * All flowfiles originating/passing through these processors have special 
> attribute/protection
> * Perhaps *File processors can access a certain location by default but 
> cannot access the root filesystem without special user permission?
> {quote}
> [~mcgilman] and I should have a PR for this tomorrow. 





[jira] [Commented] (NIFI-3051) Encrypt Config - XML Parse Exception Occurs on Login Identity Providers File

2016-11-16 Thread Andy LoPresto (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672596#comment-15672596
 ] 

Andy LoPresto commented on NIFI-3051:
-

Thanks [~YolandaMDavis]. I believe this should be a quick fix and I will try to 
have it for tomorrow. 





[jira] [Created] (NIFI-3051) Encrypt Config - XML Parse Exception Occurs on Login Identity Providers File

2016-11-16 Thread Yolanda M. Davis (JIRA)
Yolanda M. Davis created NIFI-3051:
--

 Summary: Encrypt Config - XML Parse Exception Occurs on Login 
Identity Providers File
 Key: NIFI-3051
 URL: https://issues.apache.org/jira/browse/NIFI-3051
 Project: Apache NiFi
  Issue Type: Bug
  Components: Tools and Build
Affects Versions: 1.1.0
Reporter: Yolanda M. Davis
Assignee: Andy LoPresto


I encountered an error when attempting to run encrypt config on a 
login-identity-provider.xml file where the provider with "ldap-provider" 
identity was commented out. The exception received is below:

org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 1; Premature end of 
file.

at groovy.xml.XmlUtil.serialize(XmlUtil.java:454)
at groovy.xml.XmlUtil.serialize(XmlUtil.java:440)
at groovy.xml.XmlUtil.serialize(XmlUtil.java:182)
at groovy.xml.XmlUtil.serialize(XmlUtil.java:151)
at groovy.xml.XmlUtil$serialize.call(Unknown Source)
at 
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at 
org.apache.nifi.properties.ConfigEncryptionTool.serializeLoginIdentityProvidersAndPreserveFormat(ConfigEncryptionTool.groovy:693)
at 
org.apache.nifi.properties.ConfigEncryptionTool$serializeLoginIdentityProvidersAndPreserveFormat$0.call(Unknown
 Source)
at 
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)

I've discussed this directly with [~alopresto] and he has agreed to investigate.
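The failure class is reproducible outside the tool: a standard JAXP parser 
raises a SAXParseException when handed input with no root element, which is 
effectively what serialization receives once the only provider element has 
been commented out and dropped. A minimal Java sketch of that behavior (not 
the ConfigEncryptionTool's actual code path):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.SAXParseException;

public class PrematureEndDemo {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        // Whitespace-only input: no root element remains once the provider
        // block is commented out and the comment content is discarded.
        byte[] emptied = "\n".getBytes(StandardCharsets.UTF_8);
        try {
            builder.parse(new ByteArrayInputStream(emptied));
        } catch (SAXParseException e) {
            // JAXP reports the missing root element; with the default
            // parser the message is a "premature end of file".
            System.out.println("SAXParseException: " + e.getMessage());
        }
    }
}
```

A fix in this spirit would detect the empty/comment-only case before calling 
the serializer rather than letting the parse fail.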





[jira] [Comment Edited] (NIFI-3050) Restrict dangerous processors to special permission

2016-11-16 Thread Andy LoPresto (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672571#comment-15672571
 ] 

Andy LoPresto edited comment on NIFI-3050 at 11/17/16 3:32 AM:
---

Would you consider {{SSLContextService}} a dangerous component? What about 
{{SiteToSiteProvenanceReportingTask}}?



> Restrict dangerous processors to special permission
> ---
>
> Key: NIFI-3050
> URL: https://issues.apache.org/jira/browse/NIFI-3050
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Blocker
>  Labels: security
> Fix For: 1.1.0
>
>
> As evidenced by [NIFI-3045] and other discoveries (e.g. using an 
> {{ExecuteScript}} processor to iterate over a {{NiFiProperties}} instance 
> after the application has already decrypted the sensitive properties from the 
> {{nifi.properties}} file on disk, using a {{GetFile}} processor to retrieve 
> {{/etc/passwd}}, etc.) NiFi is a powerful tool which can allow unauthorized 
> users to perform malicious actions. While no tool as versatile as NiFi will 
> ever be completely immune to insider threat, to further restrict the 
> potential for abuse, certain processors should be designated as 
> {{restricted}}, and these processors can only be added to the canvas or 
> modified by users who, along with the proper permission to modify the canvas, 
> have a special permission to interact with these "dangerous" processors. 
> From the [Security Feature 
> Roadmap|https://cwiki.apache.org/confluence/display/NIFI/Security+Feature+Roadmap]:
> {quote}
> Dangerous Processors
> * Processors which can directly affect behavior/configuration of NiFi/other 
> services
> - {{GetFile}}
> - {{PutFile}}
> - {{ListFile}}
> - {{FetchFile}}
> - {{ExecuteScript}}
> - {{InvokeScriptedProcessor}}
> - {{ExecuteProcess}}
> - {{ExecuteStreamCommand}}
> * These processors should only be creatable/editable by users with special 
> access control policy
> * Marked by {{@Restricted}} annotation on processor class
> * All flowfiles originating/passing through these processors have special 
> attribute/protection
> * Perhaps *File processors can access a certain location by default but 
> cannot access the root filesystem without special user permission?
> {quote}
> [~mcgilman] and I should have a PR for this tomorrow. 
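The {{@Restricted}} marker proposed above could look roughly like the following sketch. The annotation name matches the proposal, but the package, attributes, and permission check are illustrative assumptions, not NiFi's actual implementation:

```java
// Sketch of a runtime-retained marker annotation and the kind of reflective
// check an authorizer could gate component creation on. All names here are
// hypothetical; they are not NiFi's real classes.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class RestrictedDemo {
    @Retention(RetentionPolicy.RUNTIME) // must survive to runtime for the framework to see it
    @Target(ElementType.TYPE)
    @interface Restricted {
        String value() default ""; // human-readable reason for the restriction
    }

    @Restricted("Provides operator the ability to execute arbitrary code")
    static class ExecuteScriptLike {}

    static class HarmlessProcessor {}

    // The authorizer would require the special policy only for annotated classes.
    static boolean requiresRestrictedPermission(Class<?> componentClass) {
        return componentClass.isAnnotationPresent(Restricted.class);
    }

    public static void main(String[] args) {
        System.out.println(requiresRestrictedPermission(ExecuteScriptLike.class));  // true
        System.out.println(requiresRestrictedPermission(HarmlessProcessor.class));  // false
    }
}
```

The runtime retention is the important design point: a source- or class-retained annotation would be invisible to the framework when it inspects a processor class loaded from a NAR.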





[jira] [Commented] (NIFI-3050) Restrict dangerous processors to special permission

2016-11-16 Thread Andy LoPresto (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672571#comment-15672571
 ] 

Andy LoPresto commented on NIFI-3050:
-

Would you consider {{SSLContextService}} a dangerous component? What about 
{{SiteToSiteProvenanceReportingTask}}?



[jira] [Commented] (NIFI-3050) Restrict dangerous processors to special permission

2016-11-16 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672563#comment-15672563
 ] 

Joseph Witt commented on NIFI-3050:
---

In my opinion, it should apply to the extension points/components. The subject 
should say "Restrict dangerous components to special permission". This means 
Processors, Controller Services, and Reporting Tasks.



[jira] [Commented] (NIFI-3050) Restrict dangerous processors to special permission

2016-11-16 Thread Matt Gilman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672558#comment-15672558
 ] 

Matt Gilman commented on NIFI-3050:
---

Should the Restricted annotation be supported on Controller Services or 
Reporting Tasks at this point or should we be solely focused on Processors? 



[jira] [Commented] (NIFI-3050) Restrict dangerous processors to special permission

2016-11-16 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672523#comment-15672523
 ] 

Joseph Witt commented on NIFI-3050:
---

Andy - I am a huge +1. Thanks for pushing this. It is certainly time, and we 
have the basic ingredients to start taking meaningful steps here.

I do think, though, that ListFile is OK to leave unrestricted, and I also don't 
think we need to do anything special with flowfile attributes at this stage. 
Do you agree?



[jira] [Updated] (NIFI-3045) Usage of -k undermines encrypted configuration

2016-11-16 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-3045:

Labels: bootstrap configuration encryption security  (was: )

> Usage of -k undermines encrypted configuration
> --
>
> Key: NIFI-3045
> URL: https://issues.apache.org/jira/browse/NIFI-3045
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.0.0
>Reporter: Anders Breindahl
>  Labels: bootstrap, configuration, encryption, security
> Attachments: 2016-11-16_dash-ks-extraction.png, 
> extract-dash-ks-from-process-list.xml
>
>
> Hey,
> When setting up a hardened NiFi installation I ran into this. I hope I'm 
> mistaken.
> When running the `encrypt-config.sh` script, one has a 
> `nifi.bootstrap.sensitive.key` string configured in `bootstrap.conf`. The 
> service startup script passes this from `RunNifi` to `NiFi` via a `-k` 
> parameter.
> This however can be retrieved by any user of the interface -- which, combined 
> with NiFi being able to read from (the 
> encrypted-under-`nifi.bootstrap.sensitive.key`) `nifi.properties` file means 
> that e.g. the `nifi.security.keystorePasswd` property can be decrypted 
> offline.
> Does this have anything to it?





[jira] [Updated] (NIFI-3045) Usage of -k undermines encrypted configuration

2016-11-16 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-3045:

Affects Version/s: 1.0.0



[jira] [Updated] (NIFI-3045) Usage of -k undermines encrypted configuration

2016-11-16 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-3045:

Component/s: Configuration



[jira] [Resolved] (NIFI-3045) Usage of -k undermines encrypted configuration

2016-11-16 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt resolved NIFI-3045.
---
Resolution: Duplicate

[~skrewz] thanks for raising this. Closing this one as a duplicate of 
NIFI-2656; please join the discussion there. As you note, an authorized user 
can place processors on the flow which do dangerous things. There are a number 
of good next steps we can take to further limit what even an authorized user 
can do.



[jira] [Commented] (NIFI-3045) Usage of -k undermines encrypted configuration

2016-11-16 Thread Andy LoPresto (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672335#comment-15672335
 ] 

Andy LoPresto commented on NIFI-3045:
-

I have previously filed [NIFI-2656] to address this by allowing the actual 
application process ({{NiFi.java}}) to securely prompt for the key if not 
received from the bootstrap process ({{RunNiFi.java}}). This means the key 
material would not be exposed to the running process list, but would require 
manual intervention on startup/restart. 
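A minimal sketch of the approach described above, using only the JDK. The `resolveKey` helper and its fallback order are illustrative assumptions, not NiFi's actual API: prefer an explicit key argument when one is supplied (the legacy `-k` path), otherwise prompt on the console so the key never appears in the process list.

```java
// Hypothetical key-resolution helper: fall back from a command-line argument
// (visible in `ps` output) to an echo-free console prompt.
import java.io.Console;

public class KeySource {
    /** Returns the key from the -k argument if present, else prompts securely. */
    static String resolveKey(String argKey) {
        if (argKey != null && !argKey.isEmpty()) {
            return argKey; // legacy path: key was passed on the command line
        }
        Console console = System.console();
        if (console == null) {
            // e.g. running under a service manager with no attached terminal
            throw new IllegalStateException("No console available to prompt for key");
        }
        // readPassword suppresses echo and keeps the key off the command line
        return new String(console.readPassword("Enter sensitive properties key: "));
    }

    public static void main(String[] args) {
        String key = resolveKey(args.length > 0 ? args[0] : null);
        System.out.println("Key received (" + key.length() + " chars)");
    }
}
```

The trade-off noted in the comment is visible in the sketch: the prompt path requires an attached console, so unattended restarts need either the insecure argument or some other key-delivery mechanism.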



[jira] [Updated] (NIFI-3045) Usage of -k undermines encrypted configuration

2016-11-16 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-3045:

Description: 
Hey,

When setting up a hardened NiFi installation I ran into this. I hope I'm 
mistaken.

When running the `encrypt-config.sh` script, one has a 
`nifi.bootstrap.sensitive.key` string configured in `bootstrap.conf`. The 
service startup script makes this be passed from `RunNifi` to`NiFi` by a `-k` 
parameter.

This however can be retrieved by any user of the interface -- which, combined 
with NiFi being able to read from (the 
encrypted-under-`nifi.bootstrap.sensitive.key`) `nifi.properties` file means 
that e.g. the `nifi.security.keystorePasswd` property can be decrypted offline.

Does this have anything to it?

  was:
Hey,

When setting up a hardened NiFi installation I ran into this. I hope I'm 
mistaken.

When running the `encrypt-config.sh` script, one has a 
`nifi.bootstrap.sensitive.key` string configured in `bootstrap.conf`. The 
service startup script makes this be passed from `RunNifi` to`NiFi` by a `-k` 
parameter.

This however can be retrieved by any user of the interface---which, combined 
with NiFi being able to read from (the 
encrypted-under-`nifi.bootstrap.sensitive.key`) `nifi.properties` file means 
that e.g. the `nifi.security.keystorePasswd` property can be decrypted offline.

Does this have anything to it?




[GitHub] nifi pull request #1165: NIFI-2943 - pkcs12 keystore improvements

2016-11-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1165


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2943) tls-toolkit pkcs12 truststore 0 entries

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672306#comment-15672306
 ] 

ASF subversion and git services commented on NIFI-2943:
---

Commit e5eda6370510337a1660008f80bf7ebf4a0ba288 in nifi's branch 
refs/heads/master from [~bryanrosan...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=e5eda63 ]

NIFI-2943 - Toolkit uses JKS type over PKCS12 when creating truststore because 
non-Bouncy Castle providers cannot read certificates from PKCS12 truststore.

Peer review feedback (+2 squashed commits)
Squashed commits:
[0102c8e] NIFI-2943 - Peer review feedback
[9bcd495] NIFI-2943 - pkcs12 keystore improvements

1. loading pkcs12 keystores with bouncy castle everywhere
2. tls-toolkit client using jks truststore when keystore type is specified 
differently
3. tests

This closes #1165.

Signed-off-by: Andy LoPresto 


> tls-toolkit pkcs12 truststore 0 entries
> ---
>
> Key: NIFI-2943
> URL: https://issues.apache.org/jira/browse/NIFI-2943
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Bryan Rosander
>Assignee: Bryan Rosander
>Priority: Minor
>
> When pkcs12 is used by the tls-toolkit, the resulting truststore has no 
> entries when inspected by the keytool and the tls-toolkit certificate 
> authority certificate is not trusted by NiFi.
> This seems to be due to the Java pkcs12 provider not supporting certificate 
> entries:
> http://stackoverflow.com/questions/3614239/pkcs12-java-keystore-from-ca-and-user-certificate-in-java#answer-3614405
> The Bouncy Castle provider does seem to support certificates but we may not 
> want to explicitly use that provider from within NiFi.
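The reported symptom (a truststore that reads back with 0 entries) can be checked programmatically with a small JDK-only harness like the one below. This is an illustrative sketch, not the toolkit's actual verification code; in the real bug the PKCS12 store *should* have contained the CA certificate but read back empty under the default provider.

```java
// Round-trip a keystore of a given type through serialization and count the
// entries the default provider can read back. An empty store is used here,
// so both types report 0; the NIFI-2943 symptom was a PKCS12 truststore that
// held a CA cert yet still read back as 0 entries.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

public class TruststoreCheck {
    static int roundTripEntryCount(String type, char[] pass) throws Exception {
        KeyStore ks = KeyStore.getInstance(type);
        ks.load(null, pass);                     // initialize an empty in-memory store
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, pass);                     // serialize to bytes
        KeyStore reloaded = KeyStore.getInstance(type);
        reloaded.load(new ByteArrayInputStream(out.toByteArray()), pass);
        return reloaded.size();                  // entries visible after reload
    }

    public static void main(String[] args) throws Exception {
        char[] pass = "password".toCharArray();
        System.out.println("JKS entries:    " + roundTripEntryCount("JKS", pass));
        System.out.println("PKCS12 entries: " + roundTripEntryCount("PKCS12", pass));
    }
}
```

The same count can be obtained externally with `keytool -list`, which is how the empty truststore was originally noticed.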







[jira] [Commented] (NIFI-2943) tls-toolkit pkcs12 truststore 0 entries

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672308#comment-15672308
 ] 

ASF GitHub Bot commented on NIFI-2943:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1165






[jira] [Commented] (NIFI-2943) tls-toolkit pkcs12 truststore 0 entries

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672293#comment-15672293
 ] 

ASF GitHub Bot commented on NIFI-2943:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1165
  
The logging issue was resolved by 
[NIFI-3049](https://issues.apache.org/jira/browse/NIFI-3049) and [PR 
1237](https://github.com/apache/nifi/pull/1237). 

Verified `contrib-check` and all tests pass. Ran the toolkit, and the logging 
output for the PKCS12 truststore type is correct. Then ran the application and 
was able to connect using a client certificate as usual. 

```

hw12203:...assembly/target/nifi-toolkit-1.1.0-SNAPSHOT-bin/nifi-toolkit-1.1.0-SNAPSHOT (pr1165) alopresto
 46s @ 16:23:25 $ ./bin/tls-toolkit.sh standalone -n 'localhost' -T PKCS12 -P password -S password
2016/11/16 16:29:40 INFO [main] org.apache.nifi.toolkit.tls.commandLine.BaseCommandLine: Command line argument --keyStoreType=PKCS12 only applies to keystore, recommended truststore type of JKS unaffected.
2016/11/16 16:29:40 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandaloneCommandLine: No nifiPropertiesFile specified, using embedded one.
2016/11/16 16:29:41 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Running standalone certificate generation with output directory ../nifi-toolkit-1.1.0-SNAPSHOT
2016/11/16 16:29:41 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Generated new CA certificate ../nifi-toolkit-1.1.0-SNAPSHOT/nifi-cert.pem and key ../nifi-toolkit-1.1.0-SNAPSHOT/nifi-key.key
2016/11/16 16:29:41 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Writing new ssl configuration to ../nifi-toolkit-1.1.0-SNAPSHOT/localhost
2016/11/16 16:29:42 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully generated TLS configuration for localhost 1 in ../nifi-toolkit-1.1.0-SNAPSHOT/localhost
2016/11/16 16:29:42 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: No clientCertDn specified, not generating any client certificates.
2016/11/16 16:29:42 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: tls-toolkit standalone completed successfully

hw12203:...assembly/target/nifi-toolkit-1.1.0-SNAPSHOT-bin/nifi-toolkit-1.1.0-SNAPSHOT (pr1165) alopresto
 377s @ 16:29:43 $ ll localhost/
total 40
drwx--   5 alopresto  staff   170B Nov 16 16:29 ./
drwxr-xr-x  11 alopresto  staff   374B Nov 16 16:29 ../
-rw---   1 alopresto  staff   3.4K Nov 16 16:29 keystore.pkcs12
-rw---   1 alopresto  staff   8.6K Nov 16 16:29 nifi.properties
-rw---   1 alopresto  staff   911B Nov 16 16:29 truststore.jks

hw12203:...assembly/target/nifi-toolkit-1.1.0-SNAPSHOT-bin/nifi-toolkit-1.1.0-SNAPSHOT (pr1165) alopresto
 17s @ 16:30:01 $
```

Squashed, merged, and closed. 


> tls-toolkit pkcs12 truststore 0 entries
> ---
>
> Key: NIFI-2943
> URL: https://issues.apache.org/jira/browse/NIFI-2943
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Bryan Rosander
>Assignee: Bryan Rosander
>Priority: Minor
>
> When pkcs12 is used by the tls-toolkit, the resulting truststore has no 
> entries when inspected by the keytool and the tls-toolkit certificate 
> authority certificate is not trusted by NiFi.
> This seems to be due to the Java pkcs12 provider not supporting certificate 
> entries:
> http://stackoverflow.com/questions/3614239/pkcs12-java-keystore-from-ca-and-user-certificate-in-java#answer-3614405
> The Bouncy Castle provider does seem to support certificates but we may not 
> want to explicitly use that provider from within NiFi.
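The "0 entries" symptom described above can be confirmed programmatically as well as with keytool. Below is a minimal diagnostic sketch; the class and method names are invented for illustration and are not part of the toolkit. It loads a serialized truststore and reports the entry count, which is the same count `keytool -list` shows.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.security.KeyStore;

public class TruststoreDiagnostic {

    // Load a serialized truststore and return the number of entries it
    // actually contains -- the same count that `keytool -list` reports.
    static int entryCount(byte[] storeBytes, String type, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance(type);
        try (InputStream in = new ByteArrayInputStream(storeBytes)) {
            ks.load(in, password);
        }
        return ks.size();
    }

    // Serialize an empty keystore of the given type; this stands in for a
    // truststore whose certificate entries were silently dropped on write.
    static byte[] emptyStore(String type, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance(type);
        ks.load(null, password);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, password);
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        char[] pw = "password".toCharArray();
        // A truststore exhibiting the reported bug loads without error but
        // contains zero entries, so the CA certificate is never trusted.
        System.out.println("entries: " + entryCount(emptyStore("PKCS12", pw), "PKCS12", pw));
    }
}
```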





[jira] [Commented] (NIFI-2943) tls-toolkit pkcs12 truststore 0 entries

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672289#comment-15672289
 ] 

ASF subversion and git services commented on NIFI-2943:
---

Commit 6117b0e1e1c1685999436ba495ef96eb676658fe in nifi's branch 
refs/heads/master from [~bryanrosan...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=6117b0e ]

NIFI-2943 - Toolkit uses JKS type over PKCS12 when creating truststore because 
non-Bouncy Castle providers cannot read certificates from PKCS12 truststore.

Peer review feedback (+2 squashed commits)
Squashed commits:
[0102c8e] NIFI-2943 - Peer review feedback
[9bcd495] NIFI-2943 - pkcs12 keystore improvements

1. loading pkcs12 keystores with bouncy castle everywhere
2. tls-toolkit client using jks truststore when keystore type is specified 
differently
3. tests
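The rule in item 2 above (the client honors the requested keystore type but falls back to a JKS truststore) can be sketched as follows; the method names are illustrative and not the toolkit's actual API:

```java
public class TruststoreTypeRule {

    // The keystore honors whatever store type the operator requested...
    static String keystoreType(String requested) {
        return requested;
    }

    // ...but the truststore is always written as JKS, because non-Bouncy
    // Castle providers cannot read certificate entries from a PKCS12
    // truststore (the root cause of NIFI-2943). Illustrative names only.
    static String truststoreType(String requestedKeystoreType) {
        return "JKS";
    }

    public static void main(String[] args) {
        System.out.println(keystoreType("PKCS12") + " keystore, "
                + truststoreType("PKCS12") + " truststore");
    }
}
```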

This closes #1165.

Signed-off-by: Andy LoPresto 


> tls-toolkit pkcs12 truststore 0 entries
> ---
>
> Key: NIFI-2943
> URL: https://issues.apache.org/jira/browse/NIFI-2943
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Bryan Rosander
>Assignee: Bryan Rosander
>Priority: Minor
>
> When pkcs12 is used by the tls-toolkit, the resulting truststore has no 
> entries when inspected by the keytool and the tls-toolkit certificate 
> authority certificate is not trusted by NiFi.
> This seems to be due to the Java pkcs12 provider not supporting certificate 
> entries:
> http://stackoverflow.com/questions/3614239/pkcs12-java-keystore-from-ca-and-user-certificate-in-java#answer-3614405
> The Bouncy Castle provider does seem to support certificates but we may not 
> want to explicitly use that provider from within NiFi.





[GitHub] nifi issue #1165: NIFI-2943 - pkcs12 keystore improvements

2016-11-16 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1165
  
The logging issue was resolved by 
[NIFI-3049](https://issues.apache.org/jira/browse/NIFI-3049) and [PR 
1237](https://github.com/apache/nifi/pull/1237). 

Verified `contrib-check` and all tests pass. Ran the toolkit and confirmed the 
logging output for the PKCS12 truststore type is correct. Then ran the 
application and was able to connect using a client certificate as usual. 

```

hw12203:...assembly/target/nifi-toolkit-1.1.0-SNAPSHOT-bin/nifi-toolkit-1.1.0-SNAPSHOT
 (pr1165) alopresto
🔓 46s @ 16:23:25 $ ./bin/tls-toolkit.sh standalone -n 'localhost' -T 
PKCS12 -P password -S password
2016/11/16 16:29:40 INFO [main] 
org.apache.nifi.toolkit.tls.commandLine.BaseCommandLine: Command line argument 
--keyStoreType=PKCS12 only applies to keystore, recommended truststore type of 
JKS unaffected.
2016/11/16 16:29:40 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandaloneCommandLine: No 
nifiPropertiesFile specified, using embedded one.
2016/11/16 16:29:41 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Running standalone 
certificate generation with output directory ../nifi-toolkit-1.1.0-SNAPSHOT
2016/11/16 16:29:41 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Generated new CA 
certificate ../nifi-toolkit-1.1.0-SNAPSHOT/nifi-cert.pem and key 
../nifi-toolkit-1.1.0-SNAPSHOT/nifi-key.key
2016/11/16 16:29:41 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Writing new ssl 
configuration to ../nifi-toolkit-1.1.0-SNAPSHOT/localhost
2016/11/16 16:29:42 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully 
generated TLS configuration for localhost 1 in 
../nifi-toolkit-1.1.0-SNAPSHOT/localhost
2016/11/16 16:29:42 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: No clientCertDn 
specified, not generating any client certificates.
2016/11/16 16:29:42 INFO [main] 
org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: tls-toolkit 
standalone completed successfully

hw12203:...assembly/target/nifi-toolkit-1.1.0-SNAPSHOT-bin/nifi-toolkit-1.1.0-SNAPSHOT
 (pr1165) alopresto
🔓 377s @ 16:29:43 $ ll localhost/
total 40
drwx--   5 alopresto  staff   170B Nov 16 16:29 ./
drwxr-xr-x  11 alopresto  staff   374B Nov 16 16:29 ../
-rw---   1 alopresto  staff   3.4K Nov 16 16:29 keystore.pkcs12
-rw---   1 alopresto  staff   8.6K Nov 16 16:29 nifi.properties
-rw---   1 alopresto  staff   911B Nov 16 16:29 truststore.jks

hw12203:...assembly/target/nifi-toolkit-1.1.0-SNAPSHOT-bin/nifi-toolkit-1.1.0-SNAPSHOT
 (pr1165) alopresto
🔓 17s @ 16:30:01 $
```

Squashed, merged, and closed. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[GitHub] nifi issue #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/1233
  
I just confirmed functionality with ES 5.0.1, with and without X-Pack 
security. Once the commented issues are addressed, it should be good to go.




[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672259#comment-15672259
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/1233
  
I just confirmed functionality with ES 5.0.1, with and without X-Pack 
security. Once the commented issues are addressed, it should be good to go.


> Support Elasticsearch 5.0 for Put/FetchElasticsearch processors
> ---
>
> Key: NIFI-3011
> URL: https://issues.apache.org/jira/browse/NIFI-3011
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> Now that Elastic has released a new major version (5.0) of Elasticsearch, the 
> Put/FetchElasticsearch processors would need to be upgraded (or duplicated) 
> as the major version of the transport client needs to match the major version 
> of the Elasticsearch cluster.
> If an upgrade is selected, Put/FetchES will no longer work with Elasticsearch 
> 2.x clusters, so in that case users would want to switch to the HTTP versions 
> of those processors. However, this might not be desirable (due to performance 
> concerns with the HTTP API vs. the transport API), so care must be taken when 
> deciding whether to upgrade the existing processors or create new ones.
> Creating new versions of these processors (to use the 5.0 transport client) 
> will also take some consideration, as it is unlikely that the different 
> versions can coexist in the same NAR due to classloading issues (e.g., 
> multiple versions of JARs containing the same class names). It may be 
> necessary to create an "elasticsearch-5.0" version of the NAR, containing 
> only the new versions of these processors.





[jira] [Updated] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and does not provide the correct command line usage output

2016-11-16 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-3049:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and 
> does not provide the correct command line usage output
> -
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
> Fix For: 1.1.0
>
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-properties-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The command line output should be modified to adhere to the standard 
> established by the other tools in the toolkit.
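The binding conflict described above is typically resolved in the build by keeping exactly one SLF4J binding on the classpath. Below is a hedged sketch of the kind of Maven exclusion the fix describes; the logback-classic coordinates are the standard ones, while attaching the exclusion to an `org.apache.nifi:nifi-properties-loader` dependency is assumed from the issue text rather than taken from the actual pom.

```xml
<!-- Keep log4j as the only SLF4J binding in the toolkit: exclude the
     logback binding pulled in transitively. Illustrative sketch only. -->
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-properties-loader</artifactId>
    <exclusions>
        <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```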





[jira] [Commented] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and does not provide the correct command line usage output

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672119#comment-15672119
 ] 

ASF GitHub Bot commented on NIFI-3049:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1237
  
Verified `contrib-check` and all tests pass. Ran the TLS toolkit and the 
encrypt-config toolkit; both successfully logged to the console. Verified that 
`NiFiPropertiesLoader` successfully logged to `nifi-app.log` when loading 
encrypted properties. Verified application smoke tests. 


> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and 
> does not provide the correct command line usage output
> -
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
> Fix For: 1.1.0
>
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-properties-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The command line output should be modified to adhere to the standard 
> established by the other tools in the toolkit.





[jira] [Commented] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and does not provide the correct command line usage output

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672115#comment-15672115
 ] 

ASF GitHub Bot commented on NIFI-3049:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1237


> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and 
> does not provide the correct command line usage output
> -
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
> Fix For: 1.1.0
>
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-properties-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The command line output should be modified to adhere to the standard 
> established by the other tools in the toolkit.






[jira] [Commented] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and does not provide the correct command line usage output

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672114#comment-15672114
 ] 

ASF subversion and git services commented on NIFI-3049:
---

Commit fa13832a9c07b20e968efc5d8baf7e7e09e1a7b1 in nifi's branch 
refs/heads/master from [~jtstorck]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=fa13832 ]

NIFI-3049 Fixes logging issues due to logback and log4j being on the classpath
Removed logback usage from classpath
Added slf4j-log4j12 dependency in nifi-toolkit pom
Added logback-classic exclusion for nifi-properties-loader used by 
nifi-toolkit-encrypt-config
Updated log4j.properties logging pattern and logger config in 
nifi-toolkit-assembly and nifi-toolkit-zookeeper-migrator, filtering zookeeper 
messages below WARN
Removed logback.groovy since log4j is the single logging implementation
Updated ZooKeeperMigratorMain command line output to match standards 
established by other tools in nifi-toolkit

This closes #1237.

Signed-off-by: Andy LoPresto 


> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and 
> does not provide the correct command line usage output
> -
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
> Fix For: 1.1.0
>
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-properties-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The command line output should be modified to adhere to the standard 
> established by the other tools in the toolkit.





[GitHub] nifi pull request #1237: NIFI-3049 Fixes logging issues due to logback and l...

2016-11-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1237




[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672052#comment-15672052
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/1233
  
Also, neither processor emits any provenance events; provenance reporting 
should be added.


> Support Elasticsearch 5.0 for Put/FetchElasticsearch processors
> ---
>
> Key: NIFI-3011
> URL: https://issues.apache.org/jira/browse/NIFI-3011
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> Now that Elastic has released a new major version (5.0) of Elasticsearch, the 
> Put/FetchElasticsearch processors would need to be upgraded (or duplicated) 
> as the major version of the transport client needs to match the major version 
> of the Elasticsearch cluster.
> If an upgrade is selected, Put/FetchES will no longer work with Elasticsearch 
> 2.x clusters, so in that case users would want to switch to the HTTP versions 
> of those processors. However, this might not be desirable (due to performance 
> concerns with the HTTP API vs. the transport API), so care must be taken when 
> deciding whether to upgrade the existing processors or create new ones.
> Creating new versions of these processors (to use the 5.0 transport client) 
> will also take some consideration, as it is unlikely that the different 
> versions can coexist in the same NAR due to classloading issues (e.g., 
> multiple versions of JARs containing the same class names). It may be 
> necessary to create an "elasticsearch-5.0" version of the NAR, containing 
> only the new versions of these processors.





[GitHub] nifi issue #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/1233
  
Also, neither processor emits any provenance events; provenance reporting 
should be added.




[jira] [Commented] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and does not provide the correct command line usage output

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671973#comment-15671973
 ] 

ASF GitHub Bot commented on NIFI-3049:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1237
  
Reviewing...


> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and 
> does not provide the correct command line usage output
> -
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
> Fix For: 1.1.0
>
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-properties-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The command line output should be modified to adhere to the standard 
> established by the other tools in the toolkit.





[GitHub] nifi issue #1237: NIFI-3049 Fixes logging issues due to logback and log4j be...

2016-11-16 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1237
  
Reviewing...




[jira] [Updated] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and does not provide the correct command line usage output

2016-11-16 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-3049:
--
Status: Patch Available  (was: In Progress)

> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and 
> does not provide the correct command line usage output
> -
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
> Fix For: 1.1.0
>
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-properties-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The command line output should be modified to adhere to the standard 
> established by the other tools in the toolkit.





[jira] [Updated] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and does not provide the correct command line usage output

2016-11-16 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-3049:
--
Fix Version/s: 1.1.0

> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and 
> does not provide the correct command line usage output
> -
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
> Fix For: 1.1.0
>
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-properties-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The command line output should be modified to adhere to the standard 
> established by the other tools in the toolkit.





[jira] [Commented] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and does not provide the correct command line usage output

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671910#comment-15671910
 ] 

ASF GitHub Bot commented on NIFI-3049:
--

GitHub user jtstorck opened a pull request:

https://github.com/apache/nifi/pull/1237

NIFI-3049 Fixes logging issues due to logback and log4j being on the …

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.

…classpath

Removed logback usage from classpath
Added slf4j-log4j12 dependency in nifi-toolkit pom
Added logback-classic exclusion for nifi-properties-loader used by 
nifi-toolkit-encrypt-config
Updated log4j.properties logging pattern and logger config in 
nifi-toolkit-assembly and nifi-toolkit-zookeeper-migrator, filtering zookeeper 
messages below WARN
Removed logback.groovy since log4j is the single logging implementation
Updated ZooKeeperMigratorMain command line output to match standards 
established by other tools in nifi-toolkit

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jtstorck/nifi NIFI-3049

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1237.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1237


commit 2cc5ce2c023fcdf770e05a33bbcfd78c682ff680
Author: Jeff Storck 
Date:   2016-11-16T22:41:12Z

NIFI-3049 Fixes logging issues due to logback and log4j being on the 
classpath
Removed logback usage from classpath
Added slf4j-log4j12 dependency in nifi-toolkit pom
Added logback-classic exclusion for nifi-properties-loader used by 
nifi-toolkit-encrypt-config
Updated log4j.properties logging pattern and logger config in 
nifi-toolkit-assembly and nifi-toolkit-zookeeper-migrator, filtering zookeeper 
messages below WARN
Removed logback.groovy since log4j is the single logging implementation
Updated ZooKeeperMigratorMain command line output to match standards 
established by other tools in nifi-toolkit




> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and 
> does not provide the correct command line usage output
> -
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6, which has 
> a hard dependency on log4j; this conflicts with nifi-properties-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, which implementation SLF4J binds to is considered random.
> The command line output should be modified to adhere to the standard 
> established by the other tools in nifi-toolkit.
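
The fix removes the second SLF4J binding (logback-classic) from the toolkit classpath. A sketch of the kind of Maven exclusion involved — the coordinates below are illustrative, not copied from the PR:

```xml
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-properties-loader</artifactId>
    <exclusions>
        <!-- keep slf4j-log4j12 as the single SLF4J binding for the toolkit -->
        <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```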

[GitHub] nifi pull request #1237: NIFI-3049 Fixes logging issues due to logback and l...

2016-11-16 Thread jtstorck
GitHub user jtstorck opened a pull request:

https://github.com/apache/nifi/pull/1237

NIFI-3049 Fixes logging issues due to logback and log4j being on the …

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.

…classpath

Removed logback usage from classpath
Added slf4j-log4j12 dependency in nifi-toolkit pom
Added logback-classic exclusion for nifi-properties-loader used by 
nifi-toolkit-encrypt-config
Updated log4j.properties logging pattern and logger config in 
nifi-toolkit-assembly and nifi-toolkit-zookeeper-migrator, filtering zookeeper 
messages below WARN
Removed logback.groovy since log4j is the single logging implementation
Updated ZooKeeperMigratorMain command line output to match standards 
established by other tools in nifi-toolkit

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jtstorck/nifi NIFI-3049

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1237.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1237


commit 2cc5ce2c023fcdf770e05a33bbcfd78c682ff680
Author: Jeff Storck 
Date:   2016-11-16T22:41:12Z

NIFI-3049 Fixes logging issues due to logback and log4j being on the 
classpath
Removed logback usage from classpath
Added slf4j-log4j12 dependency in nifi-toolkit pom
Added logback-classic exclusion for nifi-properties-loader used by 
nifi-toolkit-encrypt-config
Updated log4j.properties logging pattern and logger config in 
nifi-toolkit-assembly and nifi-toolkit-zookeeper-migrator, filtering zookeeper 
messages below WARN
Removed logback.groovy since log4j is the single logging implementation
Updated ZooKeeperMigratorMain command line output to match standards 
established by other tools in nifi-toolkit




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi pull request #56: MINIFI-132 Adjusting default log configuration

2016-11-16 Thread apiri
GitHub user apiri opened a pull request:

https://github.com/apache/nifi-minifi/pull/56

MINIFI-132 Adjusting default log configuration

MINIFI-132 Adjusting default log configuration to reduce overall footprint 
of logs and to enable compression by default.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apiri/nifi-minifi MINIFI-132

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi/pull/56.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #56


commit 6fd356b6e91f9d3c9a05ade514c3cb67beb0870c
Author: Aldrin Piri 
Date:   2016-11-16T22:17:16Z

MINIFI-132 Adjusting default log configuration to reduce overall footprint 
of logs and to enable compression by default.






[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671831#comment-15671831
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88337395
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/AbstractElasticsearch5TransportClientProcessor.java
 ---
@@ -0,0 +1,289 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.nifi.util.StringUtils;
+import org.elasticsearch.client.Client;
+import org.elasticsearch.client.transport.TransportClient;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.transport.InetSocketTransportAddress;
+import org.elasticsearch.transport.client.PreBuiltTransportClient;
+
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.net.InetSocketAddress;
+import java.net.MalformedURLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicReference;
+
+
+abstract class AbstractElasticsearch5TransportClientProcessor extends 
AbstractElasticsearch5Processor {
+
+/**
+ * This validator ensures the Elasticsearch hosts property is a valid 
list of hostname:port entries
+ */
+    private static final Validator HOSTNAME_PORT_VALIDATOR = (subject, input, context) -> {
+        final List<String> esList = Arrays.asList(input.split(","));
+        for (String hostnamePort : esList) {
+            String[] addresses = hostnamePort.split(":");
+            // Protect against invalid input like http://127.0.0.1:9300 (URL scheme should not be there)
+            if (addresses.length != 2) {
+                return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                        "Must be in hostname:port form (no scheme such as http://)").valid(false).build();
+            }
+        }
+        return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                "Valid cluster definition").valid(true).build();
+    };
+
+protected static final PropertyDescriptor CLUSTER_NAME = new 
PropertyDescriptor.Builder()
+.name("el5-cluster-name")
+.displayName("Cluster Name")
+.description("Name of the ES cluster (for example, 
elasticsearch_brew). Defaults to 'elasticsearch'")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.defaultValue("elasticsearch")
+.build();
+
+protected static final PropertyDescriptor HOSTS = new 
PropertyDescriptor.Builder()
+.name("el5-hosts")
+.displayName("ElasticSearch Hosts")
+.description("ElasticSearch Hosts, which should be comma 
separated and colon for hostname/port "
++ "host1:port,host2:port,  For example 
testcluster:9300. This processor uses the Transport Client to "
++ "connect to hosts. The default transport client port 
is 9300.")
+.required(true)
+
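
The hostname:port validation quoted in the diff above can be exercised standalone. A minimal sketch in plain Java — the class and method names here are mine, not from the PR:

```java
import java.util.Arrays;
import java.util.List;

public class HostnamePortCheck {

    // Returns true when every comma-separated entry is in hostname:port form.
    // An entry carrying a URL scheme such as http://127.0.0.1:9300 splits into
    // three ':'-separated parts and is therefore rejected.
    static boolean isValidHostList(String input) {
        final List<String> entries = Arrays.asList(input.split(","));
        for (String hostnamePort : entries) {
            String[] addresses = hostnamePort.split(":");
            if (addresses.length != 2) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidHostList("testcluster:9300"));       // true
        System.out.println(isValidHostList("host1:9300,host2:9300"));  // true
        System.out.println(isValidHostList("http://127.0.0.1:9300"));  // false
    }
}
```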

[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671835#comment-15671835
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88329420
  
--- Diff: nifi-nar-bundles/pom.xml ---
@@ -54,6 +54,7 @@
         <module>nifi-html-bundle</module>
         <module>nifi-scripting-bundle</module>
         <module>nifi-elasticsearch-bundle</module>
+        <module>nifi-elasticsearch-5-bundle</module>
--- End diff --

Could this be changed to match how the Kafka bundle does version hierarchy? 


> Support Elasticsearch 5.0 for Put/FetchElasticsearch processors
> ---
>
> Key: NIFI-3011
> URL: https://issues.apache.org/jira/browse/NIFI-3011
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> Now that Elastic has released a new major version (5.0) of Elasticsearch, the 
> Put/FetchElasticsearch processors would need to be upgraded (or duplicated) 
> as the major version of the transport client needs to match the major version 
> of the Elasticsearch cluster.
> If upgrade is selected, then Put/FetchES will no longer work with 
> Elasticsearch 2.x clusters, so in that case users would want to switch to the 
> Http versions of those processors. However this might not be desirable (due 
> to performance concerns with the HTTP API vs the transport API), so care must 
> be taken when deciding whether to upgrade the existing processors or create 
> new ones.
> Creating new versions of these processors (to use the 5.0 transport client) 
> will also take some consideration, as it is unlikely the different versions 
> can coexist in the same NAR due to classloading issues (multiple versions of 
> JARs containing the same class names, e.g.). It may be necessary to create an 
> "elasticsearch-5.0" version of the NAR, containing only the new versions of 
> these processors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671829#comment-15671829
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88329905
  
--- Diff: nifi-nar-bundles/nifi-elasticsearch-5-bundle/pom.xml ---
@@ -0,0 +1,44 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-nar-bundles</artifactId>
+        <version>1.1.0-SNAPSHOT</version>
+    </parent>
+
+    <groupId>org.apache.nifi</groupId>
+    <artifactId>nifi-elasticsearch-5-bundle</artifactId>
+    <packaging>pom</packaging>
+
+    <properties>
+        <lucene.version>6.3.0</lucene.version>
--- End diff --

nit-pick, "lucene.version" is defined here but "slf4jversion" and 
"es.version" are defined in the processor pom. Is there a reason for that or 
can it be moved to the processor pom to match?


> Support Elasticsearch 5.0 for Put/FetchElasticsearch processors
> ---
>
> Key: NIFI-3011
> URL: https://issues.apache.org/jira/browse/NIFI-3011
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> Now that Elastic has released a new major version (5.0) of Elasticsearch, the 
> Put/FetchElasticsearch processors would need to be upgraded (or duplicated) 
> as the major version of the transport client needs to match the major version 
> of the Elasticsearch cluster.
> If upgrade is selected, then Put/FetchES will no longer work with 
> Elasticsearch 2.x clusters, so in that case users would want to switch to the 
> Http versions of those processors. However this might not be desirable (due 
> to performance concerns with the HTTP API vs the transport API), so care must 
> be taken when deciding whether to upgrade the existing processors or create 
> new ones.
> Creating new versions of these processors (to use the 5.0 transport client) 
> will also take some consideration, as it is unlikely the different versions 
> can coexist in the same NAR due to classloading issues (multiple versions of 
> JARs containing the same class names, e.g.). It may be necessary to create an 
> "elasticsearch-5.0" version of the NAR, containing only the new versions of 
> these processors.





[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671833#comment-15671833
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88337612
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/AbstractElasticsearch5Processor.java
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.nifi.util.StringUtils;
+
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * A base class for all Elasticsearch processors
+ */
+public abstract class AbstractElasticsearch5Processor extends 
AbstractProcessor {
+
+public static final PropertyDescriptor PROP_SSL_CONTEXT_SERVICE = new 
PropertyDescriptor.Builder()
+.name("el5-ssl-context-service")
+.displayName("SSL Context Service")
+.description("The SSL Context Service used to provide client 
certificate information for TLS/SSL "
++ "connections. This service only applies if the 
Elasticsearch endpoint(s) have been secured with TLS/SSL.")
+.required(false)
+.identifiesControllerService(SSLContextService.class)
+.build();
+
+protected static final PropertyDescriptor CHARSET = new 
PropertyDescriptor.Builder()
+.name("el5-charset")
+.displayName("Character Set")
+.description("Specifies the character set of the document 
data.")
+.required(true)
+.defaultValue("UTF-8")
+.addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor USERNAME = new 
PropertyDescriptor.Builder()
+.name("el5-username")
+.displayName("Username")
+.description("Username to access the Elasticsearch cluster")
+.required(false)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("el5-password")
+.displayName("Password")
+.description("Password to access the Elasticsearch cluster")
+.required(false)
+.sensitive(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected abstract void createElasticsearchClient(ProcessContext 
context) throws ProcessException;
+
+    @Override
+    protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+        Set<ValidationResult> results = new HashSet<>();
+
+        // Ensure that if username or password is set, then the other is too
+        Map<PropertyDescriptor, String> propertyMap = validationContext.getProperties();
+        if (StringUtils.isEmpty(propertyMap.get(USERNAME)) != StringUtils.isEmpty(propertyMap.get(PASSWORD))) {
--- End diff --

In addition to this, should do a check that Xpack is at least set if the 
security properties are set (since they won't/can't be used if xpack isn't set)
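
The pairing rule under review (username and password must be set together) reduces to an emptiness comparison. A standalone sketch under assumed names, mirroring the isEmpty(username) != isEmpty(password) test in the diff:

```java
public class CredentialPairCheck {

    static boolean isEmpty(String s) {
        return s == null || s.isEmpty();
    }

    // Valid when username and password are either both set or both unset.
    static boolean credentialsConsistent(String username, String password) {
        return isEmpty(username) == isEmpty(password);
    }

    public static void main(String[] args) {
        System.out.println(credentialsConsistent(null, null));       // true: neither set
        System.out.println(credentialsConsistent("elastic", "pw"));  // true: both set
        System.out.println(credentialsConsistent("elastic", null));  // false: one without the other
    }
}
```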


> Support Elasticsearch 5.0 for Put/FetchElasticsearch 

[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671834#comment-15671834
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88341976
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/pom.xml
 ---
@@ -0,0 +1,111 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <artifactId>nifi-elasticsearch-5-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.1.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-elasticsearch-5-processors</artifactId>
+    <packaging>jar</packaging>
+
+    <properties>
+        <slf4jversion>2.7</slf4jversion>
+        <es.version>5.0.0</es.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-properties</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-processor-utils</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.lucene</groupId>
+            <artifactId>lucene-core</artifactId>
+            <version>${lucene.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.elasticsearch.client</groupId>
+            <artifactId>transport</artifactId>
+            <version>${es.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.squareup.okhttp3</groupId>
+            <artifactId>okhttp</artifactId>
+            <version>3.3.1</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-ssl-context-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>commons-io</groupId>
+            <artifactId>commons-io</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.codehaus.jackson</groupId>
+            <artifactId>jackson-mapper-asl</artifactId>
--- End diff --

Same comment as lucene, can't find where this is used.


> Support Elasticsearch 5.0 for Put/FetchElasticsearch processors
> ---
>
> Key: NIFI-3011
> URL: https://issues.apache.org/jira/browse/NIFI-3011
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> Now that Elastic has released a new major version (5.0) of Elasticsearch, the 
> Put/FetchElasticsearch processors would need to be upgraded (or duplicated) 
> as the major version of the transport client needs to match the major version 
> of the Elasticsearch cluster.
> If upgrade is selected, then Put/FetchES will no longer work with 
> Elasticsearch 2.x clusters, so in that case users would want to switch to the 
> Http versions of those processors. However this might not be desirable (due 
> to performance concerns with the HTTP API vs the transport API), so care must 
> be taken when deciding whether to upgrade the existing processors or create 
> new ones.
> Creating new versions of these processors (to use the 5.0 transport client) 
> will also take some consideration, as it is unlikely the different versions 
> can coexist in the same NAR due to classloading issues (multiple versions of 
> JARs containing the same class names, e.g.). It may be necessary to create an 
> "elasticsearch-5.0" version of the NAR, containing only the new versions of 
> these processors.





[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671832#comment-15671832
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88342854
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearch5.java
 ---
@@ -0,0 +1,266 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.elasticsearch.ElasticsearchTimeoutException;
+import org.elasticsearch.action.bulk.BulkItemResponse;
+import org.elasticsearch.action.bulk.BulkRequestBuilder;
+import org.elasticsearch.action.bulk.BulkResponse;
+
+import org.elasticsearch.client.transport.NoNodeAvailableException;
+import org.elasticsearch.node.NodeClosedException;
+import org.elasticsearch.transport.ReceiveTimeoutTransportException;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5","insert", "update", "write", 
"put"})
+@CapabilityDescription("Writes the contents of a FlowFile to 
Elasticsearch, using the specified parameters such as "
++ "the index to insert into and the type of the document. If the 
cluster has been configured for authorization "
++ "and/or secure transport (SSL/TLS), and the X-Pack plugin is 
available, secure connections can be made. This processor "
++ "supports Elasticsearch 5.x clusters.")
+public class PutElasticsearch5 extends 
AbstractElasticsearch5TransportClientProcessor {
+
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("All FlowFiles that are written to Elasticsearch 
are routed to this relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("All FlowFiles that cannot be written to 
Elasticsearch are routed to this relationship").build();
+
+static final Relationship REL_RETRY = new 
Relationship.Builder().name("retry")
+.description("A FlowFile is routed to this relationship if the 
database cannot be updated but attempting the operation again may succeed")
+.build();
+
+public static final PropertyDescriptor ID_ATTRIBUTE = new 
PropertyDescriptor.Builder()
+.name("el5-put-id-attribute")
+.displayName("Identifier Attribute")
+   

[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88329420
  
--- Diff: nifi-nar-bundles/pom.xml ---
@@ -54,6 +54,7 @@
         <module>nifi-html-bundle</module>
         <module>nifi-scripting-bundle</module>
         <module>nifi-elasticsearch-bundle</module>
+        <module>nifi-elasticsearch-5-bundle</module>
--- End diff --

Could this be changed to match how the Kafka bundle does version hierarchy? 




[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671828#comment-15671828
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88330726
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/AbstractElasticsearch5TransportClientProcessor.java
 ---
@@ -0,0 +1,289 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.nifi.util.StringUtils;
+import org.elasticsearch.client.Client;
+import org.elasticsearch.client.transport.TransportClient;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.transport.InetSocketTransportAddress;
+import org.elasticsearch.transport.client.PreBuiltTransportClient;
+
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.net.InetSocketAddress;
+import java.net.MalformedURLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicReference;
+
+
+abstract class AbstractElasticsearch5TransportClientProcessor extends 
AbstractElasticsearch5Processor {
+
+/**
+ * This validator ensures the Elasticsearch hosts property is a valid 
list of hostname:port entries
+ */
+    private static final Validator HOSTNAME_PORT_VALIDATOR = (subject, input, context) -> {
+        final List<String> esList = Arrays.asList(input.split(","));
+        for (String hostnamePort : esList) {
+            String[] addresses = hostnamePort.split(":");
+            // Protect against invalid input like http://127.0.0.1:9300 (URL scheme should not be there)
+            if (addresses.length != 2) {
+                return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                        "Must be in hostname:port form (no scheme such as http://)").valid(false).build();
+            }
+        }
+        return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                "Valid cluster definition").valid(true).build();
+    };
+
+protected static final PropertyDescriptor CLUSTER_NAME = new 
PropertyDescriptor.Builder()
+.name("el5-cluster-name")
+.displayName("Cluster Name")
+.description("Name of the ES cluster (for example, 
elasticsearch_brew). Defaults to 'elasticsearch'")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.defaultValue("elasticsearch")
+.build();
+
+protected static final PropertyDescriptor HOSTS = new 
PropertyDescriptor.Builder()
+.name("el5-hosts")
+.displayName("ElasticSearch Hosts")
+.description("ElasticSearch Hosts, which should be comma 
separated and colon for hostname/port "
++ "host1:port,host2:port,  For example 
testcluster:9300. This processor uses the Transport Client to "
++ "connect to hosts. The default transport client port 
is 9300.")
+.required(true)
+

[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671830#comment-15671830
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88341802
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/pom.xml
 ---
@@ -0,0 +1,111 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <artifactId>nifi-elasticsearch-5-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.1.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-elasticsearch-5-processors</artifactId>
+    <packaging>jar</packaging>
+    <properties>
+        <!-- property names reconstructed from the ${lucene.version}/${es.version} references below -->
+        <lucene.version>2.7</lucene.version>
+        <es.version>5.0.0</es.version>
+    </properties>
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-properties</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-processor-utils</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.lucene</groupId>
+            <artifactId>lucene-core</artifactId>
+            <version>${lucene.version}</version>
--- End diff --

Where is lucene used in this module? I did a search and only came up with this pom


> Support Elasticsearch 5.0 for Put/FetchElasticsearch processors
> ---
>
> Key: NIFI-3011
> URL: https://issues.apache.org/jira/browse/NIFI-3011
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> Now that Elastic has released a new major version (5.0) of Elasticsearch, the 
> Put/FetchElasticsearch processors would need to be upgraded (or duplicated) 
> as the major version of the transport client needs to match the major version 
> of the Elasticsearch cluster.
> If upgrade is selected, then Put/FetchES will no longer work with 
> Elasticsearch 2.x clusters, so in that case users would want to switch to the 
> Http versions of those processors. However this might not be desirable (due 
> to performance concerns with the HTTP API vs the transport API), so care must 
> be taken when deciding whether to upgrade the existing processors or create 
> new ones.
> Creating new versions of these processors (to use the 5.0 transport client) 
> will also take some consideration, as it is unlikely the different versions 
> can coexist in the same NAR due to classloading issues (multiple versions of 
> JARs containing the same class names, e.g.). It may be necessary to create an 
> "elasticsearch-5.0" version of the NAR, containing only the new versions of 
> these processors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671836#comment-15671836
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88341213
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/pom.xml
 ---
@@ -0,0 +1,111 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <artifactId>nifi-elasticsearch-5-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.1.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-elasticsearch-5-processors</artifactId>
+    <packaging>jar</packaging>
+    <properties>
+        <!-- property names reconstructed from the ${lucene.version}/${es.version} references below -->
+        <lucene.version>2.7</lucene.version>
+        <es.version>5.0.0</es.version>
+    </properties>
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-properties</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-processor-utils</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.lucene</groupId>
+            <artifactId>lucene-core</artifactId>
+            <version>${lucene.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.elasticsearch.client</groupId>
+            <artifactId>transport</artifactId>
+            <version>${es.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.squareup.okhttp3</groupId>
--- End diff --

I don't think this dep is needed


> Support Elasticsearch 5.0 for Put/FetchElasticsearch processors
> ---
>
> Key: NIFI-3011
> URL: https://issues.apache.org/jira/browse/NIFI-3011
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> Now that Elastic has released a new major version (5.0) of Elasticsearch, the 
> Put/FetchElasticsearch processors would need to be upgraded (or duplicated) 
> as the major version of the transport client needs to match the major version 
> of the Elasticsearch cluster.
> If upgrade is selected, then Put/FetchES will no longer work with 
> Elasticsearch 2.x clusters, so in that case users would want to switch to the 
> Http versions of those processors. However this might not be desirable (due 
> to performance concerns with the HTTP API vs the transport API), so care must 
> be taken when deciding whether to upgrade the existing processors or create 
> new ones.
> Creating new versions of these processors (to use the 5.0 transport client) 
> will also take some consideration, as it is unlikely the different versions 
> can coexist in the same NAR due to classloading issues (multiple versions of 
> JARs containing the same class names, e.g.). It may be necessary to create an 
> "elasticsearch-5.0" version of the NAR, containing only the new versions of 
> these processors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88337612
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/AbstractElasticsearch5Processor.java
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.nifi.util.StringUtils;
+
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * A base class for all Elasticsearch processors
+ */
+public abstract class AbstractElasticsearch5Processor extends AbstractProcessor {
+
+    public static final PropertyDescriptor PROP_SSL_CONTEXT_SERVICE = new PropertyDescriptor.Builder()
+            .name("el5-ssl-context-service")
+            .displayName("SSL Context Service")
+            .description("The SSL Context Service used to provide client certificate information for TLS/SSL "
+                    + "connections. This service only applies if the Elasticsearch endpoint(s) have been secured with TLS/SSL.")
+            .required(false)
+            .identifiesControllerService(SSLContextService.class)
+            .build();
+
+    protected static final PropertyDescriptor CHARSET = new PropertyDescriptor.Builder()
+            .name("el5-charset")
+            .displayName("Character Set")
+            .description("Specifies the character set of the document data.")
+            .required(true)
+            .defaultValue("UTF-8")
+            .addValidator(StandardValidators.CHARACTER_SET_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor USERNAME = new PropertyDescriptor.Builder()
+            .name("el5-username")
+            .displayName("Username")
+            .description("Username to access the Elasticsearch cluster")
+            .required(false)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
+            .name("el5-password")
+            .displayName("Password")
+            .description("Password to access the Elasticsearch cluster")
+            .required(false)
+            .sensitive(true)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();
+
+    protected abstract void createElasticsearchClient(ProcessContext context) throws ProcessException;
+
+    @Override
+    protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+        Set<ValidationResult> results = new HashSet<>();
+
+        // Ensure that if username or password is set, then the other is too
+        Map<PropertyDescriptor, String> propertyMap = validationContext.getProperties();
+        if (StringUtils.isEmpty(propertyMap.get(USERNAME)) != StringUtils.isEmpty(propertyMap.get(PASSWORD))) {
--- End diff --

In addition to this, it should check that X-Pack is at least set if the security properties are set (since they won't/can't be used if X-Pack isn't set)
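The cross-field validation being discussed (username and password must be set together, and, per the review comment above, security properties only make sense when an X-Pack location is configured) could look roughly like the following. This is an illustrative standalone sketch, not the NiFi `customValidate` API; the method and parameter names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the cross-field checks discussed above.
// Names are hypothetical; the real code would build ValidationResult objects.
public class SecurityPropsCheck {

    static boolean isEmpty(String s) {
        return s == null || s.isEmpty();
    }

    public static List<String> validate(String username, String password, String xpackLocation) {
        List<String> problems = new ArrayList<>();
        // Username and password must be set together (XOR of emptiness is an error)
        if (isEmpty(username) != isEmpty(password)) {
            problems.add("If username or password is set, the other must be set as well");
        }
        // Security properties can't be used without X-Pack being configured
        if ((!isEmpty(username) || !isEmpty(password)) && isEmpty(xpackLocation)) {
            problems.add("Username/password require the X-Pack location to be set");
        }
        return problems;
    }
}
```

For example, `validate("elastic", "changeme", null)` flags the missing X-Pack location, while `validate("elastic", "changeme", "/opt/xpack")` passes.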


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a 

[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671827#comment-15671827
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88333819
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/AbstractElasticsearch5TransportClientProcessor.java
 ---
@@ -0,0 +1,289 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.nifi.util.StringUtils;
+import org.elasticsearch.client.Client;
+import org.elasticsearch.client.transport.TransportClient;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.transport.InetSocketTransportAddress;
+import org.elasticsearch.transport.client.PreBuiltTransportClient;
+
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.net.InetSocketAddress;
+import java.net.MalformedURLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicReference;
+
+
+abstract class AbstractElasticsearch5TransportClientProcessor extends AbstractElasticsearch5Processor {
+
+    /**
+     * This validator ensures the Elasticsearch hosts property is a valid list of hostname:port entries
+     */
+    private static final Validator HOSTNAME_PORT_VALIDATOR = (subject, input, context) -> {
+        final List<String> esList = Arrays.asList(input.split(","));
+        for (String hostnamePort : esList) {
+            String[] addresses = hostnamePort.split(":");
+            // Protect against invalid input like http://127.0.0.1:9300 (URL scheme should not be there)
+            if (addresses.length != 2) {
+                return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                        "Must be in hostname:port form (no scheme such as http://)").valid(false).build();
+            }
+        }
+        return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                "Valid cluster definition").valid(true).build();
+    };
+
+    protected static final PropertyDescriptor CLUSTER_NAME = new PropertyDescriptor.Builder()
+            .name("el5-cluster-name")
+            .displayName("Cluster Name")
+            .description("Name of the ES cluster (for example, elasticsearch_brew). Defaults to 'elasticsearch'")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .defaultValue("elasticsearch")
+            .build();
+
+    protected static final PropertyDescriptor HOSTS = new PropertyDescriptor.Builder()
+            .name("el5-hosts")
+            .displayName("ElasticSearch Hosts")
+            .description("ElasticSearch Hosts, which should be comma separated and colon for hostname/port "
+                    + "host1:port,host2:port,... For example testcluster:9300. This processor uses the Transport Client to "
+                    + "connect to hosts. The default transport client port is 9300.")
+            .required(true)
+

[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88340765
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearch5.java
 ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.elasticsearch.ElasticsearchTimeoutException;
+import org.elasticsearch.action.get.GetRequestBuilder;
+import org.elasticsearch.action.get.GetResponse;
+import org.elasticsearch.client.transport.NoNodeAvailableException;
+import org.elasticsearch.node.NodeClosedException;
+import org.elasticsearch.transport.ReceiveTimeoutTransportException;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "fetch", "read", "get"})
+@CapabilityDescription("Retrieves a document from Elasticsearch using the specified connection properties and the "
+        + "identifier of the document to retrieve. If the cluster has been configured for authorization and/or secure "
+        + "transport (SSL/TLS), and the X-Pack plugin is available, secure connections can be made. This processor "
+        + "supports Elasticsearch 5.x clusters.")
+@WritesAttributes({
+        @WritesAttribute(attribute = "filename", description = "The filename attribute is set to the document identifier"),
+        @WritesAttribute(attribute = "es.index", description = "The Elasticsearch index containing the document"),
+        @WritesAttribute(attribute = "es.type", description = "The Elasticsearch document type")
+})
+public class FetchElasticsearch5 extends AbstractElasticsearch5TransportClientProcessor {
+
+    public static final Relationship REL_SUCCESS = new Relationship.Builder().name("success")
+            .description("All FlowFiles that are read from Elasticsearch are routed to this relationship").build();
+
+    public static final Relationship REL_FAILURE = new Relationship.Builder().name("failure")
+            .description("All FlowFiles that cannot be read from Elasticsearch are routed to this relationship").build();
+
+    public static final Relationship REL_RETRY = new Relationship.Builder().name("retry")
+            .description("A FlowFile is routed to this relationship if the document cannot be fetched but attempting the operation again may succeed")
+            .build();
+
+    public static final Relationship REL_NOT_FOUND = new

[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88341802
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/pom.xml
 ---
@@ -0,0 +1,111 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <artifactId>nifi-elasticsearch-5-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.1.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-elasticsearch-5-processors</artifactId>
+    <packaging>jar</packaging>
+    <properties>
+        <!-- property names reconstructed from the ${lucene.version}/${es.version} references below -->
+        <lucene.version>2.7</lucene.version>
+        <es.version>5.0.0</es.version>
+    </properties>
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-properties</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-processor-utils</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.lucene</groupId>
+            <artifactId>lucene-core</artifactId>
+            <version>${lucene.version}</version>
--- End diff --

Where is lucene used in this module? I did a search and only came up with this pom


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88333819
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/AbstractElasticsearch5TransportClientProcessor.java
 ---
@@ -0,0 +1,289 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.nifi.util.StringUtils;
+import org.elasticsearch.client.Client;
+import org.elasticsearch.client.transport.TransportClient;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.transport.InetSocketTransportAddress;
+import org.elasticsearch.transport.client.PreBuiltTransportClient;
+
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.net.InetSocketAddress;
+import java.net.MalformedURLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicReference;
+
+
+abstract class AbstractElasticsearch5TransportClientProcessor extends AbstractElasticsearch5Processor {
+
+    /**
+     * This validator ensures the Elasticsearch hosts property is a valid list of hostname:port entries
+     */
+    private static final Validator HOSTNAME_PORT_VALIDATOR = (subject, input, context) -> {
+        final List<String> esList = Arrays.asList(input.split(","));
+        for (String hostnamePort : esList) {
+            String[] addresses = hostnamePort.split(":");
+            // Protect against invalid input like http://127.0.0.1:9300 (URL scheme should not be there)
+            if (addresses.length != 2) {
+                return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                        "Must be in hostname:port form (no scheme such as http://)").valid(false).build();
+            }
+        }
+        return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                "Valid cluster definition").valid(true).build();
+    };
+
+    protected static final PropertyDescriptor CLUSTER_NAME = new PropertyDescriptor.Builder()
+            .name("el5-cluster-name")
+            .displayName("Cluster Name")
+            .description("Name of the ES cluster (for example, elasticsearch_brew). Defaults to 'elasticsearch'")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .defaultValue("elasticsearch")
+            .build();
+
+    protected static final PropertyDescriptor HOSTS = new PropertyDescriptor.Builder()
+            .name("el5-hosts")
+            .displayName("ElasticSearch Hosts")
+            .description("ElasticSearch Hosts, which should be comma separated and colon for hostname/port "
+                    + "host1:port,host2:port,... For example testcluster:9300. This processor uses the Transport Client to "
+                    + "connect to hosts. The default transport client port is 9300.")
+            .required(true)
+            .expressionLanguageSupported(false)
+            .addValidator(HOSTNAME_PORT_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor PROP_XPACK_LOCATION = new PropertyDescriptor.Builder()
+

[jira] [Commented] (NIFI-3011) Support Elasticsearch 5.0 for Put/FetchElasticsearch processors

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671837#comment-15671837
 ] 

ASF GitHub Bot commented on NIFI-3011:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88340765
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/FetchElasticsearch5.java
 ---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.elasticsearch.ElasticsearchTimeoutException;
+import org.elasticsearch.action.get.GetRequestBuilder;
+import org.elasticsearch.action.get.GetResponse;
+import org.elasticsearch.client.transport.NoNodeAvailableException;
+import org.elasticsearch.node.NodeClosedException;
+import org.elasticsearch.transport.ReceiveTimeoutTransportException;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "fetch", "read", "get"})
+@CapabilityDescription("Retrieves a document from Elasticsearch using the specified connection properties and the "
+        + "identifier of the document to retrieve. If the cluster has been configured for authorization and/or secure "
+        + "transport (SSL/TLS), and the X-Pack plugin is available, secure connections can be made. This processor "
+        + "supports Elasticsearch 5.x clusters.")
+@WritesAttributes({
+        @WritesAttribute(attribute = "filename", description = "The filename attribute is set to the document identifier"),
+        @WritesAttribute(attribute = "es.index", description = "The Elasticsearch index containing the document"),
+        @WritesAttribute(attribute = "es.type", description = "The Elasticsearch document type")
+})
+public class FetchElasticsearch5 extends AbstractElasticsearch5TransportClientProcessor {
+
+    public static final Relationship REL_SUCCESS = new Relationship.Builder().name("success")
+            .description("All FlowFiles that are read from Elasticsearch are routed to this relationship").build();
+
+    public static final Relationship REL_FAILURE = new Relationship.Builder().name("failure")
+            .description("All FlowFiles that cannot be read from Elasticsearch are routed to this relationship").build();
+
+    public static final Relationship REL_RETRY = new Relationship.Builder().name("retry")
+

[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88342854
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearch5.java
 ---
@@ -0,0 +1,266 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.elasticsearch.ElasticsearchTimeoutException;
+import org.elasticsearch.action.bulk.BulkItemResponse;
+import org.elasticsearch.action.bulk.BulkRequestBuilder;
+import org.elasticsearch.action.bulk.BulkResponse;
+
+import org.elasticsearch.client.transport.NoNodeAvailableException;
+import org.elasticsearch.node.NodeClosedException;
+import org.elasticsearch.transport.ReceiveTimeoutTransportException;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Set;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"elasticsearch", "elasticsearch 5", "insert", "update", "write", "put"})
+@CapabilityDescription("Writes the contents of a FlowFile to Elasticsearch, using the specified parameters such as "
+        + "the index to insert into and the type of the document. If the cluster has been configured for authorization "
+        + "and/or secure transport (SSL/TLS), and the X-Pack plugin is available, secure connections can be made. This processor "
+        + "supports Elasticsearch 5.x clusters.")
+public class PutElasticsearch5 extends AbstractElasticsearch5TransportClientProcessor {
+
+    static final Relationship REL_SUCCESS = new Relationship.Builder().name("success")
+            .description("All FlowFiles that are written to Elasticsearch are routed to this relationship").build();
+
+    static final Relationship REL_FAILURE = new Relationship.Builder().name("failure")
+            .description("All FlowFiles that cannot be written to Elasticsearch are routed to this relationship").build();
+
+    static final Relationship REL_RETRY = new Relationship.Builder().name("retry")
+            .description("A FlowFile is routed to this relationship if the database cannot be updated but attempting the operation again may succeed")
+            .build();
+
+    public static final PropertyDescriptor ID_ATTRIBUTE = new PropertyDescriptor.Builder()
+            .name("el5-put-id-attribute")
+            .displayName("Identifier Attribute")
+            .description("The name of the attribute containing the identifier for each FlowFile")
+            .required(true)
+            .expressionLanguageSupported(false)
+

[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88337395
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/AbstractElasticsearch5TransportClientProcessor.java
 ---
@@ -0,0 +1,289 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.nifi.util.StringUtils;
+import org.elasticsearch.client.Client;
+import org.elasticsearch.client.transport.TransportClient;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.transport.InetSocketTransportAddress;
+import org.elasticsearch.transport.client.PreBuiltTransportClient;
+
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.net.InetSocketAddress;
+import java.net.MalformedURLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicReference;
+
+
+abstract class AbstractElasticsearch5TransportClientProcessor extends AbstractElasticsearch5Processor {
+
+    /**
+     * This validator ensures the Elasticsearch hosts property is a valid list of hostname:port entries
+     */
+    private static final Validator HOSTNAME_PORT_VALIDATOR = (subject, input, context) -> {
+        final List<String> esList = Arrays.asList(input.split(","));
+        for (String hostnamePort : esList) {
+            String[] addresses = hostnamePort.split(":");
+            // Protect against invalid input like http://127.0.0.1:9300 (URL scheme should not be there)
+            if (addresses.length != 2) {
+                return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                        "Must be in hostname:port form (no scheme such as http://)").valid(false).build();
+            }
+        }
+        return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                "Valid cluster definition").valid(true).build();
+    };
+
+    protected static final PropertyDescriptor CLUSTER_NAME = new PropertyDescriptor.Builder()
+            .name("el5-cluster-name")
+            .displayName("Cluster Name")
+            .description("Name of the ES cluster (for example, elasticsearch_brew). Defaults to 'elasticsearch'")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .defaultValue("elasticsearch")
+            .build();
+
+    protected static final PropertyDescriptor HOSTS = new PropertyDescriptor.Builder()
+            .name("el5-hosts")
+            .displayName("ElasticSearch Hosts")
+            .description("ElasticSearch Hosts, which should be comma separated and colon for hostname/port "
+                    + "host1:port,host2:port,  For example testcluster:9300. This processor uses the Transport Client to "
+                    + "connect to hosts. The default transport client port is 9300.")
+            .required(true)
+            .expressionLanguageSupported(false)
+            .addValidator(HOSTNAME_PORT_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor PROP_XPACK_LOCATION = new PropertyDescriptor.Builder()
+
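The hostname:port validation rule quoted in the diff above can be exercised in isolation. The sketch below mirrors only the split-and-count logic of the `HOSTNAME_PORT_VALIDATOR` lambda; the class and method names here are illustrative, not part of the NiFi codebase:

```java
import java.util.Arrays;
import java.util.List;

// Standalone sketch of the hostname:port rule from HOSTNAME_PORT_VALIDATOR.
public class HostnamePortCheck {

    // Returns true only if every comma-separated entry is exactly
    // "hostname:port" with no URL scheme.
    static boolean isValidHostList(String input) {
        List<String> entries = Arrays.asList(input.split(","));
        for (String entry : entries) {
            // "http://host:9300" splits on ':' into 3 parts and is rejected.
            if (entry.split(":").length != 2) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidHostList("testcluster:9300"));       // true
        System.out.println(isValidHostList("host1:9300,host2:9300"));  // true
        System.out.println(isValidHostList("http://127.0.0.1:9300"));  // false
    }
}
```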

[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88341213
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/pom.xml
 ---
@@ -0,0 +1,111 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <artifactId>nifi-elasticsearch-5-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.1.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-elasticsearch-5-processors</artifactId>
+    <packaging>jar</packaging>
+
+
+2.7
+5.0.0
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-properties</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-processor-utils</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.lucene</groupId>
+            <artifactId>lucene-core</artifactId>
+            <version>${lucene.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.elasticsearch.client</groupId>
+            <artifactId>transport</artifactId>
+            <version>${es.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.squareup.okhttp3</groupId>
--- End diff --

I don't think this dep is needed
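A quick way to double-check whether a declared dependency is actually referenced is Maven's built-in dependency analyzer; for example, run from the repository root (the module path is taken from the diff above):

```shell
# Reports "Unused declared dependencies" (declared in the pom but never
# referenced from compiled classes) and "Used undeclared dependencies".
# Note: the analysis is bytecode-based, so runtime-only or reflectively
# loaded dependencies can show up as false positives.
mvn dependency:analyze -pl nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors
```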


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88330726
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/src/main/java/org/apache/nifi/processors/elasticsearch/AbstractElasticsearch5TransportClientProcessor.java
 ---
@@ -0,0 +1,289 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.nifi.util.StringUtils;
+import org.elasticsearch.client.Client;
+import org.elasticsearch.client.transport.TransportClient;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.transport.InetSocketTransportAddress;
+import org.elasticsearch.transport.client.PreBuiltTransportClient;
+
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.net.InetSocketAddress;
+import java.net.MalformedURLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicReference;
+
+
+abstract class AbstractElasticsearch5TransportClientProcessor extends AbstractElasticsearch5Processor {
+
+    /**
+     * This validator ensures the Elasticsearch hosts property is a valid list of hostname:port entries
+     */
+    private static final Validator HOSTNAME_PORT_VALIDATOR = (subject, input, context) -> {
+        final List<String> esList = Arrays.asList(input.split(","));
+        for (String hostnamePort : esList) {
+            String[] addresses = hostnamePort.split(":");
+            // Protect against invalid input like http://127.0.0.1:9300 (URL scheme should not be there)
+            if (addresses.length != 2) {
+                return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                        "Must be in hostname:port form (no scheme such as http://)").valid(false).build();
+            }
+        }
+        return new ValidationResult.Builder().subject(subject).input(input).explanation(
+                "Valid cluster definition").valid(true).build();
+    };
+
+    protected static final PropertyDescriptor CLUSTER_NAME = new PropertyDescriptor.Builder()
+            .name("el5-cluster-name")
+            .displayName("Cluster Name")
+            .description("Name of the ES cluster (for example, elasticsearch_brew). Defaults to 'elasticsearch'")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .defaultValue("elasticsearch")
+            .build();
+
+    protected static final PropertyDescriptor HOSTS = new PropertyDescriptor.Builder()
+            .name("el5-hosts")
+            .displayName("ElasticSearch Hosts")
+            .description("ElasticSearch Hosts, which should be comma separated and colon for hostname/port "
+                    + "host1:port,host2:port,  For example testcluster:9300. This processor uses the Transport Client to "
+                    + "connect to hosts. The default transport client port is 9300.")
+            .required(true)
+            .expressionLanguageSupported(false)
+            .addValidator(HOSTNAME_PORT_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor PROP_XPACK_LOCATION = new PropertyDescriptor.Builder()
+

[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88341976
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-5-bundle/nifi-elasticsearch-5-processors/pom.xml
 ---
@@ -0,0 +1,111 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <artifactId>nifi-elasticsearch-5-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.1.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-elasticsearch-5-processors</artifactId>
+    <packaging>jar</packaging>
+
+
+2.7
+5.0.0
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-properties</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-processor-utils</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.lucene</groupId>
+            <artifactId>lucene-core</artifactId>
+            <version>${lucene.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.elasticsearch.client</groupId>
+            <artifactId>transport</artifactId>
+            <version>${es.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.squareup.okhttp3</groupId>
+            <artifactId>okhttp</artifactId>
+            <version>3.3.1</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-ssl-context-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>commons-io</groupId>
+            <artifactId>commons-io</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.codehaus.jackson</groupId>
+            <artifactId>jackson-mapper-asl</artifactId>
--- End diff --

Same comment as lucene, can't find where this is used.




[GitHub] nifi pull request #1233: NIFI-3011: Added Elasticsearch5 processors

2016-11-16 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1233#discussion_r88329905
  
--- Diff: nifi-nar-bundles/nifi-elasticsearch-5-bundle/pom.xml ---
@@ -0,0 +1,44 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-nar-bundles</artifactId>
+        <version>1.1.0-SNAPSHOT</version>
+    </parent>
+
+    <groupId>org.apache.nifi</groupId>
+    <artifactId>nifi-elasticsearch-5-bundle</artifactId>
+    <packaging>pom</packaging>
+
+    <properties>
+        <lucene.version>6.3.0</lucene.version>
--- End diff --

nit-pick, "lucene.version" is defined here but "slf4jversion" and 
"es.version" are defined in the processor pom. Is there a reason for that or 
can it be moved to the processor pom to match?
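One common way to resolve this nit is to declare every shared version once in the bundle (parent) pom's `<properties>` block so that child modules inherit them. A sketch, assuming the property names mentioned in this thread:

```xml
<!-- In nifi-elasticsearch-5-bundle/pom.xml: child modules resolve
     ${es.version} and ${lucene.version} without redeclaring them. -->
<properties>
    <es.version>5.0.0</es.version>
    <lucene.version>6.3.0</lucene.version>
</properties>
```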




[jira] [Updated] (NIFI-2949) Improve UI of Remote Process Group Ports window

2016-11-16 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-2949:
--
   Resolution: Fixed
Fix Version/s: 1.1.0
   Status: Resolved  (was: Patch Available)

> Improve UI of Remote Process Group Ports window
> ---
>
> Key: NIFI-2949
> URL: https://issues.apache.org/jira/browse/NIFI-2949
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.0.0
>Reporter: Andrew Lim
>Assignee: Scott Aslan
>Priority: Minor
> Fix For: 1.1.0
>
> Attachments: RPG_Ports_1.0.png, RPG_port_0.x.png, Screen Shot 
> 2016-11-09 at 3.07.55 PM.png
>
>
> Creating this ticket to capture some issues I am seeing with the RPG Ports 
> dialog window.
> -When there are no ports, there is a lot of empty white space in the window.  
> The "Input ports" and "Output ports" sections appear to be cut-off.  
> Screenshots attached for both 0.x and 1.0.0.
> -It seems unnecessary for the "Input ports" and "Output ports" text to be so 
> small.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-2949) Improve UI of Remote Process Group Ports window

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671812#comment-15671812
 ] 

ASF GitHub Bot commented on NIFI-2949:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/1226
  
Thanks @scottyaslan! This has been merged to master.


> Improve UI of Remote Process Group Ports window
> ---
>
> Key: NIFI-2949
> URL: https://issues.apache.org/jira/browse/NIFI-2949
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.0.0
>Reporter: Andrew Lim
>Assignee: Scott Aslan
>Priority: Minor
> Attachments: RPG_Ports_1.0.png, RPG_port_0.x.png, Screen Shot 
> 2016-11-09 at 3.07.55 PM.png
>
>
> Creating this ticket to capture some issues I am seeing with the RPG Ports 
> dialog window.
> -When there are no ports, there is a lot of empty white space in the window.  
> The "Input ports" and "Output ports" sections appear to be cut-off.  
> Screenshots attached for both 0.x and 1.0.0.
> -It seems unnecessary for the "Input ports" and "Output ports" text to be so 
> small.





[jira] [Commented] (NIFI-2949) Improve UI of Remote Process Group Ports window

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671808#comment-15671808
 ] 

ASF subversion and git services commented on NIFI-2949:
---

Commit 878db823754d647dadb0d5fa16a4b0b7be76bda0 in nifi's branch 
refs/heads/master from [~scottyaslan]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=878db82 ]

[NIFI-2949] update remote process group port styles. This closes #1226


> Improve UI of Remote Process Group Ports window
> ---
>
> Key: NIFI-2949
> URL: https://issues.apache.org/jira/browse/NIFI-2949
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.0.0
>Reporter: Andrew Lim
>Assignee: Scott Aslan
>Priority: Minor
> Attachments: RPG_Ports_1.0.png, RPG_port_0.x.png, Screen Shot 
> 2016-11-09 at 3.07.55 PM.png
>
>
> Creating this ticket to capture some issues I am seeing with the RPG Ports 
> dialog window.
> -When there are no ports, there is a lot of empty white space in the window.  
> The "Input ports" and "Output ports" sections appear to be cut-off.  
> Screenshots attached for both 0.x and 1.0.0.
> -It seems unnecessary for the "Input ports" and "Output ports" text to be so 
> small.





[jira] [Commented] (NIFI-2949) Improve UI of Remote Process Group Ports window

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671811#comment-15671811
 ] 

ASF GitHub Bot commented on NIFI-2949:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1226


> Improve UI of Remote Process Group Ports window
> ---
>
> Key: NIFI-2949
> URL: https://issues.apache.org/jira/browse/NIFI-2949
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.0.0
>Reporter: Andrew Lim
>Assignee: Scott Aslan
>Priority: Minor
> Attachments: RPG_Ports_1.0.png, RPG_port_0.x.png, Screen Shot 
> 2016-11-09 at 3.07.55 PM.png
>
>
> Creating this ticket to capture some issues I am seeing with the RPG Ports 
> dialog window.
> -When there are no ports, there is a lot of empty white space in the window.  
> The "Input ports" and "Output ports" sections appear to be cut-off.  
> Screenshots attached for both 0.x and 1.0.0.
> -It seems unnecessary for the "Input ports" and "Output ports" text to be so 
> small.





[GitHub] nifi pull request #1226: [NIFI-2949] update remote process group port styles

2016-11-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1226




[GitHub] nifi issue #1226: [NIFI-2949] update remote process group port styles

2016-11-16 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/1226
  
Thanks @scottyaslan! This has been merged to master.




[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671765#comment-15671765
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1202
  
@olegz I did push a commit that I believe addresses the main concerns here. 
I did not update some of the stylistic changes proposed.


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to older version, if seen after a rollback, 
> will simply be ignored.





[GitHub] nifi issue #1202: NIFI-2854: Refactor repositories and swap files to use sch...

2016-11-16 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1202
  
@olegz I did push a commit that I believe addresses the main concerns here. 
I did not update some of the stylistic changes proposed.




[jira] [Updated] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and does not provide the correct command line usage output

2016-11-16 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-3049:
--
Description: 
The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has a 
hard dependency on log4j, which conflicts with nifi-property-loader's dependency 
on logback.  When the toolkit runs, SLF4J finds two logging implementations to 
bind to and complains, and according to SLF4J's documentation, it is considered 
random which implementation SLF4J will use.

The command line output should be modified to adhere to the standard 
established by the other tools in the toolkit.

  was:
The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has a 
hard dependency on log4j, which conflicts with nifi-property-loader's dependency 
on logback.  When the toolkit runs, SLF4J finds two logging implementations to 
bind to and complains, and according to SLF4J's documentation, it is considered 
random which implementation SLF4J will use.

The ZooKeeper Migrator Toolkit should also be modified to use ZooKeeper 
client's chroot capability using the connection string (with path appended to 
it) rather than the migrator attempting to implement that with custom logic.

Finally, the command line output should be modified to adhere to the standard 
established by the other tools in the toolkit.


> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and 
> does not provide the correct command line usage output
> -
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-property-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The command line output should be modified to adhere to the standard 
> established by the other tools in the toolkit.





[jira] [Updated] (NIFI-3020) LDAP - Support configurable user identity

2016-11-16 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3020:
--
 Assignee: Matt Gilman
Fix Version/s: 1.1.0
   Status: Patch Available  (was: Open)

> LDAP - Support configurable user identity
> -
>
> Key: NIFI-3020
> URL: https://issues.apache.org/jira/browse/NIFI-3020
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.1.0
>
>
> The current LDAP provider supports a configurable search filter that will 
> allow the user specified login name to be matched against any LDAP entry 
> attribute. We should offer a configuration option that will indicate if we 
> should use the LDAP entry DN or if we should use the login name that was used 
> in the search filter. For instance, this would allow an admin to configure a 
> user to login with their sAMAccountName and subsequently use that name as 
> their user's identity.
> Note: we should default this option to be the user DN in order to ensure 
> backwards compatibility.





[jira] [Commented] (NIFI-3020) LDAP - Support configurable user identity

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671698#comment-15671698
 ] 

ASF GitHub Bot commented on NIFI-3020:
--

GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/1236

LDAP - Configurable strategy to identify users

NIFI-3020:
- Introducing a user identity strategy for identifying users.
- Fixing issue with the referral strategy error message.
- Adding code to shutdown the application when the authorizer or login 
identity provider are not initialized successfully.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-3020

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1236.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1236


commit 252e3bae209bb7d4365dd28dded7e72048e38e41
Author: Matt Gilman 
Date:   2016-11-16T21:29:14Z

NIFI-3020:
- Introducing a strategy for identifying users.
- Fixing issue with the referral strategy error message.
- Adding code to shutdown the application when the authorizer or login 
identity provider are not initialized successfully.




> LDAP - Support configurable user identity
> -
>
> Key: NIFI-3020
> URL: https://issues.apache.org/jira/browse/NIFI-3020
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Gilman
>
> The current LDAP provider supports a configurable search filter that will 
> allow the user specified login name to be matched against any LDAP entry 
> attribute. We should offer a configuration option that will indicate if we 
> should use the LDAP entry DN or if we should use the login name that was used 
> in the search filter. For instance, this would allow an admin to configure a 
> user to login with their sAMAccountName and subsequently use that name as 
> their user's identity.
> Note: we should default this option to be the user DN in order to ensure 
> backwards compatibility.





[GitHub] nifi pull request #1236: LDAP - Configurable strategy to identify users

2016-11-16 Thread mcgilman
GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/1236

LDAP - Configurable strategy to identify users

NIFI-3020:
- Introducing a user identity strategy for identifying users.
- Fixing issue with the referral strategy error message.
- Adding code to shutdown the application when the authorizer or login 
identity provider are not initialized successfully.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-3020

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1236.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1236


commit 252e3bae209bb7d4365dd28dded7e72048e38e41
Author: Matt Gilman 
Date:   2016-11-16T21:29:14Z

NIFI-3020:
- Introducing a strategy for identifying users.
- Fixing issue with the referral strategy error message.
- Adding code to shutdown the application when the authorizer or login 
identity provider are not initialized successfully.






[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671582#comment-15671582
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r88327262
  
--- Diff: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/timebuffer/CountSizeEntityAccess.java
 ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.util.timebuffer;
+
+public class CountSizeEntityAccess implements EntityAccess<TimedCountSize> {
+    @Override
+    public TimedCountSize aggregate(final TimedCountSize oldValue, final TimedCountSize toAdd) {
+        if (oldValue == null && toAdd == null) {
+            return new TimedCountSize(0L, 0L);
+        } else if (oldValue == null) {
+            return toAdd;
+        } else if (toAdd == null) {
+            return oldValue;
+        }
+
+        return new TimedCountSize(oldValue.getCount() + toAdd.getCount(), oldValue.getSize() + toAdd.getSize());
--- End diff --

You're right, if both are not null. My bad!
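
The null-handling being discussed in the quoted aggregate() can be exercised standalone. A minimal sketch, assuming a TimedCountSize stand-in with the same getCount()/getSize() accessors as NiFi's class (the class and method names below are illustrative, not NiFi's actual API surface):

```java
public class AggregateDemo {
    // Stand-in for org.apache.nifi.util.timebuffer.TimedCountSize: an immutable
    // (count, size) pair, as implied by the quoted diff.
    static final class TimedCountSize {
        private final long count;
        private final long size;
        TimedCountSize(long count, long size) { this.count = count; this.size = size; }
        long getCount() { return count; }
        long getSize() { return size; }
    }

    // Mirrors the quoted logic: both-null yields a zero entity, one-null yields
    // the other, otherwise counts and sizes are summed.
    static TimedCountSize aggregate(TimedCountSize oldValue, TimedCountSize toAdd) {
        if (oldValue == null && toAdd == null) {
            return new TimedCountSize(0L, 0L);   // both absent: empty aggregate
        } else if (oldValue == null) {
            return toAdd;                        // only the new value exists
        } else if (toAdd == null) {
            return oldValue;                     // nothing to add
        }
        return new TimedCountSize(oldValue.getCount() + toAdd.getCount(),
                oldValue.getSize() + toAdd.getSize());
    }

    public static void main(String[] args) {
        TimedCountSize merged = aggregate(new TimedCountSize(2, 100), new TimedCountSize(3, 50));
        System.out.println(merged.getCount() + "," + merged.getSize());   // prints "5,150"
    }
}
```

The point of the reviewer exchange: the final return is only reachable when both arguments are non-null, so the branches are not redundant.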


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to older version, if seen after a rollback, 
> will simply be ignored.
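
The rollback rules above (new fields may be added, required fields stay required, a missing field needs a sensible default for older readers) can be sketched with a toy record format. This is an illustrative sketch of the compatibility pattern, not NiFi's actual serialization:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SchemaEvolutionDemo {
    // v1 writes only (count); v2 appends an optional "size" field guarded by a
    // presence byte. A v1 reader stops before the new bytes; a v2 reader of v1
    // data falls back to a default -- the behavior the ticket calls for.
    static byte[] writeV1(long count) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeLong(count);
        }
        return buf.toByteArray();
    }

    static byte[] writeV2(long count, long size) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeLong(count);
            out.writeByte(1);      // presence byte for the new optional field
            out.writeLong(size);
        }
        return buf.toByteArray();
    }

    // v2 reader tolerates v1 records by defaulting the missing field.
    static long[] readV2(byte[] record) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(record));
        long count = in.readLong();
        long size = 0L;                        // sensible default for old data
        if (in.available() > 0 && in.readByte() == 1) {
            size = in.readLong();
        }
        return new long[] {count, size};
    }

    public static void main(String[] args) throws IOException {
        long[] fromV1 = readV2(writeV1(7));
        long[] fromV2 = readV2(writeV2(7, 1024));
        System.out.println(fromV1[0] + "," + fromV1[1]);   // prints "7,0"
        System.out.println(fromV2[0] + "," + fromV2[1]);   // prints "7,1024"
    }
}
```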





[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-16 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r88327262
  
--- Diff: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/timebuffer/CountSizeEntityAccess.java
 ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.util.timebuffer;
+
+public class CountSizeEntityAccess implements EntityAccess<TimedCountSize> {
+    @Override
+    public TimedCountSize aggregate(final TimedCountSize oldValue, final TimedCountSize toAdd) {
+        if (oldValue == null && toAdd == null) {
+            return new TimedCountSize(0L, 0L);
+        } else if (oldValue == null) {
+            return toAdd;
+        } else if (toAdd == null) {
+            return oldValue;
+        }
+
+        return new TimedCountSize(oldValue.getCount() + toAdd.getCount(), oldValue.getSize() + toAdd.getSize());
--- End diff --

You're right, if both are not null. My bad!




[jira] [Updated] (NIFI-2985) Update User Guide for Backpressure Visual indicator

2016-11-16 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim updated NIFI-2985:
-
Summary: Update User Guide for Backpressure Visual indicator  (was: Update 
docs for Backpressure Visual indicator)

> Update User Guide for Backpressure Visual indicator
> ---
>
> Key: NIFI-2985
> URL: https://issues.apache.org/jira/browse/NIFI-2985
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Affects Versions: 1.1.0
>Reporter: Andrew Lim
>Assignee: Andrew Lim
> Fix For: 1.1.0
>
>
> In NIFI-766, the Backpressure visual indicator was added to connections in 
> the UI.  This new feature should be captured in the documentation.





[jira] [Updated] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and does not provide the correct command line usage output

2016-11-16 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-3049:
--
Summary: ZooKeeper Migrator Toolkit addition has broken logging in NiFI 
Toolkit and does not provide the correct command line usage output  (was: 
ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit)

> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit and 
> does not provide the correct command line usage output
> -
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-property-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The ZooKeeper Migrator Toolkit should also be modified to use ZooKeeper 
> client's chroot capability using the connection string (with path appended to 
> it) rather than the migrator attempting to implement that with custom logic.
> Finally, the command line output should be modified to adhere to the standard 
> established by the other tools in the toolkit.





[jira] [Updated] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit

2016-11-16 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-3049:
--
Description: 
The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has a 
hard dependency on log4j, which conflicts with nifi-property-loader's dependency 
on logback.  When the toolkit runs, SLF4J finds two logging implementations to 
bind to and complains, and according to SLF4J's documentation, it is considered 
random which implementation SLF4J will use.

The ZooKeeper Migrator Toolkit should also be modified to use ZooKeeper 
client's chroot capability using the connection string (with path appended to 
it) rather than the migrator attempting to implement that with custom logic.

Finally, the command line output should be modified to adhere to the standard 
established by the other tools in the toolkit.

  was:
The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has a 
hard dependency on log4j, which conflicts with nifi-property-loader's dependency 
on logback.  When the toolkit runs, SLF4J finds two logging implementations to 
bind to and complains, and according to SLF4J's documentation, it is considered 
random which implementation SLF4J will use.

The ZooKeeper Migrator Toolkit should also be modified 


> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit
> --
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-property-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The ZooKeeper Migrator Toolkit should also be modified to use ZooKeeper 
> client's chroot capability using the connection string (with path appended to 
> it) rather than the migrator attempting to implement that with custom logic.
> Finally, the command line output should be modified to adhere to the standard 
> established by the other tools in the toolkit.





[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-16 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r88325963
  
--- Diff: 
nifi-commons/nifi-schema-utils/src/main/java/org/apache/nifi/repository/schema/SchemaRecordReader.java
 ---
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.repository.schema;
+
+import java.io.DataInputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+
+
+public class SchemaRecordReader {
+    private final RecordSchema schema;
+
+    public SchemaRecordReader(final RecordSchema schema) {
+        this.schema = schema;
+    }
+
+    public static SchemaRecordReader fromSchema(final RecordSchema schema) {
+        return new SchemaRecordReader(schema);
+    }
+
+    private static void fillBuffer(final InputStream in, final byte[] destination) throws IOException {
+        int bytesRead = 0;
+        int len;
+        while (bytesRead < destination.length) {
+            len = in.read(destination, bytesRead, destination.length - bytesRead);
+            if (len < 0) {
+                throw new EOFException();
+            }
+
+            bytesRead += len;
+        }
+    }
+
+    public Record readRecord(final InputStream in) throws IOException {
+        final int sentinelByte = in.read();
+        if (sentinelByte < 0) {
+            return null;
+        }
+
+        if (sentinelByte != 1) {
+            throw new IOException("Expected to read a Sentinel Byte of '1' but got a value of '" + sentinelByte + "' instead");
+        }
+
+        final List<RecordField> schemaFields = schema.getFields();
+        final Map<RecordField, Object> fields = new HashMap<>(schemaFields.size());
+
+        for (final RecordField field : schema.getFields()) {
+            final Object value = readField(in, field);
+            fields.put(field, value);
+        }
+
+        return new FieldMapRecord(fields, schema);
+    }
+
+
+    private Object readField(final InputStream in, final RecordField field) throws IOException {
+        switch (field.getRepetition()) {
+            case ZERO_OR_MORE: {
+                // If repetition is 0+ then that means we have a list and need to read how many items are in the list.
+                final int iterations = readInt(in);
+                if (iterations == 0) {
+                    return Collections.emptyList();
+                }
+
+                final List<Object> value = new ArrayList<>(iterations);
+                for (int i = 0; i < iterations; i++) {
+                    value.add(readFieldValue(in, field.getFieldType(), field.getFieldName(), field.getSubFields()));
+                }
+
+                return value;
+            }
+            case ZERO_OR_ONE: {
+                // If repetition is 0 or 1 (optional), then check if next byte is a 0, which means field is absent or 1, which means
+                // field is present. Otherwise, throw an Exception.
+                final int nextByte = in.read();
+                if (nextByte == -1) {
+                    throw new EOFException("Unexpected End-of-File when attempting to read Repetition value for field '" + field.getFieldName() + "'");
+                }
+                if (nextByte == 0) {
+                    return null;
+                }
+                if (nextByte != 1) {
+                    throw new IOException("Invalid Boolean value found when reading 'Repetition' of field '" + field.getFieldName() + "'. Expected 0 or 1 but got " + (nextByte & 0xFF));
+                }
+  
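
The fillBuffer helper in the quoted SchemaRecordReader is a common pattern worth isolating: a single InputStream.read() may return fewer bytes than requested, so the loop keeps reading until the buffer is full or the stream ends. A self-contained sketch using only the JDK (the class name is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class FillBufferDemo {
    // Same loop as the quoted diff: keep calling read() until destination is
    // full, advancing the offset by however many bytes each call returned.
    static void fillBuffer(final InputStream in, final byte[] destination) throws IOException {
        int bytesRead = 0;
        while (bytesRead < destination.length) {
            final int len = in.read(destination, bytesRead, destination.length - bytesRead);
            if (len < 0) {
                throw new EOFException();   // stream ended before the buffer was filled
            }
            bytesRead += len;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] buf = new byte[5];
        fillBuffer(new ByteArrayInputStream("hello world".getBytes()), buf);
        System.out.println(new String(buf));   // prints "hello"
    }
}
```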

[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671552#comment-15671552
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r88325829
  
--- Diff: 
nifi-commons/nifi-write-ahead-log/src/main/java/org/wali/MinimalLockingWriteAheadLog.java
 ---
@@ -512,25 +525,38 @@ public synchronized int checkpoint() throws IOException {
                 swapLocations = new HashSet<>(externalLocations);
                 for (final Partition<T> partition : partitions) {
                     try {
-                        partition.rollover();
+                        partitionStreams.add(partition.rollover());
                     } catch (final Throwable t) {
                         partition.blackList();
                         numberBlackListedPartitions.getAndIncrement();
                         throw t;
                     }
                 }
-
-                // notify global sync with the write lock held. We do this because we don't want the repository to get updated
-                // while the listener is performing its necessary tasks
-                if (syncListener != null) {
-                    syncListener.onGlobalSync();
-                }
             } finally {
                 writeLock.unlock();
             }
 
             stopTheWorldNanos = System.nanoTime() - stopTheWorldStart;
 
+            // Close all of the Partitions' Output Streams. We do this here, instead of in Partition.rollover()
+            // because we want to do this outside of the write lock. Because calling close() on FileOutputStream can
+            // be very expensive, as it has to flush the data to disk, we don't want to prevent other Process Sessions
+            // from getting committed. Since rollover() transitions the partition to write to a new file already, there
+            // is no reason that we need to close this FileOutputStream before releasing the write lock. Also, if any Exception
+            // does get thrown when calling close(), we don't need to blacklist the partition, as the stream that was getting
+            // closed is not the stream being written to for the partition anyway.
+            for (final OutputStream partitionStream : partitionStreams) {
+                partitionStream.close();
--- End diff --

If close() fails we do want to throw an Exception. But you're right - we 
should catch the Exception first and close the other streams so that there is 
no resource leak. Will update that.
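
The fix described here, catch the first close() failure, still close the remaining streams, then rethrow, is a standard pattern. A minimal sketch (names are illustrative, not the actual NiFi patch), using Throwable.addSuppressed so later failures are not lost:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.List;

public class CloseAllDemo {
    // Close every stream even if an earlier close() throws: remember the first
    // IOException, attach later ones as suppressed, and rethrow at the end so
    // no stream is leaked and no failure is silently swallowed.
    static void closeAll(final List<? extends OutputStream> streams) throws IOException {
        IOException firstFailure = null;
        for (final OutputStream stream : streams) {
            try {
                stream.close();
            } catch (final IOException e) {
                if (firstFailure == null) {
                    firstFailure = e;
                } else {
                    firstFailure.addSuppressed(e);
                }
            }
        }
        if (firstFailure != null) {
            throw firstFailure;
        }
    }

    public static void main(String[] args) {
        OutputStream bad = new OutputStream() {
            @Override public void write(int b) {}
            @Override public void close() throws IOException { throw new IOException("disk full"); }
        };
        ByteArrayOutputStream good = new ByteArrayOutputStream();
        try {
            closeAll(Arrays.asList(bad, good));   // "good" is still closed despite "bad" failing
        } catch (IOException e) {
            System.out.println("first failure: " + e.getMessage());   // prints "first failure: disk full"
        }
    }
}
```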


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not 

[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671555#comment-15671555
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r88325963
  
--- Diff: 
nifi-commons/nifi-schema-utils/src/main/java/org/apache/nifi/repository/schema/SchemaRecordReader.java
 ---
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.repository.schema;
+
+import java.io.DataInputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+
+
+public class SchemaRecordReader {
+    private final RecordSchema schema;
+
+    public SchemaRecordReader(final RecordSchema schema) {
+        this.schema = schema;
+    }
+
+    public static SchemaRecordReader fromSchema(final RecordSchema schema) {
+        return new SchemaRecordReader(schema);
+    }
+
+    private static void fillBuffer(final InputStream in, final byte[] destination) throws IOException {
+        int bytesRead = 0;
+        int len;
+        while (bytesRead < destination.length) {
+            len = in.read(destination, bytesRead, destination.length - bytesRead);
+            if (len < 0) {
+                throw new EOFException();
+            }
+
+            bytesRead += len;
+        }
+    }
+
+    public Record readRecord(final InputStream in) throws IOException {
+        final int sentinelByte = in.read();
+        if (sentinelByte < 0) {
+            return null;
+        }
+
+        if (sentinelByte != 1) {
+            throw new IOException("Expected to read a Sentinel Byte of '1' but got a value of '" + sentinelByte + "' instead");
+        }
+
+        final List<RecordField> schemaFields = schema.getFields();
+        final Map<RecordField, Object> fields = new HashMap<>(schemaFields.size());
+
+        for (final RecordField field : schema.getFields()) {
+            final Object value = readField(in, field);
+            fields.put(field, value);
+        }
+
+        return new FieldMapRecord(fields, schema);
+    }
+
+
+    private Object readField(final InputStream in, final RecordField field) throws IOException {
+        switch (field.getRepetition()) {
+            case ZERO_OR_MORE: {
+                // If repetition is 0+ then that means we have a list and need to read how many items are in the list.
+                final int iterations = readInt(in);
+                if (iterations == 0) {
+                    return Collections.emptyList();
+                }
+
+                final List<Object> value = new ArrayList<>(iterations);
+                for (int i = 0; i < iterations; i++) {
+                    value.add(readFieldValue(in, field.getFieldType(), field.getFieldName(), field.getSubFields()));
+                }
+
+                return value;
+            }
+            case ZERO_OR_ONE: {
+                // If repetition is 0 or 1 (optional), then check if next byte is a 0, which means field is absent or 1, which means
+                // field is present. Otherwise, throw an Exception.
+                final int nextByte = in.read();
+                if (nextByte == -1) {
+                    throw new EOFException("Unexpected End-of-File when attempting to read Repetition value for field '" + field.getFieldName() + "'");
+                }
+                if (nextByte == 0) {
+                    return null;
+                }
+if 
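
The ZERO_OR_ONE branch quoted above encodes an optional field as a presence byte (0 = absent, 1 = present, anything else = error) followed by the value. A self-contained sketch of that convention with both the writer and reader sides (pure JDK, names illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

public class OptionalFieldDemo {
    // Write an optional UTF string: presence byte 0 when null, else 1 then the value.
    static void writeOptional(DataOutputStream out, String value) throws IOException {
        if (value == null) {
            out.write(0);
        } else {
            out.write(1);
            out.writeUTF(value);
        }
    }

    // Mirror of the quoted readField ZERO_OR_ONE logic: -1 -> unexpected EOF,
    // 0 -> field absent (null), 1 -> read the value, anything else -> error.
    static String readOptional(DataInputStream in) throws IOException {
        final int presence = in.read();
        if (presence == -1) {
            throw new EOFException("Unexpected End-of-File reading presence byte");
        }
        if (presence == 0) {
            return null;
        }
        if (presence != 1) {
            throw new IOException("Invalid presence byte: " + presence);
        }
        return in.readUTF();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        writeOptional(out, null);
        writeOptional(out, "nifi");
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(readOptional(in));   // prints "null"
        System.out.println(readOptional(in));   // prints "nifi"
    }
}
```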

[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-16 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r88325829
  
--- Diff: 
nifi-commons/nifi-write-ahead-log/src/main/java/org/wali/MinimalLockingWriteAheadLog.java
 ---
@@ -512,25 +525,38 @@ public synchronized int checkpoint() throws IOException {
                 swapLocations = new HashSet<>(externalLocations);
                 for (final Partition<T> partition : partitions) {
                     try {
-                        partition.rollover();
+                        partitionStreams.add(partition.rollover());
                     } catch (final Throwable t) {
                         partition.blackList();
                         numberBlackListedPartitions.getAndIncrement();
                         throw t;
                     }
                 }
-
-                // notify global sync with the write lock held. We do this because we don't want the repository to get updated
-                // while the listener is performing its necessary tasks
-                if (syncListener != null) {
-                    syncListener.onGlobalSync();
-                }
             } finally {
                 writeLock.unlock();
             }
 
             stopTheWorldNanos = System.nanoTime() - stopTheWorldStart;
 
+            // Close all of the Partitions' Output Streams. We do this here, instead of in Partition.rollover()
+            // because we want to do this outside of the write lock. Because calling close() on FileOutputStream can
+            // be very expensive, as it has to flush the data to disk, we don't want to prevent other Process Sessions
+            // from getting committed. Since rollover() transitions the partition to write to a new file already, there
+            // is no reason that we need to close this FileOutputStream before releasing the write lock. Also, if any Exception
+            // does get thrown when calling close(), we don't need to blacklist the partition, as the stream that was getting
+            // closed is not the stream being written to for the partition anyway.
+            for (final OutputStream partitionStream : partitionStreams) {
+                partitionStream.close();
--- End diff --

If close() fails we do want to throw an Exception. But you're right - we 
should catch the Exception first and close the other streams so that there is 
no resource leak. Will update that.




[jira] [Updated] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit

2016-11-16 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-3049:
--
Description: 
The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has a 
hard dependency on log4j, which conflicts with nifi-property-loader's dependency 
on logback.  When the toolkit runs, SLF4J finds two logging implementations to 
bind to and complains, and according to SLF4J's documentation, it is considered 
random which implementation SLF4J will use.

The ZooKeeper Migrator Toolkit should also be modified 

  was:The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which 
has a hard dependency on log4j, which conflicts with nifi-property-loader's 
dependency on logback.  When the toolkit runs, SLF4J finds two logging 
implementations to bind to and complains, and according to SLF4J's 
documentation, it is considered random which implementation SLF4J will use.


> ZooKeeper Migrator Toolkit addition has broken logging in NiFI Toolkit
> --
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6 which has 
> a hard dependency on log4j, which conflicts with nifi-property-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, it is considered random which implementation SLF4J will use.
> The ZooKeeper Migrator Toolkit should also be modified 





[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-16 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r88325590
  
--- Diff: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/timebuffer/CountSizeEntityAccess.java
 ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.util.timebuffer;
+
+public class CountSizeEntityAccess implements EntityAccess<TimedCountSize> {
+    @Override
+    public TimedCountSize aggregate(final TimedCountSize oldValue, final TimedCountSize toAdd) {
+        if (oldValue == null && toAdd == null) {
+            return new TimedCountSize(0L, 0L);
+        } else if (oldValue == null) {
+            return toAdd;
+        } else if (toAdd == null) {
+            return oldValue;
+        }
+
+        return new TimedCountSize(oldValue.getCount() + toAdd.getCount(), oldValue.getSize() + toAdd.getSize());
--- End diff --

I don't believe those are the same...




[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671548#comment-15671548
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r88325590
  
--- Diff: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/timebuffer/CountSizeEntityAccess.java
 ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.util.timebuffer;
+
+public class CountSizeEntityAccess implements EntityAccess<TimedCountSize> {
+    @Override
+    public TimedCountSize aggregate(final TimedCountSize oldValue, final TimedCountSize toAdd) {
+        if (oldValue == null && toAdd == null) {
+            return new TimedCountSize(0L, 0L);
+        } else if (oldValue == null) {
+            return toAdd;
+        } else if (toAdd == null) {
+            return oldValue;
+        }
+
+        return new TimedCountSize(oldValue.getCount() + toAdd.getCount(), oldValue.getSize() + toAdd.getSize());
--- End diff --

I don't believe those are the same...
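For readers following along, the null-handling semantics of the aggregate() method in the diff above can be illustrated with a minimal standalone sketch (my own simplification, not the NiFi classes themselves; TimedCountSize here is a cut-down stand-in for org.apache.nifi.util.timebuffer.TimedCountSize):

```java
// Standalone sketch of the aggregate() merge semantics from the diff above.
// Both inputs null -> zero-valued result; one null -> the other is returned
// unchanged; otherwise the counts and sizes are summed.
public class AggregateSketch {
    static final class TimedCountSize {
        private final long count;
        private final long size;

        TimedCountSize(final long count, final long size) {
            this.count = count;
            this.size = size;
        }

        long getCount() { return count; }
        long getSize() { return size; }
    }

    static TimedCountSize aggregate(final TimedCountSize oldValue, final TimedCountSize toAdd) {
        if (oldValue == null && toAdd == null) {
            return new TimedCountSize(0L, 0L);  // nothing recorded yet
        } else if (oldValue == null) {
            return toAdd;
        } else if (toAdd == null) {
            return oldValue;
        }
        // Merge by summing both the event count and the byte size.
        return new TimedCountSize(oldValue.getCount() + toAdd.getCount(),
                oldValue.getSize() + toAdd.getSize());
    }

    public static void main(String[] args) {
        final TimedCountSize merged = aggregate(new TimedCountSize(3L, 100L), new TimedCountSize(2L, 50L));
        System.out.println(merged.getCount() + "," + merged.getSize()); // prints 5,150
    }
}
```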


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to older version, if seen after a rollback, 
> will simply be ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFi Toolkit

2016-11-16 Thread Jeff Storck (JIRA)
Jeff Storck created NIFI-3049:
-

 Summary: ZooKeeper Migrator Toolkit addition has broken logging in NiFi Toolkit
 Key: NIFI-3049
 URL: https://issues.apache.org/jira/browse/NIFI-3049
 Project: Apache NiFi
  Issue Type: Bug
  Components: Tools and Build
Affects Versions: 1.1.0
Reporter: Jeff Storck


The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6, which has a 
hard dependency on log4j; this conflicts with nifi-property-loader's dependency 
on logback.  When the toolkit runs, SLF4J finds two logging implementations to 
bind to and complains, and according to SLF4J's documentation, which 
implementation it binds to is effectively random.
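One common remedy for this kind of dual-binding conflict (a sketch of the usual Maven approach, not necessarily the fix committed for this ticket) is to exclude the log4j artifacts from the ZooKeeper dependency so that only the logback binding remains on the classpath:

```xml
<!-- Hypothetical pom.xml fragment: exclude log4j (and the slf4j-log4j12
     binding) from the ZooKeeper 3.4.6 dependency so that SLF4J finds only
     the logback implementation at runtime. -->
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.6</version>
    <exclusions>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```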





[jira] [Assigned] (NIFI-3049) ZooKeeper Migrator Toolkit addition has broken logging in NiFi Toolkit

2016-11-16 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck reassigned NIFI-3049:
-

Assignee: Jeff Storck

> ZooKeeper Migrator Toolkit addition has broken logging in NiFi Toolkit
> --
>
> Key: NIFI-3049
> URL: https://issues.apache.org/jira/browse/NIFI-3049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.1.0
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>
> The ZooKeeper Migrator Toolkit has a dependency on ZooKeeper 3.4.6, which has 
> a hard dependency on log4j; this conflicts with nifi-property-loader's 
> dependency on logback.  When the toolkit runs, SLF4J finds two logging 
> implementations to bind to and complains, and according to SLF4J's 
> documentation, which implementation it binds to is effectively random.





[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-16 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r88325073
  
--- Diff: nifi-commons/nifi-utils/src/main/java/org/apache/nifi/stream/io/BufferedInputStream.java ---
@@ -16,19 +16,445 @@
  */
 package org.apache.nifi.stream.io;
 
+import java.io.IOException;
 import java.io.InputStream;
 
 /**
  * This class is a slight modification of the BufferedInputStream in the java.io package. The modification is that this implementation does not provide synchronization on method calls, which means
  * that this class is not suitable for use by multiple threads. However, the absence of these synchronized blocks results in potentially much better performance.
--- End diff --

You're welcome to create your own benchmarks. My experience is that 
synchronization has tremendous performance implications when used in 'critical 
sections' of code. For example, if you have code that reads 1 GB of data, one 
byte at a time, you'd be crossing a synchronization barrier over 1 billion 
times. This can equate to several seconds spent simply crossing that barrier, 
even when using a single thread.
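The cost being discussed can be made concrete with a rough standalone micro-benchmark (my own sketch, not code from this PR) that crosses a monitor once per call, modeling a synchronized read() invoked once per byte; absolute numbers vary by JVM and hardware, so no timing is asserted here:

```java
// Rough micro-benchmark sketch of the cost described above: one synchronized
// method call per "byte read" versus the unsynchronized equivalent. Reading
// N bytes one at a time through a synchronized read() crosses the monitor
// N times, even with a single uncontended thread.
public class SyncBarrierSketch {
    private static final int N = 10_000_000; // stand-in for "1 GB, one byte at a time"

    long count; // package-private so tests can inspect it

    synchronized long syncNext() { return count++; }   // models a synchronized read()
    long plainNext() { return count++; }               // models an unsynchronized read()

    public static void main(String[] args) {
        final SyncBarrierSketch s = new SyncBarrierSketch();

        final long t0 = System.nanoTime();
        for (int i = 0; i < N; i++) {
            s.syncNext();
        }
        final long t1 = System.nanoTime();
        for (int i = 0; i < N; i++) {
            s.plainNext();
        }
        final long t2 = System.nanoTime();

        System.out.println("synchronized:   " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("unsynchronized: " + (t2 - t1) / 1_000_000 + " ms");
        System.out.println("total calls: " + s.count); // prints total calls: 20000000
    }
}
```

A proper comparison would use a harness such as JMH to control for JIT warm-up, but even this crude loop illustrates why removing synchronization from a per-byte hot path matters.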


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-1002) support for Listen WebSocket processor

2016-11-16 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-1002:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> support for Listen WebSocket processor 
> ---
>
> Key: NIFI-1002
> URL: https://issues.apache.org/jira/browse/NIFI-1002
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 0.4.0
>Reporter: sumanth chinthagunta
>Assignee: Koji Kawamura
>  Labels: newbie
> Fix For: 1.1.0
>
>
> A WebSocket listen processor would be helpful for IoT data ingestion.
> I am playing with embedded Vert.x for WebSocket, and also the ability to put 
> FlowFiles back to the WebSocket client via the Vert.x EventBus.
> https://github.com/xmlking/nifi-websocket 
> I am new to NiFi; any advice would be helpful.
> PS: I feel forcing interfaces for Controller Services is unnecessary, as in 
> many cases Controller Services are only used by a set of Processors and 
> developers usually bundle them together.
> 





[GitHub] nifi pull request #1235: NIFI-1002 added L&N entries for Bouncy Castle

2016-11-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1235



