[jira] [Commented] (NIFIREG-124) As a user I want the sidenav table sorting to persist when I open dialogs.

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388245#comment-16388245
 ] 

ASF GitHub Bot commented on NIFIREG-124:


Github user scottyaslan commented on the issue:

https://github.com/apache/nifi-registry/pull/100
  
Nice catch! I have updated this PR to address the issue with the "add to 
group" button. Please re-review. Thanks!


> As a user I want the sidenav table sorting to persist when I open dialogs.
> --
>
> Key: NIFIREG-124
> URL: https://issues.apache.org/jira/browse/NIFIREG-124
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Major
>
> When editing a group, if the user sorts the users listed in the group table 
> in descending order but then clicks the Add User button, the sort order in 
> the users-in-group table switches back to ascending, even if no change is 
> made in the Add User dialog. This bug also exists when editing a user and 
> when editing a bucket.
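
The state-preservation pattern behind this kind of fix can be sketched as follows (an illustrative Python sketch, not the actual NiFi Registry UI code; all names are hypothetical):

```python
# Illustrative sketch: snapshot the table's sort state before a dialog opens
# and restore it afterwards, instead of letting the post-dialog table reload
# clobber the sort with the default ascending order.

class SortableTable:
    def __init__(self, rows):
        self.rows = rows
        self.sort_column = "name"
        self.descending = False

    def sort(self, column, descending):
        self.sort_column = column
        self.descending = descending
        self.rows.sort(key=lambda r: r[column], reverse=descending)

    def reload(self):
        # Buggy behavior being worked around: a reload resets sorting.
        self.sort("name", False)

def open_dialog_preserving_sort(table, dialog_action):
    """Snapshot sort state, run the dialog, reload, then re-apply the sort."""
    column, descending = table.sort_column, table.descending
    dialog_action()              # e.g. the Add User dialog
    table.reload()               # refresh that used to reset the sort
    table.sort(column, descending)

table = SortableTable([{"name": "alice"}, {"name": "bob"}])
table.sort("name", descending=True)
open_dialog_preserving_sort(table, dialog_action=lambda: None)
print(table.descending)  # True: the sort order survives the dialog
```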



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4855) The layout of NiFi API document is broken

2018-03-06 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-4855.
---
   Resolution: Fixed
Fix Version/s: 1.6.0

> The layout of NiFi API document is broken
> -
>
> Key: NIFI-4855
> URL: https://issues.apache.org/jira/browse/NIFI-4855
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.5.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: broken_ui.png, fixed_ui.001.png
>
>
> This is reported by Hiroaki Nawa.
> It looks like the _Controller_ section mistakenly includes the _Versions_ 
> section, and the layout is broken.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-124) As a user I want the sidenav table sorting to persist when I open dialogs.

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388334#comment-16388334
 ] 

ASF GitHub Bot commented on NIFIREG-124:


Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/100


> As a user I want the sidenav table sorting to persist when I open dialogs.
> --
>
> Key: NIFIREG-124
> URL: https://issues.apache.org/jira/browse/NIFIREG-124
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Major
>
> When editing a group, if the user sorts the users listed in the group table 
> in descending order but then clicks the Add User button, the sort order in 
> the users-in-group table switches back to ascending, even if no change is 
> made in the Add User dialog. This bug also exists when editing a user and 
> when editing a bucket.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #100: [NIFIREG-124] persist sidenav table sorting

2018-03-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/100


---


[GitHub] nifi issue #2515: NIFI-4885: Granular component restrictions

2018-03-06 Thread andrewmlim
Github user andrewmlim commented on the issue:

https://github.com/apache/nifi/pull/2515
  
After deleting a specific granular policy, the following message appears:

Showing effective policy inherited from Process Group 
restricted-components. Override this policy.

The text "restricted-components" in the message is hyperlinked. If clicked, 
you get a dialog that says:

Unable to locate group with id 'restricted-components'.

Unable to enter the selected group.




---


[jira] [Commented] (NIFI-4885) More granular restricted component categories

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388376#comment-16388376
 ] 

ASF GitHub Bot commented on NIFI-4885:
--

Github user andrewmlim commented on the issue:

https://github.com/apache/nifi/pull/2515
  
After deleting a specific granular policy, the following message appears:

Showing effective policy inherited from Process Group 
restricted-components. Override this policy.

The text "restricted-components" in the message is hyperlinked. If clicked, 
you get a dialog that says:

Unable to locate group with id 'restricted-components'.

Unable to enter the selected group.




> More granular restricted component categories
> -
>
> Key: NIFI-4885
> URL: https://issues.apache.org/jira/browse/NIFI-4885
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Major
>
> Update the Restricted annotation to support more granular categories. 
> Available categories will map to new access policies. Example categories and 
> their corresponding access policies may be:
>  * read-filesystem (/restricted-components/read-filesystem)
>  * write-filesystem (/restricted-components/write-filesystem)
>  * code-execution (/restricted-components/code-execution)
>  * keytab-access (/restricted-components/keytab-access)
> The hierarchical nature of the access policies will support backward 
> compatibility with existing installations where the policy of 
> /restricted-components was used to enforce all subcategories. Any users with 
> /restricted-components permissions will be granted access to all 
> subcategories. In order to leverage the new granular categories, an 
> administrator will need to use NiFi to update their access policies (remove a 
> user from /restricted-components and place them into the desired subcategory).
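
The hierarchical fallback described above can be sketched as follows (a hypothetical helper, not NiFi's actual authorizer code): a user granted the parent /restricted-components policy is implicitly granted every subcategory, while a subcategory grant stays scoped.

```python
# Sketch of hierarchical policy resolution: walk from the requested policy
# up through its ancestors until a granted policy is found.

def is_authorized(user_policies, required_policy):
    """True if the user holds the required policy or any ancestor of it."""
    policy = required_policy
    while policy:
        if policy in user_policies:
            return True
        # /restricted-components/read-filesystem falls back to
        # /restricted-components, which falls back to "" (loop ends).
        policy = policy.rsplit("/", 1)[0]
    return False

# Legacy admin with only the parent policy keeps access to all subcategories.
legacy = {"/restricted-components"}
print(is_authorized(legacy, "/restricted-components/read-filesystem"))  # True

# A user granted only one subcategory does not gain the others.
scoped = {"/restricted-components/keytab-access"}
print(is_authorized(scoped, "/restricted-components/write-filesystem"))  # False
```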



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388452#comment-16388452
 ] 

ASF GitHub Bot commented on NIFI-4893:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2480


> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: issue1.zip
>
>
> Given an Avro schema that has a default value defined for an array field, it 
> cannot be converted to a NiFi record schema.
> To reproduce the bug, try to convert the following Avro schema to a record 
> schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> The conversion uses the org.apache.nifi.avro.AvroTypeUtil class. A Maven 
> project that reproduces the issue, along with the fix, is attached.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  
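
Note that the schema above declares `"default": 0` on an array-typed field, while the Avro specification expects a JSON array there; a converter must decide how to tolerate the scalar. The defensive handling can be sketched as follows (an illustrative Python sketch, not NiFi's actual AvroTypeUtil fix):

```python
# Illustrative sketch: when a field's type is an array, a non-list default
# such as 0 must not crash the conversion; fall back to an empty list.

import json

def default_for_field(field):
    """Return a usable default for an Avro record field, tolerating
    scalar defaults declared on array-typed fields."""
    field_type = field.get("type")
    default = field.get("default")
    is_array = isinstance(field_type, dict) and field_type.get("type") == "array"
    if is_array and not isinstance(default, list):
        # Spec-compliant schemas use a JSON array here; a scalar like 0
        # is mapped to an empty list instead of raising an error.
        return []
    return default

schema = json.loads("""{
    "type": "record", "name": "Foo1", "namespace": "foo.namespace",
    "fields": [{
        "name": "listOfInt",
        "type": {"type": "array", "items": "int"},
        "doc": "array of ints",
        "default": 0
    }]
}""")

print(default_for_field(schema["fields"][0]))  # []
```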



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry issue #100: [NIFIREG-124] persist sidenav table sorting

2018-03-06 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi-registry/pull/100
  
Nice catch! I have updated this PR to address the issue with the "add to 
group" button. Please re-review. Thanks!


---


[jira] [Commented] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388346#comment-16388346
 ] 

ASF GitHub Bot commented on NIFI-4893:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2480
  
@gardellajuanpablo looks good now! Thanks for bringing this up and sticking 
with us till it's all in good shape! +1 merged to master.


> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: issue1.zip
>
>
> Given an Avro schema that has a default value defined for an array field, it 
> cannot be converted to a NiFi record schema.
> To reproduce the bug, try to convert the following Avro schema to a record 
> schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> The conversion uses the org.apache.nifi.avro.AvroTypeUtil class. A Maven 
> project that reproduces the issue, along with the fix, is attached.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2480: NIFI-4893 Cannot convert Avro schemas to Record schemas wi...

2018-03-06 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2480
  
@gardellajuanpablo looks good now! Thanks for bringing this up and sticking 
with us till it's all in good shape! +1 merged to master.


---


[jira] [Resolved] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-03-06 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne resolved NIFI-4893.
--
   Resolution: Fixed
Fix Version/s: 1.6.0

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: issue1.zip
>
>
> Given an Avro schema that has a default value defined for an array field, it 
> cannot be converted to a NiFi record schema.
> To reproduce the bug, try to convert the following Avro schema to a record 
> schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> The conversion uses the org.apache.nifi.avro.AvroTypeUtil class. A Maven 
> project that reproduces the issue, along with the fix, is attached.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4033) Allow multiple Controller Services to be enabled or disabled

2018-03-06 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388353#comment-16388353
 ] 

Mark Payne commented on NIFI-4033:
--

[~sivaprasanna] I agree, we should also allow the deletion of multiple 
Controller Services.

> Allow multiple Controller Services to be enabled or disabled
> 
>
> Key: NIFI-4033
> URL: https://issues.apache.org/jira/browse/NIFI-4033
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Mark Payne
>Priority: Major
>
> It would be nice to select multiple controller services (via checkbox or just 
> shift-click or whatever) and then click a button to enable or disable all of 
> them. It would be particularly useful when importing a new template, for 
> example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-90) Replace explicit penalization with automatic penalization/back-off

2018-03-06 Thread Boris Tyukin (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-90?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388431#comment-16388431
 ] 

Boris Tyukin commented on NIFI-90:
--

I am surprised this Jira has not gotten any traction after 3 years... Having 
used Apache Airflow for a while, I am looking for retry capabilities in NiFi, 
and it comes down to "building your own" flows that handle retries in a loop 
and then sleep for some time. The best solution I found was suggested by 
[Alessio Palma|https://community.hortonworks.com/users/16011/ozw1z5rd.html] 
[https://community.hortonworks.com/questions/56167/is-there-wait-processor-in-nifi.html], 
but it would still be nice to have retry capabilities like the Airflow folks 
have. You can specify global retry behavior for a flow or override it per 
task/processor. This helps a lot when dealing with intermittent issues, like 
losing a network connection or a source database system being down for 
maintenance. 

> Replace explicit penalization with automatic penalization/back-off
> --
>
> Key: NIFI-90
> URL: https://issues.apache.org/jira/browse/NIFI-90
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Joseph Witt
>Priority: Minor
>
> Rather than having users configure explicit penalization periods and 
> requiring developers to implement it in their processors, we can automate 
> this. Perhaps keep a LinkedHashMap of size 5 or so 
> in the FlowFileRecord construct.  When a FlowFile is routed to a Connection, 
> the counter is incremented.  If the counter exceeds 3 visits to the same 
> connection, the FlowFile will be automatically penalized.  This protects us 
> "5 hops out" so that if we have something like DistributeLoad -> PostHTTP 
> with failure looping back to DistributeLoad, we will still penalize when 
> appropriate.
> In addition, we will remove the configuration option from the UI, setting the 
> penalization period to some default such as 5 seconds.
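
The counter idea described above can be sketched as follows (a hypothetical Python sketch, not NiFi's implementation; the bounded map stands in for the proposed LinkedHashMap of size 5):

```python
# Sketch of automatic penalization: each FlowFile keeps a small bounded map
# of recently visited connections; more than 3 routes to the same connection
# triggers automatic penalization.

from collections import OrderedDict

MAX_TRACKED = 5   # "LinkedHashMap of size 5 or so"
MAX_VISITS = 3

class FlowFile:
    def __init__(self):
        self.connection_visits = OrderedDict()
        self.penalized = False

def route(flowfile, connection_id):
    visits = flowfile.connection_visits
    # Increment the visit counter, refreshing recency for this connection.
    visits[connection_id] = visits.pop(connection_id, 0) + 1
    if len(visits) > MAX_TRACKED:
        visits.popitem(last=False)  # evict the least recently used entry
    if visits[connection_id] > MAX_VISITS:
        flowfile.penalized = True

ff = FlowFile()
for _ in range(4):            # e.g. failure looping back to DistributeLoad
    route(ff, "distribute-load -> post-http.failure")
print(ff.penalized)  # True
```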



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2516: Nifi 4516

2018-03-06 Thread JohannesDaniel
GitHub user JohannesDaniel opened a pull request:

https://github.com/apache/nifi/pull/2516

Nifi 4516

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/JohannesDaniel/nifi NIFI-4516

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2516.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2516


commit 0e99a9e52e5537830ca90b318f9684304998b6f4
Author: Pierre Villard 
Date:   2018-03-02T17:08:29Z

NIFI-4922 - Add badges to the README file

Signed-off-by: Pierre Villard 
Signed-off-by: James Wing 

This closes #2505.

commit c585f6e10df4d510207698243147928698948c17
Author: JohannesDaniel 
Date:   2018-03-05T16:30:42Z

NIFI-4516 FetchSolr-Processor




---


[GitHub] nifi issue #2516: Nifi 4516

2018-03-06 Thread JohannesDaniel
Github user JohannesDaniel commented on the issue:

https://github.com/apache/nifi/pull/2516
  
@ijokarumawak 
To be aligned with the GetSolr processor, I implemented two options for the 
data format of Solr results: Solr XML and record functions. However, facets 
and stats are written to flowfiles in JSON (which has the same structure as 
the Solr JSON). I did not implement record management for these two components 
to keep the complexity of the processor at a reasonable level. I chose JSON as 
it is probably the best-integrated format in NiFi. 


---


[GitHub] nifi pull request #2480: NIFI-4893 Cannot convert Avro schemas to Record sch...

2018-03-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2480


---


[jira] [Commented] (NIFI-4940) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388457#comment-16388457
 ] 

Mark Payne commented on NIFI-4940:
--

[~pvillard] I am probably just missing something obvious here, but I'm not sure 
that I'm following the intent exactly. It feels very odd to me to have a 
"Normalize Column Names for Avro" type of option on the CSV Reader. 
You also mention that UpdateRecord is not working as expected with "weird" 
column names - so I'm not sure how that ties into Avro. Is the concern here 
that RecordPath is not properly evaluating the field names in some cases? Or 
is your UpdateRecord processor configured with an Avro Writer and that's 
causing failures?

> CSVReader - Header schema strategy - normalize names
> 
>
> Key: NIFI-4940
> URL: https://issues.apache.org/jira/browse/NIFI-4940
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> When using the CSV Reader with the header-based schema access strategy, we 
> should add a boolean property indicating whether field names should be 
> normalized to be Avro-compatible. Otherwise it won't be possible, for 
> instance, to use the UpdateRecord processor to access a field with a "weird" 
> name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4940) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4940:
-
Resolution: Won't Do
Status: Resolved  (was: Patch Available)

> CSVReader - Header schema strategy - normalize names
> 
>
> Key: NIFI-4940
> URL: https://issues.apache.org/jira/browse/NIFI-4940
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> When using the CSV Reader with the header-based schema access strategy, we 
> should add a boolean property indicating whether field names should be 
> normalized to be Avro-compatible. Otherwise it won't be possible, for 
> instance, to use the UpdateRecord processor to access a field with a "weird" 
> name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4940) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388518#comment-16388518
 ] 

Pierre Villard commented on NIFI-4940:
--

Thanks for your comment [~markap14]! I didn't pay close attention.

It's possible to escape the field names when using a record path:
{code:java}
/'my id'/childField{code}
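
The quoting shown above works because a single-quoted path segment is taken as a literal field name. The parsing idea can be illustrated with a toy sketch (hypothetical code, not NiFi's RecordPath implementation):

```python
# Toy illustration: split a /-separated path while honoring single-quoted
# segments, so field names containing spaces like 'my id' still resolve.

def split_path(path):
    """Split a /-separated path, treating single-quoted text as one segment."""
    segments, current, quoted = [], "", False
    for ch in path:
        if ch == "'":
            quoted = not quoted      # toggle quoted mode, drop the quote
        elif ch == "/" and not quoted:
            if current:
                segments.append(current)
            current = ""
        else:
            current += ch
    if current:
        segments.append(current)
    return segments

def resolve(record, path):
    """Walk nested dicts following the path segments."""
    value = record
    for segment in split_path(path):
        value = value[segment]
    return value

record = {"my id": {"childField": 42}}
print(resolve(record, "/'my id'/childField"))  # 42
```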

> CSVReader - Header schema strategy - normalize names
> 
>
> Key: NIFI-4940
> URL: https://issues.apache.org/jira/browse/NIFI-4940
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> When using the CSV Reader with the header-based schema access strategy, we 
> should add a boolean property indicating whether field names should be 
> normalized to be Avro-compatible. Otherwise it won't be possible, for 
> instance, to use the UpdateRecord processor to access a field with a "weird" 
> name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4941) Make description of "nifi.sensitive.props.additional.keys" property more explicit by referring to properties in nifi.properties

2018-03-06 Thread Andrew Lim (JIRA)
Andrew Lim created NIFI-4941:


 Summary: Make description of 
"nifi.sensitive.props.additional.keys" property more explicit by referring to 
properties in nifi.properties
 Key: NIFI-4941
 URL: https://issues.apache.org/jira/browse/NIFI-4941
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Reporter: Andrew Lim


Description in 1.5.0 is:

The comma separated list of properties to encrypt in addition to the default 
sensitive properties (see Encrypt-Config Tool).

The description should clarify which "properties" in nifi.properties can be encrypted.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-03-06 Thread Andy LoPresto (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388795#comment-16388795
 ] 

Andy LoPresto edited comment on NIFI-4942 at 3/7/18 12:57 AM:
--

The proposed mechanism for this is to store the current password in a 
cryptographically hashed format locally and allow the toolkit to accept that as 
a parameter in place of the plaintext existing password or key during 
migration. This enables the sensitive material to be stored in a format that is 
not vulnerable to reversing but still requires a user to prove knowledge 
of/access to the original password in order to perform a migration. 

Current example usage:

{{$ ./bin/encrypt-config.sh -m -w existingPassword -p thisIsTheNewPassword ...}}

The above will still be supported (along with {{-e}} and {{-k}} options for 
existing and new key (hex) respectively). However, another mode will be 
supported:

{{$ ./bin/encrypt-config.sh -m -z secure_hash(existingPassword) -p 
thisIsTheNewPassword ...}}

where the {{-z / --secure-hash}} argument specifies that the following argument 
is the secure hash of the existing password, using the algorithm:

{{secure_hash = bcrypt(version, workFactor, salt, existingPassword)}}

and the following values:

* {{version}} = {{2a}}
* {{workFactor}} = {{12}} (actually log~2~ value, so 2^12^ iterations)
* {{salt}} = {{ABCDEFGHIJKLMNOPQRSTUV}} a randomly-generated, 22 character 
Base64-encoded unpadded salt value. This decodes to 16 bytes when used in the 
derivation process
* {{existingPassword}} = the current (pre-migration) password used to derive 
the key which is currently encrypting the sensitive properties

In the event the user had been using raw keys rather than passwords (which is 
not the case for Ambari, but will be provided for completeness), a similar 
process can be used with the {{-y / --secure-hash-key}} arguments:

{{$ ./bin/encrypt-config.sh -m -y secure_hash_key(existingKeyHex) -p 
thisIsTheNewPassword ...}}

where the {{-y / --secure-hash-key}} argument specifies that the following 
argument is the secure hash of the existing key in hexadecimal encoding, using 
the algorithm:

{{secure_hash_key = bcrypt(version, workFactor, salt, 
lowercase(existingKeyHex))}}

and the following values:

* {{version}} = {{2a}}
* {{workFactor}} = {{12}} (actually log~2~ value, so 2^12^ iterations)
* {{salt}} = {{ABCDEFGHIJKLMNOPQRSTUV}} a randomly-generated, 22 character 
Base64-encoded unpadded salt value. This decodes to 16 bytes when used in the 
derivation process
* {{existingKeyHex}} = the current (pre-migration) key in hexadecimal encoding 
which is currently encrypting the sensitive properties. This value will be 
lowercased before being fed to the {{bcrypt}} function to ensure that case 
sensitivity does not matter



[jira] [Commented] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-03-06 Thread Andy LoPresto (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388795#comment-16388795
 ] 

Andy LoPresto commented on NIFI-4942:
-

The proposed mechanism for this is to store the current password in a 
cryptographically hashed format locally and allow the toolkit to accept that as 
a parameter in place of the plaintext existing password or key during 
migration. This enables the sensitive material to be stored in a format that is 
not vulnerable to reversing but still requires a user to prove knowledge 
of/access to the original password in order to perform a migration. 

Current example usage:

{{$ ./bin/encrypt-config.sh -m -w existingPassword -p thisIsTheNewPassword ...}}

The above will still be supported (along with {{-e}} and {{-k}} options for 
existing and new key (hex) respectively). However, another mode will be 
supported:

{{$ ./bin/encrypt-config.sh -m -z secure_hash(existingPassword) -p 
thisIsTheNewPassword ...}}

where the {{-z}}/{{--secure-hash}} argument specifies that the following 
argument is the secure hash of the existing password, using the algorithm:

{{secure_hash = bcrypt(version, workFactor, salt, existingPassword)}}

and the following values:

* {{version}} = {{2a}}
* {{workFactor}} = {{12}} (actually log~2~ value, so 2^12^ iterations)
* {{salt}} = {{ABCDEFGHIJKLMNOPQRSTUV}} a randomly-generated, 22 character 
Base64-encoded unpadded salt value. This decodes to 16 bytes when used in the 
derivation process
* {{existingPassword}} = the current (pre-migration) password used to derive 
the key which is currently encrypting the sensitive properties

In the event the user had been using raw keys rather than passwords (which is 
not the case for Ambari, but will be provided for completeness), a similar 
process can be used with the {{-y}}/{{--secure-hash-key}} arguments:

{{$ ./bin/encrypt-config.sh -m -y secure_hash_key(existingKeyHex) -p 
thisIsTheNewPassword ...}}

where the {{-y}}/{{--secure-hash-key}} argument specifies that the following 
argument is the secure hash of the existing key in hexadecimal encoding, using 
the algorithm:

{{secure_hash_key = bcrypt(version, workFactor, salt, 
lowercase(existingKeyHex))}}

and the following values:

* {{version}} = {{2a}}
* {{workFactor}} = {{12}} (actually log~2~ value, so 2^12^ iterations)
* {{salt}} = {{ABCDEFGHIJKLMNOPQRSTUV}} a randomly-generated, 22 character 
Base64-encoded unpadded salt value. This decodes to 16 bytes when used in the 
derivation process
* {{existingKeyHex}} = the current (pre-migration) key in hexadecimal encoding 
which is currently encrypting the sensitive properties. This value will be 
lowercased before being fed to the {{bcrypt}} function to ensure that case 
sensitivity does not matter
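
The verification flow above can be sketched conceptually. The proposal uses bcrypt (version 2a, work factor 12, a fixed 22-character salt); Python's standard library has no bcrypt, so this sketch substitutes hashlib.scrypt purely to illustrate the flow, with the cost parameter loosely echoing the 2^12 work factor. All names here are hypothetical.

```python
# Conceptual sketch: the hash of the existing password is stored locally, and
# the toolkit accepts that hash (-z) in place of the plaintext password (-w)
# during migration. scrypt stands in for bcrypt for illustration only.

import hashlib, hmac

SALT = b"ABCDEFGHIJKLMNOPQRSTUV"  # in the proposal, a random Base64 salt

def secure_hash(password: str) -> bytes:
    # Stand-in for bcrypt(2a, 12, salt, password).
    return hashlib.scrypt(password.encode(), salt=SALT, n=2**12, r=8, p=1)

# Provisioning time: store the hash, never the plaintext.
stored = secure_hash("existingPassword")

# Migration time: the caller proves knowledge of the old password by
# supplying a matching hash, without transmitting the password itself.
supplied = secure_hash("existingPassword")
print(hmac.compare_digest(stored, supplied))  # True
```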

> NiFi Toolkit - Allow migration of master key without previous password
> --
>
> Key: NIFI-4942
> URL: https://issues.apache.org/jira/browse/NIFI-4942
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.5.0
>Reporter: Yolanda M. Davis
>Assignee: Andy LoPresto
>Priority: Major
>
> Currently the encryption cli in nifi toolkit requires that, in order to 
> migrate from one master key to the next, the previous master key or password 
> should be provided. In cases where the provisioning tool doesn't have the 
> previous value available this becomes challenging to provide and may be prone 
> to error. In speaking with [~alopresto], we can allow the toolkit to support 
> a mode of execution such that the master key can be updated without requiring 
> the previous password. Also, documentation around its usage should be updated 
> to be clear in describing the purpose and the type of environment where this 
> command should be used (admin-only access etc.).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-03-06 Thread Andy LoPresto (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388795#comment-16388795
 ] 

Andy LoPresto edited comment on NIFI-4942 at 3/7/18 12:56 AM:
--

The proposed mechanism for this is to store the current password in a 
cryptographically hashed format locally and allow the toolkit to accept that as 
a parameter in place of the plaintext existing password or key during 
migration. This enables the sensitive material to be stored in a format that is 
not vulnerable to reversing but still requires a user to prove knowledge 
of/access to the original password in order to perform a migration. 

Current example usage:

{{$ ./bin/encrypt-config.sh -m -w existingPassword -p thisIsTheNewPassword ...}}

The above will still be supported (along with {{-e}} and {{-k}} options for 
existing and new key (hex) respectively). However, another mode will be 
supported:

{{$ ./bin/encrypt-config.sh -m -z secure_hash(existingPassword) -p 
thisIsTheNewPassword ...}}

where the {{-z}} / {{--secure-hash}} argument specifies that the following 
argument is the secure hash of the existing password, using the algorithm:

{{secure_hash = bcrypt(version, workFactor, salt, existingPassword)}}

and the following values:

* {{version}} = {{2a}}
* {{workFactor}} = {{12}} (actually log~2~ value, so 2^12^ iterations)
* {{salt}} = {{ABCDEFGHIJKLMNOPQRSTUV}} a randomly-generated, 22 character 
Base64-encoded unpadded salt value. This decodes to 16 bytes when used in the 
derivation process
* {{existingPassword}} = the current (pre-migration) password used to derive 
the key which is currently encrypting the sensitive properties

In the event the user had been using raw keys rather than passwords (which is 
not the case for Ambari, but will be provided for completeness), a similar 
process can be used with the {{-y}} / {{--secure-hash-key}} arguments:

{{$ ./bin/encrypt-config.sh -m -y secure_hash_key(existingKeyHex) -p 
thisIsTheNewPassword ...}}

where the {{-y}} / {{--secure-hash-key}} argument specifies that the following 
argument is the secure hash of the existing key in hexadecimal encoding, using 
the algorithm:

{{secure_hash_key = bcrypt(version, workFactor, salt, 
lowercase(existingKeyHex))}}

and the following values:

* {{version}} = {{2a}}
* {{workFactor}} = {{12}} (actually log~2~ value, so 2^12^ iterations)
* {{salt}} = {{ABCDEFGHIJKLMNOPQRSTUV}} a randomly-generated, 22 character 
Base64-encoded unpadded salt value. This decodes to 16 bytes when used in the 
derivation process
* {{existingKeyHex}} = the current (pre-migration) key in hexadecimal encoding 
which is currently encrypting the sensitive properties. This value will be 
lowercased before being fed to the {{bcrypt}} function to ensure that case 
sensitivity does not matter


was (Author: alopresto):
The proposed mechanism for this is to store the current password in a 
cryptographically hashed format locally and allow the toolkit to accept that as 
a parameter in place of the plaintext existing password or key during 
migration. This enables the sensitive material to be stored in a format that is 
not vulnerable to reversing but still requires a user to prove knowledge 
of/access to the original password in order to perform a migration. 

Current example usage:

{{$ ./bin/encrypt-config.sh -m -w existingPassword -p thisIsTheNewPassword ...}}

The above will still be supported (along with {{-e}} and {{-k}} options for 
existing and new key (hex) respectively). However, another mode will be 
supported:

{{$ ./bin/encrypt-config.sh -m -z secure_hash(existingPassword) -p 
thisIsTheNewPassword ...}}

where the {{-z}}/{{--secure-hash}} argument specifies that the following 
argument is the secure hash of the existing password, using the algorithm:

{{secure_hash = bcrypt(version, workFactor, salt, existingPassword)}}

and the following values:

* {{version}} = {{2a}}
* {{workFactor}} = {{12}} (actually log~2~ value, so 2^12^ iterations)
* {{salt}} = {{ABCDEFGHIJKLMNOPQRSTUV}} a randomly-generated, 22 character 
Base64-encoded unpadded salt value. This decodes to 16 bytes when used in the 
derivation process
* {{existingPassword}} = the current (pre-migration) password used to derive 
the key which is currently encrypting the sensitive properties

In the event the user had been using raw keys rather than passwords (which is 
not the case for Ambari, but will be provided for completeness), a similar 
process can be used with the {{-y}}/{{--secure-hash-key}} arguments:

{{$ ./bin/encrypt-config.sh -m -y secure_hash_key(existingKeyHex) -p 
thisIsTheNewPassword ...}}

where the {{-y}}/{{--secure-hash-key}} argument specifies that the following 
argument is the secure hash of the existing key in hexadecimal encoding, using 
the algorithm:

{{secure_hash_key = bcrypt(version, workFactor, salt, 
lowercase(existingKeyHex))}}

and the following 

[jira] [Assigned] (NIFI-4941) Make description of "nifi.sensitive.props.additional.keys" property more explicit by referring to properties in nifi.properties

2018-03-06 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim reassigned NIFI-4941:


Assignee: Andrew Lim

> Make description of "nifi.sensitive.props.additional.keys" property more 
> explicit by referring to properties in nifi.properties
> ---
>
> Key: NIFI-4941
> URL: https://issues.apache.org/jira/browse/NIFI-4941
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Trivial
>
> Description in 1.5.0 is:
> The comma separated list of properties to encrypt in addition to the default 
> sensitive properties (see Encrypt-Config Tool).
> Should clarify which "properties" can be encrypted.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFIREG-150) Maintenance mode switch via REST API for data backup

2018-03-06 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-150:

Description: 
Currently, NiFi Registry does not offer High Availability (HA) out of the box. 
One has to configure an environment around one or more NiFi Registry instances 
to achieve the required level of recoverability and availability.

This is not a requirement in many deployment scenarios, as NiFi Registry is not 
on the critical path of most system architectures. That is, it is a place to save 
and retrieve versions of flows and extensions, but if NiFi Registry is 
temporarily offline, NiFi data flows deployed to NiFi and MiNiFi instances 
continue to function just fine.

However, a bigger concern is data availability and backup; that is, the 
guarantee that data persisted to NiFi Registry is not lost due to an instance 
failure. Eventually, it will be nice to offer a NiFi Registry HA solution that 
allows for replicated data or external persistence providers (that themselves 
can be HA).

In the meantime, folks are looking for the best way to build their own data 
backup and recovery solutions for NiFi Registry. A lot of possible solutions 
and recommendations for backup and recovery or [cold-slave 
failover|http://www.sonatype.org/nexus/2015/07/10/high-availability-ha-and-continuous-integration-ci-with-nexus-oss/]
 require copying the data in the NiFi Registry's home directory host storage to 
another location, where it could be used to create another NiFi Registry with 
the same data on demand, e.g., in a cloud migration or disaster recovery 
scenario.

If the NiFi Registry service is running when this copy operation is performed, 
one risks copying partially-written data/records/files that could be corrupted 
when later loaded/read from disk. One solution for this today is to stop the 
NiFi Registry, but this leaves it unavailable for users and scripts, which is 
not ideal. For example, continuous deployment scripts for NiFi data flows that 
read flows from NiFi Registry would not be able to access a required service.

In the long term, it would be nice to offer a proper HA NiFi Registry solution 
out of the box. However, in the short term, to avoid having to shut down 
NiFi Registry to initiate a backup, it would be nice for 
admins to be able to put a NiFi Registry instance into "read only maintenance 
mode", during which the contents of the NiFi Registry home directory could be 
more safely copied to a backup location or cold spare. (I say "more safely" 
because some files in the home directory, such as the default location for 
logs, would continue to be written to, but the most important files, such as 
those used by the file-based database and persistence providers, would 
stabilize after existing write operations are flushed to disk.)

Implementation thoughts:
 - endpoints for turning maintenance mode on/off would fit in nicely as custom 
endpoints under Actuator (NIFIREG-134), and therefore could be access 
controlled by Actuator authorization rules
 - when maintenance mode is enabled, a custom Spring filter could intercept any 
requests that modify persisted state (e.g., by resource path and HTTP method 
pattern matching) and return a "503 Service Unavailable" status code indicating 
that the resource is temporarily unavailable. A Spring filter checking HTTP 
methods against resources is an approach already used to authorize access to 
certain resources, so there might be an opportunity for code reuse there (the 
maintenance mode filter would need to be dynamically, programmatically 
enabled/disabled, and instead of returning a 403, we would return a 503)
 - when maintenance mode is enabled, the /actuator/health endpoint could also 
indicate this, giving clients a way to check if a server is in maintenance mode 
or not.
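The request-gating logic in the implementation thoughts above can be sketched as follows. This is a hypothetical, language-agnostic illustration written as Python WSGI middleware; the real implementation would be a Spring servlet filter in Java, and the class and method names here are invented for the sketch.

```python
# Sketch of the proposed maintenance-mode gate: write requests get a 503
# while the flag is on; reads (and the health endpoint) pass through.

WRITE_METHODS = {"POST", "PUT", "DELETE", "PATCH"}


class MaintenanceModeFilter:
    def __init__(self, app):
        self.app = app
        self.enabled = False  # would be toggled at runtime via an admin endpoint

    def __call__(self, environ, start_response):
        method = environ.get("REQUEST_METHOD", "GET").upper()
        if self.enabled and method in WRITE_METHODS:
            # 503, not 403: the resource is temporarily unavailable,
            # not forbidden for this particular caller.
            start_response("503 Service Unavailable", [("Retry-After", "300")])
            return [b"Registry is in read-only maintenance mode"]
        # Reads always fall through to the wrapped application.
        return self.app(environ, start_response)
```

With the flag off, all traffic flows normally; with it on, only state-changing methods are rejected, so backup scripts and flow-deployment reads keep working while the on-disk files stabilize.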

  was:
Currently, NiFi Registry does not offer High Availability (HA) out of the box. 
One has to configure an environment around one or more NiFi Registry instances 
to achieve the required level of recoverability and availability.

This is not a requirement in many deployment scenarios, as NiFi Registry is not 
on the critical path of most system architectures. That is, it is a place to save 
and retrieve versions of flows and extensions, but if NiFi Registry is 
temporarily offline, NiFi data flows deployed to NiFi and MiNiFi instances 
continue to function just fine.

However, a bigger concern is data availability and backup; that is, the 
guarantee that data persisted to NiFi Registry is not lost due to an instance 
failure. Eventually, it will be nice to offer a NiFi Registry HA solution that 
allows for distributed/clustered data or external persistence providers (that 
themselves can be HA).

In the meantime, folks are looking for the best way to build their own data 
backup and recovery solutions for NiFi Registry. A lot of possible solutions 
and recommendations 

[jira] [Commented] (NIFI-4516) Add FetchSolr processor

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388623#comment-16388623
 ] 

ASF GitHub Bot commented on NIFI-4516:
--

Github user JohannesDaniel closed the pull request at:

https://github.com/apache/nifi/pull/2516


> Add FetchSolr processor
> ---
>
> Key: NIFI-4516
> URL: https://issues.apache.org/jira/browse/NIFI-4516
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Johannes Peter
>Assignee: Johannes Peter
>Priority: Major
>  Labels: features
>
> The processor shall be capable 
> * to query Solr within a workflow,
> * to make use of standard functionalities of Solr such as faceting, 
> highlighting, result grouping, etc.,
> * to make use of NiFi's expression language to build Solr queries, 
> * to handle results as records.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2516: NIFI-4516 FetchSolr Processor

2018-03-06 Thread JohannesDaniel
Github user JohannesDaniel closed the pull request at:

https://github.com/apache/nifi/pull/2516


---


[jira] [Commented] (NIFI-4516) Add FetchSolr processor

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388652#comment-16388652
 ] 

ASF GitHub Bot commented on NIFI-4516:
--

GitHub user JohannesDaniel opened a pull request:

https://github.com/apache/nifi/pull/2517

NIFI-4516 FetchSolr Processor

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/JohannesDaniel/nifi NIFI-4516

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2517.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2517


commit 07434cca4907932d55a2d7abe25710ba75f03a4c
Author: JohannesDaniel 
Date:   2018-03-06T22:43:49Z

FetchSolr Processor finalized




> Add FetchSolr processor
> ---
>
> Key: NIFI-4516
> URL: https://issues.apache.org/jira/browse/NIFI-4516
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Johannes Peter
>Assignee: Johannes Peter
>Priority: Major
>  Labels: features
>
> The processor shall be capable 
> * to query Solr within a workflow,
> * to make use of standard functionalities of Solr such as faceting, 
> highlighting, result grouping, etc.,
> * to make use of NiFi's expression language to build Solr queries, 
> * to handle results as records.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2517: NIFI-4516 FetchSolr Processor

2018-03-06 Thread JohannesDaniel
GitHub user JohannesDaniel opened a pull request:

https://github.com/apache/nifi/pull/2517

NIFI-4516 FetchSolr Processor

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/JohannesDaniel/nifi NIFI-4516

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2517.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2517


commit 07434cca4907932d55a2d7abe25710ba75f03a4c
Author: JohannesDaniel 
Date:   2018-03-06T22:43:49Z

FetchSolr Processor finalized




---


[jira] [Updated] (NIFIREG-150) Maintenance mode switch via REST API for data backup

2018-03-06 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-150:

Description: 
Currently, NiFi Registry does not offer High Availability (HA) out of the box. 
One has to configure an environment around one or more NiFi Registry instances 
to achieve the required level of recoverability and availability.

This is not a requirement in many deployment scenarios, as NiFi Registry is not 
on the critical path of most system architectures. That is, it is a place to save 
and retrieve versions of flows and extensions, but if NiFi Registry is 
temporarily offline, NiFi data flows deployed to NiFi and MiNiFi instances 
continue to function just fine.

However, a bigger concern is data availability and backup; that is, the 
guarantee that data persisted to NiFi Registry is not lost due to an instance 
failure. Eventually, it will be nice to offer a NiFi Registry HA solution that 
allows for distributed/clustered data or external persistence providers (that 
themselves can be HA).

In the meantime, folks are looking for the best way to build their own data 
backup and recovery solutions for NiFi Registry. A lot of possible solutions 
and recommendations for backup and recovery or [cold-slave 
failover|http://www.sonatype.org/nexus/2015/07/10/high-availability-ha-and-continuous-integration-ci-with-nexus-oss/]
 require copying the data in the NiFi Registry's home directory host storage to 
another location, where it could be used to create another NiFi Registry with 
the same data on demand, e.g., in a cloud migration or disaster recovery 
scenario.

If the NiFi Registry service is running when this copy operation is performed, 
one risks copying partially-written data/records/files that could be corrupted 
when later loaded/read from disk. One solution for this today is to stop the 
NiFi Registry, but this leaves it unavailable for users and scripts, which is 
not ideal. For example, continuous deployment scripts for NiFi data flows that 
read flows from NiFi Registry would not be able to access a required service.

In the long term, it would be nice to offer a proper HA NiFi Registry solution 
out of the box. However, in the short term, it would be nice for users to be 
able to put a NiFi Registry instance into "read only maintenance mode", during 
which the contents of the NiFi Registry home directory could be more safely 
copied to a backup location or cold spare. (I say "more safely" because some 
files in the home directory, such as the default location for logs, would 
continue to be written to, but the most important files, such as those used by 
the file-based database and persistence providers, would stabilize after 
existing write operations are flushed to disk.)

Implementation thoughts:
 - endpoints for turning maintenance mode on/off would fit in nicely as custom 
endpoints under Actuator (NIFIREG-134), and therefore could be access 
controlled by Actuator authorization rules
 - when maintenance mode is enabled, a custom Spring filter could intercept any 
requests that modify persisted state (e.g., by resource path and HTTP method 
pattern matching) and return a "503 Service Unavailable" status code indicating 
that the resource is temporarily unavailable. A similar approach is used to 
authorize access to certain endpoints.
 - when maintenance mode is enabled, the /actuator/health endpoint could also 
indicate this, giving clients a way to check if a server is in maintenance mode 
or not.

  was:
Currently, NiFi Registry does not offer High Availability (HA) out of the box. 
One has to configure an environment around one or more NiFi Registry instances 
to achieve the required level of recoverability and availability.

This is not a requirement in many deployment scenarios, as NiFi Registry is not 
on the critical path of most system architectures. That is, it is a place to save 
and retrieve versions of flows and extensions, but if NiFi Registry is 
temporarily offline, NiFi data flows deployed to NiFi and MiNiFi instances 
continue to function just fine. 

However, a bigger concern is data availability and backup; that is, the 
guarantee that data persisted to NiFi Registry is not lost due to an instance 
failure. Eventually, it will be nice to offer a NiFi Registry HA solution that 
allows for distributed/clustered data or external persistence providers (that 
themselves can be HA).

In the meantime, folks are looking for the best way to build their own data 
backup and recovery solutions for NiFi Registry. A lot of possible solutions 
and recommendations for backup and recovery or [cold-slave 
failover|http://www.sonatype.org/nexus/2015/07/10/high-availability-ha-and-continuous-integration-ci-with-nexus-oss/]
 require copying the data in the NiFi Registry's home directory host storage to 
another location, where it could be used to create another NiFi Registry with 
the same data on 

[jira] [Created] (NIFIREG-150) Maintenance mode switch via REST API for data backup

2018-03-06 Thread Kevin Doran (JIRA)
Kevin Doran created NIFIREG-150:
---

 Summary: Maintenance mode switch via REST API for data backup
 Key: NIFIREG-150
 URL: https://issues.apache.org/jira/browse/NIFIREG-150
 Project: NiFi Registry
  Issue Type: New Feature
Reporter: Kevin Doran


Currently, NiFi Registry does not offer High Availability (HA) out of the box. 
One has to configure an environment around one or more NiFi Registry instances 
to achieve the required level of recoverability and availability.

This is not a requirement in many deployment scenarios, as NiFi Registry is not 
on the critical path of most system architectures. That is, it is a place to save 
and retrieve versions of flows and extensions, but if NiFi Registry is 
temporarily offline, NiFi data flows deployed to NiFi and MiNiFi instances 
continue to function just fine. 

However, a bigger concern is data availability and backup; that is, the 
guarantee that data persisted to NiFi Registry is not lost due to an instance 
failure. Eventually, it will be nice to offer a NiFi Registry HA solution that 
allows for distributed/clustered data or external persistence providers (that 
themselves can be HA).

In the meantime, folks are looking for the best way to build their own data 
backup and recovery solutions for NiFi Registry. A lot of possible solutions 
and recommendations for backup and recovery or [cold-slave 
failover|http://www.sonatype.org/nexus/2015/07/10/high-availability-ha-and-continuous-integration-ci-with-nexus-oss/]
 require copying the data in the NiFi Registry's home directory host storage to 
another location, where it could be used to create another NiFi Registry with 
the same data on demand, e.g., in a cloud migration or disaster recovery 
scenario.

If the NiFi Registry service is running when this copy operation is performed, 
one risks copying partially-written data/records/files that could be corrupted 
when later loaded/read from disk. One solution for this today is to stop the 
NiFi Registry, but this leaves it unavailable for users and scripts, which is 
not ideal. For example, continuous deployment scripts for NiFi data flows that 
read flows from NiFi Registry would not be able to access a required service.

In the long term, it would be nice to offer a proper HA NiFi Registry solution 
out of the box. However, in the short term, it would be nice for users to be 
able to put a NiFi Registry instance into "read only maintenance mode", during 
which the contents of the NiFi Registry home directory could be more safely 
copied to a backup location or cold spare. (I say "more safely" because some 
files in the home directory, such as the default location for logs, would 
continue to be written to, but the most important files, such as those used by the 
file-based database and persistence providers, would stabilize after existing 
write operations are flushed to disk).

Implementation thoughts:
 - endpoints for turning maintenance mode on/off would fit in nicely as custom 
endpoints under Actuator (NIFIREG-134), and therefore could be access 
controlled by Actuator authorization rules
 - when maintenance mode is enabled, a custom Spring filter could intercept any 
requests that modify persisted state (e.g., by resource path and HTTP method 
pattern matching) and return a "503 Service Unavailable" status code indicating 
that the resource is temporarily unavailable. A similar approach is used to 
authorize access to certain endpoints.
 - when maintenance mode is enabled, the /actuator/health endpoint could also 
indicate this, giving clients a way to check if a server is in maintenance mode 
or not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFIREG-150) Maintenance mode switch via REST API for data backup

2018-03-06 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-150:

Description: 
Currently, NiFi Registry does not offer High Availability (HA) out of the box. 
One has to configure an environment around one or more NiFi Registry instances 
to achieve the required level of recoverability and availability.

This is not a requirement in many deployment scenarios, as NiFi Registry is not 
on the critical path of most system architectures. That is, it is a place to save 
and retrieve versions of flows and extensions, but if NiFi Registry is 
temporarily offline, NiFi data flows deployed to NiFi and MiNiFi instances 
continue to function just fine.

However, a bigger concern is data availability and backup; that is, the 
guarantee that data persisted to NiFi Registry is not lost due to an instance 
failure. Eventually, it will be nice to offer a NiFi Registry HA solution that 
allows for distributed/clustered data or external persistence providers (that 
themselves can be HA).

In the meantime, folks are looking for the best way to build their own data 
backup and recovery solutions for NiFi Registry. A lot of possible solutions 
and recommendations for backup and recovery or [cold-slave 
failover|http://www.sonatype.org/nexus/2015/07/10/high-availability-ha-and-continuous-integration-ci-with-nexus-oss/]
 require copying the data in the NiFi Registry's home directory host storage to 
another location, where it could be used to create another NiFi Registry with 
the same data on demand, e.g., in a cloud migration or disaster recovery 
scenario.

If the NiFi Registry service is running when this copy operation is performed, 
one risks copying partially-written data/records/files that could be corrupted 
when later loaded/read from disk. One solution for this today is to stop the 
NiFi Registry, but this leaves it unavailable for users and scripts, which is 
not ideal. For example, continuous deployment scripts for NiFi data flows that 
read flows from NiFi Registry would not be able to access a required service.

In the long term, it would be nice to offer a proper HA NiFi Registry solution 
out of the box. However, in the short term, to avoid having to shut down 
NiFi Registry to initiate a backup, it would be nice for 
admins to be able to put a NiFi Registry instance into "read only maintenance 
mode", during which the contents of the NiFi Registry home directory could be 
more safely copied to a backup location or cold spare. (I say "more safely" 
because some files in the home directory, such as the default location for 
logs, would continue to be written to, but the most important files, such as 
those used by the file-based database and persistence providers, would 
stabilize after existing write operations are flushed to disk.)

Implementation thoughts:
 - endpoints for turning maintenance mode on/off would fit in nicely as custom 
endpoints under Actuator (NIFIREG-134), and therefore could be access 
controlled by Actuator authorization rules
 - when maintenance mode is enabled, a custom Spring filter could intercept any 
requests that modify persisted state (e.g., by resource path and HTTP method 
pattern matching) and return a "503 Service Unavailable" status code indicating 
that the resource is temporarily unavailable. A Spring filter checking HTTP 
methods against resources is an approach already used to authorize access to 
certain resources, so there might be an opportunity for code reuse there (the 
maintenance mode filter would need to be dynamically, programmatically 
enabled/disabled, and instead of returning a 403, we would return a 503)
 - when maintenance mode is enabled, the /actuator/health endpoint could also 
indicate this, giving clients a way to check if a server is in maintenance mode 
or not.

  was:
Currently, NiFi Registry does not offer High Availability (HA) out of the box. 
One has to configure an environment around one or more NiFi Registry instances 
to achieve the required level of recoverability and availability.

This is not a requirement in many deployment scenarios, as NiFi Registry is not 
on the critical path of most system architectures. That is, it is a place to save 
and retrieve versions of flows and extensions, but if NiFi Registry is 
temporarily offline, NiFi data flows deployed to NiFi and MiNiFi instances 
continue to function just fine.

However, a bigger concern is data availability and backup; that is, the 
guarantee that data persisted to NiFi Registry is not lost due to an instance 
failure. Eventually, it will be nice to offer a NiFi Registry HA solution that 
allows for distributed/clustered data or external persistence providers (that 
themselves can be HA).

In the meantime, folks are looking for the best way to build their own data 
backup and recovery solutions for NiFi Registry. A lot of possible solutions 
and 

[GitHub] nifi pull request #2513: NIFI-4940 - CSVReader - Header schema strategy - no...

2018-03-06 Thread pvillard31
Github user pvillard31 closed the pull request at:

https://github.com/apache/nifi/pull/2513


---


[GitHub] nifi issue #2513: NIFI-4940 - CSVReader - Header schema strategy - normalize...

2018-03-06 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2513
  
Closing as per comment from Mark in the JIRA. What I was trying to achieve 
can already be done and works nicely :) Thanks anyway for having a look 
@zenfenan !


---


[jira] [Commented] (NIFI-4940) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388511#comment-16388511
 ] 

ASF GitHub Bot commented on NIFI-4940:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2513
  
Closing as per comment from Mark in the JIRA. What I was trying to achieve 
can already be done and works nicely :) Thanks anyway for having a look 
@zenfenan !


> CSVReader - Header schema strategy - normalize names
> 
>
> Key: NIFI-4940
> URL: https://issues.apache.org/jira/browse/NIFI-4940
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> When using the CSV Reader with the header-based Schema Access Strategy, we 
> should add a boolean property controlling whether field names should be 
> normalized to be Avro-compatible. Otherwise it won't be possible, for instance, 
> to use the UpdateRecord processor and access a field with a "weird" name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4940) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388512#comment-16388512
 ] 

ASF GitHub Bot commented on NIFI-4940:
--

Github user pvillard31 closed the pull request at:

https://github.com/apache/nifi/pull/2513


> CSVReader - Header schema strategy - normalize names
> 
>
> Key: NIFI-4940
> URL: https://issues.apache.org/jira/browse/NIFI-4940
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> When using the CSV Reader with the header-based Schema Access Strategy, we 
> should add a boolean property controlling whether field names should be 
> normalized to be Avro-compatible. Otherwise it won't be possible, for instance, 
> to use the UpdateRecord processor and access a field with a "weird" name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-03-06 Thread Yolanda M. Davis (JIRA)
Yolanda M. Davis created NIFI-4942:
--

 Summary: NiFi Toolkit - Allow migration of master key without 
previous password
 Key: NIFI-4942
 URL: https://issues.apache.org/jira/browse/NIFI-4942
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Tools and Build
Affects Versions: 1.5.0
Reporter: Yolanda M. Davis
Assignee: Andy LoPresto


Currently the encryption CLI in the NiFi Toolkit requires that, in order to 
migrate from one master key to the next, the previous master key or password be 
provided. In cases where the provisioning tool doesn't have the previous value 
available, this becomes challenging to provide and may be prone to error. In 
speaking with [~alopresto], we can allow the toolkit to support a mode of 
execution such that the master key can be updated without requiring the previous 
password. Also, documentation around its usage should be updated to clearly 
describe the purpose and the type of environment where this command should be 
used (admin-only access, etc.).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2503: NIFI-4855: The layout of NiFi API document is broken

2018-03-06 Thread tasanuma
Github user tasanuma commented on the issue:

https://github.com/apache/nifi/pull/2503
  
Thanks for reviewing and committing it, @mcgilman!


---


[jira] [Commented] (NIFI-4855) The layout of NiFi API document is broken

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388853#comment-16388853
 ] 

ASF GitHub Bot commented on NIFI-4855:
--

Github user tasanuma commented on the issue:

https://github.com/apache/nifi/pull/2503
  
Thanks for reviewing and committing it, @mcgilman!


> The layout of NiFi API document is broken
> -
>
> Key: NIFI-4855
> URL: https://issues.apache.org/jira/browse/NIFI-4855
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.5.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: broken_ui.png, fixed_ui.001.png
>
>
> This is reported by Hiroaki Nawa.
> Looks like the _Controller_ section includes the _Versions_ section mistakenly, and 
> the layout is broken.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-03-06 Thread Andy LoPresto (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388805#comment-16388805
 ] 

Andy LoPresto commented on NIFI-4942:
-

We should also note that Ambari is expected to eventually change the API to 
support querying this information in a secure manner, so this functionality is 
not necessarily going to be supported long term (the potential refactoring of 
the NiFi Toolkit out of the NiFi project may allow an earlier major release to 
remove this behavior). 

> NiFi Toolkit - Allow migration of master key without previous password
> --
>
> Key: NIFI-4942
> URL: https://issues.apache.org/jira/browse/NIFI-4942
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.5.0
>Reporter: Yolanda M. Davis
>Assignee: Andy LoPresto
>Priority: Major
>
> Currently the encryption CLI in the NiFi Toolkit requires that, in order to 
> migrate from one master key to the next, the previous master key or password 
> be provided. In cases where the provisioning tool doesn't have the previous 
> value available, this becomes challenging to provide and may be prone to 
> error. In speaking with [~alopresto], we can allow the toolkit to support a 
> mode of execution such that the master key can be updated without requiring 
> the previous password. Also, documentation around its usage should be updated 
> to clearly describe the purpose and the type of environment where this 
> command should be used (admin-only access, etc.).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFIREG-150) Maintenance mode switch via REST API for data backup

2018-03-06 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-150:

Description: 
Currently, NiFi Registry does not offer High Availability (HA) out of the box. 
One has to configure an environment around one or more NiFi Registry instances 
to achieve the required level of recoverability and availability.

This is not a requirement in many deployment scenarios, as NiFi Registry is not 
on the critical path of most system architectures. That is, it is a place to save 
and retrieve versions of flows and extensions, but if NiFi Registry is 
temporarily offline, NiFi data flows deployed to NiFi and MiNiFi instances 
continue to function just fine.

However, a bigger concern is data availability and backup; that is, the 
guarantee that data persisted to NiFi Registry is not lost due to an instance 
failure. Eventually, it will be nice to offer a NiFi Registry HA solution that 
allows for distributed/clustered data or external persistence providers (that 
themselves can be HA).

In the meantime, folks are looking for the best way to build their own data 
backup and recovery solutions for NiFi Registry. A lot of possible solutions 
and recommendations for backup and recovery or [cold-slave 
failover|http://www.sonatype.org/nexus/2015/07/10/high-availability-ha-and-continuous-integration-ci-with-nexus-oss/]
 require copying the data in the NiFi Registry's home directory host storage to 
another location, where it could be used to create another NiFi Registry with 
the same data on demand, e.g., in a cloud migration or disaster recovery 
scenario.

If the NiFi Registry service is running when this copy operation is performed, 
one risks copying partially-written data/records/files that could be corrupted 
when later loaded/read from disk. One solution for this today is to stop the 
NiFi Registry, but this leaves it unavailable for users and scripts, which is 
not ideal. For example, continuous deployment scripts for NiFi data flows that 
read flows from NiFi Registry would not be able to access a required service.

In the long-term, it would be nice to offer proper HA NiFi Registry solution 
out of the box. However, in the short-term, it would be nice for users to be 
able to put a NiFi Registry instance into "read only maintenance mode", during 
which the contents of the NiFi Registry home directory could be more safely 
copied to a backup location or cold spare. (I say "more safely" because some 
files in the home directory, such as the default location for logs, would 
continue to be written to, but the most important files, such as those used by 
the file-based database and persistence providers, would stabilize after 
existing write operations are flushed to disk.)

Implementation thoughts:
 - endpoints for turning maintenance mode on/off would fit in nicely as custom 
endpoints under Actuator (NIFIREG-134), and therefore could be access 
controlled by Actuator authorization rules
 - when maintenance mode is enabled, a custom Spring filter could intercept any 
requests that modify persisted state (e.g., by resource path and HTTP method 
pattern matching) and return a "503 Service Unavailable" status code indicating 
that the resource is temporarily unavailable. A Spring filter checking HTTP 
methods against resources is an approach already used to authorize access to 
certain resources, so there might be an opportunity for code reuse there (the 
maintenance mode filter would need to be dynamically, programmatically 
enabled/disabled, and instead of returning a 403, we would return a 503)
 - when maintenance mode is enabled, the /actuator/health endpoint could also 
indicate this, giving clients a way to check if a server is in maintenance mode 
or not.
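The filter behavior sketched in the implementation thoughts above boils down to a small piece of gating logic. Below is a minimal plain-Java sketch of that logic only; the class and method names are hypothetical, there is no Spring dependency, and a real implementation would be a servlet/Spring filter wired into the request chain:

```java
import java.util.Set;

// Hypothetical sketch of the maintenance-mode gating logic only -- this is
// NOT the actual NiFi Registry filter.
public class MaintenanceModeGate {

    // HTTP methods that modify persisted state and should be rejected
    // while maintenance mode is enabled.
    private static final Set<String> WRITE_METHODS =
            Set.of("POST", "PUT", "PATCH", "DELETE");

    private volatile boolean maintenanceMode = false;

    // Toggled dynamically, e.g. from a custom Actuator endpoint.
    public void setMaintenanceMode(boolean enabled) {
        this.maintenanceMode = enabled;
    }

    // Returns the HTTP status to short-circuit with, or -1 to let the
    // request proceed. 503 (not 403) signals "temporarily unavailable"
    // rather than "forbidden".
    public int intercept(String httpMethod) {
        if (maintenanceMode && WRITE_METHODS.contains(httpMethod.toUpperCase())) {
            return 503;
        }
        return -1;
    }

    public static void main(String[] args) {
        MaintenanceModeGate gate = new MaintenanceModeGate();
        System.out.println(gate.intercept("POST"));  // -1: mode off, writes pass
        gate.setMaintenanceMode(true);
        System.out.println(gate.intercept("POST"));  // 503: writes rejected
        System.out.println(gate.intercept("GET"));   // -1: reads still allowed
    }
}
```

Note the read methods (GET, HEAD, OPTIONS) pass through untouched, which is what makes the instance safely copyable while still serving deployment scripts.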

  was:
Currently, NiFi Registry does not offer High Availability (HA) out of the box. 
One has to configure an environment around one or more NiFi Registry instances 
to achieve the required level of recoverability and availability.

This is not a requirement in many deployment scenarios, as NiFi Registry is not 
on the critical path of most system architectures. That is, it is a place to save 
and retrieve versions of flows and extensions, but if NiFi Registry is 
temporarily offline, NiFi data flows deployed to NiFi and MiNiFi instances 
continue to function just fine.

However, a bigger concern is data availability and backup; that is, the 
guarantee that data persisted to NiFi Registry is not lost due to an instance 
failure. Eventually, it will be nice to offer a NiFi Registry HA solution that 
allows for distributed/clustered data or external persistence providers (that 
themselves can be HA).

In the meantime, folks are looking for the best way to build their own data 
backup and recovery solutions for NiFi Registry. A lot of possible solutions 
and recommendations for backup and recovery or [cold-slave 

[jira] [Commented] (NIFI-4516) Add FetchSolr processor

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388656#comment-16388656
 ] 

ASF GitHub Bot commented on NIFI-4516:
--

Github user JohannesDaniel commented on the issue:

https://github.com/apache/nifi/pull/2517
  
@ijokarumawak 
To be aligned with the GetSolr processor, I implemented two options for the data 
format of Solr results: Solr XML and record functions. However, facets and 
stats are written to flowfiles in JSON (which has the same structure as the 
Solr JSON). I did not implement record management for these two components to 
keep the complexity of the processor at a reasonable level. I chose JSON as it 
is probably the best-integrated format in NiFi.


> Add FetchSolr processor
> ---
>
> Key: NIFI-4516
> URL: https://issues.apache.org/jira/browse/NIFI-4516
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Johannes Peter
>Assignee: Johannes Peter
>Priority: Major
>  Labels: features
>
> The processor shall be capable 
> * to query Solr within a workflow,
> * to make use of standard functionalities of Solr such as faceting, 
> highlighting, result grouping, etc.,
> * to make use of NiFis expression language to build Solr queries, 
> * to handle results as records.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2517: NIFI-4516 FetchSolr Processor

2018-03-06 Thread JohannesDaniel
Github user JohannesDaniel commented on the issue:

https://github.com/apache/nifi/pull/2517
  
@ijokarumawak 
To be aligned with the GetSolr processor, I implemented two options for the data 
format of Solr results: Solr XML and record functions. However, facets and 
stats are written to flowfiles in JSON (which has the same structure as the 
Solr JSON). I did not implement record management for these two components to 
keep the complexity of the processor at a reasonable level. I chose JSON as it 
is probably the best-integrated format in NiFi.


---


[jira] [Commented] (NIFI-4938) Offline buffering is not supported by paho client version 1.0.2 used in mqtt processors.

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389075#comment-16389075
 ] 

ASF GitHub Bot commented on NIFI-4938:
--

Github user Himanshu-it commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2514#discussion_r172748974
  
--- Diff: 
nifi-nar-bundles/nifi-mqtt-bundle/nifi-mqtt-processors/src/test/java/org/apache/nifi/processors/mqtt/common/MqttTestClient.java
 ---
@@ -17,16 +17,7 @@
 
 package org.apache.nifi.processors.mqtt.common;
 
-import org.eclipse.paho.client.mqttv3.IMqttClient;
-import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
-import org.eclipse.paho.client.mqttv3.IMqttToken;
-import org.eclipse.paho.client.mqttv3.MqttCallback;
-import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
-import org.eclipse.paho.client.mqttv3.MqttException;
-import org.eclipse.paho.client.mqttv3.MqttMessage;
-import org.eclipse.paho.client.mqttv3.MqttPersistenceException;
-import org.eclipse.paho.client.mqttv3.MqttSecurityException;
-import org.eclipse.paho.client.mqttv3.MqttTopic;
+import org.eclipse.paho.client.mqttv3.*;
--- End diff --

sure thanks.


> Offline buffering is not supported by paho client version 1.0.2 used in mqtt 
> processors.
> 
>
> Key: NIFI-4938
> URL: https://issues.apache.org/jira/browse/NIFI-4938
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Himanshu Mishra
>Priority: Major
>
> Offline buffering is not supported by Paho client version 1.0.2, used in the 
> MQTT processors, which causes the problem of "message loss" with QoS 1 and 
> QoS 2 MQTT messages when the client connectivity is lost.
> Version 1.2.0 of the Paho client supports offline buffering, thus addressing 
> this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2514: NIFI-4938 Upgraded org.eclipse.paho.client.mqttv3 d...

2018-03-06 Thread Himanshu-it
Github user Himanshu-it commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2514#discussion_r172748974
  
--- Diff: 
nifi-nar-bundles/nifi-mqtt-bundle/nifi-mqtt-processors/src/test/java/org/apache/nifi/processors/mqtt/common/MqttTestClient.java
 ---
@@ -17,16 +17,7 @@
 
 package org.apache.nifi.processors.mqtt.common;
 
-import org.eclipse.paho.client.mqttv3.IMqttClient;
-import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
-import org.eclipse.paho.client.mqttv3.IMqttToken;
-import org.eclipse.paho.client.mqttv3.MqttCallback;
-import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
-import org.eclipse.paho.client.mqttv3.MqttException;
-import org.eclipse.paho.client.mqttv3.MqttMessage;
-import org.eclipse.paho.client.mqttv3.MqttPersistenceException;
-import org.eclipse.paho.client.mqttv3.MqttSecurityException;
-import org.eclipse.paho.client.mqttv3.MqttTopic;
+import org.eclipse.paho.client.mqttv3.*;
--- End diff --

sure thanks.


---


[GitHub] nifi pull request #2513: NIFI-4940 - CSVReader - Header schema strategy - no...

2018-03-06 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2513#discussion_r172590173
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/csv/CSVHeaderSchemaStrategy.java
 ---
@@ -79,4 +81,12 @@ public RecordSchema getSchema(Map<String, String> 
variables, final InputStream c
 public Set<SchemaField> getSuppliedSchemaFields() {
 return schemaFields;
 }
+
+private String normalizeNameForAvro(String inputName) {
+String normalizedName = inputName.replaceAll("[^A-Za-z0-9_]", "_");
--- End diff --

How about this `[^A-Za-z0-9_]+`? So rather than adding N underscores for 
continuous non-compatible characters, it adds a single `_`. 
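For illustration, the difference between the two patterns can be seen with plain `String.replaceAll`; the class and method names below are hypothetical, not part of the PR:

```java
// Hypothetical helper contrasting the two regexes discussed above.
public class AvroNameNormalizer {

    // Original pattern: every incompatible character becomes its own '_'.
    static String perChar(String name) {
        return name.replaceAll("[^A-Za-z0-9_]", "_");
    }

    // Suggested pattern: a run of incompatible characters collapses to one '_'.
    static String collapsed(String name) {
        return name.replaceAll("[^A-Za-z0-9_]+", "_");
    }

    public static void main(String[] args) {
        System.out.println(perChar("first name (raw)"));   // first_name__raw_
        System.out.println(collapsed("first name (raw)")); // first_name_raw_
    }
}
```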


---


[jira] [Commented] (NIFI-4940) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388109#comment-16388109
 ] 

ASF GitHub Bot commented on NIFI-4940:
--

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2513#discussion_r172590173
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/csv/CSVHeaderSchemaStrategy.java
 ---
@@ -79,4 +81,12 @@ public RecordSchema getSchema(Map<String, String> 
variables, final InputStream c
 public Set<SchemaField> getSuppliedSchemaFields() {
 return schemaFields;
 }
+
+private String normalizeNameForAvro(String inputName) {
+String normalizedName = inputName.replaceAll("[^A-Za-z0-9_]", "_");
--- End diff --

How about this `[^A-Za-z0-9_]+`? So rather than adding N underscores for 
continuous non-compatible characters, it adds a single `_`. 


> CSVReader - Header schema strategy - normalize names
> 
>
> Key: NIFI-4940
> URL: https://issues.apache.org/jira/browse/NIFI-4940
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> When using the CSV Reader with the header-based Schema Access Strategy, we 
> should add a boolean property controlling whether field names should be 
> normalized to be Avro-compatible. Otherwise it won't be possible, for instance, 
> to use the UpdateRecord processor and access a field with a "weird" name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4868) Unable to initialise `HBase_1_1_2_ClientService `

2018-03-06 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388127#comment-16388127
 ] 

Bryan Bende commented on NIFI-4868:
---

If you have added a Phoenix Client JAR to the controller service configuration, 
you may need to downgrade the version of the JAR to one that is compatible with 
the HBase 1.1.2 client.

> Unable to initialise  `HBase_1_1_2_ClientService `
> --
>
> Key: NIFI-4868
> URL: https://issues.apache.org/jira/browse/NIFI-4868
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Tools and Build
>Affects Versions: 1.1.1, 1.3.0, 1.4.0
>Reporter: Naveen Nain
>Priority: Major
>  Labels: security
>
> I'm creating an HBase connection controller service to connect to a secured 
> HBase. I have defined the Kerberos principal and keytab. But when I'm enabling 
> the controller service, it's giving me this error:
> {quote}2018-02-12 14:16:04,791 INFO [StandardProcessScheduler Thread-4] 
> o.a.nifi.hbase.HBase_1_1_2_ClientService 
> HBase_1_1_2_ClientService[id=88ee1856-0161-1000-019f-35d668a0300e] HBase 
> Security Enabled, logging in as principal hbase/m111.server.e...@embs.com 
> with keytab /Users/naveen/Downloads/nifi-1.0.0/conf/hbase.keytab
> 2018-02-12 14:16:05,388 INFO [StandardProcessScheduler Thread-4] 
> o.a.nifi.hbase.HBase_1_1_2_ClientService 
> HBase_1_1_2_ClientService[id=88ee1856-0161-1000-019f-35d668a0300e] 
> Successfully logged in as principal hbase/* with keytab 
> /Users/naveen/Downloads/nifi-1.0.0/conf/hbase.keytab
> 2018-02-12 14:16:05,389 INFO [StandardProcessScheduler Thread-4] 
> o.a.h.h.zookeeper.RecoverableZooKeeper Process 
> identifier=hconnection-0x6d8fedd8 connecting to ZooKeeper 
> ensemble=*:2181,**:2181,***:2181
> 2018-02-12 14:16:08,073 ERROR [StandardProcessScheduler Thread-4] 
> o.a.n.c.s.StandardControllerServiceNode 
> HBase_1_1_2_ClientService[id=88ee1856-0161-1000-019f-35d668a0300e] Failed to 
> invoke @OnEnabled method due to java.io.IOException: 
> java.lang.reflect.InvocationTargetException
> 2018-02-12 14:16:08,075 ERROR [StandardProcessScheduler Thread-4] 
> o.a.n.c.s.StandardControllerServiceNode 
> java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
>  ~[hbase-client-1.1.2.jar:1.1.2]
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>  ~[hbase-client-1.1.2.jar:1.1.2]
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
>  ~[hbase-client-1.1.2.jar:1.1.2]
>   at 
> org.apache.nifi.hbase.HBase_1_1_2_ClientService$1.run(HBase_1_1_2_ClientService.java:232)
>  ~[nifi-hbase_1_1_2-client-service-1.0.0.jar:1.0.0]
>   at 
> org.apache.nifi.hbase.HBase_1_1_2_ClientService$1.run(HBase_1_1_2_ClientService.java:229)
>  ~[nifi-hbase_1_1_2-client-service-1.0.0.jar:1.0.0]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[na:1.8.0_141]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_141]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>  ~[hadoop-common-2.6.2.jar:na]
>   at 
> org.apache.nifi.hbase.HBase_1_1_2_ClientService.createConnection(HBase_1_1_2_ClientService.java:229)
>  ~[nifi-hbase_1_1_2-client-service-1.0.0.jar:1.0.0]
>   at 
> org.apache.nifi.hbase.HBase_1_1_2_ClientService.onEnabled(HBase_1_1_2_ClientService.java:178)
>  ~[nifi-hbase_1_1_2-client-service-1.0.0.jar:1.0.0]
>   at sun.reflect.GeneratedMethodAccessor322.invoke(Unknown Source) 
> ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_141]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_141]
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
>  ~[na:na]
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
>  ~[na:na]
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
>  ~[na:na]
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
>  ~[na:na]
>   at 
> org.apache.nifi.controller.service.StandardControllerServiceNode$2.run(StandardControllerServiceNode.java:348)
>  ~[na:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_141]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_141]
>   at 
> 

[jira] [Created] (NIFI-4938) Offline buffering is not supported by paho client version 1.0.2 used in mqtt processors.

2018-03-06 Thread Himanshu Mishra (JIRA)
Himanshu Mishra created NIFI-4938:
-

 Summary: Offline buffering is not supported by paho client version 
1.0.2 used in mqtt processors.
 Key: NIFI-4938
 URL: https://issues.apache.org/jira/browse/NIFI-4938
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Himanshu Mishra


Offline buffering is not supported by Paho client version 1.0.2, used in the 
MQTT processors, which causes the problem of "message loss" with QoS 1 and QoS 2 
MQTT messages when the client connectivity is lost.

Version 1.2.0 of the Paho client supports offline buffering, thus addressing 
this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp issue #268: MINIFICPP-397 Added implementation of RouteOnAtt...

2018-03-06 Thread achristianson
Github user achristianson commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/268
  
Rebased & fixed linter. Ready for re-review.


---


[jira] [Commented] (MINIFICPP-397) Implement RouteOnAttribute

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387935#comment-16387935
 ] 

ASF GitHub Bot commented on MINIFICPP-397:
--

Github user achristianson commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/268
  
Rebased & fixed linter. Ready for re-review.


> Implement RouteOnAttribute
> --
>
> Key: MINIFICPP-397
> URL: https://issues.apache.org/jira/browse/MINIFICPP-397
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> RouteOnAttribute is notably missing from MiNiFi - C++ and should be 
> implemented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2503: NIFI-4855: The layout of NiFi API document is broke...

2018-03-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2503


---


[GitHub] nifi issue #2503: NIFI-4855: The layout of NiFi API document is broken

2018-03-06 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2503
  
Thanks @tasanuma! This has been merged to master.


---


[jira] [Commented] (NIFI-4855) The layout of NiFi API document is broken

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387953#comment-16387953
 ] 

ASF GitHub Bot commented on NIFI-4855:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2503
  
Thanks @tasanuma! This has been merged to master.


> The layout of NiFi API document is broken
> -
>
> Key: NIFI-4855
> URL: https://issues.apache.org/jira/browse/NIFI-4855
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.5.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: broken_ui.png, fixed_ui.001.png
>
>
> This is reported by Hiroaki Nawa.
> Looks like the _Controller_ section includes the _Versions_ section mistakenly, and 
> the layout is broken.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4855) The layout of NiFi API document is broken

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387955#comment-16387955
 ] 

ASF GitHub Bot commented on NIFI-4855:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2503


> The layout of NiFi API document is broken
> -
>
> Key: NIFI-4855
> URL: https://issues.apache.org/jira/browse/NIFI-4855
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.5.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: broken_ui.png, fixed_ui.001.png
>
>
> This is reported by Hiroaki Nawa.
> Looks like the _Controller_ section includes the _Versions_ section mistakenly, and 
> the layout is broken.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4882) CSVRecordReader should utilize specified date/time/timestamp format at its convertSimpleIfPossible method

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387801#comment-16387801
 ] 

ASF GitHub Bot commented on NIFI-4882:
--

Github user derekstraka commented on the issue:

https://github.com/apache/nifi/pull/2473
  
@ijokarumawak - Sure.  That's fine.  Once you have a finalized bug list, I 
can help work on some of the issues.


> CSVRecordReader should utilize specified date/time/timestamp format at its 
> convertSimpleIfPossible method
> -
>
> Key: NIFI-4882
> URL: https://issues.apache.org/jira/browse/NIFI-4882
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Koji Kawamura
>Assignee: Derek Straka
>Priority: Major
>
> CSVRecordReader.convertSimpleIfPossible method is used by ValidateRecord. The 
> method does not coerce values to the target schema field type if the raw 
> string representation in the input CSV file is not compatible.
> The type compatibility check is implemented as follows. But it does not use 
> user specified date/time/timestamp format:
> {code}
> // This will return 'false' for input '01/01/1900' when user 
> specified custom format 'MM/dd/yyyy'
> if (DataTypeUtils.isCompatibleDataType(trimmed, dataType)) {
> // The LAZY_DATE_FORMAT should be used to check 
> compatibility, too.
> return DataTypeUtils.convertType(trimmed, dataType, 
> LAZY_DATE_FORMAT, LAZY_TIME_FORMAT, LAZY_TIMESTAMP_FORMAT, fieldName);
> } else {
> return value;
> }
> {code}
> If input date strings have a different format than the default format 
> 'yyyy-MM-dd', then the ValidateRecord processor can not validate input records.
> JacksonCSVRecordReader has the identical methods with CSVRecordReader. Those 
> classes should have an abstract class.
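As an illustration of why the default format fails here: a strict parse of '01/01/1900' against the default pattern rejects the value, while the user-specified pattern accepts it. The helper below is a hypothetical sketch using `SimpleDateFormat`, not NiFi's `DataTypeUtils`:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

// Hypothetical sketch -- parsesWith is NOT NiFi's DataTypeUtils; it merely
// shows that a compatibility check must use the user-specified pattern.
public class DateFormatCheck {

    static boolean parsesWith(String raw, String pattern) {
        SimpleDateFormat sdf = new SimpleDateFormat(pattern);
        sdf.setLenient(false);  // strict, like a schema compatibility check
        try {
            sdf.parse(raw);
            return true;
        } catch (ParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Checking against the default pattern rejects the value...
        System.out.println(parsesWith("01/01/1900", "yyyy-MM-dd"));  // false
        // ...while the user-specified custom format accepts it.
        System.out.println(parsesWith("01/01/1900", "MM/dd/yyyy"));  // true
    }
}
```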



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2473: NIFI-4882: Resolve issue with parsing custom date, time, a...

2018-03-06 Thread derekstraka
Github user derekstraka commented on the issue:

https://github.com/apache/nifi/pull/2473
  
@ijokarumawak - Sure.  That's fine.  Once you have a finalized bug list, I 
can help work on some of the issues.


---


[GitHub] nifi pull request #2515: NIFI-4885: Granular component restrictions

2018-03-06 Thread mcgilman
GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2515

NIFI-4885: Granular component restrictions

NIFI-4885: 
- Introducing more granular restricted component access policies.
- Current restricted components have been updated to use new granular 
restrictions.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-4885

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2515.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2515


commit 0610789f4e75f3983d54f75cec79e0a2bd8b7991
Author: Matt Gilman 
Date:   2018-02-16T21:21:47Z

NIFI-4885:
- Introducing more granular restricted component access policies.




---


[GitHub] nifi-minifi-cpp issue #268: MINIFICPP-397 Added implementation of RouteOnAtt...

2018-03-06 Thread achristianson
Github user achristianson commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/268
  
Taking a look at linter issues.


---


[jira] [Commented] (NIFI-4938) Offline buffering is not supported by paho client version 1.0.2 used in mqtt processors.

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387794#comment-16387794
 ] 

ASF GitHub Bot commented on NIFI-4938:
--

GitHub user Himanshu-it opened a pull request:

https://github.com/apache/nifi/pull/2514

NIFI-4938 Upgraded org.eclipse.paho.client.mqttv3 dependency version …

Offline buffering for MQTT messages is now supported, with the paho client 
version updated from 1.0.2 to 1.2.0.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Himanshu-it/nifi 
NIFI-4938-Offline-mqtt-message-buffering-support-for-mqtt-processors-with-QoS1-and-Qos2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2514.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2514


commit 49f1410c7bf61edfe9a9fb19f566743590f50636
Author: himanshu 
Date:   2018-03-06T13:38:50Z

NIFI-4938 Upgraded org.eclipse.paho.client.mqttv3 dependency version to 
1.2.0




> Offline buffering is not supported by paho client version 1.0.2 used in mqtt 
> processors.
> 
>
> Key: NIFI-4938
> URL: https://issues.apache.org/jira/browse/NIFI-4938
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Himanshu Mishra
>Priority: Major
>
> Offline buffering is not supported by paho client version 1.0.2 used in the 
> MQTT processors, which causes "message loss" with QoS 1 and QoS 2 MQTT 
> messages when client connectivity is lost.
> Version 1.2.0 of the paho client supports offline buffering, thus 
> addressing this issue.
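Conceptually, the offline buffering the upgrade enables works like the following self-contained Java sketch (illustrative only, not Paho's actual API): messages published while disconnected are queued up to a limit and flushed in order on reconnect, rather than lost.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class OfflineBuffer {
    private final Deque<String> buffer = new ArrayDeque<>();
    private final int maxSize;
    private boolean connected = false;
    final List<String> delivered = new ArrayList<>();

    OfflineBuffer(int maxSize) { this.maxSize = maxSize; }

    void publish(String msg) {
        if (connected) {
            delivered.add(msg);       // connected: send immediately
        } else if (buffer.size() < maxSize) {
            buffer.addLast(msg);      // offline: buffer instead of dropping
        }                             // buffer full: message is dropped
    }

    void setConnected(boolean up) {
        connected = up;
        while (up && !buffer.isEmpty()) {
            // flush buffered messages in publish order on reconnect
            delivered.add(buffer.removeFirst());
        }
    }
}
```

Without such a buffer (the 1.0.2 behavior), anything published during the outage would simply be lost, which is the QoS 1/2 message-loss problem the issue describes.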





[jira] [Commented] (MINIFICPP-416) Support crop properties in TFConvertImageToTensor

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387882#comment-16387882
 ] 

ASF GitHub Bot commented on MINIFICPP-416:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/271


> Support crop properties in TFConvertImageToTensor
> -
>
> Key: MINIFICPP-416
> URL: https://issues.apache.org/jira/browse/MINIFICPP-416
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> In order to allow for modifying input aspect ratio (e.g. 4:3) to match output 
> ratio (e.g. 1:1), support cropping of the input image via the following new 
> optional properties:
>  * Crop Offset X
>  * Crop Offset Y
>  * Crop Size X
>  * Crop Size Y
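A minimal Java sketch of the cropping these properties describe (names and types are illustrative, not MiNiFi C++'s actual implementation): extract the region at (Crop Offset X, Crop Offset Y) with dimensions (Crop Size X, Crop Size Y), so e.g. a 4:3 input can be reduced to a 1:1 region before tensor conversion.

```java
public class Crop {
    // Copy the sub-image starting at (offX, offY) with size (sizeX, sizeY).
    // Rows are y (height), columns are x (width).
    static int[][] crop(int[][] img, int offX, int offY, int sizeX, int sizeY) {
        int[][] out = new int[sizeY][sizeX];
        for (int y = 0; y < sizeY; y++) {
            for (int x = 0; x < sizeX; x++) {
                out[y][x] = img[offY + y][offX + x];
            }
        }
        return out;
    }
}
```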





[jira] [Commented] (NIFI-4938) Offline buffering is not supported by paho client version 1.0.2 used in mqtt processors.

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387798#comment-16387798
 ] 

ASF GitHub Bot commented on NIFI-4938:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2514#discussion_r172519579
  
--- Diff: 
nifi-nar-bundles/nifi-mqtt-bundle/nifi-mqtt-processors/src/test/java/org/apache/nifi/processors/mqtt/common/MqttTestClient.java
 ---
@@ -17,16 +17,7 @@
 
 package org.apache.nifi.processors.mqtt.common;
 
-import org.eclipse.paho.client.mqttv3.IMqttClient;
-import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
-import org.eclipse.paho.client.mqttv3.IMqttToken;
-import org.eclipse.paho.client.mqttv3.MqttCallback;
-import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
-import org.eclipse.paho.client.mqttv3.MqttException;
-import org.eclipse.paho.client.mqttv3.MqttMessage;
-import org.eclipse.paho.client.mqttv3.MqttPersistenceException;
-import org.eclipse.paho.client.mqttv3.MqttSecurityException;
-import org.eclipse.paho.client.mqttv3.MqttTopic;
+import org.eclipse.paho.client.mqttv3.*;
--- End diff --

please avoid star import.  Be sure to run 

mvn clean install -Pcontrib-check


> Offline buffering is not supported by paho client version 1.0.2 used in mqtt 
> processors.
> 
>
> Key: NIFI-4938
> URL: https://issues.apache.org/jira/browse/NIFI-4938
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Himanshu Mishra
>Priority: Major
>
> Offline buffering is not supported by paho client version 1.0.2 used in the 
> MQTT processors, which causes "message loss" with QoS 1 and QoS 2 MQTT 
> messages when client connectivity is lost.
> Version 1.2.0 of the paho client supports offline buffering, thus 
> addressing this issue.





[GitHub] nifi pull request #2514: NIFI-4938 Upgraded org.eclipse.paho.client.mqttv3 d...

2018-03-06 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2514#discussion_r172519579
  
--- Diff: 
nifi-nar-bundles/nifi-mqtt-bundle/nifi-mqtt-processors/src/test/java/org/apache/nifi/processors/mqtt/common/MqttTestClient.java
 ---
@@ -17,16 +17,7 @@
 
 package org.apache.nifi.processors.mqtt.common;
 
-import org.eclipse.paho.client.mqttv3.IMqttClient;
-import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
-import org.eclipse.paho.client.mqttv3.IMqttToken;
-import org.eclipse.paho.client.mqttv3.MqttCallback;
-import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
-import org.eclipse.paho.client.mqttv3.MqttException;
-import org.eclipse.paho.client.mqttv3.MqttMessage;
-import org.eclipse.paho.client.mqttv3.MqttPersistenceException;
-import org.eclipse.paho.client.mqttv3.MqttSecurityException;
-import org.eclipse.paho.client.mqttv3.MqttTopic;
+import org.eclipse.paho.client.mqttv3.*;
--- End diff --

please avoid star import.  Be sure to run 

mvn clean install -Pcontrib-check


---


[jira] [Commented] (NIFI-4882) CSVRecordReader should utilize specified date/time/timestamp format at its convertSimpleIfPossible method

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387795#comment-16387795
 ] 

ASF GitHub Bot commented on NIFI-4882:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2473
  
@derekstraka Before finalizing this review, I'd like to comprehensively 
analyze NiFi's current data type conversion capability for Date, Timestamp, and 
Time, with and without custom formats, at various places. I started that 
analysis and have already found several issues. I cannot tell whether this 
change fits the overall NiFi data type conversion scheme until that analysis 
finishes, although I believe it does.

So, please allow me a little more time to research this subject. If you 
are interested in what I'm doing, here is a Google spreadsheet you can check. 

https://docs.google.com/spreadsheets/d/1EEHGWw7-ZGE9SwOBwQHGUwWFCtuHubz44jTT5qpiCf0/edit?usp=sharing


> CSVRecordReader should utilize specified date/time/timestamp format at its 
> convertSimpleIfPossible method
> -
>
> Key: NIFI-4882
> URL: https://issues.apache.org/jira/browse/NIFI-4882
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Koji Kawamura
>Assignee: Derek Straka
>Priority: Major
>
> CSVRecordReader.convertSimpleIfPossible method is used by ValidateRecord. The 
> method does not coerce values to the target schema field type if the raw 
> string representation in the input CSV file is not compatible.
> The type compatibility check is implemented as follows. But it does not use 
> user specified date/time/timestamp format:
> {code}
> // This will return 'false' for input '01/01/1900' when user 
> specified custom format 'MM/dd/yyyy'
> if (DataTypeUtils.isCompatibleDataType(trimmed, dataType)) {
> // The LAZY_DATE_FORMAT should be used to check 
> compatibility, too.
> return DataTypeUtils.convertType(trimmed, dataType, 
> LAZY_DATE_FORMAT, LAZY_TIME_FORMAT, LAZY_TIMESTAMP_FORMAT, fieldName);
> } else {
> return value;
> }
> {code}
> If input date strings have a different format than the default format 
> 'yyyy-MM-dd', then the ValidateRecord processor cannot validate input records.
> JacksonCSVRecordReader has methods identical to CSVRecordReader's. Those 
> classes should share an abstract base class.





[GitHub] nifi-minifi-cpp pull request #271: MINIFICPP-416 Added crop properties to TF...

2018-03-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/271


---


[jira] [Commented] (MINIFICPP-397) Implement RouteOnAttribute

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387922#comment-16387922
 ] 

ASF GitHub Bot commented on MINIFICPP-397:
--

Github user achristianson commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/268
  
Taking a look at linter issues.


> Implement RouteOnAttribute
> --
>
> Key: MINIFICPP-397
> URL: https://issues.apache.org/jira/browse/MINIFICPP-397
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> RouteOnAttribute is notably missing from MiNiFi - C++ and should be 
> implemented.





[GitHub] nifi issue #2503: NIFI-4855: The layout of NiFi API document is broken

2018-03-06 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2503
  
Will review...


---


[jira] [Commented] (NIFI-4855) The layout of NiFi API document is broken

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387923#comment-16387923
 ] 

ASF GitHub Bot commented on NIFI-4855:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2503
  
Will review...


> The layout of NiFi API document is broken
> -
>
> Key: NIFI-4855
> URL: https://issues.apache.org/jira/browse/NIFI-4855
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.5.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: broken_ui.png, fixed_ui.001.png
>
>
> This is reported by Hiroaki Nawa.
> Looks like the _Controller_ section mistakenly includes the _Versions_ 
> section, and the layout is broken.





[GitHub] nifi issue #2487: NIFI-4774: Allow user to choose which write-ahead log impl...

2018-03-06 Thread mosermw
Github user mosermw commented on the issue:

https://github.com/apache/nifi/pull/2487
  
I tested this and I was able to switch back and forth between 
MinimalLockingWriteAheadLog and SequentialAccessWriteAheadLog. +1 from me.
It's best to make this switch while there are 0 flowfiles in the 
repository.  With flowfiles in the system, going from MinimalLockingWAL to 
SequentialAccessWAL worked, but the opposite had some issues.


---


[jira] [Commented] (NIFI-4774) FlowFile Repository should write updates to the same FlowFile to the same partition

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388037#comment-16388037
 ] 

ASF GitHub Bot commented on NIFI-4774:
--

Github user mosermw commented on the issue:

https://github.com/apache/nifi/pull/2487
  
I tested this and I was able to switch back and forth between 
MinimalLockingWriteAheadLog and SequentialAccessWriteAheadLog. +1 from me.
It's best to make this switch while there are 0 flowfiles in the 
repository.  With flowfiles in the system, going from MinimalLockingWAL to 
SequentialAccessWAL worked, but the opposite had some issues.


> FlowFile Repository should write updates to the same FlowFile to the same 
> partition
> ---
>
> Key: NIFI-4774
> URL: https://issues.apache.org/jira/browse/NIFI-4774
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.6.0
>
>
> As-is, in the case of power loss or Operating System crash, we could have an 
> update that is lost, and then an update for the same FlowFile that is not 
> lost, because the updates for a given FlowFile can span partitions. If an 
> update were written to Partition 1 and then to Partition 2 and Partition 2 is 
> flushed to disk by the Operating System and then the Operating System crashes 
> or power is lost before Partition 1 is flushed to disk, we could lose the 
> update to Partition 1.
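One way to guarantee the property in the title, all updates for a given FlowFile landing in the same partition, is to derive the partition deterministically from the FlowFile's identifier. A hedged Java sketch (not NiFi's actual code; the routing function is an assumption for illustration):

```java
public class PartitionRouter {
    // Map a FlowFile id to a stable partition index. Because the mapping is
    // a pure function of the id, two updates to the same FlowFile can never
    // land in different partitions, so a crash cannot persist one update
    // while losing an earlier one for the same FlowFile.
    static int partitionFor(String flowFileId, int numPartitions) {
        // floorMod keeps the result non-negative even for negative hashCodes
        return Math.floorMod(flowFileId.hashCode(), numPartitions);
    }
}
```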





[jira] [Commented] (NIFI-4516) Add FetchSolr processor

2018-03-06 Thread Johannes Peter (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387726#comment-16387726
 ] 

Johannes Peter commented on NIFI-4516:
--

Hi [~abhi.rohatgi],

thank you for your offer, but I am almost done with this.

> Add FetchSolr processor
> ---
>
> Key: NIFI-4516
> URL: https://issues.apache.org/jira/browse/NIFI-4516
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Johannes Peter
>Assignee: Johannes Peter
>Priority: Major
>  Labels: features
>
> The processor shall be capable of:
> * querying Solr within a workflow,
> * making use of standard Solr functionality such as faceting, 
> highlighting, result grouping, etc.,
> * making use of NiFi's expression language to build Solr queries, 
> * handling results as records.





[jira] [Created] (NIFI-4940) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-4940:


 Summary: CSVReader - Header schema strategy - normalize names
 Key: NIFI-4940
 URL: https://issues.apache.org/jira/browse/NIFI-4940
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Pierre Villard
Assignee: Pierre Villard


When using the CSV Reader with the header schema access strategy, we should add 
a boolean property indicating whether field names should be normalized to be 
Avro-compatible. Otherwise it won't be possible, for instance, to use the 
UpdateRecord processor to access a field with a "weird" name.
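A hedged Java sketch of the kind of normalization proposed (the Avro specification requires names to match [A-Za-z_][A-Za-z0-9_]*; the helper below is illustrative, not the actual implementation):

```java
public class AvroNames {
    // Replace characters that are not valid in an Avro name with '_',
    // and prefix a '_' if the result would start with a digit.
    static String normalize(String name) {
        StringBuilder sb = new StringBuilder();
        for (char c : name.toCharArray()) {
            boolean valid = (c == '_')
                    || (c >= 'A' && c <= 'Z')
                    || (c >= 'a' && c <= 'z')
                    || (c >= '0' && c <= '9');
            sb.append(valid ? c : '_');
        }
        String s = sb.toString();
        return s.isEmpty() || Character.isDigit(s.charAt(0)) ? "_" + s : s;
    }
}
```

For example, a CSV header like "weird name!" would become "weird_name_", which UpdateRecord's record paths can then address.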





[jira] [Created] (NIFI-4939) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-4939:


 Summary: CSVReader - Header schema strategy - normalize names
 Key: NIFI-4939
 URL: https://issues.apache.org/jira/browse/NIFI-4939
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Pierre Villard
Assignee: Pierre Villard


When using the CSV Reader with the header schema access strategy, we should add 
a boolean property indicating whether field names should be normalized to be 
Avro-compatible. Otherwise it won't be possible, for instance, to use the 
UpdateRecord processor to access a field with a "weird" name.





[jira] [Updated] (NIFI-4940) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4940:
-
Status: Patch Available  (was: Open)

> CSVReader - Header schema strategy - normalize names
> 
>
> Key: NIFI-4940
> URL: https://issues.apache.org/jira/browse/NIFI-4940
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> When using the CSV Reader with the header schema access strategy, we should 
> add a boolean property indicating whether field names should be normalized to 
> be Avro-compatible. Otherwise it won't be possible, for instance, to use the 
> UpdateRecord processor to access a field with a "weird" name.





[jira] [Commented] (NIFI-4936) NiFi parent pom dependency management forcing versions to align defeating classloader isolation

2018-03-06 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387790#comment-16387790
 ] 

Joseph Witt commented on NIFI-4936:
---

Testing the mvn build with mvn -q, but it is really quiet, so I'm not sure 
whether travis-ci would cancel the build for inactivity.

 

> NiFi parent pom dependency management forcing versions to align defeating 
> classloader isolation
> ---
>
> Key: NIFI-4936
> URL: https://issues.apache.org/jira/browse/NIFI-4936
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions, Tools and Build
>Affects Versions: 1.1.0, 1.2.0, 1.0.1, 1.3.0, 1.4.0, 1.5.0
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: NIFI-4936.patch, build-fixing, 
> old-vs-new-dependencies.txt
>
>
> The top-level pom in NiFi has a massive dependency management section. This 
> was used initially to help enforce consistent usage of dependencies across 
> NiFi, but it can also defeat the purpose of the classloader isolation 
> offered by NARs. We need to push dependency version declarations down to the 
> NAR level where appropriate.
> There have been reported issues of defects caused by us using much newer 
> (or sometimes older) versions of dependencies due to this dependency 
> management model. By pushing declarations down to the proper scope, each 
> NAR can use the specific versions of components it needs and we'll stop 
> introducing issues by forcing a different version.





[GitHub] nifi issue #2473: NIFI-4882: Resolve issue with parsing custom date, time, a...

2018-03-06 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2473
  
@derekstraka Thanks for the updates, and sorry for my slow response. I'm 
going to review it!


---


[jira] [Commented] (NIFI-4882) CSVRecordReader should utilize specified date/time/timestamp format at its convertSimpleIfPossible method

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387670#comment-16387670
 ] 

ASF GitHub Bot commented on NIFI-4882:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2473
  
@derekstraka Thanks for the updates, and sorry for my slow response. I'm 
going to review it!


> CSVRecordReader should utilize specified date/time/timestamp format at its 
> convertSimpleIfPossible method
> -
>
> Key: NIFI-4882
> URL: https://issues.apache.org/jira/browse/NIFI-4882
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Koji Kawamura
>Assignee: Derek Straka
>Priority: Major
>
> CSVRecordReader.convertSimpleIfPossible method is used by ValidateRecord. The 
> method does not coerce values to the target schema field type if the raw 
> string representation in the input CSV file is not compatible.
> The type compatibility check is implemented as follows. But it does not use 
> user specified date/time/timestamp format:
> {code}
> // This will return 'false' for input '01/01/1900' when user 
> specified custom format 'MM/dd/yyyy'
> if (DataTypeUtils.isCompatibleDataType(trimmed, dataType)) {
> // The LAZY_DATE_FORMAT should be used to check 
> compatibility, too.
> return DataTypeUtils.convertType(trimmed, dataType, 
> LAZY_DATE_FORMAT, LAZY_TIME_FORMAT, LAZY_TIMESTAMP_FORMAT, fieldName);
> } else {
> return value;
> }
> {code}
> If input date strings have a different format than the default format 
> 'yyyy-MM-dd', then the ValidateRecord processor cannot validate input records.
> JacksonCSVRecordReader has methods identical to CSVRecordReader's. Those 
> classes should share an abstract base class.





[jira] [Commented] (NIFI-4868) Unable to initialise `HBase_1_1_2_ClientService `

2018-03-06 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387760#comment-16387760
 ] 

Joseph Witt commented on NIFI-4868:
---

Please share more information about your configuration and environment 
(HBase version, etc.).

> Unable to initialise  `HBase_1_1_2_ClientService `
> --
>
> Key: NIFI-4868
> URL: https://issues.apache.org/jira/browse/NIFI-4868
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Tools and Build
>Affects Versions: 1.1.1, 1.3.0, 1.4.0
>Reporter: Naveen Nain
>Priority: Major
>  Labels: security
>
> I'm creating an HBase connection controller service to connect to secured 
> HBase. I have defined the Kerberos principal and keytab. But when I'm enabling 
> the controller service, it's giving me this error:
> {quote}2018-02-12 14:16:04,791 INFO [StandardProcessScheduler Thread-4] 
> o.a.nifi.hbase.HBase_1_1_2_ClientService 
> HBase_1_1_2_ClientService[id=88ee1856-0161-1000-019f-35d668a0300e] HBase 
> Security Enabled, logging in as principal hbase/m111.server.e...@embs.com 
> with keytab /Users/naveen/Downloads/nifi-1.0.0/conf/hbase.keytab
> 2018-02-12 14:16:05,388 INFO [StandardProcessScheduler Thread-4] 
> o.a.nifi.hbase.HBase_1_1_2_ClientService 
> HBase_1_1_2_ClientService[id=88ee1856-0161-1000-019f-35d668a0300e] 
> Successfully logged in as principal hbase/* with keytab 
> /Users/naveen/Downloads/nifi-1.0.0/conf/hbase.keytab
> 2018-02-12 14:16:05,389 INFO [StandardProcessScheduler Thread-4] 
> o.a.h.h.zookeeper.RecoverableZooKeeper Process 
> identifier=hconnection-0x6d8fedd8 connecting to ZooKeeper 
> ensemble=*:2181,**:2181,***:2181
> 2018-02-12 14:16:08,073 ERROR [StandardProcessScheduler Thread-4] 
> o.a.n.c.s.StandardControllerServiceNode 
> HBase_1_1_2_ClientService[id=88ee1856-0161-1000-019f-35d668a0300e] Failed to 
> invoke @OnEnabled method due to java.io.IOException: 
> java.lang.reflect.InvocationTargetException
> 2018-02-12 14:16:08,075 ERROR [StandardProcessScheduler Thread-4] 
> o.a.n.c.s.StandardControllerServiceNode 
> java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
>  ~[hbase-client-1.1.2.jar:1.1.2]
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>  ~[hbase-client-1.1.2.jar:1.1.2]
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
>  ~[hbase-client-1.1.2.jar:1.1.2]
>   at 
> org.apache.nifi.hbase.HBase_1_1_2_ClientService$1.run(HBase_1_1_2_ClientService.java:232)
>  ~[nifi-hbase_1_1_2-client-service-1.0.0.jar:1.0.0]
>   at 
> org.apache.nifi.hbase.HBase_1_1_2_ClientService$1.run(HBase_1_1_2_ClientService.java:229)
>  ~[nifi-hbase_1_1_2-client-service-1.0.0.jar:1.0.0]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[na:1.8.0_141]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_141]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>  ~[hadoop-common-2.6.2.jar:na]
>   at 
> org.apache.nifi.hbase.HBase_1_1_2_ClientService.createConnection(HBase_1_1_2_ClientService.java:229)
>  ~[nifi-hbase_1_1_2-client-service-1.0.0.jar:1.0.0]
>   at 
> org.apache.nifi.hbase.HBase_1_1_2_ClientService.onEnabled(HBase_1_1_2_ClientService.java:178)
>  ~[nifi-hbase_1_1_2-client-service-1.0.0.jar:1.0.0]
>   at sun.reflect.GeneratedMethodAccessor322.invoke(Unknown Source) 
> ~[na:na]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_141]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_141]
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
>  ~[na:na]
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
>  ~[na:na]
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
>  ~[na:na]
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
>  ~[na:na]
>   at 
> org.apache.nifi.controller.service.StandardControllerServiceNode$2.run(StandardControllerServiceNode.java:348)
>  ~[na:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_141]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_141]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_141]
>   at 
> 

[jira] [Resolved] (NIFI-4939) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-4939.
--
Resolution: Duplicate

> CSVReader - Header schema strategy - normalize names
> 
>
> Key: NIFI-4939
> URL: https://issues.apache.org/jira/browse/NIFI-4939
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> When using the CSV Reader with the header schema access strategy, we should 
> add a boolean property indicating whether field names should be normalized to 
> be Avro-compatible. Otherwise it won't be possible, for instance, to use the 
> UpdateRecord processor to access a field with a "weird" name.





[jira] [Created] (NIFI-4937) Cannot GET /nifi-api/flow/bulletin-board behind reverse proxy

2018-03-06 Thread Damian Czaja (JIRA)
Damian Czaja created NIFI-4937:
--

 Summary: Cannot GET /nifi-api/flow/bulletin-board behind reverse 
proxy
 Key: NIFI-4937
 URL: https://issues.apache.org/jira/browse/NIFI-4937
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.5.0
 Environment: NiFi on Docker + TinyProxy
Reporter: Damian Czaja
 Attachments: nifi_bulletin_board_error.png

Hello,

I have a problem when running NiFi on Docker behind a reverse proxy 
(TinyProxy).

The TinyProxy configuration adds three headers. NiFi is running on the same 
host on port 8080.
{code:java}
tinyproxy.conf
...
AddHeader "X-ProxyHost" "public.domain.com"
AddHeader "X-ProxyContextPath" "/path/to/nifi"
AddHeader "X-ProxyPort" "443"
ReversePath "/" "http://localhost:8080/"{code}
I can access NiFi through [https://public.domain.com/path/to/nifi/nifi/] and 
NiFi works fine, but I'm sometimes getting a popup stating that it cannot GET 
/nifi-api/flow/bulletin-board. For example, it happens when I try to view the 
configuration of a Controller Service.

I noticed that this request is made directly to NiFi and not through the 
reverse proxy. It looks like it ignores the X-Proxy headers.
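For illustration, a hedged Java sketch of how a server could rebuild its public URL from the X-Proxy* headers shown in the tinyproxy config above (the helper is an assumption for illustration, not NiFi's implementation): when the headers are honored, generated API links point at the proxy's public address instead of the local host and port.

```java
import java.util.Map;

public class ProxyUrl {
    // Build the externally visible URL for a request path, preferring the
    // X-Proxy* headers added by the reverse proxy over the local address.
    static String publicUrl(Map<String, String> headers, String localUrl, String path) {
        String host = headers.get("X-ProxyHost");
        if (host == null) {
            return localUrl + path;  // no proxy headers: use local address
        }
        String port = headers.getOrDefault("X-ProxyPort", "443");
        String ctx = headers.getOrDefault("X-ProxyContextPath", "");
        String scheme = port.equals("443") ? "https" : "http";
        return scheme + "://" + host + ":" + port + ctx + path;
    }
}
```

A request carrying the three headers from the tinyproxy config would resolve /nifi-api/flow/bulletin-board under https://public.domain.com:443/path/to/nifi, which is what the reported bug suggests is not happening for this endpoint.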





[jira] [Commented] (NIFI-4936) NiFi parent pom dependency management forcing versions to align defeating classloader isolation

2018-03-06 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387756#comment-16387756
 ] 

Joseph Witt commented on NIFI-4936:
---

Build failed due to excessive logging in travis-ci.

 

We need to
 # stop having maven warn on parallel builds w/rat
 # get those deprecation warnings out
 # get rid of the lines that start w/ 'Progress'

> NiFi parent pom dependency management forcing versions to align defeating 
> classloader isolation
> ---
>
> Key: NIFI-4936
> URL: https://issues.apache.org/jira/browse/NIFI-4936
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions, Tools and Build
>Affects Versions: 1.1.0, 1.2.0, 1.0.1, 1.3.0, 1.4.0, 1.5.0
>Reporter: Joseph Witt
>Assignee: Joseph Witt
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: NIFI-4936.patch, build-fixing, 
> old-vs-new-dependencies.txt
>
>
> The top-level pom in NiFi has a massive dependency management section. This 
> was used initially to help enforce consistent usage of dependencies across 
> NiFi, but it can also defeat the purpose of the classloader isolation 
> offered by NARs. We need to push dependency version declarations down to the 
> NAR level where appropriate.
> There have been reported issues of defects caused by us using much newer 
> (or sometimes older) versions of dependencies due to this dependency 
> management model. By pushing declarations down to the proper scope, each 
> NAR can use the specific versions of components it needs and we'll stop 
> introducing issues by forcing a different version.





[jira] [Commented] (NIFI-4940) CSVReader - Header schema strategy - normalize names

2018-03-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387770#comment-16387770
 ] 

ASF GitHub Bot commented on NIFI-4940:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2513

NIFI-4940 - CSVReader - Header schema strategy - normalize names

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-4940

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2513.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2513


commit d6cf8b7ccb0d4877c3ffd422b67ee53d37f46853
Author: Pierre Villard 
Date:   2018-03-06T13:32:29Z

NIFI-4940 - CSVReader - Header schema strategy - normalize names




> CSVReader - Header schema strategy - normalize names
> 
>
> Key: NIFI-4940
> URL: https://issues.apache.org/jira/browse/NIFI-4940
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> When using the CSV Reader with the Header Schema access strategy, we should 
> add a boolean property to control whether field names are normalized to be 
> Avro compatible. Otherwise it won't be possible, for instance, to use the 
> UpdateRecord processor and access a field with a "weird" name.
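
The normalization requested above can be sketched as follows. This is a
hypothetical illustration, not NiFi's actual implementation: Avro field names
must start with `[A-Za-z_]` and contain only `[A-Za-z0-9_]`, so a simple
approach replaces every other character with an underscore and prefixes names
that begin with a digit.

```java
public class NormalizeNames {

    // Hypothetical helper: make a CSV header name Avro-compatible.
    // Avro names must match [A-Za-z_][A-Za-z0-9_]*.
    static String normalize(String name) {
        // Replace every character outside the Avro-legal set with '_'.
        String cleaned = name.replaceAll("[^A-Za-z0-9_]", "_");
        // Avro names may not start with a digit; prefix with '_' if so.
        if (!cleaned.isEmpty() && Character.isDigit(cleaned.charAt(0))) {
            cleaned = "_" + cleaned;
        }
        return cleaned;
    }

    public static void main(String[] args) {
        System.out.println(normalize("order date"));  // order_date
        System.out.println(normalize("2ndColumn"));   // _2ndColumn
        System.out.println(normalize("price($)"));    // price___
    }
}
```

With a boolean property such as the one proposed, a reader would apply this
mapping to each header field before building the record schema.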



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2513: NIFI-4940 - CSVReader - Header schema strategy - no...

2018-03-06 Thread pvillard31
GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2513

NIFI-4940 - CSVReader - Header schema strategy - normalize names

---


[GitHub] nifi pull request #2514: NIFI-4938 Upgraded org.eclipse.paho.client.mqttv3 d...

2018-03-06 Thread Himanshu-it
GitHub user Himanshu-it opened a pull request:

https://github.com/apache/nifi/pull/2514

NIFI-4938 Upgraded org.eclipse.paho.client.mqttv3 dependency version …

Offline buffering of MQTT messages will now be supported, with the Paho 
client version updated from 1.0.2 to 1.2.0.
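
The offline-buffering capability gained by moving to Paho 1.2.0 can be
sketched like this. The broker URL and client id below are placeholders, and
this is an illustrative use of the `DisconnectedBufferOptions` API introduced
in 1.2.0, not code from the PR itself.

```java
import org.eclipse.paho.client.mqttv3.DisconnectedBufferOptions;
import org.eclipse.paho.client.mqttv3.MqttAsyncClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class OfflineBufferingExample {
    public static void main(String[] args) throws MqttException {
        // Placeholder broker URL and client id for illustration.
        MqttAsyncClient client = new MqttAsyncClient(
                "tcp://localhost:1883", "example-client", new MemoryPersistence());

        // New in Paho 1.2.0: messages published while disconnected are
        // buffered and replayed after the client reconnects (automatic
        // reconnect must be enabled via MqttConnectOptions).
        DisconnectedBufferOptions buffer = new DisconnectedBufferOptions();
        buffer.setBufferEnabled(true);
        buffer.setBufferSize(5000);            // max buffered messages
        buffer.setDeleteOldestMessages(true);  // drop oldest when full
        client.setBufferOpts(buffer);

        client.publish("example/topic", new MqttMessage("hello".getBytes()));
    }
}
```

This matters for the QoS 1 and QoS 2 guarantees mentioned in the branch name:
without the buffer, publishes attempted while the broker is unreachable are
simply rejected.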


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Himanshu-it/nifi 
NIFI-4938-Offline-mqtt-message-buffering-support-for-mqtt-processors-with-QoS1-and-Qos2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2514.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2514


commit 49f1410c7bf61edfe9a9fb19f566743590f50636
Author: himanshu 
Date:   2018-03-06T13:38:50Z

NIFI-4938 Upgraded org.eclipse.paho.client.mqttv3 dependency version to 
1.2.0




---


[GitHub] nifi issue #2473: NIFI-4882: Resolve issue with parsing custom date, time, a...

2018-03-06 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2473
  
@derekstraka Before finalizing this review, I'd like to comprehensively 
analyze NiFi's current data type conversion capability for Date, Timestamp, 
and Time, with and without custom formats, across various places. I have 
started that analysis and already found several issues. I cannot tell whether 
this change matches the overall NiFi data type conversion scheme until the 
analysis finishes, although I believe it does.

So, please allow me a little more time to research this subject. If you 
are interested in what I'm doing, here is a Google spreadsheet you can check. 

https://docs.google.com/spreadsheets/d/1EEHGWw7-ZGE9SwOBwQHGUwWFCtuHubz44jTT5qpiCf0/edit?usp=sharing
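
The kind of custom-format conversion under analysis can be illustrated with a
small sketch. The format patterns and input value here are assumptions chosen
for the example, not cases from the linked spreadsheet.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class CustomDateParse {
    public static void main(String[] args) throws ParseException {
        // Parse a date written in a custom (non-ISO) format...
        SimpleDateFormat custom = new SimpleDateFormat("dd/MM/yyyy");
        custom.setTimeZone(TimeZone.getTimeZone("UTC"));
        Date d = custom.parse("06/03/2018");

        // ...and re-render it in the ISO form readers usually expect.
        SimpleDateFormat iso = new SimpleDateFormat("yyyy-MM-dd");
        iso.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(iso.format(d));  // 2018-03-06
    }
}
```

Subtleties like time zone handling and lenient parsing are exactly where such
conversions diverge between components, which is why a comprehensive survey
before merging is reasonable.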


---