[GitHub] nifi issue #2621: NIFI-5064 Fixes and improvements to PutKudu processor

2018-05-09 Thread junegunn
Github user junegunn commented on the issue:

https://github.com/apache/nifi/pull/2621
  
Thanks!


---


[jira] [Commented] (NIFI-5064) Fixes and improvements to PutKudu processor

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469819#comment-16469819
 ] 

ASF GitHub Bot commented on NIFI-5064:
--

Github user junegunn commented on the issue:

https://github.com/apache/nifi/pull/2621
  
Thanks!


> Fixes and improvements to PutKudu processor
> ---
>
> Key: NIFI-5064
> URL: https://issues.apache.org/jira/browse/NIFI-5064
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: Junegunn Choi
>Priority: Major
> Fix For: 1.7.0
>
>
> 1. Currently, PutKudu fails with NPE on null or missing values.
> 2. {{IllegalArgumentException}} on 16-bit integer columns because of [a 
> missing {{break}} in case clause for INT16 
> columns|https://github.com/apache/nifi/blob/rel/nifi-1.6.0/nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/PutKudu.java#L112-L115].
> 3. Also, {{IllegalArgumentException}} on 8-bit integer columns. We need a 
> separate case clause for INT8 columns where {{PartialRow#addByte}} should be 
> used instead of {{PartialRow#addShort}}.
> 4. NIFI-4384 added a batch size parameter; however, it only applies to 
> FlowFiles with multiple records. {{KuduSession}} is created and closed for 
> each FlowFile, so if a FlowFile contains only a single record, no batching 
> takes place. A workaround would be to use a preprocessor to concatenate 
> multiple FlowFiles, but since {{PutHBase}} and {{PutSQL}} use 
> {{session.get(batchSize)}} to handle multiple FlowFiles at once, I think we 
> can take the same approach here with PutKudu as it simplifies the data flow.
> 5. {{PutKudu}} depends on kudu-client 1.3.0. But we can safely update to 
> 1.7.0.
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/release_notes.adoc]
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/prior_release_notes.adoc]
> A notable change in Kudu 1.7.0 is the addition of Decimal type.
> 6. {{PutKudu}} has {{Skip head line}} property for ignoring the first record 
> in a FlowFile. I suppose this was added to handle header lines in CSV files, 
> but I really don't think it's something {{PutKudu}} should handle. 
> {{CSVReader}} already has {{Treat First Line as Header}} option, so we should 
> tell the users to use it instead as we don't want to have the same option 
> here and there. Also, the default value of {{Skip head line}} is {{true}}, 
> and I found it very confusing as my use case was to stream-process 
> single-record FlowFiles. We can keep this property for backward 
> compatibility, but we should at least deprecate it and change the default 
> value to {{false}}.
> 7. Server-side errors such as uniqueness constraint violations are not 
> checked and are simply ignored. When flush mode is set to {{AUTO_FLUSH_SYNC}}, 
> we should check the return value of {{KuduSession#apply}} to see if it has a 
> {{RowError}}, but PutKudu currently ignores it. For example, on uniqueness constraint 
> violation, we get a {{RowError}} saying "_Already present: key already 
> present (error 0)_".
> On the other hand, when flush mode is set to {{AUTO_FLUSH_BACKGROUND}}, 
> {{KuduSession#apply}}, understandably, returns null, and we should check the 
> return value of {{KuduSession#getPendingErrors()}}. And when the mode is 
> {{MANUAL_FLUSH}}, we should examine the return value of 
> {{KuduSession#flush()}} or {{KuduSession#close()}}. In this case, we also 
> have to make sure that we don't overflow the mutation buffer of 
> {{KuduSession}} by calling {{flush()}} before it's too late.
> 
> I'll create a pull request on GitHub. Since there are multiple issues to be 
> addressed, I made separate commits for each issue mentioned above so that 
> it's easier to review. You might want to squash them into one, or cherry-pick 
> a subset of commits if you don't agree with some decisions I made.
> Please let me know what you think. We deployed the code to a production 
> server last week, and it has been running steadily since, processing 20K 
> records/second without any issues.
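
For reference, a minimal Java sketch of what the fixes described in items 1-3 and 7 above could look like against the Kudu client API. This is not the actual PutKudu code: the helper method, the way column names and values are obtained, and the error handling are illustrative assumptions, and only the AUTO_FLUSH_SYNC case of item 7 is shown.

{code}
import org.apache.kudu.Type;
import org.apache.kudu.client.KuduException;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.Operation;
import org.apache.kudu.client.OperationResponse;
import org.apache.kudu.client.PartialRow;
import org.apache.kudu.client.RowError;

public class PutKuduSketch {

    // Items 1-3: null-safe value insertion with separate case clauses for INT8 and
    // INT16, each terminated by a break. In the real processor the column type and
    // value would come from the record schema; this helper is purely illustrative.
    static void addValue(PartialRow row, String colName, Type colType, Object value) {
        if (value == null) {
            row.setNull(colName);                                   // item 1: no NPE on null/missing values
            return;
        }
        switch (colType) {
            case INT8:
                row.addByte(colName, ((Number) value).byteValue()); // item 3: addByte, not addShort
                break;
            case INT16:
                row.addShort(colName, ((Number) value).shortValue());
                break;                                              // item 2: the missing break
            case INT32:
                row.addInt(colName, ((Number) value).intValue());
                break;
            default:
                row.addString(colName, String.valueOf(value));
        }
    }

    // Item 7, AUTO_FLUSH_SYNC: apply() returns a response whose RowError must be checked.
    // In AUTO_FLUSH_BACKGROUND mode apply() returns null and session.getPendingErrors()
    // should be inspected instead; in MANUAL_FLUSH mode the responses come from
    // flush() or close().
    static void applyChecked(KuduSession session, Operation operation) throws KuduException {
        OperationResponse response = session.apply(operation);
        if (response != null && response.hasRowError()) {
            RowError error = response.getRowError();
            throw new IllegalStateException("Kudu row error: " + error.toString());
        }
    }
}
{code}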



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5180) Additional Details for Consume/PublishJMS indicate incorrect default Destination Type

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469791#comment-16469791
 ] 

ASF GitHub Bot commented on NIFI-5180:
--

GitHub user markobean opened a pull request:

https://github.com/apache/nifi/pull/2694

NIFI-5180: update JMS additional details to set Destination Type to Required, default 'QUEUE'

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markobean/nifi NIFI-5180

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2694.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2694


commit d7a705ccad73d2305768dafcd186c248b7ab69b9
Author: Mark Bean 
Date:   2018-05-10T00:24:19Z

NIFI-5180: update JMS additional details to set Destination Type to 
Required, default 'QUEUE'




> Additional Details for Consume/PublishJMS indicate incorrect default 
> Destination Type
> -
>
> Key: NIFI-5180
> URL: https://issues.apache.org/jira/browse/NIFI-5180
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.6.0
>Reporter: Mark Bean
>Assignee: Mark Bean
>Priority: Minor
>  Labels: documentation
>
> The additionalDetails.html for both ConsumeJMS and PublishJMS indicate the 
> default value of 'Destination Type' is 'TOPIC'. In reality, it is 'QUEUE' (as 
> defined in AbstractJMSProcessor.java)
>  
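
As an illustration of the mismatch, a hedged sketch of the kind of property definition the ticket refers to. The actual constant name and descriptor wording in AbstractJMSProcessor.java may differ; the point is simply that the default value is QUEUE, which is what the additional-details documentation should state.

{code}
import org.apache.nifi.components.PropertyDescriptor;

public class DestinationTypeSketch {
    // Illustrative descriptor only; not copied from AbstractJMSProcessor.java.
    static final PropertyDescriptor DESTINATION_TYPE = new PropertyDescriptor.Builder()
            .name("Destination Type")
            .description("Whether the JMS destination is a QUEUE or a TOPIC.")
            .required(true)
            .allowableValues("QUEUE", "TOPIC")
            .defaultValue("QUEUE")   // the default the documentation should reflect
            .build();
}
{code}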



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2694: NIFI-5180: update JMS additional details to set Des...

2018-05-09 Thread markobean
GitHub user markobean opened a pull request:

https://github.com/apache/nifi/pull/2694

NIFI-5180: update JMS additional details to set Destination Type to Required, default 'QUEUE'

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markobean/nifi NIFI-5180

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2694.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2694


commit d7a705ccad73d2305768dafcd186c248b7ab69b9
Author: Mark Bean 
Date:   2018-05-10T00:24:19Z

NIFI-5180: update JMS additional details to set Destination Type to 
Required, default 'QUEUE'




---


[GitHub] nifi issue #2542: NIFI-4971: ReportLineageToAtlas complete path can miss one...

2018-05-09 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2542
  
@MikeThomsen Just wanted to check. Is there any other feedback, or did my 
last reply address your concerns? This fix is important not only for fixing the 
issue with complete path, but also for reducing the number of notification 
messages sent from NiFi to Atlas (through Kafka) to lower the Atlas workload. Thanks!


---


[jira] [Commented] (NIFI-4971) ReportLineageToAtlas 'complete path' strategy can miss one-time lineages

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469790#comment-16469790
 ] 

ASF GitHub Bot commented on NIFI-4971:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2542
  
@MikeThomsen Just wanted to check. Is there any other feedback, or did my 
last reply address your concerns? This fix is important not only for fixing the 
issue with complete path, but also for reducing the number of notification 
messages sent from NiFi to Atlas (through Kafka) to lower the Atlas workload. Thanks!


> ReportLineageToAtlas 'complete path' strategy can miss one-time lineages
> 
>
> Key: NIFI-4971
> URL: https://issues.apache.org/jira/browse/NIFI-4971
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
>
> For the simplest example, with GetFlowFile (GFF) -> PutFlowFile (PFF), where 
> GFF gets files and PFF saves those files into a different directory, the 
> following provenance events will be generated:
>  # GFF RECEIVE file1
>  # PFF SEND file2
> From the above provenance events, the following entities and lineages should 
> be created in Atlas (labels in brackets are Atlas type names):
> {code}
> file1 (fs_path) -> GFF, PFF (nifi_flow_path) -> file2 (fs_path)
> {code}
> The entities shown in the above graph are created. However, the 
> 'nifi_flow_path' entity does not have inputs/outputs referencing 'fs_path', so 
> the lineage cannot be seen in the Atlas UI.
> This issue was discovered by [~nayakmahesh616]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2518: NIFI-4637 Added support for visibility labels to the HBase...

2018-05-09 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2518
  
@MikeThomsen Thanks for the updates. It mostly looks good. The only concern 
I have is the gap between the name of the NAR and the HBase client version. One 
possible solution is to rename the NAR from `nifi-hbase_1_1_2-client-service` 
to `nifi-hbase_1_1_x-client-service` or `nifi-hbase_1_1-client-service`, to 
avoid having too specific a version in it so that we can update it in the future, 
assuming NiFi can pick the right NAR for an existing flow when a user upgrades 
NiFi even if the NAR name changes.

If you agree with that approach, or to keep the discussion on that specific 
topic separate, would you revert the HBase dependency version back to 1.1.2 
for this PR and submit another JIRA to do the version bump? Then we can close 
this one.

@bbende Any comment from you on this subject would be appreciated.


---


[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469781#comment-16469781
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2518
  
@MikeThomsen Thanks for the updates. It mostly looks good. The only concern 
I have is the gap between the name of the NAR and the HBase client version. One 
possible solution is to rename the NAR from `nifi-hbase_1_1_2-client-service` 
to `nifi-hbase_1_1_x-client-service` or `nifi-hbase_1_1-client-service`, to 
avoid having too specific a version in it so that we can update it in the future, 
assuming NiFi can pick the right NAR for an existing flow when a user upgrades 
NiFi even if the NAR name changes.

If you agree with that approach, or to keep the discussion on that specific 
topic separate, would you revert the HBase dependency version back to 1.1.2 
for this PR and submit another JIRA to do the version bump? Then we can close 
this one.

@bbende Any comment from you on this subject would be appreciated.


> Add support for HBase visibility labels to HBase processors and controller 
> services
> ---
>
> Key: NIFI-4637
> URL: https://issues.apache.org/jira/browse/NIFI-4637
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> HBase supports visibility labels, but you can't use them from NiFi because 
> there is no way to set them. The existing processors and services should be 
> upgraded to handle this capability.
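
For context, a short sketch of how a visibility expression is attached to a cell using the plain HBase 1.x client API; the table, family, qualifier, and label names are arbitrary, and this is not the NiFi processor or controller-service code.

{code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.security.visibility.CellVisibility;
import org.apache.hadoop.hbase.util.Bytes;

public class VisibilityLabelSketch {
    public static void main(String[] args) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = connection.getTable(TableName.valueOf("example"))) {
            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value"));
            // The visibility expression restricts which authorizations may read this cell;
            // exposing a way to set this per mutation is the capability the ticket asks for.
            put.setCellVisibility(new CellVisibility("PII & ADMIN"));
            table.put(put);
        }
    }
}
{code}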



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3753) ListenBeats: Compressed beats packets may cause: Error decoding Beats frame: Error decompressing frame: invalid distance too far back

2018-05-09 Thread Nicholas Carenza (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469614#comment-16469614
 ] 

Nicholas Carenza commented on NIFI-3753:


Thanks John. Unfortunately, when I set max_bulk_size to 0, nothing happens at 
all. If I set it to 1, lines get sent, albeit very slowly, and I/O timeouts occur 
constantly. Filebeat v5.6.6 and NiFi v1.3.

> ListenBeats: Compressed beats packets may cause: Error decoding Beats  frame: 
> Error decompressing  frame: invalid distance too far back
> ---
>
> Key: NIFI-3753
> URL: https://issues.apache.org/jira/browse/NIFI-3753
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Andre F de Miranda
>Priority: Critical
>
> 2017-04-28 02:03:37,153 ERROR [pool-106-thread-1] o.a.nifi.processors.beats.ListenBeats
> org.apache.nifi.processors.beats.frame.BeatsFrameException: Error decoding Beats  frame: Error decompressing  frame: invalid distance too far back
>     at org.apache.nifi.processors.beats.frame.BeatsDecoder.process(BeatsDecoder.java:123) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
>     at org.apache.nifi.processors.beats.handler.BeatsSocketChannelHandler.processBuffer(BeatsSocketChannelHandler.java:71) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
>     at org.apache.nifi.processor.util.listen.handler.socket.StandardSocketChannelHandler.run(StandardSocketChannelHandler.java:76) [nifi-processor-utils-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
>     at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Caused by: org.apache.nifi.processors.beats.frame.BeatsFrameException: Error decompressing  frame: invalid distance too far back
>     at org.apache.nifi.processors.beats.frame.BeatsDecoder.processPAYLOAD(BeatsDecoder.java:292) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
>     at org.apache.nifi.processors.beats.frame.BeatsDecoder.process(BeatsDecoder.java:103) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
>     ... 5 common frames omitted
> Caused by: java.util.zip.ZipException: invalid distance too far back
>     at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164) ~[na:1.8.0_131]
>     at java.io.FilterInputStream.read(FilterInputStream.java:107) ~[na:1.8.0_131]
>     at org.apache.nifi.processors.beats.frame.BeatsDecoder.processPAYLOAD(BeatsDecoder.java:277) ~[nifi-beats-processors-1.2.0-SNAPSHOT.jar:1.2.0-SNAPSHOT]
>     ... 6 common frames omitted



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry issue #117: NIFIREG-160 Implement a hook provider

2018-05-09 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/117
  
For an easy way to test this you can uncomment the 
LoggingEventHookProvider in providers.xml and then use the registry as normal 
to create buckets and save flows from NiFi, then tail or inspect 
logs/nifi-registry-event.log


---


[jira] [Commented] (NIFIREG-160) Implement a hook provider

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469486#comment-16469486
 ] 

ASF GitHub Bot commented on NIFIREG-160:


Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/117
  
For an easy way to test this you can uncomment the 
LoggingEventHookProvider in providers.xml and then use the registry as normal 
to create buckets and save flows from NiFi, then tail or inspect 
logs/nifi-registry-event.log


> Implement a hook provider
> -
>
> Key: NIFIREG-160
> URL: https://issues.apache.org/jira/browse/NIFIREG-160
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> In order to extend NiFi Registry and NiFi CLI features to integrate with 
> automated deployment pipelines, it would be useful to provide a hook 
> extension point that can be configured by users to trigger actions when a new 
> flow snapshot version is committed in the Registry.
> A first implementation of this extension point could be a "script hook": a 
> script would be executed when a new flow snapshot version is committed. 
> Arguments passed to the script would be: bucket ID, flow ID, version, author 
> and comment.
> This would enable a lot of scenarios, including automatically deploying flows 
> from one environment to another.
>  
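
To make the script-hook idea concrete, a minimal, hypothetical sketch of how such a script could be invoked with the arguments listed above. This is plain Java only; it is not the NiFi Registry hook-provider API, and the class and method names are illustrative.

{code}
import java.io.IOException;

public class ScriptHookSketch {
    // Illustrative only: invoke an external script with the commit metadata as arguments.
    static void onVersionCommitted(String scriptPath, String bucketId, String flowId,
                                   int version, String author, String comment)
            throws IOException, InterruptedException {
        Process process = new ProcessBuilder(
                scriptPath, bucketId, flowId, String.valueOf(version), author, comment)
                .inheritIO()   // let the script's output go to the caller's console/log
                .start();
        int exitCode = process.waitFor();
        if (exitCode != 0) {
            throw new IOException("Hook script exited with code " + exitCode);
        }
    }
}
{code}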



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-160) Implement a hook provider

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469484#comment-16469484
 ] 

ASF GitHub Bot commented on NIFIREG-160:


GitHub user bbende opened a pull request:

https://github.com/apache/nifi-registry/pull/117

NIFIREG-160 Implement a hook provider

For whoever reviews/merges this, please keep the commit history and don't 
squash.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi-registry hook-provider

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/117.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #117


commit 3371a07068dbbd5b3984f386cf61bc297ba608a9
Author: Pierre Villard 
Date:   2018-04-06T14:58:33Z

NIFIREG-160 - Initial hook provider

commit f99cf19d49bf5d301ada83a5c128b645da00458f
Author: Bryan Bende 
Date:   2018-05-08T17:38:33Z

NIFIREG-160 - Making event hooks asynchronous




> Implement a hook provider
> -
>
> Key: NIFIREG-160
> URL: https://issues.apache.org/jira/browse/NIFIREG-160
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> In order to extend NiFi Registry and NiFi CLI features to integrate with 
> automated deployment pipelines, it would be useful to provide a hook 
> extension point that can be configured by users to trigger actions when a new 
> flow snapshot version is committed in the Registry.
> A first implementation of this extension point could be a "script hook": a 
> script would be executed when a new flow snapshot version is committed. 
> Arguments passed to the script would be: bucket ID, flow ID, version, author 
> and comment.
> This would enable a lot of scenarios, including automatically deploying flows 
> from one environment to another.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #117: NIFIREG-160 Implement a hook provider

2018-05-09 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi-registry/pull/117

NIFIREG-160 Implement a hook provider

For whoever reviews/merges this, please keep the commit history and don't 
squash.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi-registry hook-provider

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/117.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #117


commit 3371a07068dbbd5b3984f386cf61bc297ba608a9
Author: Pierre Villard 
Date:   2018-04-06T14:58:33Z

NIFIREG-160 - Initial hook provider

commit f99cf19d49bf5d301ada83a5c128b645da00458f
Author: Bryan Bende 
Date:   2018-05-08T17:38:33Z

NIFIREG-160 - Making event hooks asynchronous




---


[jira] [Created] (NIFI-5180) Additional Details for Consume/PublishJMS indicate incorrect default Destination Type

2018-05-09 Thread Mark Bean (JIRA)
Mark Bean created NIFI-5180:
---

 Summary: Additional Details for Consume/PublishJMS indicate 
incorrect default Destination Type
 Key: NIFI-5180
 URL: https://issues.apache.org/jira/browse/NIFI-5180
 Project: Apache NiFi
  Issue Type: Bug
  Components: Documentation & Website
Affects Versions: 1.6.0
Reporter: Mark Bean
Assignee: Mark Bean


The additionalDetails.html for both ConsumeJMS and PublishJMS indicate the 
default value of 'Destination Type' is 'TOPIC'. In reality, it is 'QUEUE' (as 
defined in AbstractJMSProcessor.java)

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5167) Reporting Task Controller Service UI failure

2018-05-09 Thread Scott Aslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-5167:
--
   Resolution: Fixed
Fix Version/s: 1.7.0
   Status: Resolved  (was: Patch Available)

> Reporting Task Controller Service UI failure
> 
>
> Key: NIFI-5167
> URL: https://issues.apache.org/jira/browse/NIFI-5167
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Mark Bean
>Assignee: Matt Gilman
>Priority: Major
> Fix For: 1.7.0
>
>
> The UI for Controller Services related to a Reporting Task can become 
> non-responsive. Test case: Create a Reporting Task which requires a 
> Controller Service (e.g. SiteToSiteBulletinReportingTask.) When configuring 
> the properties for the Reporting Task, select the Controller Service property 
> (e.g. SSL Context Service) and choose "create new service". Then, configure 
> the resultant Controller Service (e.g. StandardRestrictedSSLContextService.) 
> When choosing "Apply", the properties popup window is not dismissed. Yet, the 
> properties appear to apply successfully because on subsequent configuration, 
> the properties remain as previously set. Also, a message in the app.log 
> indicates:
> "INFO [Flow Service Tasks Thread-2] o.a.nifi.controller.StandardFlowService 
> Saved flow controller org.apache.nifi.controller.FlowController@443b57d7 // 
> Another save pending = false"
> The Controller Service configuration window should dismiss appropriately when 
> "Apply" button is selected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2688: NIFI-5167: Updating how the nf-reporting-task module is in...

2018-05-09 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2688
  
Thanks @mcgilman this has been merged to master.


---


[jira] [Commented] (NIFI-5167) Reporting Task Controller Service UI failure

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469355#comment-16469355
 ] 

ASF GitHub Bot commented on NIFI-5167:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2688
  
Thanks @mcgilman this has been merged to master.


> Reporting Task Controller Service UI failure
> 
>
> Key: NIFI-5167
> URL: https://issues.apache.org/jira/browse/NIFI-5167
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Mark Bean
>Assignee: Matt Gilman
>Priority: Major
> Fix For: 1.7.0
>
>
> The UI for Controller Services related to a Reporting Task can become 
> non-responsive. Test case: Create a Reporting Task which requires a 
> Controller Service (e.g. SiteToSiteBulletinReportingTask.) When configuring 
> the properties for the Reporting Task, select the Controller Service property 
> (e.g. SSL Context Service) and choose "create new service". Then, configure 
> the resultant Controller Service (e.g. StandardRestrictedSSLContextService.) 
> When choosing "Apply", the properties popup window is not dismissed. Yet, the 
> properties appear to apply successfully because on subsequent configuration, 
> the properties remain as previously set. Also, a message in the app.log 
> indicates:
> "INFO [Flow Service Tasks Thread-2] o.a.nifi.controller.StandardFlowService 
> Saved flow controller org.apache.nifi.controller.FlowController@443b57d7 // 
> Another save pending = false"
> The Controller Service configuration window should dismiss appropriately when 
> "Apply" button is selected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2688: NIFI-5167: Updating how the nf-reporting-task modul...

2018-05-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2688


---


[jira] [Commented] (NIFI-5167) Reporting Task Controller Service UI failure

2018-05-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469339#comment-16469339
 ] 

ASF subversion and git services commented on NIFI-5167:
---

Commit 22342a0e0c426e0a784925ac59161cbf7eba0c42 in nifi's branch 
refs/heads/master from [~mcgilman]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=22342a0 ]

NIFI-5167:
- Updating how the nf-reporting-task module is injected to the 
nf-controller-service module.

This closes #2688

Signed-off-by: Scott Aslan 


> Reporting Task Controller Service UI failure
> 
>
> Key: NIFI-5167
> URL: https://issues.apache.org/jira/browse/NIFI-5167
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Mark Bean
>Assignee: Matt Gilman
>Priority: Major
>
> The UI for Controller Services related to a Reporting Task can become 
> non-responsive. Test case: Create a Reporting Task which requires a 
> Controller Service (e.g. SiteToSiteBulletinReportingTask.) When configuring 
> the properties for the Reporting Task, select the Controller Service property 
> (e.g. SSL Context Service) and choose "create new service". Then, configure 
> the resultant Controller Service (e.g. StandardRestrictedSSLContextService.) 
> When choosing "Apply", the properties popup window is not dismissed. Yet, the 
> properties appear to apply successfully because on subsequent configuration, 
> the properties remain as previously set. Also, a message in the app.log 
> indicates:
> "INFO [Flow Service Tasks Thread-2] o.a.nifi.controller.StandardFlowService 
> Saved flow controller org.apache.nifi.controller.FlowController@443b57d7 // 
> Another save pending = false"
> The Controller Service configuration window should dismiss appropriately when 
> "Apply" button is selected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5167) Reporting Task Controller Service UI failure

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469341#comment-16469341
 ] 

ASF GitHub Bot commented on NIFI-5167:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2688


> Reporting Task Controller Service UI failure
> 
>
> Key: NIFI-5167
> URL: https://issues.apache.org/jira/browse/NIFI-5167
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Mark Bean
>Assignee: Matt Gilman
>Priority: Major
>
> The UI for Controller Services related to a Reporting Task can become 
> non-responsive. Test case: Create a Reporting Task which requires a 
> Controller Service (e.g. SiteToSiteBulletinReportingTask.) When configuring 
> the properties for the Reporting Task, select the Controller Service property 
> (e.g. SSL Context Service) and choose "create new service". Then, configure 
> the resultant Controller Service (e.g. StandardRestrictedSSLContextService.) 
> When choosing "Apply", the properties popup window is not dismissed. Yet, the 
> properties appear to apply successfully because on subsequent configuration, 
> the properties remain as previously set. Also, a message in the app.log 
> indicates:
> "INFO [Flow Service Tasks Thread-2] o.a.nifi.controller.StandardFlowService 
> Saved flow controller org.apache.nifi.controller.FlowController@443b57d7 // 
> Another save pending = false"
> The Controller Service configuration window should dismiss appropriately when 
> "Apply" button is selected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5073) JMSConnectionFactory doesn't resolve 'variables' properly

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469247#comment-16469247
 ] 

ASF GitHub Bot commented on NIFI-5073:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2653#discussion_r187131152
  
--- Diff: nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java ---
@@ -159,13 +159,15 @@ public void enable(ConfigurationContext context) throws InitializationException
         if (logger.isInfoEnabled()) {
             logger.info("Configuring " + this.getClass().getSimpleName() + " for '"
                     + context.getProperty(CONNECTION_FACTORY_IMPL).evaluateAttributeExpressions().getValue()
                     + "' to be connected to '"
-                    + BROKER_URI + "'");
+                    + context.getProperty(BROKER_URI).evaluateAttributeExpressions().getValue() + "'");
         }
+
         // will load user provided libraries/resources on the classpath
-        Utils.addResourcesToClasspath(context.getProperty(CLIENT_LIB_DIR_PATH).evaluateAttributeExpressions().getValue());
+        final String clientLibPath = context.getProperty(CLIENT_LIB_DIR_PATH).evaluateAttributeExpressions().getValue();
+        ClassLoader customClassLoader = ClassLoaderUtils.getCustomClassLoader(clientLibPath, this.getClass().getClassLoader(), null);
+        Thread.currentThread().setContextClassLoader(customClassLoader);
--- End diff --

The problem with this approach I think is that we're now creating the 
ClassLoader and using it to create the connection factory. However, when we do 
that, we would need to ensure that any access to the connection factory 
instance also is performed using the ClassLoader. Since all calls into this 
Controller Service could come from different threads, I think this is going to 
cause a problem.

I think the typical pattern here is to update the property descriptor of 
CLIENT_LIB_DIR_PATH to include .dynamicallyModifiesClassPath(true). In this 
case, the framework will automatically handle creating the appropriate 
ClassLoader for each instance of the controller service and will also ensure 
that the appropriate ClassLoader is set when any method on this Controller 
Service is invoked.
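
A small sketch of the pattern described above, assuming a property roughly like the existing CLIENT_LIB_DIR_PATH descriptor (the name, description, and validator here are illustrative). With dynamicallyModifiesClassPath(true), the framework builds and applies the per-instance ClassLoader whenever the controller service is invoked, instead of the service swapping the thread context ClassLoader itself.

{code}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

public class ClientLibClasspathSketch {
    static final PropertyDescriptor CLIENT_LIB_DIR_PATH = new PropertyDescriptor.Builder()
            .name("Client Library Directory Path")
            .description("Path to the directory containing the JMS client libraries.")
            .required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .dynamicallyModifiesClassPath(true)   // framework-managed ClassLoader per instance
            .build();
}
{code}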


> JMSConnectionFactory doesn't resolve 'variables' properly
> -
>
> Key: NIFI-5073
> URL: https://issues.apache.org/jira/browse/NIFI-5073
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Matthew Clarke
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
> Attachments: 
> 0001-NIFI-5073-JMSConnectionFactoryProvider-now-resolves-.patch
>
>
> Create a new process Group.
> Add "Variables" to the process group:
> for example:
> broker_uri=tcp://localhost:4141
> client_libs=/NiFi/custom-lib-dir/MQlib
> con_factory=blah
> Then, while that process group is selected, create a controller service.
> Create JMSConnectionFactory.
> Configure this controller service to use EL for the PG-defined variables above:
> ${con_factory}, ${client_libs}, and ${broker_uri}
> The controller service will remain invalid because the EL statements are not 
> properly resolved to their set values.
> Doing the exact same thing above using the external NiFi registry file works 
> as expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2653: NIFI-5073: JMSConnectionFactoryProvider now resolve...

2018-05-09 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2653#discussion_r187131152
  
--- Diff: nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java ---
@@ -159,13 +159,15 @@ public void enable(ConfigurationContext context) throws InitializationException
         if (logger.isInfoEnabled()) {
             logger.info("Configuring " + this.getClass().getSimpleName() + " for '"
                     + context.getProperty(CONNECTION_FACTORY_IMPL).evaluateAttributeExpressions().getValue()
                     + "' to be connected to '"
-                    + BROKER_URI + "'");
+                    + context.getProperty(BROKER_URI).evaluateAttributeExpressions().getValue() + "'");
         }
+
         // will load user provided libraries/resources on the classpath
-        Utils.addResourcesToClasspath(context.getProperty(CLIENT_LIB_DIR_PATH).evaluateAttributeExpressions().getValue());
+        final String clientLibPath = context.getProperty(CLIENT_LIB_DIR_PATH).evaluateAttributeExpressions().getValue();
+        ClassLoader customClassLoader = ClassLoaderUtils.getCustomClassLoader(clientLibPath, this.getClass().getClassLoader(), null);
+        Thread.currentThread().setContextClassLoader(customClassLoader);
--- End diff --

The problem with this approach I think is that we're now creating the 
ClassLoader and using it to create the connection factory. However, when we do 
that, we would need to ensure that any access to the connection factory 
instance also is performed using the ClassLoader. Since all calls into this 
Controller Service could come from different threads, I think this is going to 
cause a problem.

I think the typical pattern here is to update the property descriptor of 
CLIENT_LIB_DIR_PATH to include .dynamicallyModifiesClassPath(true). In this 
case, the framework will automatically handle creating the appropriate 
ClassLoader for each instance of the controller service and will also ensure 
that the appropriate ClassLoader is set when any method on this Controller 
Service is invoked.


---


[jira] [Updated] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-05-09 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4942:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1 merged to master

> NiFi Toolkit - Allow migration of master key without previous password
> --
>
> Key: NIFI-4942
> URL: https://issues.apache.org/jira/browse/NIFI-4942
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.5.0
>Reporter: Yolanda M. Davis
>Assignee: Andy LoPresto
>Priority: Major
> Fix For: 1.7.0
>
> Attachments: 
> TEST-org.apache.nifi.properties.ConfigEncryptionToolTest.xml
>
>
> Currently the encryption cli in nifi toolkit requires that, in order to 
> migrate from one master key to the next, the previous master key or password 
> should be provided. In cases where the provisioning tool doesn't have the 
> previous value available, this becomes challenging to provide and may be prone 
> to error. In speaking with [~alopresto], we can allow the toolkit to support a 
> mode of execution such that the master key can be updated without requiring 
> the previous password. Also, documentation around its usage should be updated 
> to clearly describe the purpose and the type of environment where this 
> command should be used (admin-only access, etc.).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469195#comment-16469195
 ] 

ASF GitHub Bot commented on NIFI-4942:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2690


> NiFi Toolkit - Allow migration of master key without previous password
> --
>
> Key: NIFI-4942
> URL: https://issues.apache.org/jira/browse/NIFI-4942
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.5.0
>Reporter: Yolanda M. Davis
>Assignee: Andy LoPresto
>Priority: Major
> Fix For: 1.7.0
>
> Attachments: 
> TEST-org.apache.nifi.properties.ConfigEncryptionToolTest.xml
>
>
> Currently the encryption cli in nifi toolkit requires that, in order to 
> migrate from one master key to the next, the previous master key or password 
> should be provided. In cases where the provisioning tool doesn't have the 
> previous value available, this becomes challenging to provide and may be prone 
> to error. In speaking with [~alopresto], we can allow the toolkit to support a 
> mode of execution such that the master key can be updated without requiring 
> the previous password. Also, documentation around its usage should be updated 
> to clearly describe the purpose and the type of environment where this 
> command should be used (admin-only access, etc.).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-05-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469194#comment-16469194
 ] 

ASF subversion and git services commented on NIFI-4942:
---

Commit 4f1444c0e09d974cbdca51abdd916e49fa0cfd62 in nifi's branch 
refs/heads/master from [~alopresto]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=4f1444c ]

NIFI-4942 This closes #2690. Resolved test failures in JCE limited mode.

Signed-off-by: joewitt 


> NiFi Toolkit - Allow migration of master key without previous password
> --
>
> Key: NIFI-4942
> URL: https://issues.apache.org/jira/browse/NIFI-4942
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.5.0
>Reporter: Yolanda M. Davis
>Assignee: Andy LoPresto
>Priority: Major
> Fix For: 1.7.0
>
> Attachments: 
> TEST-org.apache.nifi.properties.ConfigEncryptionToolTest.xml
>
>
> Currently the encryption cli in nifi toolkit requires that, in order to 
> migrate from one master key to the next, the previous master key or password 
> should be provided. In cases where the provisioning tool doesn't have the 
> previous value available, this becomes challenging to provide and may be prone 
> to error. In speaking with [~alopresto], we can allow the toolkit to support a 
> mode of execution such that the master key can be updated without requiring 
> the previous password. Also, documentation around its usage should be updated 
> to clearly describe the purpose and the type of environment where this 
> command should be used (admin-only access, etc.).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2690: NIFI-4942 Resolved test failures in JCE limited mod...

2018-05-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2690


---


[GitHub] nifi issue #2671: NiFi-5102 - Adding Processors for MarkLogic DB

2018-05-09 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2671
  
Ok, I've attached a patch which helps with some aspects of POM construction, 
flags things like resource utilization since it appears to be loading full 
content into memory, and renames the service to indicate it is a MarkLogic 
service rather than just a database service. There is an outstanding need to 
sort out the security configuration. For the SSLContext, those things should 
utilize the standard mechanism of obtaining it, as you can follow from a 
number of other processors. Also, there is a Kerberos context security 
setting, but there do not appear to be any associated settings for the user. 
The security configurations should be removed in favor of simple/digest for 
now OR completed with some consistency with other items. For security-relevant 
things, CVEs become a concern, so we take these more seriously. Things about 
the performance/logic of the processor's interaction with MarkLogic we can 
improve over time if needed, but security we want to get right up front. The 
other thing that needs to happen is that the NAR bundles need their 
LICENSE/NOTICE file(s) added if necessary. I looked at one of the NARs and 
there would definitely need to be entries. Please try adding these in like 
other NARs and I'm happy to help tweak it to get it to the finish line.

If you have questions on how to achieve any of the above, please ask. Show 
an example NAR you looked at which is similar, so that we can best help close 
the remaining gaps from a place of good examples that you've looked at.

Thanks


---


[jira] [Created] (NIFI-5179) List based processor documentation should not say that DistributedMapCache service should be used.

2018-05-09 Thread Matthew Clarke (JIRA)
Matthew Clarke created NIFI-5179:


 Summary: List based processor documentation should not say that 
DistributedMapCache service should be used.
 Key: NIFI-5179
 URL: https://issues.apache.org/jira/browse/NIFI-5179
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Affects Versions: 1.6.0, 1.5.0, 1.4.0, 1.3.0, 1.2.0, 1.1.0
Reporter: Matthew Clarke


Some of the list-based processors were developed before the Apache NiFi 1.0 
release and relied on the DistributedMapCache service to retain cluster state. 
The documentation leads to confusion now that cluster state is stored in 
ZooKeeper. Documentation should be updated to illustrate what purpose the 
DistributedMapCache service has post NiFi 1.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5102) MarkLogic DB Processors

2018-05-09 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-5102:
--
Attachment: 0002-NIFI-5102-making-some-updates-to-help-but-security-n.patch

> MarkLogic DB Processors
> ---
>
> Key: NIFI-5102
> URL: https://issues.apache.org/jira/browse/NIFI-5102
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Anthony Roach
>Priority: Major
> Fix For: 1.7.0
>
> Attachments: 
> 0002-NIFI-5102-making-some-updates-to-help-but-security-n.patch
>
>
> As a data architect, I need to ingest data from my NiFi FlowFile into 
> MarkLogic database documents.  I have created the following two processors:
>  * PutMarkLogic:  Ingest FlowFile into MarkLogic database documents
>  * QueryMarkLogic:  Retrieve result set from MarkLogic into FlowFile
> I will create a pull request.
> [www.marklogic.com|http://www.marklogic.com/] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5064) Fixes and improvements to PutKudu processor

2018-05-09 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-5064.
--
   Resolution: Fixed
Fix Version/s: 1.7.0

> Fixes and improvements to PutKudu processor
> ---
>
> Key: NIFI-5064
> URL: https://issues.apache.org/jira/browse/NIFI-5064
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: Junegunn Choi
>Priority: Major
> Fix For: 1.7.0
>
>
> 1. Currently, PutKudu fails with NPE on null or missing values.
> 2. {{IllegalArgumentException}} on 16-bit integer columns because of [a 
> missing {{break}} in case clause for INT16 
> columns|https://github.com/apache/nifi/blob/rel/nifi-1.6.0/nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/PutKudu.java#L112-L115].
> 3. Also, {{IllegalArgumentException}} on 8-bit integer columns. We need a 
> separate case clause for INT8 columns where {{PartialRow#addByte}} should be 
> used instead of {{PartialRow#addShort}}.
> 4. NIFI-4384 added a batch size parameter; however, it only applies to 
> FlowFiles with multiple records. {{KuduSession}} is created and closed for 
> each FlowFile, so if a FlowFile contains only a single record, no batching 
> takes place. A workaround would be to use a preprocessor to concatenate 
> multiple FlowFiles, but since {{PutHBase}} and {{PutSQL}} use 
> {{session.get(batchSize)}} to handle multiple FlowFiles at once, I think we 
> can take the same approach here with PutKudu as it simplifies the data flow.
> 5. {{PutKudu}} depends on kudu-client 1.3.0. But we can safely update to 
> 1.7.0.
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/release_notes.adoc]
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/prior_release_notes.adoc]
> A notable change in Kudu 1.7.0 is the addition of Decimal type.
> 6. {{PutKudu}} has {{Skip head line}} property for ignoring the first record 
> in a FlowFile. I suppose this was added to handle header lines in CSV files, 
> but I really don't think it's something {{PutKudu}} should handle. 
> {{CSVReader}} already has {{Treat First Line as Header}} option, so we should 
> tell the users to use it instead as we don't want to have the same option 
> here and there. Also, the default value of {{Skip head line}} is {{true}}, 
> and I found it very confusing as my use case was to stream-process 
> single-record FlowFiles. We can keep this property for backward 
> compatibility, but we should at least deprecate it and change the default 
> value to {{false}}.
> 7. Server-side errors such as uniqueness constraint violations are not 
> checked and are simply ignored. When flush mode is set to {{AUTO_FLUSH_SYNC}}, 
> we should check the return value of {{KuduSession#apply}} to see if it has a 
> {{RowError}}, but PutKudu currently ignores it. For example, on uniqueness constraint 
> violation, we get a {{RowError}} saying "_Already present: key already 
> present (error 0)_".
> On the other hand, when flush mode is set to {{AUTO_FLUSH_BACKGROUND}}, 
> {{KuduSession#apply}}, understandably, returns null, and we should check the 
> return value of {{KuduSession#getPendingErrors()}}. And when the mode is 
> {{MANUAL_FLUSH}}, we should examine the return value of 
> {{KuduSession#flush()}} or {{KuduSession#close()}}. In this case, we also 
> have to make sure that we don't overflow the mutation buffer of 
> {{KuduSession}} by calling {{flush()}} before it's too late.
> 
> I'll create a pull request on GitHub. Since there are multiple issues to be 
> addressed, I made separate commits for each issue mentioned above so that 
> it's easier to review. You might want to squash them into one, or cherry-pick 
> a subset of commits if you don't agree with some decisions I made.
> Please let me know what you think. We deployed the code to a production 
> server last week, and it has been running steadily since, processing 20K 
> records/second without any issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5064) Fixes and improvements to PutKudu processor

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469178#comment-16469178
 ] 

ASF GitHub Bot commented on NIFI-5064:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2621


> Fixes and improvements to PutKudu processor
> ---
>
> Key: NIFI-5064
> URL: https://issues.apache.org/jira/browse/NIFI-5064
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: Junegunn Choi
>Priority: Major
> Fix For: 1.7.0
>
>
> 1. Currently, PutKudu fails with NPE on null or missing values.
> 2. {{IllegalArgumentException}} on 16-bit integer columns because of [a 
> missing {{break}} in case clause for INT16 
> columns|https://github.com/apache/nifi/blob/rel/nifi-1.6.0/nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/PutKudu.java#L112-L115].
> 3. Also, {{IllegalArgumentException}} on 8-bit integer columns. We need a 
> separate case clause for INT8 columns where {{PartialRow#addByte}} should be 
> used instead of {{PartialRow#addShort}}.
> 4. NIFI-4384 added a batch size parameter; however, it only applies to 
> FlowFiles with multiple records. {{KuduSession}} is created and closed for 
> each FlowFile, so if a FlowFile contains only a single record, no batching 
> takes place. A workaround would be to use a preprocessor to concatenate 
> multiple FlowFiles, but since {{PutHBase}} and {{PutSQL}} use 
> {{session.get(batchSize)}} to handle multiple FlowFiles at once, I think we 
> can take the same approach here with PutKudu as it simplifies the data flow.
> 5. {{PutKudu}} depends on kudu-client 1.3.0. But we can safely update to 
> 1.7.0.
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/release_notes.adoc]
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/prior_release_notes.adoc]
> A notable change in Kudu 1.7.0 is the addition of Decimal type.
> 6. {{PutKudu}} has {{Skip head line}} property for ignoring the first record 
> in a FlowFile. I suppose this was added to handle header lines in CSV files, 
> but I really don't think it's something {{PutKudu}} should handle. 
> {{CSVReader}} already has {{Treat First Line as Header}} option, so we should 
> tell the users to use it instead as we don't want to have the same option 
> here and there. Also, the default value of {{Skip head line}} is {{true}}, 
> and I found it very confusing as my use case was to stream-process 
> single-record FlowFiles. We can keep this property for backward 
> compatibility, but we should at least deprecate it and change the default 
> value to {{false}}.
> 7. Server-side errors such as uniqueness constraint violations are not 
> checked and are simply ignored. When flush mode is set to {{AUTO_FLUSH_SYNC}}, 
> we should check the return value of {{KuduSession#apply}} to see if it has a 
> {{RowError}}, but PutKudu currently ignores it. For example, on uniqueness constraint 
> violation, we get a {{RowError}} saying "_Already present: key already 
> present (error 0)_".
> On the other hand, when flush mode is set to {{AUTO_FLUSH_BACKGROUND}}, 
> {{KuduSession#apply}}, understandably, returns null, and we should check the 
> return value of {{KuduSession#getPendingErrors()}}. And when the mode is 
> {{MANUAL_FLUSH}}, we should examine the return value of 
> {{KuduSession#flush()}} or {{KuduSession#close()}}. In this case, we also 
> have to make sure that we don't overflow the mutation buffer of 
> {{KuduSession}} by calling {{flush()}} before it's too late.
> 
> I'll create a pull request on GitHub. Since there are multiple issues to be 
> addressed, I made separate commits for each issue mentioned above so that 
> it's easier to review. You might want to squash them into one, or cherry-pick 
> a subset of commits if you don't agree with some decisions I made.
> Please let me know what you think. We deployed the code to a production 
> server last week, and it has been running steadily since, processing 20K 
> records/second without any issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5064) Fixes and improvements to PutKudu processor

2018-05-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469177#comment-16469177
 ] 

ASF subversion and git services commented on NIFI-5064:
---

Commit 02ba4cf2c85221af731f310b9f2d1624f4e2f446 in nifi's branch 
refs/heads/master from [~junegunn]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=02ba4cf ]

NIFI-5064 - Fixes and improvements to PutKudu processor

Signed-off-by: Pierre Villard 

This closes #2621.


> Fixes and improvements to PutKudu processor
> ---
>
> Key: NIFI-5064
> URL: https://issues.apache.org/jira/browse/NIFI-5064
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: Junegunn Choi
>Priority: Major
>
> 1. Currently, PutKudu fails with NPE on null or missing values.
> 2. {{IllegalArgumentException}} on 16-bit integer columns because of [a 
> missing {{break}} in case clause for INT16 
> columns|https://github.com/apache/nifi/blob/rel/nifi-1.6.0/nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/PutKudu.java#L112-L115].
> 3. Also, {{IllegalArgumentException}} on 8-bit integer columns. We need a 
> separate case clause for INT8 columns where {{PartialRow#addByte}} should be 
> used instead of {{PartialRow#addShort}}.
> 4. NIFI-4384 added a batch size parameter; however, it only applies to 
> FlowFiles with multiple records. {{KuduSession}} is created and closed for 
> each FlowFile, so if a FlowFile contains only a single record, no batching 
> takes place. A workaround would be to use a preprocessor to concatenate 
> multiple FlowFiles, but since {{PutHBase}} and {{PutSQL}} use 
> {{session.get(batchSize)}} to handle multiple FlowFiles at once, I think we 
> can take the same approach here with PutKudu as it simplifies the data flow (see the sketch after this message).
> 5. {{PutKudu}} depends on kudu-client 1.3.0. But we can safely update to 
> 1.7.0.
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/release_notes.adoc]
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/prior_release_notes.adoc]
> A notable change in Kudu 1.7.0 is the addition of Decimal type.
> 6. {{PutKudu}} has {{Skip head line}} property for ignoring the first record 
> in a FlowFile. I suppose this was added to handle header lines in CSV files, 
> but I really don't think it's something {{PutKudu}} should handle. 
> {{CSVReader}} already has {{Treat First Line as Header}} option, so we should 
> tell the users to use it instead as we don't want to have the same option 
> here and there. Also, the default value of {{Skip head line}} is {{true}}, 
> and I found it very confusing as my use case was to stream-process 
> single-record FlowFiles. We can keep this property for backward 
> compatibility, but we should at least deprecate it and change the default 
> value to {{false}}.
> 7. Server-side errors such as uniqueness constraint violation are not checked 
> and simply ignored. When flush mode is set to {{AUTO_FLUSH_SYNC}}, we should 
> check the return value of {{KuduSession#apply}} to see whether it has a {{RowError}}, 
> but PutKudu currently ignores it. For example, on uniqueness constraint 
> violation, we get a {{RowError}} saying "_Already present: key already 
> present (error 0)_".
> On the other hand, when flush mode is set to {{AUTO_FLUSH_BACKGROUND}}, 
> {{KuduSession#apply}}, understandably, returns null, and we should check the 
> return value of {{KuduSession#getPendingErrors()}}. And when the mode is 
> {{MANUAL_FLUSH}}, we should examine the return value of 
> {{KuduSession#flush()}} or {{KuduSession#close()}}. In this case, we also 
> have to make sure that we don't overflow the mutation buffer of 
> {{KuduSession}} by calling {{flush()}} before it is too late.
> 
> I'll create a pull request on GitHub. Since there are multiple issues to be 
> addressed, I made separate commits for each issue mentioned above so that 
> it's easier to review. You might want to squash them into one, or cherry-pick 
> a subset of commits if you don't agree with some decisions I made.
> Please let me know what you think. We deployed the code to a production 
> server last week and it has been running ever since without any issues, steadily 
> processing 20K records/second.
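
As a rough illustration of the batching approach described in item 4 above, the skeleton below pulls up to the configured batch size of FlowFiles per onTrigger call, the same pattern PutHBase and PutSQL use. The class, property, and relationship names are placeholders and all Kudu-specific work is elided:

{code:java}
import java.util.List;

import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;
import org.apache.nifi.processor.util.StandardValidators;

// Skeleton only: property/relationship registration and error handling are omitted.
public class BatchingPutSketch extends AbstractProcessor {

    static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
            .name("Batch Size")
            .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
            .defaultValue("100")
            .build();

    static final Relationship REL_SUCCESS = new Relationship.Builder().name("success").build();

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
        final int batchSize = context.getProperty(BATCH_SIZE).asInteger();

        // Grab up to batchSize FlowFiles in a single onTrigger call so that
        // single-record FlowFiles still share one KuduSession.
        final List<FlowFile> flowFiles = session.get(batchSize);
        if (flowFiles.isEmpty()) {
            return;
        }

        // (open one KuduSession here, apply the records from every FlowFile,
        //  then flush/close it once for the whole batch)
        for (final FlowFile flowFile : flowFiles) {
            session.transfer(flowFile, REL_SUCCESS);
        }
    }
}
{code}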



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2621: NIFI-5064 Fixes and improvements to PutKudu processor

2018-05-09 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2621
  
Hi @junegunn - I tested your changes on a simple workflow and what you 
propose here makes a lot of sense. I don't see any breaking change and the 
error handling looks OK to me. I'm a +1 and will merge to master. Thanks for 
your work and your very detailed JIRA. Really appreciated.


---


[jira] [Commented] (NIFI-5064) Fixes and improvements to PutKudu processor

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469167#comment-16469167
 ] 

ASF GitHub Bot commented on NIFI-5064:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2621
  
Hi @junegunn - I tested your changes on a simple workflow and what you 
propose here makes a lot of sense. I don't see any breaking change and the 
error handling looks OK to me. I'm a +1 and will merge to master. Thanks for 
your work and your very detailed JIRA. Really appreciated.


> Fixes and improvements to PutKudu processor
> ---
>
> Key: NIFI-5064
> URL: https://issues.apache.org/jira/browse/NIFI-5064
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: Junegunn Choi
>Priority: Major
>
> 1. Currently, PutKudu fails with NPE on null or missing values.
> 2. {{IllegalArgumentException}} on 16-bit integer columns because of [a 
> missing {{break}} in case clause for INT16 
> columns|https://github.com/apache/nifi/blob/rel/nifi-1.6.0/nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/PutKudu.java#L112-L115].
> 3. Also, {{IllegalArgumentException}} on 8-bit integer columns. We need a 
> separate case clause for INT8 columns where {{PartialRow#addByte}} instead of 
> {{PartialRow#addShort}} should be used (see the sketch after this message).
> 4. NIFI-4384 added a batch size parameter; however, it only applies to 
> FlowFiles with multiple records. {{KuduSession}} is created and closed for 
> each FlowFile, so if a FlowFile contains only a single record, no batching 
> takes place. A workaround would be to use a preprocessor to concatenate 
> multiple FlowFiles, but since {{PutHBase}} and {{PutSQL}} use 
> {{session.get(batchSize)}} to handle multiple FlowFiles at once, I think we 
> can take the same approach here with PutKudu as it simplifies the data flow.
> 5. {{PutKudu}} depends on kudu-client 1.3.0. But we can safely update to 
> 1.7.0.
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/release_notes.adoc]
>  - [https://github.com/apache/kudu/blob/1.7.0/docs/prior_release_notes.adoc]
> A notable change in Kudu 1.7.0 is the addition of Decimal type.
> 6. {{PutKudu}} has {{Skip head line}} property for ignoring the first record 
> in a FlowFile. I suppose this was added to handle header lines in CSV files, 
> but I really don't think it's something {{PutKudu}} should handle. 
> {{CSVReader}} already has {{Treat First Line as Header}} option, so we should 
> tell the users to use it instead as we don't want to have the same option 
> here and there. Also, the default value of {{Skip head line}} is {{true}}, 
> and I found it very confusing as my use case was to stream-process 
> single-record FlowFiles. We can keep this property for backward 
> compatibility, but we should at least deprecate it and change the default 
> value to {{false}}.
> 7. Server-side errors such as uniqueness constraint violation are not checked 
> and simply ignored. When flush mode is set to {{AUTO_FLUSH_SYNC}}, we should 
> check the return value of {{KuduSession#apply}} to see whether it has a {{RowError}}, 
> but PutKudu currently ignores it. For example, on uniqueness constraint 
> violation, we get a {{RowError}} saying "_Already present: key already 
> present (error 0)_".
> On the other hand, when flush mode is set to {{AUTO_FLUSH_BACKGROUND}}, 
> {{KuduSession#apply}}, understandably, returns null, and we should check the 
> return value of {{KuduSession#getPendingErrors()}}. And when the mode is 
> {{MANUAL_FLUSH}}, we should examine the return value of 
> {{KuduSession#flush()}} or {{KuduSession#close()}}. In this case, we also 
> have to make sure that we don't overflow the mutation buffer of 
> {{KuduSession}} by calling {{flush()}} before it is too late.
> 
> I'll create a pull request on GitHub. Since there are multiple issues to be 
> addressed, I made separate commits for each issue mentioned above so that 
> it's easier to review. You might want to squash them into one, or cherry-pick 
> a subset of commits if you don't agree with some decisions I made.
> Please let me know what you think. We deployed the code to a production 
> server last week and it has been running ever since without any issues, steadily 
> processing 20K records/second.
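
For items 2 and 3 above, the fix boils down to giving each integer width its own case clause and its own {{PartialRow}} setter. A minimal sketch (placeholder class and method names, not the patch itself):

{code:java}
import org.apache.kudu.ColumnSchema;
import org.apache.kudu.client.PartialRow;

final class IntegerColumnBinding {

    // INT16 previously fell through because of the missing break, and INT8 had
    // no case at all; each width needs its own PartialRow setter.
    static void bindIntegerValue(final PartialRow row, final ColumnSchema column, final Number value) {
        final String name = column.getName();
        switch (column.getType()) {
            case INT8:
                row.addByte(name, value.byteValue());
                break;
            case INT16:
                row.addShort(name, value.shortValue());
                break;
            case INT32:
                row.addInt(name, value.intValue());
                break;
            case INT64:
                row.addLong(name, value.longValue());
                break;
            default:
                throw new IllegalArgumentException("Not an integer column: " + name);
        }
    }
}
{code}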



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5178) Stackable content

2018-05-09 Thread eric twilegar (JIRA)
eric twilegar created NIFI-5178:
---

 Summary: Stackable content
 Key: NIFI-5178
 URL: https://issues.apache.org/jira/browse/NIFI-5178
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: eric twilegar


I'm having an issue where I need to make a decision as I'm processing a list of 
records.

It is similar to an upsert/merge type of flow: before routing, I need to check 
whether a record has already been imported, possibly through some other 
mechanism.

To do this I add an ExecuteSQL processor to the flow. All I really need is the 
execute.sql.rowcount:equals(0) expression; the actual results of the 
ExecuteSQL are useless to me. I'm simply trying to decide how to branch 
with RouteOnAttribute.

The workaround now is to store the original content in an attribute and then use 
ReplaceText to put it back after the ExecuteSQL processor. This can get quite 
cumbersome if you have 4 or 5 decision points.

What would be nice is if all processors could PUSH content instead of replacing 
it. Then add a processor called "PopContent" which would simply remove the last 
content that was pushed onto the FlowFile.

If content were a stack, you could go off and get some data, process it over a few 
stages, add attributes, and then pop back to the original content (sketched after 
this description). In my case ExecuteSQL wouldn't overwrite the content, but would 
instead just push new data onto the stack.

I'm not sure whether LookupService is a better mechanism for this going forward. 
It's possible I could do a lookup and, instead of enriching the data, add a 
boolean key to be used as a decision point later, such as 
"alreadyExistsInDatabase" : "true|false" rather than something like "Store_name" 
: "Greatest store on earth" that enrichment generally adds. I'm sure an 
SQLLookupService is coming.

Adding "original" transfer for executesql might also solve this issue without a 
lot major refactoring in nifi.

I may put in a ticket for adding "original" to ExecuteSQL, possibly looking at the 
code myself.

Thanks for the great tool!
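
A conceptual sketch of the content-stack idea (this is not part of the NiFi API today; the class below is purely illustrative of the requested behaviour):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

// If FlowFile content were a stack, a processor such as ExecuteSQL could push its
// result and a "PopContent" processor could restore the previous payload.
final class ContentStack {

    private final Deque<byte[]> contents = new ArrayDeque<>();

    void push(final byte[] newContent) {   // e.g. the ExecuteSQL result
        contents.push(newContent);
    }

    byte[] pop() {                         // what a PopContent processor would do
        contents.pop();                    // discard the temporary payload
        return contents.peek();            // the original content is current again
    }

    byte[] current() {
        return contents.peek();
    }
}
{code}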



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-950) Perform component validation asynchronously

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469086#comment-16469086
 ] 

ASF GitHub Bot commented on NIFI-950:
-

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2693
  
Will review...


> Perform component validation asynchronously
> ---
>
> Key: NIFI-950
> URL: https://issues.apache.org/jira/browse/NIFI-950
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Percivall
>Priority: Major
> Attachments: self_reference_flow_fix.xml
>
>
> I created a flow that is a self-referencing HTTP loop. The flow was working 
> fine, but I wanted to save the template for later testing. I downloaded the 
> flow as a template. Then I tried testing a Thread.sleep at the beginning 
> of the onConfigured, createSSLContext, and validate methods of 
> StandardSSLContextService. I did a mvn clean install in the 
> nifi-nar-bundles/nifi-standard-services/nifi-ssl-context-bundle/nifi-ssl-context-service
>  directory. Then a mvn clean install in the nifi-assembly directory. After I 
> imported the template, the UI became very slow when clicking to different 
> parts of the UI, such as configuring a processor and the controller services 
> window.
> I then stashed my changes and rebuilt the files. Once again I imported my 
> template, and attempting to configure a processor or accessing the controller 
> services window became very slow.
> The flow xml is attached. 
> ---
> The description and attachment showed an issue where long running validation 
> caused the UI to become unresponsive. This validation should be done 
> asynchronously so that the UI always remains responsive. Initial thoughts...
> - new state to indicate that validation is in progress
> - a mechanism for refreshing validation results
> - time out for waiting for validation to complete? or need to always be 
> validating all components in case their validity is based on something 
> environmental (like a configuration file that is modified outside of the 
> application)?
> - provide better support for components that are running and become invalid
> -- related to this, we need to provide guidance regarding the difference 
> between becoming invalid and when we should use features like bulletins and 
> yielding to relay runtime issues
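
A generic sketch of the async-validation idea from the list above (not NiFi framework code; all names are illustrative): validation runs on a background thread and request handling only reads the last cached result, so a slow validate() can no longer block the UI.

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

final class AsyncValidator {

    enum Status { VALIDATING, VALID, INVALID }

    static final class Result {
        final Status status;
        final List<String> problems;

        Result(Status status, List<String> problems) {
            this.status = status;
            this.problems = problems;
        }
    }

    interface ValidationTarget {
        List<String> validate();   // potentially slow, e.g. touches the file system
    }

    private final AtomicReference<Result> lastResult =
            new AtomicReference<>(new Result(Status.VALIDATING, Collections.emptyList()));

    // Re-validate on a fixed delay so environmental changes (a config file edited
    // outside the application, for instance) are eventually picked up.
    void start(ScheduledExecutorService executor, ValidationTarget target) {
        executor.scheduleWithFixedDelay(() -> {
            List<String> problems = target.validate();
            lastResult.set(problems.isEmpty()
                    ? new Result(Status.VALID, problems)
                    : new Result(Status.INVALID, problems));
        }, 0, 5, TimeUnit.SECONDS);
    }

    // Called from request handling: never blocks on validation.
    Result currentResult() {
        return lastResult.get();
    }
}
{code}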



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2693: NIFI-950: Make component validation asynchronous

2018-05-09 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2693
  
Will review...


---


[jira] [Resolved] (NIFI-5168) ReplaceText Processor Should Use Single FlowFile Processing Instead of Batch

2018-05-09 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-5168.
--
   Resolution: Fixed
Fix Version/s: 1.7.0

> ReplaceText Processor Should Use Single FlowFile Processing Instead of Batch
> 
>
> Key: NIFI-5168
> URL: https://issues.apache.org/jira/browse/NIFI-5168
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.7.0
>
>
> ReplaceText loads FlowFiles in batches. It should process them one at a time for a 
> consistent user experience.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5168) ReplaceText Processor Should Use Single FlowFile Processing Instead of Batch

2018-05-09 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5168:
-
Component/s: (was: Core Framework)
 Extensions

> ReplaceText Processor Should Use Single FlowFile Processing Instead of Batch
> 
>
> Key: NIFI-5168
> URL: https://issues.apache.org/jira/browse/NIFI-5168
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.7.0
>
>
> ReplaceText loads FlowFiles in batches. It should process them one at a time for a 
> consistent user experience.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2687: NiFI-5168 ReplaceText Processor Should Use Single F...

2018-05-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2687


---


[jira] [Commented] (NIFI-5168) ReplaceText Processor Should Use Single FlowFile Processing Instead of Batch

2018-05-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469068#comment-16469068
 ] 

ASF subversion and git services commented on NIFI-5168:
---

Commit 0a44bad76e6db2c163f6d6a78bbaf8184cdce7f7 in nifi's branch 
refs/heads/master from [~patricker]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=0a44bad ]

NIFI-5168 - ReplaceText Processor Should Use Single FlowFile Processing Instead 
of Batch

Signed-off-by: Pierre Villard 

This closes #2687.


> ReplaceText Processor Should Use Single FlowFile Processing Instead of Batch
> 
>
> Key: NIFI-5168
> URL: https://issues.apache.org/jira/browse/NIFI-5168
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
>
> ReplaceText loads FlowFiles in batches. It should process them one at a time for a 
> consistent user experience.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2670: NIFI-5138: Bug fix to ensure that when we have a CH...

2018-05-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2670


---


[GitHub] nifi issue #2687: NiFI-5168 ReplaceText Processor Should Use Single FlowFile...

2018-05-09 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2687
  
Change looks good to me, thanks @patricker, merging to master


---


[jira] [Commented] (NIFI-5138) JSON Record Readers providing wrong schema to sub-records when there is a CHOICE of multiple RECORD types

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469058#comment-16469058
 ] 

ASF GitHub Bot commented on NIFI-5138:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2670


> JSON Record Readers providing wrong schema to sub-records when there is a 
> CHOICE of multiple RECORD types
> -
>
> Key: NIFI-5138
> URL: https://issues.apache.org/jira/browse/NIFI-5138
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.7.0
>
>
> When the JSON Record Reader is used, if the schema provides a CHOICE of two 
> different RECORD types for a sub-record, then the sub-record's schema ends up 
> being an empty schema. This results in RecordPaths not evaluating properly. 
> For example, with the schema below:
> {code:java}
> {
>   "name": "top", "namespace": "nifi",
>   "type": "record",
>   "fields": [
>     { "name": "id", "type": "string" },
>     { "name": "child", "type": [{
>          "name": "first", "type": "record",
>          "fields": [{ "name": "name", "type": "string" }]
>        }, {
>          "name": "second", "type": "record",
>          "fields": [{ "name": "id", "type": "string" }]
>        }]
>      }
>   ]
> }{code}
>  
> If I then have the following JSON:
> {code:java}
> {
>   "id": "1234",
>   "child": {
>       "id": "4321"
>   }
> }{code}
> The result is that the record returned has the correct schema. However, if I 
> then call record.getValue("child"), I get back a Record object that has no 
> schema.
> This results in the RecordPath "/child/id" returning a null value.
>  
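
A small sketch of how the symptom could be checked with the NiFi record API (assuming {{record}} is the top-level {{Record}} produced by a JSON record reader from the JSON above; class and method names are placeholders):

{code:java}
import java.util.Optional;

import org.apache.nifi.record.path.FieldValue;
import org.apache.nifi.record.path.RecordPath;
import org.apache.nifi.record.path.RecordPathResult;
import org.apache.nifi.serialization.record.Record;

final class ChoiceSchemaCheck {

    static Object childId(final Record record) {
        // Before the fix the nested Record carried an empty schema...
        final Record child = (Record) record.getValue("child");
        child.getSchema();   // empty before NIFI-5138, the "second" RECORD schema after

        // ...so this RecordPath selected nothing and returned null instead of "4321".
        final RecordPathResult result = RecordPath.compile("/child/id").evaluate(record);
        final Optional<FieldValue> field = result.getSelectedFields().findFirst();
        return field.map(FieldValue::getValue).orElse(null);
    }
}
{code}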



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5138) JSON Record Readers providing wrong schema to sub-records when there is a CHOICE of multiple RECORD types

2018-05-09 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-5138:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> JSON Record Readers providing wrong schema to sub-records when there is a 
> CHOICE of multiple RECORD types
> -
>
> Key: NIFI-5138
> URL: https://issues.apache.org/jira/browse/NIFI-5138
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.7.0
>
>
> When the JSON Record Reader is used, if the schema provides a CHOICE of two 
> different RECORD types for a sub-record, then the sub-record's schema ends up 
> being an empty schema. This results in RecordPaths not evaluating properly. 
> For example, with the schema below:
> {code:java}
> {
>   "name": "top", "namespace": "nifi",
>   "type": "record",
>   "fields": [
>     { "name": "id", "type": "string" },
>     { "name": "child", "type": [{
>          "name": "first", "type": "record",
>          "fields": [{ "name": "name", "type": "string" }]
>        }, {
>          "name": "second", "type": "record",
>          "fields": [{ "name": "id", "type": "string" }]
>        }]
>      }
>   ]
> }{code}
>  
> If I then have the following JSON:
> {code:java}
> {
>   "id": "1234",
>   "child": {
>       "id": "4321"
>   }
> }{code}
> The result is that the record returned has the correct schema. However, if I 
> then call record.getValue("child"), I get back a Record object that has no 
> schema.
> This results in the RecordPath "/child/id" returning a null value.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5138) JSON Record Readers providing wrong schema to sub-records when there is a CHOICE of multiple RECORD types

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469056#comment-16469056
 ] 

ASF GitHub Bot commented on NIFI-5138:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2670
  
Code looks good to me, confirmed the reported issue and this is fixed with 
this PR. Merging to master.


> JSON Record Readers providing wrong schema to sub-records when there is a 
> CHOICE of multiple RECORD types
> -
>
> Key: NIFI-5138
> URL: https://issues.apache.org/jira/browse/NIFI-5138
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.7.0
>
>
> When the JSON Record Reader is used, if the schema provides a CHOICE of two 
> different RECORD types for a sub-record, then the sub-record's schema ends up 
> being an empty schema. This results in RecordPaths not evaluating properly. 
> For example, with the schema below:
> {code:java}
> {
>   "name": "top", "namespace": "nifi",
>   "type": "record",
>   "fields": [
>     { "name": "id", "type": "string" },
>     { "name": "child", "type": [{
>          "name": "first", "type": "record",
>          "fields": [{ "name": "name", "type": "string" }]
>        }, {
>          "name": "second", "type": "record",
>          "fields": [{ "name": "id", "type": "string" }]
>        }]
>      }
>   ]
> }{code}
>  
> If I then have the following JSON:
> {code:java}
> {
>   "id": "1234",
>   "child": {
>       "id": "4321"
>   }
> }{code}
> The result is that the record returned has the correct schema. However, if I 
> then call record.getValue("child"), I get back a Record object that has no 
> schema.
> This results in the RecordPath "/child/id" returning a null value.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5138) JSON Record Readers providing wrong schema to sub-records when there is a CHOICE of multiple RECORD types

2018-05-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469057#comment-16469057
 ] 

ASF subversion and git services commented on NIFI-5138:
---

Commit 4700b8653dc980b1cf8985430683b79eb64922a4 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=4700b86 ]

NIFI-5138: Bug fix to ensure that when we have a CHOICE between two or more 
RECORD types, we choose the appropriate RECORD type when creating the 
Record in the JSON Reader.

Signed-off-by: Pierre Villard 

This closes #2670.


> JSON Record Readers providing wrong schema to sub-records when there is a 
> CHOICE of multiple RECORD types
> -
>
> Key: NIFI-5138
> URL: https://issues.apache.org/jira/browse/NIFI-5138
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.7.0
>
>
> When the JSON Record Reader is used, if the schema provides a CHOICE of two 
> different RECORD types for a sub-record, then the sub-record's schema ends up 
> being an empty schema. This results in RecordPaths not evaluating properly. 
> For example, with the schema below:
> {code:java}
> {
>   "name": "top", "namespace": "nifi",
>   "type": "record",
>   "fields": [
>     { "name": "id", "type": "string" },
>     { "name": "child", "type": [{
>          "name": "first", "type": "record",
>          "fields": [{ "name": "name", "type": "string" }]
>        }, {
>          "name": "second", "type": "record",
>          "fields": [{ "name": "id", "type": "string" }]
>        }]
>      }
>   ]
> }{code}
>  
> If I then have the following JSON:
> {code:java}
> {
>   "id": "1234",
>   "child": {
>       "id": "4321"
>   }
> }{code}
> The result is that the record returned has the correct schema. However, if I 
> then call record.getValue("child"), I get back a Record object that has no 
> schema.
> This results in the RecordPath "/child/id" returning a null value.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2670: NIFI-5138: Bug fix to ensure that when we have a CHOICE be...

2018-05-09 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2670
  
Code looks good to me, confirmed the reported issue and this is fixed with 
this PR. Merging to master.


---


[jira] [Commented] (NIFI-950) Perform component validation asynchronously

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469051#comment-16469051
 ] 

ASF GitHub Bot commented on NIFI-950:
-

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2693

NIFI-950: Make component validation asynchronous

This PR addresses NIFI-950 as well as a handful of other related JIRAs. I 
used a single PR because a lot of the solutions to the issues built upon one 
another and because the intent of this PR is basically to make clustering more 
stable and to make the UI feel less sluggish when clustered.


Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-950

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2693.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2693


commit b155a7474e98eb597b2417fef276a6bcddf18d78
Author: Mark Payne 
Date:   2018-04-11T19:36:54Z

NIFI-950: Make component validation asynchronous
NIFI-950: Still seeing some slow response times when instantiating a large 
template in cluster mode so making some minor tweaks based on the results of 
CPU profiling
NIFI-5112: Refactored FlowSerializer so that it creates the desired 
intermediate data model that can be serialized, separate from serializing. This 
allows us to hold the FlowController's Read Lock only while creating the data 
model, not while actually serializing the data. Configured Jersey Client in 
ThreadPoolRequestReplicator not to look for features using the Service Loader 
for every request. Updated Template object to hold a DOM Node that represents 
the template contents instead of having to serialize the DTO, then parse the 
serialized form as a DOM object each time that it needs to be serialized.
NIFI-5112: Change ThreadPoolRequestReplicator to use OkHttp client instead 
of Jersey Client
NIFI-5111: Ensure that if a node is no longer cluster coordinator, that it 
clears any stale heartbeats.
NIFI-5110: Notify StandardProcessScheduler when a component is removed so 
that it will clean up any resource related to component lifecycle.

commit be01d8e1d2fc985a852ee849bdb4c39639642ae1
Author: Mark Payne 
Date:   2018-04-24T19:52:36Z

NIFI-950: Avoid gathering the Status objects for entire flow when we don't 
need them; removed unnecessary code

commit 1751d276813fa52fa98f9656c712931c130e66f3
Author: Mark Payne 
Date:   2018-05-03T20:07:10Z

NIFI-950: Bug fixes

commit d6e29be47b9befa218df4ecb0af33d7b15df6be2
Author: Mark Payne 
Date:   2018-05-08T17:35:44Z

NIFI-950: Bug fix; added validation status to ProcessorDTO, 
ControllerServiceDTO, ReportingTaskDTO; updated DebugFlow to allow for pause 
time to be set in the customValidate method for testing functionality

commit 352bc4c9eb880f29cb2aefcb29539923f00ad89b
Author: Mark Payne 
Date:   2018-05-08T19:59:43Z


[GitHub] nifi pull request #2693: NIFI-950: Make component validation asynchronous

2018-05-09 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2693

NIFI-950: Make component validation asynchronous

This PR addresses NIFI-950 as well as a handful of other related JIRAs. I 
used a single PR because a lot of the solutions to the issues built upon one 
another and because the intent of this PR is basically to make clustering more 
stable and to make the UI feel less sluggish when clustered.


Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-950

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2693.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2693


commit b155a7474e98eb597b2417fef276a6bcddf18d78
Author: Mark Payne 
Date:   2018-04-11T19:36:54Z

NIFI-950: Make component validation asynchronous
NIFI-950: Still seeing some slow response times when instantiating a large 
template in cluster mode so making some minor tweaks based on the results of 
CPU profiling
NIFI-5112: Refactored FlowSerializer so that it creates the desired 
intermediate data model that can be serialized, separate from serializing. This 
allows us to hold the FlowController's Read Lock only while creating the data 
model, not while actually serializing the data. Configured Jersey Client in 
ThreadPoolRequestReplicator not to look for features using the Service Loader 
for every request. Updated Template object to hold a DOM Node that represents 
the template contents instead of having to serialize the DTO, then parse the 
serialized form as a DOM object each time that it needs to be serialized.
NIFI-5112: Change ThreadPoolRequestReplicator to use OkHttp client instead 
of Jersey Client
NIFI-5111: Ensure that if a node is no longer cluster coordinator, that it 
clears any stale heartbeats.
NIFI-5110: Notify StandardProcessScheduler when a component is removed so 
that it will clean up any resource related to component lifecycle.

commit be01d8e1d2fc985a852ee849bdb4c39639642ae1
Author: Mark Payne 
Date:   2018-04-24T19:52:36Z

NIFI-950: Avoid gathering the Status objects for entire flow when we don't 
need them; removed unnecessary code

commit 1751d276813fa52fa98f9656c712931c130e66f3
Author: Mark Payne 
Date:   2018-05-03T20:07:10Z

NIFI-950: Bug fixes

commit d6e29be47b9befa218df4ecb0af33d7b15df6be2
Author: Mark Payne 
Date:   2018-05-08T17:35:44Z

NIFI-950: Bug fix; added validation status to ProcessorDTO, 
ControllerServiceDTO, ReportingTaskDTO; updated DebugFlow to allow for pause 
time to be set in the customValidate method for testing functionality

commit 352bc4c9eb880f29cb2aefcb29539923f00ad89b
Author: Mark Payne 
Date:   2018-05-08T19:59:43Z

NIFI-950: Addressing test failures

commit bc800b8dafcb093089382eba5077079d389121d2
Author: Mark Payne 
Date:   2018-05-09T14:02:03Z

NIFI-950: Bug fixes




---
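
The NIFI-5112 commit message above describes a snapshot-then-serialize pattern: build an intermediate data model while holding the flow's read lock, then do the expensive serialization after releasing it. A generic illustration of that pattern (not the actual FlowSerializer classes; all names are placeholders):

{code:java}
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

final class SnapshotThenSerialize {

    private final ReadWriteLock flowLock = new ReentrantReadWriteLock();

    interface FlowModelBuilder { Object buildModel(); }

    interface Serializer { byte[] serialize(Object model); }

    byte[] serializeFlow(FlowModelBuilder builder, Serializer serializer) {
        final Object model;
        flowLock.readLock().lock();
        try {
            model = builder.buildModel();    // cheap: only builds the data model
        } finally {
            flowLock.readLock().unlock();    // lock released before the slow part
        }
        return serializer.serialize(model);  // expensive I/O happens without the lock
    }
}
{code}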


[jira] [Updated] (NIFI-5173) Graph search control fails to demonstrate component selection

2018-05-09 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-5173:
--
Status: Patch Available  (was: In Progress)

> Graph search control fails to demonstrate component selection
> -
>
> Key: NIFI-5173
> URL: https://issues.apache.org/jira/browse/NIFI-5173
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.6.0
>Reporter: Alex Aversa
>Assignee: Matt Gilman
>Priority: Minor
>
> When using the graph search control to locate a component within a flow, the 
> searched item fails to render as selected on the graph. The item is 
> positioned correctly, but is not highlighted accordingly. Preliminary 
> research indicated that within the *nfActions.show* method, the 
> *selection.classed('selected',true);* is being reset by a subsequent call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-160) Implement a hook provider

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469029#comment-16469029
 ] 

ASF GitHub Bot commented on NIFIREG-160:


Github user pvillard31 commented on the issue:

https://github.com/apache/nifi-registry/pull/110
  
Great! We are definitely on the same page here. Closing this PR, thanks 
again!


> Implement a hook provider
> -
>
> Key: NIFIREG-160
> URL: https://issues.apache.org/jira/browse/NIFIREG-160
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> In order to extend NiFi Registry and NiFi CLI features to integrate with 
> automated deployment pipelines, it would be useful to provide a hook 
> extension point that can be configured by users to trigger actions when a new 
> flow snapshot version is committed in the Registry.
> A first implementation of this extension point could be a "script hook": a 
> script would be executed when a new flow snapshot version is committed. 
> Arguments passed to the script would be: bucket ID, flow ID, version, author 
> and comment.
> This would enable a lot of scenarios including automatically deploying flows 
> from one environment to another.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-160) Implement a hook provider

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469030#comment-16469030
 ] 

ASF GitHub Bot commented on NIFIREG-160:


Github user pvillard31 closed the pull request at:

https://github.com/apache/nifi-registry/pull/110


> Implement a hook provider
> -
>
> Key: NIFIREG-160
> URL: https://issues.apache.org/jira/browse/NIFIREG-160
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> In order to extend NiFi Registry and NiFi CLI features to integrate with 
> automated deployment pipelines, it would be useful to provide a hook 
> extension point that can be configured by users to trigger actions when a new 
> flow snapshot version is committed in the Registry.
> A first implementation of this extension point could be a "script hook": a 
> script would be executed when a new flow snapshot version is committed. 
> Arguments passed to the script would be: bucket ID, flow ID, version, author 
> and comment.
> This would enable a lot of scenarios including automatically deploying flows 
> from one environment to another.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #110: NIFIREG-160 - WIP - Hook provider

2018-05-09 Thread pvillard31
Github user pvillard31 closed the pull request at:

https://github.com/apache/nifi-registry/pull/110


---


[GitHub] nifi-registry issue #110: NIFIREG-160 - WIP - Hook provider

2018-05-09 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi-registry/pull/110
  
Great! We are definitely on the same page here. Closing this PR, thanks 
again!


---


[jira] [Resolved] (NIFI-5156) Update 'Google Cloud SDK' version and refactor GCP processors' code

2018-05-09 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-5156.
--
   Resolution: Fixed
Fix Version/s: 1.7.0

> Update 'Google Cloud SDK' version and refactor GCP processors' code
> ---
>
> Key: NIFI-5156
> URL: https://issues.apache.org/jira/browse/NIFI-5156
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
> Fix For: 1.7.0
>
>
> The current version of nifi-gcp-bundle has the following problems:
>  # It is of a very old version
>  # The bundle uses 
> [google-cloud|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/pom.xml#L65-L82]
>  which is an uber/fat jar that comes with all google-cloud-java SDKs, making 
> the overall NAR bundle heavy.
> The following improvements have been identified and need to be made:
>  * Update the SDK to a more recent version
>  * Introduce google-cloud-bom and then use the necessary SDK (storage, 
> bigquery, etc) as and when needed, thus reducing the overall NAR size.
>  * Refactor and make necessary changes to the current version of GCS 
> processors, if needed because of the update



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5156) Update 'Google Cloud SDK' version and refactor GCP processors' code

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469024#comment-16469024
 ] 

ASF GitHub Bot commented on NIFI-5156:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2680
  
Thanks @zenfenan, merged to master.


> Update 'Google Cloud SDK' version and refactor GCP processors' code
> ---
>
> Key: NIFI-5156
> URL: https://issues.apache.org/jira/browse/NIFI-5156
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> The current version of nifi-gcp-bundle has the following problems:
>  # It is of a very old version
>  # The bundle uses 
> [google-cloud|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/pom.xml#L65-L82]
>  which is an uber/fat jar that comes with all google-cloud-java SDKs, making 
> the overall NAR bundle heavy.
> The following improvements have been identified and need to be made:
>  * Update the SDK to a more recent version
>  * Introduce google-cloud-bom and then use the necessary SDK (storage, 
> bigquery, etc) as and when needed, thus reducing the overall NAR size.
>  * Refactor and make necessary changes to the current version of GCS 
> processors, if needed because of the update



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5156) Update 'Google Cloud SDK' version and refactor GCP processors' code

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469025#comment-16469025
 ] 

ASF GitHub Bot commented on NIFI-5156:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2680


> Update 'Google Cloud SDK' version and refactor GCP processors' code
> ---
>
> Key: NIFI-5156
> URL: https://issues.apache.org/jira/browse/NIFI-5156
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> The current version of nifi-gcp-bundle has the following problems:
>  # It is of a very old version
>  # The bundle uses 
> [google-cloud|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/pom.xml#L65-L82]
>  which is an uber/fat jar that comes with all google-cloud-java SDKs, making 
> the overall NAR bundle heavy.
> The following improvements have been identified and need to be made:
>  * Update the SDK to a more recent version
>  * Introduce google-cloud-bom and then use the necessary SDK (storage, 
> bigquery, etc) as and when needed, thus reducing the overall NAR size.
>  * Refactor and make necessary changes to the current version of GCS 
> processors, if needed because of the update



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2680: NIFI-5156: Updated GCP SDK to latest version

2018-05-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2680


---


[jira] [Commented] (NIFI-5156) Update 'Google Cloud SDK' version and refactor GCP processors' code

2018-05-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469023#comment-16469023
 ] 

ASF subversion and git services commented on NIFI-5156:
---

Commit f742a3a6accd7695a32b1e4c42c2ec93619a7b9a in nifi's branch 
refs/heads/master from [~sivaprasanna]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=f742a3a ]

NIFI-5156: Updated GCP SDK to latest version

Signed-off-by: Pierre Villard 

This closes #2680.


> Update 'Google Cloud SDK' version and refactor GCP processors' code
> ---
>
> Key: NIFI-5156
> URL: https://issues.apache.org/jira/browse/NIFI-5156
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> The current version of nifi-gcp-bundle has the following problems:
>  # It is of a very old version
>  # The bundle uses 
> [google-cloud|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/pom.xml#L65-L82]
>  which is an uber/fat jar that comes with all google-cloud-java SDKs, making 
> the overall NAR bundle heavy.
> The following improvements have been identified and need to be made:
>  * Update the SDK to a more recent version
>  * Introduce google-cloud-bom and then use the necessary SDK (storage, 
> bigquery, etc) as and when needed, thus reducing the overall NAR size.
>  * Refactor and make necessary changes to the current version of GCS 
> processors, if needed because of the update



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4731) BigQuery processors

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469018#comment-16469018
 ] 

ASF GitHub Bot commented on NIFI-4731:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2682
  
Hi @danieljimenez - thanks for this PR, I'll try to find time to review it 
if no one does. @zenfenan might be in a position to help.

Just a quick comment to let you know that I'm about to merge #2680 to 
improve the way dependencies are managed for this bundle. You might need to 
rebase your PR but it shouldn't be a big change. Basically, you wouldn't have 
to specify the versions of your dependencies as they are already set 
within the google-cloud pom file.

Thanks again!


> BigQuery processors
> ---
>
> Key: NIFI-4731
> URL: https://issues.apache.org/jira/browse/NIFI-4731
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mikhail Sosonkin
>Priority: Major
>
> NiFi should have processors for putting data into BigQuery (Streaming and 
> Batch).
> Initial working processors can be found in this repository: 
> https://github.com/nologic/nifi/tree/NIFI-4731/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/bigquery
> I'd like to get them into NiFi proper.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2683: NIFI-5146 Only support HTTP or HTTPS operation for ...

2018-05-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2683


---


[jira] [Commented] (NIFI-5146) Ability to configure HTTP and HTTPS simultaneously causes HostHeader issues

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469012#comment-16469012
 ] 

ASF GitHub Bot commented on NIFI-5146:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2683


> Ability to configure HTTP and HTTPS simultaneously causes HostHeader issues
> ---
>
> Key: NIFI-5146
> URL: https://issues.apache.org/jira/browse/NIFI-5146
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: Aldrin Piri
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: hostname, http, https, security
> Fix For: 1.7.0
>
>
> The host header whitelisting evaluation is only done when NiFi is configured 
> in secure mode, determined by the setting of an HTTPS port.  (see 
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/JettyServer.java#L161
>  and 
> [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/HostHeaderHandler.java#L190).]
> However, in the case where both are enabled, the HTTP port is not enumerated 
> in the possible combinations, and an explicit inclusion of a given socket that would 
> be HTTP is stripped via 
> [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/HostHeaderHandler.java#L143.]
> It is possible that concurrently running HTTP and HTTPS no longer makes 
> sense, in which case we could evaluate the relevant properties and prevent 
> startup for an unintended configuration.  Alternatively, we would need to 
> adjust the custom hostname interpretation to also include consideration for 
> the HTTP port.
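
The merged fix (see the NIFI-5146 commit message further down) takes the first option: detect the unintended configuration and refuse to start. A simplified, standalone sketch of such a check (the real logic lives in JettyServer and uses NiFiProperties; this version is only illustrative):

{code:java}
import java.util.Properties;

final class WebPortCheck {

    // Fail fast if both an HTTP and an HTTPS port are configured instead of
    // silently mishandling host headers for one of them.
    static void verifyExclusiveHttpOrHttps(final Properties nifiProperties) {
        final String httpPort = nifiProperties.getProperty("nifi.web.http.port", "").trim();
        final String httpsPort = nifiProperties.getProperty("nifi.web.https.port", "").trim();

        if (!httpPort.isEmpty() && !httpsPort.isEmpty()) {
            throw new IllegalStateException(
                    "NiFi only supports running on HTTP or HTTPS, not both simultaneously; "
                            + "unset nifi.web.http.port or nifi.web.https.port");
        }
    }
}
{code}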



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-160) Implement a hook provider

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469013#comment-16469013
 ] 

ASF GitHub Bot commented on NIFIREG-160:


Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/110
  
Thanks for the quick review. Using the commit message definitely makes 
sense, I will add those fields back to the event for creating a flow version. 

I think we could support an option to override the commit message as part 
of the CLI commands, most likely an optional argument on the 
import-flow-version command. Eventually it would be nice to have some kind of 
tags (or a better name) that can be put on flows/items, and then the tags could 
be used to create these kinds of workflows, but obviously that would be a 
longer term effort and using the commit message is a nice option for right now.

I'll keep going on my branch with adding the author and comment fields and 
unit tests and then try and get something posted. Thanks for getting the ball 
rolling on this, it should be really helpful for users.


> Implement a hook provider
> -
>
> Key: NIFIREG-160
> URL: https://issues.apache.org/jira/browse/NIFIREG-160
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> In order to extend NiFi Registry and NiFi CLI features to integrate with 
> automated deployment pipelines, it would be useful to provide a hook 
> extension point that can be configured by users to trigger actions when a new 
> flow snapshot version is committed in the Registry.
> A first implementation of this extension point could be a "script hook": a 
> script would be executed when a new flow snapshot version is committed. 
> Arguments passed to the script would be: bucket ID, flow ID, version, author 
> and comment.
> This would enable a lot of scenarios including automatically deploying flows 
> from one environment to another.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5146) Ability to configure HTTP and HTTPS simultaneously causes HostHeader issues

2018-05-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469008#comment-16469008
 ] 

ASF subversion and git services commented on NIFI-5146:
---

Commit 7a4990e7fe7c38c95b4ee1436a822428ff1f5f98 in nifi's branch 
refs/heads/master from [~alopresto]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=7a4990e ]

NIFI-5146 Only support HTTP or HTTPS operation for NiFi API/UI

- Added logic to check for simultaneous configuration of HTTP and HTTPS 
connectors in JettyServer.
- Added test logging resources. Added unit tests.
- Refactored shared functionality to generic method which accepts lambdas.
  Fixed unit test with logging side effects.
- Added note about exclusive HTTP/HTTPS behavior to Admin Guide. Fixed typos.

This closes #2683.

Signed-off-by: Kevin Doran 


> Ability to configure HTTP and HTTPS simultaneously causes HostHeader issues
> ---
>
> Key: NIFI-5146
> URL: https://issues.apache.org/jira/browse/NIFI-5146
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.6.0
>Reporter: Aldrin Piri
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: hostname, http, https, security
> Fix For: 1.7.0
>
>
> The host header whitelisting evaluation is only done when NiFi is configured 
> in secure mode, determined by the setting of an HTTPS port.  (see 
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/JettyServer.java#L161
>  and 
> [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/HostHeaderHandler.java#L190).]
> However, in the case where both are enabled, the HTTP port is not enumerated 
> in the possible combinations, and an explicit inclusion of a given socket that would 
> be HTTP is stripped via 
> [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-jetty/src/main/java/org/apache/nifi/web/server/HostHeaderHandler.java#L143.]
> It is possible that concurrently running HTTP and HTTPS no longer makes 
> sense, in which case we could evaluate the relevant properties and prevent 
> startup for an unintended configuration.  Alternatively, we would need to 
> adjust the custom hostname interpretation to also include consideration for 
> the HTTP port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry issue #110: NIFIREG-160 - WIP - Hook provider

2018-05-09 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/110
  
Thanks for the quick review. Using the commit message definitely makes 
sense, I will add those fields back to the event for creating a flow version. 

I think we could support an option to override the commit message as part 
of the CLI commands, most likely an optional argument on the 
import-flow-version command. Eventually it would be nice to have some kind of 
tags (or a better name) that can be put on flows/items, and then the tags could 
be used to create these kinds of workflows, but obviously that would be a 
longer term effort and using the commit message is a nice option for right now.

I'll keep going on my branch with adding the author and comment fields and 
unit tests and then try and get something posted. Thanks for getting the ball 
rolling on this, it should be really helpful for users.


---


[jira] [Commented] (NIFI-5173) Graph search control fails to demonstrate component selection

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469005#comment-16469005
 ] 

ASF GitHub Bot commented on NIFI-5173:
--

GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2692

NIFI-5173: Fixing component selection issue in zoom handling

NIFI-5173:
- Removing unnecessary logic in the zoom handler since the zoom event is no 
longer triggered during onClick.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-5173

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2692.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2692


commit 180d911fbf72441d64064ca078706b923c0fb1b0
Author: Matt Gilman 
Date:   2018-05-09T16:07:07Z

NIFI-5173:
- Removing unnecessary logic in the zoom handler since the zoom event is no 
longer triggered during onClick.




> Graph search control fails to demonstrate component selection
> -
>
> Key: NIFI-5173
> URL: https://issues.apache.org/jira/browse/NIFI-5173
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.6.0
>Reporter: Alex Aversa
>Assignee: Matt Gilman
>Priority: Minor
>
> When using the graph search control to locate a component within a flow, the 
> searched item fails to render as selected on the graph. The item is 
> positioned correctly, but is not highlighted accordingly. Preliminary 
> research indicated that within the *nfActions.show* method, the 
> *selection.classed('selected',true);* is being reset by a subsequent call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2692: NIFI-5173: Fixing component selection issue in zoom...

2018-05-09 Thread mcgilman
GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2692

NIFI-5173: Fixing component selection issue in zoom handling

NIFI-5173:
- Removing unnecessary logic in the zoom handler since the zoom event is no 
longer triggered during onClick.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-5173

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2692.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2692


commit 180d911fbf72441d64064ca078706b923c0fb1b0
Author: Matt Gilman 
Date:   2018-05-09T16:07:07Z

NIFI-5173:
- Removing unnecessary logic in the zoom handler since the zoom event is no 
longer triggered during onClick.




---


[jira] [Commented] (NIFIREG-160) Implement a hook provider

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468998#comment-16468998
 ] 

ASF GitHub Bot commented on NIFIREG-160:


Github user pvillard31 commented on the issue:

https://github.com/apache/nifi-registry/pull/110
  
@bbende thanks for working on this! I really think it'll be a nice addition 
to the NiFi Registry, and what you did looks great. I think it's best to close 
this PR and for you to submit a new one; that will be easier, and I'll be happy 
to review it.

I'd just recommend keeping the author and comment fields in the Event objects. 
The reason is that I can imagine people adding a specific tag in the comment to 
trigger automatic actions. For example, I have a flow I'm working on in Dev, and 
I consider the current version ready enough to be tested in Staging and deployed 
to production. I could commit my changes with a comment like "[STAGING-READY]", 
and the deployment of the flow to Staging would then be triggered automatically.

I also think it could be interesting to allow users to override the 
comment when importing a new flow version from one registry to another. This 
would ease automatic deployment across multiple environments using the above 
mechanism. Just an idea... there could be something better. Do you have 
something in mind on your side?
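
To make that tag idea concrete, here is a minimal sketch of what such a hook 
could look like. The handle() signature, the event field names, and the 
deployment call are purely illustrative assumptions, not the actual NiFi 
Registry extension point:

    import java.util.Map;

    // Hypothetical hook that reacts to a new flow snapshot version whose commit
    // comment carries a deployment tag. Field names mirror the arguments listed
    // in NIFIREG-160 (bucket ID, flow ID, version, author, comment), but the
    // interface shape is an assumption, not the real API.
    public class StagingDeployHook {

        private static final String TAG = "[STAGING-READY]";

        public void handle(Map<String, String> event) {
            final String comment = event.getOrDefault("comment", "");
            if (comment.contains(TAG)) {
                deployToStaging(event.get("bucketId"), event.get("flowId"),
                        event.get("version"), event.get("author"));
            }
        }

        private void deployToStaging(String bucketId, String flowId,
                                     String version, String author) {
            // Placeholder: this is where the NiFi CLI or the REST API of the
            // Staging environment would be invoked.
            System.out.printf("Deploying %s/%s version %s (committed by %s) to Staging%n",
                    bucketId, flowId, version, author);
        }

        public static void main(String[] args) {
            new StagingDeployHook().handle(Map.of(
                    "bucketId", "b1", "flowId", "f1", "version", "3",
                    "author", "dev1", "comment", "Ready for test [STAGING-READY]"));
        }
    }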


> Implement a hook provider
> -
>
> Key: NIFIREG-160
> URL: https://issues.apache.org/jira/browse/NIFIREG-160
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> In order to extend NiFi Registry and NiFi CLI features to integrate with 
> automated deployment pipelines, it would be useful to provide a hook 
> extension point that can be configured by users to trigger actions when a new 
> flow snapshot version is committed in the Registry.
> A first implementation of this extension point could be a "script hook": a 
> script would be executed when a new flow snapshot version is committed. 
> Arguments passed to the script would be: bucket ID, flow ID, version, author 
> and comment.
> This would enable a lot of scenarios, including automatically deploying flows 
> from one environment to another.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry issue #110: NIFIREG-160 - WIP - Hook provider

2018-05-09 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi-registry/pull/110
  
@bbende thanks for working on this! I really think it'll be a nice addition 
to the NiFi Registry, and what you did looks great. I think it's best to close 
this PR and for you to submit a new one; that will be easier, and I'll be happy 
to review it.

I'd just recommend keeping the author and comment fields in the Event objects. 
The reason is that I can imagine people adding a specific tag in the comment to 
trigger automatic actions. For example, I have a flow I'm working on in Dev, and 
I consider the current version ready enough to be tested in Staging and deployed 
to production. I could commit my changes with a comment like "[STAGING-READY]", 
and the deployment of the flow to Staging would then be triggered automatically.

I also think it could be interesting to allow users to override the 
comment when importing a new flow version from one registry to another. This 
would ease automatic deployment across multiple environments using the above 
mechanism. Just an idea... there could be something better. Do you have 
something in mind on your side?


---


[jira] [Commented] (NIFI-5170) Update Grok to 0.1.9

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468996#comment-16468996
 ] 

ASF GitHub Bot commented on NIFI-5170:
--

GitHub user ottobackwards opened a pull request:

https://github.com/apache/nifi/pull/2691

NIFI-5170 Upgrade Grok to version 0.1.9

Upgrade to the new java-grok release and update for changes in the library.
This includes:

- Changes to the namespace from io.thekraken to io.krakens
- Refactoring to use the new GrokCompiler API (see the sketch after this 
description)
- Refactoring to do customValidation, since Grok will throw an 
IllegalArgumentException if an expression references a Grok pattern that is 
not defined, which is a change of behavior

- ExtractGrok now supports the default patterns, so the patterns file 
property is no longer required

Handles both the Record Reader and Legacy Processor
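
For reference, a minimal sketch of the 0.1.9 API flow that the refactoring 
moves to, assuming the io.krakens.grok.api package published with that release; 
this is only meant to illustrate the compile/validate behaviour mentioned above, 
not the processor code itself:

    import io.krakens.grok.api.Grok;
    import io.krakens.grok.api.GrokCompiler;
    import io.krakens.grok.api.Match;

    import java.util.Map;

    public class GrokCompileExample {
        public static void main(String[] args) {
            GrokCompiler compiler = GrokCompiler.newInstance();
            // Built-in patterns are registered here, which is why a patterns
            // file property is no longer strictly required.
            compiler.registerDefaultPatterns();

            // Compiling an expression that references an undefined pattern now
            // throws IllegalArgumentException, hence the custom validation.
            try {
                compiler.compile("%{NOT_A_REAL_PATTERN}");
            } catch (IllegalArgumentException e) {
                System.out.println("Invalid expression: " + e.getMessage());
            }

            Grok grok = compiler.compile("%{IP:client} %{WORD:method} %{URIPATHPARAM:request}");
            Match match = grok.match("10.0.0.1 GET /index.html");
            Map<String, Object> captures = match.capture();
            System.out.println(captures);
        }
    }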

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ottobackwards/nifi update-grok-019

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2691.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2691


commit d05d72830cfb63458f5f87213be5a64ca12c3270
Author: Otto Fowler 
Date:   2018-05-08T19:53:20Z

NIFI-5170 Upgrade Grok to version 0.1.9

Upgrade to the new java-grok release and update for changes in the library.
This includes:

- Changes to the namespace from io.thekraken to io.krakens
- Refactoring to use the new GrokCompiler API
- Refactoring to do customValidation, since Grok will throw an 
IllegalArgumentException if an expression references a Grok pattern that is 
not defined, which is a change of behavior

Handles both the Record Reader and Legacy Processor




> Update Grok to 0.1.9
> 
>
> Key: NIFI-5170
> URL: https://issues.apache.org/jira/browse/NIFI-5170
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Otto Fowler
>Assignee: Otto Fowler
>Priority: Major
>
> Grok 0.1.9 has been released, including work for empty capture support.
>  
> https://github.com/thekrakken/java-grok#maven-repository



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2691: NIFI-5170 Upgrade Grok to version 0.1.9

2018-05-09 Thread ottobackwards
GitHub user ottobackwards opened a pull request:

https://github.com/apache/nifi/pull/2691

NIFI-5170 Upgrade Grok to version 0.1.9

Upgrade to the new java-grok release and update for changes in the library.
This includes:

- Changes to the namespace from io.thekraken to io.krakens
- Refactoring to use the new GrokCompiler API
- Refactoring to do customValidation, since Grok will throw an 
IllegalArgumentException if an expression references a Grok pattern that is 
not defined, which is a change of behavior

- ExtractGrok now supports the default patterns, so the patterns file 
property is no longer required

Handles both the Record Reader and Legacy Processor

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ottobackwards/nifi update-grok-019

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2691.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2691


commit d05d72830cfb63458f5f87213be5a64ca12c3270
Author: Otto Fowler 
Date:   2018-05-08T19:53:20Z

NIFI-5170 Upgrade Grok to version 0.1.9

Upgrade to the new java-grok release and update for changes in the library.
This includes:

- Changes to the namespace from io.thekraken to io.krakens
- Refactoring to use the new GrokCompiler API
- Refactoring to do customValidation, since Grok will throw an 
IllegalArgumentException if an expression references a Grok pattern that is 
not defined, which is a change of behavior

Handles both the Record Reader and Legacy Processor




---


[jira] [Created] (MINIFICPP-490) Create common recipes based on the C & C2 api

2018-05-09 Thread marco polo (JIRA)
marco polo created MINIFICPP-490:


 Summary: Create common recipes based on the C & C2 api
 Key: MINIFICPP-490
 URL: https://issues.apache.org/jira/browse/MINIFICPP-490
 Project: NiFi MiNiFi C++
  Issue Type: Sub-task
Reporter: marco polo
Assignee: marco polo






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-5173) Graph search control fails to demonstrate component selection

2018-05-09 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reassigned NIFI-5173:
-

Assignee: Matt Gilman

> Graph search control fails to demonstrate component selection
> -
>
> Key: NIFI-5173
> URL: https://issues.apache.org/jira/browse/NIFI-5173
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.6.0
>Reporter: Alex Aversa
>Assignee: Matt Gilman
>Priority: Minor
>
> When using the graph search control to locate a component within a flow, the 
> searched item fails to render as selected on the graph. The item is 
> positioned correctly, but it is not highlighted accordingly. Preliminary 
> research indicated that, within the *nfActions.show* method, the 
> *selection.classed('selected', true);* call is being reset by a subsequent call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-160) Implement a hook provider

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468927#comment-16468927
 ] 

ASF GitHub Bot commented on NIFIREG-160:


Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/110
  
@pvillard31 I branched off your commit, resolved conflicts, and took a 
stab at making the event publishing/consuming asynchronous so that the main 
request path won't be impacted by any hang-ups from the providers. 

I also tried to make the event concept a little less specific to flows so 
that it can easily be re-used for any other types of items we may store in 
the registry, like extensions, assets, etc.

My branch is here: 
https://github.com/bbende/nifi-registry/commits/hook-provider

Let me know what you think about this approach. If we want to head down 
this path, I can submit a PR to your branch, or I can open a new PR against 
the registry that includes your commit plus these changes. In the meantime 
I'll work on some unit tests.
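
As a hedged sketch of the asynchronous idea (events handed to a background 
thread so that a slow or hanging provider cannot block the REST request that 
produced them), something along these lines could work; the Event and 
EventHookProvider names are illustrative assumptions, not the actual registry 
interfaces:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class AsyncEventDispatcher {

        public interface Event { String describe(); }
        public interface EventHookProvider { void handle(Event event); }

        private final List<EventHookProvider> providers;
        private final ExecutorService executor = Executors.newSingleThreadExecutor();

        public AsyncEventDispatcher(List<EventHookProvider> providers) {
            this.providers = providers;
        }

        // Called from the request path: returns immediately after queueing the event.
        public void publish(Event event) {
            executor.submit(() -> {
                for (EventHookProvider provider : providers) {
                    try {
                        provider.handle(event);
                    } catch (Exception e) {
                        // A failing provider must not affect the others or the request path.
                        System.err.println("Hook failed for " + event.describe() + ": " + e);
                    }
                }
            });
        }

        public void shutdown() {
            executor.shutdown();
        }

        public static void main(String[] args) {
            AsyncEventDispatcher dispatcher = new AsyncEventDispatcher(
                    List.of(e -> System.out.println("Handled: " + e.describe())));
            dispatcher.publish(() -> "CREATE_FLOW_VERSION");
            dispatcher.shutdown();
        }
    }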


> Implement a hook provider
> -
>
> Key: NIFIREG-160
> URL: https://issues.apache.org/jira/browse/NIFIREG-160
> Project: NiFi Registry
>  Issue Type: New Feature
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> In order to extend NiFi Registry and NiFi CLI features to integrate with 
> automated deployment pipelines, it would be useful to provide a hook 
> extension point that can be configured by users to trigger actions when a new 
> flow snapshot version is committed in the Registry.
> A first implementation of this extension point could be a "script hook": a 
> script would be executed when a new flow snapshot version is committed. 
> Arguments passed to the script would be: bucket ID, flow ID, version, author 
> and comment.
> This would enable a lot of scenarios including automatically deploy flows 
> from one environment to another.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry issue #110: NIFIREG-160 - WIP - Hook provider

2018-05-09 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/110
  
@pvillard31 I branched off your commit, resolved conflicts, and took a 
stab at making the event publishing/consuming asynchronous so that the main 
request path won't be impacted by any hang-ups from the providers. 

I also tried to make the event concept a little less specific to flows so 
that it can easily be re-used for any other types of items we may store in 
the registry, like extensions, assets, etc.

My branch is here: 
https://github.com/bbende/nifi-registry/commits/hook-provider

Let me know what you think about this approach. If we want to head down 
this path, I can submit a PR to your branch, or I can open a new PR against 
the registry that includes your commit plus these changes. In the meantime 
I'll work on some unit tests.


---


[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468763#comment-16468763
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2518
  
@ijokarumawak Anything left before you're comfortable merging this?


> Add support for HBase visibility labels to HBase processors and controller 
> services
> ---
>
> Key: NIFI-4637
> URL: https://issues.apache.org/jira/browse/NIFI-4637
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> HBase supports visibility labels, but you can't use them from NiFi because 
> there is no way to set them. The existing processors and services should be 
> upgraded to handle this capability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2518: NIFI-4637 Added support for visibility labels to the HBase...

2018-05-09 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2518
  
@ijokarumawak Anything left before you're comfortable merging this?


---


[jira] [Updated] (NIFI-5177) Failed to merge Journal Files leads to LockObtainFailedException: Lock obtain timed out exception

2018-05-09 Thread AmitC15 (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AmitC15 updated NIFI-5177:
--
Description: 
NiFi version: 1.5

Cluster setup + external ZooKeeper on each node.

Log: 

[ Date ] 2018-05-08 15:53:12,193 [ Priority ] ERROR [ Text 3 ] [Provenance 
Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository 
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
NativeFSLock@/nifi/nifi-1.5.0/provenance_repository/index-1524294029000/write.lock
 at org.apache.lucene.store.Lock.obtain(Lock.java:89)
 at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
 at org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
 at org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
 at org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1712)
 at org.apache.nifi.provenance.PersistentProvenanceRepository$8.run(PersistentProvenanceRepository.java:1300)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

Happened twice this week on 2 different environments.

After effects:
 * specific node disconnects from cluster (requires restart)
 * UI not accessible from all nodes.
 * Also led once to a different issue - failed to connect node to cluster due 
to: java.lang.IllegalStateException: Signaled to end recovery, but there are 
more recovery files for Partition in directory

 

 

 

  was:
NiFi version: 1.5

Cluster setup + external ZooKeeper on each node.

Log: 

[ Date ] 2018-05-08 15:53:12,193 [ Priority ] ERROR [ Text 3 ] [Provenance 
Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository 
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
NativeFSLock@/nifi/nifi-1.5.0/provenance_repository/index-1524294029000/write.lock
 at org.apache.lucene.store.Lock.obtain(Lock.java:89)
 at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
 at org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
 at org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
 at org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1712)
 at org.apache.nifi.provenance.PersistentProvenanceRepository$8.run(PersistentProvenanceRepository.java:1300)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

After effects: specific node disconnects from cluster (requires restart), 
UI not accessible from all nodes.

 

 

 


> Failed to merge Journal Files leads to LockObtainFailedException: Lock obtain 
> timed out exception
> -
>
> Key: NIFI-5177
> URL: https://issues.apache.org/jira/browse/NIFI-5177
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: AmitC15
>Priority: Critical
>
> NiFi version: 1.5
> Cluster setup + external ZooKeeper on each node.
> Log: 
> [ Date ] 2018-05-08 15:53:12,193 [ Priority ] ERROR [ Text 3 ] [Provenance 
> Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository 
> org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
> NativeFSLock@/nifi/nifi-1.5.0/provenance_repository/index-1524294029000/write.lock
>  at 

[jira] [Updated] (NIFI-5177) Failed to merge Journal Files leads to LockObtainFailedException: Lock obtain timed out exception

2018-05-09 Thread AmitC15 (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AmitC15 updated NIFI-5177:
--
Description: 
NiFi version: 1.5

Cluster setup + external ZooKeeper on each node.

Log: 

[ Date ] 2018-05-08 15:53:12,193 [ Priority ] ERROR [ Text 3 ] [Provenance 
Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository 
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
NativeFSLock@/nifi/nifi-1.5.0/provenance_repository/index-1524294029000/write.lock
 at org.apache.lucene.store.Lock.obtain(Lock.java:89)
 at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
 at org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
 at org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
 at org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1712)
 at org.apache.nifi.provenance.PersistentProvenanceRepository$8.run(PersistentProvenanceRepository.java:1300)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

After effects: specific node disconnects from cluster (requires restart), 
UI not accessible from all nodes.

 

 

 

  was:
NiFi version: 1.5

Cluster setup + external ZooKeeper on each node.

Log: 

[ Date ] 2018-05-08 15:53:12,193 [ Priority ] ERROR [ Text 3 ] [Provenance 
Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository 
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
NativeFSLock@/nifi/nifi-1.5.0/provenance_repository/index-1524294029000/write.lock
 at org.apache.lucene.store.Lock.obtain(Lock.java:89)
 at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
 at org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
 at org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
 at org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1712)
 at org.apache.nifi.provenance.PersistentProvenanceRepository$8.run(PersistentProvenanceRepository.java:1300)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
(Log Path: NIFI > NJ-Nifi02-PRD - nifi-app.log, Host: NJ-Nifi02-PRD, Apps: [nifi])

After effects: specific node disconnects from cluster (requires restart), 
UI not accessible from all nodes.

 

 

 


> Failed to merge Journal Files leads to LockObtainFailedException: Lock obtain 
> timed out exception
> -
>
> Key: NIFI-5177
> URL: https://issues.apache.org/jira/browse/NIFI-5177
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: AmitC15
>Priority: Critical
>
> NiFi version: 1.5
> Cluster setup + external ZooKeeper on each node.
> Log: 
> [ Date ] 2018-05-08 15:53:12,193 [ Priority ] ERROR [ Text 3 ] [Provenance 
> Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository 
> org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
> NativeFSLock@/nifi/nifi-1.5.0/provenance_repository/index-1524294029000/write.lock
>  at org.apache.lucene.store.Lock.obtain(Lock.java:89) at 
> org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755) at 
> 

[jira] [Created] (NIFI-5177) Failed to merge Journal Files leads to LockObtainFailedException: Lock obtain timed out exception

2018-05-09 Thread AmitC15 (JIRA)
AmitC15 created NIFI-5177:
-

 Summary: Failed to merge Journal Files leads to 
LockObtainFailedException: Lock obtain timed out exception
 Key: NIFI-5177
 URL: https://issues.apache.org/jira/browse/NIFI-5177
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.5.0
Reporter: AmitC15


NiFi version: 1.5

Cluster setup + external ZooKeeper on each node.

Log: 

[ Date ] 2018-05-08 15:53:12,193 [ Priority ] ERROR [ Text 3 ] [Provenance 
Repository Rollover Thread-1] o.a.n.p.PersistentProvenanceRepository 
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
NativeFSLock@/nifi/nifi-1.5.0/provenance_repository/index-1524294029000/write.lock
 at org.apache.lucene.store.Lock.obtain(Lock.java:89)
 at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:755)
 at org.apache.nifi.provenance.lucene.SimpleIndexManager.createWriter(SimpleIndexManager.java:198)
 at org.apache.nifi.provenance.lucene.SimpleIndexManager.borrowIndexWriter(SimpleIndexManager.java:227)
 at org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1712)
 at org.apache.nifi.provenance.PersistentProvenanceRepository$8.run(PersistentProvenanceRepository.java:1300)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
(Log Path: NIFI > NJ-Nifi02-PRD - nifi-app.log, Host: NJ-Nifi02-PRD, Apps: [nifi])

After effects: specific node disconnects from cluster (requires restart), 
UI not accessible from all nodes.

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)