[jira] [Updated] (NIFI-6613) When FlowFile Repository fails to update due to previous failure, it should log the root cause

2019-09-12 Thread Koji Kawamura (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-6613:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> When FlowFile Repository fails to update due to previous failure, it should 
> log the root cause
> --
>
> Key: NIFI-6613
> URL: https://issues.apache.org/jira/browse/NIFI-6613
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When the FlowFile Repository (more specifically, the LengthDelimitedJournal 
> of the write-ahead log) fails to update, it logs the reason. However, all 
> subsequent attempts to update the repo will first check if the repo is 
> 'poisoned' and if so throw an Exception. This gets logged as something like:
> {code:java}
> Failed to process session due to org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update: org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update
> org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update
>     at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:405)
>     at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:336)
>     at org.apache.nifi.processors.script.ExecuteScript.onTrigger(ExecuteScript.java:228)
>     at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>     at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>     at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Cannot update journal file /flowfile_repository/journals/4614619461.journal because this journal has already encountered a failure when attempting to write to the file. If the repository is able to checkpoint, then this problem will resolve itself. However, if the repository is unable to be checkpointed (for example, due to being out of storage space or having too many open files), then this issue may require manual intervention.
> {code}
> Because there may be many Processors attempting to update the repository, 
> this causes a lot of errors in the logs and makes it difficult to understand 
> the underlying cause. When the journal becomes "poisoned" we should hold onto 
> the Throwable that caused it and log it in this error message so that each 
> update indicates the root cause. This will make it much easier to track what 
> happened.
>  
>  
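The proposed behavior (retain the poisoning Throwable and chain it into later exceptions) can be sketched roughly as follows. This is a minimal illustration with hypothetical names, not NiFi's actual LengthDelimitedJournal code:

```java
import java.io.IOException;

// Minimal sketch of a journal that remembers why it was poisoned, so
// every subsequent failed update can chain the original root cause.
// Class and method names here are hypothetical.
class PoisonableJournal {
    private volatile Throwable poisonCause;

    // Record the first failure that poisoned the journal
    void poison(final Throwable cause) {
        this.poisonCause = cause;
    }

    void update(final byte[] record) throws IOException {
        if (poisonCause != null) {
            // Chain the retained cause instead of throwing a bare
            // "failed to update" message with no root cause
            throw new IOException("Cannot update journal because it has already "
                + "encountered a failure", poisonCause);
        }
        // ... write the record to the journal file ...
    }
}
```

If the repository is later checkpointed successfully, the real implementation would clear the poison state; the sketch omits that for brevity.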



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (NIFI-6613) When FlowFile Repository fails to update due to previous failure, it should log the root cause

2019-09-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928952#comment-16928952
 ] 

ASF subversion and git services commented on NIFI-6613:
---

Commit 6b17c4b1347d91177bdece540b3485e962e30a2b in nifi's branch 
refs/heads/master from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=6b17c4b ]

NIFI-6613: If LengthDelimitedJournal gets poisoned, log the reason and hold 
onto it so that it can be included as the cause of subsequent Exceptions that 
are thrown

This closes #3704.

Signed-off-by: Koji Kawamura 


> When FlowFile Repository fails to update due to previous failure, it should 
> log the root cause
> --
>
> Key: NIFI-6613
> URL: https://issues.apache.org/jira/browse/NIFI-6613
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When the FlowFile Repository (more specifically, the LengthDelimitedJournal 
> of the write-ahead log) fails to update, it logs the reason. However, all 
> subsequent attempts to update the repo will first check if the repo is 
> 'poisoned' and if so throw an Exception. This gets logged as something like:
> {code:java}
> Failed to process session due to org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update: org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update
> org.apache.nifi.processor.exception.ProcessException: FlowFile Repository failed to update
>     at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:405)
>     at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:336)
>     at org.apache.nifi.processors.script.ExecuteScript.onTrigger(ExecuteScript.java:228)
>     at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>     at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>     at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Cannot update journal file /flowfile_repository/journals/4614619461.journal because this journal has already encountered a failure when attempting to write to the file. If the repository is able to checkpoint, then this problem will resolve itself. However, if the repository is unable to be checkpointed (for example, due to being out of storage space or having too many open files), then this issue may require manual intervention.
> {code}
> Because there may be many Processors attempting to update the repository, 
> this causes a lot of errors in the logs and makes it difficult to understand 
> the underlying cause. When the journal becomes "poisoned" we should hold onto 
> the Throwable that caused it and log it in this error message so that each 
> update indicates the root cause. This will make it much easier to track what 
> happened.
>  
>  





[GitHub] [nifi] asfgit closed pull request #3704: NIFI-6613: If LengthDelimitedJournal gets poisoned, log the reason an…

2019-09-12 Thread GitBox
asfgit closed pull request #3704: NIFI-6613: If LengthDelimitedJournal gets 
poisoned, log the reason an…
URL: https://github.com/apache/nifi/pull/3704
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] ijokarumawak commented on issue #3704: NIFI-6613: If LengthDelimitedJournal gets poisoned, log the reason an…

2019-09-12 Thread GitBox
ijokarumawak commented on issue #3704: NIFI-6613: If LengthDelimitedJournal 
gets poisoned, log the reason an…
URL: https://github.com/apache/nifi/pull/3704#issuecomment-531050821
 
 
   The JIRA description and code change look good to me, +1. Travis tests 
failed but I believe the cause is not related to this PR. Merging to master. 
Thanks @markap14!




[GitHub] [nifi] SamHjelmfelt opened a new pull request #3732: NIFI-6662: Adding Kudu Lookup Service

2019-09-12 Thread GitBox
SamHjelmfelt opened a new pull request #3732: NIFI-6662: Adding Kudu Lookup 
Service
URL: https://github.com/apache/nifi/pull/3732
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   Adds lookup service for Apache Kudu
   https://issues.apache.org/jira/browse/NIFI-6662
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ X ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ X ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ X ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ X ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ X ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ X ] Have you written or updated unit tests to verify your changes?
   - [ X ] Have you verified that the full build is successful on both JDK 8 
and JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ X ] If adding new Properties, have you added `.displayName` in addition 
to .name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Updated] (NIFI-6596) Move AmazonS3EncryptionService to nifi-aws-service-api module

2019-09-12 Thread Koji Kawamura (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-6596:

Fix Version/s: 1.10.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Move AmazonS3EncryptionService to nifi-aws-service-api module
> -
>
> Key: NIFI-6596
> URL: https://issues.apache.org/jira/browse/NIFI-6596
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Troy Melhase
>Assignee: Troy Melhase
>Priority: Trivial
> Fix For: 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Seeing logs like this:
> {{2019-08-27 16:14:35,075 WARN [main] 
> o.a.n.n.StandardExtensionDiscoveringManager Component 
> org.apache.nifi.processors.aws.s3.FetchS3Object is bundled with its 
> referenced Controller Service APIs 
> org.apache.nifi.processors.aws.s3.AmazonS3EncryptionService. The service APIs 
> should not be bundled with component implementations that reference it.}}
> {{2019-08-27 16:14:35,081 WARN [main] 
> o.a.n.n.StandardExtensionDiscoveringManager Component 
> org.apache.nifi.processors.aws.s3.PutS3Object is bundled with its referenced 
> Controller Service APIs 
> org.apache.nifi.processors.aws.s3.AmazonS3EncryptionService. The service APIs 
> should not be bundled with component implementations that reference it.}}
> {{2019-08-27 16:14:35,195 WARN [main] 
> o.a.n.n.StandardExtensionDiscoveringManager Controller Service 
> org.apache.nifi.processors.aws.s3.encryption.StandardS3EncryptionService is 
> bundled with its supporting APIs 
> org.apache.nifi.processors.aws.s3.AmazonS3EncryptionService. The service APIs 
> should not be bundled with the implementations.}}
>  





[GitHub] [nifi] asfgit closed pull request #3694: NIFI-6596 Moves AmazonS3EncryptionService interface to `nifi-aws-service-api` package

2019-09-12 Thread GitBox
asfgit closed pull request #3694: NIFI-6596 Moves AmazonS3EncryptionService 
interface to `nifi-aws-service-api` package
URL: https://github.com/apache/nifi/pull/3694
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-6596) Move AmazonS3EncryptionService to nifi-aws-service-api module

2019-09-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928950#comment-16928950
 ] 

ASF subversion and git services commented on NIFI-6596:
---

Commit 93e6f195d939ea0187f48f306a750ee41ac82de8 in nifi's branch 
refs/heads/master from Troy Melhase
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=93e6f19 ]

NIFI-6596 Moves AmazonS3EncryptionService interface
to `nifi-aws-service-api` package.

This closes #3694.

Signed-off-by: Koji Kawamura 


> Move AmazonS3EncryptionService to nifi-aws-service-api module
> -
>
> Key: NIFI-6596
> URL: https://issues.apache.org/jira/browse/NIFI-6596
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Troy Melhase
>Assignee: Troy Melhase
>Priority: Trivial
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Seeing logs like this:
> {{2019-08-27 16:14:35,075 WARN [main] 
> o.a.n.n.StandardExtensionDiscoveringManager Component 
> org.apache.nifi.processors.aws.s3.FetchS3Object is bundled with its 
> referenced Controller Service APIs 
> org.apache.nifi.processors.aws.s3.AmazonS3EncryptionService. The service APIs 
> should not be bundled with component implementations that reference it.}}
> {{2019-08-27 16:14:35,081 WARN [main] 
> o.a.n.n.StandardExtensionDiscoveringManager Component 
> org.apache.nifi.processors.aws.s3.PutS3Object is bundled with its referenced 
> Controller Service APIs 
> org.apache.nifi.processors.aws.s3.AmazonS3EncryptionService. The service APIs 
> should not be bundled with component implementations that reference it.}}
> {{2019-08-27 16:14:35,195 WARN [main] 
> o.a.n.n.StandardExtensionDiscoveringManager Controller Service 
> org.apache.nifi.processors.aws.s3.encryption.StandardS3EncryptionService is 
> bundled with its supporting APIs 
> org.apache.nifi.processors.aws.s3.AmazonS3EncryptionService. The service APIs 
> should not be bundled with the implementations.}}
>  





[jira] [Created] (NIFI-6662) Add Kudu Lookup Service

2019-09-12 Thread Sam Hjelmfelt (Jira)
Sam Hjelmfelt created NIFI-6662:
---

 Summary: Add Kudu Lookup Service
 Key: NIFI-6662
 URL: https://issues.apache.org/jira/browse/NIFI-6662
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Sam Hjelmfelt


Lookup service for Apache Kudu





[GitHub] [nifi] ijokarumawak commented on issue #3694: NIFI-6596 Moves AmazonS3EncryptionService interface to `nifi-aws-service-api` package

2019-09-12 Thread GitBox
ijokarumawak commented on issue #3694: NIFI-6596 Moves 
AmazonS3EncryptionService interface to `nifi-aws-service-api` package
URL: https://github.com/apache/nifi/pull/3694#issuecomment-531049360
 
 
   Thanks @natural for the fix. LGTM +1. Merging to master!




[GitHub] [nifi] natural commented on a change in pull request #3594: NIFI-3833 Support for Encrypted Flow File Repositories

2019-09-12 Thread GitBox
natural commented on a change in pull request #3594: NIFI-3833 Support for 
Encrypted Flow File Repositories
URL: https://github.com/apache/nifi/pull/3594#discussion_r323578236
 
 

 ##
 File path: 
nifi-commons/nifi-write-ahead-log/src/main/java/org/apache/nifi/wali/HashMapSnapshot.java
 ##
 @@ -264,10 +267,11 @@ public synchronized void writeSnapshot(final 
SnapshotCapture snapshot) throws
 }
 
 // Write to the partial file.
-try (final FileOutputStream fileOut = new 
FileOutputStream(getPartialFile());
 
 Review comment:
   ~~(in this specific case, however, moving the stream construction back into 
the try-with-resources declaration still causes test failures)~~
   
   resolved, all the declarations moved into the try-with-resources block.
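For illustration, the pattern the review comment refers to (every stream declared inside the try-with-resources header so each is closed, in reverse order, even when a later constructor or write fails) looks roughly like this. The class name and stream layering are assumptions for the sketch, not NiFi's actual HashMapSnapshot code:

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class SnapshotWriteSketch {
    // Declaring all three streams in the try-with-resources header means
    // each successfully-constructed stream is closed even if constructing
    // or writing to a later one throws. Names are illustrative.
    static void writeSnapshot(final File partialFile, final byte[] payload) throws IOException {
        try (final FileOutputStream fileOut = new FileOutputStream(partialFile);
             final BufferedOutputStream buffered = new BufferedOutputStream(fileOut);
             final DataOutputStream dataOut = new DataOutputStream(buffered)) {
            dataOut.writeInt(payload.length); // 4-byte length prefix
            dataOut.write(payload);
        } // all streams closed here, innermost first
    }
}
```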




[jira] [Created] (NIFI-6661) HandleHttpRequest - Failed to receive content from HTTP Request

2019-09-12 Thread William Gosse (Jira)
William Gosse created NIFI-6661:
---

 Summary: HandleHttpRequest - Failed to receive content from HTTP 
Request
 Key: NIFI-6661
 URL: https://issues.apache.org/jira/browse/NIFI-6661
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.9.2
Reporter: William Gosse


I have had a couple of occurrences of the following exception on a 
HandleHttpRequest processor:
nifi-app_2019-09-06_12.0.log:2019-09-06 12:24:48,132 ERROR [Timer-Driven 
Process Thread-3] o.a.n.p.standard.HandleHttpRequest 
HandleHttpRequest[id=6ceef915-4430-30fa-09d2-b12bb2142172] Failed to receive 
content from HTTP Request from 108.26.163.22 due to java.io.IOException: 
java.util.concurrent.TimeoutException: Idle timeout expired: 61/60 ms: 
java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout 
expired: 61/60 ms
 
When it occurs, the HandleHttpRequest processor stops accepting requests and I 
have to restart NiFi in order to recover.
 
Is there anything I can do to better handle this exception?

Also, I only see this happening with one of my users, who may be dealing with 
network latency.

I have not been able to recreate the issue myself with this exact exception 
message. However, the fact that it causes the HandleHttpRequest processor to 
stop functioning seems serious to me. 
The only time that ever happened to me is when it took too long to get to the 
HandleHttpResponse. I currently have the Request Expiration property for my 
StandardHttpContextMap set to 10 minutes. If this value is exceeded, the 
HandleHttpRequest hangs up. In this specific issue that timeout was also 
reached, but it seems that the flow never got beyond the HandleHttpRequest.






[GitHub] [nifi] joewitt commented on issue #3731: NIFI-6660: Fixed ordering of directory creation and timestamp gathering

2019-09-12 Thread GitBox
joewitt commented on issue #3731: NIFI-6660: Fixed ordering of directory 
creation and timestamp gathering
URL: https://github.com/apache/nifi/pull/3731#issuecomment-530999389
 
 
   haha +1 thanks for fixing that @markap14 




[GitHub] [nifi] markap14 opened a new pull request #3731: NIFI-6660: Fixed ordering of directory creation and timestamp gathering

2019-09-12 Thread GitBox
markap14 opened a new pull request #3731: NIFI-6660: Fixed ordering of 
directory creation and timestamp gathering
URL: https://github.com/apache/nifi/pull/3731
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Created] (NIFI-6660) Occasional test failure in TestIndexDirectoryManager

2019-09-12 Thread Mark Payne (Jira)
Mark Payne created NIFI-6660:


 Summary: Occasional test failure in TestIndexDirectoryManager
 Key: NIFI-6660
 URL: https://issues.apache.org/jira/browse/NIFI-6660
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne


Occasionally the TestIndexDirectoryManager test will fail, in the 
testGetDirectoriesBefore method. The assertion on Line 160 fails: 
`assertEquals(2, dirsBefore.size());`

This appears to be due to the fact that in lines 152 and 153, we obtain a 
timestamp, then create a directory, and the assertion assumes that the last 
modified date of the directory will be <= the timestamp. However, if the 
timestamp changes between when we obtain the timestamp and when the directory 
is created, this won't be true.
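The fix implied by the description, creating the directory first and only then capturing the timestamp, can be sketched as follows. This is a hypothetical helper for illustration, not the actual test code:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class TimestampOrderingSketch {
    // Sketch of the race: if the timestamp is captured *before* the
    // directory is created, dir.lastModified() may exceed it. Creating
    // the directory first and capturing the timestamp afterwards makes
    // the invariant "lastModified <= timestamp" hold.
    static boolean createdBeforeTimestamp(final File dir) throws IOException {
        Files.createDirectories(dir.toPath());              // create first...
        final long timestamp = System.currentTimeMillis();  // ...then capture
        return dir.lastModified() <= timestamp;
    }
}
```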





[GitHub] [nifi] mcgilman commented on issue #3728: NIFI-6381 - Make Parameters and Parameter Contexts searchable in UI

2019-09-12 Thread GitBox
mcgilman commented on issue #3728: NIFI-6381 - Make Parameters and Parameter 
Contexts searchable in UI
URL: https://github.com/apache/nifi/pull/3728#issuecomment-530994966
 
 
   Will review...




[jira] [Issue Comment Deleted] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Otto Fowler (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otto Fowler updated NIFI-6654:
--
Comment: was deleted

(was: If you use a second instance of the fields array, instead of re-using the 
same array this test passes.


{code:java}
@Test
public void testSchemaCompareSelfRef() {
    // Set up first schema
    final SimpleRecordSchema personSchema = new SimpleRecordSchema(SchemaIdentifier.EMPTY);
    final List<RecordField> personFields = new ArrayList<>();
    personFields.add(new RecordField("name", RecordFieldType.STRING.getDataType()));
    personFields.add(new RecordField("address", RecordFieldType.STRING.getDataType()));
    personFields.add(new RecordField("SIN", RecordFieldType.STRING.getDataType()));
    personFields.add(new RecordField("PersonalData", RecordFieldType.RECORD.getRecordDataType(personSchema)));
    personSchema.setFields(personFields);

    // Set up second schema. Must be a completely separate set of objects,
    // otherwise overloaded comparison operators (particularly that from the
    // SimpleRecordSchema class) will not be called.
    final SimpleRecordSchema secondSchema = new SimpleRecordSchema(SchemaIdentifier.EMPTY);
    final List<RecordField> personFields2 = new ArrayList<>();
    personFields2.add(new RecordField("name", RecordFieldType.STRING.getDataType()));
    personFields2.add(new RecordField("address", RecordFieldType.STRING.getDataType()));
    personFields2.add(new RecordField("SIN", RecordFieldType.STRING.getDataType()));
    personFields2.add(new RecordField("PersonalData", RecordFieldType.RECORD.getRecordDataType(secondSchema)));
    secondSchema.setFields(personFields2);
    assertTrue(personSchema.equals(secondSchema));
}
{code})

> Stack overflow with self referencing avro schema
> 
>
> Key: NIFI-6654
> URL: https://issues.apache.org/jira/browse/NIFI-6654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Ron Chittaro
>Priority: Major
>
> We (and our customers, which is why I marked this a blocker, as we cannot proceed 
> with 1.9.2 and in some cases product deployment at all) have run into a stack 
> overflow exception with the 'UpdateRecord' processor when upgrading from NiFi 
> 1.7 to 1.9.2. I did some digging and it happens when:
>  * A schema comparison operation is performed, AND
>  * the schema in question is self referential
> I was a bit confused initially as I found that a unit test case for this 
> already existed and was passing when I checked out and built 1.9.2. The test 
> case is:
> {quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
> TestSimpleRecordSchema.java located in: 
> nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
> {quote}
> However, if you dig a bit into this test case it is passing because the two 
> schemas being compared contain the same memory references for the fields 
> within them, thus comparisons do not exercise completely the underlying 
> equals() operators of the objects contained in the schema. See bold below:
> {quote}final SimpleRecordSchema schema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("sibling", 
> RecordFieldType.RECORD.getRecordDataType(schema)));
> *schema.setFields(personFields);*
> schema.hashCode();
>  assertTrue(schema.equals(schema));
> final SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  *secondSchema.setFields(personFields);*
>  assertTrue(schema.equals(secondSchema));
>  assertTrue(secondSchema.equals(schema));
> {quote}
>  
> To reproduce the stack overflow I wrote the following test case, which I 
> believe behaves more like it would in the 'UpdateRecord' processor (or anywhere 
> else a schema comparison is happening), where the schema object and all objects 
> within are completely different memory references.
>  
> @Test
> public void testSchemaCompareSelfRef() {
>  // Set up first schema
>  final SimpleRecordSchema personSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(personSchema)));
>  personSchema.setFields(personFields);
>  // Set up second schema. Must be 

[jira] [Commented] (NIFI-6619) RouteOnAttribute: Create new Routing Strategy to route only first rule that is true

2019-09-12 Thread Joseph Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928871#comment-16928871
 ] 

Joseph Witt commented on NIFI-6619:
---

A possible workaround to consider:

I believe UpdateAttribute's advanced UI lets you set rules with a specific order. 
That could set the relationship name you want, for instance, as a flowfile 
attribute. Then you could follow it with RouteOnAttribute using that attribute as 
the relationship name. Not saying the idea for RouteOnAttribute isn't good to do; 
just saying this might get you going.

Thanks

> RouteOnAttribute: Create new Routing Strategy to route only first rule that 
> is true
> ---
>
> Key: NIFI-6619
> URL: https://issues.apache.org/jira/browse/NIFI-6619
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.9.2
>Reporter: Raymond
>Priority: Major
> Attachments: image-2019-09-12-22-15-51-027.png
>
>
> Currently RouteOnAttribute has the strategy "Route to Property name". The 
> behavior is that for each rule that evaluates to true, a message (a clone of the 
> FlowFile) is sent to the next step. 
> I would like to have another strategy: "Route to first matched Property name" 
> (or "Route to Property name by first match", or "Route to first Property name 
> which evaluates to true").
> This will ensure that the next step gets exactly one message.
>  
>  
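To illustrate the difference between the current behavior and the proposed strategy, here is a hedged sketch (hypothetical rule names and Predicate-based rules for illustration only; this is not NiFi's actual RouteOnAttribute implementation):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class FirstMatchRouting {

    // Existing behavior: every rule that evaluates true receives a clone.
    static List<String> routeToAllMatches(
            Map<String, Predicate<Map<String, String>>> rules,
            Map<String, String> attributes) {
        List<String> destinations = new ArrayList<>();
        for (Map.Entry<String, Predicate<Map<String, String>>> e : rules.entrySet()) {
            if (e.getValue().test(attributes)) {
                destinations.add(e.getKey()); // one clone per matching rule
            }
        }
        return destinations;
    }

    // Proposed behavior: stop at the first rule that evaluates true.
    static List<String> routeToFirstMatch(
            Map<String, Predicate<Map<String, String>>> rules,
            Map<String, String> attributes) {
        for (Map.Entry<String, Predicate<Map<String, String>>> e : rules.entrySet()) {
            if (e.getValue().test(attributes)) {
                return List.of(e.getKey()); // exactly one message
            }
        }
        return List.of("unmatched");
    }

    public static void main(String[] args) {
        // LinkedHashMap preserves insertion order, standing in for rule order.
        Map<String, Predicate<Map<String, String>>> rules = new LinkedHashMap<>();
        rules.put("ruleA", attrs -> "csv".equals(attrs.get("format")));
        rules.put("ruleB", attrs -> attrs.containsKey("format"));

        Map<String, String> attributes = Map.of("format", "csv");

        // Both rules are true, so today the flowfile is cloned to two relationships...
        System.out.println(routeToAllMatches(rules, attributes)); // [ruleA, ruleB]
        // ...while the proposed strategy would emit exactly one message.
        System.out.println(routeToFirstMatch(rules, attributes)); // [ruleA]
    }
}
```

With both rules true, the existing strategy yields two downstream messages while the first-match strategy yields one, which is the behavior the ticket asks for.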



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928867#comment-16928867
 ] 

Otto Fowler commented on NIFI-6654:
---

If you use a second instance of the fields array, instead of re-using the same 
array, this test passes:


{code:java}
@Test
public void testSchemaCompareSelfRef() {
    // Set up first schema
    final SimpleRecordSchema personSchema = new SimpleRecordSchema(SchemaIdentifier.EMPTY);
    final List<RecordField> personFields = new ArrayList<>();
    personFields.add(new RecordField("name", RecordFieldType.STRING.getDataType()));
    personFields.add(new RecordField("address", RecordFieldType.STRING.getDataType()));
    personFields.add(new RecordField("SIN", RecordFieldType.STRING.getDataType()));
    personFields.add(new RecordField("PersonalData", RecordFieldType.RECORD.getRecordDataType(personSchema)));
    personSchema.setFields(personFields);

    // Set up second schema. Must be a completely separate set of objects,
    // otherwise overloaded comparison operators (particularly that from the
    // SimpleRecordSchema class) will not be called.
    final SimpleRecordSchema secondSchema = new SimpleRecordSchema(SchemaIdentifier.EMPTY);
    final List<RecordField> personFields2 = new ArrayList<>();
    personFields2.add(new RecordField("name", RecordFieldType.STRING.getDataType()));
    personFields2.add(new RecordField("address", RecordFieldType.STRING.getDataType()));
    personFields2.add(new RecordField("SIN", RecordFieldType.STRING.getDataType()));
    personFields2.add(new RecordField("PersonalData", RecordFieldType.RECORD.getRecordDataType(secondSchema)));
    secondSchema.setFields(personFields2);

    assertTrue(personSchema.equals(secondSchema));
}
{code}
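To make the failure mode concrete, here is a minimal standalone sketch (a hypothetical Node class, not NiFi code) of why equality checks on self-referential structures only terminate when an identity check short-circuits the recursion:

```java
import java.util.Objects;

// Hypothetical self-referential structure standing in for the record schema.
// hashCode is omitted for brevity in this sketch.
class Node {
    String name;
    Node child; // may point back to this node (self-reference)

    Node(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;              // identity short-circuit
        if (!(o instanceof Node)) return false;
        Node other = (Node) o;
        return Objects.equals(name, other.name)
            && Objects.equals(child, other.child); // recurses into the cycle
    }
}

public class SelfRefEqualsDemo {
    public static void main(String[] args) {
        Node a = new Node("person");
        a.child = a; // self-referencing, like the schema in the report

        // Same reference everywhere: equals() hits the identity check first.
        System.out.println(a.equals(a)); // true, no recursion

        // Structurally equal but distinct object graphs: equals() recurses forever.
        Node b = new Node("person");
        b.child = b;
        try {
            a.equals(b);
            System.out.println("no overflow");
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError");
        }
    }
}
```

This mirrors the report: the original unit test compared schemas sharing the same field objects, so reference-equality short-circuits hid the unbounded recursion; once the object graphs are distinct, the recursion never terminates.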

> Stack overflow with self referencing avro schema
> 
>
> Key: NIFI-6654
> URL: https://issues.apache.org/jira/browse/NIFI-6654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Ron Chittaro
>Priority: Major
>
> We (and our customers, which is why I originally marked this a blocker: we 
> cannot proceed with 1.9.2, and in some cases with product deployment at all) 
> have run into a stack overflow exception with the 'UpdateRecord' processor 
> when upgrading from NiFi 1.7 to 1.9.2. I did some digging and it happens when:
>  * A schema comparison operation is performed, AND
>  * the schema in question is self referential
> I was a bit confused initially as I found that a unit test case for this 
> already existed and was passing when I checked out and built 1.9.2. The test 
> case is:
> {quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
> TestSimpleRecordSchema.java located in: 
> nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
> {quote}
> However, if you dig a bit into this test case, it is passing because the two 
> schemas being compared contain the same memory references for the fields 
> within them; thus the comparisons do not completely exercise the underlying 
> equals() operators of the objects contained in the schema. See bold below:
> {quote}final SimpleRecordSchema schema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("sibling", 
> RecordFieldType.RECORD.getRecordDataType(schema)));
> *schema.setFields(personFields);*
> schema.hashCode();
>  assertTrue(schema.equals(schema));
> final SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  *secondSchema.setFields(personFields);*
>  assertTrue(schema.equals(secondSchema));
>  assertTrue(secondSchema.equals(schema));
> {quote}
>  
> To reproduce the stack overflow, I wrote the following test case, which I 
> believe behaves more like it would in the 'UpdateRecord' processor (or 
> anywhere else a schema comparison happens), where the schema object and all 
> objects within it are completely different memory references.
>  
> @Test
> public void testSchemaCompareSelfRef() {
>  // Set up first schema
>  final SimpleRecordSchema personSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(personSchema)));
>  personSchema.setFields(personFields);
>  // Set up second schema. Must 

[GitHub] [nifi] asfgit closed pull request #3730: Nifi 6649 - Repair Test and added check for log debug enabled

2019-09-12 Thread GitBox
asfgit closed pull request #3730: Nifi 6649 - Repair Test and added check for 
log debug enabled
URL: https://github.com/apache/nifi/pull/3730
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-6649) Back Pressure Prediction: Separate query interval configuration from prediction interval

2019-09-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928862#comment-16928862
 ] 

ASF subversion and git services commented on NIFI-6649:
---

Commit 5106b764da021149054231d08c25381c736964bb in nifi's branch 
refs/heads/master from Yolanda M. Davis
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=5106b76 ]

NIFI-6649 - repaired test due to min zero fix

NIFI-6649 - added check if logging debug is enabled

Signed-off-by: Matthew Burgess 

This closes #3730


> Back Pressure Prediction: Separate query interval configuration from 
> prediction interval
> 
>
> Key: NIFI-6649
> URL: https://issues.apache.org/jira/browse/NIFI-6649
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.10.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
>Priority: Major
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the interval setting `nifi.analytics.predict.interval` dictates 
> both how far back observations are queried and how far to look into the 
> future. These should be separated to allow users to project further into the 
> future using fewer past observations (or vice versa), for example allowing a 
> prediction 60 minutes into the future using only the last 5 minutes of data.
>  
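As a rough illustration of the requested split (a hypothetical standalone sketch, not NiFi's actual analytics code), the observation window and the prediction horizon can be independent inputs to the model fit:

```java
public class BackPressureProjection {

    /**
     * Ordinary least-squares line fit over the observed (time, count) pairs,
     * evaluated at timeAhead. The fit window (length of the arrays) and the
     * projection horizon (timeAhead) are independent of each other.
     */
    static double predict(double[] times, double[] counts, double timeAhead) {
        int n = times.length;
        double sumT = 0, sumC = 0, sumTT = 0, sumTC = 0;
        for (int i = 0; i < n; i++) {
            sumT += times[i];
            sumC += counts[i];
            sumTT += times[i] * times[i];
            sumTC += times[i] * counts[i];
        }
        double slope = (n * sumTC - sumT * sumC) / (n * sumTT - sumT * sumT);
        double intercept = (sumC - slope * sumT) / n;
        return slope * timeAhead + intercept;
    }

    public static void main(String[] args) {
        // 5 minutes of observations: (minute, queued flowfile count),
        // growing by 20 flowfiles per minute.
        double[] t = {0, 1, 2, 3, 4};
        double[] q = {100, 120, 140, 160, 180};
        // Query window: last 5 minutes; horizon: 60 minutes past the last sample.
        System.out.println(predict(t, q, 64)); // 100 + 20*64 = 1380.0
    }
}
```

The point of the sketch is only that the two knobs are separate arguments; how NiFi actually models the queue growth is up to its analytics implementation.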



--
This message was sent by Atlassian Jira
(v8.3.2#803003)



[jira] [Comment Edited] (NIFI-6619) RouteOnAttribute: Create new Routing Strategy to route only first rule that is true

2019-09-12 Thread Raymond (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928854#comment-16928854
 ] 

Raymond edited comment on NIFI-6619 at 9/12/19 8:00 PM:


Thanks for your reply. I think RouteByOrder is a valid use case as well. That 
routing strategy is, however, one step further from how the current processor 
works; as Joseph says, there is currently no ordering implied. I've seen such 
routing processors in other integration systems. They mostly work like this:

1) Define the rules

2) Set the order (mostly the user moves rules up and down)

3) The processor starts at the top rule and works down. If a rule is true, the 
processor stops further execution (if none is true, the flowfile goes to 
unmatched)

I think that use case would be better served by a separate routing processor, 
to avoid confusion.

 

The use case of this ticket is more that two rules, say rule A and rule B, can 
both be true, while the next step expects exactly one message. Currently you 
always get two messages. This can only be avoided with boolean logic in the 
Expression Language (workable for simple cases, but not with many rules based 
on multiple attributes) or by chaining multiple routing steps. The new routing 
strategy would make this possible, though if a RouteByOrder processor existed, 
this strategy would probably be less necessary.

 

 

 

 

 


was (Author: skin27):
Thanks for your reply. I think RouteByOrder is a valid use case as well. That 
routing strategy is, however, one step further from how the current processor 
works; as Joseph says, there is currently no ordering implied. I've seen such 
routing processors in other integration systems. They mostly work like this:

1) Define the rules

2) Set the order (mostly the user moves rules up and down)

3) The processor starts at the top rule and works down. If a rule is true, the 
processor stops further execution (if none is true, the flowfile goes to 
unmatched)

I think that use case would be better served by a separate routing processor, 
to avoid confusion.

 

The use case of this ticket is more that two rules, say rule A and rule B, can 
both be true, while the next step expects exactly one message. Currently you 
always get two messages. This can only be avoided with boolean logic in the 
Expression Language (workable for simple cases, but not with many rules based 
on multiple attributes) or by chaining multiple routing steps.

 

 

 

 

 

> RouteOnAttribute: Create new Routing Strategy to route only first rule that 
> is true
> ---
>
> Key: NIFI-6619
> URL: https://issues.apache.org/jira/browse/NIFI-6619
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.9.2
>Reporter: Raymond
>Priority: Major
> Attachments: image-2019-09-12-22-15-51-027.png
>
>
> Currently the RouteOnAttribute processor has the strategy "Route to Property name". 
> The behavior is that for each rule that is true, a message (a clone of the 
> flowfile) will be sent to the next step. 
> I would like to have another strategy:
> "Route to first matched Property name", "Route to Property name by first 
> match", or "Route to first Property name which evaluates true".
> This will ensure that the next step gets exactly one message.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [nifi] YolandaMDavis opened a new pull request #3730: Nifi 6649 - Repair Test and added check for log debug enabled

2019-09-12 Thread GitBox
YolandaMDavis opened a new pull request #3730: Nifi 6649 - Repair Test and 
added check for log debug enabled
URL: https://github.com/apache/nifi/pull/3730
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    #### Description of PR
   
   _Enables X functionality; fixes bug NIFI-YYYY._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-6619) RouteOnAttribute: Create new Routing Strategy to route only first rule that is true

2019-09-12 Thread Raymond (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928854#comment-16928854
 ] 

Raymond commented on NIFI-6619:
---

Thanks for your reply. I think RouteByOrder is a valid use case as well. That 
routing strategy is, however, one step further from how the current processor 
works; as Joseph says, there is currently no ordering implied. I've seen such 
routing processors in other integration systems. They mostly work like this:

1) Define the rules

2) Set the order (mostly the user moves rules up and down)

3) The processor starts at the top rule and works down. If a rule is true, the 
processor stops further execution (if none is true, the flowfile goes to 
unmatched)

I think that use case would be better served by a separate routing processor, 
to avoid confusion.

 

The use case of this ticket is more that two rules, say rule A and rule B, can 
both be true, while the next step expects exactly one message. Currently you 
always get two messages. This can only be avoided with boolean logic in the 
Expression Language (workable for simple cases, but not with many rules based 
on multiple attributes) or by chaining multiple routing steps.

 

 

 

 

 

> RouteOnAttribute: Create new Routing Strategy to route only first rule that 
> is true
> ---
>
> Key: NIFI-6619
> URL: https://issues.apache.org/jira/browse/NIFI-6619
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.9.2
>Reporter: Raymond
>Priority: Major
> Attachments: image-2019-09-12-22-15-51-027.png
>
>
> Currently the RouteOnAttribute processor has the strategy "Route to Property name". 
> The behavior is that for each rule that is true, a message (a clone of the 
> flowfile) will be sent to the next step. 
> I would like to have another strategy:
> "Route to first matched Property name", "Route to Property name by first 
> match", or "Route to first Property name which evaluates true".
> This will ensure that the next step gets exactly one message.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [nifi] bbende commented on a change in pull request #3723: NIFI-6656 Added a default visibility expression configuration item to…

2019-09-12 Thread GitBox
bbende commented on a change in pull request #3723: NIFI-6656 Added a default 
visibility expression configuration item to…
URL: https://github.com/apache/nifi/pull/3723#discussion_r323916207
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase-client-service-api/src/main/java/org/apache/nifi/hbase/mapcache/MapCacheVisibility.java
 ##
 @@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.hbase.mapcache;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+
+public interface MapCacheVisibility {
+PropertyDescriptor VISIBILITY_EXPRESSION = new PropertyDescriptor.Builder()
 
 Review comment:
   I would lean towards defining this property directly in each HBase map cache 
service. I realize it is the exact same property twice, but we already have the 
other properties duplicated.
   
   If we do that, then we don't need to implement this interface or introduce 
the additional dependencies to the API module.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] scottyaslan commented on issue #3729: NIFI-6659 - Open create new parameter context dialog in edit mode.

2019-09-12 Thread GitBox
scottyaslan commented on issue #3729: NIFI-6659 - Open create new parameter 
context dialog in edit mode.
URL: https://github.com/apache/nifi/pull/3729#issuecomment-530970662
 
 
   Reviewing...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (NIFI-6659) Create new parameter context option from process group config opens the new param context dialog in a read-only state.

2019-09-12 Thread Robert Fellows (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Fellows updated NIFI-6659:
-
Status: Patch Available  (was: In Progress)

> Create new parameter context option from process group config opens the new 
> param context dialog in a read-only state.
> --
>
> Key: NIFI-6659
> URL: https://issues.apache.org/jira/browse/NIFI-6659
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Robert Fellows
>Assignee: Robert Fellows
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [nifi] markap14 commented on a change in pull request #3725: NIFI-6558 Added Parameters to User Guide and Sys Admin Guide

2019-09-12 Thread GitBox
markap14 commented on a change in pull request #3725: NIFI-6558 Added 
Parameters to User Guide and Sys Admin Guide
URL: https://github.com/apache/nifi/pull/3725#discussion_r323904545
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/user-guide.adoc
 ##
 @@ -711,15 +711,187 @@ whatever comments are appropriate for this component. 
Use of the Comments tab is
 image::comments-tab.png["Comments Tab"]
 
 
-=== Additional Help
+==== Additional Help
 
 You can access additional documentation about each Processor's usage by 
right-clicking on the Processor and selecting 'Usage' from the context menu. 
Alternatively, select Help from the Global Menu in the top-right corner of the 
UI to display a Help page with all of the documentation, including usage 
documentation for all the Processors that are available. Click on the desired 
Processor to view usage documentation.
 
+[[Parameters]]
+=== Parameters
+The values of properties in the flow, including sensitive properties, can be 
parameterized using Parameters. Parameters are created and configured within 
the NiFi UI. Any property can be configured to reference a Parameter with the 
following conditions:
+
+ - A sensitive property can only reference a Sensitive Parameter
+ - A non-sensitive property can only reference a non-Sensitive Parameter
+ - Properties that reference Controller Services can not use Parameters
+
+NOTE: NiFi automatically picks up new or modified parameters.
+
+[[parameter-contexts]]
+==== Parameter Contexts
+Parameters are created within Parameter Contexts. Parameter Contexts are 
globally defined/accessible to the NiFi instance. Access policies can be 
applied to Parameter Contexts to determine which users can create them. Once 
created, policies to read and write to a specific Parameter Context can also be 
applied (see <> for more information).
+
+===== Creating a Parameter Context
+To create a Parameter Context, select Parameter Contexts from the Global Menu:
+
+image:parameter-contexts-selection.png["Global Menu - Parameter Contexts"]
+
+In the Parameter Contexts window, click the `+` button in the upper-right 
corner and the Add Parameter Context window opens. The window has two tabs: 
Settings and Parameters.
+
+image:parameter-contexts-settings.png["Parameter Contexts - Settings"]
+
+On the "Settings" tab, add a name for the Parameter Context and a description 
if desired.  Select "Apply" to save the Parameter Context or select the 
"Parameters" tab to add parameters to the context.
+
+==== Adding a Parameter to a Parameter Context
+Parameters can be added during Parameter Context creation or added to existing 
Parameter Contexts.
+
+During Parameter Context creation, select the "Parameters" tab. Click the `+` 
button to open the Add Parameter window.
+
+image:add-parameter-during-parameter-context-creation.png[Add Parameter]
+
+To add parameters to an existing Parameter Context, open the Parameter Context 
window and click the Edit button (image:iconEdit.png["Edit"]) in the row of the 
desired Parameter Context.
+
+image:edit-parameter-context.png[Edit Parameter Context]
+
+On the "Parameters" tab, click the `+` button to open the Add Parameter window.
+
+The Add Parameter window has the following settings:
+
+- *Name* - A name that is used to denote the Parameter. Only alpha-numeric 
characters (a-z, A-Z, 0-9), hyphens ( - ), underscores ( _ ), periods ( . ), 
and spaces are allowed.
+
+- *Value* - The value that will be used when the Parameter is referenced.
+
+- *Set empty string* - Check to explicitly set the value of the Parameter to an 
empty string. Unchecked by default.
+
+- *Sensitive Value* -  Set to "Yes" if the Parameter's Value should be 
considered sensitive. If sensitive, the value of the Parameter will not be 
shown in the UI once applied. The default setting is "No".
+
+- *Description* - A description that explains what the Parameter is, how it is 
to be used, etc. This field is optional.
+
+Once these settings are configured, select "Apply". Add additional Parameters 
or edit any existing Parameters.
+
+image:update-parameter-context.png[Update Parameter Context]
+
+To complete the process, select "Apply" from the Parameter Context window. The 
following operations are performed to validate all components that reference 
the added or modified parameters: Stopping/Restarting affected Processors, 
Disabling/Re-enabling affected Controller Services, Updating Parameter Context.
+
+image:parameters-validate-affected-components.png[Validate Affected Components]
+
+The Referencing Components section lists any components referencing the 
parameters in the parameter context organized by process group.
+
+[[assigning_parameter_context_to_PG]]
+==== Assigning a Parameter Context to a Process Group
+For a component to reference a Parameter, its Process Group must first be 
assigned to a Parameter Context. Once assigned, processors and controller 
services within that Process Group may only reference Parameters 

[GitHub] [nifi] markap14 commented on a change in pull request #3725: NIFI-6558 Added Parameters to User Guide and Sys Admin Guide

2019-09-12 Thread GitBox
markap14 commented on a change in pull request #3725: NIFI-6558 Added 
Parameters to User Guide and Sys Admin Guide
URL: https://github.com/apache/nifi/pull/3725#discussion_r323902790
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/user-guide.adoc
 ##

[GitHub] [nifi] markap14 commented on a change in pull request #3725: NIFI-6558 Added Parameters to User Guide and Sys Admin Guide

2019-09-12 Thread GitBox
markap14 commented on a change in pull request #3725: NIFI-6558 Added 
Parameters to User Guide and Sys Admin Guide
URL: https://github.com/apache/nifi/pull/3725#discussion_r323901654
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/user-guide.adoc
 ##
 @@ -711,15 +711,187 @@ whatever comments are appropriate for this component. 
Use of the Comments tab is
 image::comments-tab.png["Comments Tab"]
 
 
-=== Additional Help
+ Additional Help
 
 You can access additional documentation about each Processor's usage by 
right-clicking on the Processor and selecting 'Usage' from the context menu. 
Alternatively, select Help from the Global Menu in the top-right corner of the 
UI to display a Help page with all of the documentation, including usage 
documentation for all the Processors that are available. Click on the desired 
Processor to view usage documentation.
 
+[[Parameters]]
+=== Parameters
+The values of properties in the flow, including sensitive properties, can be 
parameterized using Parameters. Parameters are created and configured within 
the NiFi UI. Any property can be configured to reference a Parameter with the 
following conditions:
+
+ - A sensitive property can only reference a Sensitive Parameter
+ - A non-sensitive property can only reference a non-Sensitive Parameter
+ - Properties that reference Controller Services cannot use Parameters
+
+NOTE: NiFi automatically picks up new or modified parameters.
+
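The reference rules above can be sketched in a short illustration. The `#{...}` parameter reference syntax is NiFi's, but the resolver below is a hypothetical stand-in written for this example, not NiFi's implementation:

```python
import re

# Matches NiFi-style parameter references such as #{remote.host}.
# The character class mirrors the allowed Parameter name characters.
PARAM_REF = re.compile(r"#\{([A-Za-z0-9 ._-]+)\}")

def resolve(value, parameters, property_is_sensitive=False):
    """Replace #{name} references in a property value.

    `parameters` maps name -> (value, is_sensitive).
    """
    def substitute(match):
        name = match.group(1)
        if name not in parameters:
            raise KeyError(f"No parameter named '{name}'")
        param_value, is_sensitive = parameters[name]
        # A sensitive property may only reference a Sensitive
        # Parameter, and a non-sensitive property a non-Sensitive one.
        if is_sensitive != property_is_sensitive:
            raise ValueError(f"Sensitivity mismatch for '{name}'")
        return param_value
    return PARAM_REF.sub(substitute, value)

params = {"remote.host": ("example.com", False)}
print(resolve("http://#{remote.host}/data", params))  # http://example.com/data
```

Referencing a sensitive Parameter from a non-sensitive property (or vice versa) raises an error, matching the conditions listed above.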
+[[parameter-contexts]]
+==== Parameter Contexts
+Parameters are created within Parameter Contexts. Parameter Contexts are 
globally defined/accessible to the NiFi instance. Access policies can be 
applied to Parameter Contexts to determine which users can create them. Once 
created, policies to read and write to a specific Parameter Context can also be 
applied (see <> for more information).
+
+===== Creating a Parameter Context
+To create a Parameter Context, select Parameter Contexts from the Global Menu:
+
+image:parameter-contexts-selection.png["Global Menu - Parameter Contexts"]
+
+In the Parameter Contexts window, click the `+` button in the upper-right 
corner and the Add Parameter Context window opens. The window has two tabs: 
Settings and Parameters.
+
+image:parameter-contexts-settings.png["Parameter Contexts - Settings"]
+
+On the "Settings" tab, add a name for the Parameter Context and a description 
if desired.  Select "Apply" to save the Parameter Context or select the 
"Parameters" tab to add parameters to the context.
+
+===== Adding a Parameter to a Parameter Context
+Parameters can be added during Parameter Context creation or added to existing 
Parameter Contexts.
+
+During Parameter Context creation, select the "Parameters" tab. Click the `+` 
button to open the Add Parameter window.
+
+image:add-parameter-during-parameter-context-creation.png[Add Parameter]
+
+To add parameters to an existing Parameter Context, open the Parameter Context 
window and click the Edit button (image:iconEdit.png["Edit"]) in the row of the 
desired Parameter Context.
+
+image:edit-parameter-context.png[Edit Parameter Context]
+
+On the "Parameters" tab, click the `+` button to open the Add Parameter window.
+
+The Add Parameter window has the following settings:
+
+- *Name* - A name that is used to denote the Parameter. Only alpha-numeric 
characters (a-z, A-Z, 0-9), hyphens ( - ), underscores ( _ ), periods ( . ), 
and spaces are allowed.
+
+- *Value* - The value that will be used when the Parameter is referenced.
+
+- *Set empty string* - Check to explicitly set the value of the Parameter to 
an empty string. Unchecked by default.
+
+- *Sensitive Value* - Set to "Yes" if the Parameter's Value should be 
considered sensitive. If sensitive, the value of the Parameter will not be 
shown in the UI once applied. The default setting is "No".
+
+- *Description* - A description that explains what the Parameter is, how it is 
to be used, etc. This field is optional.
+
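The Name rule above (alpha-numeric characters, hyphens, underscores, periods, and spaces) can be expressed as a regular expression. The helper below is illustrative only and not part of NiFi:

```python
import re

# Allowed Parameter name characters per the rule above:
# a-z, A-Z, 0-9, hyphen, underscore, period, and space.
VALID_NAME = re.compile(r"^[A-Za-z0-9 ._-]+$")

def is_valid_parameter_name(name):
    """Return True if `name` uses only the allowed characters."""
    return bool(VALID_NAME.match(name))

assert is_valid_parameter_name("db.password_backup-1")
assert not is_valid_parameter_name("bad/name")  # slash is not allowed
assert not is_valid_parameter_name("")          # empty names rejected
```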
+Once these settings are configured, select "Apply". Add additional Parameters 
or edit any existing Parameters.
+
+image:update-parameter-context.png[Update Parameter Context]
+
+To complete the process, select "Apply" from the Parameter Context window. The 
following operations are then performed on all components that reference the 
added or modified Parameters: stopping/restarting affected Processors, 
disabling/re-enabling affected Controller Services, and updating the Parameter 
Context.
+
+image:parameters-validate-affected-components.png[Validate Affected Components]
+
+The Referencing Components section lists any components that reference the 
Parameters in the Parameter Context, organized by process group.
+
+[[assigning_parameter_context_to_PG]]
+==== Assigning a Parameter Context to a Process Group
+For a component to reference a Parameter, its Process Group must first be 
assigned to a Parameter Context. Once assigned, processors and controller 
services within that Process Group may only reference Parameters within that 
Parameter Context.

[GitHub] [nifi] markap14 commented on a change in pull request #3725: NIFI-6558 Added Parameters to User Guide and Sys Admin Guide

2019-09-12 Thread GitBox
markap14 commented on a change in pull request #3725: NIFI-6558 Added 
Parameters to User Guide and Sys Admin Guide
URL: https://github.com/apache/nifi/pull/3725#discussion_r323903298
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/user-guide.adoc
 ##

[GitHub] [nifi] markap14 commented on a change in pull request #3725: NIFI-6558 Added Parameters to User Guide and Sys Admin Guide

2019-09-12 Thread GitBox
markap14 commented on a change in pull request #3725: NIFI-6558 Added 
Parameters to User Guide and Sys Admin Guide
URL: https://github.com/apache/nifi/pull/3725#discussion_r323895112
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/user-guide.adoc
 ##
+- *Sensitive Value* -  Set to "Yes" if the Parameter's Value should be 
considered sensitive. If sensitive, the value of the Parameter will not be 
shown in the UI once applied. The default setting is "No".
 
 Review comment:
   It may make sense to reiterate here that sensitive parameters can only be 
referenced by sensitive properties and non-sensitive params by non-sensitive 
properties. It's also worth noting, I think, that once a Parameter is created, 
its sensitivity flag cannot be changed. The parameter would have to be deleted 
and recreated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] markap14 commented on a change in pull request #3725: NIFI-6558 Added Parameters to User Guide and Sys Admin Guide

2019-09-12 Thread GitBox
markap14 commented on a change in pull request #3725: NIFI-6558 Added 
Parameters to User Guide and Sys Admin Guide
URL: https://github.com/apache/nifi/pull/3725#discussion_r323894615
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/user-guide.adoc
 ##
+- *Set empty string* - Check to explicitly set the value of the Parameter to 
an empty string. Unchecked by default.
 
 Review comment:
   It's probably worth noting that if this is checked, and a value is set, the 
value wins, and the checkbox is ignored.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] markap14 commented on a change in pull request #3725: NIFI-6558 Added Parameters to User Guide and Sys Admin Guide

2019-09-12 Thread GitBox
markap14 commented on a change in pull request #3725: NIFI-6558 Added 
Parameters to User Guide and Sys Admin Guide
URL: https://github.com/apache/nifi/pull/3725#discussion_r323902652
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/user-guide.adoc
 ##

[GitHub] [nifi] markap14 commented on a change in pull request #3725: NIFI-6558 Added Parameters to User Guide and Sys Admin Guide

2019-09-12 Thread GitBox
markap14 commented on a change in pull request #3725: NIFI-6558 Added 
Parameters to User Guide and Sys Admin Guide
URL: https://github.com/apache/nifi/pull/3725#discussion_r323897283
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/user-guide.adoc
 ##

[GitHub] [nifi] markap14 commented on a change in pull request #3725: NIFI-6558 Added Parameters to User Guide and Sys Admin Guide

2019-09-12 Thread GitBox
markap14 commented on a change in pull request #3725: NIFI-6558 Added 
Parameters to User Guide and Sys Admin Guide
URL: https://github.com/apache/nifi/pull/3725#discussion_r323903724
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/user-guide.adoc
 ##
 @@ -711,15 +711,187 @@ whatever comments are appropriate for this component. 
Use of the Comments tab is
 image::comments-tab.png["Comments Tab"]
 
 
-=== Additional Help
+ Additional Help
 
 You can access additional documentation about each Processor's usage by 
right-clicking on the Processor and selecting 'Usage' from the context menu. 
Alternatively, select Help from the Global Menu in the top-right corner of the 
UI to display a Help page with all of the documentation, including usage 
documentation for all the Processors that are available. Click on the desired 
Processor to view usage documentation.
 
+[[Parameters]]
+=== Parameters
+The values of properties in the flow, including sensitive properties, can be 
parameterized using Parameters. Parameters are created and configured within 
the NiFi UI. Any property can be configured to reference a Parameter with the 
following conditions:
+
+ - A sensitive property can only reference a Sensitive Parameter
+ - A non-sensitive property can only reference a non-Sensitive Parameter
+ - Properties that reference Controller Services can not use Parameters
+
+NOTE: NiFi automatically picks up new or modified parameters.
+
+[[parameter-contexts]]
+ Parameter Contexts
+Parameters are created within Parameter Contexts. Parameter Contexts are 
globally defined/accessible to the NiFi instance. Access policies can be 
applied to Parameter Contexts to determine which users can create them. Once 
created, policies to read and write to a specific Parameter Context can also be 
applied (see <> for more information).
+
+= Creating a Parameter Context
+To create a Parameter Context, select Parameter Contexts from the Global Menu:
+
+image:parameter-contexts-selection.png["Global Menu - Parameter Contexts"]
+
+In the Parameter Contexts window, click the `+` button in the upper-right 
corner and the Add Parameter Context window opens. The window has two tabs: 
Settings and Parameters.
+
+image:parameter-contexts-settings.png["Parameter Contexts - Settings"]
+
+On the "Settings" tab, add a name for the Parameter Context and a description 
if desired.  Select "Apply" to save the Parameter Context or select the 
"Parameters" tab to add parameters to the context.
+
+ Adding a Parameter to a Parameter Context
+Parameters can be added during Parameter Context creation or added to existing 
Parameter Contexts.
+
+During Parameter Context creation, select the "Parameters" tab. Click the `+` 
button to open the Add Parameter window.
+
+image:add-parameter-during-parameter-context-creation.png[Add Parameter]
+
+To add parameters to an existing Parameter Context, open the Parameter Context 
window and click the Edit button (image:iconEdit.png["Edit"]) in the row of the 
desired Parameter Context.
+
+image:edit-parameter-context.png[Edit Parameter Context]
+
+On the "Parameters" tab, click the `+` button to open the Add Parameter window.
+
+The Add Parameter window has the following settings:
+
+- *Name* - A name that is used to denote the Parameter. Only alpha-numeric 
characters (a-z, A-Z, 0-9), hyphens ( - ), underscores ( _ ), periods ( . ), 
and spaces are allowed.
+
+- *Value* - The value that will be used when the Parameter is referenced.
+
+- *Set empty string* - Check to explicitly set the value of the Parameter to an 
empty string. Unchecked by default.
+
+- *Sensitive Value* -  Set to "Yes" if the Parameter's Value should be 
considered sensitive. If sensitive, the value of the Parameter will not be 
shown in the UI once applied. The default setting is "No".
+
+- *Description* - A description that explains what the Parameter is, how it is 
to be used, etc. This field is optional.
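The naming rule above can be checked mechanically. A minimal sketch (illustrative only, not NiFi code) using a regular expression that mirrors the documented character set:

```python
import re

# Illustrative only (not NiFi code): a validator mirroring the documented
# Parameter name rule: alpha-numerics, hyphens, underscores, periods, spaces.
PARAM_NAME = re.compile(r"^[A-Za-z0-9 ._-]+$")

def is_valid_parameter_name(name: str) -> bool:
    return bool(PARAM_NAME.fullmatch(name))

print(is_valid_parameter_name("db.password"))  # True
print(is_valid_parameter_name("bad#name"))     # False
```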
+
+Once these settings are configured, select "Apply". Add additional Parameters 
or edit any existing Parameters.
+
+image:update-parameter-context.png[Update Parameter Context]
+
+To complete the process, select "Apply" from the Parameter Context window. The 
following operations are performed to apply the change and validate all 
components that reference the added or modified parameters: affected Processors 
are stopped and restarted, affected Controller Services are disabled and 
re-enabled, and the Parameter Context is updated.
+
+image:parameters-validate-affected-components.png[Validate Affected Components]
+
+The Referencing Components section lists any components referencing the 
parameters in the parameter context organized by process group.
+
+[[assigning_parameter_context_to_PG]]
+=== Assigning a Parameter Context to a Process Group
+For a component to reference a Parameter, its Process Group must first be 
assigned to a Parameter Context. Once assigned, processors and controller 
services within that Process Group may only reference Parameters 

[GitHub] [nifi] rfellows opened a new pull request #3729: NIFI-6659 - Open create new parameter context dialog in edit mode.

2019-09-12 Thread GitBox
rfellows opened a new pull request #3729: NIFI-6659 - Open create new parameter 
context dialog in edit mode.
URL: https://github.com/apache/nifi/pull/3729
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   This fixes the bug that exists when attempting to create a new parameter 
context from the Process Group Configuration dialog where it was opening in a 
read-only mode.
   
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (NIFI-6659) Create new parameter context option from process group config opens the new param context dialog in a read-only state.

2019-09-12 Thread Robert Fellows (Jira)
Robert Fellows created NIFI-6659:


 Summary: Create new parameter context option from process group 
config opens the new param context dialog in a read-only state.
 Key: NIFI-6659
 URL: https://issues.apache.org/jira/browse/NIFI-6659
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core UI
Reporter: Robert Fellows
Assignee: Robert Fellows






--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (NIFI-6381) Make Parameters and Parameter Contexts searchable in UI

2019-09-12 Thread Robert Fellows (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Fellows updated NIFI-6381:
-
Status: Patch Available  (was: In Progress)

> Make Parameters and Parameter Contexts searchable in UI
> ---
>
> Key: NIFI-6381
> URL: https://issues.apache.org/jira/browse/NIFI-6381
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Robert Fellows
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Commented] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Ron Chittaro (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928817#comment-16928817
 ] 

Ron Chittaro commented on NIFI-6654:


Validating a fix for this.

> Stack overflow with self referencing avro schema
> 
>
> Key: NIFI-6654
> URL: https://issues.apache.org/jira/browse/NIFI-6654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Ron Chittaro
>Priority: Major
>
> We (and our customers, which is why I marked this a blocker, as we cannot 
> proceed with 1.9.2 and in some cases with product deployment at all) have run 
> into a stack overflow exception with the 'UpdateRecord' processor when 
> upgrading from NiFi 1.7 to 1.9.2. I did some digging and it happens when:
>  * A schema comparison operation is performed, AND
>  * the schema in question is self referential
> I was a bit confused initially as I found that a unit test case for this 
> already existed and was passing when I checked out and built 1.9.2. The test 
> case is:
> {quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
> TestSimpleRecordSchema.java located in: 
> nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
> {quote}
> However, if you dig a bit into this test case, it passes because the two 
> schemas being compared contain the same memory references for the fields 
> within them, so the comparisons do not completely exercise the underlying 
> equals() operators of the objects contained in the schema. See bold below:
> {quote}final SimpleRecordSchema schema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("sibling", 
> RecordFieldType.RECORD.getRecordDataType(schema)));
> *schema.setFields(personFields);*
> schema.hashCode();
>  assertTrue(schema.equals(schema));
> final SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  *secondSchema.setFields(personFields);*
>  assertTrue(schema.equals(secondSchema));
>  assertTrue(secondSchema.equals(schema));
> {quote}
>  
> To reproduce the stack overflow I wrote the following test case, which I 
> believe behaves more like it would in the 'UpdateRecord' processor (or 
> anywhere else a schema comparison happens), where the schema object and all 
> objects within are completely different memory references.
>  
> @Test
> public void testSchemaCompareSelfRef() {
>  // Set up first schema
>  final SimpleRecordSchema personSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(personSchema)));
>  personSchema.setFields(personFields);
>  // Set up second schema. Must be completely separate set of objects 
> otherwise overloaded comparison
>  // operators (particularly that from the SimpleRecordSchema class) will not 
> be called.
>  SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  personFields.clear();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(secondSchema)));
>  secondSchema.setFields(personFields);
>  assertTrue(personSchema.equals(secondSchema));
> }
>  
>  
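The overflow described above is the classic hazard of naive recursive equality on cyclic structures. A minimal sketch (illustrative Python, not the actual NiFi fix) of the usual remedy: track visited identity pairs so a cycle short-circuits instead of recursing forever.

```python
# Illustrative only: self-referential "schemas" and a cycle-safe equality
# check. The names Schema/equals are made up for this example.
class Schema:
    def __init__(self, name):
        self.name = name
        self.fields = []  # may contain Schema instances, including self

def equals(a, b, seen=None):
    seen = seen if seen is not None else set()
    if (id(a), id(b)) in seen:      # cycle detected: stop recursing here
        return True
    seen.add((id(a), id(b)))
    return (a.name == b.name
            and len(a.fields) == len(b.fields)
            and all(equals(x, y, seen) for x, y in zip(a.fields, b.fields)))

s1, s2 = Schema("person"), Schema("person")
s1.fields.append(s1)   # self-reference, like the 'sibling' field above
s2.fields.append(s2)
print(equals(s1, s2))  # True, with no stack overflow
```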





[GitHub] [nifi] rfellows opened a new pull request #3728: NIFI-6381 - Make Parameters and Parameter Contexts searchable in UI

2019-09-12 Thread GitBox
rfellows opened a new pull request #3728: NIFI-6381 - Make Parameters and 
Parameter Contexts searchable in UI
URL: https://github.com/apache/nifi/pull/3728
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   Adds parameter contexts and parameters to the things that can be searched.
   
   Parameter contexts match on name, id, and/or description. Selecting a 
Parameter context result opens the Parameter Context List dialog with the 
appropriate one selected in the table. Users must have READ access, otherwise 
that context (and parameters) will not be searched.
   
   Parameters match on name, description, and value (for non-sensitive 
parameters). Selecting a Parameter result opens the parameter context dialog 
and selects the parameter in the table.
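The matching rules above can be sketched as a simple predicate (illustrative only; the dictionary field names are assumptions, not the NiFi implementation):

```python
# Illustrative only: a parameter "matches" a search term on name, description,
# or value -- value only when the parameter is not sensitive, per the PR text.
def parameter_matches(param: dict, term: str) -> bool:
    term = term.lower()
    fields = [param.get("name", ""), param.get("description", "")]
    if not param.get("sensitive", False):
        fields.append(param.get("value", ""))
    return any(term in (f or "").lower() for f in fields)

p = {"name": "db.password", "value": "s3cret", "sensitive": True}
print(parameter_matches(p, "s3cret"))  # False: sensitive values not searched
print(parameter_matches(p, "db"))      # True: name matches
```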
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Updated] (NIFI-6658) Provide capability for obtaining diagnostic information for admins

2019-09-12 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-6658:
-
Fix Version/s: 1.10.0
   Status: Patch Available  (was: Open)

> Provide capability for obtaining diagnostic information for admins
> --
>
> Key: NIFI-6658
> URL: https://issues.apache.org/jira/browse/NIFI-6658
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When users run into issues, there are several different questions that we end 
> up asking very often:
>  * What version of NiFi?
>  * What does thread dump look like?
>  * What version of Java?
>  * Operating system info
>  * Disk space usage
>  * Cluster information
>  * How many open file handles are used / allowed?
> And several others that are along those same lines.
> We already have the ability for an admin to run `bin/nifi.sh dump ` 
> to gather the thread dump. We should expand this capability to provide more 
> than just a thread dump and to provide the answers to these common questions, 
> so that a user can obtain the diagnostic information easily with a single 
> command.
> It probably makes more sense to introduce a new command, `bin/nifi.sh 
> diagnostics ` rather than just adding this to the `dump` command 
> because there are times that we need to gather several thread dumps, and we 
> don't need to gather all of this information each time. Some may already have 
> scripts, etc. that are setup to parse the thread dumps, as well.
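Several answers in that list can already be gathered with standard tooling. A hedged sketch (illustrative only, not the proposed `diagnostics` command) collecting a few of them from Python's standard library:

```python
# Illustrative only: collect a few of the diagnostics listed above
# (OS info, disk space, open-file-handle limit) without NiFi involved.
# The 'resource' module is Unix-only.
import platform
import resource
import shutil

print("os:", platform.platform())

usage = shutil.disk_usage("/")
print("disk free bytes:", usage.free)

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open file handles allowed (soft/hard):", soft, hard)
```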





[GitHub] [nifi] markap14 opened a new pull request #3727: NIFI-6658: Implement new bin/nifi.sh diagnostics command that is resp…

2019-09-12 Thread GitBox
markap14 opened a new pull request #3727: NIFI-6658: Implement new bin/nifi.sh 
diagnostics command that is resp…
URL: https://github.com/apache/nifi/pull/3727
 
 
   …onsible for obtaining diagnostic information about many different parts of 
nifi, the operating system, etc.
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Created] (NIFI-6658) Provide capability for obtaining diagnostic information for admins

2019-09-12 Thread Mark Payne (Jira)
Mark Payne created NIFI-6658:


 Summary: Provide capability for obtaining diagnostic information 
for admins
 Key: NIFI-6658
 URL: https://issues.apache.org/jira/browse/NIFI-6658
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Mark Payne
Assignee: Mark Payne


When users run into issues, there are several different questions that we end 
up asking very often:
 * What version of NiFi?
 * What does thread dump look like?
 * What version of Java?
 * Operating system info
 * Disk space usage
 * Cluster information
 * How many open file handles are used / allowed?

And several others that are along those same lines.

We already have the ability for an admin to run `bin/nifi.sh dump ` 
to gather the thread dump. We should expand this capability to provide more 
than just a thread dump and to provide the answers to these common questions, 
so that a user can obtain the diagnostic information easily with a single 
command.

It probably makes more sense to introduce a new command, `bin/nifi.sh 
diagnostics ` rather than just adding this to the `dump` command 
because there are times that we need to gather several thread dumps, and we 
don't need to gather all of this information each time. Some may already have 
scripts, etc. that are setup to parse the thread dumps, as well.





[GitHub] [nifi] scottyaslan commented on issue #3718: NIFI-6630 - Add a goto action in the property table for properties th…

2019-09-12 Thread GitBox
scottyaslan commented on issue #3718: NIFI-6630 - Add a goto action in the 
property table for properties th…
URL: https://github.com/apache/nifi/pull/3718#issuecomment-530945623
 
 
   Thanks @rfellows this has been merged to master.




[jira] [Resolved] (NIFI-6630) Add a "Go To" button in property dialog that goes to a referenced parameter

2019-09-12 Thread Scott Aslan (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan resolved NIFI-6630.
---
Resolution: Fixed

> Add a "Go To" button in property dialog that goes to a referenced parameter
> ---
>
> Key: NIFI-6630
> URL: https://issues.apache.org/jira/browse/NIFI-6630
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Mark Payne
>Assignee: Robert Fellows
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We now have the ability to "promote" a property value to a Parameter, which 
> is extremely helpful. Once a property is referencing a Parameter, though, it 
> would be helpful to be able to click a button to jump to the referenced 
> parameter in the Parameter Context. If the property references multiple 
> parameters, perhaps it should just jump to the first parameter referenced.





[jira] [Created] (NIFI-6657) Update FDS Select to match other form field input style specifications

2019-09-12 Thread Scott Aslan (Jira)
Scott Aslan created NIFI-6657:
-

 Summary: Update FDS Select to match other form field input style 
specifications
 Key: NIFI-6657
 URL: https://issues.apache.org/jira/browse/NIFI-6657
 Project: Apache NiFi
  Issue Type: Improvement
  Components: FDS
Reporter: Scott Aslan
 Fix For: fds-0.3


The nifi fds [https://apache.github.io/nifi-fds/#Select] component should look 
and feel more like the [https://apache.github.io/nifi-fds/#Input] components, so 
that a select used in a form alongside other fds inputs does not look out of 
place.





[GitHub] [nifi] bbende commented on issue #3726: NIFI-5816 Switch SFTP processors to use SSHJ

2019-09-12 Thread GitBox
bbende commented on issue #3726: NIFI-5816 Switch SFTP processors to use SSHJ
URL: https://github.com/apache/nifi/pull/3726#issuecomment-530942458
 
 
   Still need to update the NOTICE for standard-processors and the assembly, 
will push another commit for that in a little bit.




[jira] [Updated] (NIFI-5816) SFTP cannot connect due to JSch limitations

2019-09-12 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-5816:
--
Status: Patch Available  (was: Open)

> SFTP cannot connect due to JSch limitations
> ---
>
> Key: NIFI-5816
> URL: https://issues.apache.org/jira/browse/NIFI-5816
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Laurenceau Julien
>Assignee: Bryan Bende
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hi,
> The JSch library used for SFTP does not support HostKeyAlgorithms=ed25519 
> whereas it is the current standard. This makes SFTP / SSH unusable when 
> dealing with recent openssh config.
> On dbeaver project they switched to sshj.
> [https://github.com/dbeaver/dbeaver/issues/2202]
> [https://community.hortonworks.com/answers/226377/view.html]
>  
> https://stackoverflow.com/questions/2003419/com-jcraft-jsch-jschexception-unknownhostkey
> One more argument against JSch is that it does not support rsa key length 
> other than default (2048).
> ssh-keygen -o -t rsa -b 4096 -f id_rsa -> does not work with nifi
> ssh-keygen -t rsa -f id_rsa -> works with nifi
> Thanks and regards
> JL
> PS : sorry but I do not know nifi deep enough to fill all fields.
>  
>  
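The ed25519 gap described above is easy to confirm locally. A hedged sketch (illustrative only) asking the OpenSSH client which key algorithms it supports, assuming `ssh` is on the PATH:

```python
# Illustrative only: list key algorithms the local OpenSSH client supports.
# 'ssh -Q key' is a standard OpenSSH query; if no client is found we say so.
import shutil
import subprocess

if shutil.which("ssh"):
    out = subprocess.run(["ssh", "-Q", "key"],
                         capture_output=True, text=True).stdout
    print("ssh-ed25519 supported:", "ssh-ed25519" in out)
else:
    print("OpenSSH client not found on PATH")
```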





[jira] [Assigned] (NIFI-5816) SFTP cannot connect due to JSch limitations

2019-09-12 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende reassigned NIFI-5816:
-

Assignee: Bryan Bende

> SFTP cannot connect due to JSch limitations
> ---
>
> Key: NIFI-5816
> URL: https://issues.apache.org/jira/browse/NIFI-5816
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Laurenceau Julien
>Assignee: Bryan Bende
>Priority: Minor
>
> Hi,
> The JSch library used for SFTP does not support HostKeyAlgorithms=ed25519 
> whereas it is the current standard. This makes SFTP / SSH unusable when 
> dealing with recent openssh config.
> On dbeaver project they switched to sshj.
> [https://github.com/dbeaver/dbeaver/issues/2202]
> [https://community.hortonworks.com/answers/226377/view.html]
>  
> https://stackoverflow.com/questions/2003419/com-jcraft-jsch-jschexception-unknownhostkey
> One more argument against JSch is that it does not support rsa key length 
> other than default (2048).
> ssh-keygen -o -t rsa -b 4096 -f id_rsa -> does not work with nifi
> ssh-keygen -t rsa -f id_rsa -> works with nifi
> Thanks and regards
> JL
> PS : sorry but I do not know nifi deep enough to fill all fields.
>  
>  





[GitHub] [nifi] bbende opened a new pull request #3726: NIFI-5816 Switch SFTP processors to use SSHJ

2019-09-12 Thread GitBox
bbende opened a new pull request #3726: NIFI-5816 Switch SFTP processors to use 
SSHJ
URL: https://github.com/apache/nifi/pull/3726
 
 
   I used the atmoz/sftp Docker container to test the SFTP processors:
   https://github.com/atmoz/sftp
   
   Generate Server Keys 
   ```
   ssh-keygen -t ed25519 -f ssh_host_ed25519_key < /dev/null
   ssh-keygen -t rsa -b 4096 -f ssh_host_rsa_key < /dev/null
   ```
   
   Generate User Keys
   ```
   ssh-keygen -t ed25519 -f foo_ed25519_key < /dev/null
   ssh-keygen -t rsa -b 4096 -f foo_rsa_key < /dev/null
   ```
   
   Launch Container
   ```
   docker run \
   -v /path/to/sftp-keys/foo_rsa_key.pub:/home/foo/.ssh/keys/foo_rsa_key.pub:ro 
\
   -v 
/path/to/sftp-keys/foo_ed25519_key.pub:/home/foo/.ssh/keys/foo_ed25519_key.pub:ro
 \
   -v /path/to/sftp-keys/ssh_host_ed25519_key:/etc/ssh/ssh_host_ed25519_key \
   -v /path/to/sftp-keys/ssh_host_rsa_key:/etc/ssh/ssh_host_rsa_key \
   -v /path/to/sftp-share:/home/foo/sftp-share \
   -p :22 -p 1080:1080 -d atmoz/sftp \
   foo::1001
   ```
   Then you can configure an SFTP processor with a host of 'localhost' and 
port of ''.
   




[jira] [Commented] (NIFI-6630) Add a "Go To" button in property dialog that goes to a referenced parameter

2019-09-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928760#comment-16928760
 ] 

ASF subversion and git services commented on NIFI-6630:
---

Commit 75c47388a611a2bb3ef398ad331cb0d05a88acd6 in nifi's branch 
refs/heads/master from Rob Fellows
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=75c4738 ]

NIFI-6630 - Add a goto action in the property table for properties that 
reference parameters.

NIFI-6630 - base convert and goto parameter logic on any reference to a 
parameter contained in the text of the property value.

This closes #3718

Signed-off-by: Scott Aslan 


> Add a "Go To" button in property dialog that goes to a referenced parameter
> ---
>
> Key: NIFI-6630
> URL: https://issues.apache.org/jira/browse/NIFI-6630
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Mark Payne
>Assignee: Robert Fellows
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We now have the ability to "promote" a property value to a Parameter, which 
> is extremely helpful. Once a property is referencing a Parameter, though, it 
> would be helpful to be able to click a button to jump to the referenced 
> parameter in the Parameter Context. If the property references multiple 
> parameters, perhaps it should just jump to the first parameter referenced.





[GitHub] [nifi] asfgit closed pull request #3718: NIFI-6630 - Add a goto action in the property table for properties th…

2019-09-12 Thread GitBox
asfgit closed pull request #3718: NIFI-6630 - Add a goto action in the property 
table for properties th…
URL: https://github.com/apache/nifi/pull/3718
 
 
   




[jira] [Commented] (NIFI-6630) Add a "Go To" button in property dialog that goes to a referenced parameter

2019-09-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928759#comment-16928759
 ] 

ASF subversion and git services commented on NIFI-6630:
---

Commit 75c47388a611a2bb3ef398ad331cb0d05a88acd6 in nifi's branch 
refs/heads/master from Rob Fellows
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=75c4738 ]

NIFI-6630 - Add a goto action in the property table for properties that 
reference parameters.

NIFI-6630 - base convert and goto parameter logic on any reference to a 
parameter contained in the text of the property value.

This closes #3718

Signed-off-by: Scott Aslan 


> Add a "Go To" button in property dialog that goes to a referenced parameter
> ---
>
> Key: NIFI-6630
> URL: https://issues.apache.org/jira/browse/NIFI-6630
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Mark Payne
>Assignee: Robert Fellows
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We now have the ability to "promote" a property value to a Parameter, which 
> is extremely helpful. Once a property is referencing a Parameter, though, it 
> would be helpful to be able to click a button to jump to the referenced 
> parameter in the Parameter Context. If the property references multiple 
> parameters, perhaps it should just jump to the first parameter referenced.





[GitHub] [nifi] mcgilman commented on a change in pull request #3637: NIFI-6400 Better options, consistent ids for ShellUserGroupProvider.

2019-09-12 Thread GitBox
mcgilman commented on a change in pull request #3637: NIFI-6400 Better options, 
consistent ids for ShellUserGroupProvider.
URL: https://github.com/apache/nifi/pull/3637#discussion_r323849994
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/authorizers.xml
 ##
 @@ -171,14 +171,15 @@
 on systems that support `sh`.  Implementations available for Linux and 
Mac OS, and are selected by the
 provider based on the system property `os.name`.
 
-'Initial Refresh Delay' - duration to wait before first refresh.  
Default is '5 mins'.
 'Refresh Delay' - duration to wait between subsequent refreshes.  
Default is '5 mins'.
+'Exclude Groups' - regular expression used to exclude groups.  Default 
is '', which means no groups are excluded.
+'Exclude Users' - regular expression used to exclude users.  Default 
is '', which means no users are excluded.
 -->
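The new 'Exclude Groups' / 'Exclude Users' properties take regular expressions. A hedged sketch (the pattern and user names are made up for illustration, not taken from the PR) of how such a regex would filter a user listing:

```python
import re

# Illustrative only: filter users the way an 'Exclude Users' regex would.
# The pattern and the user names below are invented for this example.
exclude_users = re.compile(r"^(nobody|daemon)$")
users = ["alice", "nobody", "daemon", "bob"]
kept = [u for u in users if not exclude_users.fullmatch(u)]
print(kept)  # ['alice', 'bob']
```

An empty pattern, the documented default, would match nothing and therefore exclude no one.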
 

[jira] [Resolved] (NIFI-6634) UI - Indicate variable are no longer recommended and favor parameters

2019-09-12 Thread Matt Gilman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-6634.
---
Fix Version/s: 1.10.0
   Resolution: Fixed

> UI - Indicate variables are no longer recommended and favor parameters
> -
>
> Key: NIFI-6634
> URL: https://issues.apache.org/jira/browse/NIFI-6634
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Robert Fellows
>Assignee: Robert Fellows
>Priority: Major
> Fix For: 1.10.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Variables are less powerful than parameters. Specifically, they don't support 
> sensitive values. The Variables dialog should convey this to the user to help 
> guide them toward using parameters instead.
> One suggestion for wording:
> "Variables are still supported for compatibility purposes but they do not 
> allow the same power as Parameters such as support for sensitive parameters.  
> Variables will be removed in a later release so please change to using 
> parameters."



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (NIFI-6634) UI - Indicate variables are no longer recommended and favor parameters

2019-09-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928718#comment-16928718
 ] 

ASF subversion and git services commented on NIFI-6634:
---

Commit 2f7448c9e5ab010520d0567b1bba9300415720ce in nifi's branch 
refs/heads/master from Rob Fellows
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2f7448c ]

NIFI-6634 - Indicate variables are no longer recommended and favor parameters

This closes #3721


> UI - Indicate variables are no longer recommended and favor parameters
> -
>
> Key: NIFI-6634
> URL: https://issues.apache.org/jira/browse/NIFI-6634
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Robert Fellows
>Assignee: Robert Fellows
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Variables are less powerful than parameters. Specifically, they don't support 
> sensitive values. The Variables dialog should convey this to the user to help 
> guide them toward using parameters instead.
> One suggestion for wording:
> "Variables are still supported for compatibility purposes but they do not 
> allow the same power as Parameters such as support for sensitive parameters.  
> Variables will be removed in a later release so please change to using 
> parameters."



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [nifi] mcgilman commented on issue #3721: NIFI-6634 - Indicate variables are no longer recommended and favor parameters

2019-09-12 Thread GitBox
mcgilman commented on issue #3721: NIFI-6634 - Indicate variables are no longer 
recommended and favor parameters
URL: https://github.com/apache/nifi/pull/3721#issuecomment-530909556
 
 
   Thanks @rfellows! This has been merged to master.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] asfgit closed pull request #3721: NIFI-6634 - Indicate variables are no longer recommended and favor parameters

2019-09-12 Thread GitBox
asfgit closed pull request #3721: NIFI-6634 - Indicate variables are no longer 
recommended and favor parameters
URL: https://github.com/apache/nifi/pull/3721
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] mattyb149 commented on a change in pull request #3684: NIFI-6295: Refactored NiFiRecordSerDe to handle nested complex types

2019-09-12 Thread GitBox
mattyb149 commented on a change in pull request #3684: NIFI-6295: Refactored 
NiFiRecordSerDe to handle nested complex types
URL: https://github.com/apache/nifi/pull/3684#discussion_r323839120
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive3-processors/src/test/java/org/apache/hive/streaming/TestNiFiRecordSerDe.java
 ##
 @@ -0,0 +1,387 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hive.streaming;
+
+import org.apache.hadoop.hive.common.type.Date;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.serde.serdeConstants;
+import org.apache.hadoop.hive.serde2.SerDeException;
+import org.apache.hadoop.io.ObjectWritable;
+import org.apache.nifi.avro.AvroTypeUtil;
+import org.apache.nifi.serialization.SimpleRecordSchema;
+import org.apache.nifi.serialization.record.DataType;
+import org.apache.nifi.serialization.record.MapRecord;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.util.MockComponentLog;
+import org.junit.Test;
+
+import java.nio.charset.StandardCharsets;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+
+public class TestNiFiRecordSerDe {
 
 Review comment:
   I took most of your proposed changes, thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] andrewmlim opened a new pull request #3725: NIFI-6558 Added Parameters to User Guide and Sys Admin Guide

2019-09-12 Thread GitBox
andrewmlim opened a new pull request #3725: NIFI-6558 Added Parameters to User 
Guide and Sys Admin Guide
URL: https://github.com/apache/nifi/pull/3725
 
 
   Also made the following changes in the User Guide:
   - Made references to buttons consistent (used quotes instead of preformatted 
text)
   - Edited Variables section and included a Warning for functionality being 
removed in future release
   - Made some minor edits/additions to recently added Analytics Prediction 
content


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (NIFI-6649) Back Pressure Prediction: Separate query interval configuration from prediction interval

2019-09-12 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-6649:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Back Pressure Prediction: Separate query interval configuration from 
> prediction interval
> 
>
> Key: NIFI-6649
> URL: https://issues.apache.org/jira/browse/NIFI-6649
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.10.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
>Priority: Major
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the interval setting `nifi.analytics.predict.interval` dictates 
> both how far back observations should be queried and how far to look into 
> the future. These should be separated to allow users to project further into 
> the future using fewer past observations (or vice versa). For example, this 
> would allow predicting 60 minutes into the future using the last 5 minutes of data.
>  
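
A sketch of what the split might look like in nifi.properties (the `nifi.analytics.query.interval` name is assumed from the commit message and should be verified against the Admin Guide):

```properties
# Sketch of the split configuration; verify names against the NiFi 1.10 Admin Guide.
nifi.analytics.predict.enabled=true
# Prediction horizon: how far into the future to predict back pressure.
nifi.analytics.predict.interval=60 mins
# Observation window: how much recent data to query for the model.
nifi.analytics.query.interval=5 mins
```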



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (NIFI-6649) Back Pressure Prediction: Separate query interval configuration from prediction interval

2019-09-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928699#comment-16928699
 ] 

ASF subversion and git services commented on NIFI-6649:
---

Commit 8e1452a3f342ee45dfd589d343c9ecf6a7cf6825 in nifi's branch 
refs/heads/master from Yolanda M. Davis
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=8e1452a ]

NIFI-6649 - added separate query interval configuration for observation queries
NIFI-6649 - documentation update

NIFI-6649 - add debug logging for score and prediction information

NIFI-6649 - fix to ensure counts return minimum value of 0 if not infinite or 
NaN

Signed-off-by: Matthew Burgess 

This closes #3719


> Back Pressure Prediction: Separate query interval configuration from 
> prediction interval
> 
>
> Key: NIFI-6649
> URL: https://issues.apache.org/jira/browse/NIFI-6649
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.10.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
>Priority: Major
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the interval setting `nifi.analytics.predict.interval` dictates 
> both how far back observations should be queried and how far to look into 
> the future. These should be separated to allow users to project further into 
> the future using fewer past observations (or vice versa). For example, this 
> would allow predicting 60 minutes into the future using the last 5 minutes of data.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [nifi] asfgit closed pull request #3719: NIFI-6649 - added separate query interval configuration for observati…

2019-09-12 Thread GitBox
asfgit closed pull request #3719: NIFI-6649 - added separate query interval 
configuration for observati…
URL: https://github.com/apache/nifi/pull/3719
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services



[GitHub] [nifi] tpalfy opened a new pull request #3724: NIFI-6640 - UNION/CHOICE types not handled correctly

2019-09-12 Thread GitBox
tpalfy opened a new pull request #3724: NIFI-6640 - UNION/CHOICE types not 
handled correctly
URL: https://github.com/apache/nifi/pull/3724
 
 
   3 important changes:
   1. FieldTypeInference had a bug when dealing with multiple datatypes for
the same field, where some (but not all) of them were in a
wider-than-the-other relationship.
Before: some datatypes could be lost, and String was considered wider than any other.
After: consistent behaviour. String is NOT considered wider than any other.
   2. Choosing a datatype for a value from a ChoiceDataType:
Before, it chose the first compatible datatype as the basis of conversion.
After the change, it tries to find the most suitable datatype.
   3. Conversion of a value of an Avro union type:
Before, it chose the first compatible datatype as the basis of conversion.
After the change, it tries to find the most suitable datatype.
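
The difference between the old first-compatible strategy and the new most-suitable strategy described above can be sketched like this (a simplified illustration with made-up types, not NiFi's actual record-utils code):

```java
import java.util.List;
import java.util.Optional;

// Toy stand-in for NiFi's record data types; only the resolution
// strategy is the point here.
enum SimpleType {
    INT, LONG, DOUBLE, STRING;

    boolean isExactMatch(Object v) {
        switch (this) {
            case INT:    return v instanceof Integer;
            case LONG:   return v instanceof Long;
            case DOUBLE: return v instanceof Double;
            default:     return v instanceof String;
        }
    }

    boolean canCoerce(Object v) {
        // STRING can represent any value; numeric widening elided for brevity.
        return this == SimpleType.STRING || isExactMatch(v);
    }
}

final class ChoiceResolver {
    // Old behavior: the first candidate that can hold the value wins,
    // which loses type information whenever STRING appears first.
    static SimpleType firstCompatible(Object v, List<SimpleType> candidates) {
        return candidates.stream().filter(t -> t.canCoerce(v)).findFirst().orElse(null);
    }

    // New behavior: prefer an exact runtime match, fall back to coercion.
    static SimpleType mostSuitable(Object v, List<SimpleType> candidates) {
        Optional<SimpleType> exact =
                candidates.stream().filter(t -> t.isExactMatch(v)).findFirst();
        return exact.orElseGet(() -> firstCompatible(v, candidates));
    }
}

public class ChoiceDemo {
    public static void main(String[] args) {
        List<SimpleType> union = List.of(SimpleType.STRING, SimpleType.INT);
        System.out.println(ChoiceResolver.firstCompatible(42, union)); // STRING (lossy)
        System.out.println(ChoiceResolver.mostSuitable(42, union));    // INT
    }
}
```

With a union of `[STRING, INT]` and the value `42`, the old strategy picks STRING because it is listed first and can coerce anything; the new strategy picks INT because it matches the value exactly.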
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Ron Chittaro (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928618#comment-16928618
 ] 

Ron Chittaro commented on NIFI-6654:


[~joewitt] Sorry, I took PR to mean 'Problem Report'...not 'Pull Request'. I 
don't have a fix for this in hand as this is the first time I have looked at 
this code. 

> Stack overflow with self referencing avro schema
> 
>
> Key: NIFI-6654
> URL: https://issues.apache.org/jira/browse/NIFI-6654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Ron Chittaro
>Priority: Major
>
> We (and our customers, which is why I marked this as a blocker: we cannot 
> proceed with 1.9.2, and in some cases with product deployment at all) have 
> run into a stack overflow exception with the 'UpdateRecord' processor when 
> upgrading from NiFi 1.7 to 1.9.2. I did some digging and it happens when:
>  * A schema comparison operation is performed, AND
>  * the schema in question is self referential
> I was a bit confused initially as I found that a unit test case for this 
> already existed and was passing when I checked out and built 1.9.2. The test 
> case is:
> {quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
> TestSimpleRecordSchema.java located in: 
> nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
> {quote}
> However, if you dig a bit into this test case it is passing because the two 
> schemas being compared contain the same memory references for the fields 
> within them, thus comparisons do not exercise completely the underlying 
> equals() operators of the objects contained in the schema. See bold below:
> {quote}final SimpleRecordSchema schema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("sibling", 
> RecordFieldType.RECORD.getRecordDataType(schema)));
> *schema.setFields(personFields);*
> schema.hashCode();
>  assertTrue(schema.equals(schema));
> final SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  *secondSchema.setFields(personFields);*
>  assertTrue(schema.equals(secondSchema));
>  assertTrue(secondSchema.equals(schema));
> {quote}
>  
> To reproduce the stack overflow I wrote the following test case, which I 
> believe behaves more like it would in the 'UpdateRecord' processor (or 
> anywhere else a schema comparison happens), where the schema object and all 
> objects within it are completely different memory references.
>  
> @Test
> public void testSchemaCompareSelfRef() {
>  // Set up first schema
>  final SimpleRecordSchema personSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(personSchema)));
>  personSchema.setFields(personFields);
>  // Set up second schema. Must be completely separate set of objects 
> otherwise overloaded comparison
>  // operators (particularly that from the SimpleRecordSchema class) will not 
> be called.
>  SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  personFields.clear();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(secondSchema)));
>  secondSchema.setFields(personFields);
>  assertTrue(personSchema.equals(secondSchema));
> }
>  
>  
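
One way to make such a recursive comparison terminate (a hypothetical sketch, not the actual NiFi fix) is to track the pairs of schemas already being compared in an identity map, so a self-referencing schema short-circuits instead of recursing until the stack overflows:

```java
import java.util.ArrayList;
import java.util.IdentityHashMap;
import java.util.List;

// Simplified stand-ins for SimpleRecordSchema/RecordField; names and
// structure here are illustrative only.
final class Field {
    final String name;
    final Schema ref; // non-null when the field is itself record-typed
    Field(String name, Schema ref) { this.name = name; this.ref = ref; }
}

final class Schema {
    final List<Field> fields = new ArrayList<>();

    boolean deepEquals(Schema other) {
        return deepEquals(other, new IdentityHashMap<>());
    }

    private boolean deepEquals(Schema other, IdentityHashMap<Schema, Schema> seen) {
        if (this == other) return true;
        // Already comparing this exact pair: assume equal to break the cycle.
        if (seen.get(this) == other) return true;
        seen.put(this, other);
        if (fields.size() != other.fields.size()) return false;
        for (int i = 0; i < fields.size(); i++) {
            final Field a = fields.get(i);
            final Field b = other.fields.get(i);
            if (!a.name.equals(b.name)) return false;
            if ((a.ref == null) != (b.ref == null)) return false;
            if (a.ref != null && !a.ref.deepEquals(b.ref, seen)) return false;
        }
        return true;
    }
}

public class SelfRefEqualsDemo {
    public static void main(String[] args) {
        Schema a = new Schema();
        a.fields.add(new Field("name", null));
        a.fields.add(new Field("sibling", a)); // self-reference

        Schema b = new Schema();
        b.fields.add(new Field("name", null));
        b.fields.add(new Field("sibling", b)); // self-reference

        System.out.println(a.deepEquals(b)); // prints "true" and terminates
    }
}
```

Unlike the shared-reference unit test quoted above, the two schemas here are entirely separate object graphs, so the comparison genuinely recurses into the self-reference and the cycle guard is exercised.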



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [nifi] adarmiento commented on issue #3698: WIP: NIFI-6628: Separate out logging of extensions vs. nifi framework

2019-09-12 Thread GitBox
adarmiento commented on issue #3698: WIP: NIFI-6628: Separate out logging of 
extensions vs. nifi framework 
URL: https://github.com/apache/nifi/pull/3698#issuecomment-530870492
 
 
   Hello @markap14, and sorry for my late reply. 
   
   I started looking at the 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework, right now, I added some 
loggers for the packages in the nifi-framework-core module. 
   Since all the packages are of the form org.apache.nifi.[something], what 
I've done in this commit seems like a very verbose and error-prone solution. 
   What do you think about it? Should I keep diving into packages looking for 
framework-related ones, or could there be a less verbose way?
   Thank you!
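
For illustration, the per-package approach described above would mean declaring a non-additive logger for each framework package in conf/logback.xml, something like the following (logger names and appender refs here are assumptions, not the PR's actual diff):

```xml
<!-- Illustrative logback.xml fragment; one entry per framework package. -->
<logger name="org.apache.nifi.controller" level="INFO" additivity="false">
    <appender-ref ref="APP_FILE"/>
</logger>
<logger name="org.apache.nifi.processors" level="INFO" additivity="false">
    <appender-ref ref="EXTENSIONS_FILE"/>
</logger>
```

Every new framework package would need its own entry, which is exactly the verbosity concern raised in the comment.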


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] MikeThomsen commented on issue #3723: NIFI-6656 Added a default visibility expression configuration item to…

2019-09-12 Thread GitBox
MikeThomsen commented on issue #3723: NIFI-6656 Added a default visibility 
expression configuration item to…
URL: https://github.com/apache/nifi/pull/3723#issuecomment-530868240
 
 
   
[MapCache_Visibility.txt](https://github.com/apache/nifi/files/3606151/MapCache_Visibility.txt)
   
   Test template. This is what the `hbase-site.xml` for hbase 2.0.5 looks like 
to test:
   
   ```xml
   <configuration>
     <property>
       <name>hbase.security.authorization</name>
       <value>true</value>
     </property>
     <property>
       <name>hbase.coprocessor.region.classes</name>
       <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
     </property>
     <property>
       <name>hbase.coprocessor.master.classes</name>
       <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
     </property>
   </configuration>
   ```
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Ron Chittaro (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Chittaro updated NIFI-6654:
---
Description: 
We (and our customers, which is why I marked this as a blocker: we cannot 
proceed with 1.9.2, and in some cases with product deployment at all) have run 
into a stack overflow exception with the 'UpdateRecord' processor when 
upgrading from NiFi 1.7 to 1.9.2. I did some digging and it happens when:
 * A schema comparison operation is performed, AND
 * the schema in question is self referential

I was a bit confused initially as I found that a unit test case for this 
already existed and was passing when I checked out and built 1.9.2. The test 
case is:
{quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
TestSimpleRecordSchema.java located in: 
nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
{quote}
However, if you dig a bit into this test case it is passing because the two 
schemas being compared contain the same memory references for the fields within 
them, thus comparisons do not exercise completely the underlying equals() 
operators of the objects contained in the schema. See bold below:
{quote}final SimpleRecordSchema schema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);

final List<RecordField> personFields = new ArrayList<>();
 personFields.add(new RecordField("name", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("sibling", 
RecordFieldType.RECORD.getRecordDataType(schema)));

*schema.setFields(personFields);*

schema.hashCode();
 assertTrue(schema.equals(schema));

final SimpleRecordSchema secondSchema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
 *secondSchema.setFields(personFields);*
 assertTrue(schema.equals(secondSchema));
 assertTrue(secondSchema.equals(schema));
{quote}
 

To reproduce the stack overflow I wrote the following test case, which I 
believe behaves more like it would in the 'UpdateRecord' processor (or anywhere 
else a schema comparison happens), where the schema object and all objects 
within it are completely different memory references.

 

@Test
public void testSchemaCompareSelfRef() {

 // Set up first schema
 final SimpleRecordSchema personSchema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
 final List<RecordField> personFields = new ArrayList<>();
 personFields.add(new RecordField("name", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("address", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("SIN", RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("PersonalData", 
RecordFieldType.RECORD.getRecordDataType(personSchema)));
 personSchema.setFields(personFields);

 // Set up second schema. Must be completely separate set of objects otherwise 
overloaded comparison
 // operators (particularly that from the SimpleRecordSchema class) will not be 
called.


 SimpleRecordSchema secondSchema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
 personFields.clear();


 personFields.add(new RecordField("name", 
RecordFieldType.STRING.getDataType()));


 personFields.add(new RecordField("address", 
RecordFieldType.STRING.getDataType()));


 personFields.add(new RecordField("SIN", RecordFieldType.STRING.getDataType()));


 personFields.add(new RecordField("PersonalData", 
RecordFieldType.RECORD.getRecordDataType(secondSchema)));
 secondSchema.setFields(personFields);

 assertTrue(personSchema.equals(secondSchema));
}

 

 

  was:
We (and our customers, which is why I marked this as a blocker: we cannot 
proceed with 1.9.2, and in some cases with product deployment at all) have run 
into a stack overflow exception with the 'UpdateRecord' processor when 
upgrading from NiFi 1.7 to 1.9.2. I did some digging and it happens when:
 * A schema comparison operation is performed, AND
 * the schema in question is self referential

I was a bit confused initially as I found that a unit test case for this 
already existed and was passing when I checked out and built 1.9.2. The test 
case is:
{quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
TestSimpleRecordSchema.java located in: 
nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
{quote}
However, if you dig a bit into this test case it is passing because the two 
schemas being compared contain the same memory references for the fields within 
them, thus comparisons do not exercise completely the underlying equals() 
operators of the objects contained in the schema. See bold below:
{quote}final SimpleRecordSchema schema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);

final List<RecordField> personFields = new ArrayList<>();
 personFields.add(new RecordField("name", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("sibling", 
RecordFieldType.RECORD.getRecordDataType(schema)));

*schema.setFields(personFields);*

schema.hashCode();
 assertTrue(schema.equals(schema));

final 

[jira] [Updated] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Ron Chittaro (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Chittaro updated NIFI-6654:
---
Description: 
We (and our customers, which is why I marked this as a blocker: we cannot 
proceed with 1.9.2, and in some cases with product deployment at all) have run 
into a stack overflow exception with the 'UpdateRecord' processor when 
upgrading from NiFi 1.7 to 1.9.2. I did some digging and it happens when:
 * A schema comparison operation is performed, AND
 * the schema in question is self referential

I was a bit confused initially as I found that a unit test case for this 
already existed and was passing when I checked out and built 1.9.2. The test 
case is:
{quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
TestSimpleRecordSchema.java located in: 
nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
{quote}
However, if you dig a bit into this test case, it passes because the two 
schemas being compared contain the same memory references for the fields within 
them, so the comparisons do not fully exercise the underlying equals() 
methods of the objects contained in the schema. See bold below:
{quote}final SimpleRecordSchema schema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);

final List<RecordField> personFields = new ArrayList<>();
 personFields.add(new RecordField("name", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("sibling", 
RecordFieldType.RECORD.getRecordDataType(schema)));

*schema.setFields(personFields);*

schema.hashCode();
 assertTrue(schema.equals(schema));

final SimpleRecordSchema secondSchema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
 *secondSchema.setFields(personFields);*
 assertTrue(schema.equals(secondSchema));
 assertTrue(secondSchema.equals(schema));
{quote}
 

To reproduce the stack overflow I wrote the following test case, which I 
believe behaves more like it would in the 'UpdateRecord' processor (or anywhere 
else a schema comparison is happening), where the schema object and all objects 
within it are completely different memory references.

 

@Test
public void testSchemaCompareSelfRef() {

 // Set up first schema
 final SimpleRecordSchema personSchema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
 final List<RecordField> personFields = new ArrayList<>();
 personFields.add(new RecordField("name", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("address", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("SIN", RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("PersonalData", 
RecordFieldType.RECORD.getRecordDataType(personSchema)));
 personSchema.setFields(personFields);

 // Set up second schema. It must be a completely separate set of objects, 
otherwise the overridden
 // equals() method (particularly SimpleRecordSchema's) will not be called.
 SimpleRecordSchema secondSchema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
 personFields.clear();
 personFields.add(new RecordField("name", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("address", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("SIN", RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("PersonalData", 
RecordFieldType.RECORD.getRecordDataType(secondSchema)));

 secondSchema.setFields(personFields);

 assertTrue(personSchema.equals(secondSchema));
}
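A cycle guard in equals() and hashCode() is one way to avoid the overflow. The class below is a minimal illustrative sketch (hypothetical names, not the actual NiFi fix): when a field's type is the schema itself, the comparison checks that both sides are self-references instead of recursing into them, and hashCode() hashes only the non-recursive part.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a record schema whose fields may refer back to
// the schema itself. Field types are either a type name (String) or a
// nested SketchSchema.
final class SketchSchema {
    private final List<String> fieldNames = new ArrayList<>();
    private final List<Object> fieldTypes = new ArrayList<>();

    void addField(final String name, final Object type) {
        fieldNames.add(name);
        fieldTypes.add(type);
    }

    @Override
    public boolean equals(final Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof SketchSchema)) {
            return false;
        }
        final SketchSchema other = (SketchSchema) obj;
        if (!fieldNames.equals(other.fieldNames)) {
            return false;
        }
        for (int i = 0; i < fieldTypes.size(); i++) {
            final Object mine = fieldTypes.get(i);
            final Object theirs = other.fieldTypes.get(i);
            final boolean mineIsSelf = mine == this;
            final boolean theirsIsSelf = theirs == other;
            if (mineIsSelf || theirsIsSelf) {
                // Treat matching self-references as equal; never recurse into them.
                if (mineIsSelf != theirsIsSelf) {
                    return false;
                }
            } else if (!mine.equals(theirs)) {
                return false;
            }
        }
        return true;
    }

    @Override
    public int hashCode() {
        // Hash only the non-recursive part so a self-reference cannot recurse.
        return fieldNames.hashCode();
    }
}
```

With this guard, two independently constructed self-referential schemas compare equal without re-entering equals() through the self-referencing field, which is exactly the shape of the failing test above.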

 


[GitHub] [nifi] MikeThomsen commented on issue #3723: NIFI-6656 Added a default visibility expression configuration item to…

2019-09-12 Thread GitBox
MikeThomsen commented on issue #3723: NIFI-6656 Added a default visibility 
expression configuration item to…
URL: https://github.com/apache/nifi/pull/3723#issuecomment-530866821
 
 
   @bbende @ijokarumawak could one of you review?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Joseph Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928612#comment-16928612
 ] 

Joseph Witt commented on NIFI-6654:
---

This is a good guide 
https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide

Welcome to the NiFi community.

Thanks

> Stack overflow with self referencing avro schema
> 
>
> Key: NIFI-6654
> URL: https://issues.apache.org/jira/browse/NIFI-6654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Ron Chittaro
>Priority: Major
>


--
This message was sent by Atlassian Jira
(v8.3.2#803003)



[GitHub] [nifi] MikeThomsen opened a new pull request #3723: NIFI-6656 Added a default visibility expression configuration item to…

2019-09-12 Thread GitBox
MikeThomsen opened a new pull request #3723: NIFI-6656 Added a default 
visibility expression configuration item to…
URL: https://github.com/apache/nifi/pull/3723
 
 
   … the HBase map cache client services.
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
`.name` (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   





[jira] [Created] (NIFI-6656) Add support for setting HBase visibility expressions to HBase map cache clients

2019-09-12 Thread Mike Thomsen (Jira)
Mike Thomsen created NIFI-6656:
--

 Summary: Add support for setting HBase visibility expressions to 
HBase map cache clients
 Key: NIFI-6656
 URL: https://issues.apache.org/jira/browse/NIFI-6656
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen
Assignee: Mike Thomsen


Add a field to the map cache controller services to define what visibility 
expression should be applied to them.
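For context on what such a configuration item would hold: an HBase visibility expression is a boolean combination of labels that a user's authorizations must satisfy. The evaluator below is purely illustrative (it is not HBase's implementation, and handles only '&', '|' and parentheses over alphanumeric labels):

```java
import java.util.Set;

// Illustrative recursive-descent check of a simplified visibility expression,
// e.g. "PII&(ADMIN|AUDIT)", against a user's set of authorization labels.
// Here '|' is parsed at the top level and '&' binds tighter; HBase's real
// grammar also supports '!' and quoted labels, which this sketch omits.
final class VisibilityCheck {
    private final String expr;
    private final Set<String> auths;
    private int pos;

    VisibilityCheck(final String expr, final Set<String> auths) {
        this.expr = expr;
        this.auths = auths;
    }

    boolean evaluate() {
        final boolean result = parseOr();
        if (pos != expr.length()) {
            throw new IllegalArgumentException("trailing input at " + pos);
        }
        return result;
    }

    private boolean parseOr() {
        boolean left = parseAnd();
        while (pos < expr.length() && expr.charAt(pos) == '|') {
            pos++;
            // Always parse the right side so the cursor advances, then combine.
            left = parseAnd() || left;
        }
        return left;
    }

    private boolean parseAnd() {
        boolean left = parseAtom();
        while (pos < expr.length() && expr.charAt(pos) == '&') {
            pos++;
            final boolean right = parseAtom();
            left = left && right;
        }
        return left;
    }

    private boolean parseAtom() {
        if (expr.charAt(pos) == '(') {
            pos++;                      // consume '('
            final boolean inner = parseOr();
            pos++;                      // consume ')'
            return inner;
        }
        final int start = pos;
        while (pos < expr.length() && Character.isLetterOrDigit(expr.charAt(pos))) {
            pos++;
        }
        return auths.contains(expr.substring(start, pos));
    }
}
```

A default expression on the cache service would be applied to every cell the service writes, so only users whose labels satisfy it can read the cached entries back.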





[jira] [Commented] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Ron Chittaro (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928608#comment-16928608
 ] 

Ron Chittaro commented on NIFI-6654:


Hi [~joewitt], no worries on the priority reset...understood. If you don't 
mind, can you elaborate on the PR process? I am new here ;->



[GitHub] [nifi] scottyaslan commented on issue #3718: NIFI-6630 - Add a goto action in the property table for properties th…

2019-09-12 Thread GitBox
scottyaslan commented on issue #3718: NIFI-6630 - Add a goto action in the 
property table for properties th…
URL: https://github.com/apache/nifi/pull/3718#issuecomment-530863278
 
 
   Reviewing...




[jira] [Created] (NIFI-6655) Investigate maven-failsafe-plugin version for Java 11 compatibility

2019-09-12 Thread Jeff Storck (Jira)
Jeff Storck created NIFI-6655:
-

 Summary: Investigate maven-failsafe-plugin version for Java 11 
compatibility
 Key: NIFI-6655
 URL: https://issues.apache.org/jira/browse/NIFI-6655
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Tools and Build
Reporter: Jeff Storck


The maven-surefire-plugin was upgraded to 2.22.0, most likely 
maven-failsafe-plugin should be updated to that version as well.
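A sketch of what that alignment could look like in the root pom's plugin management (illustrative only; the actual NiFi build may manage plugin versions differently):

```xml
<build>
  <pluginManagement>
    <plugins>
      <!-- Align failsafe with the surefire version already in use (2.22.0),
           which contains the JDK 11 compatibility fixes. -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <version>2.22.0</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
```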





[jira] [Updated] (NIFI-6655) Investigate maven-failsafe-plugin version for Java 11 compatibility

2019-09-12 Thread Jeff Storck (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-6655:
--
Labels: Java11  (was: )

> Investigate maven-failsafe-plugin version for Java 11 compatibility
> ---
>
> Key: NIFI-6655
> URL: https://issues.apache.org/jira/browse/NIFI-6655
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Tools and Build
>Reporter: Jeff Storck
>Priority: Major
>  Labels: Java11
>
> The maven-surefire-plugin was upgraded to 2.22.0, most likely 
> maven-failsafe-plugin should be updated to that version as well.





[jira] [Commented] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Joseph Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928601#comment-16928601
 ] 

Joseph Witt commented on NIFI-6654:
---

[~rchittaro] thanks for reporting this with such good detail.  This sort of 
issue would not be a blocker for the community producing a NiFi release, so I've 
reset the priority to a typical level.  As for the fix version, that can be set 
once there is a PR to resolve it and review cycles are nearing or completing a 
merge.  We generally avoid setting the fix version unless there are already 
commits landed for some feature that we've learned is incomplete, or something 
that would rise to the level of a blocker (like a core issue in the framework).

Will you be submitting a PR on this?

> Stack overflow with self referencing avro schema
> 
>
> Key: NIFI-6654
> URL: https://issues.apache.org/jira/browse/NIFI-6654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Ron Chittaro
>Priority: Major
>
> We (and our customers, thus why I marked this a blocker as we cannot proceed 
> with 1.9.2 and in some cases product depolyment at all) have run into a stack 
> overflow exception with the 'UpdateRecord' processor when upgrading from Nifi 
> 1.7 to 1.9.2. I did some digging and it happens when:
>  * A schema comparison operation is performed, AND
>  * the schema in question is self referential
> I was a bit confused initially as I found that a unit test case for this 
> already existed and was passing when I checked out and built 1.9.2. The test 
> case is:
> {quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
> TestSimpleRecordSchema.java located in: 
> nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
> {quote}
> However, if you dig a bit into this test case it is passing because the two 
> schemas being compared contain the same memory references for the fields 
> within them, thus comparisons do not exercise completely the underlying 
> equals() operators of the objects contained in the schema. See bold below:
> {quote}final SimpleRecordSchema schema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> final List personFields = new ArrayList<>();
> personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
> personFields.add(new RecordField("sibling", 
> RecordFieldType.RECORD.getRecordDataType(schema)));
> *schema.setFields(personFields);*
> schema.hashCode();
> assertTrue(schema.equals(schema));
> final SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> *secondSchema.setFields(personFields);*
> assertTrue(schema.equals(secondSchema));
> assertTrue(secondSchema.equals(schema));
> {quote}
>  
> To reproduce the stack overflow I wrote the following test case, which I 
> believe behaves more like the 'UpdateRecord' processor (or anywhere else a 
> schema comparison happens), where the schema object and all objects within it 
> are completely different memory references.
>  
> {quote}@Test
> public void testSchemaCompareSelfRef() {
>  // Set up first schema
>  final SimpleRecordSchema personSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(personSchema)));
>  personSchema.setFields(personFields);
>  // Set up second schema. Must be completely separate set of objects 
> otherwise overloaded comparison
>  // operators (particularly that from the SimpleRecordSchema class) will not 
> be called.
>  SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  personFields.clear();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(secondSchema)));
>  secondSchema.setFields(personFields);
>  assertTrue(personSchema.equals(secondSchema));
> }
> {quote}
>  
>  
>  
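One possible way to make equality terminate for a self-referential schema (a sketch with hypothetical class names, not the actual NiFi fix) is to track which pair of objects is already being compared on the current thread, and short-circuit when the same pair is seen again:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a self-referential record schema. equals() breaks the
// recursion by detecting that the same pair of schemas is already being
// compared on this thread.
final class MiniSchema {
    private final List<Object[]> fields = new ArrayList<>(); // {name, child schema or null}
    private static final ThreadLocal<java.util.Set<String>> IN_PROGRESS =
            ThreadLocal.withInitial(java.util.HashSet::new);

    void addField(String name, MiniSchema child) {
        fields.add(new Object[]{name, child});
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == this) return true;
        if (!(obj instanceof MiniSchema)) return false;
        final MiniSchema other = (MiniSchema) obj;
        // Key the in-progress set by the identity of the pair being compared.
        final String key = System.identityHashCode(this) + ":" + System.identityHashCode(other);
        if (!IN_PROGRESS.get().add(key)) {
            return true; // already comparing this pair; assume equal to break the cycle
        }
        try {
            if (fields.size() != other.fields.size()) return false;
            for (int i = 0; i < fields.size(); i++) {
                Object[] a = fields.get(i), b = other.fields.get(i);
                if (!a[0].equals(b[0])) return false;
                MiniSchema ca = (MiniSchema) a[1], cb = (MiniSchema) b[1];
                if (ca == null ? cb != null : !ca.equals(cb)) return false;
            }
            return true;
        } finally {
            IN_PROGRESS.get().remove(key);
        }
    }

    @Override
    public int hashCode() {
        return fields.size(); // deliberately shallow to avoid the same recursion
    }
}

public class SelfRefEqualsDemo {
    public static void main(String[] args) {
        MiniSchema first = new MiniSchema();
        first.addField("name", null);
        first.addField("PersonalData", first);   // self reference

        MiniSchema second = new MiniSchema();
        second.addField("name", null);
        second.addField("PersonalData", second); // separate objects, same shape

        // Without the cycle guard this comparison would recurse forever.
        System.out.println(first.equals(second)); // prints "true"
    }
}
```

With the guard in place, the reporter's test (two structurally identical schemas built from completely separate objects) terminates and reports equality.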



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Joseph Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-6654:
--
Fix Version/s: (was: 1.10.0)

> Stack overflow with self referencing avro schema
> 
>
> Key: NIFI-6654
> URL: https://issues.apache.org/jira/browse/NIFI-6654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Ron Chittaro
>Priority: Major
>
> We (and our customers, hence why I marked this a blocker: we cannot proceed 
> with 1.9.2, and in some cases with product deployment at all) have run into a 
> stack overflow exception with the 'UpdateRecord' processor when upgrading from 
> NiFi 1.7 to 1.9.2. I did some digging and it happens when:
>  * A schema comparison operation is performed, AND
>  * the schema in question is self referential
> I was a bit confused initially as I found that a unit test case for this 
> already existed and was passing when I checked out and built 1.9.2. The test 
> case is:
> {quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
> TestSimpleRecordSchema.java located in: 
> nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
> {quote}
> However, if you dig a bit into this test case, it passes because the two 
> schemas being compared contain the same memory references for the fields 
> within them, so the comparisons do not completely exercise the underlying 
> equals() operators of the objects contained in the schema. See bold below:
> {quote}final SimpleRecordSchema schema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> final List<RecordField> personFields = new ArrayList<>();
> personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
> personFields.add(new RecordField("sibling", 
> RecordFieldType.RECORD.getRecordDataType(schema)));
> *schema.setFields(personFields);*
> schema.hashCode();
> assertTrue(schema.equals(schema));
> final SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> *secondSchema.setFields(personFields);*
> assertTrue(schema.equals(secondSchema));
> assertTrue(secondSchema.equals(schema));
> {quote}
>  
> To reproduce the stack overflow I wrote the following test case, which I 
> believe behaves more like the 'UpdateRecord' processor (or anywhere else a 
> schema comparison happens), where the schema object and all objects within it 
> are completely different memory references.
>  
> {quote}@Test
> public void testSchemaCompareSelfRef() {
>  // Set up first schema
>  final SimpleRecordSchema personSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(personSchema)));
>  personSchema.setFields(personFields);
>  // Set up second schema. Must be completely separate set of objects 
> otherwise overloaded comparison
>  // operators (particularly that from the SimpleRecordSchema class) will not 
> be called.
>  SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  personFields.clear();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(secondSchema)));
>  secondSchema.setFields(personFields);
>  assertTrue(personSchema.equals(secondSchema));
> }
> {quote}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Ron Chittaro (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928600#comment-16928600
 ] 

Ron Chittaro commented on NIFI-6654:


Meant to add: the problem is down in RecordDataType.equals(), where ultimately 
two schemas are compared; when they are self-referential this is endless 
recursion:

 
{quote}@Override
public boolean equals(final Object obj) {
 if (obj == this) {
 return true;
 }
 if (obj == null) {
 return false;
 }
 if (!(obj instanceof RecordDataType)) {
 return false;
 }

 final RecordDataType other = (RecordDataType) obj;
 *return Objects.equals(childSchema, other.childSchema);*
}
{quote}
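The unbounded recursion can be demonstrated in isolation (a toy model with hypothetical names, not the actual NiFi classes): the schema's equals() delegates to its child field's equals(), which delegates back to the schema, and so on until the stack is exhausted.

```java
// Toy model of the recursion: SchemaLike.equals() compares its child
// FieldLike, whose equals() compares the child schema again, and so on.
final class FieldLike {
    SchemaLike childSchema;
    @Override public boolean equals(Object o) {
        return o instanceof FieldLike
                && java.util.Objects.equals(childSchema, ((FieldLike) o).childSchema);
    }
    @Override public int hashCode() { return 0; }
}

final class SchemaLike {
    FieldLike field = new FieldLike();
    @Override public boolean equals(Object o) {
        return o instanceof SchemaLike && field.equals(((SchemaLike) o).field);
    }
    @Override public int hashCode() { return 0; }
}

public class RecursionDemo {
    public static void main(String[] args) {
        SchemaLike a = new SchemaLike();
        a.field.childSchema = a; // self reference, like "PersonalData" above
        SchemaLike b = new SchemaLike();
        b.field.childSchema = b;
        try {
            a.equals(b); // a.equals(b) -> field.equals -> a.equals(b) -> ...
            System.out.println("no overflow");
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError"); // what the processor hits
        }
    }
}
```

The identity check (`obj == this`) never fires because the two schemas are distinct objects, which is exactly why the existing unit test (sharing field references) did not catch the bug.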
 

> Stack overflow with self referencing avro schema
> 
>
> Key: NIFI-6654
> URL: https://issues.apache.org/jira/browse/NIFI-6654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Ron Chittaro
>Priority: Major
> Fix For: 1.10.0
>
>
> We (and our customers, hence why I marked this a blocker: we cannot proceed 
> with 1.9.2, and in some cases with product deployment at all) have run into a 
> stack overflow exception with the 'UpdateRecord' processor when upgrading from 
> NiFi 1.7 to 1.9.2. I did some digging and it happens when:
>  * A schema comparison operation is performed, AND
>  * the schema in question is self referential
> I was a bit confused initially as I found that a unit test case for this 
> already existed and was passing when I checked out and built 1.9.2. The test 
> case is:
> {quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
> TestSimpleRecordSchema.java located in: 
> nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
> {quote}
> However, if you dig a bit into this test case, it passes because the two 
> schemas being compared contain the same memory references for the fields 
> within them, so the comparisons do not completely exercise the underlying 
> equals() operators of the objects contained in the schema. See bold below:
> {quote}final SimpleRecordSchema schema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> final List<RecordField> personFields = new ArrayList<>();
> personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
> personFields.add(new RecordField("sibling", 
> RecordFieldType.RECORD.getRecordDataType(schema)));
> *schema.setFields(personFields);*
> schema.hashCode();
> assertTrue(schema.equals(schema));
> final SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> *secondSchema.setFields(personFields);*
> assertTrue(schema.equals(secondSchema));
> assertTrue(secondSchema.equals(schema));
> {quote}
>  
> To reproduce the stack overflow I wrote the following test case, which I 
> believe behaves more like the 'UpdateRecord' processor (or anywhere else a 
> schema comparison happens), where the schema object and all objects within it 
> are completely different memory references.
>  
> {quote}@Test
> public void testSchemaCompareSelfRef() {
>  // Set up first schema
>  final SimpleRecordSchema personSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(personSchema)));
>  personSchema.setFields(personFields);
>  // Set up second schema. Must be completely separate set of objects 
> otherwise overloaded comparison
>  // operators (particularly that from the SimpleRecordSchema class) will not 
> be called.
>  SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  personFields.clear();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(secondSchema)));
>  secondSchema.setFields(personFields);
>  assertTrue(personSchema.equals(secondSchema));
> }
> {quote}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Joseph Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-6654:
--
Priority: Major  (was: Blocker)

> Stack overflow with self referencing avro schema
> 
>
> Key: NIFI-6654
> URL: https://issues.apache.org/jira/browse/NIFI-6654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Ron Chittaro
>Priority: Major
> Fix For: 1.10.0
>
>
> We (and our customers, hence why I marked this a blocker: we cannot proceed 
> with 1.9.2, and in some cases with product deployment at all) have run into a 
> stack overflow exception with the 'UpdateRecord' processor when upgrading from 
> NiFi 1.7 to 1.9.2. I did some digging and it happens when:
>  * A schema comparison operation is performed, AND
>  * the schema in question is self referential
> I was a bit confused initially as I found that a unit test case for this 
> already existed and was passing when I checked out and built 1.9.2. The test 
> case is:
> {quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
> TestSimpleRecordSchema.java located in: 
> nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
> {quote}
> However, if you dig a bit into this test case, it passes because the two 
> schemas being compared contain the same memory references for the fields 
> within them, so the comparisons do not completely exercise the underlying 
> equals() operators of the objects contained in the schema. See bold below:
> {quote}final SimpleRecordSchema schema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> final List<RecordField> personFields = new ArrayList<>();
> personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
> personFields.add(new RecordField("sibling", 
> RecordFieldType.RECORD.getRecordDataType(schema)));
> *schema.setFields(personFields);*
> schema.hashCode();
> assertTrue(schema.equals(schema));
> final SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
> *secondSchema.setFields(personFields);*
> assertTrue(schema.equals(secondSchema));
> assertTrue(secondSchema.equals(schema));
> {quote}
>  
> To reproduce the stack overflow I wrote the following test case, which I 
> believe behaves more like the 'UpdateRecord' processor (or anywhere else a 
> schema comparison happens), where the schema object and all objects within it 
> are completely different memory references.
>  
> {quote}@Test
> public void testSchemaCompareSelfRef() {
>  // Set up first schema
>  final SimpleRecordSchema personSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  final List<RecordField> personFields = new ArrayList<>();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(personSchema)));
>  personSchema.setFields(personFields);
>  // Set up second schema. Must be completely separate set of objects 
> otherwise overloaded comparison
>  // operators (particularly that from the SimpleRecordSchema class) will not 
> be called.
>  SimpleRecordSchema secondSchema = new 
> SimpleRecordSchema(SchemaIdentifier.EMPTY);
>  personFields.clear();
>  personFields.add(new RecordField("name", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("address", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("SIN", 
> RecordFieldType.STRING.getDataType()));
>  personFields.add(new RecordField("PersonalData", 
> RecordFieldType.RECORD.getRecordDataType(secondSchema)));
>  secondSchema.setFields(personFields);
>  assertTrue(personSchema.equals(secondSchema));
> }
> {quote}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (NIFI-6654) Stack overflow with self referencing avro schema

2019-09-12 Thread Ron Chittaro (Jira)
Ron Chittaro created NIFI-6654:
--

 Summary: Stack overflow with self referencing avro schema
 Key: NIFI-6654
 URL: https://issues.apache.org/jira/browse/NIFI-6654
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.9.2
Reporter: Ron Chittaro
 Fix For: 1.10.0


We (and our customers, hence why I marked this a blocker: we cannot proceed 
with 1.9.2, and in some cases with product deployment at all) have run into a 
stack overflow exception with the 'UpdateRecord' processor when upgrading from 
NiFi 1.7 to 1.9.2. I did some digging and it happens when:
 * A schema comparison operation is performed, AND
 * the schema in question is self referential

I was a bit confused initially as I found that a unit test case for this 
already existed and was passing when I checked out and built 1.9.2. The test 
case is:
{quote}testHashCodeAndEqualsWithSelfReferencingSchema() from 
TestSimpleRecordSchema.java located in: 
nifi\nifi-commons\nifi-record\src\test\java\org\apache\nifi\serialization
{quote}
However, if you dig a bit into this test case, it passes because the two 
schemas being compared contain the same memory references for the fields within 
them, so the comparisons do not completely exercise the underlying equals() 
operators of the objects contained in the schema. See bold below:
{quote}final SimpleRecordSchema schema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);

final List<RecordField> personFields = new ArrayList<>();
personFields.add(new RecordField("name", RecordFieldType.STRING.getDataType()));
personFields.add(new RecordField("sibling", 
RecordFieldType.RECORD.getRecordDataType(schema)));

*schema.setFields(personFields);*

schema.hashCode();
assertTrue(schema.equals(schema));

final SimpleRecordSchema secondSchema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
*secondSchema.setFields(personFields);*
assertTrue(schema.equals(secondSchema));
assertTrue(secondSchema.equals(schema));
{quote}
 

To reproduce the stack overflow I wrote the following test case, which I 
believe behaves more like the 'UpdateRecord' processor (or anywhere else a 
schema comparison happens), where the schema object and all objects within it 
are completely different memory references.

 
{quote}@Test
public void testSchemaCompareSelfRef() {

 // Set up first schema
 final SimpleRecordSchema personSchema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
 final List<RecordField> personFields = new ArrayList<>();
 personFields.add(new RecordField("name", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("address", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("SIN", RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("PersonalData", 
RecordFieldType.RECORD.getRecordDataType(personSchema)));
 personSchema.setFields(personFields);

 // Set up second schema. Must be completely separate set of objects otherwise 
overloaded comparison
 // operators (particularly that from the SimpleRecordSchema class) will not be 
called.
 SimpleRecordSchema secondSchema = new 
SimpleRecordSchema(SchemaIdentifier.EMPTY);
 personFields.clear();
 personFields.add(new RecordField("name", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("address", 
RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("SIN", RecordFieldType.STRING.getDataType()));
 personFields.add(new RecordField("PersonalData", 
RecordFieldType.RECORD.getRecordDataType(secondSchema)));

 secondSchema.setFields(personFields);

 assertTrue(personSchema.equals(secondSchema));
}
{quote}
 

 

 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (NIFI-6619) RouteOnAttribute: Create new Routing Strategy to route only first rule that is true

2019-09-12 Thread Joseph Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928589#comment-16928589
 ] 

Joseph Witt commented on NIFI-6619:
---

The problem with this, I think, is that there is no ordering implied by 
component/processor properties. We could perhaps have the processor treat route 
property names as sorted alphanumerically, but that doesn't mean the user 
experience/understanding would reflect that.
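To illustrate the ordering concern, a first-match strategy would have to impose some deterministic order on the route properties, e.g. sorting names alphanumerically. A minimal sketch (hypothetical names; NiFi's actual property model is not shown) makes the pitfall visible: a route added later can still win because it sorts first.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Predicate;

// Sketch of "route to first matched property": properties carry no inherent
// order, so route names are sorted alphanumerically before evaluation.
public class FirstMatchRouter {
    public static String routeFirstMatch(Map<String, Predicate<Map<String, String>>> routes,
                                         Map<String, String> attributes) {
        // TreeMap gives the alphanumeric ordering the processor would have to document.
        for (Map.Entry<String, Predicate<Map<String, String>>> e : new TreeMap<>(routes).entrySet()) {
            if (e.getValue().test(attributes)) {
                return e.getKey(); // first rule that evaluates true wins
            }
        }
        return "unmatched";
    }

    public static void main(String[] args) {
        Map<String, Predicate<Map<String, String>>> routes = new LinkedHashMap<>();
        routes.put("b-large", attrs -> Integer.parseInt(attrs.get("size")) > 100);
        routes.put("a-any", attrs -> true); // sorts first, so it shadows b-large
        System.out.println(routeFirstMatch(routes, Map.of("size", "500"))); // prints "a-any"
    }
}
```

Exactly one route name is returned per FlowFile, which is the requested behavior, but the winner is decided by name ordering rather than by the order the user added the properties.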

> RouteOnAttribute: Create new Routing Strategy to route only first rule that 
> is true
> ---
>
> Key: NIFI-6619
> URL: https://issues.apache.org/jira/browse/NIFI-6619
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.9.2
>Reporter: Raymond
>Priority: Major
> Attachments: image-2019-09-12-22-15-51-027.png
>
>
> Currently RouteOnAttribute has the strategy "Route to Property name". The 
> behavior is that for each rule that evaluates to true, a message (a clone of 
> the FlowFile) is sent to the next step.
> I would like to have another strategy: "Route to first matched Property 
> name", "Route to Property name by first match", or "Route to first Property 
> name which evaluates true".
> This would ensure that the next step gets exactly one message.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (NIFI-6619) RouteOnAttribute: Create new Routing Strategy to route only first rule that is true

2019-09-12 Thread HondaWei (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928587#comment-16928587
 ] 

HondaWei commented on NIFI-6619:


It's interesting. I would like to add a strategy named RouteByOrder. It will 
evaluate the properties in order and route the message to the first match. If 
you have any suggestions, please let me know.

!image-2019-09-12-22-15-51-027.png!

> RouteOnAttribute: Create new Routing Strategy to route only first rule that 
> is true
> ---
>
> Key: NIFI-6619
> URL: https://issues.apache.org/jira/browse/NIFI-6619
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.9.2
>Reporter: Raymond
>Priority: Major
> Attachments: image-2019-09-12-22-15-51-027.png
>
>
> Currently RouteOnAttribute has the strategy "Route to Property name". The 
> behavior is that for each rule that evaluates to true, a message (a clone of 
> the FlowFile) is sent to the next step.
> I would like to have another strategy: "Route to first matched Property 
> name", "Route to Property name by first match", or "Route to first Property 
> name which evaluates true".
> This would ensure that the next step gets exactly one message.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (NIFI-6619) RouteOnAttribute: Create new Routing Strategy to route only first rule that is true

2019-09-12 Thread HondaWei (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HondaWei updated NIFI-6619:
---
Attachment: image-2019-09-12-22-15-51-027.png

> RouteOnAttribute: Create new Routing Strategy to route only first rule that 
> is true
> ---
>
> Key: NIFI-6619
> URL: https://issues.apache.org/jira/browse/NIFI-6619
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.9.2
>Reporter: Raymond
>Priority: Major
> Attachments: image-2019-09-12-22-15-51-027.png
>
>
> Currently RouteOnAttribute has the strategy "Route to Property name". The 
> behavior is that for each rule that evaluates to true, a message (a clone of 
> the FlowFile) is sent to the next step.
> I would like to have another strategy: "Route to first matched Property 
> name", "Route to Property name by first match", or "Route to first Property 
> name which evaluates true".
> This would ensure that the next step gets exactly one message.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [nifi] mcgilman commented on issue #3721: NIFI-6634 - Indicate variables are no longer recommended and favor parameters

2019-09-12 Thread GitBox
mcgilman commented on issue #3721: NIFI-6634 - Indicate variables are no longer 
recommended and favor parameters
URL: https://github.com/apache/nifi/pull/3721#issuecomment-530840561
 
 
   Will review...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] adarmiento commented on a change in pull request #3700: NIFI-6638: Empty multiple queues at once at different flow levels

2019-09-12 Thread GitBox
adarmiento commented on a change in pull request #3700: NIFI-6638: Empty 
multiple queues at once at different flow levels
URL: https://github.com/apache/nifi/pull/3700#discussion_r323741173
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-actions.js
 ##
 @@ -1086,183 +1497,173 @@
 },
 
 /**
- * Deletes the flow files in the specified connection.
+ * Deletes the flow files inside the selected connections.
  *
  * @param {type} selection
  */
-emptyQueue: function (selection) {
-if (selection.size() !== 1 || 
!nfCanvasUtils.isConnection(selection)) {
-return;
-}
+emptySelectedQueues: function (selection) {
+var connections = selection.filter(function (d) {
+var selectionItem = d3.select(this);
+return nfCanvasUtils.isConnection(selectionItem);
+});
+
+var actionName = selection.size() > 1 ? 'Empty Selected Queues' : 
'Empty Selected Queue';
+
+var dialogContent = selection.size() > 1
+? 'Are you sure you want to empty the selected queues? All 
FlowFiles waiting at the time of the request will be removed.'
+: 'Are you sure you want to empty the selected queue? All 
FlowFiles waiting at the time of the request will be removed.';
 
 // prompt the user before emptying the queue
 nfDialog.showYesNoDialog({
-headerText: 'Empty Queue',
-dialogContent: 'Are you sure you want to empty this queue? All 
FlowFiles waiting at the time of the request will be removed.',
+headerText: actionName,
+dialogContent: dialogContent,
 noText: 'Cancel',
 yesText: 'Empty',
 yesHandler: function () {
-// get the connection data
-var connection = selection.datum();
-
-var MAX_DELAY = 4;
-var cancelled = false;
-var dropRequest = null;
-var dropRequestTimer = null;
-
-// updates the progress bar
-var updateProgress = function (percentComplete) {
-// remove existing labels
-var progressBar = $('#drop-request-percent-complete');
-progressBar.find('div.progress-label').remove();
-progressBar.find('md-progress-linear').remove();
-
-// update the progress bar
-var label = $('').text(percentComplete + '%');
-
(nfNgBridge.injector.get('$compile')($(''))(nfNgBridge.rootScope)).appendTo(progressBar);
-progressBar.append(label);
-};
+emptyQueues(actionName,connections.data(),undefined);
+}
+});
+},
 
-// update the button model of the drop request status 
dialog
-$('#drop-request-status-dialog').modal('setButtonModel', [{
-buttonText: 'Stop',
-color: {
-base: '#728E9B',
-hover: '#004849',
-text: '#ff'
-},
-handler: {
-click: function () {
-cancelled = true;
-
-// we are waiting for the next poll attempt
-if (dropRequestTimer !== null) {
-// cancel it
-clearTimeout(dropRequestTimer);
-
-// cancel the drop request
-completeDropRequest();
-}
-}
-}
-}]);
+/**
+ * Empty all the queues inside the selected process groups.
+ *
+ * @param {type} selection
+ */
+emptyProcessGroupsQueues: function (selection) {
+var selectionSize = selection.size();
+var connections = [];
+var errors = [];
+var actionName = '';
+var dialogContent = '';
+
+if(selectionSize === 0) {
+actionName = 'Empty Current Process Group Queues';
+dialogContent = 'Are you sure you want to empty all queues 
inside the current process group? All FlowFiles waiting at the time of the 
request will be removed.';
+connections = d3.selectAll('g.connection').data();
+
+if(connections.length === 0) {
+// display the "no queues to empty" dialog

[GitHub] [nifi-minifi] kevdoran commented on a change in pull request #168: MINIFI-512 Change bootstrap port command handling

2019-09-12 Thread GitBox
kevdoran commented on a change in pull request #168: MINIFI-512 Change 
bootstrap port command handling
URL: https://github.com/apache/nifi-minifi/pull/168#discussion_r323709034
 
 

 ##
 File path: 
minifi-bootstrap/src/main/java/org/apache/nifi/minifi/bootstrap/RunMiNiFi.java
 ##
 @@ -1562,6 +1564,12 @@ void setAutoRestartNiFi(final boolean restart) {
 }
 
 void setMiNiFiCommandControlPort(final int port, final String secretKey) 
throws IOException {
+
+if (this.secretKey != null && this.ccPort != UNINITIALIZED_CC_PORT) {
 
 Review comment:
  Thanks for taking a look @jzonthemtn. These two values always have to be set 
together. I made it `&&` so that if the bootstrap process ever got into a bad 
state where it was only partially initialized (I do not see this possibility in 
the current code, but just in case), we would probably still want to allow 
processing this command again to retry getting it fully initialized.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] natural commented on a change in pull request #3594: NIFI-3833 Support for Encrypted Flow File Repositories

2019-09-12 Thread GitBox
natural commented on a change in pull request #3594: NIFI-3833 Support for 
Encrypted Flow File Repositories
URL: https://github.com/apache/nifi/pull/3594#discussion_r323583075
 
 

 ##
 File path: 
nifi-commons/nifi-write-ahead-log/src/main/java/org/wali/MinimalLockingWriteAheadLog.java
 ##
 @@ -360,8 +369,13 @@ private Long recoverFromSnapshot(final Map 
recordMap) throws IOExcept
 
 // at this point, we know the snapshotPath exists because if it 
didn't, then we either returned null
 // or we renamed partialPath to snapshotPath. So just Recover from 
snapshotPath.
-try (final DataInputStream dataIn = new DataInputStream(new 
BufferedInputStream(Files.newInputStream(snapshotPath, 
StandardOpenOption.READ {
-final String waliImplementationClass = dataIn.readUTF();
+try (final DataInputStream dataIn = new DataInputStream(new 
BufferedInputStream(SimpleCipherInputStream.wrapWithKey(Files.newInputStream(snapshotPath,
 StandardOpenOption.READ), cipherKey {
+String waliImplementationClass;
+try {
+waliImplementationClass = dataIn.readUTF();
+} catch (final java.io.UTFDataFormatException e) {
+throw new IOException("malformed input");
 
 Review comment:
   This is really a great catch, thank you for seeing this.  I've updated the 
message and included `snapshotPath` and the original exception.
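The failure mode under discussion can be reproduced in isolation: `DataInputStream.readUTF()` throws `UTFDataFormatException` on bytes that are not valid modified UTF-8 (e.g. a snapshot decrypted with the wrong key), and wrapping it lets the rethrown IOException carry both the source path and the root cause. A sketch with a hypothetical helper name, not the PR's actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.UTFDataFormatException;

public class MalformedUtfDemo {
    // Re-throws readUTF() failures as an IOException that names the source
    // and keeps the original exception as the cause (hypothetical helper).
    static String readHeader(DataInputStream in, String sourceName) throws IOException {
        try {
            return in.readUTF();
        } catch (UTFDataFormatException e) {
            throw new IOException("Malformed header in " + sourceName, e);
        }
    }

    public static void main(String[] args) {
        // Two-byte length prefix (2), then an invalid modified-UTF-8 sequence:
        // 0xC0 must be followed by a 10xxxxxx continuation byte, but 0xC0 is not.
        byte[] malformed = {0x00, 0x02, (byte) 0xC0, (byte) 0xC0};
        try {
            readHeader(new DataInputStream(new ByteArrayInputStream(malformed)), "snapshot.partial");
        } catch (IOException e) {
            System.out.println(e.getMessage()
                    + " (cause: " + e.getCause().getClass().getSimpleName() + ")");
        }
    }
}
```

Propagating the original exception as the cause preserves the byte offset that `readUTF()` reports, which is useful when diagnosing a corrupt or wrongly keyed snapshot.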


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] natural commented on a change in pull request #3594: NIFI-3833 Support for Encrypted Flow File Repositories

2019-09-12 Thread GitBox
natural commented on a change in pull request #3594: NIFI-3833 Support for 
Encrypted Flow File Repositories
URL: https://github.com/apache/nifi/pull/3594#discussion_r323578236
 
 

 ##
 File path: 
nifi-commons/nifi-write-ahead-log/src/main/java/org/apache/nifi/wali/HashMapSnapshot.java
 ##
 @@ -264,10 +267,11 @@ public synchronized void writeSnapshot(final 
SnapshotCapture snapshot) throws
 }
 
 // Write to the partial file.
-try (final FileOutputStream fileOut = new 
FileOutputStream(getPartialFile());
 
 Review comment:
   (in this specific case, however, moving the stream construction back into 
the try-with-resources declaration still causes test failures)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [nifi] natural commented on a change in pull request #3594: NIFI-3833 Support for Encrypted Flow File Repositories

2019-09-12 Thread GitBox
natural commented on a change in pull request #3594: NIFI-3833 Support for 
Encrypted Flow File Repositories
URL: https://github.com/apache/nifi/pull/3594#discussion_r323575686
 
 

 ##
 File path: nifi-docs/src/main/asciidoc/administration-guide.adoc
 ##
 @@ -2477,6 +2477,13 @@ implementation.
 |`nifi.flowfile.repository.always.sync`|If set to `true`, any change to the 
repository will be synchronized to the disk, meaning that NiFi will ask the 
operating system not to cache the information. This is very expensive and can 
significantly reduce NiFi performance. However, if it is `false`, there could 
be the potential for data loss if either there is a sudden power loss or the 
operating system crashes. The default value is `false`.
 |
 
+ Encryption
+
+The FlowFile repository can be configured to encrypt all files as they are 
written to disk.  To enable this encryption,
+set the `nifi.flowfile.repository.always.key.1` property to a 16 or 32 byte 
value like this:
 
 Review comment:
   Great point!
   
   Still working on the key management + documentation bits.  Will update when 
I've got more.
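
   For illustration only, a hypothetical `nifi.properties` fragment matching the property name quoted verbatim in the diff above (the key-management scheme and final property name were still under discussion at this point):

```properties
# Hypothetical example: a 32-byte (256-bit) key, hex-encoded.
# The property name is copied from the diff and may change before release.
nifi.flowfile.repository.always.key.1=0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```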




[GitHub] [nifi] natural commented on a change in pull request #3594: NIFI-3833 Support for Encrypted Flow File Repositories

2019-09-12 Thread GitBox
natural commented on a change in pull request #3594: NIFI-3833 Support for 
Encrypted Flow File Repositories
URL: https://github.com/apache/nifi/pull/3594#discussion_r323575365
 
 

 ##
 File path: 
nifi-commons/nifi-write-ahead-log/src/main/java/org/apache/nifi/wali/SimpleCipherOutputStream.java
 ##
 @@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.wali;
+
+import org.bouncycastle.crypto.io.CipherOutputStream;
+import org.bouncycastle.crypto.modes.AEADBlockCipher;
+
+import javax.crypto.SecretKey;
+import java.io.IOException;
+import java.io.OutputStream;
+
+/**
+ * This class extends {@link CipherOutputStream} with a static factory method 
for constructing
+ * an output stream with an AEAD block cipher.
+ *
+ * Note that the {@link CipherOutputStream} implementation writes the MAC at 
the end of the stream during `close`.
+ * If streams using this class aren't closed properly, the result may be a 
stream without a MAC written, which
+ * causes a MAC authentication failure in the input stream.
+ *
+ */
+public class SimpleCipherOutputStream extends CipherOutputStream {
+/**
+ * Constructs an {@link OutputStream} from an existing {@link 
OutputStream} and block cipher.
+ *
+ * @param out output stream to wrap.
+ * @param cipher block cipher, initialized for encryption.
+ */
+public SimpleCipherOutputStream(OutputStream out, AEADBlockCipher cipher) {
+super(out, cipher);
+}
+
+/**
+ * Static factory for wrapping an output stream with a block cipher.
+ *
+ * NB:  this function eagerly writes the initial cipher values to the 
plain output stream before returning the cipher stream.
+ *
+ * @param out output stream to wrap.
+ * @param key cipher key.
+ * @return wrapped output stream.
+ * @throws IOException if the stream cannot be written eagerly, or if the 
cipher cannot be initialized.
+ */
+public static OutputStream wrapWithKey(OutputStream out, SecretKey key) 
throws IOException {
+if (key == null) {
+return out;
 
 Review comment:
   We're checking for null keys at the call site now, so no more implicit 
wrapping.




[GitHub] [nifi] natural commented on a change in pull request #3594: NIFI-3833 Support for Encrypted Flow File Repositories

2019-09-12 Thread GitBox
natural commented on a change in pull request #3594: NIFI-3833 Support for 
Encrypted Flow File Repositories
URL: https://github.com/apache/nifi/pull/3594#discussion_r323574842
 
 

 ##
 File path: 
nifi-commons/nifi-write-ahead-log/src/main/java/org/apache/nifi/wali/SimpleCipherInputStream.java
 ##
 @@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.wali;
+
+import org.bouncycastle.crypto.io.CipherInputStream;
+import org.bouncycastle.crypto.modes.AEADBlockCipher;
+
+import javax.crypto.SecretKey;
+import java.io.IOException;
+import java.io.InputStream;
+
+/**
+ * This class extends {@link CipherInputStream} with a static factory method 
for constructing
+ * an input stream with an AEAD block cipher.
+ */
+public class SimpleCipherInputStream extends CipherInputStream {
+protected AEADBlockCipher cipher;
+
+/**
+ * Constructs an {@link InputStream} from an existing {@link InputStream} 
and block cipher.
+ *
+ * @param in input stream to wrap.
+ * @param cipher block cipher, initialized for decryption.
+ */
+public SimpleCipherInputStream(InputStream in, AEADBlockCipher cipher) {
+super(in, cipher);
+this.cipher = cipher;
+}
+
+/**
+ * Static factory for wrapping an input stream with a block cipher.
+ *
+ * NB:  this function eagerly reads the initial cipher values from the 
plain input stream before returning the cipher stream.
+ *
+ * @param in input stream to wrap.
+ * @param key cipher key.
+ * @return wrapped input stream.
+ * @throws IOException if the stream cannot be read eagerly, or if the 
cipher cannot be initialized.
+ */
+public static InputStream wrapWithKey(InputStream in, SecretKey key) 
throws IOException {
+if (key == null) {
+return in;
+}
+
+if (in.markSupported()) {
+in.mark(0);
+}
+
+// Read the marker, the iv, and the aad in the same order as they're 
written in the SimpleCipherOutputStream:
+try {
+final int marker = in.read();
+if (marker != SimpleCipherUtil.MARKER_BYTE) {
+if (in.markSupported()) {
+in.reset();
+}
+return in;
+}
+
+byte[] iv = new byte[SimpleCipherUtil.IV_BYTE_LEN];
+int len = in.read(iv);
+if (len != iv.length) {
+throw new IOException("Could not read IV.");
+}
+
+byte[] aad = new byte[SimpleCipherUtil.AAD_BYTE_LEN];
+len = in.read(aad);
+if (len != aad.length) {
+throw new IOException("Could not read AAD.");
+}
+
+AEADBlockCipher cipher = SimpleCipherUtil.initCipher(key, false, 
iv, aad);
+return new SimpleCipherInputStream(in, cipher);
+
+} catch (final IOException ignored) {
 
 Review comment:
   Good catch, removed.
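
   The marker/IV/AAD framing that `wrapWithKey` reads back can be sketched with JDK-only AES/GCM. This is an illustration under assumptions: the marker value and AAD length here are hypothetical, and the real implementation uses BouncyCastle's `AEADBlockCipher` and the `SimpleCipherUtil` constants, which are not shown in this thread.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class AeadFraming {
    static final byte MARKER = 0x7f; // hypothetical marker value
    static final int IV_LEN = 12;    // standard GCM nonce length
    static final int AAD_LEN = 16;   // hypothetical AAD length

    static byte[] wrapWrite(byte[] plaintext, SecretKey key) throws Exception {
        java.security.SecureRandom rng = new java.security.SecureRandom();
        byte[] iv = new byte[IV_LEN];
        byte[] aad = new byte[AAD_LEN];
        rng.nextBytes(iv);
        rng.nextBytes(aad);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        cipher.updateAAD(aad);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Header: marker byte, then IV, then AAD, in the order the reader expects.
        out.write(MARKER);
        out.write(iv);
        out.write(aad);
        out.write(cipher.doFinal(plaintext)); // ciphertext followed by the GCM tag
        return out.toByteArray();
    }

    static byte[] unwrapRead(byte[] framed, SecretKey key) throws Exception {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(framed));
        if (in.read() != MARKER) {
            throw new IOException("Missing marker; stream is not encrypted.");
        }
        byte[] iv = new byte[IV_LEN];
        in.readFully(iv); // readFully throws on a short read, unlike read(byte[])
        byte[] aad = new byte[AAD_LEN];
        in.readFully(aad);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        cipher.updateAAD(aad);
        return cipher.doFinal(in.readAllBytes()); // throws if tag authentication fails
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey key = gen.generateKey();
        byte[] roundTrip = unwrapRead(wrapWrite("hello".getBytes(), key), key);
        System.out.println(new String(roundTrip));
    }
}
```

   Using `readFully` for the IV and AAD sidesteps the short-read concern noted above for `InputStream.read(byte[])`.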



