[GitHub] [nifi] markobean commented on pull request #5192: NIFI-8195: add default connection settings to process group configura…

2021-07-06 Thread GitBox


markobean commented on pull request #5192:
URL: https://github.com/apache/nifi/pull/5192#issuecomment-875188312


   @mattyb149 I force-pushed an update. (The force push was to overwrite the 
"wip" commit, which was no longer necessary.) I also discovered a related issue 
in StandardFlowSynchronizer.java which I believe requires a similar null check 
for these properties.
   I pushed quickly as I don't expect these changes to violate the 
contrib-check profile, but I'm running that now just to be sure.
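   For context, here is a minimal sketch of the null-guard pattern being 
described (the actual accessor names in StandardFlowSynchronizer may differ; 
this is an illustration, not the PR's code):
   
   ```java
   // Hypothetical sketch: apply a default-connection setting only when the
   // serialized flow carries a value; null means "keep the existing default".
   final String defaultFlowFileExpiration = proposed.getDefaultFlowFileExpiration();
   if (defaultFlowFileExpiration != null) {
       group.setDefaultFlowFileExpiration(defaultFlowFileExpiration);
   }
   ```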


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] emiliosetiadarma opened a new pull request #5202: NIFI-6325 Added AWS Sensitive Property Provider

2021-07-06 Thread GitBox


emiliosetiadarma opened a new pull request #5202:
URL: https://github.com/apache/nifi/pull/5202


   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
   #### Description of PR
   
   - Added AWS Sensitive Property Provider as well as Integration Test
   - Updated Toolkit Guide with PropertyProtectionScheme migration example
   - Updated the SensitivePropertyProvider interface to implement a close 
function, so that resources a Sensitive Property Provider might have opened can 
be released (a minimal sketch follows below).
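   
   A minimal sketch of those close semantics (the method names other than 
close are simplified assumptions, not the full interface):
   
   ```java
   // Sketch: a provider that opens a remote client (e.g. an AWS KMS client)
   // can now release it explicitly when the provider is no longer needed.
   public interface SensitivePropertyProvider {
       String protect(String unprotectedValue);
       String unprotect(String protectedValue);
   
       // Close resources (clients, connections) the provider may have opened.
       void close();
   }
   ```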
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markobean commented on pull request #5192: NIFI-8195: add default connection settings to process group configura…

2021-07-06 Thread GitBox


markobean commented on pull request #5192:
URL: https://github.com/apache/nifi/pull/5192#issuecomment-875130866


   Yeah, that WIP commit was simply a placeholder: I took 5 minutes to look at 
it from work and committed so I could continue after a fresh pull at home. Just 
reading your comment now. Will fix this up ASAP. Stay tuned.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8615) ExecuteScript with python when use module directory

2021-07-06 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376069#comment-17376069
 ] 

Matt Burgess commented on NIFI-8615:


There have been a number of improvements/fixes to Jython scripting between 
1.13.2 and the upcoming 1.14.0 release. When 1.14.0 is released, please try it; 
if the problem is fixed, we can close this as Overcome By Events.

> ExecuteScript with python when use module directory
> ---
>
> Key: NIFI-8615
> URL: https://issues.apache.org/jira/browse/NIFI-8615
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: debian linux
>Reporter: Aldric DUPONT
>Priority: Major
>  Labels: Debian
> Attachments: Capture2.PNG, Capture3.PNG, test_bug.xml
>
>
>  
> *I use the additional Python module "pytz"; its module directory is located in
> "/usr/local/lib/python3.9/site-packages".*
> *I use sample1 in "Script Body" (no comment):*
> from org.python.core.util.FileUtil import wrap
> from org.apache.nifi.processors.script import ExecuteScript
> from datetime import datetime, tzinfo
> flow_file = session.get()
> import pytz
> from pytz import timezone
> utc = pytz.utc
> eastern = timezone('US/Eastern')
> flow_file = session.putAttribute(flow_file, 'timezone', eastern.zone)
> flow_file = session.putAttribute(flow_file, 'utc', utc.zone)
> flow_file = session.putAttribute(flow_file, 'test', 'salut')
> session.transfer(flow_file, ExecuteScript.REL_SUCCESS)
>  
> *and sample2 with the pytz block commented out:*
> from org.python.core.util.FileUtil import wrap
> from org.apache.nifi.processors.script import ExecuteScript
> from datetime import datetime, tzinfo
> flow_file = session.get()
> """
> import pytz
> from pytz import timezone
> utc = pytz.utc
> eastern = timezone('US/Eastern')
> flow_file = session.putAttribute(flow_file, 'timezone', eastern.zone)
> flow_file = session.putAttribute(flow_file, 'utc', utc.zone)
> """
> flow_file = session.putAttribute(flow_file, 'test', 'salut')
> session.transfer(flow_file, ExecuteScript.REL_SUCCESS)
>  
> +*Try 1*+
> *When sample1 is used in version 1.13.2, ExecuteScript raises an error:*
> {color:#ff8b00}ERROR [Timer-Driven Process Thread-4] 
> o.a.nifi.processors.script.ExecuteScript 
> ExecuteScript[id=790ea9a2-0179-1000-a662-30042349b329] Failed to process 
> session due to org.apache.nifi.processor.exception.ProcessException: 
> javax.script.ScriptException: ImportError: No module named pytz in 

[GitHub] [nifi] jfrazee commented on a change in pull request #5136: NIFI-8668 ConsumeAzureEventHub NiFi processors need to support storag…

2021-07-06 Thread GitBox


jfrazee commented on a change in pull request #5136:
URL: https://github.com/apache/nifi/pull/5136#discussion_r664874346



##
File path: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/processor/util/StandardValidators.java
##
@@ -667,6 +667,46 @@ public ValidationResult validate(final String subject, final String input, final
 };
 }
 
+public static Validator createRegexMatchingValidatorWithEL(final Pattern pattern, final String validationMessage) {

Review comment:
   @timeabarna I don't think I understand why we need both 
`createRegexMatchingValidator()` and `createRegexMatchingValidatorWithEL()`. If 
`createRegexMatchingValidator()` isn't working with parameters wouldn't it work 
to just make the improvement you've made in the `WithEL` variant?
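   
   For illustration, a sketch of what that consolidation might look like, 
assuming the existing validator shape in StandardValidators (not the PR's 
actual code):
   
   ```java
   public static Validator createRegexMatchingValidator(final Pattern pattern, final String validationMessage) {
       return (subject, input, context) -> {
           // Resolve Expression Language (including parameter references) first,
           // then match the evaluated value against the pattern.
           final String evaluated = context.newPropertyValue(input)
                   .evaluateAttributeExpressions()
                   .getValue();
           final boolean matched = evaluated != null && pattern.matcher(evaluated).matches();
           return new ValidationResult.Builder()
                   .subject(subject)
                   .input(input)
                   .valid(matched)
                   .explanation(matched ? null : validationMessage)
                   .build();
       };
   }
   ```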




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8615) ExecuteScript with python when use module directory

2021-07-06 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-8615:
---
Fix Version/s: (was: 1.12.1)

> ExecuteScript with python when use module directory
> ---
>
> Key: NIFI-8615
> URL: https://issues.apache.org/jira/browse/NIFI-8615
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: debian linux
>Reporter: Aldric DUPONT
>Priority: Major
>  Labels: Debian
> Attachments: Capture2.PNG, Capture3.PNG, test_bug.xml
>
>
>  
> *I use the additional Python module "pytz"; its module directory is located in
> "/usr/local/lib/python3.9/site-packages".*
> *I use sample1 in "Script Body" (no comment):*
> from org.python.core.util.FileUtil import wrap
> from org.apache.nifi.processors.script import ExecuteScript
> from datetime import datetime, tzinfo
> flow_file = session.get()
> import pytz
> from pytz import timezone
> utc = pytz.utc
> eastern = timezone('US/Eastern')
> flow_file = session.putAttribute(flow_file, 'timezone', eastern.zone)
> flow_file = session.putAttribute(flow_file, 'utc', utc.zone)
> flow_file = session.putAttribute(flow_file, 'test', 'salut')
> session.transfer(flow_file, ExecuteScript.REL_SUCCESS)
>  
> *and sample2 with the pytz block commented out:*
> from org.python.core.util.FileUtil import wrap
> from org.apache.nifi.processors.script import ExecuteScript
> from datetime import datetime, tzinfo
> flow_file = session.get()
> """
> import pytz
> from pytz import timezone
> utc = pytz.utc
> eastern = timezone('US/Eastern')
> flow_file = session.putAttribute(flow_file, 'timezone', eastern.zone)
> flow_file = session.putAttribute(flow_file, 'utc', utc.zone)
> """
> flow_file = session.putAttribute(flow_file, 'test', 'salut')
> session.transfer(flow_file, ExecuteScript.REL_SUCCESS)
>  
> +*Try 1*+
> *When sample1 is used in version 1.13.2, ExecuteScript raises an error:*
> {color:#ff8b00}ERROR [Timer-Driven Process Thread-4] 
> o.a.nifi.processors.script.ExecuteScript 
> ExecuteScript[id=790ea9a2-0179-1000-a662-30042349b329] Failed to process 
> session due to org.apache.nifi.processor.exception.ProcessException: 
> javax.script.ScriptException: ImportError: No module named pytz in 

[jira] [Updated] (NIFI-1449) PutEmail needs more unit tests

2021-07-06 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-1449:
---
Issue Type: Test  (was: Improvement)

> PutEmail needs more unit tests
> --
>
> Key: NIFI-1449
> URL: https://issues.apache.org/jira/browse/NIFI-1449
> Project: Apache NiFi
>  Issue Type: Test
>Reporter: Joe Percivall
>Assignee: Andre F de Miranda
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the standard processor PutEmail only has two unit tests. One 
> verifies "testHostNotFound" and the other "testEmailPropertyFormatters". Both 
> route to failure. There is no check that the processor actually functions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8742) Unable to view FlowFile Content in cluster mode

2021-07-06 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8742:
-
Fix Version/s: (was: 1.14.0)

> Unable to view FlowFile Content in cluster mode
> ---
>
> Key: NIFI-8742
> URL: https://issues.apache.org/jira/browse/NIFI-8742
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.14.0
>Reporter: Mark Payne
>Priority: Critical
>
> When I create some content and List Queue I can see the FlowFile in the 
> queue. I can then download it. However, when I attempt to view it, I get a 
> TimeoutException:
> {code:java}
> 2021-06-25 18:08:55,958 WARN [Replicate Request Thread-1] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET 
> /nifi-api/flowfile-queues/452afb8c-017a-1000--46f5f263/flowfiles/907f11da-666f-428f-9582-b9afb0ac107a/content
>  to localhost:8481 due to java.net.SocketTimeoutException: timeout
> 2021-06-25 18:08:55,962 WARN [Replicate Request Thread-1] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator
> java.net.SocketTimeoutException: timeout
>   at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
>   at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
>   at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
>   at okio.RealBufferedSource.indexOf(RealBufferedSource.kt:427)
>   at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.kt:320)
>   at okhttp3.internal.http1.HeadersReader.readLine(HeadersReader.kt:29)
>   at 
> okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:178)
>   at 
> okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:106)
>   at 
> okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:79)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
>   at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:136)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:130)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:640)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:832)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
>   at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
>   at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
>   at okio.InputStreamSource.read(JvmOkio.kt:90)
>   at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:129)
>   ... 26 common frames omitted
> {code}
> Works okay in standalone mode but not cluster mode. I have a 2-node cluster 
> 

[jira] [Updated] (NIFI-8644) Update Stateless so that a ParameterProviderDefinition is provided to dataflow instead of ParameterProvider

2021-07-06 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8644:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Update Stateless so that a ParameterProviderDefinition is provided to 
> dataflow instead of ParameterProvider
> ---
>
> Key: NIFI-8644
> URL: https://issues.apache.org/jira/browse/NIFI-8644
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: NiFi Stateless
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Currently, when a stateless dataflow is created, a ParameterProvider is 
> supplied, which the engine can use to retrieve parameterized values for the 
> dataflow.
> However, it will make sense going forward to allow for a ParameterProvider to 
> be defined outside of the scope of what is provided by default in the 
> stateless api. For example, there may be a desire to integrate with a 
> secret store provided by a public cloud service, etc. Such a capability would 
> need to be encapsulated in a NAR instead of added to the root class path.
> As such, we need to refactor the stateless api such that instead of providing 
> a ParameterProvider, we instead introduce a notion of a 
> ParameterProviderDefinition, and then include a List of 
> ParameterProviderDefinitions as part of the Dataflow Definition.
> In this way, we can make the parameter retrieval far more flexible and 
> powerful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #5113: NIFI-8644: Introduced a notion of ParameterProviderDefinition. Refact…

2021-07-06 Thread GitBox


asfgit closed pull request #5113:
URL: https://github.com/apache/nifi/pull/5113


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8644) Update Stateless so that a ParameterProviderDefinition is provided to dataflow instead of ParameterProvider

2021-07-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376047#comment-17376047
 ] 

ASF subversion and git services commented on NIFI-8644:
---

Commit 6df07df3b2d2dbabeb279ac749ed45c37efd5d01 in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=6df07df ]

NIFI-8644: Introduced a notion of ParameterProviderDefinition

- Refactored stateless to use this when creating a dataflow so that Parameter 
Provider implementations can be externalized into NARs. Also updated 
ExtensionDiscoveringManager such that callers are able to provide a new type of 
class to be discovered (e.g., ParameterProvider) so that the extensions will be 
automatically discovered
- Put specific command-line overrides as highest precedence for parameter 
overrides
- Make ParameterOverrideProvider valid by allowing for dynamically added 
parameters
- Fixed bug in validation logic, added new system tests to verify proper 
handling of Required and Optional properties
- Addressed review feedback and fixed some bugs. Also added system test to 
verify Parameter Providers are working as expected

This closes #5113

Signed-off-by: David Handermann 


> Update Stateless so that a ParameterProviderDefinition is provided to 
> dataflow instead of ParameterProvider
> ---
>
> Key: NIFI-8644
> URL: https://issues.apache.org/jira/browse/NIFI-8644
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: NiFi Stateless
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, when a stateless dataflow is created, a ParameterProvider is 
> supplied, which the engine can use to retrieve parameterized values for the 
> dataflow.
> However, it will make sense going forward to allow for a ParameterProvider to 
> be defined outside of the scope of what is provided by default in the 
> stateless api. For example, there may be a desire to integrate with a 
> secret store provided by a public cloud service, etc. Such a capability would 
> need to be encapsulated in a NAR instead of added to the root class path.
> As such, we need to refactor the stateless api such that instead of providing 
> a ParameterProvider, we instead introduce a notion of a 
> ParameterProviderDefinition, and then include a List of 
> ParameterProviderDefinitions as part of the Dataflow Definition.
> In this way, we can make the parameter retrieval far more flexible and 
> powerful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 commented on pull request #5192: NIFI-8195: add default connection settings to process group configura…

2021-07-06 Thread GitBox


mattyb149 commented on pull request #5192:
URL: https://github.com/apache/nifi/pull/5192#issuecomment-875101003


   Not sure if your latest WIP commit is the right approach; check my latest 
comment on StandardProcessGroupDAO.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on a change in pull request #5192: NIFI-8195: add default connection settings to process group configura…

2021-07-06 Thread GitBox


mattyb149 commented on a change in pull request #5192:
URL: https://github.com/apache/nifi/pull/5192#discussion_r664895968



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/dao/impl/StandardProcessGroupDAO.java
##
@@ -373,6 +373,11 @@ public ProcessGroup updateProcessGroup(ProcessGroupDTO processGroupDTO) {
 if (flowFileOutboundPolicy != null) {
 group.setFlowFileOutboundPolicy(flowFileOutboundPolicy);
 }
+
+        group.setDefaultFlowFileExpiration(processGroupDTO.getDefaultFlowFileExpiration());

Review comment:
   When the position is changed, all other fields in the DTO are null. When 
`getDefaultFlowFileExpiration()` is null, `setDefaultFlowFileExpiration()` sets 
the system default. These method calls should check for null like the code 
right above it, and not call `set` if the value is null (but leave the 
defensive code in StandardProcessGroup)
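   
   A sketch of the suggested guard, mirroring the flowFileOutboundPolicy check 
shown just above in the diff (illustrative, not the final patch):
   
   ```java
   final String defaultFlowFileExpiration = processGroupDTO.getDefaultFlowFileExpiration();
   if (defaultFlowFileExpiration != null) {
       // Only update when the DTO carries a value; a position-only update
       // sends null and must not reset the group to the system default.
       group.setDefaultFlowFileExpiration(defaultFlowFileExpiration);
   }
   ```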




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8764) Refactor UnpackContent to use Commons Compress and Zip4j

2021-07-06 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8764:
---
Status: Patch Available  (was: In Progress)

> Refactor UnpackContent to use Commons Compress and Zip4j
> 
>
> Key: NIFI-8764
> URL: https://issues.apache.org/jira/browse/NIFI-8764
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.13.2, 1.13.1, 1.13.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Updating the {{UnpackContent}} processor to support optional decryption of 
> password-protected Zip files changed the implementation library from Apache 
> Commons Compress to Zip4j.  Although Zip4j supports most standard Zip 
> operations, it does not currently support certain alternative compression 
> algorithms, such as bzip2.  Apache Commons Compress does not support 
> decryption of password-protected Zip files.
> In order to support the widest range of available Zip formats, 
> {{UnpackContent}} should default to using Apache Commons Compress for Zip 
> files, and use Zip4j when the processor is configured with the Password 
> property.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] exceptionfactory opened a new pull request #5201: NIFI-8764 Refactored UnpackContent to use Commons Compress and Zip4j

2021-07-06 Thread GitBox


exceptionfactory opened a new pull request #5201:
URL: https://github.com/apache/nifi/pull/5201


    #### Description of PR
   
   NIFI-8764 Refactors `UnpackContent` to use Apache Commons Compress for 
reading standard Zip files and Zip4j for reading encrypted Zip files.  The 
presence of a value in the `Password` property results in using Zip4j, 
otherwise Apache Commons Compress will be used.  This approach provides greater 
compatibility for compression algorithms supported in Apache Commons Compress 
that are not supported in Zip4j, while also maintaining support for decryption 
of password-protected Zip files as implemented for NIFI-.
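   
   An illustrative sketch of the selection logic described above (the unpacker 
class names are assumptions, not the PR's actual classes):
   
   ```java
   // Choose the Zip implementation based on whether a password is configured.
   final String password = context.getProperty(PASSWORD).getValue();
   final Unpacker unpacker;
   if (password == null || password.isEmpty()) {
       // Apache Commons Compress: wider algorithm support (e.g. bzip2 entries)
       unpacker = new CommonsCompressZipUnpacker();
   } else {
       // Zip4j: supports decryption of password-protected archives
       unpacker = new Zip4jZipUnpacker(password.toCharArray());
   }
   ```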
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [X] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] scottyaslan commented on pull request #5197: NIFI-8756 Upgraded AngularJS to 1.8.2 and JQuery to 3.6.0

2021-07-06 Thread GitBox


scottyaslan commented on pull request #5197:
URL: https://github.com/apache/nifi/pull/5197#issuecomment-875083501


   LGTM +1 Thanks @exceptionfactory!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8764) Refactor UnpackContent to use Commons Compress and Zip4j

2021-07-06 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8764:
---
Summary: Refactor UnpackContent to use Commons Compress and Zip4j  (was: 
Refactor UnpackContent to use Commons Compress)

> Refactor UnpackContent to use Commons Compress and Zip4j
> 
>
> Key: NIFI-8764
> URL: https://issues.apache.org/jira/browse/NIFI-8764
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.13.0, 1.13.1, 1.13.2
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>
> Updating the {{UnpackContent}} processor to support optional decryption of 
> password-protected Zip files changed the implementation library from Apache 
> Commons Compress to Zip4j.  Although Zip4j supports most standard Zip 
> operations, it does not currently support certain alternative compression 
> algorithms, such as bzip2.  Apache Commons Compress does not support 
> decryption of password-protected Zip files.
> In order to support the widest range of available Zip formats, 
> {{UnpackContent}} should default to using Apache Commons Compress for Zip 
> files, and use Zip4j when the processor is configured with the Password 
> property.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markobean commented on pull request #5192: NIFI-8195: add default connection settings to process group configura…

2021-07-06 Thread GitBox


markobean commented on pull request #5192:
URL: https://github.com/apache/nifi/pull/5192#issuecomment-875080985


   Ok, good catch. I guess I never moved the PG. That's odd behavior to say the 
least. I don't know why simply moving the PG would cause it to revert. I'll 
look into it.
   
   Any ideas off the top of your head why a placement change would cause 
properties to change?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on pull request #5192: NIFI-8195: add default connection settings to process group configura…

2021-07-06 Thread GitBox


mattyb149 commented on pull request #5192:
URL: https://github.com/apache/nifi/pull/5192#issuecomment-875066826


   I was able to reproduce with the following:
   
   1) Starting from a blank canvas/flow, create a new PG at the root level 
called 'test1'
   2) Right-click on the PG and change the values for default backpressure to 
2 and 2 GB. Click Apply
   3) Go into the PG and create a new PG called 'child1'
   4) Right-click and configure the child1 PG, verify the values were inherited 
from the test1 PG. Click Cancel
   5) Move the child PG on the canvas
   6) Right-click and configure the child1 PG, verify the values are the 
original defaults (1 and 1 GB)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8742) Unable to view FlowFile Content in cluster mode

2021-07-06 Thread Mark Payne (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376008#comment-17376008
 ] 

Mark Payne commented on NIFI-8742:
--

I found that when I run into this, it only happens when my browser is pointed 
at the Cluster Coordinator. If I point to any other node in the cluster, it 
works without issue. It also works fine when not in cluster mode. And even when 
pointing at the Cluster Coordinator, I'm still able to download. Because of 
this, I've changed the Priority from Blocker to Critical.

> Unable to view FlowFile Content in cluster mode
> ---
>
> Key: NIFI-8742
> URL: https://issues.apache.org/jira/browse/NIFI-8742
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.14.0
>Reporter: Mark Payne
>Priority: Critical
> Fix For: 1.14.0
>
>
> When I create some content and List Queue I can see the FlowFile in the 
> queue. I can then download it. However, when I attempt to view it, I get a 
> TimeoutException:
> {code:java}
> 2021-06-25 18:08:55,958 WARN [Replicate Request Thread-1] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET 
> /nifi-api/flowfile-queues/452afb8c-017a-1000--46f5f263/flowfiles/907f11da-666f-428f-9582-b9afb0ac107a/content
>  to localhost:8481 due to java.net.SocketTimeoutException: timeout
> 2021-06-25 18:08:55,962 WARN [Replicate Request Thread-1] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator
> java.net.SocketTimeoutException: timeout
>   at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
>   at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
>   at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
>   at okio.RealBufferedSource.indexOf(RealBufferedSource.kt:427)
>   at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.kt:320)
>   at okhttp3.internal.http1.HeadersReader.readLine(HeadersReader.kt:29)
>   at 
> okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:178)
>   at 
> okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:106)
>   at 
> okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:79)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
>   at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:136)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:130)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:640)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:832)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
> 

[jira] [Updated] (NIFI-8742) Unable to view FlowFile Content in cluster mode

2021-07-06 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8742:
-
Priority: Critical  (was: Blocker)

> Unable to view FlowFile Content in cluster mode
> ---
>
> Key: NIFI-8742
> URL: https://issues.apache.org/jira/browse/NIFI-8742
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.14.0
>Reporter: Mark Payne
>Priority: Critical
> Fix For: 1.14.0
>
>
> When I create some content and List Queue I can see the FlowFile in the 
> queue. I can then download it. However, when I attempt to view it, I get a 
> TimeoutException:
> {code:java}
> 2021-06-25 18:08:55,958 WARN [Replicate Request Thread-1] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET 
> /nifi-api/flowfile-queues/452afb8c-017a-1000--46f5f263/flowfiles/907f11da-666f-428f-9582-b9afb0ac107a/content
>  to localhost:8481 due to java.net.SocketTimeoutException: timeout
> 2021-06-25 18:08:55,962 WARN [Replicate Request Thread-1] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator
> java.net.SocketTimeoutException: timeout
>   at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143)
>   at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:162)
>   at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:335)
>   at okio.RealBufferedSource.indexOf(RealBufferedSource.kt:427)
>   at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.kt:320)
>   at okhttp3.internal.http1.HeadersReader.readLine(HeadersReader.kt:29)
>   at 
> okhttp3.internal.http1.Http1ExchangeCodec.readResponseHeaders(Http1ExchangeCodec.kt:178)
>   at 
> okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:106)
>   at 
> okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:79)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
>   at 
> okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>   at 
> okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
>   at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:136)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:130)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:640)
>   at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:832)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
>   at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
>   at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
>   at okio.InputStreamSource.read(JvmOkio.kt:90)
>   at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:129)
>   ... 26 common frames omitted
> {code}
> Works okay in standalone mode but not cluster 

[jira] [Updated] (NIFI-8501) Add support for Azure Storage Client-Side Encryption

2021-07-06 Thread Joey Frazee (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Frazee updated NIFI-8501:
--
Fix Version/s: 1.14.0

> Add support for Azure Storage Client-Side Encryption
> 
>
> Key: NIFI-8501
> URL: https://issues.apache.org/jira/browse/NIFI-8501
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.13.2
>Reporter: Guillaume Schaer
>Assignee: Guillaume Schaer
>Priority: Major
>  Labels: AZURE
> Fix For: 1.14.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Microsoft allows for Blob stored on Azure to be encrypted client-side using 
> key wrapping algorithm. 
> Implementation details can be found here: 
> [https://docs.microsoft.com/en-us/azure/storage/common/storage-client-side-encryption-java?tabs=java]
> Adding support for such encryption method would offer more compatibility with 
> the Azure ecosystem.
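
Not the implementation itself, but a generic illustration of the key-wrapping 
idea behind client-side encryption (plain JCE; in the Azure scheme the 
key-encryption key would normally be held in Key Vault):

{code:java}
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyWrapSketch {
    public static void main(String[] args) throws Exception {
        final KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(256);
        final SecretKey contentKey = generator.generateKey(); // encrypts the blob data
        final SecretKey kek = generator.generateKey();        // key-encryption key

        final Cipher wrapCipher = Cipher.getInstance("AESWrap");
        wrapCipher.init(Cipher.WRAP_MODE, kek);
        // The wrapped content key is stored alongside the blob as metadata.
        final byte[] wrappedContentKey = wrapCipher.wrap(contentKey);
        System.out.println("Wrapped key length: " + wrappedContentKey.length);
    }
}
{code}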



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-8501) Add support for Azure Storage Client-Side Encryption

2021-07-06 Thread Joey Frazee (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Frazee resolved NIFI-8501.
---
Resolution: Fixed

> Add support for Azure Storage Client-Side Encryption
> 
>
> Key: NIFI-8501
> URL: https://issues.apache.org/jira/browse/NIFI-8501
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.13.2
>Reporter: Guillaume Schaer
>Assignee: Guillaume Schaer
>Priority: Major
>  Labels: AZURE
> Fix For: 1.14.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Microsoft allows for Blob stored on Azure to be encrypted client-side using 
> key wrapping algorithm. 
> Implementation details can be found here: 
> [https://docs.microsoft.com/en-us/azure/storage/common/storage-client-side-encryption-java?tabs=java]
> Adding support for such encryption method would offer more compatibility with 
> the Azure ecosystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8501) Add support for Azure Storage Client-Side Encryption

2021-07-06 Thread Joey Frazee (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Frazee updated NIFI-8501:
--
Affects Version/s: 1.13.2

> Add support for Azure Storage Client-Side Encryption
> 
>
> Key: NIFI-8501
> URL: https://issues.apache.org/jira/browse/NIFI-8501
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.13.2
>Reporter: Guillaume Schaer
>Assignee: Guillaume Schaer
>Priority: Major
>  Labels: AZURE
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Microsoft allows for Blob stored on Azure to be encrypted client-side using 
> key wrapping algorithm. 
> Implementation details can be found here: 
> [https://docs.microsoft.com/en-us/azure/storage/common/storage-client-side-encryption-java?tabs=java]
> Adding support for such encryption method would offer more compatibility with 
> the Azure ecosystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8501) Add support for Azure Storage Client-Side Encryption

2021-07-06 Thread Joey Frazee (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Frazee updated NIFI-8501:
--
Component/s: Extensions

> Add support for Azure Storage Client-Side Encryption
> 
>
> Key: NIFI-8501
> URL: https://issues.apache.org/jira/browse/NIFI-8501
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.13.2
>Reporter: Guillaume Schaer
>Assignee: Guillaume Schaer
>Priority: Major
>  Labels: AZURE
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Microsoft allows for Blob stored on Azure to be encrypted client-side using 
> key wrapping algorithm. 
> Implementation details can be found here: 
> [https://docs.microsoft.com/en-us/azure/storage/common/storage-client-side-encryption-java?tabs=java]
> Adding support for such encryption method would offer more compatibility with 
> the Azure ecosystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8501) Add support for Azure Storage Client-Side Encryption

2021-07-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376006#comment-17376006
 ] 

ASF subversion and git services commented on NIFI-8501:
---

Commit ace27e5f693a0942afed1deaba0eca6aefe9077e in nifi's branch 
refs/heads/main from Guillaume Schaer
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ace27e5 ]

NIFI-8501 Added Azure blob client side encryption

This closes #5078

Signed-off-by: Joey Frazee 


> Add support for Azure Storage Client-Side Encryption
> 
>
> Key: NIFI-8501
> URL: https://issues.apache.org/jira/browse/NIFI-8501
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Guillaume Schaer
>Assignee: Guillaume Schaer
>Priority: Major
>  Labels: AZURE
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Microsoft allows for Blob stored on Azure to be encrypted client-side using 
> key wrapping algorithm. 
> Implementation details can be found here: 
> [https://docs.microsoft.com/en-us/azure/storage/common/storage-client-side-encryption-java?tabs=java]
> Adding support for such encryption method would offer more compatibility with 
> the Azure ecosystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8759) ExecuteSQL and ExecuteSQLRecord unnecessarily fall back to default decimal scale

2021-07-06 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376004#comment-17376004
 ] 

Matt Burgess commented on NIFI-8759:


The comments in the original Jira NIFI-3958 seem to explain why the check > 0 
is used instead of >= 0:

// Oracle returns precision=0, scale=-127 for variable scale value such as 
ROWNUM or function result.
// Specifying 'oracle.jdbc.J2EE13Compliant' SystemProperty makes it to return 
scale=0 instead.
// Queries for example, 'SELECT 1.23 as v from DUAL' can be problematic because 
it can't be mapped with decimal with scale=0.
// Default scale is used to preserve decimals in such case.

So it seems like Oracle can step on itself if the system property is applied. 
If setting the property to be compliant also causes an issue with some queries, 
IMO that's an Oracle issue and not something we should allow for at the expense 
of other compliant DBs. [~ijokarumawak] Do you have thoughts here?

Is there a use case where scale=0 but the default scale should be something 
other than 0? If not, the solution is simply to direct the user to set the 
default scale to zero; otherwise I think we should change the check to >= 0 
before using the default scale. This line got copied from the negative-precision 
clause in NIFI-3958 to the positive-precision case in NIFI-6775, so it would 
likely need to be changed in both places.
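
A sketch of the proposed check, not the exact ExecuteSQL source (reportedScale 
and defaultScale are illustrative names):

{code:java}
final int reportedScale = metaData.getScale(columnIndex);
// Current code effectively tests reportedScale > 0, so a legitimate scale of 0
// falls back to the default. Treating any non-negative scale as authoritative
// reserves the default for cases like Oracle's scale=-127 for ROWNUM.
final int scale = reportedScale >= 0 ? reportedScale : defaultScale;
{code}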

> ExecuteSQL and ExecuteSQLRecord unnecessarily fall back to default decimal 
> scale
> 
>
> Key: NIFI-8759
> URL: https://issues.apache.org/jira/browse/NIFI-8759
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Denes Arvay
>Assignee: Denes Arvay
>Priority: Major
>
> If the database returns 0 as scale of a decimal field ExecuteSQL and 
> ExecuteSQLRecord processors fall back to the default scale even though 0 
> should be a valid scale.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] jfrazee closed pull request #5078: NIFI-8501: Setup Azure blob client side encryption

2021-07-06 Thread GitBox


jfrazee closed pull request #5078:
URL: https://github.com/apache/nifi/pull/5078


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on a change in pull request #5113: NIFI-8644: Introduced a notion of ParameterProviderDefinition. Refact…

2021-07-06 Thread GitBox


markap14 commented on a change in pull request #5113:
URL: https://github.com/apache/nifi/pull/5113#discussion_r664843927



##
File path: 
nifi-system-tests/nifi-stateless-system-test-suite/src/test/java/org/apache/nifi/stateless/parameters/ParameterContextIT.java
##
@@ -47,6 +50,174 @@
 
 public class ParameterContextIT extends StatelessSystemIT {
 
+    @Test
+    public void testCustomParameterProvider() throws IOException, StatelessConfigurationException, InterruptedException {
+        final VersionedFlowBuilder flowBuilder = new VersionedFlowBuilder();
+        final VersionedPort outPort = flowBuilder.createOutputPort("Out");
+        final VersionedProcessor generate = flowBuilder.createSimpleProcessor("GenerateFlowFile");
+
+        generate.setProperties(Collections.singletonMap("Batch Size", "#{three}"));
+        flowBuilder.createConnection(generate, outPort, "success");
+
+        final VersionedFlowSnapshot flowSnapshot = flowBuilder.getFlowSnapshot();
+
+        // Define the Parameter Provider to use
+        final ParameterProviderDefinition numericParameterProvider = new ParameterProviderDefinition();
+        numericParameterProvider.setName("Numeric Parameter Provider");
+        numericParameterProvider.setType("org.apache.nifi.stateless.parameters.NumericParameterProvider");
+        final List<ParameterProviderDefinition> parameterProviders = Collections.singletonList(numericParameterProvider);
+
+        // Create a Parameter Context & set it on the root group.
+        final VersionedParameterContext parameterContext = flowBuilder.createParameterContext("Context 1");
+        parameterContext.getParameters().add(createVersionedParameter("three", "-1")); // Set value to -1. This should be overridden by the Numeric Parameter Provider.
+        flowBuilder.getRootGroup().setParameterContextName("Context 1");
+
+        // Start up the dataflow
+        final StatelessDataflow dataflow = loadDataflow(flowSnapshot, Collections.emptyList(), parameterProviders, Collections.emptySet(), TransactionThresholds.SINGLE_FLOWFILE);
+
+        final DataflowTrigger trigger = dataflow.trigger();
+        final TriggerResult result = trigger.getResult();
+        final List<FlowFile> outputFlowFiles = result.getOutputFlowFiles().get("Out");
+        assertEquals(3, outputFlowFiles.size());
+        result.acknowledge();
+    }
+
+
+    @Test
+    public void testInvalidParameterProvider() throws IOException, StatelessConfigurationException {
+        final VersionedFlowBuilder flowBuilder = new VersionedFlowBuilder();
+        final VersionedPort outPort = flowBuilder.createOutputPort("Out");
+        final VersionedProcessor generate = flowBuilder.createSimpleProcessor("GenerateFlowFile");
+
+        generate.setProperties(Collections.singletonMap("Batch Size", "#{three}"));
+        flowBuilder.createConnection(generate, outPort, "success");
+
+        final VersionedFlowSnapshot flowSnapshot = flowBuilder.getFlowSnapshot();
+
+        // Define the Parameter Provider to use
+        final ParameterProviderDefinition numericParameterProvider = new ParameterProviderDefinition();
+        numericParameterProvider.setName("Invalid Parameter Provider");
+        numericParameterProvider.setType("org.apache.nifi.stateless.parameters.InvalidParameterProvider");
+        final List<ParameterProviderDefinition> parameterProviders = Collections.singletonList(numericParameterProvider);
+
+        // Create a Parameter Context & set it on the root group.
+        final VersionedParameterContext parameterContext = flowBuilder.createParameterContext("Context 1");
+        parameterContext.getParameters().add(createVersionedParameter("three", "-1")); // Set value to -1. This should be overridden by the Numeric Parameter Provider.
+        flowBuilder.getRootGroup().setParameterContextName("Context 1");
+
+        try {
+            loadDataflow(flowSnapshot, Collections.emptyList(), parameterProviders, Collections.emptySet(), TransactionThresholds.SINGLE_FLOWFILE);
+            Assert.fail("Expected to fail on startup because parameter provider is not valid");
+        } catch (final IllegalStateException expected) {
+        }

Review comment:
   Didn't realize that `assertThrows` had been added to JUnit 4. Will 
update that.
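   
   For reference, assertThrows is available from JUnit 4.13 onward, so the 
try/fail/catch above collapses to a single assertion:
   
   ```java
   assertThrows(IllegalStateException.class,
           () -> loadDataflow(flowSnapshot, Collections.emptyList(), parameterProviders,
                   Collections.emptySet(), TransactionThresholds.SINGLE_FLOWFILE));
   ```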




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markobean commented on pull request #5192: NIFI-8195: add default connection settings to process group configura…

2021-07-06 Thread GitBox


markobean commented on pull request #5192:
URL: https://github.com/apache/nifi/pull/5192#issuecomment-875036011


   Awesome. Thanks @mattyb149 
   And, if you find a case where properties are inherited from nifi.properties 
(or anywhere other than the PG's immediate parent), please let me know.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on pull request #5192: NIFI-8195: add default connection settings to process group configura…

2021-07-06 Thread GitBox


mattyb149 commented on pull request #5192:
URL: https://github.com/apache/nifi/pull/5192#issuecomment-875028755


   I'm doing a final build/review/test before merging; hopefully it will be 
merged in an hour or so.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8741) Update tests to use an available port instead of choosing specific ports

2021-07-06 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8741:
---
Status: Patch Available  (was: Open)

> Update tests to use an available port instead of choosing specific ports
> -
>
> Key: NIFI-8741
> URL: https://issues.apache.org/jira/browse/NIFI-8741
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.13.2
>Reporter: Nathan Gough
>Assignee: David Handermann
>Priority: Minor
>  Labels: port, test
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> JettyServerGroovyTest (which uses port 8443) failed on my machine because I 
> had a service running on that port. Update tests in NiFi to retrieve an 
> available port instead of attempting to bind to specific/common port values 
> which can cause intermittent issues.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] exceptionfactory opened a new pull request #5200: NIFI-8741 Changed JettyServerGroovyTest to use Available Port

2021-07-06 Thread GitBox


exceptionfactory opened a new pull request #5200:
URL: https://github.com/apache/nifi/pull/5200


    Description of PR
   
   NIFI-8741 Updates `JettyServerGroovyTest` to use 
`NetworkUtils.getAvailableTcpPort()` instead of 8443.  This change avoids 
potential unit test failures when some other service is listening on port 8443.
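   As an aside for reviewers: a utility like this is commonly implemented by 
binding a socket to port 0 and letting the OS pick a free port. A minimal 
sketch of that idea (not necessarily NiFi's exact implementation):
   
       import java.io.IOException;
       import java.net.ServerSocket;
   
       static int getAvailableTcpPort() throws IOException {
           // port 0 asks the OS for any free ephemeral port
           try (ServerSocket socket = new ServerSocket(0)) {
               return socket.getLocalPort();
           }
       }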
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (NIFI-8741) Update tests to use an available port instead of choosing specific ports

2021-07-06 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann reassigned NIFI-8741:
--

Assignee: David Handermann  (was: Nathan Gough)

> Update tests to use an available port instead of choosing specific ports
> -
>
> Key: NIFI-8741
> URL: https://issues.apache.org/jira/browse/NIFI-8741
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.13.2
>Reporter: Nathan Gough
>Assignee: David Handermann
>Priority: Minor
>  Labels: port, test
>
> JettyServerGroovyTest (which uses port 8443) failed on my machine because I 
> had a service running on that port. Update tests in NiFi to retrieve an 
> available port instead of attempting to bind to specific/common port values 
> which can cause intermittent issues.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8764) Refactor UnpackContent to use Commons Compress

2021-07-06 Thread David Handermann (Jira)
David Handermann created NIFI-8764:
--

 Summary: Refactor UnpackContent to use Commons Compress
 Key: NIFI-8764
 URL: https://issues.apache.org/jira/browse/NIFI-8764
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.13.2, 1.13.1, 1.13.0
Reporter: David Handermann
Assignee: David Handermann


Updating the {{UnpackContent}} processor to support optional decryption of 
password-protected Zip files changed the implementation library from Apache 
Commons Compress to Zip4j.  Although Zip4j supports most standard Zip 
operations, it does not currently support certain alternative compression 
algorithms, such as bzip2.  Apache Commons Compress does not support decryption 
of password-protected Zip files.

In order to support the widest range of available Zip formats, 
{{UnpackContent}} should default to using Apache Commons Compress for Zip 
files, and use Zip4j when the processor is configured with the Password 
property.
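A sketch of that selection logic (the helper is hypothetical, not the actual 
{{UnpackContent}} code; the two stream classes are the entry points of the 
respective libraries):

{code:java}
import java.io.InputStream;
import net.lingala.zip4j.io.inputstream.ZipInputStream;
import org.apache.commons.compress.archivers.zip.ZipArchiveInputStream;

// hypothetical helper: choose the unpacking implementation based on the Password property
static InputStream openZipStream(InputStream in, char[] password) {
    if (password == null) {
        // Commons Compress: wider format support (e.g. bzip2-compressed entries)
        return new ZipArchiveInputStream(in);
    }
    // Zip4j: needed for password-protected archives
    return new ZipInputStream(in, password);
}
{code}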



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7777) UnpackContent should accept password

2021-07-06 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-:
---
Description: As reported in a Stack Overflow question, some archive files 
are (or need to be) password-protected. The {{UnpackContent}} processor does 
not currently have any mechanism for specifying a password for 
compression/decompression.  (was: As reported in a Stack Overflow question, 
some archive files are (or need to be) password-protected. The 
{{CompressContent}} processor does not currently have any mechanism for 
specifying a password for compression/decompression. )

> UnpackContent should accept password
> 
>
> Key: NIFI-
> URL: https://issues.apache.org/jira/browse/NIFI-
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.12.0
>Reporter: Andy LoPresto
>Assignee: David Handermann
>Priority: Major
>  Labels: archive, compress, password, security
> Fix For: 1.13.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> As reported in a Stack Overflow question, some archive files are (or need to 
> be) password-protected. The {{UnpackContent}} processor does not currently 
> have any mechanism for specifying a password for compression/decompression.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7777) UnpackContent should accept password

2021-07-06 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-:
---
Summary: UnpackContent should accept password  (was: CompressContent should 
accept password)

> UnpackContent should accept password
> 
>
> Key: NIFI-
> URL: https://issues.apache.org/jira/browse/NIFI-
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.12.0
>Reporter: Andy LoPresto
>Assignee: David Handermann
>Priority: Major
>  Labels: archive, compress, password, security
> Fix For: 1.13.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> As reported in a Stack Overflow question, some archive files are (or need to 
> be) password-protected. The {{CompressContent}} processor does not currently 
> have any mechanism for specifying a password for compression/decompression. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8741) Update tests to use an available port instead of choosing specific ports

2021-07-06 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17375902#comment-17375902
 ] 

Joe Witt commented on NIFI-8741:


Once fixed, we can set the fix version.

> Update tests to use an available port instead of choosing specific ports
> -
>
> Key: NIFI-8741
> URL: https://issues.apache.org/jira/browse/NIFI-8741
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.13.2
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Minor
>  Labels: port, test
>
> JettyServerGroovyTest (which uses port 8443) failed on my machine because I 
> had a service running on that port. Update tests in NiFi to retrieve an 
> available port instead of attempting to bind to specific/common port values 
> which can cause intermittent issues.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8741) Update tests to use an available port instead of choosing specific ports

2021-07-06 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-8741:
---
Fix Version/s: (was: 1.14.0)

> Update tests to use an available port instead of choosing specific ports
> -
>
> Key: NIFI-8741
> URL: https://issues.apache.org/jira/browse/NIFI-8741
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.13.2
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Minor
>  Labels: port, test
>
> JettyServerGroovyTest (which uses port 8443) failed on my machine because I 
> had a service running on that port. Update tests in NiFi to retrieve an 
> available port instead of attempting to bind to specific/common port values 
> which can cause intermittent issues.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8611) GCP BigQuery processors support using designate project resource for ingestion

2021-07-06 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-8611:
---
Fix Version/s: (was: 1.14.0)

> GCP BigQuery processors support using designate project resource for ingestion
> --
>
> Key: NIFI-8611
> URL: https://issues.apache.org/jira/browse/NIFI-8611
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Chih Han Yu
>Assignee: Joe Witt
>Priority: Major
>  Labels: GCP, bigquery
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> For now, the *PutBigQueryBatch* and *PutBigQueryStreaming* processors 
> can only assign a single project id both for consuming resources and for ingestion. 
> But in some business cases, the project providing the resources and the project 
> into which the data is inserted are not always the same. 
> src/main/java/org/apache/nifi/processors/gcp/AbstractGCPProcessor.java
>  
> {code:java}
> ..
> public static final PropertyDescriptor PROJECT_ID = new PropertyDescriptor
> .Builder().name("gcp-project-id")
> .displayName("Project ID")
> .description("Google Cloud Project ID")
> .required(false)
> 
> .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
> .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
> .build();
> ..{code}
>  
> We've tested a workable solution: adding another property 
> *DESIGNATE_PROJECT_ID* in *AbstractBigQueryProcessor*; it only impacts the 
> *PutBigQueryBatch* and *PutBigQueryStreaming* processors.
> If the user provides a designate project id:
>  * Use *PROJECT_ID* (defined in AbstractGCPProcessor) as the resource-consuming 
> project. 
>  * Put data into *DESIGNATE_PROJECT_ID* (defined in 
> AbstractBigQueryProcessor). 
> If the user does {color:#ff}not{color} provide a designate project id:
>  * Use *PROJECT_ID* (defined in AbstractGCPProcessor) as the resource-consuming 
> project. 
>  * Put data into *PROJECT_ID* (defined in AbstractGCPProcessor). 
> Since we already implemented this solution in a production environment, I'll 
> submit a PR later for this improvement. 
>  
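> The fallback can be expressed in a few lines; a sketch using the property 
> names above (the NiFi API calls shown are illustrative of the general pattern):
> {code:java}
> // hypothetical sketch of resolving the ingestion target project
> final String resourceProject = context.getProperty(PROJECT_ID)
> .evaluateAttributeExpressions().getValue();
> final String designateProject = context.getProperty(DESIGNATE_PROJECT_ID)
> .evaluateAttributeExpressions().getValue();
> // billing/quota always uses PROJECT_ID; data lands in the designate project when provided
> final String targetProject = (designateProject != null) ? designateProject : resourceProject;
> {code}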



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] joewitt commented on pull request #5135: NIFI-8611: GCP BigQuery processors support using designate project resource for ingestion

2021-07-06 Thread GitBox


joewitt commented on pull request #5135:
URL: https://github.com/apache/nifi/pull/5135#issuecomment-874955160


   I'm going to remove this from 1.14 for now so we can keep going on the RC. 
But by all means let's keep progressing this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664747093



##
File path: libminifi/include/utils/crypto/ciphers/Aes256Ecb.h
##
@@ -0,0 +1,73 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+
+#include "utils/crypto/EncryptionUtils.h"
+#include "Exception.h"
+#include "core/logging/Logger.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace crypto {
+
+class CipherError : public Exception {
+ public:
+  explicit CipherError(const std::string& error_msg) : 
Exception(ExceptionType::GENERAL_EXCEPTION, error_msg) {}
+};
+
+class Aes256EcbCipher {

Review comment:
   This and any ECB-related class needs a comment noting that it is unsafe to use 
for anything except a CTR/counter-mode implementation like the one in 
rocksdb. 
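   To illustrate the hazard with a self-contained Java sketch (independent of 
the minifi code): under raw ECB, equal plaintext blocks encrypt to equal 
ciphertext blocks, which leaks structure:
   
       import java.util.Arrays;
       import javax.crypto.Cipher;
       import javax.crypto.spec.SecretKeySpec;
   
       public class EcbLeakDemo {
           public static void main(String[] args) throws Exception {
               SecretKeySpec key = new SecretKeySpec(new byte[32], "AES"); // demo key only
               Cipher ecb = Cipher.getInstance("AES/ECB/NoPadding");
               ecb.init(Cipher.ENCRYPT_MODE, key);
               byte[] twoEqualBlocks = new byte[32]; // two identical 16-byte blocks
               byte[] ct = ecb.doFinal(twoEqualBlocks);
               // prints true: identical ciphertext blocks reveal the repetition
               System.out.println(Arrays.equals(
                   Arrays.copyOfRange(ct, 0, 16), Arrays.copyOfRange(ct, 16, 32)));
           }
       }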




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1124: MINIFICPP-1367 Add option to disable NanoFi build

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1124:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1124#discussion_r664745535



##
File path: bstrp_functions.sh
##
@@ -368,6 +390,7 @@ show_supported_features() {
   echo "W. Openwsman Support ...$(print_feature_status 
OPENWSMAN_ENABLED)"
   echo "X. Azure Support ...$(print_feature_status AZURE_ENABLED)"
   echo "Y. Systemd Support .$(print_feature_status 
SYSTEMD_ENABLED)"
+  echo "Z. NanoFi Support ..$(print_feature_status NANOFI_ENABLED)"

Review comment:
   @lordgamez It should be no problem to introduce options with more than 
one character. We read and match a whole line.
   @fgerlits A bit challenging to read as well. :smile: 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1124: MINIFICPP-1367 Add option to disable NanoFi build

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1124:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1124#discussion_r664742667



##
File path: cmake/BuildTests.cmake
##
@@ -39,51 +39,54 @@ if(NOT EXCLUDE_BOOST)
 endif()
 
 function(appendIncludes testName)
-target_include_directories(${testName} SYSTEM BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/thirdparty/catch")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/include")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2/protocols")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/controller")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/repository")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/yaml")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement/metrics")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/io")
-if(WIN32)
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/win")
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/win/io")
-else()
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/posix")
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/posix/io")
-endif()
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/utils")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/processors")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/provenance")
+  target_include_directories(${testName} SYSTEM BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/thirdparty/catch")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/include")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2/protocols")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/controller")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/repository")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/yaml")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement/metrics")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/io")
+  if(WIN32)
+target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/win")
+target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/win/io")
+  else()
+target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/posix")
+target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/posix/io")
+  endif()
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/utils")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/processors")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/provenance")
 endfunction()
 
 function(createTests testName)
-message ("-- Adding test: ${testName}")
-appendIncludes("${testName}")
+  message ("-- Adding test: ${testName}")

Review comment:
   thanks




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:

[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1124: MINIFICPP-1367 Add option to disable NanoFi build

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1124:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1124#discussion_r664740811



##
File path: CMakeLists.txt
##
@@ -50,6 +50,7 @@ option(ENABLE_OPS "Enable Operations/zlib Tools" ON)
 option(ENABLE_JNI "Instructs the build system to enable the JNI extension" OFF)
 option(ENABLE_OPENCV "Instructs the build system to enable the OpenCV 
extension" OFF)
 option(ENABLE_OPC "Instructs the build system to enable the OPC extension" OFF)
+option(ENABLE_NANOFI "Instructs the build system to enable nanofi library" ON)

Review comment:
   I think the current direction is to keep the build green and hope that 
someone takes interest in maintaining and developing it. We can also do that by 
only building it in the CI, so I'm fine with a default off. I don't think there 
are any users out there, but if there are and you're reading this, please 
comment or reply with a description of your use case.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1124: MINIFICPP-1367 Add option to disable NanoFi build

2021-07-06 Thread GitBox


fgerlits commented on a change in pull request #1124:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1124#discussion_r664737013



##
File path: bstrp_functions.sh
##
@@ -368,6 +390,7 @@ show_supported_features() {
   echo "W. Openwsman Support ...$(print_feature_status 
OPENWSMAN_ENABLED)"
   echo "X. Azure Support ...$(print_feature_status AZURE_ENABLED)"
   echo "Y. Systemd Support .$(print_feature_status 
SYSTEMD_ENABLED)"
+  echo "Z. NanoFi Support ..$(print_feature_status NANOFI_ENABLED)"

Review comment:
   or we could continue with Greek letters, like it is done with hurricanes 
:)  α βιτ ςΗαλλενγινγ το τΥπε, βυτ ιτ ιζ ποσσιβλε




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markobean commented on pull request #5192: NIFI-8195: add default connection settings to process group configura…

2021-07-06 Thread GitBox


markobean commented on pull request #5192:
URL: https://github.com/apache/nifi/pull/5192#issuecomment-874937062


   @mattyb149 Just wanted to clarify whether your question was based on observed 
behavior (which I can't reproduce), or whether it was a general question based on 
how you believed it would work.
   
   Also - hoping this PR can get across the finish line before the 1.14.0 RC is 
initiated.
   
   Thanks for your time and effort!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1124: MINIFICPP-1367 Add option to disable NanoFi build

2021-07-06 Thread GitBox


fgerlits commented on a change in pull request #1124:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1124#discussion_r664728209



##
File path: CMakeLists.txt
##
@@ -589,12 +592,12 @@ if (NOT DISABLE_CURL AND NOT DISABLE_CONTROLLER)
 endif()
 
 
-if (NOT DISABLE_CURL)
-  if (ENABLE_PYTHON)
-   if (NOT WIN32)
-   add_subdirectory(python/library)
-   endif()
-  endif(ENABLE_PYTHON)
+if (NOT DISABLE_CURL AND ENABLE_PYTHON AND NOT WIN32)
+   if (ENABLE_NANOFI)
+   add_subdirectory(python/library)
+   else()
+   message(FATAL_ERROR "Nanofi, a dependency of the python 
extension is disabled, therefore Python extension cannot be enabled.")

Review comment:
   Is there a real reason why Python needs to depend on nanofi?  If not, 
can you create a Jira to remove this dependency, please?

##
File path: CMakeLists.txt
##
@@ -50,6 +50,7 @@ option(ENABLE_OPS "Enable Operations/zlib Tools" ON)
 option(ENABLE_JNI "Instructs the build system to enable the JNI extension" OFF)
 option(ENABLE_OPENCV "Instructs the build system to enable the OpenCV 
extension" OFF)
 option(ENABLE_OPC "Instructs the build system to enable the OPC extension" OFF)
+option(ENABLE_NANOFI "Instructs the build system to enable nanofi library" ON)

Review comment:
   Should the default be OFF?  I don't know how usable nanofi is right now, 
and we may want to remove it later.

##
File path: bstrp_functions.sh
##
@@ -368,6 +390,7 @@ show_supported_features() {
   echo "W. Openwsman Support ...$(print_feature_status 
OPENWSMAN_ENABLED)"
   echo "X. Azure Support ...$(print_feature_status AZURE_ENABLED)"
   echo "Y. Systemd Support .$(print_feature_status 
SYSTEMD_ENABLED)"
+  echo "Z. NanoFi Support ..$(print_feature_status NANOFI_ENABLED)"

Review comment:
   or we could continue with Greek letters, like it is done with hurricanes 
:)  α βιτ ΣΗαλλενγινγ το τΥπε, βυτ ιτ ιζ ποσσιβλε




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664729323



##
File path: extensions/rocksdb-repos/encryption/RocksDbEncryptionProvider.cpp
##
@@ -0,0 +1,119 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "RocksDbEncryptionProvider.h"
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+namespace repository {
+
+using utils::crypto::Bytes;
+using utils::crypto::Aes256EcbCipher;
+
+class AES256BlockCipher final : public rocksdb::BlockCipher {
+  static std::shared_ptr<core::logging::Logger> logger_;
+ public:
+  AES256BlockCipher(std::string database, Aes256EcbCipher cipher_impl)
+  : database_(std::move(database)),
+cipher_impl_(std::move(cipher_impl)) {}
+
+  const char *Name() const override {
+return "AES256BlockCipher";
+  }
+
+  size_t BlockSize() override {
+return Aes256EcbCipher::BLOCK_SIZE;
+  }
+
+  bool hasEqualKey(const AES256BlockCipher& other) const {
+return cipher_impl_.hasEqualKey(other.cipher_impl_);
+  }
+
+  rocksdb::Status Encrypt(char *data) override;
+
+  rocksdb::Status Decrypt(char *data) override;
+
+ private:
+  const std::string database_;
+  const Aes256EcbCipher cipher_impl_;
+};
+
+class EncryptingEnv : public rocksdb::EnvWrapper {
+ public:
+  EncryptingEnv(Env* target, std::shared_ptr<AES256BlockCipher> cipher) : 
EnvWrapper(target), env_(target), cipher_(std::move(cipher)) {}
+
+  bool hasEqualKey(const EncryptingEnv& other) const {
+return cipher_->hasEqualKey(*other.cipher_);
+  }
+
+ private:
+  std::unique_ptr<Env> env_;
+  std::shared_ptr<AES256BlockCipher> cipher_;
+};
+
+std::shared_ptr<core::logging::Logger> AES256BlockCipher::logger_ = 
logging::LoggerFactory<AES256BlockCipher>::getLogger();
+
+std::shared_ptr<rocksdb::Env> createEncryptingEnv(const 
utils::crypto::EncryptionManager& manager, const DbEncryptionOptions& options) {
+  auto cipher_impl = 
manager.createAes256EcbCipher(options.encryption_key_name);

Review comment:
   Back to the first question, you mentioned CTR (a.k.a. counter-mode 
encryption) in a different answer. This makes me think that we are just using 
ECB functions to encrypt the nonce + counter as a block, but not really using 
ECB mode encryption. Am I correct with this assumption?
   
   Here's the rocksdb CTR function: 
https://github.com/facebook/rocksdb/commit/51778612c9cf4cb842eed10f270ec0ed29ff22f1#diff-31e5f5f565771d096323f879c644c2792db9bf42018b12ca0441f09b2c876b39R803
   
   Second (authentication): I agree, I can't see anything related to message 
authentication codes in the above commit or the latest rocksdb 
env/env_encryption* source files. This is a pity, because an attacker can 
easily flip bits in the plaintext by just flipping the corresponding bits in 
the ciphertext.
   
   disclaimer: I'm by no means a crypto expert, just researched the topic 
while reviewing this PR.
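   For other readers: the construction described above can be sketched in a few 
lines of Java; this illustrates the general CTR-from-a-block-cipher idea, not 
rocksdb's exact code:
   
       // one CTR block: keystream = E_k(nonce || counter), output = input XOR keystream
       static byte[] ctrBlock(Cipher rawAesBlock, byte[] counterBlock, byte[] input) throws Exception {
           byte[] keystream = rawAesBlock.doFinal(counterBlock); // raw single-block (ECB-style) encryption
           byte[] out = new byte[input.length];
           for (int i = 0; i < input.length; i++) {
               out[i] = (byte) (input[i] ^ keystream[i]); // the same XOR both encrypts and decrypts
           }
           return out;
       }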

##
File path: extensions/rocksdb-repos/encryption/RocksDbEncryptionProvider.cpp
##
@@ -0,0 +1,119 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "RocksDbEncryptionProvider.h"
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+namespace repository {
+
+using utils::crypto::Bytes;
+using utils::crypto::Aes256EcbCipher;
+
+class 

[GitHub] [nifi] joewitt commented on pull request #5135: NIFI-8611: GCP BigQuery processors support using designate project resource for ingestion

2021-07-06 Thread GitBox


joewitt commented on pull request #5135:
URL: https://github.com/apache/nifi/pull/5135#issuecomment-874928814


   @pvillard31 are we good to go on this?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664729323



##
File path: extensions/rocksdb-repos/encryption/RocksDbEncryptionProvider.cpp
##
@@ -0,0 +1,119 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "RocksDbEncryptionProvider.h"
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+namespace repository {
+
+using utils::crypto::Bytes;
+using utils::crypto::Aes256EcbCipher;
+
+class AES256BlockCipher final : public rocksdb::BlockCipher {
+  static std::shared_ptr<core::logging::Logger> logger_;
+ public:
+  AES256BlockCipher(std::string database, Aes256EcbCipher cipher_impl)
+  : database_(std::move(database)),
+cipher_impl_(std::move(cipher_impl)) {}
+
+  const char *Name() const override {
+return "AES256BlockCipher";
+  }
+
+  size_t BlockSize() override {
+return Aes256EcbCipher::BLOCK_SIZE;
+  }
+
+  bool hasEqualKey(const AES256BlockCipher& other) const {
+return cipher_impl_.hasEqualKey(other.cipher_impl_);
+  }
+
+  rocksdb::Status Encrypt(char *data) override;
+
+  rocksdb::Status Decrypt(char *data) override;
+
+ private:
+  const std::string database_;
+  const Aes256EcbCipher cipher_impl_;
+};
+
+class EncryptingEnv : public rocksdb::EnvWrapper {
+ public:
+  EncryptingEnv(Env* target, std::shared_ptr<AES256BlockCipher> cipher) : 
EnvWrapper(target), env_(target), cipher_(std::move(cipher)) {}
+
+  bool hasEqualKey(const EncryptingEnv& other) const {
+return cipher_->hasEqualKey(*other.cipher_);
+  }
+
+ private:
+  std::unique_ptr<Env> env_;
+  std::shared_ptr<AES256BlockCipher> cipher_;
+};
+
+std::shared_ptr<core::logging::Logger> AES256BlockCipher::logger_ = 
logging::LoggerFactory<AES256BlockCipher>::getLogger();
+
+std::shared_ptr<rocksdb::Env> createEncryptingEnv(const 
utils::crypto::EncryptionManager& manager, const DbEncryptionOptions& options) {
+  auto cipher_impl = 
manager.createAes256EcbCipher(options.encryption_key_name);

Review comment:
   Back to the first question, you mentioned CTR (a.k.a. counter-mode 
encryption) in a different answer. This makes me think that we are just using 
ECB functions to encrypt the nonce + counter as a block, but not really using 
ECB mode encryption. Am I correct with this assumption?
   
   Here's the rocksdb CTR function: 
https://github.com/facebook/rocksdb/commit/51778612c9cf4cb842eed10f270ec0ed29ff22f1#diff-31e5f5f565771d096323f879c644c2792db9bf42018b12ca0441f09b2c876b39R803
   
   Second (authentication): I agree, I can't see anything related to message 
authentication codes in the above commit or the latest rocksdb 
env/env_encryption* source files. This is a pity, because an attacker can 
easily flip bits in the plaintext by just flipping the corresponding bits in 
the ciphertext.
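   The malleability is easy to see from the CTR equation: since plaintext = 
ciphertext XOR keystream, flipping a ciphertext bit flips exactly the 
corresponding plaintext bit. A fragment (with `ciphertext` standing for the 
stored bytes; the attacker needs no key):
   
       // attacker flips one bit of the stored ciphertext
       ciphertext[3] ^= 0x01;
       // after decryption the plaintext differs from the original in exactly that bit;
       // without a MAC (message authentication code) nothing detects the tampering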




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8747) ISPEnrichIP with Records

2021-07-06 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-8747:
---
Fix Version/s: (was: 1.14.0)

> ISPEnrichIP with Records 
> -
>
> Key: NIFI-8747
> URL: https://issues.apache.org/jira/browse/NIFI-8747
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.13.2
>Reporter: Floriane Allaire
>Assignee: Floriane Allaire
>Priority: Minor
>  Labels: enrichment
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Create a new processor similar to ISPEnrichIP that allows users to process flow 
> files using records, just like with GeoEnrichIP and GeoEnrichIPRecord.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] thenatog commented on pull request #5069: NIFI-6685: Add align and distribute UI actions

2021-07-06 Thread GitBox


thenatog commented on pull request #5069:
URL: https://github.com/apache/nifi/pull/5069#issuecomment-874825013


   I believe the Hive errors in the github actions can be resolved by rebasing 
on main.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-8763) Hide some properties in CSVReader when built-in CSV Format is selected

2021-07-06 Thread Peter Gyori (Jira)
Peter Gyori created NIFI-8763:
-

 Summary: Hide some properties in CSVReader when built-in CSV 
Format is selected
 Key: NIFI-8763
 URL: https://issues.apache.org/jira/browse/NIFI-8763
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Peter Gyori


In the CSVReader controller service, the value of the CSV Format property can 
either be 'Custom Format' or some built-in format with predefined parameters 
(RFC 4180, Microsoft Excel, etc.). When 'Custom Format' is selected, the 
user-defined (or default) values of other properties, like 'Value Separator', 
'Record Separator', 'Escape Character', etc., are used. However, when one of the 
built-in formats is used, the user-defined (or default) values of these 
properties are ignored and the format-specific settings are used (e.g. with RFC 
4180, the delimiter is the comma and the escape character is not set). Yet the 
user can still manipulate the values of these properties of the controller 
service, even though they will be ignored, which might lead to confusion. It 
would make sense to hide the properties that cannot be overridden when a 
built-in format is selected.

Also, the settings of the built-in formats do not appear in the NiFi 
documentation. The built-in formats come from a 3rd-party library, 
org.apache.commons.csv.CSVFormat (in the case of the Apache Commons parser). It 
would be useful to add a link to the CSVReader documentation that points to 
these settings.

[https://commons.apache.org/proper/commons-csv/apidocs/org/apache/commons/csv/CSVFormat.html]
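For illustration, the predefined settings can be inspected directly; a small 
sketch assuming the Apache Commons CSV library linked above:

{code:java}
import org.apache.commons.csv.CSVFormat;

// RFC 4180's parameters are fixed by the library, regardless of reader properties
System.out.println(CSVFormat.RFC4180.getDelimiter());       // ','
System.out.println(CSVFormat.RFC4180.getEscapeCharacter()); // null (no escape character)
{code}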



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8762) ADLSCredentialControllerService does not support EL for Storage Account name

2021-07-06 Thread Stan Antyufeev (Jira)
Stan Antyufeev created NIFI-8762:


 Summary: ADLSCredentialControllerService does not support EL for 
Storage Account name
 Key: NIFI-8762
 URL: https://issues.apache.org/jira/browse/NIFI-8762
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Stan Antyufeev


The ADLSCredentialsControllerService does not support EL for the Storage Account 
name.

This support would let the Controller Service dynamically evaluate the Storage 
Account for all operations (list, read, write, delete) where it uses a SAS Token 
for authentication.

One of the use cases for supporting EL for the Storage Account name is 
dynamically using a generated SAS Token and Storage Account as part of a user 
request to manipulate data on an Azure Blob ADLS Gen2 Storage Account.
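For comparison, NiFi components opt in to EL per property via the descriptor 
builder; a hedged sketch of what that could look like here, following the style 
of the GCP descriptor quoted in NIFI-8611 above (names are illustrative, not 
the actual Azure bundle code):

{code:java}
public static final PropertyDescriptor ACCOUNT_NAME = new PropertyDescriptor.Builder()
        .name("storage-account-name")
        .displayName("Storage Account Name")
        .required(true)
        .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
        .build();
{code}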



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-8738) Document NiFi Registry import/export from the UI

2021-07-06 Thread Andrew M. Lim (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew M. Lim reassigned NIFI-8738:
---

Assignee: Andrew M. Lim

> Document NiFi Registry import/export from the UI
> 
>
> Key: NIFI-8738
> URL: https://issues.apache.org/jira/browse/NIFI-8738
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation  Website
>Affects Versions: 1.13.2
>Reporter: M Tien
>Assignee: Andrew M. Lim
>Priority: Minor
>
> NIFI-8637 added the capability to import and export flow definitions in the 
> UI. The following needs to be documented with screenshots:
>  * Import new flow
>  * Import new version
>  * Export version



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664556602



##
File path: libminifi/src/utils/crypto/ciphers/Aes256Ecb.cpp
##
@@ -0,0 +1,122 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "openssl/conf.h"
+#include "openssl/evp.h"
+#include "openssl/err.h"
+#include "openssl/rand.h"
+#include "core/logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace crypto {
+
+using EVP_CIPHER_CTX_ptr = std::unique_ptr<EVP_CIPHER_CTX, decltype(&EVP_CIPHER_CTX_free)>;
+
+std::shared_ptr<core::logging::Logger> 
Aes256EcbCipher::logger_{core::logging::LoggerFactory<Aes256EcbCipher>::getLogger()};
+
+Aes256EcbCipher::Aes256EcbCipher(Bytes encryption_key) : 
encryption_key_(std::move(encryption_key)) {
+  if (encryption_key_.size() != KEY_SIZE) {
+handleError("Invalid key length %zu bytes, expected %zu bytes", 
encryption_key_.size(), static_cast<size_t>(KEY_SIZE));
+  }
+}
+
+Bytes Aes256EcbCipher::generateKey() {
+  unsigned char key[KEY_SIZE];
+  if (1 != RAND_bytes(key, KEY_SIZE)) {
+handleError("Couldn't generate key");
+  }
+  return Bytes(key, key + KEY_SIZE);
+}
+
+void Aes256EcbCipher::encrypt(unsigned char *data) const {
+  EVP_CIPHER_CTX_ptr ctx(EVP_CIPHER_CTX_new(), EVP_CIPHER_CTX_free);
+  if (!ctx) {
+handleError("Could not create cipher context");
+  }
+
+  if (1 != EVP_EncryptInit_ex(ctx.get(), EVP_aes_256_ecb(), nullptr, 
encryption_key_.data(), nullptr)) {

Review comment:
   I've found CTREncryptionProvider in the rocksdb code, but couldn't find 
where we specify that we use it in this PR. Is it always used with rocksdb?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664555210



##
File path: extensions/rocksdb-repos/encryption/RocksDbEncryptionProvider.cpp
##
@@ -0,0 +1,123 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "RocksDbEncryptionProvider.h"
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+namespace repository {
+
+using utils::crypto::Bytes;
+using utils::crypto::Aes256EcbCipher;
+
+namespace {
+
+class AES256BlockCipher final : public rocksdb::BlockCipher {
+  static std::shared_ptr<core::logging::Logger> logger_;
+ public:
+  AES256BlockCipher(std::string database, Aes256EcbCipher cipher_impl)
+  : database_(std::move(database)),
+cipher_impl_(std::move(cipher_impl)) {}
+
+  const char *Name() const override {
+return "AES256BlockCipher";
+  }
+
+  size_t BlockSize() override {
+return Aes256EcbCipher::BLOCK_SIZE;
+  }
+
+  bool equals(const AES256BlockCipher& other) const {
+return cipher_impl_.equals(other.cipher_impl_);
+  }

Review comment:
   ok, that makes sense




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-8761) Enable not setting a value for Escape Character in CSVReader controller service

2021-07-06 Thread Peter Gyori (Jira)
Peter Gyori created NIFI-8761:
-

 Summary: Enable not setting a value for Escape Character in 
CSVReader controller service
 Key: NIFI-8761
 URL: https://issues.apache.org/jira/browse/NIFI-8761
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Peter Gyori


Currently, Escape Character is a mandatory property in the CSVReader controller 
service. Whenever a custom CSV format is specified in the reader, it needs to 
be provided; if it is not set, it defaults to '\'. It would be useful to allow 
users not to set this value, in which case no character would be considered 
an escape character. The CSVFormat class allows passing a null value as the escape 
char (to the constructor), and some built-in formats (RFC 4180, Microsoft Excel, 
Tab-delimited, Informix Unload Escape Disabled) actually use this setting.
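A small sketch of the CSVFormat behavior this relies on, assuming Apache Commons 
CSV:

{code:java}
import org.apache.commons.csv.CSVFormat;

// passing a null Character disables escaping entirely, as RFC 4180 already does
CSVFormat noEscape = CSVFormat.DEFAULT.withEscape((Character) null);
System.out.println(noEscape.getEscapeCharacter()); // null
{code}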



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] Lehel44 commented on a change in pull request #4948: NIFI-8273 Adding Scripted Record processors

2021-07-06 Thread GitBox


Lehel44 commented on a change in pull request #4948:
URL: https://github.com/apache/nifi/pull/4948#discussion_r664551521



##
File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/processors/script/ScriptedPartitionRecord.java
##
@@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.script;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.Restricted;
+import org.apache.nifi.annotation.behavior.Restriction;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.RequiredPermission;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.record.PushBackRecordSet;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.RecordSet;
+
+import javax.script.ScriptEngine;
+import javax.script.ScriptException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.BiFunction;
+
+@EventDriven
+@SideEffectFree
+@Tags({"record", "partition", "script", "groovy", "jython", "python", 
"segment", "split", "group", "organize"})
+@CapabilityDescription("Receives Record-oriented data (i.e., data that can be 
read by the configured Record Reader) and evaluates the user-provided script 
against "
++ "each record in the incoming flow file. Each record is then grouped 
with other records sharing the same partition and a FlowFile is created for 
each group of records. " +
+"Two records share the same partition if the evaluation of the script 
yields the same return value for both. Those will be considered as part of the 
same partition.")
+@Restricted(restrictions = {
+@Restriction(requiredPermission = RequiredPermission.EXECUTE_CODE,
+explanation = "Provides operator the ability to execute 
arbitrary code assuming all permissions that NiFi has.")
+})
+@WritesAttributes({
+@WritesAttribute(attribute = "partition", description = "The partition 
of the outgoing flow file."),
+@WritesAttribute(attribute = "mime.type", description = "Sets the 
mime.type attribute to the MIME Type specified by the Record Writer"),
+@WritesAttribute(attribute = "record.count", description = "The number 
of records within the flow file."),
+@WritesAttribute(attribute = "record.error.message", description = 
"This attribute provides on failure the error message encountered by the Reader 
or Writer."),
+@WritesAttribute(attribute = "fragment.index", description = "A one-up 
number that indicates the ordering of the partitioned FlowFiles that were 
created from a single parent FlowFile"),
+@WritesAttribute(attribute = "fragment.count", description = "The 
number of partitioned FlowFiles generated from the parent FlowFile")
+})
+@SeeAlso(classNames = {
+"org.apache.nifi.processors.script.ScriptedTransformRecord",
+

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


adamdebreceni commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664551154



##
File path: libminifi/src/utils/crypto/ciphers/Aes256Ecb.cpp
##
@@ -0,0 +1,122 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "openssl/conf.h"
+#include "openssl/evp.h"
+#include "openssl/err.h"
+#include "openssl/rand.h"
+#include "core/logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace crypto {
+
+using EVP_CIPHER_CTX_ptr = std::unique_ptr<EVP_CIPHER_CTX, decltype(&EVP_CIPHER_CTX_free)>;
+
+std::shared_ptr<core::logging::Logger> 
Aes256EcbCipher::logger_{core::logging::LoggerFactory<Aes256EcbCipher>::getLogger()};
+
+Aes256EcbCipher::Aes256EcbCipher(Bytes encryption_key) : 
encryption_key_(std::move(encryption_key)) {
+  if (encryption_key_.size() != KEY_SIZE) {
+handleError("Invalid key length %zu bytes, expected %zu bytes", 
encryption_key_.size(), static_cast<size_t>(KEY_SIZE));
+  }
+}
+
+Bytes Aes256EcbCipher::generateKey() {
+  unsigned char key[KEY_SIZE];
+  if (1 != RAND_bytes(key, KEY_SIZE)) {
+handleError("Couldn't generate key");
+  }
+  return Bytes(key, key + KEY_SIZE);
+}
+
+void Aes256EcbCipher::encrypt(unsigned char *data) const {
+  EVP_CIPHER_CTX_ptr ctx(EVP_CIPHER_CTX_new(), EVP_CIPHER_CTX_free);
+  if (!ctx) {
+handleError("Could not create cipher context");
+  }
+
+  if (1 != EVP_EncryptInit_ex(ctx.get(), EVP_aes_256_ecb(), nullptr, 
encryption_key_.data(), nullptr)) {
+handleError("Could not initialize encryption cipher context");
+  }
+
+  if (1 != EVP_CIPHER_CTX_set_padding(ctx.get(), 0)) {
+handleError("Could not disable padding for cipher");
+  }

Review comment:
   added comment in 
[2b211f45](https://github.com/apache/nifi-minifi-cpp/pull/1090/commits/2b211f4547de1e60e4b37bf8c86480e54d723d67)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


adamdebreceni commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664550892



##
File path: extensions/rocksdb-repos/encryption/RocksDbEncryptionProvider.cpp
##
@@ -0,0 +1,123 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "RocksDbEncryptionProvider.h"
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+namespace repository {
+
+using utils::crypto::Bytes;
+using utils::crypto::Aes256EcbCipher;
+
+namespace {
+
+class AES256BlockCipher final : public rocksdb::BlockCipher {
+  static std::shared_ptr<core::logging::Logger> logger_;
+ public:
+  AES256BlockCipher(std::string database, Aes256EcbCipher cipher_impl)
+  : database_(std::move(database)),
+cipher_impl_(std::move(cipher_impl)) {}
+
+  const char *Name() const override {
+return "AES256BlockCipher";
+  }
+
+  size_t BlockSize() override {
+return Aes256EcbCipher::BLOCK_SIZE;
+  }
+
+  bool equals(const AES256BlockCipher& other) const {
+return cipher_impl_.equals(other.cipher_impl_);
+  }

Review comment:
   changed them to `operator==` for the ciphers, but left the `hasEqualKey` 
for the `EncryptingEnv` in 
[2b211f45](https://github.com/apache/nifi-minifi-cpp/pull/1090/commits/2b211f4547de1e60e4b37bf8c86480e54d723d67)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez opened a new pull request #1126: MINIFICPP-1355 Fix the initialization of ExecutePythonProcessor

2021-07-06 Thread GitBox


lordgamez opened a new pull request #1126:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1126


   Jira ticket: https://issues.apache.org/jira/browse/MINIFICPP-1355
   
   

   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664492706



##
File path: libminifi/src/utils/crypto/ciphers/Aes256Ecb.cpp
##
@@ -0,0 +1,122 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "openssl/conf.h"
+#include "openssl/evp.h"
+#include "openssl/err.h"
+#include "openssl/rand.h"
+#include "core/logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace crypto {
+
+using EVP_CIPHER_CTX_ptr = std::unique_ptr<EVP_CIPHER_CTX, decltype(&EVP_CIPHER_CTX_free)>;
+
+std::shared_ptr<core::logging::Logger> 
Aes256EcbCipher::logger_{core::logging::LoggerFactory<Aes256EcbCipher>::getLogger()};
+
+Aes256EcbCipher::Aes256EcbCipher(Bytes encryption_key) : 
encryption_key_(std::move(encryption_key)) {
+  if (encryption_key_.size() != KEY_SIZE) {
+handleError("Invalid key length %zu bytes, expected %zu bytes", 
encryption_key_.size(), static_cast<size_t>(KEY_SIZE));
+  }
+}
+
+Bytes Aes256EcbCipher::generateKey() {
+  unsigned char key[KEY_SIZE];
+  if (1 != RAND_bytes(key, KEY_SIZE)) {
+handleError("Couldn't generate key");
+  }
+  return Bytes(key, key + KEY_SIZE);
+}
+
+void Aes256EcbCipher::encrypt(unsigned char *data) const {
+  EVP_CIPHER_CTX_ptr ctx(EVP_CIPHER_CTX_new(), EVP_CIPHER_CTX_free);
+  if (!ctx) {
+handleError("Could not create cipher context");
+  }
+
+  if (1 != EVP_EncryptInit_ex(ctx.get(), EVP_aes_256_ecb(), nullptr, 
encryption_key_.data(), nullptr)) {
+handleError("Could not initialize encryption cipher context");
+  }
+
+  if (1 != EVP_CIPHER_CTX_set_padding(ctx.get(), 0)) {
+handleError("Could not disable padding for cipher");
+  }

Review comment:
   Could you add a comment about the 2*BLOCK_SIZE padding above these 
lines? This is useful context for the future reader IMO.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664491131



##
File path: extensions/rocksdb-repos/encryption/RocksDbEncryptionProvider.cpp
##
@@ -0,0 +1,123 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "RocksDbEncryptionProvider.h"
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+namespace repository {
+
+using utils::crypto::Bytes;
+using utils::crypto::Aes256EcbCipher;
+
+namespace {
+
+class AES256BlockCipher final : public rocksdb::BlockCipher {
+  static std::shared_ptr<core::logging::Logger> logger_;
+ public:
+  AES256BlockCipher(std::string database, Aes256EcbCipher cipher_impl)
+  : database_(std::move(database)),
+cipher_impl_(std::move(cipher_impl)) {}
+
+  const char *Name() const override {
+return "AES256BlockCipher";
+  }
+
+  size_t BlockSize() override {
+return Aes256EcbCipher::BLOCK_SIZE;
+  }
+
+  bool equals(const AES256BlockCipher& other) const {
+return cipher_impl_.equals(other.cipher_impl_);
+  }

Review comment:
   Since `database_` is only used for logging, I still think that what we 
now call `hasEqualKey` is semantically an equality comparison.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] simonbence commented on a change in pull request #5088: NIFI-3320: SendTrapSNMP and ListenTrapSNMP processors added.

2021-07-06 Thread GitBox


simonbence commented on a change in pull request #5088:
URL: https://github.com/apache/nifi/pull/5088#discussion_r662050885



##
File path: 
nifi-nar-bundles/nifi-snmp-bundle/nifi-snmp-processors/src/main/java/org/apache/nifi/snmp/configuration/TrapConfiguration.java
##
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.snmp.configuration;
+
+public interface TrapConfiguration {

Review comment:
   Please provide JavaDoc.
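   A minimal sketch of the kind of JavaDoc being requested, assuming the 
accessors quoted later in this thread (`getTrapOidValue`, `getSysUpTime`); a 
partial illustration, not the PR's actual documentation:

```java
package org.apache.nifi.snmp.configuration;

/**
 * Carries the trap-specific settings needed to assemble an outgoing
 * SNMP trap PDU.
 */
public interface TrapConfiguration {

    /**
     * @return the OID string identifying the trap type (SNMPv2c/v3 traps)
     */
    String getTrapOidValue();

    /**
     * @return the system uptime reported in the trap, in hundredths of a second
     */
    int getSysUpTime();
}
```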

##
File path: 
nifi-nar-bundles/nifi-snmp-bundle/nifi-snmp-processors/src/main/java/org/apache/nifi/snmp/configuration/TrapV2cV3Configuration.java
##
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.snmp.configuration;
+
+public class TrapV2cV3Configuration implements TrapConfiguration {
+
+private final String trapOidValue;
+private final int sysUpTime;
+
+public TrapV2cV3Configuration(final String trapOidValue, final int 
sysUpTime) {
+this.trapOidValue = trapOidValue;
+this.sysUpTime = sysUpTime;
+}
+
+@Override
+public String getTrapOidValue() {
+return trapOidValue;
+}
+
+@Override
+public int getSysUpTime() {
+return sysUpTime;
+}
+
+@Override
+public String getEnterpriseOid() {
+throw new UnsupportedOperationException("Enterprise OID is SNMPv1 
specific property.");

Review comment:
   This looks like a pretty unlucky approach. If there are such 
differences between the different versions, it would be better to avoid a shared 
interface (or at least not to use a facade interface that tries to cover them all).
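   One hypothetical way to follow that suggestion is to split the facade into 
version-specific sub-interfaces, so no implementation has to throw 
`UnsupportedOperationException` (type names below are illustrative, not from 
the PR):

```java
// Shared settings only; version-specific fields live in sub-interfaces.
interface TrapConfiguration {
    int getSysUpTime();
}

// SNMPv1-only: the enterprise OID no longer leaks into v2c/v3 implementations.
interface V1TrapConfiguration extends TrapConfiguration {
    String getEnterpriseOid();
}

// SNMPv2c/v3-only: no unsupported methods left to stub out.
interface V2TrapConfiguration extends TrapConfiguration {
    String getTrapOidValue();
}
```

   Callers needing the version-specific fields would then depend on the 
narrower interface directly instead of catching unsupported operations.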

##
File path: 
nifi-nar-bundles/nifi-snmp-bundle/nifi-snmp-processors/src/main/java/org/apache/nifi/snmp/configuration/SNMPConfiguration.java
##
@@ -104,4 +111,95 @@ public String getSecurityLevel() {
 public String getCommunityString() {
 return communityString;
 }
+
+public static Builder builder() {
+return new Builder();
+}
+
+public static class Builder {
+
+private String managerPort = "0";
+private String targetHost;
+private String targetPort;
+private int retries;
+private int timeout = 500;
+private int version;
+private String authProtocol;
+private String authPassphrase;
+private String privacyProtocol;
+private String privacyPassphrase;
+private String securityName;
+private String securityLevel;
+private String communityString;
+
+public Builder setManagerPort(final String managerPort) {
+this.managerPort = managerPort;
+return this;
+}
+
+public Builder setTargetHost(final String targetHost) {
+this.targetHost = targetHost;
+return this;
+}
+
+public Builder setTargetPort(final String targetPort) {
+this.targetPort = targetPort;
+return this;
+}
+
+public Builder setRetries(final int retries) {
+this.retries = retries;
+return this;
+}
+
+public Builder setTimeout(final int timeout) {
+this.timeout = timeout;
+return this;
+}
+
+public Builder setVersion(final int version) {
+this.version = version;
+return this;
+}
+
+public Builder setAuthProtocol(final 

[GitHub] [nifi] exceptionfactory commented on a change in pull request #5195: NIFI-8752: Automatic diagnostic at NiFi restart/stop

2021-07-06 Thread GitBox


exceptionfactory commented on a change in pull request #5195:
URL: https://github.com/apache/nifi/pull/5195#discussion_r664026479



##
File path: 
nifi-bootstrap/src/main/java/org/apache/nifi/bootstrap/util/DiagnosticProperties.java
##
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.bootstrap.util;
+
+import org.apache.nifi.properties.BootstrapProperties;
+import org.apache.nifi.util.NiFiBootstrapUtils;
+import org.apache.nifi.util.file.FileUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.Comparator;
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+public class DiagnosticProperties {
+
+private static final Logger logger = 
LoggerFactory.getLogger(DiagnosticProperties.class);
+
+private static final String ALLOWED_PROP_NAME = "nifi.diag.allowed";
+private static final boolean ALLOWED_DEFAULT_VALUE = true;
+
+private static final String DIR_PROP_NAME = "nifi.diag.dir";
+private static final String DIR_DEFAULT_VALUE = "./diagnostics";
+
+private static final String MAX_FILE_COUNT_PROP_NAME = 
"nifi.diag.filecount.max";
+private static final int MAX_FILE_COUNT_DEFAULT_VALUE = Integer.MAX_VALUE;
+
+private static final String MAX_SIZE_PROP_NAME = "nifi.diag.size.max.byte";
+private static final int MAX_SIZE_DEFAULT_VALUE = Integer.MAX_VALUE;
+
+private static final String VERBOSE_PROP_NAME = "nifi.diag.verbose";
+private static final boolean VERBOSE_DEFAULT_VALUE = false;
+
+private final String dirPath;
+private final int maxFileCount;
+private final int maxSizeInBytes;
+private final boolean verbose;
+private final boolean allowed;
+
+public DiagnosticProperties() throws IOException {
+BootstrapProperties properties = 
NiFiBootstrapUtils.loadBootstrapProperties();
+this.dirPath = properties.getProperty(DIR_PROP_NAME, 
DIR_DEFAULT_VALUE);
+this.maxFileCount = 
getPropertyAsInt(properties.getProperty(MAX_FILE_COUNT_PROP_NAME), 
MAX_FILE_COUNT_DEFAULT_VALUE);
+this.maxSizeInBytes = 
getPropertyAsInt(properties.getProperty(MAX_SIZE_PROP_NAME), 
MAX_SIZE_DEFAULT_VALUE);
+this.verbose = 
getPropertyAsBoolean(properties.getProperty(VERBOSE_PROP_NAME), 
VERBOSE_DEFAULT_VALUE);
+this.allowed = 
getPropertyAsBoolean(properties.getProperty(ALLOWED_PROP_NAME), 
ALLOWED_DEFAULT_VALUE);
+createDiagDir();
+}
+
+public Path getOldestFile() throws IOException {

Review comment:
   This class combines both configuration properties and evaluation 
methods, which does not follow the pattern of most existing NiFi properties.  
Recommend refactoring the approach to separate file and directory evaluation 
from configuration properties.
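   A rough sketch of the suggested separation, with a settings holder that only 
carries configuration and a distinct evaluator owning the filesystem logic 
(class and method names are illustrative, not a proposed final design):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

// Configuration holder: reads values once, exposes getters, no filesystem logic.
class DiagnosticSettings {
    private final String dirPath;
    private final int maxFileCount;

    DiagnosticSettings(final String dirPath, final int maxFileCount) {
        this.dirPath = dirPath;
        this.maxFileCount = maxFileCount;
    }

    String getDirPath() { return dirPath; }
    int getMaxFileCount() { return maxFileCount; }
}

// Evaluator: owns directory listing and retention decisions, built on the settings.
class DiagnosticDirectoryEvaluator {
    private final DiagnosticSettings settings;

    DiagnosticDirectoryEvaluator(final DiagnosticSettings settings) {
        this.settings = settings;
    }

    Optional<Path> getOldestFile() throws IOException {
        try (Stream<Path> files = Files.list(Paths.get(settings.getDirPath()))) {
            return files.min(Comparator.comparingLong(p -> p.toFile().lastModified()));
        }
    }
}
```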




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1124: MINIFICPP-1367 Add option to disable NanoFi build

2021-07-06 Thread GitBox


lordgamez commented on a change in pull request #1124:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1124#discussion_r664301389



##
File path: CMakeLists.txt
##
@@ -589,12 +592,12 @@ if (NOT DISABLE_CURL AND NOT DISABLE_CONTROLLER)
 endif()
 
 
-if (NOT DISABLE_CURL)
-  if (ENABLE_PYTHON)
-   if (NOT WIN32)
-   add_subdirectory(python/library)
-   endif()
-  endif(ENABLE_PYTHON)
+if (NOT DISABLE_CURL AND ENABLE_PYTHON AND NOT WIN32)
+  if (ENABLE_NANOFI)
+   add_subdirectory(python/library)
+   else()
+   message(FATAL_ERROR "Nanofi, a dependency of the python 
extension is disabled, therefore Python extension cannot be enabled.")
+   endif()

Review comment:
   Fixed in 2847fc59b50d13188dd9aa9ddf1c1fd5aad4905f

##
File path: bstrp_functions.sh
##
@@ -368,6 +390,7 @@ show_supported_features() {
   echo "W. Openwsman Support ...$(print_feature_status 
OPENWSMAN_ENABLED)"
   echo "X. Azure Support ...$(print_feature_status AZURE_ENABLED)"
   echo "Y. Systemd Support .$(print_feature_status 
SYSTEMD_ENABLED)"
+  echo "Z. NanoFi Support ..$(print_feature_status NANOFI_ENABLED)"

Review comment:
   We could discuss this when the next feature comes along, maybe we could 
continue with double letter menu entries like AA.

##
File path: cmake/BuildTests.cmake
##
@@ -39,51 +39,54 @@ if(NOT EXCLUDE_BOOST)
 endif()
 
 function(appendIncludes testName)
-target_include_directories(${testName} SYSTEM BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/thirdparty/catch")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/include")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2/protocols")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/controller")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/repository")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/yaml")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement/metrics")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/io")
-if(WIN32)
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/win")
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/win/io")
-else()
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/posix")
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/posix/io")
-endif()
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/utils")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/processors")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/provenance")
+  target_include_directories(${testName} SYSTEM BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/thirdparty/catch")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/include")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2/protocols")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/controller")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/repository")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/yaml")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement/metrics")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/io")
+  if(WIN32)
+

[GitHub] [nifi] simonbence commented on a change in pull request #4948: NIFI-8273 Adding Scripted Record processors

2021-07-06 Thread GitBox


simonbence commented on a change in pull request #4948:
URL: https://github.com/apache/nifi/pull/4948#discussion_r664276040



##
File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/processors/script/RecordBatchingProcessorFlowFileBuilder.java
##
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.script;
+
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.Record;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.BiFunction;
+import java.util.stream.Collectors;
+
+/**
+ * Helper class contains all the information necessary to prepare an outgoing 
flow file.
+ */
+final class RecordBatchingProcessorFlowFileBuilder {
+private final ProcessSession session;
+private final FlowFile incomingFlowFile;
+final private FlowFile outgoingFlowFile;
+private final OutputStream out;
+private final RecordSetWriter writer;
+private final List<Map<String, String>> attributes = new LinkedList<>();
+
+private int recordCount = 0;
+
+RecordBatchingProcessorFlowFileBuilder(
+final FlowFile incomingFlowFile,
+final ProcessSession session,
+final BiFunction<FlowFile, OutputStream, RecordSetWriter> 
recordSetWriterSupplier
+) throws IOException {
+this.session = session;
+this.incomingFlowFile = incomingFlowFile;
+this.outgoingFlowFile = session.create(incomingFlowFile);
+this.out = session.write(outgoingFlowFile);
+this.writer = recordSetWriterSupplier.apply(outgoingFlowFile, out);
+this.writer.beginRecordSet();
+}
+
+int addRecord(final Record record) throws IOException {
+final WriteResult writeResult = writer.write(record);
+attributes.add(writeResult.getAttributes());
+recordCount += writeResult.getRecordCount();
+return recordCount;
+}
+
+private Map<String, String> getWriteAttributes() {
+final Map<String, String> result = new HashMap<>();
+final Set<String> attributeNames = attributes.stream().map(a -> 
a.keySet()).flatMap(x -> x.stream()).collect(Collectors.toSet());

Review comment:
   Yes, good idea. (But to make it easier to follow, I will use 
`Set::stream` instead of `Collection::stream`.)
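   For reference, a standalone sketch of the flattening step being discussed 
(not the PR code itself):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

class AttributeNames {
    // Collects every attribute key across all WriteResults; Set::stream keeps
    // the element type of the flattened stream obvious at the call site.
    static Set<String> collect(final List<Map<String, String>> attributes) {
        return attributes.stream()
                .map(Map::keySet)        // Stream<Set<String>>
                .flatMap(Set::stream)    // instead of Collection::stream
                .collect(Collectors.toSet());
    }
}
```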

##
File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/processors/script/ScriptedPartitionRecord.java
##
@@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.script;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.Restricted;
+import org.apache.nifi.annotation.behavior.Restriction;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import 

[GitHub] [nifi] joewitt commented on pull request #5198: NIFI-8757 Upgraded MiNiFi docker-compose-rule-junit4 to 1.5.0

2021-07-06 Thread GitBox


joewitt commented on pull request #5198:
URL: https://github.com/apache/nifi/pull/5198#issuecomment-874265035


   Thanks for fixing this!  +1


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1124: MINIFICPP-1367 Add option to disable NanoFi build

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1124:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1124#discussion_r664080797



##
File path: CMakeLists.txt
##
@@ -589,12 +592,12 @@ if (NOT DISABLE_CURL AND NOT DISABLE_CONTROLLER)
 endif()
 
 
-if (NOT DISABLE_CURL)
-  if (ENABLE_PYTHON)
-   if (NOT WIN32)
-   add_subdirectory(python/library)
-   endif()
-  endif(ENABLE_PYTHON)
+if (NOT DISABLE_CURL AND ENABLE_PYTHON AND NOT WIN32)
+  if (ENABLE_NANOFI)
+   add_subdirectory(python/library)
+   else()
+   message(FATAL_ERROR "Nanofi, a dependency of the python 
extension is disabled, therefore Python extension cannot be enabled.")
+   endif()

Review comment:
   We have some strange indentation here.

##
File path: cmake/BuildTests.cmake
##
@@ -39,51 +39,54 @@ if(NOT EXCLUDE_BOOST)
 endif()
 
 function(appendIncludes testName)
-target_include_directories(${testName} SYSTEM BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/thirdparty/catch")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/include")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2/protocols")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/controller")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/repository")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/yaml")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement/metrics")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/io")
-if(WIN32)
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/win")
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/win/io")
-else()
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/posix")
-   target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/posix/io")
-endif()
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/utils")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/processors")
-target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/provenance")
+  target_include_directories(${testName} SYSTEM BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/thirdparty/catch")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/include")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2/protocols")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/c2")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/controller")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/repository")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/yaml")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/core/statemanagement/metrics")
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/io")
+  if(WIN32)
+target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/win")
+target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/win/io")
+  else()
+target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/posix")
+target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/opsys/posix/io")
+  endif()
+  target_include_directories(${testName} BEFORE PRIVATE 
"${CMAKE_SOURCE_DIR}/libminifi/include/utils")
+  target_include_directories(${testName} BEFORE PRIVATE 

[GitHub] [nifi] asfgit closed pull request #5199: NIFI-8758 Increased GitHub build timeout to 120 minutes

2021-07-06 Thread GitBox


asfgit closed pull request #5199:
URL: https://github.com/apache/nifi/pull/5199


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] asfgit closed pull request #5198: NIFI-8757 Upgraded MiNiFi docker-compose-rule-junit4 to 1.5.0

2021-07-06 Thread GitBox


asfgit closed pull request #5198:
URL: https://github.com/apache/nifi/pull/5198


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


fgerlits commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664074774



##
File path: extensions/rocksdb-repos/database/RocksDbUtils.h
##
@@ -38,19 +38,14 @@ class Writable {
  public:
   explicit Writable(T& target) : target_(target) {}
 
-  template<typename F>
-  void set(F T::* member, typename utils::type_identity<F>::type value) {
-if (!(target_.*member == value)) {
+  template<typename F, typename Comparator = std::equal_to<F>>
+  void set(F T::* member, typename utils::type_identity<F>::type value, const 
Comparator& comparator = Comparator{}) {
+if (!comparator(target_.*member, value)) {
   target_.*member = value;
   is_modified_ = true;
 }
   }
 
-  template<typename F, typename Transformer>
-  void transform(F T::* member) {
-set(member, Transformer::transform(target_.*member));
-  }

Review comment:
   can `StringAppender::transform()` be removed, too?

##
File path: extensions/rocksdb-repos/database/RocksDbInstance.cpp
##
@@ -99,6 +112,7 @@ utils::optional RocksDbInstance::open(const 
std::string& column, co
   return utils::nullopt;
 }
 gsl_Expects(db_instance);
+db_options_patch_ = db_options_patch;

Review comment:
   this seems to be the only place where we use 
`RocksDbInstance::db_options_patch_`; can it be removed?

##
File path: libminifi/test/rocksdb-tests/EncryptionTests.cpp
##
@@ -0,0 +1,108 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "../TestBase.h"
+#include "utils/TestUtils.h"
+#include "FlowFileRepository.h"
+#include "utils/IntegrationTestUtils.h"
+
+using utils::Path;
+using core::repository::FlowFileRepository;
+
+class FFRepoFixture : public TestController {
+ public:
+  FFRepoFixture() {
+LogTestController::getInstance().setDebug();
+LogTestController::getInstance().setDebug();
+LogTestController::getInstance().setTrace();
+home_ = createTempDirectory("/var/tmp/testRepo.XX");
+repo_dir_ = home_ / "flowfile_repo";
+checkpoint_dir_ = home_ / "checkpoint_dir";
+config_ = std::make_shared();
+config_->setHome(home_.str());
+container_ = std::make_shared(nullptr, nullptr, 
"container");
+content_repo_ = 
std::make_shared();
+content_repo_->initialize(config_);
+  }
+
+  static void putFlowFile(const std::shared_ptr& 
flowfile, const std::shared_ptr& repo) {
+minifi::io::BufferStream buffer;
+flowfile->Serialize(buffer);
+REQUIRE(repo->Put(flowfile->getUUIDStr(), buffer.getBuffer(), 
buffer.size()));
+  }
+
+  template<typename Fn>
+  void runWithNewRepository(Fn&& fn) {
+auto repository = std::make_shared<FlowFileRepository>("ff", 
checkpoint_dir_.str(), repo_dir_.str());
+repository->initialize(config_);
+std::map> container_map;
+container_map[container_->getUUIDStr()] = container_;
+repository->setContainers(container_map);
+repository->loadComponent(content_repo_);
+repository->start();
+std::forward<Fn>(fn)(repository);
+repository->stop();
+  }
+
+ protected:
+  std::shared_ptr container_;
+  Path home_;
+  Path repo_dir_;
+  Path checkpoint_dir_;
+  std::shared_ptr config_;
+  std::shared_ptr content_repo_;
+};
+
+TEST_CASE_METHOD(FFRepoFixture, "FlowFileRepository creates checkpoint and 
loads flowfiles") {
+  SECTION("Without encryption") {
+// pass
+  }
+  SECTION("With encryption") {
+utils::file::FileUtils::create_dir((home_ / "conf").str());
+std::ofstream{(home_ / "conf" / "bootstrap.conf").str()}
+  << static_cast<const char*>(FlowFileRepository::ENCRYPTION_KEY_NAME) << 
"="

Review comment:
   why is this cast needed?  `FlowFileRepository::ENCRYPTION_KEY_NAME` is 
already a `const char*`

##
File path: libminifi/src/utils/crypto/ciphers/Aes256Ecb.cpp
##
@@ -0,0 +1,122 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * 

[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1116: MINIFICPP-1573 Make AppendHostInfo platform independent

2021-07-06 Thread GitBox


martinzink commented on a change in pull request #1116:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1116#discussion_r664301931



##
File path: libminifi/src/utils/NetworkInterfaceInfo.cpp
##
@@ -0,0 +1,155 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "utils/NetworkInterfaceInfo.h"
+
+#ifdef WIN32
+#include 
+#include 
+#include 
+#include 
+#pragma comment(lib, "IPHLPAPI.lib")
+#else
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#endif
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+#ifdef WIN32
+std::string utf8_encode(const std::wstring& wstr) {
+  if (wstr.empty())
+return std::string();
+  int size_needed = WideCharToMultiByte(CP_UTF8, 0, &wstr[0], wstr.size(), 
nullptr, 0, nullptr, nullptr);
+  std::string result_string(size_needed, 0);
+  WideCharToMultiByte(CP_UTF8, 0, &wstr[0], wstr.size(), &result_string[0], 
size_needed, nullptr, nullptr);
+  return result_string;
+}

Review comment:
   moved it to anonymous namespace in 
https://github.com/apache/nifi-minifi-cpp/pull/1116/commits/d84894db14625f14cdef9fdf92754bed8ea4ac49

##
File path: libminifi/src/utils/NetworkInterfaceInfo.cpp
##
@@ -0,0 +1,155 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "utils/NetworkInterfaceInfo.h"
+
+#ifdef WIN32
+#include 
+#include 
+#include 
+#include 
+#pragma comment(lib, "IPHLPAPI.lib")
+#else
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#endif
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+#ifdef WIN32
+std::string utf8_encode(const std::wstring& wstr) {
+  if (wstr.empty())
+return std::string();
+  int size_needed = WideCharToMultiByte(CP_UTF8, 0, &wstr[0], wstr.size(), 
nullptr, 0, nullptr, nullptr);
+  std::string result_string(size_needed, 0);
+  WideCharToMultiByte(CP_UTF8, 0, &wstr[0], wstr.size(), &result_string[0], 
size_needed, nullptr, nullptr);
+  return result_string;
+}
+
+NetworkInterfaceInfo::NetworkInterfaceInfo(const IP_ADAPTER_ADDRESSES* 
adapter) {
+  name_ = utf8_encode(adapter->FriendlyName);
+  for (auto unicast_address = adapter->FirstUnicastAddress; unicast_address != 
nullptr; unicast_address = unicast_address->Next) {
+if (unicast_address->Address.lpSockaddr->sa_family == AF_INET) {
+  char address_buffer[INET_ADDRSTRLEN];
+  void* sin_address = 
&(reinterpret_cast<sockaddr_in*>(unicast_address->Address.lpSockaddr)->sin_addr);
+  InetNtopA(AF_INET, sin_address, address_buffer, INET_ADDRSTRLEN);
+  ip_v4_addresses_.push_back(address_buffer);
+} else if (unicast_address->Address.lpSockaddr->sa_family == AF_INET6) {
+  char address_buffer[INET6_ADDRSTRLEN];
+  void* sin_address = 
&(reinterpret_cast<sockaddr_in6*>(unicast_address->Address.lpSockaddr)->sin6_addr);
+  InetNtopA(AF_INET6, sin_address, address_buffer, INET6_ADDRSTRLEN);
+  ip_v6_addresses_.push_back(address_buffer);
+}

Review comment:
   good idea, I moved the ClientSocket's implementation to OSUtils and used 
that here and in clientsocket as well. 
https://github.com/apache/nifi-minifi-cpp/pull/1116/commits/d84894db14625f14cdef9fdf92754bed8ea4ac49

##
File path: libminifi/src/utils/NetworkInterfaceInfo.cpp
##
@@ -0,0 +1,155 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this 

[GitHub] [nifi] exceptionfactory commented on pull request #5198: NIFI-8757 Upgraded MiNiFi docker-compose-rule-junit4 to 1.5.0

2021-07-06 Thread GitBox


exceptionfactory commented on pull request #5198:
URL: https://github.com/apache/nifi/pull/5198#issuecomment-874265527


   Thanks @joewitt! Merging.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #5195: NIFI-8752: Automatic diagnostic at NiFi restart/stop

2021-07-06 Thread GitBox


ChrisSamo632 commented on a change in pull request #5195:
URL: https://github.com/apache/nifi/pull/5195#discussion_r663966104



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/bootstrap.conf
##
@@ -105,3 +105,10 @@ notification.max.attempts=5
 # The first curator connection issue is logged as ERROR, for example when NiFi 
cannot connect to one of the Zookeeper nodes.
 # Additional connection issues are logged as DEBUG until the connection is 
restored.
 
java.arg.curator.supress.excessive.logs=-Dcurator-log-only-first-connection-issue-as-error-level=true
+
+# Diagnostics
+nifi.diag.allowed=true

Review comment:
   Creating diagnostics for everyone by default seems the wrong change to 
make - I'd default this to `false`; otherwise existing installations will 
suddenly find their filesystems filling up with unexpected files whenever their 
NiFi instance restarts (the default settings also allow *many* files to be 
retained, so I could easily see people getting a bit confused/frustrated with 
diagnostics files filling up their filesystems).
   
   If this is left as `true`, I'd greatly reduce the default `max` retention 
settings (e.g. keep only the last 2 or 10 files).
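   For illustration, the safer defaults suggested above might look like this in 
bootstrap.conf (property names are taken from the quoted DiagnosticProperties 
class; the values are only examples, not part of the PR):

```
# Diagnostics disabled by default; opt in explicitly
nifi.diag.allowed=false
# If enabled, keep retention small so restarts do not fill the filesystem
nifi.diag.filecount.max=10
nifi.diag.size.max.byte=104857600
```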




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8760) VolatileContentRepository fails to retrieve content from claims with several processors

2021-07-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthieu RÉ updated NIFI-8760:
--
Description: 
For several processors, such as MergeRecord, QueryRecord, and SplitJson, using 
the VolatileContentRepository implementation causes errors when retrieving 
FlowFiles from claims. The following logs were generated using NiFi 1.13.1 from 
Docker with the flow.xml.gz and nifi.properties files attached.

MergeRecord (with JsonTreeReader, JsonRecordSetWriter with default 
configuration):

{{2021-07-06 10:15:09,170 ERROR [Timer-Driven Process Thread-1] 
o.a.nifi.processors.standard.MergeRecord 
MergeRecord[id=7b425cff-017a-1000-6a20-58c4e064df3d] Failed to bin 
StandardFlowFileRecord[uuid=3e894a96-883a-4ac2-8121-b8200964cf20,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
section=section], offset=0, 
length=5655],offset=0,name=b2c7cf61-b421-477d-902e-daeb2ed58f0d,size=5655] due 
to org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
find content for StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
section=section], offset=0, length=-1]: 
org.apache.nifi.controller.repository.ContentNotFoundException: Could not find 
content for StandardContentClaim [resourceClaim=StandardResourceClaim[id=6, 
container=in-memory, section=section], offset=0, length=-1]}}
 {{org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
find content for StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
section=section], offset=0, length=-1]}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.getContent(VolatileContentRepository.java:445)}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:468)}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:473)}}
 {{at 
org.apache.nifi.controller.repository.StandardProcessSession.getInputStream(StandardProcessSession.java:2302)}}
 {{at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2409)}}
 {{at 
org.apache.nifi.processors.standard.MergeRecord.binFlowFile(MergeRecord.java:383)}}
 {{at 
org.apache.nifi.processors.standard.MergeRecord.onTrigger(MergeRecord.java:346)}}
 {{at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)}}
 {{at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)}}
 {{at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)}}
 {{at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)}}
 {{at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)}}
 {{at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)}}
 {{at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)}}
 {{at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)}}
 {{at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}}
 {{at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}}
 {{at java.lang.Thread.run(Thread.java:748)}}

QueryRecord:

{{2021-07-06 10:15:09,174 ERROR [Timer-Driven Process Thread-4] 
o.a.nifi.processors.standard.QueryRecord 
QueryRecord[id=673fe9f6-017a-1000-8041-dfde9d02d976] Failed to determine Record 
Schema from 
StandardFlowFileRecord[uuid=090e3058-67e6-4436-bea9-d511132848e3,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=2, container=in-memory, 
section=section], offset=0, 
length=5655],offset=0,name=090e3058-67e6-4436-bea9-d511132848e3,size=5655]; 
routing to failure: 
org.apache.nifi.controller.repository.ContentNotFoundException: Could not find 
content for StandardContentClaim [resourceClaim=StandardResourceClaim[id=2, 
container=in-memory, section=section], offset=0, length=-1]}}
 {{org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
find content for StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=2, container=in-memory, 
section=section], offset=0, length=-1]}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.getContent(VolatileContentRepository.java:445)}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:468)}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:473)}}
 {{at 
org.apache.nifi.controller.repository.StandardProcessSession.getInputStream(StandardProcessSession.java:2302)}}
 {{at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2409)}}
 {{at 

[jira] [Updated] (NIFI-8760) VolatileContentRepository fails to retrieve content from claims with several processors

2021-07-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthieu RÉ updated NIFI-8760:
--
Environment: (was: Linux, Docker, not tested on Windows and others)

> VolatileContentRepository fails to retrieve content from claims with several 
> processors
> ---
>
> Key: NIFI-8760
> URL: https://issues.apache.org/jira/browse/NIFI-8760
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.13.1, 1.13.2
>Reporter: Matthieu RÉ
>Priority: Major
>  Labels: content-repository, volatile
> Attachments: flow.xml.gz, nifi.properties
>
>
> For several processors, such as MergeRecord, QueryRecord, and SplitJson, using 
> Volatile implementations causes errors when retrieving FlowFiles from claims. 
> The following logs were generated using NiFi 1.13.1 from Docker with the 
> flow.xml.gz and nifi.properties files attached.
> MergeRecord (with JsonTreeReader, JsonRecordSetWriter with default 
> configuration):
> {{2021-07-06 10:15:09,170 ERROR [Timer-Driven Process Thread-1] 
> o.a.nifi.processors.standard.MergeRecord 
> MergeRecord[id=7b425cff-017a-1000-6a20-58c4e064df3d] Failed to bin 
> StandardFlowFileRecord[uuid=3e894a96-883a-4ac2-8121-b8200964cf20,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, 
> length=5655],offset=0,name=b2c7cf61-b421-477d-902e-daeb2ed58f0d,size=5655] 
> due to org.apache.nifi.controller.repository.ContentNotFoundException: Could 
> not find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]: 
> org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
> find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]}}
>  {{org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
> find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
> section=section], offset=0, length=-1]}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.getContent(VolatileContentRepository.java:445)}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:468)}}
>  {{at 
> org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:473)}}
>  {{at 
> org.apache.nifi.controller.repository.StandardProcessSession.getInputStream(StandardProcessSession.java:2302)}}
>  {{at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2409)}}
>  {{at 
> org.apache.nifi.processors.standard.MergeRecord.binFlowFile(MergeRecord.java:383)}}
>  {{at 
> org.apache.nifi.processors.standard.MergeRecord.onTrigger(MergeRecord.java:346)}}
>  {{at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)}}
>  {{at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)}}
>  {{at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)}}
>  {{at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)}}
>  {{at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)}}
>  {{at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)}}
>  {{at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)}}
>  {{at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)}}
>  {{at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}}
>  {{at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}}
>  {{at java.lang.Thread.run(Thread.java:748)}}
> QueryRecord:
> {{2021-07-06 10:15:09,174 ERROR [Timer-Driven Process Thread-4] 
> o.a.nifi.processors.standard.QueryRecord 
> QueryRecord[id=673fe9f6-017a-1000-8041-dfde9d02d976] Failed to determine 
> Record Schema from 
> StandardFlowFileRecord[uuid=090e3058-67e6-4436-bea9-d511132848e3,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=2, container=in-memory, 
> section=section], offset=0, 
> length=5655],offset=0,name=090e3058-67e6-4436-bea9-d511132848e3,size=5655]; 
> routing to failure: 
> org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
> find content for StandardContentClaim 
> [resourceClaim=StandardResourceClaim[id=2, container=in-memory, 
> section=section], offset=0, length=-1]}}
>  

[jira] [Updated] (NIFI-8760) VolatileContentRepository fails to retrieve content from claims with several processors

2021-07-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthieu RÉ updated NIFI-8760:
--
Description: 
For several processors such as MergeRecord, QueryRecord, and SplitJson, using 
the Volatile repository implementations leads to errors when retrieving 
FlowFiles from claims. The following logs were generated using NiFi 1.13.1 from 
Docker with the flow.xml.gz and nifi.properties files attached.

MergeRecord (with JsonTreeReader, JsonRecordSetWriter with default 
configuration):

{{2021-07-06 10:15:09,170 ERROR [Timer-Driven Process Thread-1] 
o.a.nifi.processors.standard.MergeRecord 
MergeRecord[id=7b425cff-017a-1000-6a20-58c4e064df3d] Failed to bin 
StandardFlowFileRecord[uuid=3e894a96-883a-4ac2-8121-b8200964cf20,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
section=section], offset=0, 
length=5655],offset=0,name=b2c7cf61-b421-477d-902e-daeb2ed58f0d,size=5655] due 
to org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
find content for StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
section=section], offset=0, length=-1]: 
org.apache.nifi.controller.repository.ContentNotFoundException: Could not find 
content for StandardContentClaim [resourceClaim=StandardResourceClaim[id=6, 
container=in-memory, section=section], offset=0, length=-1]}}
 {{org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
find content for StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
section=section], offset=0, length=-1]}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.getContent(VolatileContentRepository.java:445)}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:468)}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:473)}}
 {{at 
org.apache.nifi.controller.repository.StandardProcessSession.getInputStream(StandardProcessSession.java:2302)}}
 {{at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2409)}}
 {{at 
org.apache.nifi.processors.standard.MergeRecord.binFlowFile(MergeRecord.java:383)}}
 {{at 
org.apache.nifi.processors.standard.MergeRecord.onTrigger(MergeRecord.java:346)}}
 {{at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)}}
 {{at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)}}
 {{at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)}}
 {{at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)}}
 {{at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)}}
 {{at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)}}
 {{at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)}}
 {{at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)}}
 {{at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}}
 {{at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}}
 {{at java.lang.Thread.run(Thread.java:748)}}

QueryRecord:

{{2021-07-06 10:15:09,174 ERROR [Timer-Driven Process Thread-4] 
o.a.nifi.processors.standard.QueryRecord 
QueryRecord[id=673fe9f6-017a-1000-8041-dfde9d02d976] Failed to determine Record 
Schema from 
StandardFlowFileRecord[uuid=090e3058-67e6-4436-bea9-d511132848e3,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=2, container=in-memory, 
section=section], offset=0, 
length=5655],offset=0,name=090e3058-67e6-4436-bea9-d511132848e3,size=5655]; 
routing to failure: 
org.apache.nifi.controller.repository.ContentNotFoundException: Could not find 
content for StandardContentClaim [resourceClaim=StandardResourceClaim[id=2, 
container=in-memory, section=section], offset=0, length=-1]}}
 {{org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
find content for StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=2, container=in-memory, 
section=section], offset=0, length=-1]}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.getContent(VolatileContentRepository.java:445)}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:468)}}
 {{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:473)}}
 {{at 
org.apache.nifi.controller.repository.StandardProcessSession.getInputStream(StandardProcessSession.java:2302)}}
 {{at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2409)}}
 {{at 

[jira] [Created] (NIFI-8760) VolatileContentRepository fails to retrieve content from claims with several processors

2021-07-06 Thread Jira
Matthieu RÉ created NIFI-8760:
-

 Summary: VolatileContentRepository fails to retrieve content from 
claims with several processors
 Key: NIFI-8760
 URL: https://issues.apache.org/jira/browse/NIFI-8760
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.13.2, 1.13.1
 Environment: Linux, Docker, not tested on Windows and others
Reporter: Matthieu RÉ
 Attachments: flow.xml.gz, nifi.properties

For several processors such as MergeRecord, QueryRecord, and SplitJson, using 
the Volatile repository implementations leads to errors when retrieving 
FlowFiles from claims. The following logs were generated using NiFi 1.13.1 from 
Docker with the flow.xml.gz and nifi.properties files attached.

MergeRecord (with JsonTreeReader, JsonRecordSetWriter with default 
configuration):

{{2021-07-06 10:15:09,170 ERROR [Timer-Driven Process Thread-1] 
o.a.nifi.processors.standard.MergeRecord 
MergeRecord[id=7b425cff-017a-1000-6a20-58c4e064df3d] Failed to bin 
StandardFlowFileRecord[uuid=3e894a96-883a-4ac2-8121-b8200964cf20,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
section=section], offset=0, 
length=5655],offset=0,name=b2c7cf61-b421-477d-902e-daeb2ed58f0d,size=5655] due 
to org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
find content for StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
section=section], offset=0, length=-1]: 
org.apache.nifi.controller.repository.ContentNotFoundException: Could not find 
content for StandardContentClaim [resourceClaim=StandardResourceClaim[id=6, 
container=in-memory, section=section], offset=0, length=-1]}}
{{org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
find content for StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=6, container=in-memory, 
section=section], offset=0, length=-1]}}
{{at 
org.apache.nifi.controller.repository.VolatileContentRepository.getContent(VolatileContentRepository.java:445)}}
{{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:468)}}
{{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:473)}}
{{at 
org.apache.nifi.controller.repository.StandardProcessSession.getInputStream(StandardProcessSession.java:2302)}}
{{at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2409)}}
{{at 
org.apache.nifi.processors.standard.MergeRecord.binFlowFile(MergeRecord.java:383)}}
{{at 
org.apache.nifi.processors.standard.MergeRecord.onTrigger(MergeRecord.java:346)}}
{{at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)}}
{{at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)}}
{{at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)}}
{{at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)}}
{{at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)}}
{{at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)}}
{{at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)}}
{{at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)}}
{{at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}}
{{at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}}
{{at java.lang.Thread.run(Thread.java:748)}}

QueryRecord:

{{2021-07-06 10:15:09,174 ERROR [Timer-Driven Process Thread-4] 
o.a.nifi.processors.standard.QueryRecord 
QueryRecord[id=673fe9f6-017a-1000-8041-dfde9d02d976] Failed to determine Record 
Schema from 
StandardFlowFileRecord[uuid=090e3058-67e6-4436-bea9-d511132848e3,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=2, container=in-memory, 
section=section], offset=0, 
length=5655],offset=0,name=090e3058-67e6-4436-bea9-d511132848e3,size=5655]; 
routing to failure: 
org.apache.nifi.controller.repository.ContentNotFoundException: Could not find 
content for StandardContentClaim [resourceClaim=StandardResourceClaim[id=2, 
container=in-memory, section=section], offset=0, length=-1]}}
{{org.apache.nifi.controller.repository.ContentNotFoundException: Could not 
find content for StandardContentClaim 
[resourceClaim=StandardResourceClaim[id=2, container=in-memory, 
section=section], offset=0, length=-1]}}
{{at 
org.apache.nifi.controller.repository.VolatileContentRepository.getContent(VolatileContentRepository.java:445)}}
{{at 
org.apache.nifi.controller.repository.VolatileContentRepository.read(VolatileContentRepository.java:468)}}
{{at 

[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


lordgamez commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r663918197



##
File path: libminifi/test/TestBase.h
##
@@ -445,6 +445,13 @@ class TestController {
 return dir;
   }
 
+  template<size_t N>
+  utils::Path createTempDirectory(const char (&format)[N]) {

Review comment:
   This simplifies things a bit :+1: 
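   For readers skimming the thread: a minimal sketch of what such a 
size-deducing overload can look like, assuming the pre-existing `char*`-based 
`createTempDirectory` member and `<algorithm>` for `std::copy_n`; the parameter 
name `format` and the body are illustrative, not the PR's exact code.

```cpp
template<size_t N>
utils::Path createTempDirectory(const char (&format)[N]) {
  // The array reference lets the compiler deduce the literal's length, so
  // callers no longer declare a single-use mutable buffer themselves.
  char dir_template[N];
  std::copy_n(format, N, dir_template);  // copies the trailing '\0' too
  return utils::Path{createTempDirectory(dir_template)};  // existing char* overload
}
```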

##
File path: extensions/rocksdb-repos/FlowFileRepository.cpp
##
@@ -220,17 +240,21 @@ void FlowFileRepository::initialize_repository() {
 logger_->log_trace("Do not need checkpoint");
 return;
   }
-  rocksdb::Checkpoint *checkpoint;
   // delete any previous copy
-  if (utils::file::FileUtils::delete_dir(checkpoint_dir_) >= 0 && 
opendb->NewCheckpoint(&checkpoint).ok()) {
-if (checkpoint->CreateCheckpoint(checkpoint_dir_).ok()) {
+  if (utils::file::FileUtils::delete_dir(checkpoint_dir_) >= 0) {
+rocksdb::Checkpoint* checkpoint = nullptr;
+rocksdb::Status checkpoint_status = opendb->NewCheckpoint(&checkpoint);
+if (checkpoint_status.ok()) {
+  checkpoint_status = checkpoint->CreateCheckpoint(checkpoint_dir_);

Review comment:
   I may go with an early return and a separate error message here, just to 
know which call failed in case of failure.
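
   Roughly what the early-return shape could look like (a sketch only, assuming 
the members from the diff above and that `NewCheckpoint` fills the raw 
`checkpoint` pointer through an out-parameter):

```cpp
if (utils::file::FileUtils::delete_dir(checkpoint_dir_) < 0) {
  logger_->log_error("Could not delete existing checkpoint directory '%s'", checkpoint_dir_);
  return;
}
rocksdb::Checkpoint* checkpoint = nullptr;
rocksdb::Status status = opendb->NewCheckpoint(&checkpoint);
if (!status.ok()) {
  // separate message, so a failure here is distinguishable from CreateCheckpoint failing
  logger_->log_error("Could not create checkpoint object: %s", status.ToString());
  return;
}
status = checkpoint->CreateCheckpoint(checkpoint_dir_);
if (!status.ok()) {
  logger_->log_error("Could not create checkpoint in '%s': %s", checkpoint_dir_, status.ToString());
  return;
}
checkpoint_ = std::unique_ptr<rocksdb::Checkpoint>(checkpoint);
logger_->log_trace("Created checkpoint in directory '%s'", checkpoint_dir_);
```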

##
File path: libminifi/test/TestBase.h
##
@@ -445,6 +445,13 @@ class TestController {
 return dir;
   }
 
+  template<size_t N>
+  utils::Path createTempDirectory(const char (&format)[N]) {

Review comment:
   Yes I thought the same, it should definitely be in a separate PR.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


szaszm commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r663928848



##
File path: extensions/rocksdb-repos/encryption/RocksDbEncryptionProvider.cpp
##
@@ -0,0 +1,123 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "RocksDbEncryptionProvider.h"
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+namespace repository {
+
+using utils::crypto::Bytes;
+using utils::crypto::Aes256EcbCipher;
+
+namespace {
+
+class AES256BlockCipher final : public rocksdb::BlockCipher {
+  static std::shared_ptr<core::logging::Logger> logger_;
+ public:
+  AES256BlockCipher(std::string database, Aes256EcbCipher cipher_impl)
+  : database_(std::move(database)),
+cipher_impl_(std::move(cipher_impl)) {}
+
+  const char *Name() const override {
+return "AES256BlockCipher";
+  }
+
+  size_t BlockSize() override {
+return Aes256EcbCipher::BLOCK_SIZE;
+  }
+
+  bool equals(const AES256BlockCipher& other) const {
+return cipher_impl_.equals(other.cipher_impl_);
+  }
+
+  rocksdb::Status Encrypt(char *data) override;
+
+  rocksdb::Status Decrypt(char *data) override;

Review comment:
   I prefer using separate buffers for input and output, unless we can 
realize significant efficiency gains by reusing mutable memory.
   How do we ensure that the buffer size is sufficient for the output? 
Plaintext and cipher size can be different in either direction.
   Tip: use `gsl::span` for the input
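
   A sketch of the span-based shape suggested here, under the assumption that 
AES-256 in ECB mode maps each 16-byte block to a 16-byte block (so input and 
output sizes match per block); the method names are illustrative, not the PR's 
actual interface:

```cpp
#include <cstddef>
#include <gsl/gsl-lite.hpp>

class Aes256EcbCipher {
 public:
  static constexpr std::size_t BLOCK_SIZE = 16;  // AES block size, independent of key size

  // Separate input and output buffers; both must be exactly one block long.
  void encryptBlock(gsl::span<const unsigned char> plaintext,
                    gsl::span<unsigned char> ciphertext) const {
    gsl_Expects(plaintext.size() == BLOCK_SIZE);
    gsl_Expects(ciphertext.size() == BLOCK_SIZE);
    // ... the EVP_EncryptUpdate call would go here ...
  }
};
```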

##
File path: extensions/rocksdb-repos/DatabaseContentRepository.cpp
##
@@ -42,14 +43,31 @@ bool DatabaseContentRepository::initialize(const 
std::shared_ptr<Configure>& configuration) {
directory_ = configuration->getHome() + "/dbcontentrepository";
  }
-  auto set_db_opts = [] (internal::Writable<rocksdb::DBOptions>& db_opts) {
+  std::shared_ptr<rocksdb::Env> encrypted_env = [&] {
+DbEncryptionOptions encryption_opts;
+encryption_opts.database = directory_;
+encryption_opts.encryption_key_name = ENCRYPTION_KEY_NAME;
+auto env = 
createEncryptingEnv(utils::crypto::EncryptionManager{configuration->getHome()}, 
encryption_opts);
+if (env) {
+  logger_->log_info("Using encrypted DatabaseContentRepository");
+} else {
+  logger_->log_info("Using plaintext DatabaseContentRepository");
+}
+return env;
+  }();

Review comment:
   This looks like a lot of boilerplate for two lines of meaning. My 
preference is more dense code, but admittedly it results in longer lines. I 
find 2 lines repeated 5x less scary than 5 lines repeated 5x.
   ```suggestion
 const auto encrypted_env = 
createEncryptingEnv(utils::crypto::EncryptionManager{configuration->getHome()}, 
DbEncryptionOptions{directory_, ENCRYPTION_KEY_NAME});
 logger_->log_info("Using %s DatabaseContentRepository", env ? "encrypted" 
: "plaintext");
   ```

##
File path: extensions/rocksdb-repos/FlowFileRepository.cpp
##
@@ -220,17 +240,21 @@ void FlowFileRepository::initialize_repository() {
 logger_->log_trace("Do not need checkpoint");
 return;
   }
-  rocksdb::Checkpoint *checkpoint;
   // delete any previous copy
-  if (utils::file::FileUtils::delete_dir(checkpoint_dir_) >= 0 && 
opendb->NewCheckpoint(&checkpoint).ok()) {
-if (checkpoint->CreateCheckpoint(checkpoint_dir_).ok()) {
+  if (utils::file::FileUtils::delete_dir(checkpoint_dir_) >= 0) {
+rocksdb::Checkpoint* checkpoint = nullptr;
+rocksdb::Status checkpoint_status = opendb->NewCheckpoint(&checkpoint);
+if (checkpoint_status.ok()) {
+  checkpoint_status = checkpoint->CreateCheckpoint(checkpoint_dir_);
+}
+if (checkpoint_status.ok()) {
   checkpoint_ = std::unique_ptr<rocksdb::Checkpoint>(checkpoint);
-  logger_->log_trace("Created checkpoint directory");
+  logger_->log_trace("Created checkpoint in directory '%s'", 
checkpoint_dir_);
 } else {
-  logger_->log_trace("Could not create checkpoint. Corrupt?");
+  logger_->log_error("Could not create checkpoint: %s", 
checkpoint_status.ToString());
 }
   } else
-logger_->log_trace("Could not create checkpoint directory. Not properly 
deleted?");
+logger_->log_error("Could not delete 

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


adamdebreceni commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r663937231



##
File path: extensions/rocksdb-repos/FlowFileRepository.cpp
##
@@ -220,17 +240,21 @@ void FlowFileRepository::initialize_repository() {
 logger_->log_trace("Do not need checkpoint");
 return;
   }
-  rocksdb::Checkpoint *checkpoint;
   // delete any previous copy
-  if (utils::file::FileUtils::delete_dir(checkpoint_dir_) >= 0 && 
opendb->NewCheckpoint(&checkpoint).ok()) {
-if (checkpoint->CreateCheckpoint(checkpoint_dir_).ok()) {
+  if (utils::file::FileUtils::delete_dir(checkpoint_dir_) >= 0) {
+rocksdb::Checkpoint* checkpoint = nullptr;
+rocksdb::Status checkpoint_status = opendb->NewCheckpoint(&checkpoint);
+if (checkpoint_status.ok()) {
+  checkpoint_status = checkpoint->CreateCheckpoint(checkpoint_dir_);

Review comment:
   done

##
File path: libminifi/test/TestBase.h
##
@@ -445,6 +445,13 @@ class TestController {
 return dir;
   }
 
+  template<size_t N>
+  utils::Path createTempDirectory(const char (&format)[N]) {

Review comment:
   if only there wasn't now three different ways to create a temporary 
directory (now 2 in TestController, and 1 standalone in TestUtils.h) but I felt 
like removing all the single-use `format` variables in all the tests deserves 
its own PR

##
File path: libminifi/test/TestBase.h
##
@@ -445,6 +445,13 @@ class TestController {
 return dir;
   }
 
+  template<size_t N>
+  utils::Path createTempDirectory(const char (&format)[N]) {

Review comment:
   https://issues.apache.org/jira/browse/MINIFICPP-1600

##
File path: extensions/rocksdb-repos/FlowFileRepository.cpp
##
@@ -220,17 +240,21 @@ void FlowFileRepository::initialize_repository() {
 logger_->log_trace("Do not need checkpoint");
 return;
   }
-  rocksdb::Checkpoint *checkpoint;
   // delete any previous copy
-  if (utils::file::FileUtils::delete_dir(checkpoint_dir_) >= 0 && 
opendb->NewCheckpoint(&checkpoint).ok()) {
-if (checkpoint->CreateCheckpoint(checkpoint_dir_).ok()) {
+  if (utils::file::FileUtils::delete_dir(checkpoint_dir_) >= 0) {
+rocksdb::Checkpoint* checkpoint = nullptr;
+rocksdb::Status checkpoint_status = opendb->NewCheckpoint(&checkpoint);
+if (checkpoint_status.ok()) {
+  checkpoint_status = checkpoint->CreateCheckpoint(checkpoint_dir_);
+}
+if (checkpoint_status.ok()) {
   checkpoint_ = std::unique_ptr<rocksdb::Checkpoint>(checkpoint);
-  logger_->log_trace("Created checkpoint directory");
+  logger_->log_trace("Created checkpoint in directory '%s'", 
checkpoint_dir_);
 } else {
-  logger_->log_trace("Could not create checkpoint. Corrupt?");
+  logger_->log_error("Could not create checkpoint: %s", 
checkpoint_status.ToString());
 }
   } else
-logger_->log_trace("Could not create checkpoint directory. Not properly 
deleted?");
+logger_->log_error("Could not delete existing checkpoint directory '%s'", 
checkpoint_dir_);

Review comment:
   @lordgamez had the same concern, done in commit 
[3db64f](https://github.com/apache/nifi-minifi-cpp/pull/1090/commits/3db64f47d25f9067fd23ade4e1100a666c542efa)

##
File path: extensions/rocksdb-repos/DatabaseContentRepository.cpp
##
@@ -42,14 +43,31 @@ bool DatabaseContentRepository::initialize(const 
std::shared_ptr<Configure>& configuration) {
directory_ = configuration->getHome() + "/dbcontentrepository";
  }
-  auto set_db_opts = [] (internal::Writable<rocksdb::DBOptions>& db_opts) {
+  std::shared_ptr<rocksdb::Env> encrypted_env = [&] {
+DbEncryptionOptions encryption_opts;
+encryption_opts.database = directory_;
+encryption_opts.encryption_key_name = ENCRYPTION_KEY_NAME;
+auto env = 
createEncryptingEnv(utils::crypto::EncryptionManager{configuration->getHome()}, 
encryption_opts);
+if (env) {
+  logger_->log_info("Using encrypted DatabaseContentRepository");
+} else {
+  logger_->log_info("Using plaintext DatabaseContentRepository");
+}
+return env;
+  }();

Review comment:
   done in 
[9d07c8e](https://github.com/apache/nifi-minifi-cpp/pull/1090/commits/9d07c8e9c534a87ab31acd5d7dbf843ce047da04)
 (also for other applicable occurrences) 

##
File path: extensions/rocksdb-repos/encryption/RocksDbEncryptionProvider.cpp
##
@@ -0,0 +1,123 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See 

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


adamdebreceni commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664399347



##
File path: libminifi/test/rocksdb-tests/EncryptionTests.cpp
##
@@ -0,0 +1,108 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "../TestBase.h"
+#include "utils/TestUtils.h"
+#include "FlowFileRepository.h"
+#include "utils/IntegrationTestUtils.h"
+
+using utils::Path;
+using core::repository::FlowFileRepository;
+
+class FFRepoFixture : public TestController {
+ public:
+  FFRepoFixture() {
+LogTestController::getInstance().setDebug();
+LogTestController::getInstance().setDebug();
+LogTestController::getInstance().setTrace();
+home_ = createTempDirectory("/var/tmp/testRepo.XX");
+repo_dir_ = home_ / "flowfile_repo";
+checkpoint_dir_ = home_ / "checkpoint_dir";
+config_ = std::make_shared();
+config_->setHome(home_.str());
+container_ = std::make_shared(nullptr, nullptr, 
"container");
+content_repo_ = 
std::make_shared();
+content_repo_->initialize(config_);
+  }
+
+  static void putFlowFile(const std::shared_ptr& 
flowfile, const std::shared_ptr& repo) {
+minifi::io::BufferStream buffer;
+flowfile->Serialize(buffer);
+REQUIRE(repo->Put(flowfile->getUUIDStr(), buffer.getBuffer(), 
buffer.size()));
+  }
+
+  template<typename Fn>
+  void runWithNewRepository(Fn&& fn) {
+auto repository = std::make_shared<FlowFileRepository>("ff", 
checkpoint_dir_.str(), repo_dir_.str());
+repository->initialize(config_);
+std::map<std::string, std::shared_ptr<core::Connectable>> container_map;
+container_map[container_->getUUIDStr()] = container_;
+repository->setContainers(container_map);
+repository->loadComponent(content_repo_);
+repository->start();
+std::forward<Fn>(fn)(repository);
+repository->stop();
+  }
+
+ protected:
+  std::shared_ptr container_;
+  Path home_;
+  Path repo_dir_;
+  Path checkpoint_dir_;
+  std::shared_ptr config_;
+  std::shared_ptr content_repo_;
+};
+
+TEST_CASE_METHOD(FFRepoFixture, "FlowFileRepository creates checkpoint and 
loads flowfiles") {
+  SECTION("Without encryption") {
+// pass
+  }
+  SECTION("With encryption") {
+utils::file::FileUtils::create_dir((home_ / "conf").str());
+std::ofstream{(home_ / "conf" / "bootstrap.conf").str()}
+  << static_cast<const char*>(FlowFileRepository::ENCRYPTION_KEY_NAME) << 
"="

Review comment:
   since we have an rvalue ref stream, `operator<<` will resolve to a 
forwarding template function
   ```
   template <class _Stream, class _Tp>
   inline _LIBCPP_INLINE_VISIBILITY
   typename enable_if
   <
   !is_lvalue_reference<_Stream>::value &&
   is_base_of<ios_base, _Stream>::value,
   _Stream&&
   >::type
   operator<<(_Stream&& __os, const _Tp& __x)
   {
   __os << __x;
   return _VSTD::move(__os);
   }
   ```
   
   which takes the argument by reference, making the static member 
[odr-used](https://en.cppreference.com/w/cpp/language/definition#ODR-use). 
Adding a `static_cast` performs an lvalue-to-rvalue conversion that evaluates 
to a constant expression, so the member is no longer odr-used. From C++17, 
where static constexpr data members are implicitly inline, this cast won't be 
needed.
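
   A minimal, self-contained illustration of the point (the member's type and 
value below are placeholders; the actual constant in the PR may differ):

```cpp
#include <sstream>

struct Repo {
  // declaration only, no out-of-class definition (the pre-C++17 situation)
  static constexpr const char* KEY_NAME = "some.key.name";
};

int main() {
  // The rvalue stream selects operator<<(_Stream&&, const _Tp&), which binds
  // KEY_NAME to a reference and therefore odr-uses it; without a definition
  // this can fail at link time before C++17:
  //   std::ostringstream{} << Repo::KEY_NAME;
  // The cast performs an lvalue-to-rvalue conversion, so no odr-use occurs:
  std::ostringstream{} << static_cast<const char*>(Repo::KEY_NAME) << "=value";
}
```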




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] Lehel44 commented on a change in pull request #4948: NIFI-8273 Adding Scripted Record processors

2021-07-06 Thread GitBox


Lehel44 commented on a change in pull request #4948:
URL: https://github.com/apache/nifi/pull/4948#discussion_r664395395



##
File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/processors/script/ScriptedTransformRecord.java
##
@@ -17,7 +17,6 @@
 

Review comment:
   My IDEA is actually showing it to me: "Redundant array creation for 
calling varargs method". Yours isn't?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] Lehel44 commented on a change in pull request #4948: NIFI-8273 Adding Scripted Record processors

2021-07-06 Thread GitBox


Lehel44 commented on a change in pull request #4948:
URL: https://github.com/apache/nifi/pull/4948#discussion_r664394408



##
File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/processors/script/ScriptedTransformRecord.java
##
@@ -17,7 +17,6 @@
 

Review comment:
   3. Since the _void log(String msg, Object... os)_ methods have been added, 
_void trace(String msg, Object[] os)_ should be deprecated. The object-array 
wrapping has become redundant and shall be removed, with no remorse and 
forever, whenever seen.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8758) Increase Timeouts for GitHub Workflows

2021-07-06 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-8758:
-
Fix Version/s: 1.14.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Increase Timeouts for GitHub Workflows
> --
>
> Key: NIFI-8758
> URL: https://issues.apache.org/jira/browse/NIFI-8758
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.14.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The addition of NiFi Registry and the inclusion of JUnit test artifacts have 
> added several minutes to GitHub automated builds, resulting in more frequent 
> failures due to 90 minute timeouts. GitHub builds on Ubuntu with JDK 11 often 
> hit the current timeout of 90 minutes while Ubuntu with JDK 8 and Windows 
> with JDK 8 run successfully on a more frequent basis.  Increasing the timeout 
> should allow more builds to complete successfully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8758) Increase Timeouts for GitHub Workflows

2021-07-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17375394#comment-17375394
 ] 

ASF subversion and git services commented on NIFI-8758:
---

Commit 54624bc26df7e3978b6da52cb11332e7592f9968 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=54624bc ]

NIFI-8758 Increased GitHub build timeout to 120 minutes

Signed-off-by: Pierre Villard 

This closes #5199.


> Increase Timeouts for GitHub Workflows
> --
>
> Key: NIFI-8758
> URL: https://issues.apache.org/jira/browse/NIFI-8758
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The addition of NiFi Registry and the inclusion of JUnit test artifacts have 
> added several minutes to GitHub automated builds, resulting in more frequent 
> failures due to 90 minute timeouts. GitHub builds on Ubuntu with JDK 11 often 
> hit the current timeout of 90 minutes while Ubuntu with JDK 8 and Windows 
> with JDK 8 run successfully on a more frequent basis.  Increasing the timeout 
> should allow more builds to complete successfully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #5199: NIFI-8758 Increased GitHub build timeout to 120 minutes

2021-07-06 Thread GitBox


asfgit closed pull request #5199:
URL: https://github.com/apache/nifi/pull/5199


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] Lehel44 commented on a change in pull request #4948: NIFI-8273 Adding Scripted Record processors

2021-07-06 Thread GitBox


Lehel44 commented on a change in pull request #4948:
URL: https://github.com/apache/nifi/pull/4948#discussion_r664390340



##
File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/processors/script/ScriptedRouteRecord.java
##
@@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.script;
+
+import org.apache.nifi.annotation.behavior.DynamicRelationship;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.stream.Collectors;
+
+@Tags({"record", "routing", "script", "groovy", "jython", "python", "segment", 
"split", "group", "organize"})
+@CapabilityDescription(
+"This processor provides the ability to route the records of the 
incoming FlowFile using an user-provided script. " +
+"The script is expected to handle a record as argument and return with 
a string value. " +
+"The returned value defines a route. All routes are bounded to an 
outgoing relationship where the record will be transferred to. " +
+"Relationships are defined as dynamic properties: dynamic property 
names are serving as the name of the route. " +
+"The value of a dynamic property defines the relationship the given 
record will be routed into. Multiple routes might point to the same 
relationship. " +
+"Creation of these dynamic relationship is managed by the processor. " 
+
+"The records, which for the script returned with an unknown 
relationship name are routed to the \"unmatched\" relationship. " +
+"The records are batched: for an incoming FlowFile, all the records 
routed towards a given relationship are batched into one single FlowFile."
+)
+@SeeAlso(classNames = {
+"org.apache.nifi.processors.script.ScriptedTransformRecord",
+"org.apache.nifi.processors.script.ScriptedPartitionRecord",
+"org.apache.nifi.processors.script.ScriptedValidateRecord",
+"org.apache.nifi.processors.script.ScriptedFilterRecord"
+})
+@DynamicRelationship(name = "Name from Dynamic Property", description = 
"FlowFiles that match the Dynamic Property's Attribute Expression Language")
+public class ScriptedRouteRecord extends ScriptedRouterProcessor<String> {
+
+static final Relationship RELATIONSHIP_ORIGINAL = new 
Relationship.Builder()
+.name("original")
+.description(
+"After successful procession, the incoming FlowFile will be 
transferred to this relationship. " +
+"This happens regardless the records are matching to a 
relationship or not.")
+.build();
+
+static final Relationship RELATIONSHIP_FAILURE = new Relationship.Builder()
+.name("failed")
+.description("In case of any issue during processing the incoming 
FlowFile, the incoming FlowFile will be routed to this relationship.")
+.build();
+
+static final Relationship RELATIONSHIP_UNMATCHED = new 
Relationship.Builder()
+.name("unmatched")
+.description("Records where the script evaluation returns with an 
unknown partition are routed to this relationship.")
+.build();
+
+private static Set<Relationship> RELATIONSHIPS = new HashSet<>();
+
+static {
+RELATIONSHIPS.add(RELATIONSHIP_ORIGINAL);
+RELATIONSHIPS.add(RELATIONSHIP_FAILURE);
+RELATIONSHIPS.add(RELATIONSHIP_UNMATCHED);
+}
+
+private final AtomicReference> relationships 

[GitHub] [nifi] Lehel44 commented on a change in pull request #4948: NIFI-8273 Adding Scripted Record processors

2021-07-06 Thread GitBox


Lehel44 commented on a change in pull request #4948:
URL: https://github.com/apache/nifi/pull/4948#discussion_r664384302



##
File path: 
nifi-nar-bundles/nifi-scripting-bundle/nifi-scripting-processors/src/main/java/org/apache/nifi/processors/script/ScriptedPartitionRecord.java
##
@@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.script;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.Restricted;
+import org.apache.nifi.annotation.behavior.Restriction;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.RequiredPermission;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.record.PushBackRecordSet;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.RecordSet;
+
+import javax.script.ScriptEngine;
+import javax.script.ScriptException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.BiFunction;
+
+@EventDriven
+@SideEffectFree
+@Tags({"record", "partition", "script", "groovy", "jython", "python", 
"segment", "split", "group", "organize"})
+@CapabilityDescription("Receives Record-oriented data (i.e., data that can be 
read by the configured Record Reader) and evaluates the user provided script 
against "
++ "each record in the incoming flow file. Each record is then grouped 
with other records sharing the same partition and a FlowFile is created for 
each groups of records. " +
+"Two records shares the same partition if the evaluation of the script 
results the same return value for both. Those will be considered as part of the 
same partition.")
+@Restricted(restrictions = {
+@Restriction(requiredPermission = RequiredPermission.EXECUTE_CODE,
+explanation = "Provides operator the ability to execute 
arbitrary code assuming all permissions that NiFi has.")
+})
+@WritesAttributes({
+@WritesAttribute(attribute = "partition", description = "The partition 
of the outgoing flow file."),
+@WritesAttribute(attribute = "mime.type", description = "Sets the 
mime.type attribute to the MIME Type specified by the Record Writer"),
+@WritesAttribute(attribute = "record.count", description = "The number 
of records within the flow file."),
+@WritesAttribute(attribute = "record.error.message", description = 
"This attribute provides on failure the error message encountered by the Reader 
or Writer."),
+@WritesAttribute(attribute = "fragment.index", description = "A one-up 
number that indicates the ordering of the partitioned FlowFiles that were 
created from a single parent FlowFile"),
+@WritesAttribute(attribute = "fragment.count", description = "The 
number of partitioned FlowFiles generated from the parent FlowFile")
+})
+@SeeAlso(classNames = {
+"org.apache.nifi.processors.script.ScriptedTransformRecord",
+

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


adamdebreceni commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664378631



##
File path: libminifi/src/utils/crypto/ciphers/Aes256Ecb.cpp
##
@@ -0,0 +1,122 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "utils/crypto/ciphers/Aes256Ecb.h"
+#include "openssl/conf.h"
+#include "openssl/evp.h"
+#include "openssl/err.h"
+#include "openssl/rand.h"
+#include "core/logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace crypto {
+
+using EVP_CIPHER_CTX_ptr = std::unique_ptr<EVP_CIPHER_CTX, decltype(&EVP_CIPHER_CTX_free)>;
+
+std::shared_ptr<core::logging::Logger> 
Aes256EcbCipher::logger_{core::logging::LoggerFactory::getLogger<Aes256EcbCipher>()};
+
+Aes256EcbCipher::Aes256EcbCipher(Bytes encryption_key) : 
encryption_key_(std::move(encryption_key)) {
+  if (encryption_key_.size() != KEY_SIZE) {
+handleError("Invalid key length %zu bytes, expected %zu bytes", 
encryption_key_.size(), static_cast(KEY_SIZE));
+  }
+}
+
+Bytes Aes256EcbCipher::generateKey() {

Review comment:
   changed it to use `utils::crypto::randomBytes` in 
[b13b5b7d](https://github.com/apache/nifi-minifi-cpp/pull/1090/commits/b13b5b7d24b681c15e15850d36ce2922b5f2b9e9)
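
   Presumably the change boils down to something like this sketch; 
`utils::crypto::randomBytes` is taken from the comment above, but its exact 
signature is an assumption:

```cpp
// Key generation delegating to the shared randomBytes helper instead of
// calling OpenSSL's RAND_bytes directly.
Bytes Aes256EcbCipher::generateKey() {
  return utils::crypto::randomBytes(KEY_SIZE);  // 32 random bytes for AES-256
}
```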




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


adamdebreceni commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664378255



##
File path: extensions/rocksdb-repos/database/RocksDbUtils.h
##
@@ -38,19 +38,14 @@ class Writable {
  public:
   explicit Writable(T& target) : target_(target) {}
 
-  template<typename F>
-  void set(F T::* member, typename utils::type_identity<F>::type value) {
-    if (!(target_.*member == value)) {
+  template<typename F, typename Comparator = std::equal_to<F>>
+  void set(F T::* member, typename utils::type_identity<F>::type value, const Comparator& comparator = Comparator{}) {
+    if (!comparator(target_.*member, value)) {
       target_.*member = value;
       is_modified_ = true;
     }
   }
 
-  template<typename Transformer, typename F>
-  void transform(F T::* member) {
-    set(member, Transformer::transform(target_.*member));
-  }

Review comment:
   yes, removed in 
[b13b5b7](https://github.com/apache/nifi-minifi-cpp/pull/1090/commits/b13b5b7d24b681c15e15850d36ce2922b5f2b9e9)
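
   To illustrate the comparator-based set() in the hunk above, here is a
   self-contained sketch; Options and the sample values are made up, and the
   type_identity helper is a minimal stand-in for utils::type_identity:

   #include <functional>
   #include <iostream>
   #include <string>
   #include <utility>

   // Blocks deduction of F from the value argument, so F is taken from
   // the member pointer alone.
   template<typename U> struct type_identity { using type = U; };

   template<typename T>
   class Writable {
    public:
     explicit Writable(T& target) : target_(target) {}

     // Overwrite a member only when the comparator reports a change,
     // remembering whether any modification happened.
     template<typename F, typename Comparator = std::equal_to<F>>
     void set(F T::* member, typename type_identity<F>::type value, const Comparator& comparator = Comparator{}) {
       if (!comparator(target_.*member, value)) {
         target_.*member = std::move(value);
         is_modified_ = true;
       }
     }

     bool isModified() const { return is_modified_; }

    private:
     T& target_;
     bool is_modified_{false};
   };

   struct Options {
     std::string merge_operator;
   };

   int main() {
     Options opts{"noop"};
     Writable<Options> writable{opts};
     writable.set(&Options::merge_operator, "noop");    // equal: nothing recorded
     std::cout << std::boolalpha << writable.isModified() << '\n';  // false
     writable.set(&Options::merge_operator, "concat");  // differs: member updated
     std::cout << writable.isModified() << '\n';        // true
   }

   The std::equal_to default preserves the old behavior, while members whose
   operator== is unavailable or unsuitable can be compared with a custom
   predicate.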




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


adamdebreceni commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664378028



##
File path: extensions/rocksdb-repos/database/RocksDbInstance.cpp
##
@@ -99,6 +112,7 @@ utils::optional<OpenRocksDb> RocksDbInstance::open(const std::string& column, co
     return utils::nullopt;
   }
   gsl_Expects(db_instance);
+  db_options_patch_ = db_options_patch;

Review comment:
   added a comment explaining why we must store this object in [b13b5b7d](https://github.com/apache/nifi-minifi-cpp/pull/1090/commits/b13b5b7d24b681c15e15850d36ce2922b5f2b9e9)
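
   For readers without the PR open, a guess at the pattern behind this: if the
   options patch is a callable whose captured state is referenced by the opened
   database, the instance must keep the callable alive even though it is never
   invoked again. The names below are illustrative, not MiNiFi's actual API:

   #include <functional>
   #include <memory>
   #include <string>
   #include <utility>

   struct Env { std::string key; };  // e.g. an encrypting environment

   struct Handle {
     Env* env = nullptr;  // non-owning: must outlive the handle
   };

   class Instance {
    public:
     using OptionsPatch = std::function<void(Handle&)>;

     void open(OptionsPatch patch) {
       patch(handle_);
       // Keep the patch: its captures own state the handle points at, so
       // dropping it here would leave handle_.env dangling.
       options_patch_ = std::move(patch);
     }

    private:
     Handle handle_;
     OptionsPatch options_patch_;
   };

   int main() {
     Instance db;
     auto env = std::make_shared<Env>(Env{"secret"});
     db.open([env](Handle& h) { h.env = env.get(); });
     // env can go out of scope; the stored patch keeps the Env alive.
   }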




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1090: MINIFICPP-1561 - Allow rocksdb encryption

2021-07-06 Thread GitBox


fgerlits commented on a change in pull request #1090:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1090#discussion_r664074774



##
File path: extensions/rocksdb-repos/database/RocksDbUtils.h
##
@@ -38,19 +38,14 @@ class Writable {
  public:
   explicit Writable(T& target) : target_(target) {}
 
-  template<typename F>
-  void set(F T::* member, typename utils::type_identity<F>::type value) {
-    if (!(target_.*member == value)) {
+  template<typename F, typename Comparator = std::equal_to<F>>
+  void set(F T::* member, typename utils::type_identity<F>::type value, const Comparator& comparator = Comparator{}) {
+    if (!comparator(target_.*member, value)) {
       target_.*member = value;
       is_modified_ = true;
     }
   }
 
-  template<typename Transformer, typename F>
-  void transform(F T::* member) {
-    set(member, Transformer::transform(target_.*member));
-  }

Review comment:
   can `StringAppender::transform()` be removed, too?

##
File path: extensions/rocksdb-repos/database/RocksDbInstance.cpp
##
@@ -99,6 +112,7 @@ utils::optional<OpenRocksDb> RocksDbInstance::open(const std::string& column, co
     return utils::nullopt;
   }
   gsl_Expects(db_instance);
+  db_options_patch_ = db_options_patch;

Review comment:
   this seems to be the only place where we use 
`RocksDbInstance::db_options_patch_`; can it be removed?

##
File path: libminifi/test/rocksdb-tests/EncryptionTests.cpp
##
@@ -0,0 +1,108 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "../TestBase.h"
+#include "utils/TestUtils.h"
+#include "FlowFileRepository.h"
+#include "utils/IntegrationTestUtils.h"
+
+using utils::Path;
+using core::repository::FlowFileRepository;
+
+class FFRepoFixture : public TestController {
+ public:
+  FFRepoFixture() {
+    LogTestController::getInstance().setDebug<minifi::FlowFile>();
+    LogTestController::getInstance().setDebug<minifi::Connection>();
+    LogTestController::getInstance().setTrace<FlowFileRepository>();
+    home_ = createTempDirectory("/var/tmp/testRepo.XX");
+    repo_dir_ = home_ / "flowfile_repo";
+    checkpoint_dir_ = home_ / "checkpoint_dir";
+    config_ = std::make_shared<minifi::Configure>();
+    config_->setHome(home_.str());
+    container_ = std::make_shared<minifi::Connection>(nullptr, nullptr, "container");
+    content_repo_ = std::make_shared<core::repository::VolatileContentRepository>();
+    content_repo_->initialize(config_);
+  }
+
+  static void putFlowFile(const std::shared_ptr<minifi::FlowFileRecord>& flowfile, const std::shared_ptr<FlowFileRepository>& repo) {
+    minifi::io::BufferStream buffer;
+    flowfile->Serialize(buffer);
+    REQUIRE(repo->Put(flowfile->getUUIDStr(), buffer.getBuffer(), buffer.size()));
+  }
+
+  template<typename Fn>
+  void runWithNewRepository(Fn&& fn) {
+    auto repository = std::make_shared<FlowFileRepository>("ff", checkpoint_dir_.str(), repo_dir_.str());
+    repository->initialize(config_);
+    std::map<std::string, std::shared_ptr<minifi::Connection>> container_map;
+    container_map[container_->getUUIDStr()] = container_;
+    repository->setContainers(container_map);
+    repository->loadComponent(content_repo_);
+    repository->start();
+    std::forward<Fn>(fn)(repository);
+    repository->stop();
+  }
+
+ protected:
+  std::shared_ptr<minifi::Connection> container_;
+  Path home_;
+  Path repo_dir_;
+  Path checkpoint_dir_;
+  std::shared_ptr<minifi::Configure> config_;
+  std::shared_ptr<core::repository::VolatileContentRepository> content_repo_;
+};
+
+TEST_CASE_METHOD(FFRepoFixture, "FlowFileRepository creates checkpoint and loads flowfiles") {
+  SECTION("Without encryption") {
+    // pass
+  }
+  SECTION("With encryption") {
+    utils::file::FileUtils::create_dir((home_ / "conf").str());
+    std::ofstream{(home_ / "conf" / "bootstrap.conf").str()}
+      << static_cast<const char*>(FlowFileRepository::ENCRYPTION_KEY_NAME) << "="

Review comment:
   why is this cast needed?  `FlowFileRepository::ENCRYPTION_KEY_NAME` is 
already a `const char*`
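
   For what it's worth, operator<< has a dedicated const char* overload that
   writes the characters rather than the pointer value, so the insertion should
   work without a cast; a minimal check (the key name below is made up):

   #include <iostream>

   constexpr const char* ENCRYPTION_KEY_NAME = "nifi.flowfile.repository.encryption.key";

   int main() {
     // Prints the key name followed by '=': no cast needed for const char*.
     std::cout << ENCRYPTION_KEY_NAME << "=" << '\n';
   }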

##
File path: libminifi/src/utils/crypto/ciphers/Aes256Ecb.cpp
##
@@ -0,0 +1,122 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * 

[jira] [Commented] (NIFI-7333) OIDC provider should use NiFi keystore & truststore

2021-07-06 Thread Rene Weidlinger (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17375351#comment-17375351
 ] 

Rene Weidlinger commented on NIFI-7333:
---

This also causes problems with NiFi in Docker! We need to import the certificate into the Java cacerts truststore, but every time the container is destroyed the import is lost, and we have to re-import it before NiFi starts.

This problem has an even bigger negative impact on dockerized NiFi.

> OIDC provider should use NiFi keystore & truststore
> ---
>
> Key: NIFI-7333
> URL: https://issues.apache.org/jira/browse/NIFI-7333
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Security
>Affects Versions: 1.11.4
>Reporter: Andy LoPresto
>Assignee: M Tien
>Priority: Major
>  Labels: keystore, oidc, security, tls
>
> The OIDC provider uses generic HTTPS requests to the OIDC IdP, but does not 
> configure these requests to use the NiFi keystore or truststore. Rather, it 
> uses the default JVM keystore and truststore, which leads to difficulty 
> debugging PKIX and other TLS negotiation errors. It should be switched to use 
> the NiFi keystore and truststore as other NiFi framework services do. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

