[jira] [Commented] (NIFI-8050) Custom Groovy writer breaks during upgrade

2020-12-02 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242810#comment-17242810
 ] 

Matt Burgess commented on NIFI-8050:


I believe the issue of the controller service being enabled even when the script 
doesn't compile successfully was fixed by NIFI-7260.

> Custom Groovy writer breaks during upgrade
> --
>
> Key: NIFI-8050
> URL: https://issues.apache.org/jira/browse/NIFI-8050
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.11.0, 1.12.0, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 
> 1.12.1
>Reporter: Pierre Villard
>Priority: Major
>
> A couple of issues when upgrading NiFi and using a custom scripted writer 
> with Groovy.
> The scripted writer was something like: 
> {code:java}
> import ...
> class GroovyRecordSetWriter implements RecordSetWriter {
> ...
> @Override
> WriteResult write(Record r) throws IOException {
> ...
> }
> @Override
> String getMimeType() { ... }
> @Override
> WriteResult write(final RecordSet rs) throws IOException {
> ...
> }
> public void beginRecordSet() throws IOException { ... }
> @Override
> public WriteResult finishRecordSet() throws IOException { ... }
> @Override
> public void close() throws IOException {}
> @Override
> public void flush() throws IOException {}
> }
> class GroovyRecordSetWriterFactory extends AbstractControllerService 
> implements RecordSetWriterFactory {
> @Override
> RecordSchema getSchema(Map<String, String> variables, RecordSchema 
> readSchema) throws SchemaNotFoundException, IOException {
>null
> }
> @Override
> RecordSetWriter createWriter(ComponentLog logger, RecordSchema schema, 
> OutputStream out) throws SchemaNotFoundException, IOException {
>new GroovyRecordSetWriter(out)
> }
> }
> writer = new GroovyRecordSetWriterFactory()
> {code}
> With NIFI-6318 we changed a method in the interface RecordSetWriterFactory.
> When using the above code in NiFi 1.9.2, it works fine, but after an upgrade 
> to 1.11.4 it breaks. The Controller Service, when enabled, throws the 
> below message:
> {quote}Can't have an abstract method in a non-abstract class. The class 
> 'GroovyRecordSetWriterFactory' must be declared abstract or the method 
> 'org.apache.nifi.serialization.RecordSetWriter 
> createWriter(org.apache.nifi.logging.ComponentLog, 
> org.apache.nifi.serialization.record.RecordSchema, java.io.OutputStream, 
> java.util.Map)' must be implemented.
> {quote}
> However, the controller service is still successfully enabled and the 
> processors referencing it can be started. When the ConvertRecord processor 
> uses the problematic controller service, it throws the below NPE:
> {code:java}
> 2020-11-26 15:46:13,876 ERROR [Timer-Driven Process Thread-25] 
> o.a.n.processors.standard.ConvertRecord 
> ConvertRecord[id=8b5456ae-71dc-3bd3-d0c0-df50d196fc00] Failed to process 
> StandardFlowFileRecord[uuid=adebfcf6-b449-4d01-90a7-0463930aade0,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1606401933295-1, container=default, 
> section=1], offset=80, 
> length=296],offset=0,name=adebfcf6-b449-4d01-90a7-0463930aade0,size=296]; 
> will route to failure: java.lang.NullPointerException 
> java.lang.NullPointerException: null at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:151)
>  at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2986)
>  at 
> org.apache.nifi.processors.standard.AbstractRecordProcessor.onTrigger(AbstractRecordProcessor.java:122)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)
>  at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
>  at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>  at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> The fix is quite simple, it's just re
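The quoted error message pins down the root cause: NIFI-6318 gave `RecordSetWriterFactory.createWriter` a fourth parameter (a variables `Map`), so a scripted factory that only overrides the old three-argument method no longer implements the interface. A minimal, self-contained Java sketch of the shape of the fix, using stand-in interfaces rather than the real `org.apache.nifi` API:

```java
import java.io.OutputStream;
import java.util.Map;

// Stand-in types so this sketch compiles on its own; the real interfaces
// live in org.apache.nifi.* and carry many more methods.
interface ComponentLog {}
interface RecordSchema {}
interface RecordSetWriter {}

// Post-NIFI-6318 shape: createWriter takes the variables Map that the
// "must be implemented" error above complains about.
interface RecordSetWriterFactoryLike {
    RecordSetWriter createWriter(ComponentLog logger, RecordSchema schema,
                                 OutputStream out, Map<String, String> variables);
}

// The scripted factory must override the new four-argument signature,
// even if it simply ignores the extra parameter.
class FixedGroovyRecordSetWriterFactory implements RecordSetWriterFactoryLike {
    @Override
    public RecordSetWriter createWriter(ComponentLog logger, RecordSchema schema,
                                        OutputStream out, Map<String, String> variables) {
        return new RecordSetWriter() {}; // a real script would wrap 'out' here
    }
}
```

A class that kept only the old three-argument method would fail to compile against this interface, which is exactly the "must be declared abstract or the method ... must be implemented" error quoted above.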

[GitHub] [nifi] mattyb149 commented on a change in pull request #4204: NIFI-7355 Added Gremlin bytecode client service.

2020-12-02 Thread GitBox


mattyb149 commented on a change in pull request #4204:
URL: https://github.com/apache/nifi/pull/4204#discussion_r534549148



##
File path: 
nifi-nar-bundles/nifi-graph-bundle/nifi-other-graph-services/src/main/java/org/apache/nifi/graph/GremlinBytecodeClientService.java
##
@@ -0,0 +1,365 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.graph;
+
+import org.apache.commons.codec.digest.DigestUtils;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.graph.gremlin.SimpleEntry;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.StringUtils;
+import org.apache.tinkerpop.gremlin.driver.Cluster;
+import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection;
+import org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource;
+import 
org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
+
+import javax.script.Bindings;
+import javax.script.Compilable;
+import javax.script.CompiledScript;
+import javax.script.ScriptEngine;
+import javax.script.ScriptEngineManager;
+import javax.script.ScriptException;
+import java.io.File;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.stream.Collectors;
+
+import static org.apache.nifi.graph.GremlinClientService.NOT_SUPPORTED;
+
+@CapabilityDescription("A client service that provides a scriptable interface 
to open a remote connection/traversal " +
+"against a Gremlin Server and execute operations against it.")
+@Tags({"graph", "database", "gremlin", "tinkerpop"})
+public class GremlinBytecodeClientService extends 
AbstractTinkerpopClientService implements GraphClientService {
+private static final List<PropertyDescriptor> NEW_DESCRIPTORS;
+
+public static final PropertyDescriptor REMOTE_OBJECTS_FILE = new 
PropertyDescriptor.Builder()

Review comment:
   Would this apply to GremlinClientService as well? If so we should have a 
base or util class with this property (and appropriate handling) that 
GremlinClientService and GremlinBytecodeClientService could both use. Also see 
my comment below about combining the processors if it makes it easier on the 
user.

##
File path: 
nifi-nar-bundles/nifi-graph-bundle/nifi-other-graph-services/src/main/resources/docs/org.apache.nifi.graph.GremlinBytecodeClientService/additionalDetails.html
##
@@ -0,0 +1,41 @@
+
+
+
+
+
+GremlinBytecodeClientService
+
+
+
+
+
+Description:
+
+This client service configures a remote connection/traversal and allows a 
script to execute operations against it. The key
+difference between this and GremlinClientService is that this version does 
not run the script on the Gremlin Server, but rather on
+the client side (NiFi). For more details, see the Tinkerpop documentation

Review comment:
   Is it easier for the user to have two different processors, or a 
property that decides where to run the script (server or client)?

##
File path: 
nifi-nar-bundles/nifi-graph-bundle/nifi-other-graph-services/src/main/resources/docs/org.apache.nifi.graph.GremlinBytecodeClientService/additionalDetails.html
##
@@ -0,0 +1,41 @@
+
+
+
+
+
+GremlinBytecodeClientService
+
+
+
+
+
+Description:
+
+This client service configures a remote connection/traversal and allows a 
script to execute operations against it. The key
+d

[GitHub] [nifi] mattyb149 opened a new pull request #4704: NIFI-7906: Implemented RecordSetWriter support for ExecuteGraphQueryRecord

2020-12-02 Thread GitBox


mattyb149 opened a new pull request #4704:
URL: https://github.com/apache/nifi/pull/4704


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   ExecuteGraphQueryRecord specifies a Failed Record Writer property that is 
meant to write out any records that fail during processing of the graph query. 
However the current implementation doesn't use it and instead sends the entire 
input flowfile to failure, and can result in a flowfile that doesn't get 
transferred (causing an exception). This PR refactors the processor to use the 
writer to write any failed records to the failure relationship.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on a change in pull request #4685: NIFI-8042: Fixed bug that was escaping Expression Language references…

2020-12-02 Thread GitBox


markap14 commented on a change in pull request #4685:
URL: https://github.com/apache/nifi/pull/4685#discussion_r534527090



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java
##
@@ -602,8 +602,7 @@ public boolean isAllDataBufferedForEntireText() {
 @Override
 public FlowFile replace(FlowFile flowFile, final ProcessSession 
session, final ProcessContext context, final String evaluateMode, final Charset 
charset, final int maxBufferSize) {

Review comment:
   Yeah, I cannot argue that point with you. Like many processors, this one 
started pretty simple, once upon a time. And a new feature was added. And 
another. And it's become quite the beast. Definitely wouldn't hurt to update it 
with some docs. And it probably would help to add an additionalDetails.html, 
too, to be honest, because there are a lot of options here.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] MikeThomsen commented on pull request #4701: NIFI-7906 fixed windows build

2020-12-02 Thread GitBox


MikeThomsen commented on pull request #4701:
URL: https://github.com/apache/nifi/pull/4701#issuecomment-737524052


   @joewitt looks gtg



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


fgerlits commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r534274245



##
File path: libminifi/include/utils/CollectionUtils.h
##
@@ -0,0 +1,59 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include <algorithm>
+#include <utility>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+namespace internal {
+
+template<typename T, typename Arg, typename = void>
+struct find_in_range {
+  static auto call(const T& range, const Arg& arg) -> 
decltype(std::find(range.begin(), range.end(), arg)) {
+return std::find(range.begin(), range.end(), arg);
+  }
+};
+
+template<typename T, typename Arg>
+struct find_in_range<T, Arg, decltype(std::declval<T>().find(std::declval<Arg>()), void())> {
+  static auto call(const T& range, const Arg& arg) -> 
decltype(range.find(arg)) {
+return range.find(arg);
+  }
+};
+
+}  // namespace internal
+
+template<typename T, typename U>
+bool haveCommonItem(const T& a, const U& b) {
+  using Item = typename T::value_type;
+  return std::any_of(a.begin(), a.end(), [&] (const Item& item) {
+    return internal::find_in_range<U, Item>::call(b, item) != b.end();

Review comment:
   I am hesitant to comment as this PR already has a record number of 
comments, but why do we need the two versions?  If we always do `std::find`, 
we'll have 1/3 the amount of code.  Is that less efficient in some cases?
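For context on the question above: the member `find` of an associative container uses its index (hash table or tree), while `std::find` is always a linear scan, so collapsing to one version would trade code size for O(n) lookups on sets and maps. In Java this dispatch can be made explicit; a rough analog of the two C++ specializations (a hypothetical helper, not MiNiFi code) might look like:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

final class FindInRange {
    private FindInRange() {}

    // Mirror of the two C++ specializations: use the container's own indexed
    // lookup when it has one (Set), otherwise fall back to a linear scan,
    // which is what std::find always does.
    @SuppressWarnings("unchecked")
    static <T> boolean contains(Iterable<T> range, T item) {
        if (range instanceof Set) {
            return ((Set<T>) range).contains(item); // O(1) hash / O(log n) tree lookup
        }
        for (T candidate : range) {                 // O(n) fallback, like std::find
            if (candidate.equals(item)) {
                return true;
            }
        }
        return false;
    }

    // Analog of haveCommonItem: true if any element of 'a' occurs in 'b'.
    static <T> boolean haveCommonItem(Iterable<T> a, Iterable<T> b) {
        for (T item : a) {
            if (contains(b, item)) {
                return true;
            }
        }
        return false;
    }
}
```

Keeping only the `std::find` version would still be correct, but every lookup against a `std::set` or `std::map` would degrade to a linear scan, which is presumably why both specializations exist.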





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8064) Convert TestSecureClientZooKeeperFactory to integration test

2020-12-02 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-8064:
--
Status: Patch Available  (was: Open)

> Convert TestSecureClientZooKeeperFactory to integration test
> 
>
> Key: NIFI-8064
> URL: https://issues.apache.org/jira/browse/NIFI-8064
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This test starts an embedded ZK, so it should be considered more of an 
> integration test. It sometimes fails in GitHub Actions due to timeouts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] bbende opened a new pull request #4703: NIFI-8064 Convert TestSecureClientZooKeeperFactory to integration test

2020-12-02 Thread GitBox


bbende opened a new pull request #4703:
URL: https://github.com/apache/nifi/pull/4703


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-8064) Convert TestSecureClientZooKeeperFactory to integration test

2020-12-02 Thread Bryan Bende (Jira)
Bryan Bende created NIFI-8064:
-

 Summary: Convert TestSecureClientZooKeeperFactory to integration 
test
 Key: NIFI-8064
 URL: https://issues.apache.org/jira/browse/NIFI-8064
 Project: Apache NiFi
  Issue Type: Task
Reporter: Bryan Bende
Assignee: Bryan Bende


This test starts an embedded ZK, so it should be considered more of an 
integration test. It sometimes fails in GitHub Actions due to timeouts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] ottobackwards commented on pull request #4685: NIFI-8042: Fixed bug that was escaping Expression Language references…

2020-12-02 Thread GitBox


ottobackwards commented on pull request #4685:
URL: https://github.com/apache/nifi/pull/4685#issuecomment-737511510


   +1 fwiw



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] ottobackwards commented on a change in pull request #4685: NIFI-8042: Fixed bug that was escaping Expression Language references…

2020-12-02 Thread GitBox


ottobackwards commented on a change in pull request #4685:
URL: https://github.com/apache/nifi/pull/4685#discussion_r534497643



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java
##
@@ -602,8 +602,7 @@ public boolean isAllDataBufferedForEntireText() {
 @Override
 public FlowFile replace(FlowFile flowFile, final ProcessSession 
session, final ProcessContext context, final String evaluateMode, final Charset 
charset, final int maxBufferSize) {

Review comment:
   Fair enough. In general, I think better dev comments about what is 
going on in this class would help. Point taken on this specific ask.

##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestReplaceText.java
##
@@ -53,6 +54,66 @@ public TestRunner getRunner() {
 return runner;
 }
 

Review comment:
   Sorry, please disregard.  I literally skipped past the LITERAL part of 
this.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8063) Add profile to Maven POM to enable NAR exclusion

2020-12-02 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-8063:
---
Status: Patch Available  (was: In Progress)

> Add profile to Maven POM to enable NAR exclusion
> 
>
> Key: NIFI-8063
> URL: https://issues.apache.org/jira/browse/NIFI-8063
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Sometimes a bare-bones NiFi is all that is needed to test systems, 
> integrations, etc. It would be nice to be able to build a version of NiFi 
> without all the NARs in it (currently 1.5+ GB in total size). If this is done 
> as a profile in the assembly POM, the resulting artifacts can also be 
> dockerized (using the dockermaven module in the codebase).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 opened a new pull request #4702: NIFI-8063: Added profile (enabled) to include most NARs, can be disabled

2020-12-02 Thread GitBox


mattyb149 opened a new pull request #4702:
URL: https://github.com/apache/nifi/pull/4702


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   Moves most NARs in the assembly POM to an enabled-by-default profile. It can 
be disabled from the command line by deactivating the profile:
   
   `-P\!most-nars`
   
   This results in an assembly package roughly 1/4 of the total size.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8063) Add profile to Maven POM to enable NAR exclusion

2020-12-02 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242729#comment-17242729
 ] 

Matt Burgess commented on NIFI-8063:


The PR enables a "most-nars" profile by default; to build without most NARs 
you can disable the profile with -P\!most-nars, or for a headless version use 
-Pheadless,\!most-nars.

Happy to discuss on the PR if more NARs should be included/excluded.

> Add profile to Maven POM to enable NAR exclusion
> 
>
> Key: NIFI-8063
> URL: https://issues.apache.org/jira/browse/NIFI-8063
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Sometimes a bare-bones NiFi is all that is needed to test systems, 
> integrations, etc. It would be nice to be able to build a version of NiFi 
> without all the NARs in it (currently 1.5+ GB in total size). If this is done 
> as a profile in the assembly POM, the resulting artifacts can also be 
> dockerized (using the dockermaven module in the codebase).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-8063) Add profile to Maven POM to enable NAR exclusion

2020-12-02 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-8063:
--

Assignee: Matt Burgess

> Add profile to Maven POM to enable NAR exclusion
> 
>
> Key: NIFI-8063
> URL: https://issues.apache.org/jira/browse/NIFI-8063
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Sometimes a bare-bones NiFi is all that is needed to test systems, 
> integrations, etc. It would be nice to be able to build a version of NiFi 
> without all the NARs in it (currently 1.5+ GB in total size). If this is done 
> as a profile in the assembly POM, the resulting artifacts can also be 
> dockerized (using the dockermaven module in the codebase).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8063) Add profile to Maven POM to enable NAR exclusion

2020-12-02 Thread Matt Burgess (Jira)
Matt Burgess created NIFI-8063:
--

 Summary: Add profile to Maven POM to enable NAR exclusion
 Key: NIFI-8063
 URL: https://issues.apache.org/jira/browse/NIFI-8063
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Tools and Build
Reporter: Matt Burgess


Sometimes a bare-bones NiFi is all that is needed to test systems, 
integrations, etc. It would be nice to be able to build a version of NiFi 
without all the NARs in it (currently 1.5+ GB in total size). If this is done 
as a profile in the assembly POM, the resulting artifacts can also be 
dockerized (using the dockermaven module in the codebase).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] joewitt commented on pull request #4701: NIFI-7906 fixed windows build

2020-12-02 Thread GitBox


joewitt commented on pull request #4701:
URL: https://github.com/apache/nifi/pull/4701#issuecomment-737483420


   watching to see if build gets happy.  if so i'll merge



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on a change in pull request #4700: NIFI-8060 Added minimal VolatileProvenanceRepository to nifi-stateles…

2020-12-02 Thread GitBox


markap14 commented on a change in pull request #4700:
URL: https://github.com/apache/nifi/pull/4700#discussion_r534466759



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/repository/StatelessProvenanceRepository.java
##
@@ -0,0 +1,380 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.stateless.repository;
+
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.provenance.AsyncLineageSubmission;
+import org.apache.nifi.provenance.IdentifierLookup;
+import org.apache.nifi.provenance.ProvenanceAuthorizableFactory;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.ProvenanceRepository;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.apache.nifi.provenance.lineage.ComputeLineageSubmission;
+import org.apache.nifi.provenance.search.Query;
+import org.apache.nifi.provenance.search.QuerySubmission;
+import org.apache.nifi.provenance.search.SearchableField;
+import org.apache.nifi.util.RingBuffer;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicLong;
+
+public class StatelessProvenanceRepository implements ProvenanceRepository {
+
+public static String CONTAINER_NAME = "in-memory";
+
+private final RingBuffer<ProvenanceEventRecord> ringBuffer;
+private final int maxSize;
+
+private final AtomicLong idGenerator = new AtomicLong(0L);
+
+public StatelessProvenanceRepository(final int maxEvents) {
+maxSize = maxEvents;
+ringBuffer = new RingBuffer<>(maxSize);
+}
+
+@Override
+public void initialize(final EventReporter eventReporter, final Authorizer 
authorizer, final ProvenanceAuthorizableFactory resourceFactory,
+   final IdentifierLookup idLookup) throws IOException 
{
+
+}
+
+@Override
+public ProvenanceEventRepository getProvenanceEventRepository() {
+return this;
+}
+
+@Override
+public ProvenanceEventBuilder eventBuilder() {
+return new StandardProvenanceEventRecord.Builder();
+}
+
+@Override
+public void registerEvent(final ProvenanceEventRecord event) {
+final long id = idGenerator.getAndIncrement();
+ringBuffer.add(new IdEnrichedProvEvent(event, id));
+}
+
+@Override
+public void registerEvents(final Iterable<ProvenanceEventRecord> events) {
+for (final ProvenanceEventRecord event : events) {
+registerEvent(event);
+}
+}
+
+@Override
+public List<ProvenanceEventRecord> getEvents(final long firstRecordId, 
final int maxRecords) throws IOException {
+return getEvents(firstRecordId, maxRecords, null);
+}
+
+@Override
+public List<ProvenanceEventRecord> getEvents(final long firstRecordId, 
final int maxRecords, final NiFiUser user) throws IOException {
+return ringBuffer.getSelectedElements(new 
RingBuffer.Filter<ProvenanceEventRecord>() {
+@Override
+public boolean select(final ProvenanceEventRecord value) {
+return value.getEventId() >= firstRecordId;
+}
+}, maxRecords);
+}
+
+@Override
+public Long getMaxEventId() {
+final ProvenanceEventRecord newest = ringBuffer.getNewestElement();
+return (newest == null) ? null : newest.getEventId();
+}
+
+public ProvenanceEventRecord getEvent(final String identifier) throws 
IOException {
+final List<ProvenanceEventRecord> records = 
ringBuffer.getSelectedElements(new RingBuffer.Filter<ProvenanceEventRecord>() {
+@Override
+public boolean select(final ProvenanceEventRecord event) {
+return identifier.equals(event.getFlowFileUuid());
+}
+}, 1);
+return records.isEmpty() ? null : records.get(0);
+}
+
+@Override
+public ProvenanceEventRecord getEvent(final long i
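
The StatelessProvenanceRepository excerpt above (cut off by the archive) centers on a bounded ring buffer keyed by monotonically increasing event IDs: `registerEvent` assigns the next ID and overwrites the oldest slot when full, and `getEvents` filters by a minimum ID with a result cap. A minimal self-contained sketch of that pattern follows; the class and record names are hypothetical and this is not NiFi's actual `RingBuffer` API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the bounded in-memory event store pattern used by
// StatelessProvenanceRepository. Not NiFi's RingBuffer implementation.
public class SimpleRingBuffer {
    record Event(long id, String flowFileUuid) {}

    private final Event[] buffer;
    private long added = 0;                                   // total events ever added
    private final AtomicLong idGenerator = new AtomicLong(0L); // mirrors registerEvent's id assignment

    SimpleRingBuffer(int maxSize) {
        buffer = new Event[maxSize];
    }

    // Assign a monotonically increasing id; overwrite the oldest slot when full.
    long add(String flowFileUuid) {
        long id = idGenerator.getAndIncrement();
        buffer[(int) (id % buffer.length)] = new Event(id, flowFileUuid);
        added++;
        return id;
    }

    // Mirrors getEvents(firstRecordId, maxRecords): keep events whose id is at
    // least firstRecordId, capped at maxRecords (slot order, for simplicity).
    List<Event> getEvents(long firstRecordId, int maxRecords) {
        List<Event> selected = new ArrayList<>();
        for (Event e : buffer) {
            if (e != null && e.id() >= firstRecordId) {
                selected.add(e);
                if (selected.size() >= maxRecords) {
                    break;
                }
            }
        }
        return selected;
    }

    // Mirrors getMaxEventId(): null until something has been registered.
    Long getMaxEventId() {
        return added == 0 ? null : idGenerator.get() - 1;
    }

    public static void main(String[] args) {
        SimpleRingBuffer repo = new SimpleRingBuffer(3);
        for (int i = 0; i < 5; i++) {
            repo.add("uuid-" + i);
        }
        // Capacity 3, so ids 0 and 1 were overwritten; ids 2..4 remain.
        System.out.println(repo.getMaxEventId());         // 4
        System.out.println(repo.getEvents(0, 10).size()); // 3
    }
}
```

The key property, as in the repository above, is that eviction is implicit: old events simply disappear once the buffer wraps, which is acceptable for stateless runs where provenance is only inspected in memory.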

[GitHub] [nifi] MikeThomsen opened a new pull request #4701: NIFI-7906 fixed windows build

2020-12-02 Thread GitBox


MikeThomsen opened a new pull request #4701:
URL: https://github.com/apache/nifi/pull/4701


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
`.name` (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi] markap14 commented on a change in pull request #4685: NIFI-8042: Fixed bug that was escaping Expression Language references…

2020-12-02 Thread GitBox


markap14 commented on a change in pull request #4685:
URL: https://github.com/apache/nifi/pull/4685#discussion_r534465100



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestReplaceText.java
##
@@ -53,6 +54,66 @@ public TestRunner getRunner() {
 return runner;
 }
 

Review comment:
   I'm not sure that I follow your suggestion. I feel it is quite heavily 
tested at this point. Can you provide an example configuration that you think 
would be worthwhile testing? As in, provide a set of:
   Search Value = ___
   Replacement Value = ___
   Replacement Strategy = ___
   Sample input = 
   Expected output = 









[GitHub] [nifi] bbende commented on pull request #4700: NIFI-8060 Added minimal VolatileProvenanceRepository to nifi-stateles…

2020-12-02 Thread GitBox


bbende commented on pull request #4700:
URL: https://github.com/apache/nifi/pull/4700#issuecomment-737480941


   @markap14 I pushed a commit to address the comments, thanks!







[GitHub] [nifi] bbende commented on a change in pull request #4700: NIFI-8060 Added minimal VolatileProvenanceRepository to nifi-stateles…

2020-12-02 Thread GitBox


bbende commented on a change in pull request #4700:
URL: https://github.com/apache/nifi/pull/4700#discussion_r534464510



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/repository/VolatileProvenanceRepository.java
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.stateless.repository;
+
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.provenance.AsyncLineageSubmission;
+import org.apache.nifi.provenance.IdentifierLookup;
+import org.apache.nifi.provenance.ProvenanceAuthorizableFactory;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.ProvenanceRepository;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.apache.nifi.provenance.lineage.ComputeLineageSubmission;
+import org.apache.nifi.provenance.search.Query;
+import org.apache.nifi.provenance.search.QuerySubmission;
+import org.apache.nifi.provenance.search.SearchableField;
+import org.apache.nifi.util.RingBuffer;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+
+public class VolatileProvenanceRepository implements ProvenanceRepository {

Review comment:
   Agree

##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/repository/VolatileProvenanceRepository.java
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.stateless.repository;
+
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.provenance.AsyncLineageSubmission;
+import org.apache.nifi.provenance.IdentifierLookup;
+import org.apache.nifi.provenance.ProvenanceAuthorizableFactory;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.ProvenanceRepository;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.apache.nifi.provenance.lineage.ComputeLineageSubmission;
+import org.apache.nifi.provenance.search.Query;
+import org.apache.nifi.provenance.search.QuerySubmission;
+import org.apache.nifi.provenance.search.SearchableField;
+import org.apache.nifi.util.RingBuffer;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+
+public class VolatileProvenanceRepository implements ProvenanceRepository {
+
+// default property values
+public static final int DEFAULT_BUFFER_SIZE = 1;
+
+public static String CONTA

[GitHub] [nifi] bbende commented on a change in pull request #4700: NIFI-8060 Added minimal VolatileProvenanceRepository to nifi-stateles…

2020-12-02 Thread GitBox


bbende commented on a change in pull request #4700:
URL: https://github.com/apache/nifi/pull/4700#discussion_r534464332



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/repository/VolatileProvenanceRepository.java
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.stateless.repository;
+
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.provenance.AsyncLineageSubmission;
+import org.apache.nifi.provenance.IdentifierLookup;
+import org.apache.nifi.provenance.ProvenanceAuthorizableFactory;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.ProvenanceRepository;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.apache.nifi.provenance.lineage.ComputeLineageSubmission;
+import org.apache.nifi.provenance.search.Query;
+import org.apache.nifi.provenance.search.QuerySubmission;
+import org.apache.nifi.provenance.search.SearchableField;
+import org.apache.nifi.util.RingBuffer;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+
+public class VolatileProvenanceRepository implements ProvenanceRepository {
+
+// default property values
+public static final int DEFAULT_BUFFER_SIZE = 1;
+
+public static String CONTAINER_NAME = "in-memory";
+
+private final RingBuffer<ProvenanceEventRecord> ringBuffer;
+private final int maxSize;
+
+private final AtomicLong idGenerator = new AtomicLong(0L);
+private final AtomicBoolean initialized = new AtomicBoolean(false);
+
+/**
+ * Default no args constructor for service loading only
+ */
+public VolatileProvenanceRepository() {

Review comment:
   Agree

##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/repository/VolatileProvenanceRepository.java
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.stateless.repository;
+
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.provenance.AsyncLineageSubmission;
+import org.apache.nifi.provenance.IdentifierLookup;
+import org.apache.nifi.provenance.ProvenanceAuthorizableFactory;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.ProvenanceRepository;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.apache.nifi.provenance.lineage.ComputeLineageSubmission;
+import org.apache.nifi.provenance.search.Query;
+import org.apache.nifi.provenance.search.QuerySubmission;
+import org.apache.nifi.provenance.search.Se

[GitHub] [nifi] markap14 commented on a change in pull request #4685: NIFI-8042: Fixed bug that was escaping Expression Language references…

2020-12-02 Thread GitBox


markap14 commented on a change in pull request #4685:
URL: https://github.com/apache/nifi/pull/4685#discussion_r534464347



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java
##
@@ -602,8 +602,7 @@ public boolean isAllDataBufferedForEntireText() {
 @Override
 public FlowFile replace(FlowFile flowFile, final ProcessSession 
session, final ProcessContext context, final String evaluateMode, final Charset 
charset, final int maxBufferSize) {

Review comment:
   I can understand why the existence of it and the removal of it would be 
confusing. I don't think a comment in the code would be helpful though. It 
never should have been escaped. I think it was escaped as a result of 
refactoring the code, or perhaps because it is escaped when using a Regular 
Expression as the search value but if we added an inline comment about why 
we're not arbitrarily escaping something that doesn't need escaping, outside of 
the scope of this PR, I think it would result in more confusion than 
clarification :)
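
The distinction markap14 is drawing — replacement text must be treated as group references when the search value is a regular expression, but taken literally otherwise — can be seen directly in Java's own regex API, which is presumably what the unconditional escaping was originally guarding against:

```java
import java.util.regex.Matcher;

public class ReplacementEscaping {
    public static void main(String[] args) {
        String input = "price=10";

        // With a regex search value, '$' in the replacement is a group reference.
        String grouped = input.replaceAll("price=(\\d+)", "cost=$1");
        System.out.println(grouped); // cost=10

        // If the replacement should be literal, it must be quoted; a bare '$'
        // followed by a non-digit would otherwise throw at replace time.
        String literal = input.replaceAll("price=(\\d+)",
                Matcher.quoteReplacement("$USD"));
        System.out.println(literal); // $USD

        // Quoting a replacement with no special characters is a no-op, which is
        // why escaping that should never have happened only surfaced as a bug
        // once Expression Language references appeared in replacement values.
        System.out.println(Matcher.quoteReplacement("plain")); // plain
    }
}
```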









[GitHub] [nifi] markap14 commented on a change in pull request #4700: NIFI-8060 Added minimal VolatileProvenanceRepository to nifi-stateles…

2020-12-02 Thread GitBox


markap14 commented on a change in pull request #4700:
URL: https://github.com/apache/nifi/pull/4700#discussion_r534455443



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/repository/VolatileProvenanceRepository.java
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.stateless.repository;
+
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.provenance.AsyncLineageSubmission;
+import org.apache.nifi.provenance.IdentifierLookup;
+import org.apache.nifi.provenance.ProvenanceAuthorizableFactory;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.ProvenanceRepository;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.apache.nifi.provenance.lineage.ComputeLineageSubmission;
+import org.apache.nifi.provenance.search.Query;
+import org.apache.nifi.provenance.search.QuerySubmission;
+import org.apache.nifi.provenance.search.SearchableField;
+import org.apache.nifi.util.RingBuffer;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+
+public class VolatileProvenanceRepository implements ProvenanceRepository {
+
+// default property values
+public static final int DEFAULT_BUFFER_SIZE = 1;
+
+public static String CONTAINER_NAME = "in-memory";
+
+private final RingBuffer<ProvenanceEventRecord> ringBuffer;
+private final int maxSize;
+
+private final AtomicLong idGenerator = new AtomicLong(0L);
+private final AtomicBoolean initialized = new AtomicBoolean(false);
+
+/**
+ * Default no args constructor for service loading only
+ */
+public VolatileProvenanceRepository() {

Review comment:
   Given the usage pattern, that this will only be used in stateless, and 
it's not loaded via the service loader, I don't think we even need the no-arg 
constructor.
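
The constraint behind this comment — `java.util.ServiceLoader` instantiates providers reflectively through a public no-arg constructor, so a class constructed directly doesn't need one — can be shown in isolation. The types below are hypothetical, not NiFi classes:

```java
import java.util.ServiceLoader;

// ServiceLoader.load(Repo.class) reflectively invokes each registered
// provider's public no-arg constructor; a provider without one fails at load
// time with a ServiceConfigurationError. When callers construct the class
// directly, that requirement disappears.
interface Repo {}

class VolatileRepo implements Repo {
    private final int maxEvents;

    // Needed only if this class is registered in META-INF/services/Repo
    // for ServiceLoader discovery.
    public VolatileRepo() {
        this(10_000);
    }

    // Sufficient on its own when the class is instantiated directly.
    public VolatileRepo(int maxEvents) {
        this.maxEvents = maxEvents;
    }

    int maxEvents() {
        return maxEvents;
    }
}

public class ServiceLoadingDemo {
    public static void main(String[] args) {
        // Direct construction: no ServiceLoader, no no-arg constructor needed.
        VolatileRepo repo = new VolatileRepo(100);
        System.out.println(repo.maxEvents()); // 100

        // Finds only providers listed in META-INF/services; none are
        // registered here, so this loop body never runs.
        for (Repo r : ServiceLoader.load(Repo.class)) {
            System.out.println(r);
        }
    }
}
```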

##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-stateless-bundle/nifi-stateless-engine/src/main/java/org/apache/nifi/stateless/repository/VolatileProvenanceRepository.java
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.stateless.repository;
+
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.events.EventReporter;
+import org.apache.nifi.provenance.AsyncLineageSubmission;
+import org.apache.nifi.provenance.IdentifierLookup;
+import org.apache.nifi.provenance.ProvenanceAuthorizableFactory;
+import org.apache.nifi.provenance.ProvenanceEventBuilder;
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+import org.apache.nifi.provenance.ProvenanceEventRepository;
+import org.apache.nifi.provenance.ProvenanceEventType;
+import org.apache.nifi.provenance.ProvenanceRepository;
+import org.apache.nifi.provenance.StandardProvenanceEventRecord;
+import org.apache.nifi.provenance.lineage.ComputeLineageSubm

[jira] [Updated] (NIFI-8060) Remove dependency on volatile provenance repo from stateless NAR

2020-12-02 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-8060:
--
Fix Version/s: 1.13.0

> Remove dependency on volatile provenance repo from stateless NAR
> 
>
> Key: NIFI-8060
> URL: https://issues.apache.org/jira/browse/NIFI-8060
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Instead of sharing the volatile prov repo between nifi and stateless, we 
> should add a minimal volatile impl to stateless.





[jira] [Updated] (NIFI-8060) Remove dependency on volatile provenance repo from stateless NAR

2020-12-02 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-8060:
--
Status: Patch Available  (was: Open)

> Remove dependency on volatile provenance repo from stateless NAR
> 
>
> Key: NIFI-8060
> URL: https://issues.apache.org/jira/browse/NIFI-8060
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Instead of sharing the volatile prov repo between nifi and stateless, we 
> should add a minimal volatile impl to stateless.





[jira] [Updated] (NIFI-8062) Tabbs plugin is not accessible by keyboard

2020-12-02 Thread Shane Ardell (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Ardell updated NIFI-8062:
---
Description: 
As a keyboard user, I cannot access tabs (via the custom tabbs plugin) to 
select different tabs and switch views.



> Tabbs plugin is not accessible by keyboard
> --
>
> Key: NIFI-8062
> URL: https://issues.apache.org/jira/browse/NIFI-8062
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Shane Ardell
>Priority: Major
>
> As a keyboard user, I cannot access tabs (via the custom tabbs plugin) to 
> select different tabs and switch views.





[jira] [Created] (NIFI-8062) Tabbs plugin is not accessible by keyboard

2020-12-02 Thread Shane Ardell (Jira)
Shane Ardell created NIFI-8062:
--

 Summary: Tabbs plugin is not accessible by keyboard
 Key: NIFI-8062
 URL: https://issues.apache.org/jira/browse/NIFI-8062
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core UI
Reporter: Shane Ardell








[GitHub] [nifi] bbende opened a new pull request #4700: NIFI-8060 Added minimal VolatileProvenanceRepository to nifi-stateles…

2020-12-02 Thread GitBox


bbende opened a new pull request #4700:
URL: https://github.com/apache/nifi/pull/4700


   …s-engine and remove dependency on nifi-volatile-provenance-repo module
   







[jira] [Updated] (NIFI-7829) Combobox plugin is not accessible by keyboard

2020-12-02 Thread Shane Ardell (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Ardell updated NIFI-7829:
---
Description: As a keyboard user, I cannot access the combobox dropdown to 
see and select an option.  (was: As a keyboard user, I cannot access the combo 
box dropdown to see and select an option.)

> Combobox plugin is not accessible by keyboard
> -
>
> Key: NIFI-7829
> URL: https://issues.apache.org/jira/browse/NIFI-7829
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Shane Ardell
>Priority: Major
>
> As a keyboard user, I cannot access the combobox dropdown to see and select 
> an option.





[jira] [Updated] (NIFI-7829) Combo box plugin is not accessible by keyboard

2020-12-02 Thread Shane Ardell (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Ardell updated NIFI-7829:
---
Summary: Combo box plugin is not accessible by keyboard  (was: Combobox 
plugin is not accessible by keyboard)

> Combo box plugin is not accessible by keyboard
> --
>
> Key: NIFI-7829
> URL: https://issues.apache.org/jira/browse/NIFI-7829
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Shane Ardell
>Priority: Major
>
> As a keyboard user, I cannot access the combo-box dropdown to see and select 
> an option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7829) Combobox plugin is not accessible by keyboard

2020-12-02 Thread Shane Ardell (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Ardell updated NIFI-7829:
---
Summary: Combobox plugin is not accessible by keyboard  (was: Combo box 
plugin is not accessible by keyboard)

> Combobox plugin is not accessible by keyboard
> -
>
> Key: NIFI-7829
> URL: https://issues.apache.org/jira/browse/NIFI-7829
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Shane Ardell
>Priority: Major
>
> As a keyboard user, I cannot access the combo box dropdown to see and select 
> an option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-7911) Groovy ExecuteScript: fails to convert timezone

2020-12-02 Thread DEOM Damien (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DEOM Damien resolved NIFI-7911.
---
Resolution: Fixed

> Groovy ExecuteScript: fails to convert timezone
> --
>
> Key: NIFI-7911
> URL: https://issues.apache.org/jira/browse/NIFI-7911
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.11.4
>Reporter: DEOM Damien
>Priority: Major
>
> NiFi's ExecuteScript fails to properly convert timezones in Groovy.
> The following code gives the correct result on my local machine (Groovy 
> console 3.0.1): *2020-10-05T21:10:00.000Z*
>  
>  
> {code:java}
> import java.text.DateFormat
> import java.text.SimpleDateFormat
> import org.apache.commons.io.IOUtils
> import java.nio.charset.StandardCharsets
> SimpleDateFormat date_format=new SimpleDateFormat("dd-MM-yyyy HH:mm:ss", 
> Locale.FRANCE);
> def date_str = date_format.parse("05-10-2020 
> 23:10:00").format("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", TimeZone.getTimeZone("UTC"))
> log.error(date_str)
> {code}
>  
>  
> On NiFi, I get *2020-10-05T23:10:00.000Z*
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7911) Groovy ExecuteScript: fails to convert timezone

2020-12-02 Thread DEOM Damien (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242676#comment-17242676
 ] 

DEOM Damien commented on NIFI-7911:
---

I've found the solution:

{code:java}
import java.text.DateFormat
import java.text.SimpleDateFormat
import org.apache.commons.io.IOUtils
import java.nio.charset.StandardCharsets

SimpleDateFormat date_format = new SimpleDateFormat("dd-MM-yyyy HH:mm:ss", Locale.FRANCE);
date_format.setTimeZone(TimeZone.getTimeZone("Europe/Paris")) // the missing line: parse in the source time zone

def date_str = date_format.parse("05-10-2020 23:10:00").format("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", TimeZone.getTimeZone("UTC"))
log.error(date_str)
{code}
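
For anyone hitting the same behavior, here is a minimal standalone sketch in plain Java (nothing NiFi-specific): `SimpleDateFormat.parse()` interprets the input in the formatter's time zone, which defaults to the JVM's default zone, so setting it explicitly makes the result machine-independent.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TimeZoneDemo {
    public static void main(String[] args) throws Exception {
        // Parse the local timestamp, explicitly anchored to Europe/Paris.
        // Without setTimeZone, parse() assumes the JVM's default zone,
        // which is why the same script behaves differently per machine.
        SimpleDateFormat in = new SimpleDateFormat("dd-MM-yyyy HH:mm:ss", Locale.FRANCE);
        in.setTimeZone(TimeZone.getTimeZone("Europe/Paris"));
        Date parsed = in.parse("05-10-2020 23:10:00");

        // Format the same instant in UTC (Paris is UTC+2 on that date).
        SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
        out.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(out.format(parsed)); // prints 2020-10-05T21:10:00.000Z
    }
}
```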

> Groovy ExecuteScript: fails to convert timezone
> --
>
> Key: NIFI-7911
> URL: https://issues.apache.org/jira/browse/NIFI-7911
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.11.4
>Reporter: DEOM Damien
>Priority: Major
>
> NiFi's ExecuteScript fails to properly convert timezones in Groovy.
> The following code gives the correct result on my local machine (Groovy 
> console 3.0.1): *2020-10-05T21:10:00.000Z*
>  
>  
> {code:java}
> import java.text.DateFormat
> import java.text.SimpleDateFormat
> import org.apache.commons.io.IOUtils
> import java.nio.charset.StandardCharsets
> SimpleDateFormat date_format=new SimpleDateFormat("dd-MM-yyyy HH:mm:ss", 
> Locale.FRANCE);
> def date_str = date_format.parse("05-10-2020 
> 23:10:00").format("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", TimeZone.getTimeZone("UTC"))
> log.error(date_str)
> {code}
>  
>  
> On NiFi, I get *2020-10-05T23:10:00.000Z*
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7829) Combo box plugin is not accessible by keyboard

2020-12-02 Thread Shane Ardell (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Ardell updated NIFI-7829:
---
Description: As a keyboard user, I cannot access the combo box dropdown to 
see and select an option.  (was: As a keyboard user, I cannot access the 
combo-box dropdown to see and select an option.)

> Combo box plugin is not accessible by keyboard
> --
>
> Key: NIFI-7829
> URL: https://issues.apache.org/jira/browse/NIFI-7829
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Shane Ardell
>Priority: Major
>
> As a keyboard user, I cannot access the combo box dropdown to see and select 
> an option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7829) Combobox plugin is not accessible by keyboard

2020-12-02 Thread Shane Ardell (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Ardell updated NIFI-7829:
---
Description: As a keyboard user, I cannot access the combo-box dropdown to 
see and select an option.

> Combobox plugin is not accessible by keyboard
> -
>
> Key: NIFI-7829
> URL: https://issues.apache.org/jira/browse/NIFI-7829
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Shane Ardell
>Priority: Major
>
> As a keyboard user, I cannot access the combo-box dropdown to see and select 
> an option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7829) Combobox plugin is not accessible by keyboard

2020-12-02 Thread Shane Ardell (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Ardell updated NIFI-7829:
---
Summary: Combobox plugin is not accessible by keyboard  (was: Filter type 
dropdown in UI is not accessible by keyboard)

> Combobox plugin is not accessible by keyboard
> -
>
> Key: NIFI-7829
> URL: https://issues.apache.org/jira/browse/NIFI-7829
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Shane Ardell
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7906) Add graph processor with flexibility to query graph database conditioned on flowfile content and attributes

2020-12-02 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-7906:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add graph processor with flexibility to query graph database conditioned on 
> flowfile content and attributes
> ---
>
> Key: NIFI-7906
> URL: https://issues.apache.org/jira/browse/NIFI-7906
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Levi Lentz
>Assignee: Levi Lentz
>Priority: Minor
>  Labels: graph
> Fix For: 1.13.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> The current graph bundle does not allow you to query the graph 
> database (as defined in the GraphClientService) with attributes or content 
> available in the flow file.
>  
> This functionality would allow users to perform dynamic queries/mutations of 
> the underlying graph database. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (NIFI-7906) Add graph processor with flexibility to query graph database conditioned on flowfile content and attributes

2020-12-02 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reopened NIFI-7906:


Reopening due to a bug in error handling; PR to follow.

> Add graph processor with flexibility to query graph database conditioned on 
> flowfile content and attributes
> ---
>
> Key: NIFI-7906
> URL: https://issues.apache.org/jira/browse/NIFI-7906
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Levi Lentz
>Assignee: Levi Lentz
>Priority: Minor
>  Labels: graph
> Fix For: 1.13.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> The current graph bundle does not allow you to query the graph 
> database (as defined in the GraphClientService) with attributes or content 
> available in the flow file.
>  
> This functionality would allow users to perform dynamic queries/mutations of 
> the underlying graph database. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8060) Remove dependency on volatile provenance repo from stateless NAR

2020-12-02 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-8060:
--
Description: Instead of sharing the volatile prov repo between nifi and 
stateless, we should add a minimal volatile impl to stateless.  (was: In order 
to share the volatile provenance repo between regular nifi and stateless nifi, 
it would be helpful to separate the volatile implementation into a separate NAR 
so that stateless does not have to depend on the full prov repo NAR.)

> Remove dependency on volatile provenance repo from stateless NAR
> 
>
> Key: NIFI-8060
> URL: https://issues.apache.org/jira/browse/NIFI-8060
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>
> Instead of sharing the volatile prov repo between nifi and stateless, we 
> should add a minimal volatile impl to stateless.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8060) Remove dependency on volatile provenance repo from stateless NAR

2020-12-02 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-8060:
--
Summary: Remove dependency on volatile provenance repo from stateless NAR  
(was: Create separate NAR for volatile provenance repo )

> Remove dependency on volatile provenance repo from stateless NAR
> 
>
> Key: NIFI-8060
> URL: https://issues.apache.org/jira/browse/NIFI-8060
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>
> In order to share the volatile provenance repo between regular nifi and 
> stateless nifi, it would be helpful to separate the volatile implementation 
> into a separate NAR so that stateless does not have to depend on the full 
> prov repo NAR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 commented on pull request #4204: NIFI-7355 Added Gremlin bytecode client service.

2020-12-02 Thread GitBox


mattyb149 commented on pull request #4204:
URL: https://github.com/apache/nifi/pull/4204#issuecomment-737444733


   I'm getting an error when trying to send a Gremlin script with double-quoted 
strings (GStrings) to JanusGraph: 
   
   ```
   org.apache.tinkerpop.gremlin.driver.ser.SerializationException: 
java.lang.IllegalArgumentException: Class is not registered: 
org.codehaus.groovy.runtime.GStringImpl
   Note: To register this class use: 
kryo.register(org.codehaus.groovy.runtime.GStringImpl.class);
   ```
   
   We should probably register this automatically? Otherwise the user would 
have to point at the class and JAR just to use normal Groovy objects.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8061) Update Jackson Databind Version

2020-12-02 Thread David Borncamp (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Borncamp updated NIFI-8061:
-
Description: Upgrade {{com.fasterxml.jackson.core:jackson-databind}} in 
NiFi's maven poms to version 2.9.10.6. This resolves [CVE 
2020-25649|http://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-25649] 
[http://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-25649] with this [issue 
fixed|https://issues.apache.org/jira/secure/[https://github.com/FasterXML/jackson-databind/issues/2826](https://github.com/FasterXML/jackson-databind/issues/2826]
 [https://github.com/FasterXML/jackson-databind/issues/2826] .  (was: Upgrade 
{{com.fasterxml.jackson.core:jackson-databind}} in NiFi's maven poms to version 
2.9.10.6. This resolves [CVE 
2020-25649|http://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-25649] 
[http://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-25649] with this [issue 
fixed|https://issues.apache.org/jira/secure/[https://github.com/FasterXML/jackson-databind/issues/2826](https://github.com/FasterXML/jackson-databind/issues/2826].)

> Update Jackson Databind Version
> 
>
> Key: NIFI-8061
> URL: https://issues.apache.org/jira/browse/NIFI-8061
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.11.0, 1.12.0, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.13.0, 
> 1.12.1
>Reporter: David Borncamp
>Priority: Major
>  Labels: dependancy, dependencies
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Upgrade {{com.fasterxml.jackson.core:jackson-databind}} in NiFi's maven poms 
> to version 2.9.10.6. This resolves 
> [CVE-2020-25649|http://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-25649] with 
> this [issue fixed|https://github.com/FasterXML/jackson-databind/issues/2826].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8061) Update Jackson Databind Version

2020-12-02 Thread David Borncamp (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Borncamp updated NIFI-8061:
-
Description: Upgrade {{com.fasterxml.jackson.core:jackson-databind}} in 
NiFi's maven poms to version 2.9.10.6. This resolves 
[CVE-2020-25649|http://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-25649] with this 
[issue fixed|https://github.com/FasterXML/jackson-databind/issues/2826].  (was: Upgrade 
{{com.fasterxml.jackson.core:jackson-databind}} in NiFi's maven 
poms to version 2.9.10.6. This resolves 
[CVE-2020-25649|http://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-25649] with this 
[issue fixed|https://github.com/FasterXML/jackson-databind/issues/2826].)

> Update Jackson Databind Version
> 
>
> Key: NIFI-8061
> URL: https://issues.apache.org/jira/browse/NIFI-8061
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.11.0, 1.12.0, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.13.0, 
> 1.12.1
>Reporter: David Borncamp
>Priority: Major
>  Labels: dependancy, dependencies
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Upgrade {{com.fasterxml.jackson.core:jackson-databind}} in NiFi's maven poms 
> to version 2.9.10.6. This resolves 
> [CVE-2020-25649|http://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-25649] with 
> this [issue fixed|https://github.com/FasterXML/jackson-databind/issues/2826].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8061) Update Jackson Databind Version

2020-12-02 Thread David Borncamp (Jira)
David Borncamp created NIFI-8061:


 Summary: Update Jackson Databind Version
 Key: NIFI-8061
 URL: https://issues.apache.org/jira/browse/NIFI-8061
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 1.12.1, 1.11.4, 1.11.3, 1.11.2, 1.11.1, 1.12.0, 1.11.0, 
1.13.0
Reporter: David Borncamp


Upgrade {{com.fasterxml.jackson.core:jackson-databind}} in NiFi's maven poms to 
version 2.9.10.6. This resolves 
[CVE-2020-25649|http://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-25649] with this 
[issue fixed|https://github.com/FasterXML/jackson-databind/issues/2826].
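
As a rough illustration of what such a pin usually looks like (a sketch only; the module layout and property conventions of NiFi's actual poms are not shown here), a Maven `dependencyManagement` entry forcing the patched version might read:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin jackson-databind to the release containing the CVE-2020-25649 fix -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.10.6</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```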



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] dependabot[bot] opened a new pull request #4699: Bump jetty.version from 9.4.34.v20201102 to 9.4.35.v20201120

2020-12-02 Thread GitBox


dependabot[bot] opened a new pull request #4699:
URL: https://github.com/apache/nifi/pull/4699


   Bumps `jetty.version` from 9.4.34.v20201102 to 9.4.35.v20201120.
   Updates `jetty-server` from 9.4.34.v20201102 to 9.4.35.v20201120
   
   Release notes (sourced from jetty-server's releases, 
https://github.com/eclipse/jetty.project/releases):
   
   9.4.35.v20201120
   Important Change
   - #5605: java.io.IOException: unconsumed input during http request parsing
   Bugs
   - #4711: Reset trailers on recycled response
   - #5486: PropertyFileLoginModule retains PropertyUserStores
   - #5562: ArrayTernaryTrie consumes too much memory
   Enhancements
   - #5539: StatisticsServlet output now available in json, xml, text, and html
   - #5575: Add SEARCH as a known HttpMethod
   - #5633: Allow to configure HttpClient request authority (even on HTTP/2)
   
   Commits
   - bdc54f0: Updating to version 9.4.35.v20201120
   - 41bf953: Issue #5603 - Single page documentation (#5636)
   - fcfa72e: Bump javax.servlet.jsp.jstl from 1.2.2 to 1.2.5 (#5673)
   - 901a17d: Issue #5605 - Adding more comments
   - a6d432e: Issue #5605 - Adding more comments and fixing logging
   - d4feb4f: Removed unused code.
   - 248779e: Bump grpc-core from 1.33.0 to 1.33.1 (#5623)
   - 5f6e72d: Issue #5605 - Adding more gzip consume all tests
   - 14f94f7: Issue #5605 - unconsumed input on sendError (#5637)
   - 1448444: Merge pull request #5560 from eclipse/jetty-9.4.x-5539-statisticsservlet-output
   - Additional commits viewable in the compare view 
(jetty-9.4.34.v20201102...jetty-9.4.35.v20201120)
   
   Updates `jetty-servlet` from 9.4.34.v20201102 to 9.4.35.v20201120
   
   Release notes (sourced from jetty-servlet's releases):
   
   9.4.35.v20201120
   Important Change
   - #5605: java.io.IOException: unconsumed input during http request parsing
   Bugs
   - #4711: Reset trailers on recycled response
   - #5486: PropertyFileLoginModule retains PropertyUserStores
   - #5562: ArrayTernaryTrie consumes too much memory
   Enhancements
   - #5539: StatisticsServlet output now available in json, xml, text, and html
   - #5575: Add SEARCH as a known HttpMethod
   https://github.com/e

[jira] [Assigned] (NIFI-7795) Clarify that filesystem name is container name in Azure Data Lake processors

2020-12-02 Thread Muazma Zahid (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muazma Zahid reassigned NIFI-7795:
--

Assignee: Muazma Zahid

> Clarify that filesystem name is container name in Azure Data Lake processors
> 
>
> Key: NIFI-7795
> URL: https://issues.apache.org/jira/browse/NIFI-7795
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.12.0
>Reporter: Joey Frazee
>Assignee: Muazma Zahid
>Priority: Trivial
>
> Some users don't know what to enter for the Filesystem Name when using the 
> Azure Data Lake processors. It'd be helpful if the description indicated that 
> it should be populated with the container name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8059) Add a new processor or modify the existing PutEmail processor to be able to sign emails using a certificate

2020-12-02 Thread Cory Wixom (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cory Wixom updated NIFI-8059:
-
Summary: Add a new processor or modify the existing PutEmail processor 
to be able to sign emails using a certificate  (was: As a user of the PutEmail 
processor I would like to be able to sign emails using a certificate)

> Add a new processor or modify the existing PutEmail processor to be able 
> to sign emails using a certificate
> ---
>
> Key: NIFI-8059
> URL: https://issues.apache.org/jira/browse/NIFI-8059
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Cory Wixom
>Assignee: Cory Wixom
>Priority: Minor
>  Labels: security
>
> The current PutEmail processor is missing a couple of important features 
> needed to make it usable in certain secure environments, the most important of 
> which is that it doesn't allow you to digitally sign an email with a 
> certificate.
> This ticket is to modify the processor to allow taking an SSLContext and 
> signing the email with the certificate.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-8059) As a user of the PutEmail processor I would like to be able to sign emails using a certificate

2020-12-02 Thread Cory Wixom (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cory Wixom reassigned NIFI-8059:


Assignee: Cory Wixom

> As a user of the PutEmail processor I would like to be able to sign emails 
> using a certificate
> --
>
> Key: NIFI-8059
> URL: https://issues.apache.org/jira/browse/NIFI-8059
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Cory Wixom
>Assignee: Cory Wixom
>Priority: Minor
>  Labels: security
>
> The current PutEmail processor is missing a couple of important features 
> needed to make it usable in certain secure environments, the most important of 
> which is that it doesn't allow you to digitally sign an email with a 
> certificate.
> This ticket is to modify the processor to allow taking an SSLContext and 
> signing the email with the certificate.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 commented on a change in pull request #4651: NIFI-7988 Prometheus Remote Write Processor

2020-12-02 Thread GitBox


mattyb149 commented on a change in pull request #4651:
URL: https://github.com/apache/nifi/pull/4651#discussion_r534304482



##
File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-processors/pom.xml
##
@@ -0,0 +1,133 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-prometheus-bundle</artifactId>
+        <version>1.13.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-prometheus-processors</artifactId>
+    <packaging>jar</packaging>
+
+    <properties>
+        <jetty.version>9.3.9.v20160517</jetty.version>
+        <protobuf.version>3.13.0</protobuf.version>
+        <snappy.version>1.1.7.1</snappy.version>
+        <maven.protobuf.version>0.6.1</maven.protobuf.version>
+        <maven.os.version>1.6.2</maven.os.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <version>1.13.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-utils</artifactId>
+            <version>1.13.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <version>1.13.0-SNAPSHOT</version>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>com.google.protobuf</groupId>
+            <artifactId>protobuf-java</artifactId>
+            <version>${protobuf.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.google.protobuf</groupId>
+            <artifactId>protobuf-java-util</artifactId>
+            <version>${protobuf.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.xerial.snappy</groupId>
+            <artifactId>snappy-java</artifactId>

Review comment:
   This needs an entry in the NOTICE file for the NAR, see examples in 
other NARs (such as nifi-avro-nar)

##
File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-processors/pom.xml
##
@@ -0,0 +1,133 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-prometheus-bundle</artifactId>
+        <version>1.13.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-prometheus-processors</artifactId>
+    <packaging>jar</packaging>
+
+    <properties>
+        <jetty.version>9.3.9.v20160517</jetty.version>
+        <protobuf.version>3.13.0</protobuf.version>
+        <snappy.version>1.1.7.1</snappy.version>
+        <maven.protobuf.version>0.6.1</maven.protobuf.version>
+        <maven.os.version>1.6.2</maven.os.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <version>1.13.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-utils</artifactId>
+            <version>1.13.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <version>1.13.0-SNAPSHOT</version>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>com.google.protobuf</groupId>
+            <artifactId>protobuf-java</artifactId>
+            <version>${protobuf.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.google.protobuf</groupId>
+            <artifactId>protobuf-java-util</artifactId>
+            <version>${protobuf.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.xerial.snappy</groupId>
+            <artifactId>snappy-java</artifactId>
+            <version>${snappy.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-server</artifactId>
+            <version>${jetty.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-servlet</artifactId>
+            <version>${jetty.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-client</artifactId>
+            <version>${jetty.version}</version>
+            <scope>test</scope>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <extensions>
+            <extension>
+                <groupId>kr.motd.maven</groupId>
+                <artifactId>os-maven-plugin</artifactId>
+                <version>${maven.os.version}</version>
+            </extension>
+        </extensions>
+        <plugins>
+            <plugin>
+                <groupId>org.xolstice.maven.plugins</groupId>
+                <artifactId>protobuf-maven-plugin</artifactId>
+                <version>${maven.protobuf.version}</version>
+                <configuration>
+                    <protocArtifact>com.google.protobuf:protoc:3.13.0:exe:${os.detected.classifier}</protocArtifact>
+                    <protoSourceRoot>${basedir}/src/main/resources/proto</protoSourceRoot>
+                </configuration>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>compile</goal>
+                            <goal>test-compile</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.rat</groupId>
+                <artifactId>apache-rat-plugin</artifactId>
+                <configuration>
+                    <excludes>
+                        <exclude>**/src/main/resources/proto/gogoproto/gogo.proto</exclude>

Review comment:
   gogo.proto has a license header, does it not pass the Rat check?

##
File path: 
nifi-nar-bundles/nifi-prometheus-bundle/nifi-prometheus-processors/src/main/java/org/apache/nifi/processors/prometheus/PrometheusRemoteWrite.java
##
@@ -0,0 +1,318 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *   

[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-02 Thread GitBox


lordgamez commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534279245



##
File path: extensions/librdkafka/ConsumeKafka.cpp
##
@@ -0,0 +1,522 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ConsumeKafka.h"
+
+#include 
+#include 
+
+#include "core/PropertyValidation.h"
+#include "utils/ProcessorConfigUtils.h"
+#include "utils/gsl.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+constexpr const std::size_t ConsumeKafka::DEFAULT_MAX_POLL_RECORDS;
+constexpr char const* ConsumeKafka::DEFAULT_MAX_POLL_TIME;
+
+core::Property 
ConsumeKafka::KafkaBrokers(core::PropertyBuilder::createProperty("Kafka 
Brokers")
+  ->withDescription("A comma-separated list of known Kafka Brokers in the 
format :.")
+  ->withDefaultValue("localhost:9092", 
core::StandardValidators::get().NON_BLANK_VALIDATOR)
+  ->supportsExpressionLanguage(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::SecurityProtocol(core::PropertyBuilder::createProperty("Security 
Protocol")
+  ->withDescription("This property is currently not supported. Protocol used 
to communicate with brokers. Corresponds to Kafka's 'security.protocol' 
property.")
+  ->withAllowableValues({SECURITY_PROTOCOL_PLAINTEXT/*, 
SECURITY_PROTOCOL_SSL, SECURITY_PROTOCOL_SASL_PLAINTEXT, 
SECURITY_PROTOCOL_SASL_SSL*/ })
+  ->withDefaultValue(SECURITY_PROTOCOL_PLAINTEXT)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::TopicNames(core::PropertyBuilder::createProperty("Topic Names")
+  ->withDescription("The name of the Kafka Topic(s) to pull from. More than 
one can be supplied if comma separated.")
+  ->supportsExpressionLanguage(true)
+  ->build());
+
+core::Property 
ConsumeKafka::TopicNameFormat(core::PropertyBuilder::createProperty("Topic Name 
Format")
+  ->withDescription("Specifies whether the Topic(s) provided are a comma 
separated list of names or a single regular expression.")
+  ->withAllowableValues({TOPIC_FORMAT_NAMES, 
TOPIC_FORMAT_PATTERNS})
+  ->withDefaultValue(TOPIC_FORMAT_NAMES)
+  ->build());
+
+core::Property 
ConsumeKafka::HonorTransactions(core::PropertyBuilder::createProperty("Honor 
Transactions")
+  ->withDescription(
+  "Specifies whether or not NiFi should honor transactional guarantees 
when communicating with Kafka. If false, the Processor will use an \"isolation 
level\" of "
+  "read_uncommitted. This means that messages will be received as soon as 
they are written to Kafka but will be pulled, even if the producer cancels the 
transactions. "
+  "If this value is true, NiFi will not receive any messages for which the 
producer's transaction was canceled, but this can result in some latency since 
the consumer "
+  "must wait for the producer to finish its entire transaction instead of 
pulling as the messages become available.")
+  ->withDefaultValue(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::GroupID(core::PropertyBuilder::createProperty("Group ID")
+  ->withDescription("A Group ID is used to identify consumers that are within 
the same consumer group. Corresponds to Kafka's 'group.id' property.")
+  ->supportsExpressionLanguage(true)
+  ->build());
+
+core::Property 
ConsumeKafka::OffsetReset(core::PropertyBuilder::createProperty("Offset Reset")
+  ->withDescription("Allows you to manage the condition when there is no 
initial offset in Kafka or if the current offset does not exist any more on the 
server (e.g. because that "
+  "data has been deleted). Corresponds to Kafka's 'auto.offset.reset' 
property.")
+  ->withAllowableValues({OFFSET_RESET_EARLIEST, 
OFFSET_RESET_LATEST, OFFSET_RESET_NONE})
+  ->withDefaultValue(OFFSET_RESET_LATEST)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::KeyAttributeEncoding(core::PropertyBuilder::createProperty("Key 
Attribute Encoding")
+  ->withDescription("FlowFiles that are emitted have an attribute named 
'kafka.key'. This property dictates how the value of the attribute should be 
encoded.")
+  ->withAllowableValues({KEY_ATTR_ENCODING_UTF_8, 
KEY_ATTR_ENCODING_HEX})
+  ->withDefaul

[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


fgerlits commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r534274245



##
File path: libminifi/include/utils/CollectionUtils.h
##
@@ -0,0 +1,59 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include <algorithm>
+#include <utility>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+namespace internal {
+
+template<typename T, typename Arg, typename = void>
+struct find_in_range {
+  static auto call(const T& range, const Arg& arg) -> decltype(std::find(range.begin(), range.end(), arg)) {
+    return std::find(range.begin(), range.end(), arg);
+  }
+};
+
+template<typename T, typename Arg>
+struct find_in_range<T, Arg, decltype(std::declval<T>().find(std::declval<Arg>()), void())> {
+  static auto call(const T& range, const Arg& arg) -> decltype(range.find(arg)) {
+    return range.find(arg);
+  }
+};
+
+}  // namespace internal
+
+template<typename T, typename U>
+bool haveCommonItem(const T& a, const U& b) {
+  using Item = typename T::value_type;
+  return std::any_of(a.begin(), a.end(), [&] (const Item& item) {
+    return internal::find_in_range<U, Item>::call(b, item) != b.end();

Review comment:
   I am hesitant to comment as this PR already has a record number of 
comments, but why do we need the two versions?  If we always do `std::find`, 
we'll have 1/3 the amount of code.  Is that less efficient in some cases?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-02 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534272416



##
File path: extensions/librdkafka/ConsumeKafka.cpp
##
@@ -0,0 +1,522 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ConsumeKafka.h"
+
+#include 
+#include 
+
+#include "core/PropertyValidation.h"
+#include "utils/ProcessorConfigUtils.h"
+#include "utils/gsl.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+constexpr const std::size_t ConsumeKafka::DEFAULT_MAX_POLL_RECORDS;
+constexpr char const* ConsumeKafka::DEFAULT_MAX_POLL_TIME;
+
+core::Property 
ConsumeKafka::KafkaBrokers(core::PropertyBuilder::createProperty("Kafka 
Brokers")
+  ->withDescription("A comma-separated list of known Kafka Brokers in the 
format <host>:<port>.")
+  ->withDefaultValue("localhost:9092", 
core::StandardValidators::get().NON_BLANK_VALIDATOR)
+  ->supportsExpressionLanguage(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::SecurityProtocol(core::PropertyBuilder::createProperty("Security 
Protocol")
+  ->withDescription("This property is currently not supported. Protocol used 
to communicate with brokers. Corresponds to Kafka's 'security.protocol' 
property.")
+  ->withAllowableValues({SECURITY_PROTOCOL_PLAINTEXT/*, 
SECURITY_PROTOCOL_SSL, SECURITY_PROTOCOL_SASL_PLAINTEXT, 
SECURITY_PROTOCOL_SASL_SSL*/ })
+  ->withDefaultValue(SECURITY_PROTOCOL_PLAINTEXT)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::TopicNames(core::PropertyBuilder::createProperty("Topic Names")
+  ->withDescription("The name of the Kafka Topic(s) to pull from. More than 
one can be supplied if comma separated.")
+  ->supportsExpressionLanguage(true)
+  ->build());
+
+core::Property 
ConsumeKafka::TopicNameFormat(core::PropertyBuilder::createProperty("Topic Name 
Format")
+  ->withDescription("Specifies whether the Topic(s) provided are a comma 
separated list of names or a single regular expression.")
+  ->withAllowableValues({TOPIC_FORMAT_NAMES, 
TOPIC_FORMAT_PATTERNS})
+  ->withDefaultValue(TOPIC_FORMAT_NAMES)
+  ->build());
+
+core::Property 
ConsumeKafka::HonorTransactions(core::PropertyBuilder::createProperty("Honor 
Transactions")
+  ->withDescription(
+  "Specifies whether or not NiFi should honor transactional guarantees 
when communicating with Kafka. If false, the Processor will use an \"isolation 
level\" of "
+  "read_uncommitted. This means that messages will be received as soon as 
they are written to Kafka but will be pulled, even if the producer cancels the 
transactions. "
+  "If this value is true, NiFi will not receive any messages for which the 
producer's transaction was canceled, but this can result in some latency since 
the consumer "
+  "must wait for the producer to finish its entire transaction instead of 
pulling as the messages become available.")
+  ->withDefaultValue(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::GroupID(core::PropertyBuilder::createProperty("Group ID")
+  ->withDescription("A Group ID is used to identify consumers that are within 
the same consumer group. Corresponds to Kafka's 'group.id' property.")
+  ->supportsExpressionLanguage(true)
+  ->build());
+
+core::Property 
ConsumeKafka::OffsetReset(core::PropertyBuilder::createProperty("Offset Reset")
+  ->withDescription("Allows you to manage the condition when there is no 
initial offset in Kafka or if the current offset does not exist any more on the 
server (e.g. because that "
+  "data has been deleted). Corresponds to Kafka's 'auto.offset.reset' 
property.")
+  ->withAllowableValues({OFFSET_RESET_EARLIEST, 
OFFSET_RESET_LATEST, OFFSET_RESET_NONE})
+  ->withDefaultValue(OFFSET_RESET_LATEST)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::KeyAttributeEncoding(core::PropertyBuilder::createProperty("Key 
Attribute Encoding")
+  ->withDescription("FlowFiles that are emitted have an attribute named 
'kafka.key'. This property dictates how the value of the attribute should be 
encoded.")
+  ->withAllowableValues({KEY_ATTR_ENCODING_UTF_8, 
KEY_ATTR_ENCODING_HEX})
+  ->withDefa

[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-02 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534270988



##
File path: extensions/librdkafka/ConsumeKafka.cpp
##
@@ -0,0 +1,522 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ConsumeKafka.h"
+
+#include 
+#include 
+
+#include "core/PropertyValidation.h"
+#include "utils/ProcessorConfigUtils.h"
+#include "utils/gsl.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+constexpr const std::size_t ConsumeKafka::DEFAULT_MAX_POLL_RECORDS;
+constexpr char const* ConsumeKafka::DEFAULT_MAX_POLL_TIME;
+
+core::Property 
ConsumeKafka::KafkaBrokers(core::PropertyBuilder::createProperty("Kafka 
Brokers")
+  ->withDescription("A comma-separated list of known Kafka Brokers in the 
format <host>:<port>.")
+  ->withDefaultValue("localhost:9092", 
core::StandardValidators::get().NON_BLANK_VALIDATOR)
+  ->supportsExpressionLanguage(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::SecurityProtocol(core::PropertyBuilder::createProperty("Security 
Protocol")
+  ->withDescription("This property is currently not supported. Protocol used 
to communicate with brokers. Corresponds to Kafka's 'security.protocol' 
property.")
+  ->withAllowableValues({SECURITY_PROTOCOL_PLAINTEXT/*, 
SECURITY_PROTOCOL_SSL, SECURITY_PROTOCOL_SASL_PLAINTEXT, 
SECURITY_PROTOCOL_SASL_SSL*/ })
+  ->withDefaultValue(SECURITY_PROTOCOL_PLAINTEXT)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::TopicNames(core::PropertyBuilder::createProperty("Topic Names")
+  ->withDescription("The name of the Kafka Topic(s) to pull from. More than 
one can be supplied if comma separated.")
+  ->supportsExpressionLanguage(true)
+  ->build());
+
+core::Property 
ConsumeKafka::TopicNameFormat(core::PropertyBuilder::createProperty("Topic Name 
Format")
+  ->withDescription("Specifies whether the Topic(s) provided are a comma 
separated list of names or a single regular expression.")
+  ->withAllowableValues({TOPIC_FORMAT_NAMES, 
TOPIC_FORMAT_PATTERNS})
+  ->withDefaultValue(TOPIC_FORMAT_NAMES)
+  ->build());
+
+core::Property 
ConsumeKafka::HonorTransactions(core::PropertyBuilder::createProperty("Honor 
Transactions")
+  ->withDescription(
+  "Specifies whether or not NiFi should honor transactional guarantees 
when communicating with Kafka. If false, the Processor will use an \"isolation 
level\" of "
+  "read_uncommitted. This means that messages will be received as soon as 
they are written to Kafka but will be pulled, even if the producer cancels the 
transactions. "
+  "If this value is true, NiFi will not receive any messages for which the 
producer's transaction was canceled, but this can result in some latency since 
the consumer "
+  "must wait for the producer to finish its entire transaction instead of 
pulling as the messages become available.")
+  ->withDefaultValue(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::GroupID(core::PropertyBuilder::createProperty("Group ID")
+  ->withDescription("A Group ID is used to identify consumers that are within 
the same consumer group. Corresponds to Kafka's 'group.id' property.")
+  ->supportsExpressionLanguage(true)
+  ->build());
+
+core::Property 
ConsumeKafka::OffsetReset(core::PropertyBuilder::createProperty("Offset Reset")
+  ->withDescription("Allows you to manage the condition when there is no 
initial offset in Kafka or if the current offset does not exist any more on the 
server (e.g. because that "
+  "data has been deleted). Corresponds to Kafka's 'auto.offset.reset' 
property.")
+  ->withAllowableValues({OFFSET_RESET_EARLIEST, 
OFFSET_RESET_LATEST, OFFSET_RESET_NONE})
+  ->withDefaultValue(OFFSET_RESET_LATEST)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::KeyAttributeEncoding(core::PropertyBuilder::createProperty("Key 
Attribute Encoding")
+  ->withDescription("FlowFiles that are emitted have an attribute named 
'kafka.key'. This property dictates how the value of the attribute should be 
encoded.")
+  ->withAllowableValues({KEY_ATTR_ENCODING_UTF_8, 
KEY_ATTR_ENCODING_HEX})
+  ->withDefa

[jira] [Created] (NIFI-8060) Create separate NAR for volatile provenance repo

2020-12-02 Thread Bryan Bende (Jira)
Bryan Bende created NIFI-8060:
-

 Summary: Create separate NAR for volatile provenance repo 
 Key: NIFI-8060
 URL: https://issues.apache.org/jira/browse/NIFI-8060
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Bryan Bende
Assignee: Bryan Bende


In order to share the volatile provenance repo between regular nifi and 
stateless nifi, it would be helpful to separate the volatile implementation 
into a separate NAR so that stateless does not have to depend on the full prov 
repo NAR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #940: MINIFICPP-1373 - Implement ConsumeKafka

2020-12-02 Thread GitBox


hunyadi-dev commented on a change in pull request #940:
URL: https://github.com/apache/nifi-minifi-cpp/pull/940#discussion_r534256571



##
File path: extensions/librdkafka/ConsumeKafka.cpp
##
@@ -0,0 +1,522 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "ConsumeKafka.h"
+
+#include 
+#include 
+
+#include "core/PropertyValidation.h"
+#include "utils/ProcessorConfigUtils.h"
+#include "utils/gsl.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+constexpr const std::size_t ConsumeKafka::DEFAULT_MAX_POLL_RECORDS;
+constexpr char const* ConsumeKafka::DEFAULT_MAX_POLL_TIME;
+
+core::Property 
ConsumeKafka::KafkaBrokers(core::PropertyBuilder::createProperty("Kafka 
Brokers")
+  ->withDescription("A comma-separated list of known Kafka Brokers in the 
format <host>:<port>.")
+  ->withDefaultValue("localhost:9092", 
core::StandardValidators::get().NON_BLANK_VALIDATOR)
+  ->supportsExpressionLanguage(true)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::SecurityProtocol(core::PropertyBuilder::createProperty("Security 
Protocol")
+  ->withDescription("This property is currently not supported. Protocol used 
to communicate with brokers. Corresponds to Kafka's 'security.protocol' 
property.")
+  ->withAllowableValues({SECURITY_PROTOCOL_PLAINTEXT/*, 
SECURITY_PROTOCOL_SSL, SECURITY_PROTOCOL_SASL_PLAINTEXT, 
SECURITY_PROTOCOL_SASL_SSL*/ })
+  ->withDefaultValue(SECURITY_PROTOCOL_PLAINTEXT)
+  ->isRequired(true)
+  ->build());
+
+core::Property 
ConsumeKafka::TopicNames(core::PropertyBuilder::createProperty("Topic Names")
+  ->withDescription("The name of the Kafka Topic(s) to pull from. More than 
one can be supplied if comma separated.")

Review comment:
   This is verbatim from nifi, but will rephrase.









[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


adamdebreceni commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r534193427



##
File path: libminifi/include/utils/CollectionUtils.h
##
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include <algorithm>
+#include <set>
+#include <string>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool haveCommonItem(const std::set<std::string>& a, const std::set<std::string>& b) {

Review comment:
   so that was that sound!
   
   true, that calling a file `CollectionUtils` and then having a single 
function in it which only works for `std::set` is not great :) 
   
   made it generic, and added tests









[jira] [Commented] (NIFI-7831) KeytabCredentialsService not working with HBase Clients

2020-12-02 Thread Rastislav Krist (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242377#comment-17242377
 ] 

Rastislav Krist commented on NIFI-7831:
---

[~bbende] we have built main branch (1.13.0-SNAPSHOT) from scratch

> KeytabCredentialsService not working with HBase Clients
> ---
>
> Key: NIFI-7831
> URL: https://issues.apache.org/jira/browse/NIFI-7831
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Manuel Navarro
>Assignee: Tamas Palfy
>Priority: Major
> Fix For: 1.13.0
>
>
> HBase Client (both 1.x and 2.x) is not able to renew ticket after expiration 
> with KeytabCredentialsService configured (same behaviour with principal and 
> password configured directly in the controller service). The same 
> KeytabCredentialsService works ok with Hive and Hbase clients configured in 
> the same NIFI cluster. 
> Note that the same configuration works ok in version 1.11 (error start to 
> appear after upgrade from 1.11 to 1.12). 
> After 24hours (time renewal period in our case), the following error appears 
> using HBase_2_ClientServices + HBase_2_ClientMapCacheService : 
> {code:java}
> 2020-09-17 09:00:27,014 ERROR [Relogin service.Chore.1] 
> org.apache.hadoop.hbase.AuthUtil Got exception while trying to refresh 
> credentials: loginUserFromKeyTab must be done first java.io.IOException: 
> loginUserFromKeyTab must be done first at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1194)
>  at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:1125)
>  at org.apache.hadoop.hbase.AuthUtil$1.chore(AuthUtil.java:206) at 
> org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186) at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> With HBase_1_1_2_ClientServices + HBase_1_1_2_ClientMapCacheService the 
> following error appears: 
>  
> {code:java}
>  2020-09-22 12:18:37,184 WARN [hconnection-0x55d9d8d1-shared--pool3-t769] 
> o.a.hadoop.hbase.ipc.AbstractRpcClient Exception encountered while connecting 
> to the server : javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)] 2020-09-22 12:18:37,197 ERROR 
> [hconnection-0x55d9d8d1-shared--pool3-t769] 
> o.a.hadoop.hbase.ipc.AbstractRpcClient SASL authentication failed. The most 
> likely cause is missing or invalid credentials. Consider 'kinit'. 
> javax.security.sasl.SaslException: GSS initiate failed at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>  at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:612)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:157)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:738)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:735)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:735)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:897)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:866)
>  at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1208) 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
>  at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:328)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:32879)
>  at 
> org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.j

[jira] [Commented] (NIFI-7831) KeytabCredentialsService not working with HBase Clients

2020-12-02 Thread Bryan Bende (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242373#comment-17242373
 ] 

Bryan Bende commented on NIFI-7831:
---

[~rkrist] you tested with a full build from main (1.13.0-SNAPSHOT), or you took 
just some NARs built from main and added them to your 1.12.1 deployment?

> KeytabCredentialsService not working with HBase Clients
> ---
>
> Key: NIFI-7831
> URL: https://issues.apache.org/jira/browse/NIFI-7831
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Manuel Navarro
>Assignee: Tamas Palfy
>Priority: Major
> Fix For: 1.13.0
>
>
> HBase Client (both 1.x and 2.x) is not able to renew ticket after expiration 
> with KeytabCredentialsService configured (same behaviour with principal and 
> password configured directly in the controller service). The same 
> KeytabCredentialsService works ok with Hive and Hbase clients configured in 
> the same NIFI cluster. 
> Note that the same configuration works ok in version 1.11 (error start to 
> appear after upgrade from 1.11 to 1.12). 
> After 24hours (time renewal period in our case), the following error appears 
> using HBase_2_ClientServices + HBase_2_ClientMapCacheService : 
> {code:java}
> 2020-09-17 09:00:27,014 ERROR [Relogin service.Chore.1] 
> org.apache.hadoop.hbase.AuthUtil Got exception while trying to refresh 
> credentials: loginUserFromKeyTab must be done first java.io.IOException: 
> loginUserFromKeyTab must be done first at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1194)
>  at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:1125)
>  at org.apache.hadoop.hbase.AuthUtil$1.chore(AuthUtil.java:206) at 
> org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186) at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> With HBase_1_1_2_ClientServices + HBase_1_1_2_ClientMapCacheService the 
> following error appears: 
>  
> {code:java}
>  2020-09-22 12:18:37,184 WARN [hconnection-0x55d9d8d1-shared--pool3-t769] 
> o.a.hadoop.hbase.ipc.AbstractRpcClient Exception encountered while connecting 
> to the server : javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)] 2020-09-22 12:18:37,197 ERROR 
> [hconnection-0x55d9d8d1-shared--pool3-t769] 
> o.a.hadoop.hbase.ipc.AbstractRpcClient SASL authentication failed. The most 
> likely cause is missing or invalid credentials. Consider 'kinit'. 
> javax.security.sasl.SaslException: GSS initiate failed at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>  at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:612)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:157)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:738)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:735)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:735)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:897)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:866)
>  at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1208) 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
>  at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:328)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:32879)
>  at 

[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


arpadboda commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r534161723



##
File path: libminifi/include/utils/CollectionUtils.h
##
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include <algorithm>
+#include <set>
+#include <string>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool haveCommonItem(const std::set<std::string>& a, const std::set<std::string>& b) {

Review comment:
   I think the filename itself screams for generalisation. :)









[jira] [Comment Edited] (NIFI-4985) Allow users to define a specific offset when starting ConsumeKafka

2020-12-02 Thread Dennis Jaheruddin (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242361#comment-17242361
 ] 

Dennis Jaheruddin edited comment on NIFI-4985 at 12/2/20, 1:19 PM:
---

I agree that the suggestion in its original form is most valuable. However, 
perhaps it would make sense to start with something more lightweight that would 
still solve the case 'we need to re-load because we did something wrong'.

If we could make this work for users without exact knowledge of how the data 
is stored in Kafka, it would be even more convenient.

Some thoughts:
 * Rather than specifying a specific offset, allow something like 'reset to 
read last x messages' (presumably per partition).
 * Or perhaps even simpler would be 'reset to timestamp Y'

This would logically still be defined, even if you have multiple topics and/or 
partitions.

 


was (Author: dennisjaheruddin):
I agree that the suggestion in its original form is most valuable. However, 
perhaps it would make sense to start with something more lightweight that would 
still solve the case 'we need to re-load because we did something wrong'.

If we would be able to make this work for users without exact knowledge of how 
the data is stored in Kafka, this would be even more convenient.

Some thoughts:
 * Rather than specifying a specific offset, allow something like 'reset to 
read last x messages' (presumably per partition).
 * Or perhaps even simpler would be 'reset to timestamp'

 

> Allow users to define a specific offset when starting ConsumeKafka
> --
>
> Key: NIFI-4985
> URL: https://issues.apache.org/jira/browse/NIFI-4985
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Sandish Kumar HN
>Priority: Major
>
> It'd be useful to add support for dynamic properties in the ConsumeKafka set of 
> processors so that users can define the offset to use when starting the 
> processor. The properties could be something like:
> {noformat}
> kafka...offset{noformat}
> If, for a configured topic, such a property is not defined for a given 
> partition, the consumer would use the auto offset property.
> If a custom offset is defined for a topic/partition, it'd be used when 
> initializing the consumer by calling:
> {noformat}
> seek(TopicPartition, long){noformat}
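The per-partition fallback described in the ticket (use an explicitly configured offset for a topic/partition if present, otherwise fall back to the auto offset policy) can be sketched as follows; the helper name and types are hypothetical illustrations, not code from the NiFi codebase:

```cpp
#include <map>
#include <optional>
#include <string>
#include <utility>

// Hypothetical helper: (topic, partition) pair as used by the ticket's
// per-partition offset properties.
using TopicPartition = std::pair<std::string, int>;

// Returns the configured start offset for a topic/partition, or nullopt if
// none was configured, in which case the caller applies auto.offset.reset.
std::optional<long> resolveStartOffset(
    const std::map<TopicPartition, long>& configured,
    const std::string& topic, int partition) {
  auto it = configured.find(TopicPartition{topic, partition});
  if (it == configured.end()) {
    return std::nullopt;  // fall back to the auto offset property
  }
  return it->second;      // caller would issue seek(TopicPartition, offset)
}
```

With this shape, only the partitions that have an explicit property get a seek call when the consumer is initialized; all others keep the consumer's default behavior.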



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (NIFI-4985) Allow users to define a specific offset when starting ConsumeKafka

2020-12-02 Thread Dennis Jaheruddin (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242361#comment-17242361
 ] 

Dennis Jaheruddin edited comment on NIFI-4985 at 12/2/20, 1:19 PM:
---

I agree that the suggestion in its original form is most valuable. However, 
perhaps it would make sense to start with something more lightweight that would 
still solve the case 'we need to re-load because we did something wrong but we 
can't afford to start from the beginning and calling the Kafka team takes 
forever'.

If we could make this work for users without exact knowledge of how the data 
is stored in Kafka, it would be even more convenient.

Some thoughts:
 * Rather than specifying a specific offset, allow something like 'reset to 
read last x messages' (presumably per partition).
 * Or perhaps even simpler would be 'reset to timestamp Y'

This would logically still be defined, even if you have multiple topics and/or 
partitions.

 


was (Author: dennisjaheruddin):
I agree that the suggestion in its original form is most valuable. However, 
perhaps it would make sense to start with something more lightweight that would 
still solve the case 'we need to re-load because we did something wrong'.

If we would be able to make this work for users without exact knowledge of how 
the data is stored in Kafka, this would be even more convenient.

Some thoughts:
 * Rather than specifying a specific offset, allow something like 'reset to 
read last x messages' (presumably per partition).
 * Or perhaps even simpler would be 'reset to timestamp Y'

This would logically still be defined, even if you have multiple topics and/or 
partitions.

 

> Allow users to define a specific offset when starting ConsumeKafka
> --
>
> Key: NIFI-4985
> URL: https://issues.apache.org/jira/browse/NIFI-4985
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Sandish Kumar HN
>Priority: Major
>
> It'd be useful to add support for dynamic properties in the ConsumeKafka set of 
> processors so that users can define the offset to use when starting the 
> processor. The properties could be something like:
> {noformat}
> kafka...offset{noformat}
> If, for a configured topic, such a property is not defined for a given 
> partition, the consumer would use the auto offset property.
> If a custom offset is defined for a topic/partition, it'd be used when 
> initializing the consumer by calling:
> {noformat}
> seek(TopicPartition, long){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-4985) Allow users to define a specific offset when starting ConsumeKafka

2020-12-02 Thread Dennis Jaheruddin (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242361#comment-17242361
 ] 

Dennis Jaheruddin commented on NIFI-4985:
-

I agree that the suggestion in its original form is most valuable. However, 
perhaps it would make sense to start with something more lightweight that would 
still solve the case 'we need to re-load because we did something wrong'.

If we could make this work for users without exact knowledge of how the data 
is stored in Kafka, it would be even more convenient.

Some thoughts:
 * Rather than specifying a specific offset, allow something like 'reset to 
read last x messages' (presumably per partition).
 * Or perhaps even simpler would be 'reset to timestamp'

 

> Allow users to define a specific offset when starting ConsumeKafka
> --
>
> Key: NIFI-4985
> URL: https://issues.apache.org/jira/browse/NIFI-4985
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Sandish Kumar HN
>Priority: Major
>
> It'd be useful to add support for dynamic properties in the ConsumeKafka set of 
> processors so that users can define the offset to use when starting the 
> processor. The properties could be something like:
> {noformat}
> kafka...offset{noformat}
> If, for a configured topic, such a property is not defined for a given 
> partition, the consumer would use the auto offset property.
> If a custom offset is defined for a topic/partition, it'd be used when 
> initializing the consumer by calling:
> {noformat}
> seek(TopicPartition, long){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8059) As a user of the PutEmail processor I would like to be able to sign emails using a certificate

2020-12-02 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242357#comment-17242357
 ] 

David Handermann commented on NIFI-8059:


This feature would be a useful addition to the email capabilities of NiFi.  
Encrypting and signing emails involves a number of configuration options, so it 
seems that it would be better to implement these capabilities as separate 
components, as opposed to introducing more complexity into the existing 
PutEmail Processor.

Although the SSLContextService provides access to the configured key store, it 
still requires constructing a Java KeyStore object and extracting certificates. 
 To support greater flexibility, using custom Controller Services to provide 
access to X.509 certificates and keys would be helpful.

NIFI-7836 and the corresponding [PR|https://github.com/apache/nifi/pull/4557] 
include Processors and Controller Services to handle CMS encryption, which 
underlies the S/MIME specification.  Additional Processors would be necessary 
to support signed emails, but it would be helpful to evaluate those components 
to see how they might align with the use case described.

> As a user of the PutEmail processor I would like to be able to sign emails 
> using a certificate
> --
>
> Key: NIFI-8059
> URL: https://issues.apache.org/jira/browse/NIFI-8059
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Cory Wixom
>Priority: Minor
>  Labels: security
>
> The current PutEmail processor is missing a couple of important features that 
> would make it usable in certain secure environments. The most important is 
> that it doesn't allow you to digitally sign an email with a certificate.
> This ticket is to modify the processor to allow taking an SSLContext and 
> signing the email with the certificate.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


adamdebreceni commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r534140777



##
File path: libminifi/include/utils/CollectionUtils.h
##
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool haveCommonItem(const std::set<std::string>& a, const std::set<std::string>& b) {

Review comment:
   Even though in my opinion "could be made generic" does not imply "should 
be made generic" (because nobody has asked for it yet), since you and @lordgamez 
also think it is worth generalizing, I am turning this into a template





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


arpadboda commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r534122907



##
File path: libminifi/include/utils/CollectionUtils.h
##
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool haveCommonItem(const std::set<std::string>& a, const std::set<std::string>& b) {

Review comment:
   This is specialised for set and string with no reason. 
   
   If this just requires two iterables with the same underlying type and you 
use find instead of count, it will work with vectors, lists, etc., while 
keeping the log(N) efficiency for sets. 
   
   Not to mention unordered sets as well. 
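A possible generalization along the lines suggested here (a hypothetical sketch, not the code that was merged): accept any two containers with a common value type, use `std::find` for the general case, and keep an overload with O(log N) lookup when the second container is a `std::set`.

```cpp
#include <algorithm>
#include <set>
#include <string>
#include <vector>

// Generic case: works with any two iterable containers whose value types
// are comparable; linear search over the second range via std::find.
template <typename A, typename B>
bool haveCommonItem(const A& a, const B& b) {
  return std::any_of(a.begin(), a.end(), [&](const auto& item) {
    return std::find(b.begin(), b.end(), item) != b.end();
  });
}

// Overload chosen by partial ordering when the second container is a
// std::set, keeping the O(log N) per-lookup cost of set::find.
template <typename A, typename T>
bool haveCommonItem(const A& a, const std::set<T>& b) {
  return std::any_of(a.begin(), a.end(), [&](const T& item) {
    return b.find(item) != b.end();
  });
}
```

With this shape, `haveCommonItem(someVector, someSet)` picks the set overload automatically, while vectors, lists, and other containers fall back to the generic version.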





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


arpadboda commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r534122907



##
File path: libminifi/include/utils/CollectionUtils.h
##
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool haveCommonItem(const std::set<std::string>& a, const std::set<std::string>& b) {

Review comment:
   This is specialised for set with no reason. 
   
   If this just requires two iterables with the same underlying type and you 
use find instead of count, it will work with vectors, lists, etc., while 
keeping the log(N) efficiency for sets. 
   
   Not to mention unordered sets as well. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


arpadboda commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r534122907



##
File path: libminifi/include/utils/CollectionUtils.h
##
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+bool haveCommonItem(const std::set<std::string>& a, const std::set<std::string>& b) {

Review comment:
   This is specialised for set with no reason. 
   
   If this just requires two iterables with the same underlying type and you 
use find instead of count, it will work with vectors, lists, etc., while 
keeping the log(N) efficiency for sets. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-8059) As a user of the PutEmail processor I would like to be able to sign emails using a certificate

2020-12-02 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8059:
---
Labels: security  (was: )

> As a user of the PutEmail processor I would like to be able to sign emails 
> using a certificate
> --
>
> Key: NIFI-8059
> URL: https://issues.apache.org/jira/browse/NIFI-8059
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Cory Wixom
>Priority: Minor
>  Labels: security
>
> The current PutEmail processor is missing a couple of important features that 
> would make it usable in certain secure environments. The most important is 
> that it doesn't allow you to digitally sign an email with a certificate.
> This ticket is to modify the processor to allow taking an SSLContext and 
> signing the email with the certificate.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (MINIFICPP-1410) Add permissions property support for Putfile processor

2020-12-02 Thread Gabor Gyimesi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Gyimesi resolved MINIFICPP-1410.
--
Resolution: Fixed

> Add permissions property support for Putfile processor
> --
>
> Key: MINIFICPP-1410
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1410
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Gabor Gyimesi
>Assignee: Gabor Gyimesi
>Priority: Minor
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The Putfile processor, when not using the Boost library, creates directories 
> with default 700 permissions on non-Windows systems. This is too restrictive 
> in some use cases, so the permissions should be configurable. A Permissions 
> property needs to be added to specify the permissions of the created files 
> and directories. The Boost implementation should also be updated and tested 
> separately to stay in sync with the native implementation.
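A minimal sketch of what parsing such a Permissions property could look like (a hypothetical helper, not the merged implementation): an octal string such as "0755" is converted to a mode_t, with invalid input rejected so the processor can fall back to its default.

```cpp
#include <cerrno>
#include <cstdlib>
#include <optional>
#include <string>
#include <sys/stat.h>  // mode_t (POSIX)

// Hypothetical parser for a Permissions property value.
// Accepts octal strings like "644" or "0755"; returns std::nullopt for
// empty, non-octal, or out-of-range input so the caller can keep the
// processor's default permissions.
std::optional<mode_t> parsePermissions(const std::string& value) {
  if (value.empty()) {
    return std::nullopt;
  }
  char* end = nullptr;
  errno = 0;
  unsigned long mode = std::strtoul(value.c_str(), &end, 8);
  if (errno != 0 || end == value.c_str() || *end != '\0' || mode > 07777) {
    return std::nullopt;  // not a valid octal permission string
  }
  return static_cast<mode_t>(mode);
}
```

The parsed mode could then be passed to mkdir/chmod when creating files and directories, instead of the hard-coded 700 default.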



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #942: MINIFICPP-1410 Add permissions property support for Putfile processor

2020-12-02 Thread GitBox


arpadboda closed pull request #942:
URL: https://github.com/apache/nifi-minifi-cpp/pull/942


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (MINIFICPP-1401) MiNiFi should be able to get certs from Win truststore

2020-12-02 Thread Ferenc Gerlits (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Gerlits updated MINIFICPP-1401:
--
Attachment: use-windows-certificate-store_testing.txt

> MiNiFi should be able to get certs from Win truststore
> --
>
> Key: MINIFICPP-1401
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1401
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Ferenc Gerlits
>Assignee: Ferenc Gerlits
>Priority: Major
> Attachments: image-2020-12-02-09-35-51-463.png, 
> use-windows-certificate-store_testing.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If MiNiFi C++ could get certs from the OS truststore, users wouldn't need to 
> export them manually. Things would just work after installation. 
> Hint for implementation details: 
> [https://stackoverflow.com/questions/9507184/can-openssl-on-windows-use-the-system-certificate-store]
> The following requirements have been shared by the customer:
>  * it should be possible to define the CN/DN of the cert to use from the Cert 
> Store
>  * it should be possible to define the Key Usage of the cert to use from the 
> Cert Store
> This is because the Cert Store might contain multiple certs and possibly 
> multiple certs with the same CN/DN. Looking at CN/DN + Key Usage is required; 
> the first cert matching both filters is the one to use. If no matching cert is 
> found, the service should not start.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (MINIFICPP-1418) MiNiFi C++ Cross-Compile for arm-linux-gnueabihf

2020-12-02 Thread Michael Sandoval Espinosa (Jira)
Michael Sandoval Espinosa created MINIFICPP-1418:


 Summary: MiNiFi C++ Cross-Compile for arm-linux-gnueabihf
 Key: MINIFICPP-1418
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1418
 Project: Apache NiFi MiNiFi C++
  Issue Type: Question
Affects Versions: 0.7.0
 Environment: Processor Intel® Core™ i7-5930K CPU @ 3.50GHz × 12
OS Ubuntu 18.04.4 LTS
Cross Compile toolchain Cross-pi gcc 10.2.0
Reporter: Michael Sandoval Espinosa


I'm trying to cross-compile the MiNiFi C++ agent for the Raspberry Pi Zero W. In 
the past I was able to compile version 0.4.0 on the device, but right now I'm 
trying to compile the newest release version to run some tests with custom 
processors. I don't have much experience with cross-compiling, but I have 
created the toolchain, the (supposed) environment, and the cmake_toolchain_file 
following several posts and questions on blogs and forums. I have been 
searching for information about this procedure, but only found how to import 
the project as an Autotools project in Eclipse, and of course there are a lot 
of configurations that need to be performed before that. So I was wondering if 
there is any documentation to follow to get this done, or if someone could 
please point me in the right direction; I would appreciate it a lot.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


adamdebreceni commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r534006825



##
File path: encrypt-config/ArgParser.cpp
##
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include "ArgParser.h"
+#include "utils/OptionalUtils.h"
+#include "utils/StringUtils.h"
+#include "CommandException.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace encrypt_config {
+
+const std::vector Arguments::registered_args_{
+{std::set<std::string>{"--minifi-home", "-m"},
+ true,
+ "minifi home",
+ "Specifies the home directory used by the minifi agent"}
+};
+
+const std::vector Arguments::registered_flags_{
+{std::set<std::string>{"--help", "-h"},
+ "Prints this help message"},
+{std::set<std::string>{"--encrypt-flow-config"},
+ "If set, the flow configuration file (as specified in minifi.properties) 
is also encrypted."}
+};
+
+bool haveCommonItem(const std::set<std::string>& a, const std::set<std::string>& b) {

Review comment:
   moved it





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


adamdebreceni commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r534001123



##
File path: libminifi/src/c2/C2Client.cpp
##
@@ -0,0 +1,318 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include "c2/C2Client.h"
+#include "core/state/nodes/MetricsBase.h"
+#include "core/state/nodes/QueueMetrics.h"
+#include "core/state/nodes/AgentInformation.h"
+#include "core/state/nodes/RepositoryMetrics.h"
+#include "properties/Configure.h"
+#include "core/state/UpdateController.h"
+#include "core/controller/ControllerServiceProvider.h"
+#include "c2/C2Agent.h"
+#include "core/state/nodes/FlowInformation.h"
+#include "utils/file/FileSystem.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace c2 {
+
+C2Client::C2Client(
+std::shared_ptr configuration, 
std::shared_ptr provenance_repo,
+std::shared_ptr flow_file_repo, 
std::shared_ptr content_repo,
+std::unique_ptr flow_configuration, 
std::shared_ptr filesystem,
+std::shared_ptr logger)
+: core::Flow(std::move(provenance_repo), std::move(flow_file_repo), 
std::move(content_repo), std::move(flow_configuration)),
+  configuration_(std::move(configuration)),
+  filesystem_(std::move(filesystem)),
+  logger_(std::move(logger)) {}
+
+void C2Client::stopC2() {
+  if (c2_agent_) {
+c2_agent_->stop();
+  }
+}
+
+bool C2Client::isC2Enabled() const {
+  std::string c2_enable_str;
+  configuration_->get(Configure::nifi_c2_enable, "c2.enable", c2_enable_str);
+  return utils::StringUtils::toBool(c2_enable_str).value_or(false);
+}
+
+void C2Client::initialize(core::controller::ControllerServiceProvider 
*controller, const std::shared_ptr &update_sink) {
+  std::string class_str;
+  configuration_->get("nifi.c2.agent.class", "c2.agent.class", class_str);
+  configuration_->setAgentClass(class_str);
+
+  if (!isC2Enabled()) {
+return;
+  }
+
+  if (class_str.empty()) {
+logger_->log_error("Class name must be defined when C2 is enabled");
+throw std::runtime_error("Class name must be defined when C2 is enabled");
+  }
+
+  std::string identifier_str;
+  if (!configuration_->get("nifi.c2.agent.identifier", "c2.agent.identifier", 
identifier_str) || identifier_str.empty()) {
+// set to the flow controller's identifier
+identifier_str = getControllerUUID().to_string();
+  }
+  configuration_->setAgentIdentifier(identifier_str);
+
+  if (initialized_ && !flow_update_) {
+return;
+  }
+
+  {
+std::lock_guard guard(metrics_mutex_);
+component_metrics_.clear();
+// root_response_nodes_ was not cleared before, it is unclear if that was 
intentional
+  }
+
+  std::map> connections;
+  if (root_ != nullptr) {
+root_->getConnections(connections);
+  }
+
+  std::string class_csv;
+  if (configuration_->get("nifi.c2.root.classes", class_csv)) {
+std::vector<std::string> classes = utils::StringUtils::split(class_csv, ",");
+
+for (const std::string& clazz : classes) {
+  auto processor = 
std::dynamic_pointer_cast(core::ClassLoader::getDefaultClassLoader().instantiate(clazz,
 clazz));
+  if (nullptr == processor) {
+logger_->log_error("No metric defined for %s", clazz);
+continue;
+  }
+  auto identifier = 
std::dynamic_pointer_cast(processor);
+  if (identifier != nullptr) {
+identifier->setIdentifier(identifier_str);
+identifier->setAgentClass(class_str);
+  }
+  auto monitor = 
std::dynamic_pointer_cast(processor);
+  if (monitor != nullptr) {
+monitor->addRepository(provenance_repo_);
+monitor->addRepository(flow_file_repo_);
+monitor->setStateMonitor(update_sink);
+  }
+  auto flowMonitor = 
std::dynamic_pointer_cast(processor);
+  if (flowMonitor != nullptr) {
+for (auto &con : connections) {
+  flowMonitor->addConnection(con.second);
+}
+flowMonitor->setStateMonitor(update_sink);
+flowMonitor->setFlowVersion(flow_configuration_->getFlowVersion());
+  }
+  std::lock_guard guard(metrics_mutex_);
+  root_response_nodes_[processor->getName()] = processor;
+}
+  }
+
+  //

[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


lordgamez commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r533999239



##
File path: encrypt-config/ArgParser.cpp
##
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include "ArgParser.h"
+#include "utils/OptionalUtils.h"
+#include "utils/StringUtils.h"
+#include "CommandException.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace encrypt_config {
+
+const std::vector Arguments::registered_args_{
+{std::set<std::string>{"--minifi-home", "-m"},
+ true,
+ "minifi home",
+ "Specifies the home directory used by the minifi agent"}
+};
+
+const std::vector Arguments::registered_flags_{
+{std::set<std::string>{"--help", "-h"},
+ "Prints this help message"},
+{std::set<std::string>{"--encrypt-flow-config"},
+ "If set, the flow configuration file (as specified in minifi.properties) 
is also encrypted."}
+};
+
+bool haveCommonItem(const std::set<std::string>& a, const std::set<std::string>& b) {

Review comment:
   I would move it now, just in case someone needs a similar utility function in the future; they would find it more easily there. A wide generalization may not be needed at the moment; anyone using it later could handle that if needed.
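   For reference, a minimal sketch of the utility under discussion. The signature is taken from the quoted diff; the body is an assumed reconstruction, not the actual PR code:

```cpp
#include <algorithm>
#include <set>
#include <string>

// Returns true iff the two sets share at least one element.
// Body is a hypothetical reconstruction; only the signature comes from the diff.
bool haveCommonItem(const std::set<std::string>& a, const std::set<std::string>& b) {
  return std::any_of(a.begin(), a.end(),
                     [&b](const std::string& item) { return b.count(item) != 0; });
}
```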





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


lordgamez commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r533997043



##
File path: libminifi/src/c2/C2Client.cpp
##
@@ -0,0 +1,318 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include "c2/C2Client.h"
+#include "core/state/nodes/MetricsBase.h"
+#include "core/state/nodes/QueueMetrics.h"
+#include "core/state/nodes/AgentInformation.h"
+#include "core/state/nodes/RepositoryMetrics.h"
+#include "properties/Configure.h"
+#include "core/state/UpdateController.h"
+#include "core/controller/ControllerServiceProvider.h"
+#include "c2/C2Agent.h"
+#include "core/state/nodes/FlowInformation.h"
+#include "utils/file/FileSystem.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace c2 {
+
+C2Client::C2Client(
+    std::shared_ptr<Configure> configuration, std::shared_ptr<core::Repository> provenance_repo,
+    std::shared_ptr<core::Repository> flow_file_repo, std::shared_ptr<core::ContentRepository> content_repo,
+    std::unique_ptr<core::FlowConfiguration> flow_configuration, std::shared_ptr<utils::file::FileSystem> filesystem,
+    std::shared_ptr<logging::Logger> logger)
+    : core::Flow(std::move(provenance_repo), std::move(flow_file_repo), std::move(content_repo), std::move(flow_configuration)),
+      configuration_(std::move(configuration)),
+      filesystem_(std::move(filesystem)),
+      logger_(std::move(logger)) {}
+
+void C2Client::stopC2() {
+  if (c2_agent_) {
+    c2_agent_->stop();
+  }
+}
+
+bool C2Client::isC2Enabled() const {
+  std::string c2_enable_str;
+  configuration_->get(Configure::nifi_c2_enable, "c2.enable", c2_enable_str);
+  return utils::StringUtils::toBool(c2_enable_str).value_or(false);
+}
+
+void C2Client::initialize(core::controller::ControllerServiceProvider *controller, const std::shared_ptr<state::StateMonitor> &update_sink) {
+  std::string class_str;
+  configuration_->get("nifi.c2.agent.class", "c2.agent.class", class_str);
+  configuration_->setAgentClass(class_str);
+
+  if (!isC2Enabled()) {
+    return;
+  }
+
+  if (class_str.empty()) {
+    logger_->log_error("Class name must be defined when C2 is enabled");
+    throw std::runtime_error("Class name must be defined when C2 is enabled");
+  }
+
+  std::string identifier_str;
+  if (!configuration_->get("nifi.c2.agent.identifier", "c2.agent.identifier", identifier_str) || identifier_str.empty()) {
+    // set to the flow controller's identifier
+    identifier_str = getControllerUUID().to_string();
+  }
+  configuration_->setAgentIdentifier(identifier_str);
+
+  if (initialized_ && !flow_update_) {
+    return;
+  }
+
+  {
+    std::lock_guard<std::mutex> guard(metrics_mutex_);
+    component_metrics_.clear();
+    // root_response_nodes_ was not cleared before; it is unclear if that was intentional
+  }
+
+  std::map<std::string, std::shared_ptr<Connection>> connections;
+  if (root_ != nullptr) {
+    root_->getConnections(connections);
+  }
+
+  std::string class_csv;
+  if (configuration_->get("nifi.c2.root.classes", class_csv)) {
+    std::vector<std::string> classes = utils::StringUtils::split(class_csv, ",");
+
+    for (const std::string& clazz : classes) {
+      auto processor = std::dynamic_pointer_cast<state::response::ResponseNode>(core::ClassLoader::getDefaultClassLoader().instantiate(clazz, clazz));
+      if (nullptr == processor) {
+        logger_->log_error("No metric defined for %s", clazz);
+        continue;
+      }
+      auto identifier = std::dynamic_pointer_cast<state::response::AgentIdentifier>(processor);
+      if (identifier != nullptr) {
+        identifier->setIdentifier(identifier_str);
+        identifier->setAgentClass(class_str);
+      }
+      auto monitor = std::dynamic_pointer_cast<state::response::AgentMonitor>(processor);
+      if (monitor != nullptr) {
+        monitor->addRepository(provenance_repo_);
+        monitor->addRepository(flow_file_repo_);
+        monitor->setStateMonitor(update_sink);
+      }
+      auto flowMonitor = std::dynamic_pointer_cast<state::response::FlowMonitor>(processor);
+      if (flowMonitor != nullptr) {
+        for (auto &con : connections) {
+          flowMonitor->addConnection(con.second);
+        }
+        flowMonitor->setStateMonitor(update_sink);
+        flowMonitor->setFlowVersion(flow_configuration_->getFlowVersion());
+      }
+      std::lock_guard<std::mutex> guard(metrics_mutex_);
+      root_response_nodes_[processor->getName()] = processor;
+    }
+  }
+
+  // get

[jira] [Commented] (NIFI-7831) KeytabCredentialsService not working with HBase Clients

2020-12-02 Thread Rastislav Krist (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242173#comment-17242173
 ] 

Rastislav Krist commented on NIFI-7831:
---

We built and deployed a snapshot from the main branch yesterday (12/01/2020), which should contain the fix from NIFI-7954, but we are still getting the same error: `loginUserFromKeyTab must be done first`

Is there anything we might have missed along the way?

> KeytabCredentialsService not working with HBase Clients
> ---
>
> Key: NIFI-7831
> URL: https://issues.apache.org/jira/browse/NIFI-7831
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Manuel Navarro
>Assignee: Tamas Palfy
>Priority: Major
> Fix For: 1.13.0
>
>
> HBase Client (both 1.x and 2.x) is not able to renew ticket after expiration 
> with KeytabCredentialsService configured (same behaviour with principal and 
> password configured directly in the controller service). The same 
> KeytabCredentialsService works ok with Hive and Hbase clients configured in 
> the same NIFI cluster. 
> Note that the same configuration works ok in version 1.11 (error start to 
> appear after upgrade from 1.11 to 1.12). 
> After 24hours (time renewal period in our case), the following error appears 
> using HBase_2_ClientServices + HBase_2_ClientMapCacheService : 
> {code:java}
> 2020-09-17 09:00:27,014 ERROR [Relogin service.Chore.1] 
> org.apache.hadoop.hbase.AuthUtil Got exception while trying to refresh 
> credentials: loginUserFromKeyTab must be done first java.io.IOException: 
> loginUserFromKeyTab must be done first at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1194)
>  at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:1125)
>  at org.apache.hadoop.hbase.AuthUtil$1.chore(AuthUtil.java:206) at 
> org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186) at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> With HBase_1_1_2_ClientServices + HBase_1_1_2_ClientMapCacheService the 
> following error appears: 
>  
> {code:java}
>  2020-09-22 12:18:37,184 WARN [hconnection-0x55d9d8d1-shared--pool3-t769] 
> o.a.hadoop.hbase.ipc.AbstractRpcClient Exception encountered while connecting 
> to the server : javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)] 2020-09-22 12:18:37,197 ERROR 
> [hconnection-0x55d9d8d1-shared--pool3-t769] 
> o.a.hadoop.hbase.ipc.AbstractRpcClient SASL authentication failed. The most 
> likely cause is missing or invalid credentials. Consider 'kinit'. 
> javax.security.sasl.SaslException: GSS initiate failed at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>  at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:612)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:157)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:738)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:735)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:735)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:897)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:866)
>  at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1208) 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
>  at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:328)
>  at 
> org.a

[jira] [Assigned] (MINIFICPP-1329) Fix implementation and usages of StringUtils::StringToBool

2020-12-02 Thread Ferenc Gerlits (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Gerlits reassigned MINIFICPP-1329:
-

Assignee: Amina Dinari

> Fix implementation and usages of StringUtils::StringToBool
> --
>
> Key: MINIFICPP-1329
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1329
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Adam Hunyadi
>Assignee: Amina Dinari
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene, beginner, newbie, starter
>
> *Background:*
> Conversions from string to other values in MiNiFi usually follow the 
> convention of changing an output value and returning a boolean denoting the 
> success of the conversion. For booleans however, this is not the case:
> {code:c++|title=Current Implementation}
>  bool StringUtils::StringToBool(std::string input, bool &output) {
>   std::transform(input.begin(), input.end(), input.begin(), ::tolower);
>   std::istringstream(input) >> std::boolalpha >> output;
>   return output;
> }
> {code}
> It is known to be misused in the code, for example this code assumes the 
> return value false corresponds to a parse failure:
>  
> [https://github.com/apache/nifi-minifi-cpp/blob/rel/minifi-cpp-0.7.0/extensions/opc/src/putopc.cpp#L319-L323]
> *Proposal:*
>  If we want to stay consistent with the other conversions, we can do this:
> {code:c++|title=Minimum change for the new implementation}
> bool StringUtils::StringToBool(std::string input, bool &output) {
>   std::transform(input.begin(), input.end(), input.begin(), ::tolower);
>   output = "true" == input; 
>   return output || "false" == input;
> }
> {code}
> However, many cases use the return value as the conversion result. One should 
> be cautious:
>  # Introduce the new implementation next to the old one as a function with a 
> different name
>  # Change the return value to void on the original
>  # Until the code compiles:
>  ## Eliminate all the usages of return values as parsed values
>  ## Redirect the checked value implementations to the copy
>  # Change the implementation of the original to return the conversion success
>  # Delete the copy
>  # Search and replace the name of the copy to the original
> (i) With a bit more work, we can potentially change the return type to an 
> optional, or a success enum.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (MINIFICPP-1401) MiNiFi should be able to get certs from Win truststore

2020-12-02 Thread Ferenc Gerlits (Jira)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242162#comment-17242162
 ] 

Ferenc Gerlits edited comment on MINIFICPP-1401 at 12/2/20, 8:42 AM:
-

Permission to use the code snippet from StackOverflow used in 
{{org::apache::nifi::minifi::utils::tls::extractPrivateKey()}}: 
!image-2020-12-02-09-35-51-463.png!


was (Author: fgerlits):
Permission to use the code snippet from StackOverflow used in 
{{org::apache::nifi::minifi::utils::tls::{color:#00627a}extractPrivateKey()}}: 
!image-2020-12-02-09-35-51-463.png!{color}

> MiNiFi should be able to get certs from Win truststore
> --
>
> Key: MINIFICPP-1401
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1401
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Ferenc Gerlits
>Assignee: Ferenc Gerlits
>Priority: Major
> Attachments: image-2020-12-02-09-35-51-463.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In case MiNiFi C++ could get cert from truststore of the OS, users wouldn't 
> need to export it manually. Things would just work after installation. 
> Hint for implementation details: 
> [https://stackoverflow.com/questions/9507184/can-openssl-on-windows-use-the-system-certificate-store]
> The following requirements have been shared by the customer:
>  * it should be possible to define the CN/DN of the cert to use from the Cert 
> Store
>  * it should be possible to define the Key Usage of the cert to use from the 
> Cert Store
> This is because the Cert Store might contain multiple certs and possibly 
> multiple certs with the same CN/DN. Looking at CN/DN + KeyUsage is required, 
> the first one matching both filters is the one to use. If no matching cert is 
> found, the service should not start.
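
The selection rule described above can be sketched in plain C++. The `CertInfo` struct and `selectCert` name are illustrative only, not the actual implementation; real code would enumerate certificates from the Windows certificate store via the Win32 Crypto API:

```cpp
#include <optional>
#include <string>
#include <vector>

// Illustrative model: pick the first certificate matching both the requested
// CN/DN and all required key-usage bits; no match means the service must not start.
struct CertInfo {
  std::string subject;   // CN/DN
  unsigned key_usage;    // key-usage bit mask
};

std::optional<CertInfo> selectCert(const std::vector<CertInfo>& store,
                                   const std::string& subject,
                                   unsigned required_usage) {
  for (const auto& cert : store) {
    if (cert.subject == subject &&
        (cert.key_usage & required_usage) == required_usage) {
      return cert;  // first one matching both filters wins
    }
  }
  return std::nullopt;
}
```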



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (MINIFICPP-1401) MiNiFi should be able to get certs from Win truststore

2020-12-02 Thread Ferenc Gerlits (Jira)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242162#comment-17242162
 ] 

Ferenc Gerlits commented on MINIFICPP-1401:
---

Permission to use the code snippet from StackOverflow used in 
{{org::apache::nifi::minifi::utils::tls::extractPrivateKey()}}: 
!image-2020-12-02-09-35-51-463.png!

> MiNiFi should be able to get certs from Win truststore
> --
>
> Key: MINIFICPP-1401
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1401
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Ferenc Gerlits
>Assignee: Ferenc Gerlits
>Priority: Major
> Attachments: image-2020-12-02-09-35-51-463.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In case MiNiFi C++ could get cert from truststore of the OS, users wouldn't 
> need to export it manually. Things would just work after installation. 
> Hint for implementation details: 
> [https://stackoverflow.com/questions/9507184/can-openssl-on-windows-use-the-system-certificate-store]
> The following requirements have been shared by the customer:
>  * it should be possible to define the CN/DN of the cert to use from the Cert 
> Store
>  * it should be possible to define the Key Usage of the cert to use from the 
> Cert Store
> This is because the Cert Store might contain multiple certs and possibly 
> multiple certs with the same CN/DN. Looking at CN/DN + KeyUsage is required, 
> the first one matching both filters is the one to use. If no matching cert is 
> found, the service should not start.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


adamdebreceni commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r533983259



##
File path: libminifi/include/utils/file/FileSystem.h
##
@@ -0,0 +1,57 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include "utils/OptionalUtils.h"
+#include "utils/EncryptionProvider.h"
+#include "core/logging/LoggerConfiguration.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace file {
+
+class FileSystem {
+ public:
+  explicit FileSystem(bool should_encrypt = false, utils::optional<utils::crypto::EncryptionProvider> encryptor = {});
+
+  FileSystem(const FileSystem&) = delete;
+  FileSystem(FileSystem&&) = delete;
+  FileSystem& operator=(const FileSystem&) = delete;
+  FileSystem& operator=(FileSystem&&) = delete;
+
+  utils::optional<std::string> read(const std::string& file_name);
+
+  bool write(const std::string& file_name, const std::string& file_content);
+
+ private:
+  bool should_encrypt_on_write_;
+  utils::optional<utils::crypto::EncryptionProvider> encryptor_;
+  std::shared_ptr<logging::Logger> logger_{logging::LoggerFactory<FileSystem>::getLogger()};
+};

Review comment:
   how about `FileIO`?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (MINIFICPP-1401) MiNiFi should be able to get certs from Win truststore

2020-12-02 Thread Ferenc Gerlits (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Gerlits updated MINIFICPP-1401:
--
Attachment: image-2020-12-02-09-35-51-463.png

> MiNiFi should be able to get certs from Win truststore
> --
>
> Key: MINIFICPP-1401
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1401
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Ferenc Gerlits
>Assignee: Ferenc Gerlits
>Priority: Major
> Attachments: image-2020-12-02-09-35-51-463.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In case MiNiFi C++ could get cert from truststore of the OS, users wouldn't 
> need to export it manually. Things would just work after installation. 
> Hint for implementation details: 
> [https://stackoverflow.com/questions/9507184/can-openssl-on-windows-use-the-system-certificate-store]
> The following requirements have been shared by the customer:
>  * it should be possible to define the CN/DN of the cert to use from the Cert 
> Store
>  * it should be possible to define the Key Usage of the cert to use from the 
> Cert Store
> This is because the Cert Store might contain multiple certs and possibly 
> multiple certs with the same CN/DN. Looking at CN/DN + KeyUsage is required, 
> the first one matching both filters is the one to use. If no matching cert is 
> found, the service should not start.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


adamdebreceni commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r533973565



##
File path: encrypt-config/ArgParser.cpp
##
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include "ArgParser.h"
+#include "utils/OptionalUtils.h"
+#include "utils/StringUtils.h"
+#include "CommandException.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace encrypt_config {
+
+const std::vector<Argument> Arguments::registered_args_{
+    {std::set<std::string>{"--minifi-home", "-m"},
+     true,
+     "minifi home",
+     "Specifies the home directory used by the minifi agent"}
+};
+
+const std::vector<FlagArgument> Arguments::registered_flags_{
+    {std::set<std::string>{"--help", "-h"},
+     "Prints this help message"},
+    {std::set<std::string>{"--encrypt-flow-config"},
+     "If set, the flow configuration file (as specified in minifi.properties) is also encrypted."}
+};
+
+bool haveCommonItem(const std::set<std::string>& a, const std::set<std::string>& b) {

Review comment:
   I would wait with generalizing it, because it is currently used once, I 
would move it to the utils on an as-needed basis, should we move it now?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


adamdebreceni commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r533971996



##
File path: libminifi/src/utils/StringUtils.cpp
##
@@ -31,7 +31,18 @@ bool StringUtils::StringToBool(std::string input, bool &output) {
   return output;
 }
 
-std::string StringUtils::trim(std::string s) {
+utils::optional<bool> StringUtils::toBool(const std::string& str) {

Review comment:
   there were tests for it before the "StringView massacre", but I threw the baby out with the bathwater; I have added a test for it. (Yes, it aims to replace `StringToBool`, but they have slightly different semantics for now, and `StringToBool` is used in many places, so I would retire `StringToBool` separately.)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


adamdebreceni commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r533970246



##
File path: libminifi/src/c2/C2Client.cpp
##
@@ -0,0 +1,318 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include "c2/C2Client.h"
+#include "core/state/nodes/MetricsBase.h"
+#include "core/state/nodes/QueueMetrics.h"
+#include "core/state/nodes/AgentInformation.h"
+#include "core/state/nodes/RepositoryMetrics.h"
+#include "properties/Configure.h"
+#include "core/state/UpdateController.h"
+#include "core/controller/ControllerServiceProvider.h"
+#include "c2/C2Agent.h"
+#include "core/state/nodes/FlowInformation.h"
+#include "utils/file/FileSystem.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace c2 {
+
+C2Client::C2Client(
+    std::shared_ptr<Configure> configuration, std::shared_ptr<core::Repository> provenance_repo,
+    std::shared_ptr<core::Repository> flow_file_repo, std::shared_ptr<core::ContentRepository> content_repo,
+    std::unique_ptr<core::FlowConfiguration> flow_configuration, std::shared_ptr<utils::file::FileSystem> filesystem,
+    std::shared_ptr<logging::Logger> logger)
+    : core::Flow(std::move(provenance_repo), std::move(flow_file_repo), std::move(content_repo), std::move(flow_configuration)),
+      configuration_(std::move(configuration)),
+      filesystem_(std::move(filesystem)),
+      logger_(std::move(logger)) {}
+
+void C2Client::stopC2() {
+  if (c2_agent_) {
+    c2_agent_->stop();
+  }
+}
+
+bool C2Client::isC2Enabled() const {
+  std::string c2_enable_str;
+  configuration_->get(Configure::nifi_c2_enable, "c2.enable", c2_enable_str);
+  return utils::StringUtils::toBool(c2_enable_str).value_or(false);
+}
+
+void C2Client::initialize(core::controller::ControllerServiceProvider *controller, const std::shared_ptr<state::StateMonitor> &update_sink) {
+  std::string class_str;
+  configuration_->get("nifi.c2.agent.class", "c2.agent.class", class_str);
+  configuration_->setAgentClass(class_str);
+
+  if (!isC2Enabled()) {
+    return;
+  }
+
+  if (class_str.empty()) {
+    logger_->log_error("Class name must be defined when C2 is enabled");
+    throw std::runtime_error("Class name must be defined when C2 is enabled");
+  }
+
+  std::string identifier_str;
+  if (!configuration_->get("nifi.c2.agent.identifier", "c2.agent.identifier", identifier_str) || identifier_str.empty()) {
+    // set to the flow controller's identifier
+    identifier_str = getControllerUUID().to_string();
+  }
+  configuration_->setAgentIdentifier(identifier_str);
+
+  if (initialized_ && !flow_update_) {
+    return;
+  }
+
+  {
+    std::lock_guard<std::mutex> guard(metrics_mutex_);
+    component_metrics_.clear();
+    // root_response_nodes_ was not cleared before; it is unclear if that was intentional
+  }
+
+  std::map<std::string, std::shared_ptr<Connection>> connections;
+  if (root_ != nullptr) {
+    root_->getConnections(connections);
+  }
+
+  std::string class_csv;
+  if (configuration_->get("nifi.c2.root.classes", class_csv)) {
+    std::vector<std::string> classes = utils::StringUtils::split(class_csv, ",");
+
+    for (const std::string& clazz : classes) {
+      auto processor = std::dynamic_pointer_cast<state::response::ResponseNode>(core::ClassLoader::getDefaultClassLoader().instantiate(clazz, clazz));
+      if (nullptr == processor) {
+        logger_->log_error("No metric defined for %s", clazz);
+        continue;
+      }
+      auto identifier = std::dynamic_pointer_cast<state::response::AgentIdentifier>(processor);
+      if (identifier != nullptr) {
+        identifier->setIdentifier(identifier_str);
+        identifier->setAgentClass(class_str);
+      }
+      auto monitor = std::dynamic_pointer_cast<state::response::AgentMonitor>(processor);
+      if (monitor != nullptr) {
+        monitor->addRepository(provenance_repo_);
+        monitor->addRepository(flow_file_repo_);
+        monitor->setStateMonitor(update_sink);
+      }
+      auto flowMonitor = std::dynamic_pointer_cast<state::response::FlowMonitor>(processor);
+      if (flowMonitor != nullptr) {
+        for (auto &con : connections) {
+          flowMonitor->addConnection(con.second);
+        }
+        flowMonitor->setStateMonitor(update_sink);
+        flowMonitor->setFlowVersion(flow_configuration_->getFlowVersion());
+      }
+      std::lock_guard<std::mutex> guard(metrics_mutex_);
+      root_response_nodes_[processor->getName()] = processor;
+    }
+  }
+
+  //

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #937: MINIFICPP-1402 - Encrypt flow configuration and change encryption key

2020-12-02 Thread GitBox


adamdebreceni commented on a change in pull request #937:
URL: https://github.com/apache/nifi-minifi-cpp/pull/937#discussion_r533968416



##
File path: libminifi/include/utils/StringUtils.h
##
@@ -23,7 +23,7 @@
 #include 
 #include 
 #ifdef WIN32
-  #include 
+#include 

Review comment:
   indeed, it was unintentional, fixed

##
File path: libminifi/include/utils/StringUtils.h
##
@@ -143,6 +146,12 @@ class StringUtils {
     return std::equal(endString.rbegin(), endString.rend(), value.rbegin(), [](unsigned char lc, unsigned char rc) {return tolower(lc) == tolower(rc);});
   }
 
+  inline static bool startsWith(const std::string &value, const std::string &startString) {

Review comment:
   done, and for `endsWith` as well
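
   For context, a case-insensitive `startsWith` mirroring the quoted `endsWith` might look like the sketch below; this is an assumed reconstruction, and the actual PR implementation may differ:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Case-insensitive prefix check, using the same tolower-based
// comparison convention as the quoted endsWith.
inline bool startsWith(const std::string& value, const std::string& startString) {
  if (startString.size() > value.size()) return false;
  return std::equal(startString.begin(), startString.end(), value.begin(),
                    [](unsigned char lc, unsigned char rc) {
                      return std::tolower(lc) == std::tolower(rc);
                    });
}
```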





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org