[GitHub] [nifi] mattyb149 commented on a change in pull request #4509: NIFI-7592: Allow NiFi to be started without a GUI/REST interface

2020-09-29 Thread GitBox


mattyb149 commented on a change in pull request #4509:
URL: https://github.com/apache/nifi/pull/4509#discussion_r497069583



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-headless-server/src/main/java/org/apache/nifi/headless/HeadlessNiFiServer.java
##
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.headless;
+
+import org.apache.nifi.NiFiServer;
+import org.apache.nifi.admin.service.AuditService;
+import org.apache.nifi.admin.service.impl.StandardAuditService;
+import org.apache.nifi.authorization.AuthorizationRequest;
+import org.apache.nifi.authorization.AuthorizationResult;
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.AuthorizerConfigurationContext;
+import org.apache.nifi.authorization.AuthorizerInitializationContext;
+import org.apache.nifi.authorization.FlowParser;
+import org.apache.nifi.authorization.exception.AuthorizationAccessException;
+import org.apache.nifi.authorization.exception.AuthorizerCreationException;
+import org.apache.nifi.authorization.exception.AuthorizerDestructionException;
+import org.apache.nifi.bundle.Bundle;
+import org.apache.nifi.controller.FlowController;
+import org.apache.nifi.controller.StandardFlowService;
+import org.apache.nifi.controller.flow.FlowManager;
+import org.apache.nifi.controller.repository.FlowFileEventRepository;
+import org.apache.nifi.controller.repository.metrics.RingBufferEventRepository;
+import org.apache.nifi.diagnostics.DiagnosticsFactory;
+import org.apache.nifi.encrypt.StringEncryptor;
+import org.apache.nifi.events.VolatileBulletinRepository;
+import org.apache.nifi.nar.ExtensionDiscoveringManager;
+import org.apache.nifi.nar.ExtensionManagerHolder;
+import org.apache.nifi.nar.ExtensionMapping;
+import org.apache.nifi.nar.StandardExtensionDiscoveringManager;
+import org.apache.nifi.registry.VariableRegistry;
+import org.apache.nifi.registry.flow.StandardFlowRegistryClient;
+import org.apache.nifi.registry.variable.FileBasedVariableRegistry;
+import org.apache.nifi.reporting.BulletinRepository;
+import org.apache.nifi.services.FlowService;
+import org.apache.nifi.util.NiFiProperties;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.List;
+import java.util.Set;
+
+/**
+ */
+public class HeadlessNiFiServer implements NiFiServer {
+
+    private static final Logger logger = LoggerFactory.getLogger(HeadlessNiFiServer.class);
+    private NiFiProperties props;
+    private Bundle systemBundle;
+    private Set<Bundle> bundles;
+    private FlowService flowService;
+
+    private static final String DEFAULT_SENSITIVE_PROPS_KEY = "nififtw!";
+
+    /**
+     * Default constructor
+     */
+    public HeadlessNiFiServer() {
+    }
+
+    public void start() {
+        try {
+
+            // Create a standard extension manager and discover extensions
+            final ExtensionDiscoveringManager extensionManager = new StandardExtensionDiscoveringManager();
+            extensionManager.discoverExtensions(systemBundle, bundles);
+            extensionManager.logClassLoaderMapping();
+
+            // Set the extension manager into the holder which makes it available to the Spring context via a factory bean
+            ExtensionManagerHolder.init(extensionManager);
+
+            // Enrich the flow xml using the Extension Manager mapping
+            final FlowParser flowParser = new FlowParser();
+            final FlowEnricher flowEnricher = new FlowEnricher(this, flowParser, props);
+            flowEnricher.enrichFlowWithBundleInformation();
+            logger.info("Loading Flow...");
+
+            FlowFileEventRepository flowFileEventRepository = new RingBufferEventRepository(5);
+            AuditService auditService = new StandardAuditService();
+            Authorizer authorizer = new Authorizer() {
+                @Override
+                public AuthorizationResult authorize(AuthorizationRequest request) throws AuthorizationAccessException {
+                    return AuthorizationResult.approved();
+                }
+
+                @Override
+                public void initialize(AuthorizerInitializationContext

[jira] [Created] (NIFI-7862) Allow PutDatabaseRecord to optionally create the target table if it doesn't exist

2020-09-29 Thread Matt Burgess (Jira)
Matt Burgess created NIFI-7862:
--

 Summary: Allow PutDatabaseRecord to optionally create the target 
table if it doesn't exist
 Key: NIFI-7862
 URL: https://issues.apache.org/jira/browse/NIFI-7862
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Matt Burgess


As part of NIFI-6934, the "Database Type" property was added, allowing 
DB-specific SQL/DML to be generated. That property could be used with a new 
true/false property "Create Table" that would create the target table if it 
didn't already exist.

I wouldn't go so far as to support schema migration (changing column types, 
e.g.), at least not for this case. The intent here would only be to check 
whether the table exists and, if not, attempt to create it, mapping the NiFi 
Record datatypes to DB-specific column types and generating/executing a 
DB-specific CREATE TABLE statement. If "Create Table" is set to true and an 
error occurs while attempting to create the table, the flowfile should be 
routed to failure or rolled back (depending on the setting of the "Rollback on 
Failure" property).
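For illustration, a rough sketch of the check-then-create logic described 
above; the toDbType() helper is hypothetical and stands in for the 
Record-to-column-type mapping, which would really come from the "Database 
Type" adapter:

{code}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

import org.apache.nifi.serialization.record.DataType;
import org.apache.nifi.serialization.record.RecordField;
import org.apache.nifi.serialization.record.RecordSchema;

// Hypothetical sketch only -- not the actual PutDatabaseRecord change.
public class CreateTableSketch {

    // Stand-in for the DB-specific type mapping discussed above.
    private String toDbType(final DataType dataType) {
        return "VARCHAR(100)"; // placeholder mapping
    }

    void createTableIfNotExists(final Connection con, final String tableName,
                                final RecordSchema schema) throws SQLException {
        final DatabaseMetaData dmd = con.getMetaData();
        try (ResultSet tables = dmd.getTables(null, null, tableName, new String[]{"TABLE"})) {
            if (tables.next()) {
                return; // table already exists, nothing to do
            }
        }
        final StringBuilder ddl = new StringBuilder("CREATE TABLE ").append(tableName).append(" (");
        final List<RecordField> fields = schema.getFields();
        for (int i = 0; i < fields.size(); i++) {
            final RecordField field = fields.get(i);
            ddl.append(field.getFieldName()).append(' ').append(toDbType(field.getDataType()));
            if (i < fields.size() - 1) {
                ddl.append(", ");
            }
        }
        ddl.append(")");
        try (Statement stmt = con.createStatement()) {
            // on failure here, route to failure or roll back per "Rollback on Failure"
            stmt.execute(ddl.toString());
        }
    }
}
{code}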



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7748) PutDatabaseRecord: Add an optional record writer: FreeFormRecordWriter

2020-09-29 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204286#comment-17204286
 ] 

Matt Burgess commented on NIFI-7748:


Can you explain a little more about your use case? For custom statements we 
have PutSQL; PutDatabaseRecord generates and prepares the SQL statements for 
you, in order to batch the data using a single prepared (i.e., parameterized) 
statement.
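For context, a minimal JDBC sketch of what a single prepared (parameterized) 
statement buys you when batching; this is illustrative only, not 
PutDatabaseRecord's actual code, and the Person type is hypothetical:

{code}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

class BatchInsertSketch {
    // One parameterized INSERT, prepared once and executed for many rows.
    void insertBatch(final Connection con, final List<Person> people) throws SQLException {
        final String sql = "INSERT INTO person (id, name) VALUES (?, ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            for (Person p : people) {
                ps.setLong(1, p.getId());
                ps.setString(2, p.getName());
                ps.addBatch();      // queue the row
            }
            ps.executeBatch();      // send the whole batch in one round trip
        }
    }

    // Hypothetical record type, for illustration only.
    static class Person {
        private final long id;
        private final String name;
        Person(long id, String name) { this.id = id; this.name = name; }
        long getId() { return id; }
        String getName() { return name; }
    }
}
{code}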

> PutDatabaseRecord: Add an optional record writer: FreeFormRecordWriter 
> ---
>
> Key: NIFI-7748
> URL: https://issues.apache.org/jira/browse/NIFI-7748
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: DEOM Damien
>Priority: Major
>
> The PutDatabaseRecord processor should allow the user to create custom 
> queries more easily.
> The FreeFormRecordWriter is very convenient, and could be added as an option 
> (as a replacement for the very obscure 'SQL' statement.type)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] ottobackwards commented on pull request #4262: NIFI-7436 Ability to walk Record FieldValue to root

2020-09-29 Thread GitBox


ottobackwards commented on pull request #4262:
URL: https://github.com/apache/nifi/pull/4262#issuecomment-700973644


   touch



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] ottobackwards commented on pull request #4384: NIFI-2072 Support named captures in ExtractText

2020-09-29 Thread GitBox


ottobackwards commented on pull request #4384:
URL: https://github.com/apache/nifi/pull/4384#issuecomment-700973513


   touch



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] ottobackwards commented on pull request #4513: NIFI-7761 Allow HandleHttpRequest to add specified form data to FlowF…

2020-09-29 Thread GitBox


ottobackwards commented on pull request #4513:
URL: https://github.com/apache/nifi/pull/4513#issuecomment-700973299


   touch



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits opened a new pull request #914: MINIFICPP-1323 Encrypt sensitive properties using libsodium

2020-09-29 Thread GitBox


fgerlits opened a new pull request #914:
URL: https://github.com/apache/nifi-minifi-cpp/pull/914


   https://issues.apache.org/jira/browse/MINIFICPP-1323
   
   Encrypt sensitive properties in the minifi.properties file. The change has 
four parts:
   
* introduce a new dependency: libsodium
* wrap libsodium functions in `EncryptionUtils`
* implement a new command-line tool called `encrypt-config` which can 
encrypt the sensitive properties
* decrypt the sensitive properties during startup.
   
   Please also review the wiki page 
https://cwiki.apache.org/confluence/display/MINIFI/Encrypt+sensitive+configuration+properties
 as part of this pull request.
   
   ---
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [x] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the LICENSE file?
   - [x] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-7859) Write execution duration attributes to SelectHive3QL output flow files

2020-09-29 Thread Nadeem (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nadeem updated NIFI-7859:
-
Status: Patch Available  (was: In Progress)

> Write execution duration attributes to SelectHive3QL output flow files 
> ---
>
> Key: NIFI-7859
> URL: https://issues.apache.org/jira/browse/NIFI-7859
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.12.1
>Reporter: Rahul Soni
>Assignee: Nadeem
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> SelectHive3QL & SelectHiveQL processors do not write attributes like 
> query.duration, query.executiontime, query.fetchtime etc. While the generic 
> ExecuteSQL processor does. These attributes can be useful in certain 
> scenarios where you want to measure the performance of the queries.
> Can these attributes be added to the said processor?
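For reference, a hedged sketch of the ExecuteSQL-style timing being requested 
(attribute names taken from the description above; the surrounding processor 
plumbing is elided, so this is illustrative rather than the actual patch):

{code}
// Illustrative fragment: time the query execution and the result fetch
// separately, then expose both (and their sum) as flow file attributes.
final StopWatch executionTimer = new StopWatch(true);
final ResultSet resultSet = statement.executeQuery(query);
final long executionTimeMillis = executionTimer.getElapsed(TimeUnit.MILLISECONDS);

final StopWatch fetchTimer = new StopWatch(true);
// ... write result rows to the flow file ...
final long fetchTimeMillis = fetchTimer.getElapsed(TimeUnit.MILLISECONDS);

final Map<String, String> attributes = new HashMap<>();
attributes.put("query.executiontime", String.valueOf(executionTimeMillis));
attributes.put("query.fetchtime", String.valueOf(fetchTimeMillis));
attributes.put("query.duration", String.valueOf(executionTimeMillis + fetchTimeMillis));
flowFile = session.putAllAttributes(flowFile, attributes);
{code}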



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] naddym opened a new pull request #4560: NIFI-7859: Support for capturing execution duration of query run as a…

2020-09-29 Thread GitBox


naddym opened a new pull request #4560:
URL: https://github.com/apache/nifi/pull/4560


   …ttributes in SelectHiveQL processors
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    #### Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] bbende commented on a change in pull request #4540: NIFI-7825: Support native library loading via absolute path

2020-09-29 Thread GitBox


bbende commented on a change in pull request #4540:
URL: https://github.com/apache/nifi/pull/4540#discussion_r496907835



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/bootstrap.conf
##
@@ -66,6 +66,15 @@ java.arg.16=-Djavax.security.auth.useSubjectCredsOnly=true
 # Please see https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_adminserver_config for configuration options.
 java.arg.17=-Dzookeeper.admin.enableServer=false
 
+# The following options configure a Java Agent to handle native library loading.
+# It is needed when a custom jar (eg. JDBC driver) has been configured on a component in the flow and this custom jar depends on a native library
+# and tries to load it by its absolute path (java.lang.System.load(String filename) method call).
+# Use this Java Agent only if you get "Native Library ... already loaded in another classloader" errors otherwise!
+#java.arg.18=-javaagent:./lib/aspectjweaver-${aspectj.version}.jar

Review comment:
   Well it does have an effect... If the jars are in lib then they are on 
the classpath of every NAR, so if someone makes a custom NAR that bundles a 
different version of aspectj, then there will be two versions on the classpath, 
and there is nothing the NAR can do to avoid the one in lib.
   
   I would actually vote for option 3... create a new module like 
`nifi-nar-aspect-utils` or whatever we want to call it, and put the dependency 
there along with `LoadNativeLibAspect`, then `lib/aspectj` can contain 
`nifi-nar-aspect-utils` jar and the aspectj jars. Would that work?
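   For readers following the thread: the agent weaves an aspect around 
`System.load(...)` so absolute-path native loads can be handled per 
classloader. A minimal sketch of what such an AspectJ aspect looks like 
(illustrative only, not the actual `LoadNativeLibAspect` from this PR):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Illustrative sketch only -- not the actual LoadNativeLibAspect.
@Aspect
public class NativeLoadAspectSketch {

    // Intercept every System.load(absolutePath) call the agent weaves.
    @Around("call(void java.lang.System.load(String)) && args(filename)")
    public Object aroundSystemLoad(final ProceedingJoinPoint pjp, final String filename) throws Throwable {
        // A real implementation could copy the library to a per-classloader
        // temp file before loading, so each NAR classloader loads its own copy
        // and avoids "Native Library ... already loaded in another classloader".
        return pjp.proceed(new Object[]{filename});
    }
}
```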





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] turcsanyip commented on a change in pull request #4540: NIFI-7825: Support native library loading via absolute path

2020-09-29 Thread GitBox


turcsanyip commented on a change in pull request #4540:
URL: https://github.com/apache/nifi/pull/4540#discussion_r496900671



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/bootstrap.conf
##
@@ -66,6 +66,15 @@ java.arg.16=-Djavax.security.auth.useSubjectCredsOnly=true
 # Please see https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_adminserver_config for configuration options.
 java.arg.17=-Dzookeeper.admin.enableServer=false
 
+# The following options configure a Java Agent to handle native library loading.
+# It is needed when a custom jar (eg. JDBC driver) has been configured on a component in the flow and this custom jar depends on a native library
+# and tries to load it by its absolute path (java.lang.System.load(String filename) method call).
+# Use this Java Agent only if you get "Native Library ... already loaded in another classloader" errors otherwise!
+#java.arg.18=-javaagent:./lib/aspectjweaver-${aspectj.version}.jar

Review comment:
   I double checked it and unfortunately the aspect class in 
`nifi-nar-utils.jar` depends on `aspectjrt.jar`.
   As `nifi-nar-utils.jar` is located in the lib folder and loaded by the 
system classloader, `aspectjrt.jar` must be on the system classpath too.
   And I did not want to separate the 2 AspectJ jars so I put 
`aspectjweaver.jar` to the lib folder.
   
   Options:
   1. leave it as it is now (both `aspectjrt.jar` and `aspectjweaver.jar` in 
the `lib` folder)
   2. leave `aspectjrt.jar` in `lib` and move `aspectjweaver.jar` to 
`lib/aspectj`
   3. find another place for the aspect class and move its `aspectjrt.jar` 
dependency there, move `aspectjweaver.jar` to `lib/aspectj`
   
   Option 2 seems to me a bit of a weird structure, and one of the AspectJ 
jars would still be on the system classpath.
   Option 3 could work but I would keep the native library loading stuff 
(`AbstractNativeLibHandlingClassLoader` and `LoadNativeLibAspect`) in the same 
module.
   
   So I would still vote for option 1.
   These jars have no effect if they are not referenced in our code / the agent 
is not turned on.
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-7861) Arrow Flight Server Controller and Processors

2020-09-29 Thread Peter Wicks (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Wicks updated NIFI-7861:
--
Description: 
* Create an Arrow Flight controller service that hosts an Arrow Flight service 
in NiFi
 * Create a Processor to read data from an Arrow Flight Server
 * Create a Processor to write data to an Arrow Flight Server
 * Create a Processor to fetch data from an Arrow Flight Server?

 

Due to the in-memory nature of the Arrow Table format, exposing record 
reader/writer services for cross processor consumption does not make sense, 
since the Arrow Table format does not exist on disk.

  was:
* Create an Arrow Flight controller service that hosts an Arrow Flight service 
in NiFi
 * Create a Processor to read data from an Arrow Flight Server
 * Create a Processor to write data to an Arrow Flight Server

 

Due to the in-memory nature of the Arrow Table format, exposing record 
reader/writer services for cross processor consumption does not make sense, 
since the Arrow Table format does not exist on disk.


> Arrow Flight Server Controller and Processors
> -
>
> Key: NIFI-7861
> URL: https://issues.apache.org/jira/browse/NIFI-7861
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
>
> * Create an Arrow Flight controller service that hosts an Arrow Flight 
> service in NiFi
>  * Create a Processor to read data from an Arrow Flight Server
>  * Create a Processor to write data to an Arrow Flight Server
> * Create a Processor to fetch data from an Arrow Flight Server?
>  
> Due to the in-memory nature of the Arrow Table format, exposing record 
> reader/writer services for cross processor consumption does not make sense, 
> since the Arrow Table format does not exist on disk.
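For a sense of scale, a minimal sketch of hosting an Arrow Flight service with 
the Arrow Java API; the proposed controller service might wrap something like 
this, with NoOpFlightProducer standing in for a producer backed by NiFi 
records (the port and producer are illustrative assumptions):

{code}
import org.apache.arrow.flight.FlightServer;
import org.apache.arrow.flight.Location;
import org.apache.arrow.flight.NoOpFlightProducer;
import org.apache.arrow.memory.RootAllocator;

public class FlightHostSketch {
    public static void main(String[] args) throws Exception {
        try (RootAllocator allocator = new RootAllocator(Long.MAX_VALUE)) {
            // Illustrative endpoint; a controller service would expose these as properties.
            final Location location = Location.forGrpcInsecure("0.0.0.0", 8815);
            try (FlightServer server = FlightServer.builder(allocator, location, new NoOpFlightProducer()).build()) {
                server.start();
                server.awaitTermination();
            }
        }
    }
}
{code}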



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7861) Arrow Flight Server Controller and Processors

2020-09-29 Thread Peter Wicks (Jira)
Peter Wicks created NIFI-7861:
-

 Summary: Arrow Flight Server Controller and Processors
 Key: NIFI-7861
 URL: https://issues.apache.org/jira/browse/NIFI-7861
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Peter Wicks
Assignee: Peter Wicks


* Create an Arrow Flight controller service that hosts an Arrow Flight service 
in NiFi
 * Create a Processor to read data from an Arrow Flight Server
 * Create a Processor to write data to an Arrow Flight Server

 

Due to the in-memory nature of the Arrow Table format, exposing record 
reader/writer services for cross processor consumption does not make sense, 
since the Arrow Table format does not exist on disk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] bbende commented on a change in pull request #4540: NIFI-7825: Support native library loading via absolute path

2020-09-29 Thread GitBox


bbende commented on a change in pull request #4540:
URL: https://github.com/apache/nifi/pull/4540#discussion_r496806431



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/bootstrap.conf
##
@@ -66,6 +66,15 @@ java.arg.16=-Djavax.security.auth.useSubjectCredsOnly=true
 # Please see https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_adminserver_config for configuration options.
 java.arg.17=-Dzookeeper.admin.enableServer=false
 
+# The following options configure a Java Agent to handle native library loading.
+# It is needed when a custom jar (eg. JDBC driver) has been configured on a component in the flow and this custom jar depends on a native library
+# and tries to load it by its absolute path (java.lang.System.load(String filename) method call).
+# Use this Java Agent only if you get "Native Library ... already loaded in another classloader" errors otherwise!
+#java.arg.18=-javaagent:./lib/aspectjweaver-${aspectj.version}.jar

Review comment:
   I think if we can do the `lib/aspectj` approach then that is the best 
option, anything we can do to avoid introducing additional dependencies to the 
system classpath is good.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] turcsanyip commented on a change in pull request #4540: NIFI-7825: Support native library loading via absolute path

2020-09-29 Thread GitBox


turcsanyip commented on a change in pull request #4540:
URL: https://github.com/apache/nifi/pull/4540#discussion_r496792424



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/bootstrap.conf
##
@@ -66,6 +66,15 @@ java.arg.16=-Djavax.security.auth.useSubjectCredsOnly=true
 # Please see https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_adminserver_config for configuration options.
 java.arg.17=-Dzookeeper.admin.enableServer=false
 
+# The following options configure a Java Agent to handle native library loading.
+# It is needed when a custom jar (eg. JDBC driver) has been configured on a component in the flow and this custom jar depends on a native library
+# and tries to load it by its absolute path (java.lang.System.load(String filename) method call).
+# Use this Java Agent only if you get "Native Library ... already loaded in another classloader" errors otherwise!
+#java.arg.18=-javaagent:./lib/aspectjweaver-${aspectj.version}.jar

Review comment:
   The agent does not require `aspectjweaver.jar` on the classpath so 
`lib/aspectj` directory could work.
   If there are concerns about having this jar on the system classpath, I can 
try out that solution too.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7855) Limit or Tune metrics in DataDog Reporting Task

2020-09-29 Thread Crystal Harris (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203988#comment-17203988
 ] 

Crystal Harris commented on NIFI-7855:
--

Thanks. I think this is saying I need another ReportingTask to filter the 
DataDog Reporting task. I was hoping for some NiFi configuration which would 
reduce or suppress metrics, but this may work. I'm a rank novice. The majority 
of metrics I'm receiving that I don't care about are nifi.Exception_Failure 
stats. More investigation needed.

> Limit or Tune metrics in DataDog Reporting Task
> ---
>
> Key: NIFI-7855
> URL: https://issues.apache.org/jira/browse/NIFI-7855
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Crystal Harris
>Priority: Major
>
> I am using the DataDog reporting task with NiFi and we are sending tens of 
> thousands of custom metrics to DataDog. Is there a way or can there be a way 
> to limit metrics, either by volume or metric type? i.e., Can we send 
> nifi.Files and nifi.Active but not some of the others?
> If not, I think using the DataDog reporting task is a non-starter due to 
> cost. We're at $4k/month in custom metrics right now.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #886: MINIFICPP-1323 Encrypt sensitive properties

2020-09-29 Thread GitBox


szaszm commented on a change in pull request #886:
URL: https://github.com/apache/nifi-minifi-cpp/pull/886#discussion_r496767468



##
File path: main/MiNiFiMain.cpp
##
@@ -53,13 +53,41 @@
 #include "core/FlowConfiguration.h"
 #include "core/ConfigurationFactory.h"
 #include "core/RepositoryFactory.h"
+#include "properties/Decryptor.h"
 #include "utils/file/PathUtils.h"
 #include "utils/file/FileUtils.h"
 #include "utils/Environment.h"
 #include "FlowController.h"
 #include "AgentDocs.h"
 #include "MainHelper.h"
 
+namespace {
+#ifdef OPENSSL_SUPPORT
+bool containsEncryptedProperties(const minifi::Configure& minifi_properties) {
+  const auto is_encrypted_property_marker = [&minifi_properties](const std::string& property_name) {
+    return utils::StringUtils::endsWith(property_name, ".protected") &&
+        minifi::Decryptor::isEncrypted(minifi_properties.get(property_name));
+  };
+  const auto property_names = minifi_properties.getConfiguredKeys();
+  return std::any_of(property_names.begin(), property_names.end(), is_encrypted_property_marker);
+}
+
+void decryptSensitiveProperties(minifi::Configure& minifi_properties, const std::string& minifi_home, logging::Logger& logger) {

Review comment:
   In case of encrypt-config: definitely, since a few microseconds of delay 
doesn't hurt the user experience. My estimate: 100 properties * 3 std::strings 
(~allocations) per property * ~500 ns per allocation ~= 150,000 ns = 150 μs.
   In case of `decryptSensitiveProperties`: probably yes, because it runs only 
once during startup/initialization, but I'm not 100% sure about this one.
   Also, maybe moving resources can help avoid copies. (pass-by-value and move 
into the value, transform private copy, return private copy with NRVO or 
implicit move).
   
   If you disagree, that's also fine with me, I'm not insisting on this change.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7855) Limit or Tune metrics in DataDog Reporting Task

2020-09-29 Thread Alan Jackoway (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203981#comment-17203981
 ] 

Alan Jackoway commented on NIFI-7855:
-

What kind of limit are you thinking of? In one of my NiFis, at least 60% of 
metrics come from the connection metrics like OutputCount. Would you want to 
shut off all of the connection metrics? For my use cases, I was considering a 
block/allow list setup for the connections since they seem to be the main 
contributor to metric count.

[SiteToSiteProvenanceReportingTask|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-site-to-site-reporting-nar/1.5.0/org.apache.nifi.reporting.SiteToSiteProvenanceReportingTask/]
 has an existing pattern for filtering on component type or id that someone 
could potentially follow.
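A sketch of that block/allow-list idea, assuming the reporting task exposed 
two regex properties (the property wiring and field names are hypothetical, 
not an existing API):

{code}
import java.util.regex.Pattern;

// Hypothetical filtering helper for the DataDog reporting task.
class MetricFilterSketch {
    private final Pattern allowPattern;  // e.g. nifi\.(Files|Active).*  (null = allow all)
    private final Pattern blockPattern;  // e.g. nifi\.Exception_Failure.*  (null = block none)

    MetricFilterSketch(Pattern allowPattern, Pattern blockPattern) {
        this.allowPattern = allowPattern;
        this.blockPattern = blockPattern;
    }

    boolean shouldReport(final String metricName) {
        if (blockPattern != null && blockPattern.matcher(metricName).matches()) {
            return false; // explicitly blocked
        }
        // with no allow list configured, everything not blocked is reported
        return allowPattern == null || allowPattern.matcher(metricName).matches();
    }
}
{code}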

> Limit or Tune metrics in DataDog Reporting Task
> ---
>
> Key: NIFI-7855
> URL: https://issues.apache.org/jira/browse/NIFI-7855
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Crystal Harris
>Priority: Major
>
> I am using the DataDog reporting task with NiFi and we are sending tens of 
> thousands of custom metrics to DataDog. Is there a way or can there be a way 
> to limit metrics, either by volume or metric type? i.e., Can we send 
> nifi.Files and nifi.Active but not some of the others?
> If not, I think using the DataDog reporting task is a non-starter due to 
> cost. We're at $4k/month in custom metrics right now.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7856) Provenance failed to be compressed after nifi upgrade to 1.12

2020-09-29 Thread Mengze Li (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203949#comment-17203949
 ] 

Mengze Li commented on NIFI-7856:
-

Sure, these are our settings around provenance:
{code}
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=

# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.rollover.time=10 mins
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable.  Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2


# Volatile Provenance Respository Properties
nifi.provenance.repository.buffer.size=10

{code}

> Provenance failed to be compressed after nifi upgrade to 1.12
> -
>
> Key: NIFI-7856
> URL: https://issues.apache.org/jira/browse/NIFI-7856
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Mengze Li
>Priority: Major
> Attachments: 1683472.prov, ls.png, screenshot-1.png, screenshot-2.png
>
>
> We upgraded our nifi cluster from 1.11.3 to 1.12.0.
> The nodes come up and everything looks to be functional. I can see 1.12.0 is 
> running.
> Later on, we discovered that the data provenance is missing. From checking 
> our logs, we see tons of errors compressing the logs.
> {code}
> 2020-09-28 03:38:35,205 ERROR [Compress Provenance Logs-1-thread-1] 
> o.a.n.p.s.EventFileCompressor Failed to compress 
> ./provenance_repository/2752821.prov on rollover
> {code}
> This didn't happen in 1.11.3. 
> Is this a known issue? We are considering reverting back if there is no 
> solution for this since we can't go prod with no/broken data provenance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] frankgh commented on pull request #4559: NIFI-7817 - Backport of Fix ParquetReader instantiation error (#4538)

2020-09-29 Thread GitBox


frankgh commented on pull request #4559:
URL: https://github.com/apache/nifi/pull/4559#issuecomment-700720378


   @bbende thanks for letting me know, will close this now in that case



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] frankgh closed pull request #4559: NIFI-7817 - Backport of Fix ParquetReader instantiation error (#4538)

2020-09-29 Thread GitBox


frankgh closed pull request #4559:
URL: https://github.com/apache/nifi/pull/4559


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7856) Provenance failed to be compressed after nifi upgrade to 1.12

2020-09-29 Thread Mark Payne (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203935#comment-17203935
 ] 

Mark Payne commented on NIFI-7856:
--

Thanks. Can you provide the properties you have in nifi.properties for the 
Provenance Repository? E.g.:
{code}
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=

# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.rollover.time=1 mins
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable.  Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
{code}

> Provenance failed to be compressed after nifi upgrade to 1.12
> -
>
> Key: NIFI-7856
> URL: https://issues.apache.org/jira/browse/NIFI-7856
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Mengze Li
>Priority: Major
> Attachments: 1683472.prov, ls.png, screenshot-1.png, screenshot-2.png
>
>
> We upgraded our nifi cluster from 1.11.3 to 1.12.0.
> The nodes come up and everything looks to be functional. I can see 1.12.0 is 
> running.
> Later on, we discovered that the data provenance is missing. From checking 
> our logs, we see tons of errors compressing the logs.
> {code}
> 2020-09-28 03:38:35,205 ERROR [Compress Provenance Logs-1-thread-1] 
> o.a.n.p.s.EventFileCompressor Failed to compress 
> ./provenance_repository/2752821.prov on rollover
> {code}
> This didn't happen in 1.11.3. 
> Is this a known issue? We are considering reverting back if there is no 
> solution for this since we can't go prod with no/broken data provenance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] bbende commented on pull request #4559: NIFI-7817 - Backport of Fix ParquetReader instantiation error (#4538)

2020-09-29 Thread GitBox


bbende commented on pull request #4559:
URL: https://github.com/apache/nifi/pull/4559#issuecomment-700713413


   @frankgh we don't need to submit this to the support branch; the way it 
works is that if the community decides to do another 1.12.x maintenance release 
(1.12.1 was already released yesterday), then the RM would go through 
all the tickets that landed in main (1.13.0-SNAPSHOT) and cherry-pick any bug 
fixes.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] bbende commented on a change in pull request #4540: NIFI-7825: Support native library loading via absolute path

2020-09-29 Thread GitBox


bbende commented on a change in pull request #4540:
URL: https://github.com/apache/nifi/pull/4540#discussion_r496726770



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/bootstrap.conf
##
@@ -66,6 +66,15 @@ java.arg.16=-Djavax.security.auth.useSubjectCredsOnly=true
 # Please see https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_adminserver_config for configuration options.
 java.arg.17=-Dzookeeper.admin.enableServer=false
 
+# The following options configure a Java Agent to handle native library loading.
+# It is needed when a custom jar (eg. JDBC driver) has been configured on a component in the flow and this custom jar depends on a native library
+# and tries to load it by its absolute path (java.lang.System.load(String filename) method call).
+# Use this Java Agent only if you get "Native Library ... already loaded in another classloader" errors otherwise!
+#java.arg.18=-javaagent:./lib/aspectjweaver-${aspectj.version}.jar

Review comment:
   Do we have any concerns about aspectjweaver being directly in the lib 
directory, which puts it on the classpath of every single NAR? Wondering if it 
should be separated like `lib/aspectj/`, which would not be added to NiFi's 
normal classpath, but I don't know enough to say if it still works correctly 
then.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (NIFI-7856) Provenance failed to be compressed after nifi upgrade to 1.12

2020-09-29 Thread Mengze Li (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203925#comment-17203925
 ] 

Mengze Li edited comment on NIFI-7856 at 9/29/20, 1:40 PM:
---

From our logs, it happens every hour (it seems that rollover MAX_TIME_REACHED 
is met; not sure what the exact schedule is), see screenshot. 
 It happens consistently after the restart; the cluster has been running for 4+ 
days.
 The issue for us is that the data provenance is missing for some processors 
(records never show up after the upgrade, so the latest record was from the 
25th) and data provenance is displaying either incomplete or delayed records.
 This can be a huge issue for our prod troubleshooting if we move this to our 
prod env.
 Attached one prov file as well.
 !screenshot-2.png!


was (Author: leeyoda):
From our logs, it happens every hour (it seems that rollover MAX_TIME_REACHED 
is set to be an hour), see screenshot. 
 It happens consistently after the restart; the cluster has been running for 4+ 
days.
 The issue for us is that the data provenance is missing for some processors 
(records never show up after the upgrade, so the latest record was from the 
25th) and data provenance is displaying either incomplete or delayed records.
 This can be a huge issue for our prod troubleshooting if we move this to our 
prod env.
 Attached one prov file as well.
 !screenshot-2.png!

> Provenance failed to be compressed after nifi upgrade to 1.12
> -
>
> Key: NIFI-7856
> URL: https://issues.apache.org/jira/browse/NIFI-7856
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Mengze Li
>Priority: Major
> Attachments: 1683472.prov, ls.png, screenshot-1.png, screenshot-2.png
>
>
> We upgraded our nifi cluster from 1.11.3 to 1.12.0.
> The nodes come up and everything looks to be functional. I can see 1.12.0 is 
> running.
> Later on, we discovered that the data provenance is missing. From checking 
> our logs, we see tons of errors compressing the logs.
> {code}
> 2020-09-28 03:38:35,205 ERROR [Compress Provenance Logs-1-thread-1] 
> o.a.n.p.s.EventFileCompressor Failed to compress 
> ./provenance_repository/2752821.prov on rollover
> {code}
> This didn't happen in 1.11.3. 
> Is this a known issue? We are considering reverting back if there is no 
> solution for this since we can't go prod with no/broken data provenance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] frankgh opened a new pull request #4559: NIFI-7817 - Backport of Fix ParquetReader instantiation error (#4538)

2020-09-29 Thread GitBox


frankgh opened a new pull request #4559:
URL: https://github.com/apache/nifi/pull/4559


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    #### Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7856) Provenance failed to be compressed after nifi upgrade to 1.12

2020-09-29 Thread Mengze Li (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203925#comment-17203925
 ] 

Mengze Li commented on NIFI-7856:
-

From our logs, it happens every hour (it seems that rollover MAX_TIME_REACHED 
is set to be an hour), see screenshot. 
 It happens consistently after the restart; the cluster has been running for 4+ 
days.
 The issue for us is that the data provenance is missing for some processors 
(records never show up after the upgrade, so the latest record was from the 
25th) and data provenance is displaying either incomplete or delayed records.
 This can be a huge issue for our prod troubleshooting if we move this to our 
prod env.
 Attached one prov file as well.
 !screenshot-2.png!

> Provenance failed to be compressed after nifi upgrade to 1.12
> -
>
> Key: NIFI-7856
> URL: https://issues.apache.org/jira/browse/NIFI-7856
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Mengze Li
>Priority: Major
> Attachments: 1683472.prov, ls.png, screenshot-1.png, screenshot-2.png
>
>
> We upgraded our nifi cluster from 1.11.3 to 1.12.0.
> The nodes come up and everything looks to be functional. I can see 1.12.0 is 
> running.
> Later on, we discovered that the data provenance is missing. From checking 
> our logs, we see tons of errors compressing the logs.
> {code}
> 2020-09-28 03:38:35,205 ERROR [Compress Provenance Logs-1-thread-1] 
> o.a.n.p.s.EventFileCompressor Failed to compress 
> ./provenance_repository/2752821.prov on rollover
> {code}
> This didn't happen in 1.11.3. 
> Is this a known issue? We are considering reverting back if there is no 
> solution for this since we can't go prod with no/broken data provenance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] bbende merged pull request #4541: NIFI-7827 - PutS3Object - configuration of the multipart temp dir

2020-09-29 Thread GitBox


bbende merged pull request #4541:
URL: https://github.com/apache/nifi/pull/4541


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7827) PutS3Object multipart state directory should be configurable

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203924#comment-17203924
 ] 

ASF subversion and git services commented on NIFI-7827:
---

Commit a57d38c58d6fb3ea84d64dbd26a62d4a444ec48c in nifi's branch 
refs/heads/main from Pierre Villard
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=a57d38c ]

NIFI-7827 - PutS3Object - configuration of the multipart temp dir (#4541)



> PutS3Object multipart state directory should be configurable
> 
>
> Key: NIFI-7827
> URL: https://issues.apache.org/jira/browse/NIFI-7827
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As of now, PutS3Object is hard coding the directory where some state will be 
> stored when sending large files using multipart. This folder should be 
> configurable and default to the JVM temporary folder instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7827) PutS3Object multipart state directory should be configurable

2020-09-29 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-7827:
--
Fix Version/s: 1.13.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> PutS3Object multipart state directory should be configurable
> 
>
> Key: NIFI-7827
> URL: https://issues.apache.org/jira/browse/NIFI-7827
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> As of now, PutS3Object is hard coding the directory where some state will be 
> stored when sending large files using multipart. This folder should be 
> configurable and default to the JVM temporary folder instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7856) Provenance failed to be compressed after nifi upgrade to 1.12

2020-09-29 Thread Mengze Li (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mengze Li updated NIFI-7856:

Attachment: 1683472.prov

> Provenance failed to be compressed after nifi upgrade to 1.12
> -
>
> Key: NIFI-7856
> URL: https://issues.apache.org/jira/browse/NIFI-7856
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Mengze Li
>Priority: Major
> Attachments: 1683472.prov, ls.png, screenshot-1.png, screenshot-2.png
>
>
> We upgraded our nifi cluster from 1.11.3 to 1.12.0.
> The nodes come up and everything looks to be functional. I can see 1.12.0 is 
> running.
> Later on, we discovered that the data provenance is missing. From checking 
> our logs, we see tons of errors compressing the logs.
> {code}
> 2020-09-28 03:38:35,205 ERROR [Compress Provenance Logs-1-thread-1] 
> o.a.n.p.s.EventFileCompressor Failed to compress 
> ./provenance_repository/2752821.prov on rollover
> {code}
> This didn't happen in 1.11.3. 
> Is this a known issue? We are considering reverting back if there is no 
> solution for this since we can't go prod with no/broken data provenance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] bbende commented on pull request #4541: NIFI-7827 - PutS3Object - configuration of the multipart temp dir

2020-09-29 Thread GitBox


bbende commented on pull request #4541:
URL: https://github.com/apache/nifi/pull/4541#issuecomment-700694413


   Looks good, will merge, thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7817) ParquetReader fails to instantiate due to missing default value for compression-type property

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203922#comment-17203922
 ] 

ASF subversion and git services commented on NIFI-7817:
---

Commit fa0a1df23fd446db4b9e1b526341168a5f2c6046 in nifi's branch 
refs/heads/main from Pierre Villard
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=fa0a1df ]

NIFI-7817 - Fix ParquetReader instantiation error (#4538)



> ParquetReader fails to instantiate due to missing default value for 
> compression-type property
> -
>
> Key: NIFI-7817
> URL: https://issues.apache.org/jira/browse/NIFI-7817
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.12.0
>Reporter: Alexander Denissov
>Priority: Major
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> We have a custom Processor that uses ParquetReader to parse the incoming Flow 
> File and then sends records to the target system. It worked before, but broke 
> with the NiFi 1.12.0 release.
> The exception stack trace:
> {{Caused by: java.lang.NullPointerException: Name is null}}
>  {{ at java.lang.Enum.valueOf(Enum.java:236)}}
>  {{ at 
> org.apache.parquet.hadoop.metadata.CompressionCodecName.valueOf(CompressionCodecName.java:26)}}
>  {{ at 
> org.apache.nifi.parquet.utils.ParquetUtils.createParquetConfig(ParquetUtils.java:172)}}
>  {{ at 
> org.apache.nifi.parquet.ParquetReader.createRecordReader(ParquetReader.java:48)}}
>  {{ at 
> org.apache.nifi.serialization.RecordReaderFactory.createRecordReader(RecordReaderFactory.java:49)}}
>  {{ at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)}}
>  {{ at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)}}
>  {{ at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)}}
>  {{ at java.lang.reflect.Method.invoke(Method.java:498)}}
>  {{ at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)}}
>  {{ at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:105)}}
>  {{ at com.sun.proxy.$Proxy100.createRecordReader(Unknown Source)}}
>  {{ at 
> io.pivotal.greenplum.nifi.processors.PutGreenplumRecord.lambda$null$1(PutGreenplumRecord.java:300)}}
>  {{ at 
> org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)}}
>  
> Basically, the creation of a ParquetReader by the RecordReaderFactory fails.
> The actual problem occurs here: 
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-parquet-bundle/nifi-parquet-processors/src/main/java/org/apache/nifi/parquet/utils/ParquetUtils.java#L171-L172]
> where 
> {{final String compressionTypeValue = 
> context.getProperty(ParquetUtils.COMPRESSION_TYPE).getValue();}}
> comes back with the value of null, since "compression-type" property is not 
> exposed on ParquetReader and would not be set by the flow designers. The 
> returned null value is then passed to get the enum instance and fails there.
> {{final CompressionCodecName codecName = 
> CompressionCodecName.valueOf(compressionTypeValue);}}
> While there might be several solutions to this, including updating 
> parquet-specific defaulting logic, I traced the root cause of this regression 
> to the fix for NIFI-7635, to this commit: 
> [https://github.com/apache/nifi/commit/4f11e3626093d3090f97c0efc5e229d83b6006e4#diff-782335ecee68f6939c3724dba3983d3d]
> where the default value of the provided property descriptor, previously 
> expressed as 
> property.getDefaultValue()
> is no longer used. That value used to be UNCOMPRESSED for the use case in 
> question, and it used to work before this commit.
> I'd think the issue needs to be fixed in this place, as it might affect a 
> variety of other use cases.
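
For illustration, a minimal null-safe sketch of the defaulting approach 
described above (not the actual patch merged for NIFI-7817; it only reuses the 
descriptor and calls from the quoted ParquetUtils snippet plus the standard 
PropertyDescriptor#getDefaultValue() accessor):

{code}
// Sketch only: fall back to the descriptor's default (UNCOMPRESSED) instead of
// passing null into the enum lookup when the property was never set.
String compressionTypeValue = context.getProperty(ParquetUtils.COMPRESSION_TYPE).getValue();
if (compressionTypeValue == null) {
    // getDefaultValue() is the value the pre-NIFI-7635 code used to pick up
    compressionTypeValue = ParquetUtils.COMPRESSION_TYPE.getDefaultValue();
}
final CompressionCodecName codecName = CompressionCodecName.valueOf(compressionTypeValue);
{code}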



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-7817) ParquetReader fails to instantiate due to missing default value for compression-type property

2020-09-29 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-7817.
---
Fix Version/s: 1.13.0
 Assignee: Pierre Villard
   Resolution: Fixed

> ParquetReader fails to instantiate due to missing default value for 
> compression-type property
> -
>
> Key: NIFI-7817
> URL: https://issues.apache.org/jira/browse/NIFI-7817
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.12.0
>Reporter: Alexander Denissov
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> We have a custom Processor that uses ParquetReader to parse incoming Flow 
> Files and then send records to the target system. It worked before, but broke 
> with the NiFi 1.12.0 release.
> The exception stack trace:
> {{Caused by: java.lang.NullPointerException: Name is null}}
>  {{ at java.lang.Enum.valueOf(Enum.java:236)}}
>  {{ at 
> org.apache.parquet.hadoop.metadata.CompressionCodecName.valueOf(CompressionCodecName.java:26)}}
>  {{ at 
> org.apache.nifi.parquet.utils.ParquetUtils.createParquetConfig(ParquetUtils.java:172)}}
>  {{ at 
> org.apache.nifi.parquet.ParquetReader.createRecordReader(ParquetReader.java:48)}}
>  {{ at 
> org.apache.nifi.serialization.RecordReaderFactory.createRecordReader(RecordReaderFactory.java:49)}}
>  {{ at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)}}
>  {{ at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)}}
>  {{ at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)}}
>  {{ at java.lang.reflect.Method.invoke(Method.java:498)}}
>  {{ at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)}}
>  {{ at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:105)}}
>  {{ at com.sun.proxy.$Proxy100.createRecordReader(Unknown Source)}}
>  {{ at 
> io.pivotal.greenplum.nifi.processors.PutGreenplumRecord.lambda$null$1(PutGreenplumRecord.java:300)}}
>  {{ at 
> org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)}}
>  
> Basically, the creation of a ParquetReader by the RecordReaderFactory fails.
> The actual problem occurs here: 
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-parquet-bundle/nifi-parquet-processors/src/main/java/org/apache/nifi/parquet/utils/ParquetUtils.java#L171-L172]
> where 
> {{final String compressionTypeValue = 
> context.getProperty(ParquetUtils.COMPRESSION_TYPE).getValue();}}
> comes back with a null value, since the "compression-type" property is not 
> exposed on ParquetReader and would not be set by the flow designers. The 
> returned null value is then passed to get the enum instance and fails there.
> {{final CompressionCodecName codecName = 
> CompressionCodecName.valueOf(compressionTypeValue);}}
> While there might be several solutions to this, including updating 
> parquet-specific defaulting logic, I traced the root cause of this regression 
> to the fix for NIFI-7635, to this commit: 
> [https://github.com/apache/nifi/commit/4f11e3626093d3090f97c0efc5e229d83b6006e4#diff-782335ecee68f6939c3724dba3983d3d]
> where the default value of the provided property descriptor, previously 
> expressed as 
> property.getDefaultValue()
> is no longer used. That value used to be UNCOMPRESSED for the use case in 
> question, and it used to work before this commit.
> I'd think the issue needs to be fixed in this place, as it might affect a 
> variety of other use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] bbende merged pull request #4538: NIFI-7817 - Fix ParquetReader instantiation error

2020-09-29 Thread GitBox


bbende merged pull request #4538:
URL: https://github.com/apache/nifi/pull/4538


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] bbende commented on pull request #4538: NIFI-7817 - Fix ParquetReader instantiation error

2020-09-29 Thread GitBox


bbende commented on pull request #4538:
URL: https://github.com/apache/nifi/pull/4538#issuecomment-700690499


   Looks good, will merge, thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-7856) Provenance failed to be compressed after nifi upgrade to 1.12

2020-09-29 Thread Mengze Li (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mengze Li updated NIFI-7856:

Attachment: screenshot-2.png

> Provenance failed to be compressed after nifi upgrade to 1.12
> -
>
> Key: NIFI-7856
> URL: https://issues.apache.org/jira/browse/NIFI-7856
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Mengze Li
>Priority: Major
> Attachments: ls.png, screenshot-1.png, screenshot-2.png
>
>
> We upgraded our NiFi cluster from 1.11.3 to 1.12.0.
> The nodes come up and everything looks to be functional. I can see 1.12.0 is 
> running.
> Later on, we discovered that the data provenance is missing. From checking 
> our logs, we see tons of errors compressing the logs.
> {code}
> 2020-09-28 03:38:35,205 ERROR [Compress Provenance Logs-1-thread-1] 
> o.a.n.p.s.EventFileCompressor Failed to compress 
> ./provenance_repository/2752821.prov on rollover
> {code}
> This didn't happen in 1.11.3. 
> Is this a known issue? We are considering reverting if there is no solution 
> for this, since we can't go to prod with missing/broken data provenance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7833) Add Ozone support for Hadoop processors

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203917#comment-17203917
 ] 

ASF subversion and git services commented on NIFI-7833:
---

Commit 740bfee8f4c11f0c0f904b17ec2c82ff604b8d76 in nifi's branch 
refs/heads/main from Pierre Villard
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=740bfee ]

NIFI-7833 - Add Ozone support in Hadoop components where appropriate (#4545)

* NIFI-7833 - Add Ozone support in Hadoop components where appropriate

* added fs dependency

> Add Ozone support for Hadoop processors
> ---
>
> Key: NIFI-7833
> URL: https://issues.apache.org/jira/browse/NIFI-7833
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> It'd be nice to provide ways in NiFi to ingest data into Ozone. This can be 
> done through the same API HDFS supports. The only difference is the URL being 
> o3fs:// instead of hdfs://.
> On NiFi's side we need to add the appropriate profiles to make sure the Ozone 
> client is correctly added, like the clients we add for the cloud providers. 
> Example in:
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-hadoop-libraries-bundle/nifi-hadoop-libraries-nar/pom.xml]
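
For illustration, accessing Ozone through the plain Hadoop FileSystem API looks 
identical to HDFS apart from the scheme; a minimal sketch (the volume, bucket 
and Ozone Manager host below are made-up placeholders):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OzoneAccessSketch {
    public static void main(String[] args) throws Exception {
        final Configuration conf = new Configuration();
        // Same FileSystem API as hdfs://; only the o3fs:// URI differs.
        // Placeholder URI layout: o3fs://<bucket>.<volume>.<ozone-manager-host>/
        final FileSystem fs = FileSystem.get(URI.create("o3fs://bucket.volume.om-host/"), conf);
        System.out.println(fs.exists(new Path("/data/sample.txt")));
    }
}
{code}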



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7833) Add Ozone support for Hadoop processors

2020-09-29 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-7833:
--
Fix Version/s: 1.13.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add Ozone support for Hadoop processors
> ---
>
> Key: NIFI-7833
> URL: https://issues.apache.org/jira/browse/NIFI-7833
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> It'd be nice to provide ways in NiFi to ingest data into Ozone. This can be 
> done through the same API HDFS supports. The only difference is the URL being 
> o3fs:// instead of hdfs://.
> On NiFi's side we need to add the appropriate profiles to make sure the Ozone 
> client is correctly added, like the clients we add for the cloud providers. 
> Example in:
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-hadoop-libraries-bundle/nifi-hadoop-libraries-nar/pom.xml]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7833) Add Ozone support for Hadoop processors

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203918#comment-17203918
 ] 

ASF subversion and git services commented on NIFI-7833:
---

Commit 740bfee8f4c11f0c0f904b17ec2c82ff604b8d76 in nifi's branch 
refs/heads/main from Pierre Villard
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=740bfee ]

NIFI-7833 - Add Ozone support in Hadoop components where appropriate (#4545)

* NIFI-7833 - Add Ozone support in Hadoop components where appropriate

* added fs dependency

> Add Ozone support for Hadoop processors
> ---
>
> Key: NIFI-7833
> URL: https://issues.apache.org/jira/browse/NIFI-7833
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> It'd be nice to provide ways in NiFi to ingest data into Ozone. This can be 
> done through the same API HDFS supports. The only difference is the URL being 
> o3fs:// instead of hdfs://.
> On NiFi's side we need to add the appropriate profiles to make sure the Ozone 
> client is correctly added, like the clients we add for the cloud providers. 
> Example in:
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-hadoop-libraries-bundle/nifi-hadoop-libraries-nar/pom.xml]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] bbende commented on pull request #4545: NIFI-7833 - Add Ozone support in Hadoop components where appropriate

2020-09-29 Thread GitBox


bbende commented on pull request #4545:
URL: https://github.com/apache/nifi/pull/4545#issuecomment-700689011


   Looks good, will merge, thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] bbende merged pull request #4545: NIFI-7833 - Add Ozone support in Hadoop components where appropriate

2020-09-29 Thread GitBox


bbende merged pull request #4545:
URL: https://github.com/apache/nifi/pull/4545


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7856) Provenance failed to be compressed after nifi upgrade to 1.12

2020-09-29 Thread Mark Payne (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203904#comment-17203904
 ] 

Mark Payne commented on NIFI-7856:
--

[~leeyoda] thanks for the updated logs & screenshot from the 'ls' command. Does 
this happen frequently, or just once or twice? If only once or twice, does it 
happen during or shortly after startup, or after NiFi has been running for a 
while?

I can't think of any changes in 1.12.0 that may have affected this, so I'm 
wondering if perhaps it's related to the restart more so than to the change to 
1.12.0.

The interesting thing is that, based on the logs and the screenshot, that file 
was already compressed. So I'm not sure why it was attempting to compress it 
again... The good news is that it shouldn't cause any problems, given that it's 
already compressed. But I would definitely prefer to resolve the issue, 
regardless.

> Provenance failed to be compressed after nifi upgrade to 1.12
> -
>
> Key: NIFI-7856
> URL: https://issues.apache.org/jira/browse/NIFI-7856
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Mengze Li
>Priority: Major
> Attachments: ls.png, screenshot-1.png
>
>
> We upgraded our NiFi cluster from 1.11.3 to 1.12.0.
> The nodes come up and everything looks to be functional. I can see 1.12.0 is 
> running.
> Later on, we discovered that the data provenance is missing. From checking 
> our logs, we see tons of errors compressing the logs.
> {code}
> 2020-09-28 03:38:35,205 ERROR [Compress Provenance Logs-1-thread-1] 
> o.a.n.p.s.EventFileCompressor Failed to compress 
> ./provenance_repository/2752821.prov on rollover
> {code}
> This didn't happen in 1.11.3. 
> Is this a known issue? We are considering reverting if there is no solution 
> for this, since we can't go to prod with missing/broken data provenance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #886: MINIFICPP-1323 Encrypt sensitive properties

2020-09-29 Thread GitBox


fgerlits commented on a change in pull request #886:
URL: https://github.com/apache/nifi-minifi-cpp/pull/886#discussion_r496664831



##
File path: encrypt-config/CMakeLists.txt
##
@@ -0,0 +1,30 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+cmake_minimum_required(VERSION 3.7)
+project(encrypt-config)
+set(VERSION, "0.1")
+
+file(GLOB ENCRYPT_CONFIG_FILES  "*.cpp")
+add_executable(encrypt-config "${ENCRYPT_CONFIG_FILES}")
+include_directories(../libminifi/include  ../thirdparty/cxxopts/include)

Review comment:
   Hm... simply changing to `target_include_directories` worked now. Not 
sure if I did something wrong earlier, or if some changes since then have made 
it work.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (NIFI-7859) Write execution duration attributes to SelectHive3QL output flow files

2020-09-29 Thread Nadeem (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nadeem reassigned NIFI-7859:


Assignee: Nadeem

> Write execution duration attributes to SelectHive3QL output flow files 
> ---
>
> Key: NIFI-7859
> URL: https://issues.apache.org/jira/browse/NIFI-7859
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.12.1
>Reporter: Rahul Soni
>Assignee: Nadeem
>Priority: Minor
>
> SelectHive3QL & SelectHiveQL processors do not write attributes like 
> query.duration, query.executiontime, query.fetchtime, etc., while the generic 
> ExecuteSQL processor does. These attributes can be useful in certain 
> scenarios where you want to measure the performance of the queries.
> Can these attributes be added to the said processors?
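
For illustration, a hedged sketch of how such attributes could be written onto 
the outgoing flow file inside a processor's onTrigger (the attribute names 
mirror the ones requested above; the *Nanos variables are assumed to have been 
captured with System.nanoTime() around the execution and fetch phases):

{code}
// Sketch only: session and flowFile come from onTrigger; durationNanos,
// executionNanos and fetchNanos are assumed to be measured by the processor.
final Map<String, String> attributes = new HashMap<>();
attributes.put("query.executiontime", String.valueOf(TimeUnit.NANOSECONDS.toMillis(executionNanos)));
attributes.put("query.fetchtime", String.valueOf(TimeUnit.NANOSECONDS.toMillis(fetchNanos)));
attributes.put("query.duration", String.valueOf(TimeUnit.NANOSECONDS.toMillis(durationNanos)));
flowFile = session.putAllAttributes(flowFile, attributes);
{code}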



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #886: MINIFICPP-1323 Encrypt sensitive properties

2020-09-29 Thread GitBox


fgerlits commented on a change in pull request #886:
URL: https://github.com/apache/nifi-minifi-cpp/pull/886#discussion_r496657392



##
File path: main/MiNiFiMain.cpp
##
@@ -53,13 +53,41 @@
 #include "core/FlowConfiguration.h"
 #include "core/ConfigurationFactory.h"
 #include "core/RepositoryFactory.h"
+#include "properties/Decryptor.h"
 #include "utils/file/PathUtils.h"
 #include "utils/file/FileUtils.h"
 #include "utils/Environment.h"
 #include "FlowController.h"
 #include "AgentDocs.h"
 #include "MainHelper.h"
 
+namespace {
+#ifdef OPENSSL_SUPPORT
+bool containsEncryptedProperties(const minifi::Configure& minifi_properties) {
+  const auto is_encrypted_property_marker = [&minifi_properties](const 
std::string& property_name) {
+return utils::StringUtils::endsWith(property_name, ".protected") &&
+minifi::Decryptor::isEncrypted(minifi_properties.get(property_name));
+  };
+  const auto property_names = minifi_properties.getConfiguredKeys();
+  return std::any_of(property_names.begin(), property_names.end(), 
is_encrypted_property_marker);
+}
+
+void decryptSensitiveProperties(minifi::Configure& minifi_properties, const 
std::string& minifi_home, logging::Logger& logger) {

Review comment:
   Do you think it's worth copying the whole `Configure` object?  It's 
large.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #886: MINIFICPP-1323 Encrypt sensitive properties

2020-09-29 Thread GitBox


fgerlits commented on a change in pull request #886:
URL: https://github.com/apache/nifi-minifi-cpp/pull/886#discussion_r496656796



##
File path: libminifi/include/utils/EncryptionUtils.h
##
@@ -0,0 +1,105 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include <string>
+#include <vector>
+
+struct evp_aead_st;
+typedef struct evp_aead_st EVP_AEAD;
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace crypto {
+
+using Bytes = std::vector<unsigned char>;
+
+Bytes stringToBytes(const std::string& text);
+
+std::string bytesToString(const Bytes& bytes);
+
+Bytes randomBytes(size_t num_bytes);
+
+struct EncryptionType {
+  static const EVP_AEAD* cipher();
+  static std::string name();
+  static size_t nonceLength();
+};

Review comment:
   I wanted to group together the set of constants related to the 
encryption cipher.  This could be a namespace instead of an empty struct, but I 
don't think there is a huge difference.  I don't want to add extra complexity 
now to support a multi-cipher use case which will probably not happen, 
especially with libsodium, which hides the cipher from the user and may change 
it in future versions.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #886: MINIFICPP-1323 Encrypt sensitive properties

2020-09-29 Thread GitBox


fgerlits commented on a change in pull request #886:
URL: https://github.com/apache/nifi-minifi-cpp/pull/886#discussion_r496654947



##
File path: libminifi/include/properties/Decryptor.h
##
@@ -0,0 +1,55 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include <string>
+
+#include "utils/EncryptionUtils.h"
+#include "utils/OptionalUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+
+class Configure;
+
+class Decryptor {
+ public:
+  explicit Decryptor(const utils::crypto::Bytes& encryption_key);
+  static bool isEncrypted(const utils::optional<std::string>& encryption_type);
+  std::string decrypt(const std::string& encrypted_text, const std::string& 
aad) const;
+  void decryptSensitiveProperties(Configure& configure) const;
+
+ private:
+  const utils::crypto::Bytes encryption_key_;
+};
+
+inline Decryptor::Decryptor(const utils::crypto::Bytes& encryption_key) : 
encryption_key_(encryption_key) {}
+
+inline bool Decryptor::isEncrypted(const utils::optional<std::string>& 
encryption_type) {

Review comment:
   I have renamed it to `isValidEncryptionMarker`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-7503) PutSQL Shouldn't Commit When in AutoCommit Mode

2020-09-29 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7503:
-
Fix Version/s: 1.13.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> PutSQL Shouldn't Commit When in AutoCommit Mode
> ---
>
> Key: NIFI-7503
> URL: https://issues.apache.org/jira/browse/NIFI-7503
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> PutSQL commits even when autoCommit is true. Some drivers, like Redshift, 
> raise an exception, and the JDBC spec allows for an exception here. Recommend 
> we check whether autoCommit is true before calling commit.
> https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#commit()
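
A minimal sketch of the guard being recommended, using only standard JDBC calls 
(not the exact NiFi patch):

{code}
// Per the JDBC spec, Connection.commit() may throw when auto-commit is on,
// so only commit (or roll back) when auto-commit is disabled.
if (!connection.getAutoCommit()) {
    connection.commit();
}
{code}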



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7503) PutSQL Shouldn't Commit When in AutoCommit Mode

2020-09-29 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7503:
-
Component/s: Extensions

> PutSQL Shouldn't Commit When in AutoCommit Mode
> ---
>
> Key: NIFI-7503
> URL: https://issues.apache.org/jira/browse/NIFI-7503
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Shawn Weeks
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> PutSQL commits even when autoCommit is true. Some drivers, like Redshift, 
> raise an exception, and the JDBC spec allows for an exception here. Recommend 
> we check whether autoCommit is true before calling commit.
> https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#commit()



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7503) PutSQL Shouldn't Commit When in AutoCommit Mode

2020-09-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203800#comment-17203800
 ] 

ASF subversion and git services commented on NIFI-7503:
---

Commit 95fb8e314421221a2fd062e67974b8a5199914ea in nifi's branch 
refs/heads/main from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=95fb8e3 ]

NIFI-7503: PutSQL - only call commit() and rollback() if autocommit is disabled

Signed-off-by: Pierre Villard 

This closes #4558.


> PutSQL Shouldn't Commit When in AutoCommit Mode
> ---
>
> Key: NIFI-7503
> URL: https://issues.apache.org/jira/browse/NIFI-7503
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> PutSQL commits even when autoCommit is true. Some drivers, like Redshift, 
> raise an exception, and the JDBC spec allows for an exception here. Recommend 
> we check whether autoCommit is true before calling commit.
> https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#commit()



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #4558: NIFI-7503: PutSQL - only call commit() and rollback() if autocommit is disabled

2020-09-29 Thread GitBox


asfgit closed pull request #4558:
URL: https://github.com/apache/nifi/pull/4558


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (NIFI-7860) My bad

2020-09-29 Thread Jens M Kofoed (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens M Kofoed resolved NIFI-7860.
-
Resolution: Not A Problem

> My bad
> --
>
> Key: NIFI-7860
> URL: https://issues.apache.org/jira/browse/NIFI-7860
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.12.0, 1.12.1
> Environment: Ubuntu 18.04 with NIFI 1.12.1
>Reporter: Jens M Kofoed
>Priority: Major
>
> sorry, my bad



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7860) My bad

2020-09-29 Thread Jens M Kofoed (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens M Kofoed updated NIFI-7860:

Description: sorry, my bad  (was: I have created a completely new server for 
NiFi 1.12.0 and created a new certificate. It is not a wildcard certificate, but 
it has 2 entries (hostname and FQDN) in the DNS record. But the server does not 
start and gives an error. Looking through reported bugs, I found that this issue 
was already reported. Therefore I waited for release 1.12.1.

Release 1.12.1 has exactly the same error. I have now created a new certificate 
with only the FQDN in the DNS entry, and this is working.

So both releases 1.12.0 and 1.12.1 cannot handle a private certificate if 
there are multiple DNS entries.

Sorry for not finding out why in release 1.12.0, so it could have been fixed 
together with the other certificate issues that should have been fixed in 1.12.1)

> My bad
> --
>
> Key: NIFI-7860
> URL: https://issues.apache.org/jira/browse/NIFI-7860
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.12.0, 1.12.1
> Environment: Ubuntu 18.04 with NIFI 1.12.1
>Reporter: Jens M Kofoed
>Priority: Major
>
> sorry, my bad



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7860) My bad

2020-09-29 Thread Jens M Kofoed (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens M Kofoed updated NIFI-7860:

Summary: My bad  (was: Multiple dns records in certificate does not work)

> My bad
> --
>
> Key: NIFI-7860
> URL: https://issues.apache.org/jira/browse/NIFI-7860
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.12.0, 1.12.1
> Environment: Ubuntu 18.04 with NIFI 1.12.1
>Reporter: Jens M Kofoed
>Priority: Major
>
> I have created a completely new server for NiFi 1.12.0 and created a new 
> certificate. It is not a wildcard certificate, but it has 2 entries (hostname 
> and FQDN) in the DNS record. But the server does not start and gives an error. 
> Looking through reported bugs, I found that this issue was already reported. 
> Therefore I waited for release 1.12.1.
> Release 1.12.1 has exactly the same error. I have now created a new 
> certificate with only the FQDN in the DNS entry, and this is working.
> So both releases 1.12.0 and 1.12.1 cannot handle a private certificate if 
> there are multiple DNS entries.
> Sorry for not finding out why in release 1.12.0, so it could have been fixed 
> together with the other certificate issues that should have been fixed in 
> 1.12.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] tpalfy commented on a change in pull request #4556: NIFI-7830: Support large files in PutAzureDataLakeStorage

2020-09-29 Thread GitBox


tpalfy commented on a change in pull request #4556:
URL: https://github.com/apache/nifi/pull/4556#discussion_r496118900



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/processors/azure/storage/ITFetchAzureDataLakeStorage.java
##
@@ -216,13 +216,15 @@ public void testFetchNonExistentFile() {
 testFailedFetch(fileSystemName, directory, filename, 
inputFlowFileContent, inputFlowFileContent, 404);
 }
 
-@Ignore("Takes some time, only recommended for manual testing.")
+//@Ignore("Takes some time, only recommended for manual testing.")

Review comment:
   Does it no longer take "some time"? :)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-7860) Multiple dns records in certificate does not work

2020-09-29 Thread Jens M Kofoed (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens M Kofoed updated NIFI-7860:

Description: 
I have created a completely new server for NiFi 1.12.0 and created a new 
certificate. It is not a wildcard certificate, but it has 2 entries (hostname 
and FQDN) in the DNS record. But the server does not start and gives an error. 
Looking through reported bugs, I found that this issue was already reported. 
Therefore I waited for release 1.12.1.

Release 1.12.1 has exactly the same error. I have now created a new certificate 
with only the FQDN in the DNS entry, and this is working.

So both releases 1.12.0 and 1.12.1 cannot handle a private certificate if 
there are multiple DNS entries.

Sorry for not finding out why in release 1.12.0, so it could have been fixed 
together with the other certificate issues that should have been fixed in 1.12.1.

  was:
I have created a completely new server for NiFi 1.12.0 and created a new 
certificate. It is not a wildcard certificate, but it has 2 entries (hostname 
and FQDN) in the DNS record. But the server does not start and gives an error. 
Looking through reported bugs, I found that this issue was already reported. 
Therefore I waited for release 1.12.1.

Release 1.12.1 has exactly the same error. I have now created a new certificate 
with only the FQDN in the DNS entry, and this is working.

So both releases 1.12.0 and 1.12.1 cannot handle a private certificate if 
there are multiple DNS entries.


> Multiple dns records in certificate does not work
> -
>
> Key: NIFI-7860
> URL: https://issues.apache.org/jira/browse/NIFI-7860
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.12.0, 1.12.1
> Environment: Ubuntu 18.04 with NIFI 1.12.1
>Reporter: Jens M Kofoed
>Priority: Major
>
> I have created a completely new server for NiFi 1.12.0 and created a new 
> certificate. It is not a wildcard certificate, but it has 2 entries (hostname 
> and FQDN) in the DNS record. But the server does not start and gives an error. 
> Looking through reported bugs, I found that this issue was already reported. 
> Therefore I waited for release 1.12.1.
> Release 1.12.1 has exactly the same error. I have now created a new 
> certificate with only the FQDN in the DNS entry, and this is working.
> So both releases 1.12.0 and 1.12.1 cannot handle a private certificate if 
> there are multiple DNS entries.
> Sorry for not finding out why in release 1.12.0, so it could have been fixed 
> together with the other certificate issues that should have been fixed in 
> 1.12.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7860) Multiple dns records in certificate does not work

2020-09-29 Thread Jens M Kofoed (Jira)
Jens M Kofoed created NIFI-7860:
---

 Summary: Multiple dns records in certificate does not work
 Key: NIFI-7860
 URL: https://issues.apache.org/jira/browse/NIFI-7860
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.12.1, 1.12.0
 Environment: Ubuntu 18.04 with NIFI 1.12.1
Reporter: Jens M Kofoed


I have created a completely new server for NiFi 1.12.0 and created a new 
certificate. It is not a wildcard certificate, but it has 2 entries (hostname 
and FQDN) in the DNS record. But the server does not start and gives an error. 
Looking through reported bugs, I found that this issue was already reported. 
Therefore I waited for release 1.12.1.

Release 1.12.1 has exactly the same error. I have now created a new certificate 
with only the FQDN in the DNS entry, and this is working.

So both releases 1.12.0 and 1.12.1 cannot handle a private certificate if 
there are multiple DNS entries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] themanifold commented on pull request #4552: NIFI-7845 Better handling on malformed/empty strings

2020-09-29 Thread GitBox


themanifold commented on pull request #4552:
URL: https://github.com/apache/nifi/pull/4552#issuecomment-700503064


   > @themanifold Could you please clarify your use case and the problem you 
would like to solve in more detail?
   > 
   > If I understand correctly, `ConsumeAMQP` can emit the following 
`amqp$headers` attribute values (as strings):
   > 
   > * `{a=1}` if there is 1 key=value pair in the message headers
   > 
   > * `{a=1,b=2}` if there are 2 key=value pairs in the headers of the 
received message
   > 
   > * `{}` if the headers of the received message is empty
   > 
   > 
   > And you would like `PublishAMQP` to be able to parse these strings 
properly. Eg. in case of `{a=1,b=2}`, remove the curly braces and add 2 headers 
(`a=1` and `b=2`), and in case of `{}`, no headers need to be added.
   > 
   > So your flow is `ConsumeAMQP` (from a queue) => process data => 
`PublishAMQP` (to another queue with keeping the same headers). Is my 
understanding correct?
   
   Everything here is correct apart from the flow. My initial attempt in NiFi 
didn't actually do any data processing. I did: `ConsumeAMQP` (from a queue) => 
`PublishAMQP` (exchange). If you try this in NiFi, you'll find that 
`PublishAMQP` can't cope with `{}` (if your incoming message has no headers). 
Additionally, you'll see that `ConsumeAMQP` actually can't cope with curly 
braces full stop and will end up making some invalid headers - specifically the 
first key and the last key will have `{` prepended and `}` appended, 
respectively. When I say that it can't cope, I mean that it throws up a warning 
that will obscure any real errors.
   
   My workaround right now is to actually delete the empty header using the 
`UpdateAttribute` processor.
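
   For reference, a hedged sketch of tolerant parsing for the `amqp$headers` 
string format described above (`{a=1,b=2}`, `{}`); purely illustrative, not the 
code from this PR:

{code}
import java.util.HashMap;
import java.util.Map;

// Parses a ConsumeAMQP-style headers attribute such as "{a=1,b=2}" or "{}".
static Map<String, String> parseAmqpHeaders(final String raw) {
    final Map<String, String> headers = new HashMap<>();
    if (raw == null) {
        return headers;
    }
    String body = raw.trim();
    // Strip the surrounding braces so the first/last keys stay valid.
    if (body.startsWith("{") && body.endsWith("}")) {
        body = body.substring(1, body.length() - 1);
    }
    if (body.isEmpty()) {
        return headers; // "{}" means no headers at all
    }
    for (final String pair : body.split(",")) {
        final String[] kv = pair.split("=", 2);
        headers.put(kv[0].trim(), kv.length > 1 ? kv[1].trim() : "");
    }
    return headers;
}
{code}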
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] turcsanyip edited a comment on pull request #4552: NIFI-7845 Better handling on malformed/empty strings

2020-09-29 Thread GitBox


turcsanyip edited a comment on pull request #4552:
URL: https://github.com/apache/nifi/pull/4552#issuecomment-700496947


   @themanifold Could you please clarify your use case and the problem you 
would like to solve in more detail?
   
   If I understand correctly, `ConsumeAMQP` can emit the following 
`amqp$headers` attribute values (as strings):
   - `{a=1}` if there is 1 key=value pair in the message headers
   - `{a=1,b=2}` if there are 2 key=value pairs in the headers of the received 
message
   - `{}` if the headers of the received message is empty
   
   And you would like `PublishAMQP` to be able to parse these strings properly. 
Eg. in case of `{a=1,b=2}`, remove the curly braces and add 2 headers (`a=1` 
and `b=2`), and in case of `{}`, no headers need to be added.
   
   So your flow is `ConsumeAMQP` (from a queue) => process data => 
`PublishAMQP` (to another queue with keeping the same headers). Is my 
understanding correct?
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] turcsanyip commented on pull request #4552: NIFI-7845 Better handling on malformed/empty strings

2020-09-29 Thread GitBox


turcsanyip commented on pull request #4552:
URL: https://github.com/apache/nifi/pull/4552#issuecomment-700496947


   @themanifold Could you please clarify your use case and the problem you 
would like to solve in more detail?
   
   If I understand correctly, `ConsumeAMQP` can emit the following 
`amqp$headers` attribute values (as strings):
   - `{a=1}` if there is 1 key=value pair in the message headers
   - `{a=1,b=2}` if there are 2 key=value pairs in the headers of the received 
message
   - `{}` if the headers of the received message is empty
   
   And you would like 'PublishAMQP' to be able to parse these strings properly. 
Eg. in case of `{a=1,b=2}`, remove the curly braces and add 2 headers (`a=1` 
and `b=2`), and in case of `{}`, no headers need to be added.
   
   So your flow is `ConsumeAMQP` (from a queue) => process data => 
'PublishAMQP' (to another queue with keeping the same headers). Is my 
understanding correct?
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org