[GitHub] [nifi] thanatas commented on pull request #4065: NIFI-4239 - Adding CaptureChangePostgreSQL processor to capture data changes (INSERT/UPDATE/DELETE) in PostgreSQL tables via Logical Replicati

2020-06-18 Thread GitBox


thanatas commented on pull request #4065:
URL: https://github.com/apache/nifi/pull/4065#issuecomment-646444884


   Thanks for the NAR file,
   
   I will try it, you are great.
   *Erkan Vural*
   
   
   
   On Fri, Jun 19, 2020 at 2:02 AM arkonv  wrote:
   
   > > This is a great feature. Is there any built version of it? Unfortunately it has not been merged yet.
   > > Building is not feasible. Thanks.
   >
   > I have a NAR file here:
   > https://1drv.ms/u/s!AhDKWcSHcK7Gljm8RGvm-aiiNkne?e=hYW9B1
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] alopresto opened a new pull request #4351: NIFI-7558 Fixed CatchAllFilter init logic by calling super.init().

2020-06-18 Thread GitBox


alopresto opened a new pull request #4351:
URL: https://github.com/apache/nifi/pull/4351


   Renamed legacy terms.
   Added unit tests.
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] arkonv edited a comment on pull request #4065: NIFI-4239 - Adding CaptureChangePostgreSQL processor to capture data changes (INSERT/UPDATE/DELETE) in PostgreSQL tables via Logical Repl

2020-06-18 Thread GitBox


arkonv edited a comment on pull request #4065:
URL: https://github.com/apache/nifi/pull/4065#issuecomment-646347139


   > This is a great feature. Is there any built version of it? Unfortunately it has not been merged yet.
   > Building is not feasible. Thanks.
   
   I have a NAR file here: https://1drv.ms/u/s!AhDKWcSHcK7Gljm8RGvm-aiiNkne?e=hYW9B1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] arkonv commented on pull request #4065: NIFI-4239 - Adding CaptureChangePostgreSQL processor to capture data changes (INSERT/UPDATE/DELETE) in PostgreSQL tables via Logical Replication

2020-06-18 Thread GitBox


arkonv commented on pull request #4065:
URL: https://github.com/apache/nifi/pull/4065#issuecomment-646347139


   > This is a great feature. Is there any built version of it? Unfortunately it has not been merged yet.
   > Building is not feasible. Thanks.
   I have a NAR file here: https://1drv.ms/u/s!AhDKWcSHcK7Gljm8RGvm-aiiNkne?e=hYW9B1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-7559) NiFi Cluster page shows multiple entries for same node identity

2020-06-18 Thread Mohammed Nadeem (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammed Nadeem updated NIFI-7559:
--
Attachment: (was: image-2020-06-19-02-01-35-512.png)

> NiFi Cluster page shows multiple entries for same node identity
> ---
>
> Key: NIFI-7559
> URL: https://issues.apache.org/jira/browse/NIFI-7559
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 1.9.1, 1.9.2
> Environment: Linux
>Reporter: Mohammed Nadeem
>Priority: Major
> Attachments: image-2020-06-19-02-52-35-420.png
>
>
> In a containerized environment, when a NiFi node fails the health probes 
> performed by the kubelet (Kubernetes), the node is killed and restarted. 
> This can loop continuously, causing the *NiFi Cluster page* ( Hamburger Menu -> 
> Cluster ) to show multiple entries for the same node as it is restarted 
> again and again.
> To replicate the issue:
> 1. Set up a single-node cluster (a StatefulSet application with replicaCount = 1).
> 2. Configure probes such as Startup or Liveness with a duration shorter than 
> the regular node startup time.
> 3. Scale up (replicaCount), for example from 1 to 2. The newly scaled 
> node *node-1* is then killed and restarted continuously by the kubelet.
> Please check the screenshot below.
> *nifi-1:8080* is added 7 times when it should be added just once. This 
> results in an incorrect *1/8* cluster summary when the expected value is 
> 1/2 nodes.
> !image-2020-06-19-02-50-27-674.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7559) NiFi Cluster page shows multiple entries for same node identity

2020-06-18 Thread Mohammed Nadeem (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammed Nadeem updated NIFI-7559:
--
Attachment: (was: image-2020-06-19-02-50-27-674.png)

> NiFi Cluster page shows multiple entries for same node identity
> ---
>
> Key: NIFI-7559
> URL: https://issues.apache.org/jira/browse/NIFI-7559
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Affects Versions: 1.9.1, 1.9.2
> Environment: Linux
>Reporter: Mohammed Nadeem
>Priority: Major
> Attachments: image-2020-06-19-02-52-35-420.png
>
>
> In a containerized environment, when a NiFi node fails the health probes 
> performed by the kubelet (Kubernetes), the node is killed and restarted. 
> This can loop continuously, causing the *NiFi Cluster page* ( Hamburger Menu -> 
> Cluster ) to show multiple entries for the same node as it is restarted 
> again and again.
> To replicate the issue:
> 1. Set up a single-node cluster (a StatefulSet application with replicaCount = 1).
> 2. Configure probes such as Startup or Liveness with a duration shorter than 
> the regular node startup time.
> 3. Scale up (replicaCount), for example from 1 to 2. The newly scaled 
> node *node-1* is then killed and restarted continuously by the kubelet.
> Please check the screenshot below.
> *nifi-1:8080* is added 7 times when it should be added just once. This 
> results in an incorrect *1/8* cluster summary when the expected value is 
> 1/2 nodes.
> !image-2020-06-19-02-50-27-674.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7559) NiFi Cluster page shows multiple entries for same node identity

2020-06-18 Thread Mohammed Nadeem (Jira)
Mohammed Nadeem created NIFI-7559:
-

 Summary: NiFi Cluster page shows multiple entries for same node 
identity
 Key: NIFI-7559
 URL: https://issues.apache.org/jira/browse/NIFI-7559
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework, Core UI
Affects Versions: 1.9.2, 1.9.1
 Environment: Linux
Reporter: Mohammed Nadeem
 Attachments: image-2020-06-19-02-01-35-512.png, 
image-2020-06-19-02-50-27-674.png, image-2020-06-19-02-52-35-420.png

In a containerized environment, when a NiFi node fails the health probes 
performed by the kubelet (Kubernetes), the node is killed and restarted. This 
can loop continuously, causing the *NiFi Cluster page* ( Hamburger Menu -> Cluster ) 
to show multiple entries for the same node as it is restarted again and again.

To replicate the issue:
1. Set up a single-node cluster (a StatefulSet application with replicaCount = 1).
2. Configure probes such as Startup or Liveness with a duration shorter than the 
regular node startup time.
3. Scale up (replicaCount), for example from 1 to 2. The newly scaled node 
*node-1* is then killed and restarted continuously by the kubelet.

Please check the screenshot below.

*nifi-1:8080* is added 7 times when it should be added just once. This results in 
an incorrect *1/8* cluster summary when the expected value is 1/2 nodes.

!image-2020-06-19-02-50-27-674.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] readl1 commented on a change in pull request #2231: NIFI-4521 MS SQL CDC Processor

2020-06-18 Thread GitBox


readl1 commented on a change in pull request #2231:
URL: https://github.com/apache/nifi/pull/2231#discussion_r442460589



##
File path: 
nifi-nar-bundles/nifi-cdc/nifi-cdc-mssql-bundle/nifi-cdc-mssql-processors/src/main/java/org/apache/nifi/cdc/mssql/processors/CaptureChangeMSSQL.java
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.cdc.mssql.processors;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.cdc.CDCException;
+import org.apache.nifi.cdc.mssql.MSSQLCDCUtils;
+import org.apache.nifi.cdc.mssql.event.MSSQLTableInfo;
+import org.apache.nifi.cdc.mssql.event.TableCapturePlan;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.dbcp.DBCPService;
+import org.apache.nifi.expression.AttributeExpression;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.io.StreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.ResultSetRecordSet;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Timestamp;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
+
+@TriggerSerially
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@Tags({"sql", "jdbc", "cdc", "mssql"})
+@CapabilityDescription("Retrieves Change Data Capture (CDC) events from a Microsoft SQL database. CDC Events include INSERT, UPDATE, DELETE operations. Events "
++ "for each table are output as Record Sets, ordered by the time, and sequence, at which the operation occurred. In a cluster, it is recommended to run "
++ "this processor on primary only.")
+@Stateful(scopes = Scope.CLUSTER, description = "Information including the timestamp of the last CDC event per table in the database is stored by this processor, so "
++ "that it can continue from the same point in time if restarted.")
+@WritesAttributes({
+@WritesAttribute(attribute = "tablename", description="Name of the table this changeset was captured from."),
+@WritesAttribute(attribute="mssqlcdc.row.count", description="The number of rows in this changeset"),
+  
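
Since the diff above is truncated by the archive, here is a brief, hedged illustration of the @Stateful mechanism those annotations describe: keeping the per-table timestamp of the last CDC event in cluster-scoped state via NiFi's StateManager so the processor can resume after a restart. The class and method names are placeholders for illustration, not code from this PR.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.nifi.components.state.Scope;
import org.apache.nifi.components.state.StateManager;
import org.apache.nifi.components.state.StateMap;

// Hypothetical helper: persists the timestamp of the last CDC event seen per table
// in cluster-scoped state, so capture can resume from that point after a restart.
final class CdcStateHelper {

    // Read the last captured timestamp for a table; returns null if none has been stored yet.
    static String getLastTimestamp(final StateManager stateManager, final String tableName) throws IOException {
        final StateMap stateMap = stateManager.getState(Scope.CLUSTER);
        return stateMap.get(tableName);
    }

    // Store the timestamp of the most recent CDC event for a table.
    static void setLastTimestamp(final StateManager stateManager, final String tableName, final String timestamp)
            throws IOException {
        final Map<String, String> newState = new HashMap<>(stateManager.getState(Scope.CLUSTER).toMap());
        newState.put(tableName, timestamp);
        stateManager.setState(newState, Scope.CLUSTER);
    }
}
```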

[GitHub] [nifi] markap14 commented on a change in pull request #4337: NIFI-7536: Fix to improve performance of updating parameters

2020-06-18 Thread GitBox


markap14 commented on a change in pull request #4337:
URL: https://github.com/apache/nifi/pull/4337#discussion_r442458341



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/ProcessorSummaryStatusEndpointMerger.java
##
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.cluster.coordination.http.endpoints;
+
+import org.apache.nifi.cluster.coordination.http.EndpointResponseMerger;
+import org.apache.nifi.cluster.manager.NodeResponse;
+import org.apache.nifi.cluster.manager.PermissionsDtoMerger;
+import org.apache.nifi.controller.status.RunStatus;
+import org.apache.nifi.web.api.dto.ProcessorScheduleSummaryDTO;
+import org.apache.nifi.web.api.entity.ProcessorScheduleSummariesEntity;
+import org.apache.nifi.web.api.entity.ProcessorScheduleSummaryEntity;
+
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public class ProcessorSummaryStatusEndpointMerger implements 
EndpointResponseMerger {
+public static final String SCHEDULE_SUMMARY_URI = 
"/nifi-api/processors/schedule-summaries/query";
+
+@Override
+public boolean canHandle(final URI uri, final String method) {
+return "POST".equalsIgnoreCase(method) && 
SCHEDULE_SUMMARY_URI.equals(uri.getPath());
+}
+
+@Override
+public NodeResponse merge(final URI uri, final String method, final 
Set<NodeResponse> successfulResponses, final Set<NodeResponse> 
problematicResponses, final NodeResponse clientResponse) {
+if (!canHandle(uri, method)) {
+throw new IllegalArgumentException("Cannot use Endpoint Mapper of 
type " + getClass().getSimpleName() + " to map responses for URI " + uri + ", 
HTTP Method " + method);
+}
+
+final ProcessorScheduleSummariesEntity responseEntity = 
clientResponse.getClientResponse().readEntity(ProcessorScheduleSummariesEntity.class);
+
+// Create mapping of Processor ID to its schedule Summary.
+final Map<String, ProcessorScheduleSummaryEntity> scheduleSummaries = 
responseEntity.getScheduleSummaries().stream()
+.collect(Collectors.toMap(entity -> 
entity.getScheduleSummary().getId(), entity -> entity));
+
+for (final NodeResponse nodeResponse : successfulResponses) {
+final ProcessorScheduleSummariesEntity nodeResponseEntity = 
nodeResponse == clientResponse ? responseEntity :
+
nodeResponse.getClientResponse().readEntity(ProcessorScheduleSummariesEntity.class);
+
+for (final ProcessorScheduleSummaryEntity processorEntity : 
nodeResponseEntity.getScheduleSummaries()) {
+final String processorId = 
processorEntity.getScheduleSummary().getId();
+
+final ProcessorScheduleSummaryEntity mergedEntity = 
scheduleSummaries.computeIfAbsent(processorId, id -> new 
ProcessorScheduleSummaryEntity());
+merge(mergedEntity, processorEntity);
+}
+}
+
+final ProcessorScheduleSummariesEntity mergedEntity = new 
ProcessorScheduleSummariesEntity();
+mergedEntity.setScheduleSummaries(new 
ArrayList<>(scheduleSummaries.values()));
+return new NodeResponse(clientResponse, mergedEntity);
+}
+
+private void merge(final ProcessorScheduleSummaryEntity target, final 
ProcessorScheduleSummaryEntity additional) {
+PermissionsDtoMerger.mergePermissions(target.getPermissions(), 
additional.getPermissions());
+
+final ProcessorScheduleSummaryDTO targetSummaryDto = 
target.getScheduleSummary();
+final ProcessorScheduleSummaryDTO additionalSummaryDto = 
additional.getScheduleSummary();
+
+// If name is null, it's because of permissions, so we want to nullify 
it in the target.
+if (additionalSummaryDto.getName() == null) {
+targetSummaryDto.setName(null);
+}
+
+
targetSummaryDto.setActiveThreadCount(targetSummaryDto.getActiveThreadCount() + 
additionalSummaryDto.getActiveThreadCount());
+
+final String additionalRunStatus = additionalSummaryDto.getRunStatus();

Review comment:
   Looking at how we merge the 

[GitHub] [nifi] markap14 commented on a change in pull request #4337: NIFI-7536: Fix to improve performance of updating parameters

2020-06-18 Thread GitBox


markap14 commented on a change in pull request #4337:
URL: https://github.com/apache/nifi/pull/4337#discussion_r442450270



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/ProcessorSummaryStatusEndpointMerger.java
##
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.cluster.coordination.http.endpoints;
+
+import org.apache.nifi.cluster.coordination.http.EndpointResponseMerger;
+import org.apache.nifi.cluster.manager.NodeResponse;
+import org.apache.nifi.cluster.manager.PermissionsDtoMerger;
+import org.apache.nifi.controller.status.RunStatus;
+import org.apache.nifi.web.api.dto.ProcessorScheduleSummaryDTO;
+import org.apache.nifi.web.api.entity.ProcessorScheduleSummariesEntity;
+import org.apache.nifi.web.api.entity.ProcessorScheduleSummaryEntity;
+
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public class ProcessorSummaryStatusEndpointMerger implements 
EndpointResponseMerger {
+public static final String SCHEDULE_SUMMARY_URI = 
"/nifi-api/processors/schedule-summaries/query";
+
+@Override
+public boolean canHandle(final URI uri, final String method) {
+return "POST".equalsIgnoreCase(method) && 
SCHEDULE_SUMMARY_URI.equals(uri.getPath());
+}
+
+@Override
+public NodeResponse merge(final URI uri, final String method, final 
Set<NodeResponse> successfulResponses, final Set<NodeResponse> 
problematicResponses, final NodeResponse clientResponse) {
+if (!canHandle(uri, method)) {
+throw new IllegalArgumentException("Cannot use Endpoint Mapper of 
type " + getClass().getSimpleName() + " to map responses for URI " + uri + ", 
HTTP Method " + method);
+}
+
+final ProcessorScheduleSummariesEntity responseEntity = 
clientResponse.getClientResponse().readEntity(ProcessorScheduleSummariesEntity.class);
+
+// Create mapping of Processor ID to its schedule Summary.
+final Map<String, ProcessorScheduleSummaryEntity> scheduleSummaries = 
responseEntity.getScheduleSummaries().stream()
+.collect(Collectors.toMap(entity -> 
entity.getScheduleSummary().getId(), entity -> entity));
+
+for (final NodeResponse nodeResponse : successfulResponses) {
+final ProcessorScheduleSummariesEntity nodeResponseEntity = 
nodeResponse == clientResponse ? responseEntity :
+
nodeResponse.getClientResponse().readEntity(ProcessorScheduleSummariesEntity.class);
+
+for (final ProcessorScheduleSummaryEntity processorEntity : 
nodeResponseEntity.getScheduleSummaries()) {
+final String processorId = 
processorEntity.getScheduleSummary().getId();
+
+final ProcessorScheduleSummaryEntity mergedEntity = 
scheduleSummaries.computeIfAbsent(processorId, id -> new 
ProcessorScheduleSummaryEntity());
+merge(mergedEntity, processorEntity);
+}
+}
+
+final ProcessorScheduleSummariesEntity mergedEntity = new 
ProcessorScheduleSummariesEntity();
+mergedEntity.setScheduleSummaries(new 
ArrayList<>(scheduleSummaries.values()));
+return new NodeResponse(clientResponse, mergedEntity);
+}
+
+private void merge(final ProcessorScheduleSummaryEntity target, final 
ProcessorScheduleSummaryEntity additional) {
+PermissionsDtoMerger.mergePermissions(target.getPermissions(), 
additional.getPermissions());
+
+final ProcessorScheduleSummaryDTO targetSummaryDto = 
target.getScheduleSummary();
+final ProcessorScheduleSummaryDTO additionalSummaryDto = 
additional.getScheduleSummary();
+
+// If name is null, it's because of permissions, so we want to nullify 
it in the target.
+if (additionalSummaryDto.getName() == null) {
+targetSummaryDto.setName(null);
+}
+
+
targetSummaryDto.setActiveThreadCount(targetSummaryDto.getActiveThreadCount() + 
additionalSummaryDto.getActiveThreadCount());
+
+final String additionalRunStatus = additionalSummaryDto.getRunStatus();

Review comment:
   Wow, yes. Thanks. Now I understand. 
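
Several of the review comments in this exchange are truncated in the archive, but the discussion concerns whether the merged run status can depend on the order in which node responses are processed. Purely as a generic illustration (not NiFi's actual merge rule, and the status ranking below is an assumption), an order-independent merge can be written by ranking statuses and always keeping the highest-ranked one:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative only: rank statuses by significance and keep the higher-ranked one,
// so the merged result is the same no matter which node's response is seen first.
final class RunStatusMergeSketch {

    // Assumed ordering, from least to most significant for merging purposes.
    private static final List<String> PRECEDENCE =
            Arrays.asList("Stopped", "Running", "Disabled", "Invalid", "Validating");

    static String merge(final String a, final String b) {
        if (a == null) {
            return b;
        }
        if (b == null) {
            return a;
        }
        return PRECEDENCE.indexOf(a) >= PRECEDENCE.indexOf(b) ? a : b;
    }

    public static void main(final String[] args) {
        System.out.println(merge("Running", "Validating")); // Validating
        System.out.println(merge("Validating", "Running")); // Validating (same result, order does not matter)
    }
}
```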

[GitHub] [nifi] tpalfy commented on a change in pull request #4337: NIFI-7536: Fix to improve performance of updating parameters

2020-06-18 Thread GitBox


tpalfy commented on a change in pull request #4337:
URL: https://github.com/apache/nifi/pull/4337#discussion_r442439764



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/ProcessorSummaryStatusEndpointMerger.java
##
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.cluster.coordination.http.endpoints;
+
+import org.apache.nifi.cluster.coordination.http.EndpointResponseMerger;
+import org.apache.nifi.cluster.manager.NodeResponse;
+import org.apache.nifi.cluster.manager.PermissionsDtoMerger;
+import org.apache.nifi.controller.status.RunStatus;
+import org.apache.nifi.web.api.dto.ProcessorScheduleSummaryDTO;
+import org.apache.nifi.web.api.entity.ProcessorScheduleSummariesEntity;
+import org.apache.nifi.web.api.entity.ProcessorScheduleSummaryEntity;
+
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public class ProcessorSummaryStatusEndpointMerger implements 
EndpointResponseMerger {
+public static final String SCHEDULE_SUMMARY_URI = 
"/nifi-api/processors/schedule-summaries/query";
+
+@Override
+public boolean canHandle(final URI uri, final String method) {
+return "POST".equalsIgnoreCase(method) && 
SCHEDULE_SUMMARY_URI.equals(uri.getPath());
+}
+
+@Override
+public NodeResponse merge(final URI uri, final String method, final 
Set<NodeResponse> successfulResponses, final Set<NodeResponse> 
problematicResponses, final NodeResponse clientResponse) {
+if (!canHandle(uri, method)) {
+throw new IllegalArgumentException("Cannot use Endpoint Mapper of 
type " + getClass().getSimpleName() + " to map responses for URI " + uri + ", 
HTTP Method " + method);
+}
+
+final ProcessorScheduleSummariesEntity responseEntity = 
clientResponse.getClientResponse().readEntity(ProcessorScheduleSummariesEntity.class);
+
+// Create mapping of Processor ID to its schedule Summary.
+final Map<String, ProcessorScheduleSummaryEntity> scheduleSummaries = 
responseEntity.getScheduleSummaries().stream()
+.collect(Collectors.toMap(entity -> 
entity.getScheduleSummary().getId(), entity -> entity));
+
+for (final NodeResponse nodeResponse : successfulResponses) {
+final ProcessorScheduleSummariesEntity nodeResponseEntity = 
nodeResponse == clientResponse ? responseEntity :
+
nodeResponse.getClientResponse().readEntity(ProcessorScheduleSummariesEntity.class);
+
+for (final ProcessorScheduleSummaryEntity processorEntity : 
nodeResponseEntity.getScheduleSummaries()) {
+final String processorId = 
processorEntity.getScheduleSummary().getId();
+
+final ProcessorScheduleSummaryEntity mergedEntity = 
scheduleSummaries.computeIfAbsent(processorId, id -> new 
ProcessorScheduleSummaryEntity());
+merge(mergedEntity, processorEntity);
+}
+}
+
+final ProcessorScheduleSummariesEntity mergedEntity = new 
ProcessorScheduleSummariesEntity();
+mergedEntity.setScheduleSummaries(new 
ArrayList<>(scheduleSummaries.values()));
+return new NodeResponse(clientResponse, mergedEntity);
+}
+
+private void merge(final ProcessorScheduleSummaryEntity target, final 
ProcessorScheduleSummaryEntity additional) {
+PermissionsDtoMerger.mergePermissions(target.getPermissions(), 
additional.getPermissions());
+
+final ProcessorScheduleSummaryDTO targetSummaryDto = 
target.getScheduleSummary();
+final ProcessorScheduleSummaryDTO additionalSummaryDto = 
additional.getScheduleSummary();
+
+// If name is null, it's because of permissions, so we want to nullify 
it in the target.
+if (additionalSummaryDto.getName() == null) {
+targetSummaryDto.setName(null);
+}
+
+
targetSummaryDto.setActiveThreadCount(targetSummaryDto.getActiveThreadCount() + 
additionalSummaryDto.getActiveThreadCount());
+
+final String additionalRunStatus = additionalSummaryDto.getRunStatus();

Review comment:
   My issue is that the outcome depends 

[jira] [Created] (NIFI-7558) Context path filtering does not work when behind a reverse proxy with a context path

2020-06-18 Thread Andy LoPresto (Jira)
Andy LoPresto created NIFI-7558:
---

 Summary: Context path filtering does not work when behind a 
reverse proxy with a context path
 Key: NIFI-7558
 URL: https://issues.apache.org/jira/browse/NIFI-7558
 Project: Apache NiFi
  Issue Type: Bug
  Components: Configuration, Core Framework
Affects Versions: 1.11.4
Reporter: Andy LoPresto
Assignee: Andy LoPresto


As discussed with [~markap14], the {{CatchAllFilter#init()}} method doesn't 
call {{super.init()}}, so this fails. I will make the change and improve the related 
terminology (with a backward-compatible configuration). 
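
To make the failure mode concrete, below is a minimal, hypothetical sketch (placeholder class names and init parameter, not NiFi's actual filters) of why an overriding init() must delegate to super.init(): without the call, the parent's setup never runs and the filter cannot do its job.

```java
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Parent filter whose init() performs setup that subclasses rely on.
class BaseRestApiFilter implements Filter {
    protected String allowedContextPaths;

    @Override
    public void init(final FilterConfig config) throws ServletException {
        // Hypothetical init parameter; the point is that the parent reads configuration here.
        allowedContextPaths = config.getInitParameter("allowedContextPaths");
    }

    @Override
    public void doFilter(final ServletRequest req, final ServletResponse res, final FilterChain chain)
            throws IOException, ServletException {
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {
    }
}

// Subclass fix: delegate to super.init() so allowedContextPaths is actually populated.
class CatchAllLikeFilter extends BaseRestApiFilter {
    @Override
    public void init(final FilterConfig config) throws ServletException {
        super.init(config); // Omitting this call leaves the parent's state null and filtering fails.
    }
}
```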



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] tpalfy opened a new pull request #4350: NIFI-6934 In PutDatabaseRecord added Postgres UPSERT support

2020-06-18 Thread GitBox


tpalfy opened a new pull request #4350:
URL: https://github.com/apache/nifi/pull/4350


   https://issues.apache.org/jira/browse/NIFI-6934
   
   Implemented using `DatabaseAdapter`
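
For context on what PostgreSQL UPSERT support entails, here is a hedged sketch of the kind of SQL a Postgres-specific adapter needs to generate (INSERT ... ON CONFLICT ... DO UPDATE). The method name and parameters below are illustrative and are not claimed to match this PR's `DatabaseAdapter` changes.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Builds a parameterized PostgreSQL upsert statement for the given table, columns and key columns.
final class PostgresUpsertSketch {

    static String buildUpsert(final String table, final List<String> columns, final List<String> keyColumns) {
        final String columnList = String.join(", ", columns);
        final String placeholders = columns.stream().map(c -> "?").collect(Collectors.joining(", "));
        final String conflictKeys = String.join(", ", keyColumns);
        final String updates = columns.stream()
                .filter(c -> !keyColumns.contains(c))
                .map(c -> c + " = EXCLUDED." + c)
                .collect(Collectors.joining(", "));
        return "INSERT INTO " + table + " (" + columnList + ") VALUES (" + placeholders + ")"
                + " ON CONFLICT (" + conflictKeys + ") DO UPDATE SET " + updates;
    }

    public static void main(final String[] args) {
        // Prints: INSERT INTO users (id, name) VALUES (?, ?) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name
        System.out.println(buildUpsert("users", Arrays.asList("id", "name"), Arrays.asList("id")));
    }
}
```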
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on a change in pull request #4337: NIFI-7536: Fix to improve performance of updating parameters

2020-06-18 Thread GitBox


markap14 commented on a change in pull request #4337:
URL: https://github.com/apache/nifi/pull/4337#discussion_r442399466



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/ProcessorRunStatusDetailsEndpointMerger.java
##
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.cluster.coordination.http.endpoints;
+
+import org.apache.nifi.cluster.coordination.http.EndpointResponseMerger;
+import org.apache.nifi.cluster.manager.NodeResponse;
+import org.apache.nifi.cluster.manager.PermissionsDtoMerger;
+import org.apache.nifi.controller.status.RunStatus;
+import org.apache.nifi.web.api.dto.ProcessorRunStatusDetailsDTO;
+import org.apache.nifi.web.api.entity.ProcessorsRunStatusDetailsEntity;
+import org.apache.nifi.web.api.entity.ProcessorRunStatusDetailsEntity;
+
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public class ProcessorRunStatusDetailsEndpointMerger implements 
EndpointResponseMerger {
+public static final String SCHEDULE_SUMMARY_URI = 
"/nifi-api/processors/run-status-details/queries";
+
+@Override
+public boolean canHandle(final URI uri, final String method) {
+return "POST".equalsIgnoreCase(method) && 
SCHEDULE_SUMMARY_URI.equals(uri.getPath());
+}
+
+@Override
+public NodeResponse merge(final URI uri, final String method, final 
Set<NodeResponse> successfulResponses, final Set<NodeResponse> 
problematicResponses, final NodeResponse clientResponse) {
+if (!canHandle(uri, method)) {
+throw new IllegalArgumentException("Cannot use Endpoint Mapper of 
type " + getClass().getSimpleName() + " to map responses for URI " + uri + ", 
HTTP Method " + method);
+}
+
+final ProcessorsRunStatusDetailsEntity responseEntity = 
clientResponse.getClientResponse().readEntity(ProcessorsRunStatusDetailsEntity.class);
+
+// Create mapping of Processor ID to its schedule Summary.
+final Map<String, ProcessorRunStatusDetailsEntity> scheduleSummaries = 
responseEntity.getRunStatusDetails().stream()
+.collect(Collectors.toMap(entity -> 
entity.getRunStatusDetails().getId(), entity -> entity));
+
+for (final NodeResponse nodeResponse : successfulResponses) {
+final ProcessorsRunStatusDetailsEntity nodeResponseEntity = 
nodeResponse == clientResponse ? responseEntity :
+
nodeResponse.getClientResponse().readEntity(ProcessorsRunStatusDetailsEntity.class);
+
+for (final ProcessorRunStatusDetailsEntity processorEntity : 
nodeResponseEntity.getRunStatusDetails()) {
+final String processorId = 
processorEntity.getRunStatusDetails().getId();
+
+final ProcessorRunStatusDetailsEntity mergedEntity = 
scheduleSummaries.computeIfAbsent(processorId, id -> new 
ProcessorRunStatusDetailsEntity());
+merge(mergedEntity, processorEntity);
+}
+}
+
+final ProcessorsRunStatusDetailsEntity mergedEntity = new 
ProcessorsRunStatusDetailsEntity();
+mergedEntity.setRunStatusDetails(new 
ArrayList<>(scheduleSummaries.values()));
+return new NodeResponse(clientResponse, mergedEntity);
+}
+
+private void merge(final ProcessorRunStatusDetailsEntity target, final 
ProcessorRunStatusDetailsEntity additional) {
+PermissionsDtoMerger.mergePermissions(target.getPermissions(), 
additional.getPermissions());
+
+final ProcessorRunStatusDetailsDTO targetSummaryDto = 
target.getRunStatusDetails();
+final ProcessorRunStatusDetailsDTO additionalSummaryDto = 
additional.getRunStatusDetails();
+
+// If name is null, it's because of permissions, so we want to nullify 
it in the target.
+if (additionalSummaryDto.getName() == null) {
+targetSummaryDto.setName(null);
+}
+
+
targetSummaryDto.setActiveThreadCount(targetSummaryDto.getActiveThreadCount() + 
additionalSummaryDto.getActiveThreadCount());
+
+final String additionalRunStatus = additionalSummaryDto.getRunStatus();
+if 

[GitHub] [nifi] markap14 commented on a change in pull request #4337: NIFI-7536: Fix to improve performance of updating parameters

2020-06-18 Thread GitBox


markap14 commented on a change in pull request #4337:
URL: https://github.com/apache/nifi/pull/4337#discussion_r442398673



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/ProcessorRunStatusDetailsEndpointMerger.java
##
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.cluster.coordination.http.endpoints;
+
+import org.apache.nifi.cluster.coordination.http.EndpointResponseMerger;
+import org.apache.nifi.cluster.manager.NodeResponse;
+import org.apache.nifi.cluster.manager.PermissionsDtoMerger;
+import org.apache.nifi.controller.status.RunStatus;
+import org.apache.nifi.web.api.dto.ProcessorRunStatusDetailsDTO;
+import org.apache.nifi.web.api.entity.ProcessorsRunStatusDetailsEntity;
+import org.apache.nifi.web.api.entity.ProcessorRunStatusDetailsEntity;
+
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public class ProcessorRunStatusDetailsEndpointMerger implements 
EndpointResponseMerger {
+public static final String SCHEDULE_SUMMARY_URI = 
"/nifi-api/processors/run-status-details/queries";
+
+@Override
+public boolean canHandle(final URI uri, final String method) {
+return "POST".equalsIgnoreCase(method) && 
SCHEDULE_SUMMARY_URI.equals(uri.getPath());
+}
+
+@Override
+public NodeResponse merge(final URI uri, final String method, final 
Set<NodeResponse> successfulResponses, final Set<NodeResponse> 
problematicResponses, final NodeResponse clientResponse) {
+if (!canHandle(uri, method)) {
+throw new IllegalArgumentException("Cannot use Endpoint Mapper of 
type " + getClass().getSimpleName() + " to map responses for URI " + uri + ", 
HTTP Method " + method);
+}
+
+final ProcessorsRunStatusDetailsEntity responseEntity = 
clientResponse.getClientResponse().readEntity(ProcessorsRunStatusDetailsEntity.class);
+
+// Create mapping of Processor ID to its schedule Summary.
+final Map<String, ProcessorRunStatusDetailsEntity> scheduleSummaries = 
responseEntity.getRunStatusDetails().stream()
+.collect(Collectors.toMap(entity -> 
entity.getRunStatusDetails().getId(), entity -> entity));
+
+for (final NodeResponse nodeResponse : successfulResponses) {
+final ProcessorsRunStatusDetailsEntity nodeResponseEntity = 
nodeResponse == clientResponse ? responseEntity :
+
nodeResponse.getClientResponse().readEntity(ProcessorsRunStatusDetailsEntity.class);
+
+for (final ProcessorRunStatusDetailsEntity processorEntity : 
nodeResponseEntity.getRunStatusDetails()) {
+final String processorId = 
processorEntity.getRunStatusDetails().getId();
+
+final ProcessorRunStatusDetailsEntity mergedEntity = 
scheduleSummaries.computeIfAbsent(processorId, id -> new 
ProcessorRunStatusDetailsEntity());
+merge(mergedEntity, processorEntity);
+}
+}
+
+final ProcessorsRunStatusDetailsEntity mergedEntity = new 
ProcessorsRunStatusDetailsEntity();
+mergedEntity.setRunStatusDetails(new 
ArrayList<>(scheduleSummaries.values()));
+return new NodeResponse(clientResponse, mergedEntity);
+}
+
+private void merge(final ProcessorRunStatusDetailsEntity target, final 
ProcessorRunStatusDetailsEntity additional) {
+PermissionsDtoMerger.mergePermissions(target.getPermissions(), 
additional.getPermissions());
+
+final ProcessorRunStatusDetailsDTO targetSummaryDto = 
target.getRunStatusDetails();
+final ProcessorRunStatusDetailsDTO additionalSummaryDto = 
additional.getRunStatusDetails();
+
+// If name is null, it's because of permissions, so we want to nullify 
it in the target.
+if (additionalSummaryDto.getName() == null) {

Review comment:
   Yeah I think it's probably reasonable to use that as the conditional.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific 

[GitHub] [nifi] markap14 commented on a change in pull request #4337: NIFI-7536: Fix to improve performance of updating parameters

2020-06-18 Thread GitBox


markap14 commented on a change in pull request #4337:
URL: https://github.com/apache/nifi/pull/4337#discussion_r442398328



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/ProcessorSummaryStatusEndpointMerger.java
##
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.cluster.coordination.http.endpoints;
+
+import org.apache.nifi.cluster.coordination.http.EndpointResponseMerger;
+import org.apache.nifi.cluster.manager.NodeResponse;
+import org.apache.nifi.cluster.manager.PermissionsDtoMerger;
+import org.apache.nifi.controller.status.RunStatus;
+import org.apache.nifi.web.api.dto.ProcessorScheduleSummaryDTO;
+import org.apache.nifi.web.api.entity.ProcessorScheduleSummariesEntity;
+import org.apache.nifi.web.api.entity.ProcessorScheduleSummaryEntity;
+
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public class ProcessorSummaryStatusEndpointMerger implements 
EndpointResponseMerger {
+public static final String SCHEDULE_SUMMARY_URI = 
"/nifi-api/processors/schedule-summaries/query";
+
+@Override
+public boolean canHandle(final URI uri, final String method) {
+return "POST".equalsIgnoreCase(method) && 
SCHEDULE_SUMMARY_URI.equals(uri.getPath());
+}
+
+@Override
+public NodeResponse merge(final URI uri, final String method, final 
Set<NodeResponse> successfulResponses, final Set<NodeResponse> 
problematicResponses, final NodeResponse clientResponse) {
+if (!canHandle(uri, method)) {
+throw new IllegalArgumentException("Cannot use Endpoint Mapper of 
type " + getClass().getSimpleName() + " to map responses for URI " + uri + ", 
HTTP Method " + method);
+}
+
+final ProcessorScheduleSummariesEntity responseEntity = 
clientResponse.getClientResponse().readEntity(ProcessorScheduleSummariesEntity.class);
+
+// Create mapping of Processor ID to its schedule Summary.
+final Map<String, ProcessorScheduleSummaryEntity> scheduleSummaries = 
responseEntity.getScheduleSummaries().stream()

Review comment:
   This suggestion is definitely more succinct. But I would argue less 
clear. I would also be hesitant to change it because this is a pretty common 
pattern repeated in most of the Endpoint Mergers, so I'd rather stick with the 
pattern that is common and heavily tested/utilized.
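
The suggested alternative is not visible in this archive, so purely as an illustration of the succinctness-versus-clarity trade-off being weighed here (generic code, not NiFi's), compare a single Collectors.toMap with a merge function against a spelled-out loop:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BinaryOperator;
import java.util.function.Function;
import java.util.stream.Collectors;

final class MergePatternSketch {

    // Succinct: collisions on the same id are resolved by the merge function in one expression.
    static <T> Map<String, T> succinct(final List<T> items, final Function<T, String> idOf, final BinaryOperator<T> merge) {
        return items.stream().collect(Collectors.toMap(idOf, item -> item, merge));
    }

    // Spelled out: arguably easier to step through, and closer to the pattern already
    // shared by the existing endpoint mergers.
    static <T> Map<String, T> explicit(final List<T> items, final Function<T, String> idOf, final BinaryOperator<T> merge) {
        final Map<String, T> byId = new HashMap<>();
        for (final T item : items) {
            byId.merge(idOf.apply(item), item, merge);
        }
        return byId;
    }
}
```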





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on a change in pull request #4337: NIFI-7536: Fix to improve performance of updating parameters

2020-06-18 Thread GitBox


markap14 commented on a change in pull request #4337:
URL: https://github.com/apache/nifi/pull/4337#discussion_r442395701



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/ProcessorSummaryStatusEndpointMerger.java
##
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.cluster.coordination.http.endpoints;
+
+import org.apache.nifi.cluster.coordination.http.EndpointResponseMerger;
+import org.apache.nifi.cluster.manager.NodeResponse;
+import org.apache.nifi.cluster.manager.PermissionsDtoMerger;
+import org.apache.nifi.controller.status.RunStatus;
+import org.apache.nifi.web.api.dto.ProcessorScheduleSummaryDTO;
+import org.apache.nifi.web.api.entity.ProcessorScheduleSummariesEntity;
+import org.apache.nifi.web.api.entity.ProcessorScheduleSummaryEntity;
+
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public class ProcessorSummaryStatusEndpointMerger implements 
EndpointResponseMerger {
+public static final String SCHEDULE_SUMMARY_URI = 
"/nifi-api/processors/schedule-summaries/query";
+
+@Override
+public boolean canHandle(final URI uri, final String method) {
+return "POST".equalsIgnoreCase(method) && 
SCHEDULE_SUMMARY_URI.equals(uri.getPath());
+}
+
+@Override
+public NodeResponse merge(final URI uri, final String method, final 
Set<NodeResponse> successfulResponses, final Set<NodeResponse> 
problematicResponses, final NodeResponse clientResponse) {
+if (!canHandle(uri, method)) {
+throw new IllegalArgumentException("Cannot use Endpoint Mapper of 
type " + getClass().getSimpleName() + " to map responses for URI " + uri + ", 
HTTP Method " + method);
+}
+
+final ProcessorScheduleSummariesEntity responseEntity = 
clientResponse.getClientResponse().readEntity(ProcessorScheduleSummariesEntity.class);
+
+// Create mapping of Processor ID to its schedule Summary.
+final Map<String, ProcessorScheduleSummaryEntity> scheduleSummaries = 
responseEntity.getScheduleSummaries().stream()
+.collect(Collectors.toMap(entity -> 
entity.getScheduleSummary().getId(), entity -> entity));
+
+for (final NodeResponse nodeResponse : successfulResponses) {
+final ProcessorScheduleSummariesEntity nodeResponseEntity = 
nodeResponse == clientResponse ? responseEntity :
+
nodeResponse.getClientResponse().readEntity(ProcessorScheduleSummariesEntity.class);
+
+for (final ProcessorScheduleSummaryEntity processorEntity : 
nodeResponseEntity.getScheduleSummaries()) {
+final String processorId = 
processorEntity.getScheduleSummary().getId();
+
+final ProcessorScheduleSummaryEntity mergedEntity = 
scheduleSummaries.computeIfAbsent(processorId, id -> new 
ProcessorScheduleSummaryEntity());
+merge(mergedEntity, processorEntity);
+}
+}
+
+final ProcessorScheduleSummariesEntity mergedEntity = new 
ProcessorScheduleSummariesEntity();
+mergedEntity.setScheduleSummaries(new 
ArrayList<>(scheduleSummaries.values()));
+return new NodeResponse(clientResponse, mergedEntity);
+}
+
+private void merge(final ProcessorScheduleSummaryEntity target, final 
ProcessorScheduleSummaryEntity additional) {
+PermissionsDtoMerger.mergePermissions(target.getPermissions(), 
additional.getPermissions());
+
+final ProcessorScheduleSummaryDTO targetSummaryDto = 
target.getScheduleSummary();
+final ProcessorScheduleSummaryDTO additionalSummaryDto = 
additional.getScheduleSummary();
+
+// If name is null, it's because of permissions, so we want to nullify 
it in the target.
+if (additionalSummaryDto.getName() == null) {
+targetSummaryDto.setName(null);
+}
+
+
targetSummaryDto.setActiveThreadCount(targetSummaryDto.getActiveThreadCount() + 
additionalSummaryDto.getActiveThreadCount());
+
+final String additionalRunStatus = additionalSummaryDto.getRunStatus();

Review comment:
   Yes. If any node indicates that the 

[GitHub] [nifi] markap14 commented on a change in pull request #4337: NIFI-7536: Fix to improve performance of updating parameters

2020-06-18 Thread GitBox


markap14 commented on a change in pull request #4337:
URL: https://github.com/apache/nifi/pull/4337#discussion_r442392977



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java
##
@@ -3332,6 +3336,38 @@ private ProcessorEntity createProcessorEntity(final 
ProcessorNode processor, fin
 .collect(Collectors.toSet());
 }
 
+@Override
+public ProcessorsRunStatusDetailsEntity 
getProcessorsRunStatusDetails(final Set<String> processorIds, final NiFiUser 
user) {
+final List<ProcessorRunStatusDetailsEntity> summaryEntities = 
processorIds.stream()
+.map(processorDAO::getProcessor)
+.map(processor -> createRunStatusDetailsEntity(processor, user))
+.collect(Collectors.toList());
+
+final ProcessorsRunStatusDetailsEntity summariesEntity = new 
ProcessorsRunStatusDetailsEntity();
+summariesEntity.setRunStatusDetails(summaryEntities);
+return summariesEntity;
+}
+
+private ProcessorRunStatusDetailsEntity createRunStatusDetailsEntity(final 
ProcessorNode processor, final NiFiUser user) {
+final RevisionDTO revision = 
dtoFactory.createRevisionDTO(revisionManager.getRevision(processor.getIdentifier()));
+final PermissionsDTO permissions = 
dtoFactory.createPermissionsDto(processor, user);
+final ProcessorStatus processorStatus = 
controllerFacade.getProcessorStatus(processor.getIdentifier());
+final ProcessorRunStatusDetailsDTO runStatusDetailsDto = 
dtoFactory.createProcessorRunStatusDetailsDto(processor, processorStatus);
+
+if (!Boolean.TRUE.equals(permissions.getCanRead())) {

Review comment:
   Actually, in this case I guess it doesn't matter - we know that the 
permissions will always be populated. So even though it's a Boolean, it's safe 
to assume that the value will be set.
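
For readers unfamiliar with the idiom under discussion, a small standalone illustration (not NiFi code) of why `Boolean.TRUE.equals(...)` is the null-safe way to test a boxed Boolean:

```java
final class BooleanCheckSketch {
    public static void main(final String[] args) {
        final Boolean canRead = null;

        // Null-safe: evaluates to true (null is treated like false) without unboxing.
        if (!Boolean.TRUE.equals(canRead)) {
            System.out.println("treated as not readable");
        }

        try {
            // Not null-safe: auto-unboxing a null Boolean throws at runtime.
            if (!canRead) {
                System.out.println("never reached");
            }
        } catch (final NullPointerException e) {
            System.out.println("NPE from unboxing a null Boolean");
        }
    }
}
```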





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on a change in pull request #4337: NIFI-7536: Fix to improve performance of updating parameters

2020-06-18 Thread GitBox


markap14 commented on a change in pull request #4337:
URL: https://github.com/apache/nifi/pull/4337#discussion_r442335960



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/ProcessorRunStatusDetailsEndpointMerger.java
##
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.cluster.coordination.http.endpoints;
+
+import org.apache.nifi.cluster.coordination.http.EndpointResponseMerger;
+import org.apache.nifi.cluster.manager.NodeResponse;
+import org.apache.nifi.cluster.manager.PermissionsDtoMerger;
+import org.apache.nifi.controller.status.RunStatus;
+import org.apache.nifi.web.api.dto.ProcessorRunStatusDetailsDTO;
+import org.apache.nifi.web.api.entity.ProcessorsRunStatusDetailsEntity;
+import org.apache.nifi.web.api.entity.ProcessorRunStatusDetailsEntity;
+
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public class ProcessorRunStatusDetailsEndpointMerger implements 
EndpointResponseMerger {
+public static final String SCHEDULE_SUMMARY_URI = 
"/nifi-api/processors/run-status-details/queries";
+
+@Override
+public boolean canHandle(final URI uri, final String method) {
+return "POST".equalsIgnoreCase(method) && 
SCHEDULE_SUMMARY_URI.equals(uri.getPath());
+}
+
+@Override
+public NodeResponse merge(final URI uri, final String method, final 
Set<NodeResponse> successfulResponses, final Set<NodeResponse> 
problematicResponses, final NodeResponse clientResponse) {
+if (!canHandle(uri, method)) {
+throw new IllegalArgumentException("Cannot use Endpoint Mapper of 
type " + getClass().getSimpleName() + " to map responses for URI " + uri + ", 
HTTP Method " + method);
+}
+
+final ProcessorsRunStatusDetailsEntity responseEntity = 
clientResponse.getClientResponse().readEntity(ProcessorsRunStatusDetailsEntity.class);
+
+// Create mapping of Processor ID to its schedule Summary.
+final Map<String, ProcessorRunStatusDetailsEntity> scheduleSummaries = 
responseEntity.getRunStatusDetails().stream()
+.collect(Collectors.toMap(entity -> 
entity.getRunStatusDetails().getId(), entity -> entity));
+
+for (final NodeResponse nodeResponse : successfulResponses) {
+final ProcessorsRunStatusDetailsEntity nodeResponseEntity = 
nodeResponse == clientResponse ? responseEntity :
+
nodeResponse.getClientResponse().readEntity(ProcessorsRunStatusDetailsEntity.class);
+
+for (final ProcessorRunStatusDetailsEntity processorEntity : 
nodeResponseEntity.getRunStatusDetails()) {
+final String processorId = 
processorEntity.getRunStatusDetails().getId();
+
+final ProcessorRunStatusDetailsEntity mergedEntity = 
scheduleSummaries.computeIfAbsent(processorId, id -> new 
ProcessorRunStatusDetailsEntity());
+merge(mergedEntity, processorEntity);
+}
+}
+
+final ProcessorsRunStatusDetailsEntity mergedEntity = new 
ProcessorsRunStatusDetailsEntity();
+mergedEntity.setRunStatusDetails(new 
ArrayList<>(scheduleSummaries.values()));
+return new NodeResponse(clientResponse, mergedEntity);
+}
+
+private void merge(final ProcessorRunStatusDetailsEntity target, final 
ProcessorRunStatusDetailsEntity additional) {
+PermissionsDtoMerger.mergePermissions(target.getPermissions(), 
additional.getPermissions());
+
+final ProcessorRunStatusDetailsDTO targetSummaryDto = 
target.getRunStatusDetails();
+final ProcessorRunStatusDetailsDTO additionalSummaryDto = 
additional.getRunStatusDetails();
+
+// If name is null, it's because of permissions, so we want to nullify 
it in the target.
+if (additionalSummaryDto.getName() == null) {

Review comment:
   That would be more descriptive in terms of explaining why the value would be null. However, the convention is more to merge field-by-field, and if the name were ever to become null for some other reason, this approach would probably be safer.
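
   A minimal, hypothetical illustration of the field-by-field null-propagation convention discussed above (a made-up stand-in DTO, not the class under review):

```java
// A null "name" signals that the requesting user lacked READ permission on some
// node, so the merged result clears the field rather than leaking it.
final class NameDto {
    private String name;

    String getName() {
        return name;
    }

    void setName(final String name) {
        this.name = name;
    }
}

final class NameMerger {
    static void mergeName(final NameDto target, final NameDto additional) {
        if (additional.getName() == null) {
            target.setName(null);
        }
    }
}
```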





[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #812: MINIFICPP-1247 - Enhance logging for CWEL

2020-06-18 Thread GitBox


arpadboda closed pull request #812:
URL: https://github.com/apache/nifi-minifi-cpp/pull/812


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] granthenke commented on pull request #4347: NIFI-7551 Add support for VARCHAR to Kudu NAR bundle

2020-06-18 Thread GitBox


granthenke commented on pull request #4347:
URL: https://github.com/apache/nifi/pull/4347#issuecomment-646169441


    LGTM



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-7557) Cache large/common FlowFile attributes when restoring FlowFile Repository

2020-06-18 Thread Mark Payne (Jira)
Mark Payne created NIFI-7557:


 Summary: Cache large/common FlowFile attributes when restoring 
FlowFile Repository
 Key: NIFI-7557
 URL: https://issues.apache.org/jira/browse/NIFI-7557
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne


When NiFi is restarted, it restores FlowFiles from the repository. Each 
attribute on a FlowFile is read from disk and put into a HashMap. There are 
times when a Processor will add a large attribute to every FlowFile that it 
sees, and this results in using much more heap upon NiFi restart to store 
FlowFiles than it does while NiFi is running. This is because, while running, 
the Processor holds the value of that attribute as a single String object and 
adds that same String to the HashMap of attributes on every FlowFile.

However, on restart, NiFi deserializes a byte stream to come up with the 
attribute value. As a result, each FlowFile that has that attribute value ends 
up with its own String object, even though the same value is repeated many 
times.

As a result, a huge amount of heap may be used on restart, causing NiFi to 
encounter OOME when attempting to restore the FlowFile Repository.
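
One way to picture the idea (a sketch only, not the eventual NiFi implementation): keep a small, bounded cache keyed by the decoded attribute value, so that repeated large values resolve to a single shared String instance while the repository is being restored.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: a bounded "intern" cache for attribute values read back from disk.
final class AttributeValueCache {
    private static final int MAX_ENTRIES = 10_000;

    private final Map<String, String> cache =
            new LinkedHashMap<String, String>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(final Map.Entry<String, String> eldest) {
                    return size() > MAX_ENTRIES;  // evict least-recently-used entries
                }
            };

    // Returns a previously seen String with the same content, if any, so that
    // thousands of FlowFiles can share one instance of a large repeated value.
    String intern(final String value) {
        final String existing = cache.putIfAbsent(value, value);
        return existing == null ? value : existing;
    }
}
{code}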



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 commented on a change in pull request #4337: NIFI-7536: Fix to improve performance of updating parameters

2020-06-18 Thread GitBox


markap14 commented on a change in pull request #4337:
URL: https://github.com/apache/nifi/pull/4337#discussion_r442334635



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java
##
@@ -3332,6 +3336,38 @@ private ProcessorEntity createProcessorEntity(final ProcessorNode processor, fin
         .collect(Collectors.toSet());
     }
 
+    @Override
+    public ProcessorsRunStatusDetailsEntity getProcessorsRunStatusDetails(final Set processorIds, final NiFiUser user) {
+        final List summaryEntities = processorIds.stream()
+            .map(processorDAO::getProcessor)
+            .map(processor -> createRunStatusDetailsEntity(processor, user))
+            .collect(Collectors.toList());
+
+        final ProcessorsRunStatusDetailsEntity summariesEntity = new ProcessorsRunStatusDetailsEntity();
+        summariesEntity.setRunStatusDetails(summaryEntities);
+        return summariesEntity;
+    }
+
+    private ProcessorRunStatusDetailsEntity createRunStatusDetailsEntity(final ProcessorNode processor, final NiFiUser user) {
+        final RevisionDTO revision = dtoFactory.createRevisionDTO(revisionManager.getRevision(processor.getIdentifier()));
+        final PermissionsDTO permissions = dtoFactory.createPermissionsDto(processor, user);
+        final ProcessorStatus processorStatus = controllerFacade.getProcessorStatus(processor.getIdentifier());
+        final ProcessorRunStatusDetailsDTO runStatusDetailsDto = dtoFactory.createProcessorRunStatusDetailsDto(processor, processorStatus);
+
+        if (!Boolean.TRUE.equals(permissions.getCanRead())) {

Review comment:
   @tpalfy DTO objects in NiFi always use Objects rather than primitives because a primitive would default to `false` and we don't want that - we want the exact value that was set, including `null` potentially. Because it can be `null`, we cannot rely on auto-unboxing, as it could throw an NPE.
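
   A small, compilable illustration of that point (not NiFi code):

```java
public class NullableBooleanDemo {
    public static void main(String[] args) {
        Boolean canRead = null;  // e.g. a DTO field that was never populated

        // Safe: Boolean.TRUE.equals treats null as "not readable" without throwing.
        boolean safe = Boolean.TRUE.equals(canRead);
        System.out.println(safe);  // prints "false"

        // Unsafe: auto-unboxing a null Boolean throws NullPointerException.
        // boolean unsafe = canRead;
    }
}
```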





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7553) PutFTP enhancement for iSeries FTP

2020-06-18 Thread Douglas Korinke (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139419#comment-17139419
 ] 

Douglas Korinke commented on NIFI-7553:
---

Here was my original SO question with more details. Happy to bring them over 
here if helpful:

[https://stackoverflow.com/questions/62315916/apache-nifi-put-file-on-iseries-ftp]

> PutFTP enhancement for iSeries FTP
> --
>
> Key: NIFI-7553
> URL: https://issues.apache.org/jira/browse/NIFI-7553
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
> Environment: RHEL 7
>Reporter: Douglas Korinke
>Priority: Major
>  Labels: PutFtp, newbie
> Attachments: image-2020-06-17-19-26-51-942.png, 
> image-2020-06-17-19-26-55-110.png
>
>
> I am attempting to perform a simple proof of concept, to get a CSV from an 
> SFTP and put it on an iSeries (AS400) integrated file system (FTP).
> The iSeries has two flavors of file structure, namefmt 0 for the legacy 
> library/physical file nomenclature, and namefmt 1 which allows the user to 
> interact with a typical directory/file structure.
> The issue I'm having is that in order for me to change directory to an 
> IFS directory, I first have to issue the command QUOTE SITE NAMEFMT 1. I 
> tried using pre.cmd._ but realized that it runs before the PUT and not before 
> the CD.
> A second way I attempted to tackle this was to use //home/dkorinke, which 
> usually tells the iSeries you want an IFS directory:
> !image-2020-06-17-19-26-55-110.png!
> When I used this in the PutFTP processor, I get the following:
> !image-2020-06-17-19-26-51-942.png!
> Is there any way to execute the SITE NAMEFMT 1 command before the CD in a 
> PutFTP process?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] xijianlv closed pull request #1913: NIFI-3603:Localization needed

2020-06-18 Thread GitBox


xijianlv closed pull request #1913:
URL: https://github.com/apache/nifi/pull/1913


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-7556) NullPointerException when changing version of process group with versioned child process group

2020-06-18 Thread Exidex (Jira)
Exidex created NIFI-7556:


 Summary: NullPointerException when changing version of process 
group with versioned child process group 
 Key: NIFI-7556
 URL: https://issues.apache.org/jira/browse/NIFI-7556
 Project: Apache NiFi
  Issue Type: Bug
  Components: Flow Versioning
Affects Versions: 1.11.4, 1.11.3
Reporter: Exidex


After adding a versioned child process group to a process group and pushing it to 
NiFi Registry, I am unable to change the version of other instances of the same 
process group because of a NullPointerException. Removing the child process group 
resolved the issue.

Seems like this is related to 
[NIFI-6985|https://issues.apache.org/jira/browse/NIFI-6985]

{code:java}
2020-06-18 12:14:00,712 ERROR [Version Control Update Thread-1] 
org.apache.nifi.web.api.VersionsResource Failed to update flow to new version
java.lang.NullPointerException: null
at 
org.apache.nifi.groups.StandardProcessGroup.addMissingParameters(StandardProcessGroup.java:4183)
at 
org.apache.nifi.groups.StandardProcessGroup.updateParameterContext(StandardProcessGroup.java:4234)
at 
org.apache.nifi.groups.StandardProcessGroup.updateProcessGroup(StandardProcessGroup.java:3754)
at 
org.apache.nifi.groups.StandardProcessGroup.updateProcessGroup(StandardProcessGroup.java:3881)
at 
org.apache.nifi.groups.StandardProcessGroup.updateFlow(StandardProcessGroup.java:3653)
at 
org.apache.nifi.web.dao.impl.StandardProcessGroupDAO.updateProcessGroupFlow(StandardProcessGroupDAO.java:408)
at 
org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$FastClassBySpringCGLIB$$10a99b47.invoke()
at 
org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at 
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:736)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at 
org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:84)
at 
org.apache.nifi.audit.ProcessGroupAuditor.updateProcessGroupFlowAdvice(ProcessGroupAuditor.java:313)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:627)
at 
org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:616)
at 
org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:671)
at 
org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$EnhancerBySpringCGLIB$$c78ba84e.updateProcessGroupFlow()
at 
org.apache.nifi.web.StandardNiFiServiceFacade$14.update(StandardNiFiServiceFacade.java:5044)
at 
org.apache.nifi.web.revision.NaiveRevisionManager.updateRevision(NaiveRevisionManager.java:117)
at 
org.apache.nifi.web.StandardNiFiServiceFacade.updateProcessGroupContents(StandardNiFiServiceFacade.java:5040)
at 
org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
at 
org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at 
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:736)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at 
org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:84)
at 
org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithWriteLock(NiFiServiceFacadeLock.java:179)
at 
org.apache.nifi.web.NiFiServiceFacadeLock.updateLock(NiFiServiceFacadeLock.java:66)
at sun.reflect.GeneratedMethodAccessor703.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 

[jira] [Assigned] (NIFI-7255) nifi.properties configuration change can result in duplicate nodes listed in Cluster UI

2020-06-18 Thread Simon Bence (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Bence reassigned NIFI-7255:
-

Assignee: Simon Bence

> nifi.properties configuration change can result in duplicate nodes listed in 
> Cluster UI
> ---
>
> Key: NIFI-7255
> URL: https://issues.apache.org/jira/browse/NIFI-7255
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.7.0, 1.8.0, 1.9.0, 1.10.0
>Reporter: Matthew Clarke
>Assignee: Simon Bence
>Priority: Major
>
> The elements that go into coming up with the "Node Identifier", which is then 
> stored in local state on a NiFi node, include:
> /** the unique identifier for the node */
> private final String id;
> /** the IP or hostname to use for sending requests to the node's external interface */
> private final String apiAddress;
> /** the port to use for sending requests to the node's external interface; this can be the HTTP API port or HTTPS API port depending on whether //TODO: . */
> private final int apiPort;
> /** the IP or hostname to use for sending requests to the node's internal interface */
> private final String socketAddress;
> /** the port to use for sending requests to the node's internal interface */
> private final int socketPort;
> /** the IP or hostname to use for sending FlowFiles when load balancing a connection */
> private final String loadBalanceAddress;
> /** the port to use for sending FlowFiles when load balancing a connection */
> private final int loadBalancePort;
> /** the IP or hostname that external clients should use to communicate with this node via Site-to-Site */
> private final String siteToSiteAddress;
> /** the port that external clients should use to communicate with this node via Site-to-Site RAW Socket protocol */
> private final Integer siteToSitePort;
> /** the port that external clients should use to communicate with this node via Site-to-Site HTTP protocol; this can be the HTTP API port or HTTPS API port depending on whether siteToSiteSecure or not */
> private final Integer siteToSiteHttpApiPort;
> /** whether or not site-to-site communications with this node are secure */
> private final Boolean siteToSiteSecure;
> private final Set nodeIdentities;
> With the following fields being used to determine equality:
> apiAddress
> apiPort
> socketAddress
> socketPort
> If, for example, the apiPort is changed by switching from 8080 (for http) to 
> 8443 (for https), the node will show up twice in the cluster UI 
> (hostname:8443 --> connected and hostname:8080 --> disconnected). Having 
> these disconnected nodes will prevent changes to the UI. Worse yet is that 
> ZK may report  as the elected Cluster Coordinator and end up having 
> both the 8080 and the 8443 node marked as the cluster coordinator. 
> Then you may not even be able to access the cluster because requests fail to 
> replicate to the Cluster Coordinator because it is not connected.
> Resolving this issue requires users to shut down NiFi, delete the local state 
> directory contents, and restart NiFi.
> The downside to this resolution is that any local state retained for NiFi 
> components (for example processors) is lost as well.
> The suggested solution here is for NiFi to retain the current node identifier 
> field configuration values. If on restart the loaded configuration shows any 
> change to those values, NiFi should clear out the previously retained node IDs 
> and create all new node IDs.
> It might also make sense to move the stored node identifiers out of local 
> state, to make manual removal of this information possible without affecting 
> state stored by components found in the flow.xml.
>  
> The following Jira only addressed one specific configuration change that can 
> result in this issue:
> https://jira.apache.org/jira/browse/NIFI-5672
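
For reference, a rough sketch (not NiFi's actual NodeIdentifier class; names are illustrative) of the equality rule described above, under which changing e.g. the API port from 8080 to 8443 produces a "different" node:

{code:java}
import java.util.Objects;

// Hypothetical stand-in for the node identifier, reduced to the four fields
// that the ticket says participate in the equality check.
final class NodeId {
    final String apiAddress;
    final int apiPort;
    final String socketAddress;
    final int socketPort;

    NodeId(final String apiAddress, final int apiPort, final String socketAddress, final int socketPort) {
        this.apiAddress = apiAddress;
        this.apiPort = apiPort;
        this.socketAddress = socketAddress;
        this.socketPort = socketPort;
    }

    boolean logicallyEquals(final NodeId other) {
        return Objects.equals(apiAddress, other.apiAddress)
                && apiPort == other.apiPort
                && Objects.equals(socketAddress, other.socketAddress)
                && socketPort == other.socketPort;
    }
}
{code}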



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7549) Adding Hazelcast based implementation for DistributedMapCacheClient

2020-06-18 Thread Simon Bence (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Bence updated NIFI-7549:
--
Status: Patch Available  (was: In Progress)

https://github.com/apache/nifi/pull/4349

> Adding Hazelcast based implementation for DistributedMapCacheClient
> ---
>
> Key: NIFI-7549
> URL: https://issues.apache.org/jira/browse/NIFI-7549
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Adding Hazelcast support in the same fashion as in the case of Redis, for 
> example, would be useful. Even further: in order to make it easier to use, 
> embedded Hazelcast support should be added, which makes it unnecessary to 
> start a Hazelcast cluster manually.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-18 Thread GitBox


szaszm commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r441597164



##
File path: libminifi/include/core/PropertyValue.h
##
@@ -202,14 +196,50 @@ class PropertyValue : public state::response::ValueNode {
   auto operator=(const std::string ) -> typename std::enable_if<
   std::is_same::value ||
   std::is_same::value, PropertyValue&>::type {
-value_ = std::make_shared(ref);
-type_id = value_->getTypeIndex();
-return *this;
+validator_.clearValidationResult();
+return WithAssignmentGuard(ref, [&] () -> PropertyValue& {
+  value_ = std::make_shared(ref);
+  type_id = value_->getTypeIndex();
+  return *this;
+});
+  }
+
+ private:
+  template
+  T convertImpl(const char* const type_name) const {
+if (!isValueUsable()) {
+  throw utils::InvalidValueException("Cannot convert invalid value");
+}
+T res;
+if (value_->convertValue(res)) {
+  return res;
+}
+throw utils::ConversionException(std::string("Invalid conversion to ") + 
type_name + " for " + value_->getStringValue());
+  }

Review comment:
   This could use `core::getClassName`. I would also make this public 
and encourage its use over the implicit conversion operators, with a long term 
goal of moving to explicit conversions.

##
File path: libminifi/include/core/PropertyValue.h
##
@@ -75,64 +77,52 @@ class PropertyValue : public state::response::ValueNode {
   }
 
   std::shared_ptr getValidator() const {
-return validator_;
+return *validator_;
   }
 
   ValidationResult validate(const std::string ) const {
-if (validator_) {
-  return validator_->validate(subject, getValue());
-} else {
+auto cachedResult = validator_.isValid();
+if (cachedResult == CachedValueValidator::Result::SUCCESS) {
   return ValidationResult::Builder::createBuilder().isValid(true).build();
 }
+if (cachedResult == CachedValueValidator::Result::FAILURE) {
+  return 
ValidationResult::Builder::createBuilder().withSubject(subject).withInput(getValue()->getStringValue()).isValid(false).build();
+}
+auto result = validator_->validate(subject, getValue());
+validator_.setValidationResult(result.valid());
+return result;

Review comment:
   This logic should move to `CachedValueValidator` and the caching 
behavior should be transparent IMO.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-7555) LDAP user group provider throws PartialResults Exception when referral strategy = IGNORE

2020-06-18 Thread Melissa Clark (Jira)
Melissa Clark created NIFI-7555:
---

 Summary: LDAP user group provider throws PartialResults Exception 
when referral strategy = IGNORE
 Key: NIFI-7555
 URL: https://issues.apache.org/jira/browse/NIFI-7555
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.11.4
 Environment: AWS Instance running Ubuntu 18.04
LDAPS user group provider
Reporter: Melissa Clark


We are using LDAPS for authentication and for user-group provisioning.

At the level I need to search the LDAP tree, two referrals are returned. If I 
tell NiFi to IGNORE referrals via the "Referral Strategy" option in 
authorizers.xml, NiFi throws a "PartialResults" exception and stops.

This is unexpected as, by saying that referrals should be ignored, I have 
indicated that I am happy with partial results.
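
For context, the expected behaviour roughly corresponds to the following Spring LDAP setting (a hedged sketch with made-up connection details, not the provider's actual code):

{code:java}
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.LdapContextSource;

public class IgnoreReferralsExample {
    public static void main(String[] args) {
        // Hypothetical connection details for illustration only.
        LdapContextSource contextSource = new LdapContextSource();
        contextSource.setUrl("ldaps://ldap.example.com:636");
        contextSource.setBase("dc=example,dc=com");
        contextSource.afterPropertiesSet();

        LdapTemplate template = new LdapTemplate(contextSource);
        // When referrals are ignored, partial results are expected:
        // do not fail the search with a PartialResultException.
        template.setIgnorePartialResultException(true);
    }
}
{code}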



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (MINIFICPP-1264) Add getter interface helper

2020-06-18 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1264:

Description: 
*Background:*

In our codebase we have quite a lot of getter functions that follow this 
signature:
{code:c++|title=Current Interface}
void getMemberField(Type& argToModify);
bool getNullableMemberField(OtherType& argToModify);
{code}
Most developers would much rather use an interface that looks like this though:
{code:c++|title=Proposed Interface}
Type& getMemberField();
const Type& getMemberField();
optional getNullableMemberField();
{code}
The problem with the former version is that in some cases it makes the code 
unnecessarily verbose. An example from our codebase:
{code:c++|title=Example}
 # Version 1
 utils::Identifier sourceUuid;
 source->getUUID(sourceUuid);
 connection->setSourceUUID(sourceUuid);
 # Version 2
 connection->setSourceUUID(source->getUUID(sourceUuid));
{code}
Unfortunately, more often than not this is the exact pattern the getters appear in.

*Proposal:*
 As we do not want to break API, we cannot change the getters. However, we can 
resort to implementing a helper utility. A godbolt implementation for a 
convenience function is available here:

[[Proof of 
Concept]|https://godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAM1QDsCBlZAQwBtMQBGAFlICsupVs1qhkAUgBMAISnTSAZ0ztkBPHUqZa6AMKpWAVwC2tLgDZSW9ABk8tTADljAI0zEuAZlIAHVAsLqtHqGJuY%2BfgF0tvZORq7unF5KKmp0DATMxATBxqacFsmYqoHpmQTRji5unooZWTmh%2BbVlFbHxngCUiqgGxMgcAORSHnbIhlgA1OIeOgoExHbA09jiAAwAgsOj45hTM%2BpzxJjMRstrm5IjtGMGk9M6BmqshACeZxtb1zt7OkaYRiQ3h4Vh9Lttbrt7qhvKlaGx3hcrjc7jMAG5FIjEBGfZGQmZUAzXWHw4HnHHfe4EF7eTAAfXmzEICmxG0OBlUEwA8qx0C8dMIFMyPgB2WQbCYSiao1B4dATYCYAjrCB2AhSMwTTLADpTUWa4jAPYAEU102kuqN50lE1VmuNE04kjNZOFlo8Ys2GwI/28wm9PzGzEFEwAYssJmyOesDQAVamYZ1en1%2BvE6Kk0uF/CYAJUVpAm6a0J12/IU%2BcLmcwADoaxNo8BmaTWfN2QQ67H4/dc22IKWQCAAFQdCD1hQ1qsdcMuj3W71GX3Mf33fwALzpbYcCOtkbbCoItK1upn1slBn8ogL8ftFeLEYI6H7BAMvrpyn%2BWjVMwc%2BbmD5AT5fe5R3HM5gUfTt3StSVxFdRNNlgyCPiTecUx%2BG8s05Zw%2BHLeNK1DQkER3HNFV6WgADV4RFY8JTPRZ22AXMqGvXDb3rOMaXuEMCLA/9k0XXY9wPA17lWZZwI4xD1mtWiL3rZiM1vX9%2ByOAF0VpI4qDcLR%2BiAg1GLE/8II9KCJW7UiKNYCAdWmE0sHYb04OtMziHItgIEwvh1QmVAsPzLjaAmAlaB1EBvKw2kIHVHy%2BC6QLCQioLrNFGDLXFaC0olaE3EXEh6KskyjwK605K1RyTwlJSQDsaUAGtMGoeL82i2l8y1SdJPKiUjifFz9SWDroNgjKJm8BZUX4kACo8gcwr4Wkyolfy4toebJJSuDzltIxGVoKzCuG7leX5IMFALTA5gWiZutI4ietc1gYPNb05nzdVDr5AUFH7PcpPWtahs9QGySQoGQeB0GIfBgYulYEABgAVgGUhTAGVYkdQOGdDkOQIx6PpIUuTgkYIOG0Y6LoapAbhuCrSRuAADjMeGPAATm4VZJHpjx6bZoQ4e4JGUbR0gMYGJGvtWUgSdR6HSDgWAYEQFBUHnPB2DICgIDQVX1ZQYRRE4VYjdIKg1e9YgvogZw4aJ0hnDsTIXhtpHtb%2BehOVoVgnZl0gsG20R2FJpH8COYp0S%2Bn3MAADyKR5BmF1VlGdoQ8GcYhHb0LAg6lhYjGdroaHoJg2A4Hh%2BEEfWxGxmQU%2BcL7YErEAG1YUh0XcZ5vXJkWYUCCOAFpfxsiQZDkThhQmPvOQ8cW32KDQICsBo8i8KxWiqdwLF8fxYSXmot8iWg17iaoCln2FSnqfRchqQo59oC/yjsSpj435pL5CZe38fmJ15AMwugUHjfoXAYZw0RsjbOoso6Mz7mYbgExAwXkNlWVYKCJgQFwIQXKwxOD5j0DrNwUxCY6ixiPGQxMg5d0pvDSWsMBgC1IHnOBKD6aOnhvDYUwpDarDgcKCBPtRbixAJLaWZM5aKwgEgHoBBvCPHIJQbW3g1bVFwZgfAmJBCF0YCwQO9NSAAHd07eHznzBGgtIFwxwRMfRhAEATGgWYWB8DEGGmQag4WojoZdAQMcLA7grKmIYXnDw8Mqws04PTYU1NJCcxZizEJvAhbozhkIkRlCKYgHYVWYU8NEjcDMKseGLMuFmFwXQ6e/DhaCKluk0xkhzECJSTUmWXc27%2BA0NwIAA%3D%3D]

We can introduce this and simplify later usages of getters.

With this, the syntax would simplify to:
{code:c++|title=Using a helper}
 connection->setSourceUUID(ReturnVal(source, ::getUUID));
{code}

  was:
*Background:*

In our codebase we have quite a lot of getter functions that follow this 
signature:
{code:c++|title=Current Interface}
void getMemberField(Type& argToModify);
bool getNullableMemberField(OtherType& argToModify);
{code}
Most developers would much rather use an interface that looks like this though:
{code:c++|title=Proposed Interface}
Type& getMemberField();
const Type& getMemberField();
optional getNullableMemberField();
{code}
The problem with the former version is that in some cases it makes the code 
unnecessarily verbose. An example from our codebase:
{code:c++|title=Example}
 # Version 1
 utils::Identifier sourceUuid;
 source->getUUID(sourceUuid);
 connection->setSourceUUID(sourceUuid);
 # Version 2
 connection->setSourceUUID(source->getUUID(sourceUuid));
{code}
Unfortunately, more often than not this is the exact pattern the getters appear in.

*Proposal:*
 As we do not want to break API, we cannot change the getters. However, we can 
resort to implementing a helper utility. A godbolt implementation for a 
convenience function is available here:

[[Proof of 

[jira] [Updated] (MINIFICPP-1264) Add getter interface helper

2020-06-18 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1264:

Description: 
*Background:*

In our codebase we have quite a lot of getter functions that follow this 
signature:
{code:c++|title=Current Interface}
void getMemberField(Type& argToModify);
bool getNullableMemberField(OtherType& argToModify);
{code}
Most developers would much rather use an interface that looks like this though:
{code:c++|title=Proposed Interface}
Type& getMemberField();
const Type& getMemberField();
optional getNullableMemberField();
{code}
The problem with the former version is that in some cases it makes the code 
unnecessarily verbose. An example from our codebase:
{code:c++|title=Example}
 # Version 1
 utils::Identifier sourceUuid;
 source->getUUID(sourceUuid);
 connection->setSourceUUID(sourceUuid);
 # Version 2
 connection->setSourceUUID(source->getUUID(sourceUuid));
{code}
Unfortunately, more often than not this is the exact pattern the getters appear in.

*Proposal:*
 As we do not want to break API, we cannot change the getters. However, we can 
resort to implementing a helper utility. A godbolt implementation for a 
convenience function is available here:

[[Proof of 
Concept]|https://godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAM1QDsCBlZAQwBtMQBGAFlICsupVs1qhkAUgBMAISnTSAZ0ztkBPHUqZa6AMKpWAVwC2tQVvQAZPLUwA5YwCNMxENwBspAA6oFhdbT1DE0FvXzU6Kxt7IycXd0VlTFV/BgJmYgJA41NOBJVw2lT0gki7R2dXDwU0jKzg3Ori0ujYyoBKRVQDYmQOAHIpAGZrZEMsAGpxQZ1q4mtgKexxAAYAQSGRscxJ6fVZzGYjRZX1yWHaUYMJqZ0DNVZCAE9jtY2LrZ2dI0wjEmfBpavM6bK7bG6oTwFNgvU7nS7XaYANySRGIMLe8LB0yoBguUNY6OB71BnwIj08mAA%2BgRiMxCAp0WtZgZVOMAPKsdCPHTCBQM14Adlka3GovGiNQeHQ42AmAIqwg1gIUjc43SwDakyFauIwB2ABE1VNpFr9ScxeMlWqDeNOJJjeaxUqAFTjBw22gGVisSFowbC9YCs3%2Bk4nAg/TzCcOfUbMPnjABii3GzNZq11ABVyZgHWtw0ZI8xozcyRTaIdtgAlOWkcalrQV8Y8hS1%2Bvl74AOi743TwAZAJOqYIPcz2Zu1eHEGbIBAzraEF7Ci7HbaydDQsdovzheL018AC8qcPbDCLUOZXLKeqtQGLRaDL5RHXsza243qugZwQDJGqYlvvQNy2LWH5fj%2B7A3Iuy7HACX5jiGIpiuIQa5oGwYBmGEZRliOhvt87IOHwrbZu22wJriMLnhO3S0AAatCgq3mKD7zCOwDVlQr4kY2vZZhSNzkbQixflhRbbLKBBXrqNzLMJID1qh96PnqvZcWW74EJ%2BIDED8qDIpSOlUM4Wi9JBuocXJCkIesiGitRxB0WwECalMhpYOw4aKWK9mOawEBsoRKrjKghG1oJ4w4rQmogMFhGUhAKohXwHQRbi8WRS5G4obZkw5apEnOZuN5FRaqnql5d4pppM7WBKADWmDUGltZJZStbqqu1mVaKOnfg5OoLF1SHZasFqeHMiJiSARUBXwrqtRVia4qltCUqhyHoaGaxWkYdK0M5xU5RyXI8nGCh1pg1SLb1NHjD59EEtq4bVLWKrHdyvIKDOEmjRtHYFZ1GEjVtNmgyD4OvJDYNQyDfQdKwIB9AArH0pCmH0yyo6giM6HIcgpl0PRgmcnCowQiOY20HR1a43AdpI3AABxuEjgwAJzcMskiM4MjMc0IiPcKj6OY6Q2N9KjX3LKQ5MY3DpBwLAMCICgqAFng7BkBQEBoOrmsoMIoicMsJukFQGvhsQX0QA4iOk6QDjWOkjx26jusAQQbK0KwLty6QWC7aI7AU6j%2BA6ckyJfX7mAAB5JHc/Si0qyiu0IeAOLSxDchgidk3MRiux0ND0EwbAcDw/CCIbYh4zIacOF9sCkSAfasKQyIuA84ZU2LkL%2BFHAC0H6uRIMhyJwArjAPbKDJLiTJBoEDmHUOSkOYzTlC4uShH4dAryEPi77QG8xBUDTzwURS1Po2SCEo%2BQpDUJTWGUp9b4oT/7w0T8n60nAdAoQmvQuDw0RijNGIcxaIxjszAebhuDjFjE%2BY2HZlioPGBAXAhASCTBJrWPQetnC4MGP/JstdpBkxDj3GmSNpYIz6ELUgBd4GoMZnaJGSMBQCmNsseBAoIF%2B3FpLEA0tZaUwVsrCASAugEE8HccglBdaeA1mfNe%2BBUSCGLowFgwdGakAAO60k8IXAWyNhaQPFkMTg4x9GEAQOMGBbg4EIKQXqFBaDRZiLhh0BABwsAuGcqYxhBdBhIw7GzTgjMBTcG4JIbmbM2ahN4CLLGiNhGiKodTEAHCOwCiRpwQY7hlhIzZtwtwuR6GzwEaLIRMtMmmMkOYwRaS6lyx7h3XwGhuBAA%3D%3D%3D]

We can introduce this and simplify later usages of getters.

With this, the syntax would simplify to:
{code:c++|title=Using a helper}
 connection->setSourceUUID(ReturnVal(source, ::getUUID).get());
{code}

  was:
*Background:*

In our codebase we have quite a lot of getter functions that follow this 
signature:
{code:c++|title=Current Interface}
void getMemberField(Type& argToModify);
bool getNullableMemberField(OtherType& argToModify);
{code}
Most developers would much rather use an interface that looks like this though:
{code:c++|title=Proposed Interface}
Type& getMemberField();
const Type& getMemberField();
optional getNullableMemberField();
{code}
The problem with the former version is that in some cases it makes the code 
unnecessarily verbose. An example from our codebase:
{code:c++|title=Example}
 # Version 1
 utils::Identifier sourceUuid;
 source->getUUID(sourceUuid);
 connection->setSourceUUID(sourceUuid);
 # Version 2
 connection->setSourceUUID(source->getUUID(sourceUuid));
{code}
Unfortunately, more often than not this is the exact pattern the getters appear in.

*Proposal:*
 A godbolt implementation for a convenience function is available here:

[[Proof of 

[GitHub] [nifi] simonbence opened a new pull request #4349: NIFI-7549 Adding Hazelcast based DistributedMapCacheClient support

2020-06-18 Thread GitBox


simonbence opened a new pull request #4349:
URL: https://github.com/apache/nifi/pull/4349


   [NIFI-7549](https://issues.apache.org/jira/browse/NIFI-7549)
   
   The PR contains my proposal for Hazelcast support for DistributedMapCacheClient. In general, I followed the patterns I found in the existing implementations; for cases that were not explicitly documented, the behaviour follows those implementations, mainly the ones added together with the feature itself (I considered them the most relevant and accurate implementations).
   
   As for the organisation of the implementation, I split the feature into three "layers", and the package structure follows this as well. At the bottom there is the HazelcastCache and its implementation. This layer is responsible for communicating directly with Hazelcast (via a provided connection) and hiding the details of the underlying data structure. The current implementation is based on IMap, but there is the possibility to change or extend this. Also, the map-like data structure's interface changed heavily between Hazelcast 3.x and 4.x; in case support is needed for older versions, wrapping the logic could help to avoid sprawl of the changes.
   
   The layer above is the "cache manager" (HazelcastCacheManager). This is responsible for creating the cache instances and maintaining the connection. Currently there are two implementations: one which starts an embedded Hazelcast for easy usage and one which connects to a Hazelcast cluster running outside NiFi. The embedded one provides limited configuration options, but it can serve effectively as a local cache. The "standalone" one can join any non-enterprise Hazelcast cluster. Note: I looked into how to connect to a secured Hazelcast, but as I found, that is part of the enterprise package; for now it was not my intent to support that. This layer should hide every Hazelcast-specific interface or implementation.
   
   The top layer is the actual DistributedMapCacheClient implementation. It depends on both of the lower layers, as the manager is needed for acquiring the cache it works with. All the NiFi-specific logic lives here. AtomicDistributedMapCacheClient methods are supported; the revision handling that comes with it applies to all entries, with a long-based version attached to every entry.
   
   Please share your thoughts on the proposal, I hope it will be useful for the community!
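
   Not the code in this PR, but a rough sketch (with made-up class and method names, assuming the Hazelcast 4.x API) of the kind of bottom "cache" layer described above: it hides the IMap behind a tiny facade so the data structure could be swapped without touching the layers built on top of it.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

// Sketch only: a minimal cache facade over a Hazelcast IMap.
final class SimpleHazelcastCache {
    private final IMap<String, byte[]> map;

    SimpleHazelcastCache(final HazelcastInstance instance, final String cacheName) {
        this.map = instance.getMap(cacheName);
    }

    // Returns true if the key was absent and has now been stored.
    boolean putIfAbsent(final String key, final byte[] value) {
        return map.putIfAbsent(key, value) == null;
    }

    byte[] get(final String key) {
        return map.get(key);
    }

    boolean remove(final String key) {
        return map.remove(key) != null;
    }

    public static void main(final String[] args) {
        // Embedded member, analogous to the "embedded" cache manager flavour described above.
        final HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        final SimpleHazelcastCache cache = new SimpleHazelcastCache(hz, "demo-cache");
        cache.putIfAbsent("greeting", "hello".getBytes());
        System.out.println(new String(cache.get("greeting")));
        hz.shutdown();
    }
}
```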
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (MINIFICPP-1264) Add getter interface helper

2020-06-18 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1264:

Summary: Add getter interface helper  (was: Add getter-setter interface 
helper)

> Add getter interface helper
> ---
>
> Key: MINIFICPP-1264
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1264
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Adam Hunyadi
>Priority: Minor
> Fix For: 0.8.0
>
>
> *Background:*
> In our codebase we have quite a lot of getter functions that follow this 
> signature:
> {code:c++|title=Current Interface}
> void getMemberField(Type& argToModify);
> bool getNullableMemberField(OtherType& argToModify);
> {code}
> Most developers would much rather use an interface that looks like this 
> though:
> {code:c++|title=Proposed Interface}
> Type& getMemberField();
> const Type& getMemberField();
> optional getNullableMemberField();
> {code}
> The problem with the former version is that in some cases it makes the code 
> unnecessarily verbose. An example from our codebase:
> {code:c++|title=Example}
>  # Version 1
>  utils::Identifier sourceUuid;
>  source->getUUID(sourceUuid);
>  connection->setSourceUUID(sourceUuid);
>  # Version 2
>  connection->setSourceUUID(source->getUUID(sourceUuid));
> {code}
> Unfortunately, more often than not this is the exact pattern the getters appear in.
> *Proposal:*
>  A godbolt implementation for a convenience function is available here:
> [[Proof of 
> Concept]|https://godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAM1QDsCBlZAQwBtMQBGAFlICsupVs1qhkAUgBMAISnTSAZ0ztkBPHUqZa6AMKpWAVwC2tQVvQAZPLUwA5YwCNMxENwBspAA6oFhdbT1DE0FvXzU6Kxt7IycXd0VlTFV/BgJmYgJA41NOBJVw2lT0gki7R2dXDwU0jKzg3Ori0ujYyoBKRVQDYmQOAHIpAGZrZEMsAGpxQZ1q4mtgKexxAAYAQSGRscxJ6fVZzGYjRZX1yWHaUYMJqZ0DNVZCAE9jtY2LrZ2dI0wjEmfBpavM6bK7bG6oTwFNgvU7nS7XaYANySRGIMLe8LB0yoBguUNY6OB71BnwIj08mAA%2BgRiMxCAp0WtZgZVOMAPKsdCPHTCBQM14Adlka3GovGiNQeHQ42AmAIqwg1gIUjc43SwDakyFauIwB2ABE1VNpFr9ScxeMlWqDeNOJJjeaxUqAFTjBw22gGVisSFowbC9YCs3%2Bk4nAg/TzCcOfUbMPnjABii3GzNZq11ABVyZgHWtw0ZI8xozcyRTaIdtgAlOWkcalrQV8Y8hS1%2Bvl74AOi743TwAZAJOqYIPcz2Zu1eHEGbIBAzraEF7Ci7HbaydDQsdovzheL018AC8qcPbDCLUOZXLKeqtQGLRaDL5RHXsza243qugZwQDJGqYlvvQNy2LWH5fj%2B7A3Iuy7HACX5jiGIpiuIQa5oGwYBmGEZRliOhvt87IOHwrbZu22wJriMLnhO3S0AAatCgq3mKD7zCOwDVlQr4kY2vZZhSNzkbQixflhRbbLKBBXrqNzLMJID1qh96PnqvZcWW74EJ%2BIDED8qDIpSOlUM4Wi9JBuocXJCkIesiGitRxB0WwECalMhpYOw4aKWK9mOawEBsoRKrjKghG1oJ4w4rQmogMFhGUhAKohXwHQRbi8WRS5G4obZkw5apEnOZuN5FRaqnql5d4pppM7WBKADWmDUGltZJZStbqqu1mVaKOnfg5OoLF1SHZasFqeHMiJiSARUBXwrqtRVia4qltCUqhyHoaGaxWkYdK0M5xU5RyXI8nGCh1pg1SLb1NHjD59EEtq4bVLWKrHdyvIKDOEmjRtHYFZ1GEjVtNmgyD4OvJDYNQyDfQdKwIB9AArH0pCmH0yyo6giM6HIcgpl0PRgmcnCowQiOY20HR1a43AdpI3AABxuEjgwAJzcMskiM4MjMc0IiPcKj6OY6Q2N9KjX3LKQ5MY3DpBwLAMCICgqAFng7BkBQEBoOrmsoMIoicMsJukFQGvhsQX0QA4iOk6QDjWOkjx26jusAQQbK0KwLty6QWC7aI7AU6j%2BA6ckyJfX7mAAB5JHc/Si0qyiu0IeAOLSxDchgidk3MRiux0ND0EwbAcDw/CCIbYh4zIacOF9sCkSAfasKQyIuA84ZU2LkL%2BFHAC0H6uRIMhyJwArjAPbKDJLiTJBoEDmHUOSkOYzTlC4uShH4dAryEPi77QG8xBUDTzwURS1Po2SCEo%2BQpDUJTWGUp9b4oT/7w0T8n60nAdAoQmvQuDw0RijNGIcxaIxjszAebhuDjFjE%2BY2HZlioPGBAXAhASCTBJrWPQetnC4MGP/JstdpBkxDj3GmSNpYIz6ELUgBd4GoMZnaJGSMBQCmNsseBAoIF%2B3FpLEA0tZaUwVsrCASAugEE8HccglBdaeA1mfNe%2BBUSCGLowFgwdGakAAO60k8IXAWyNhaQPFkMTg4x9GEAQOMGBbg4EIKQXqFBaDRZiLhh0BABwsAuGcqYxhBdBhIw7GzTgjMBTcG4JIbmbM2ahN4CLLGiNhGiKodTEAHCOwCiRpwQY7hlhIzZtwtwuR6GzwEaLIRMtMmmMkOYwRaS6lyx7h3XwGhuBAA%3D%3D%3D]
> We can introduce this and simplify later usages of getters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-18 Thread GitBox


adamdebreceni commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r442078583



##
File path: libminifi/include/core/CachedValueValidator.h
##
@@ -0,0 +1,103 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef NIFI_MINIFI_CPP_CACHEDVALUEVALIDATOR_H
+#define NIFI_MINIFI_CPP_CACHEDVALUEVALIDATOR_H
+
+#include "PropertyValidation.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+
+class CachedValueValidator{
+ public:
+  enum class Result {
+FAILURE,
+SUCCESS,
+RECOMPUTE
+  };
+
+  CachedValueValidator() = default;
+  CachedValueValidator(const CachedValueValidator& other) : 
validator_(other.validator_) {}
+  CachedValueValidator(CachedValueValidator&& other) noexcept : 
validator_(std::move(other.validator_)) {}
+  CachedValueValidator& operator=(const CachedValueValidator& other) {
+validator_ = other.validator_;
+validation_result_ = Result::RECOMPUTE;
+return *this;
+  }
+  CachedValueValidator& operator=(CachedValueValidator&& other) {
+validator_ = std::move(other.validator_);
+validation_result_ = Result::RECOMPUTE;
+return *this;
+  }
+
+  CachedValueValidator(const std::shared_ptr& other) : 
validator_(other) {}
+  CachedValueValidator(std::shared_ptr&& other) : 
validator_(std::move(other)) {}
+  CachedValueValidator& operator=(const std::shared_ptr& 
new_validator) {
+validator_ = new_validator;
+validation_result_ = Result::RECOMPUTE;
+return *this;
+  }
+  CachedValueValidator& operator=(std::shared_ptr&& 
new_validator) {
+validator_ = std::move(new_validator);
+validation_result_ = Result::RECOMPUTE;
+return *this;
+  }
+
+  const std::shared_ptr& operator->() const {
+return validator_;
+  }
+
+  operator bool() const {

Review comment:
   done, and the self-assignment check is also done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (MINIFICPP-1264) Add getter-setter interface helper

2020-06-18 Thread Adam Hunyadi (Jira)
Adam Hunyadi created MINIFICPP-1264:
---

 Summary: Add getter-setter interface helper
 Key: MINIFICPP-1264
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1264
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Affects Versions: 0.7.0
Reporter: Adam Hunyadi
Assignee: Adam Hunyadi
 Fix For: 0.8.0


*Background:*

In our codebase we have quite a lot of getter functions that follow this 
signature:
{code:c++|title=Current Interface}
void getMemberField(Type& argToModify);
bool getNullableMemberField(OtherType& argToModify);
{code}
Most developers would much rather use an interface that looks like this though:
{code:c++|title=Proposed Interface}
Type& getMemberField();
const Type& getMemberField();
optional getNullableMemberField();
{code}
The problem with the former version is that in some cases it makes the code 
unnecessarily verbose. An example from our codebase:
{code:c++|title=Example}
 # Version 1
 utils::Identifier sourceUuid;
 source->getUUID(sourceUuid);
 connection->setSourceUUID(sourceUuid);
 # Version 2
 connection->setSourceUUID(source->getUUID(sourceUuid));
{code}
Unfortunately, more often than not this is the exact pattern the getters appear in.

*Proposal:*
 A godbolt implementation for a convenience function is available here:

[[Proof of 
Concept]|https://godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAM1QDsCBlZAQwBtMQBGAFlICsupVs1qhkAUgBMAISnTSAZ0ztkBPHUqZa6AMKpWAVwC2tQVvQAZPLUwA5YwCNMxENwBspAA6oFhdbT1DE0FvXzU6Kxt7IycXd0VlTFV/BgJmYgJA41NOBJVw2lT0gki7R2dXDwU0jKzg3Ori0ujYyoBKRVQDYmQOAHIpAGZrZEMsAGpxQZ1q4mtgKexxAAYAQSGRscxJ6fVZzGYjRZX1yWHaUYMJqZ0DNVZCAE9jtY2LrZ2dI0wjEmfBpavM6bK7bG6oTwFNgvU7nS7XaYANySRGIMLe8LB0yoBguUNY6OB71BnwIj08mAA%2BgRiMxCAp0WtZgZVOMAPKsdCPHTCBQM14Adlka3GovGiNQeHQ42AmAIqwg1gIUjc43SwDakyFauIwB2ABE1VNpFr9ScxeMlWqDeNOJJjeaxUqAFTjBw22gGVisSFowbC9YCs3%2Bk4nAg/TzCcOfUbMPnjABii3GzNZq11ABVyZgHWtw0ZI8xozcyRTaIdtgAlOWkcalrQV8Y8hS1%2Bvl74AOi743TwAZAJOqYIPcz2Zu1eHEGbIBAzraEF7Ci7HbaydDQsdovzheL018AC8qcPbDCLUOZXLKeqtQGLRaDL5RHXsza243qugZwQDJGqYlvvQNy2LWH5fj%2B7A3Iuy7HACX5jiGIpiuIQa5oGwYBmGEZRliOhvt87IOHwrbZu22wJriMLnhO3S0AAatCgq3mKD7zCOwDVlQr4kY2vZZhSNzkbQixflhRbbLKBBXrqNzLMJID1qh96PnqvZcWW74EJ%2BIDED8qDIpSOlUM4Wi9JBuocXJCkIesiGitRxB0WwECalMhpYOw4aKWK9mOawEBsoRKrjKghG1oJ4w4rQmogMFhGUhAKohXwHQRbi8WRS5G4obZkw5apEnOZuN5FRaqnql5d4pppM7WBKADWmDUGltZJZStbqqu1mVaKOnfg5OoLF1SHZasFqeHMiJiSARUBXwrqtRVia4qltCUqhyHoaGaxWkYdK0M5xU5RyXI8nGCh1pg1SLb1NHjD59EEtq4bVLWKrHdyvIKDOEmjRtHYFZ1GEjVtNmgyD4OvJDYNQyDfQdKwIB9AArH0pCmH0yyo6giM6HIcgpl0PRgmcnCowQiOY20HR1a43AdpI3AABxuEjgwAJzcMskiM4MjMc0IiPcKj6OY6Q2N9KjX3LKQ5MY3DpBwLAMCICgqAFng7BkBQEBoOrmsoMIoicMsJukFQGvhsQX0QA4iOk6QDjWOkjx26jusAQQbK0KwLty6QWC7aI7AU6j%2BA6ckyJfX7mAAB5JHc/Si0qyiu0IeAOLSxDchgidk3MRiux0ND0EwbAcDw/CCIbYh4zIacOF9sCkSAfasKQyIuA84ZU2LkL%2BFHAC0H6uRIMhyJwArjAPbKDJLiTJBoEDmHUOSkOYzTlC4uShH4dAryEPi77QG8xBUDTzwURS1Po2SCEo%2BQpDUJTWGUp9b4oT/7w0T8n60nAdAoQmvQuDw0RijNGIcxaIxjszAebhuDjFjE%2BY2HZlioPGBAXAhASCTBJrWPQetnC4MGP/JstdpBkxDj3GmSNpYIz6ELUgBd4GoMZnaJGSMBQCmNsseBAoIF%2B3FpLEA0tZaUwVsrCASAugEE8HccglBdaeA1mfNe%2BBUSCGLowFgwdGakAAO60k8IXAWyNhaQPFkMTg4x9GEAQOMGBbg4EIKQXqFBaDRZiLhh0BABwsAuGcqYxhBdBhIw7GzTgjMBTcG4JIbmbM2ahN4CLLGiNhGiKodTEAHCOwCiRpwQY7hlhIzZtwtwuR6GzwEaLIRMtMmmMkOYwRaS6lyx7h3XwGhuBAA%3D%3D%3D]

We can introduce this and simplify later usages of getters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-18 Thread GitBox


adamdebreceni commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r442078239



##
File path: libminifi/include/utils/ValueUtils.h
##
@@ -0,0 +1,193 @@
+/**

Review comment:
   done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-18 Thread GitBox


adamdebreceni commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r442076547



##
File path: libminifi/include/utils/ValueParser.h
##
@@ -0,0 +1,193 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBMINIFI_INCLUDE_UTILS_VALUEUTILS_H_
+#define LIBMINIFI_INCLUDE_UTILS_VALUEUTILS_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "PropertyErrors.h"
+#include "GeneralUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+class ValueParser {
+ private:
+  template
+  struct is_non_narrowing_convertible: std::false_type {
+static_assert(std::is_integral::value && 
std::is_integral::value, "Checks only integral values");
+  };
+
+  template
+  struct is_non_narrowing_convertible()})>>: std::true_type {
+static_assert(std::is_integral::value && 
std::is_integral::value, "Checks only integral values");
+  };
+
+ public:
+  explicit ValueParser(const std::string& str, std::size_t offset = 0): 
str(str), offset(offset) {}
+
+  template
+  ValueParser& parseInt(Out& out) {
+static_assert(is_non_narrowing_convertible::value, "Expected 
lossless conversion from int");
+long result;  // NOLINT
+auto len = safeCallConverter(std::strtol, result);
+if ( len == 0 ) {
+  throw ParseException("Couldn't parse int");
+}
+if (result < (std::numeric_limits::min)() || result > 
(std::numeric_limits::max)()) {
+  throw ParseException("Cannot convert long to int");
+}
+offset += len;
+out = {static_cast(result)};
+return *this;
+  }
+
+  template
+  ValueParser& parseLong(Out& out) {
+static_assert(is_non_narrowing_convertible::value, "Expected 
lossless conversion from long");  // NOLINT
+long result;  // NOLINT
+auto len = safeCallConverter(std::strtol, result);
+if ( len == 0 ) {
+  throw ParseException("Couldn't parse long");
+}
+offset += len;
+out = {result};
+return *this;
+  }
+
+  template
+  ValueParser& parseLongLong(Out& out) {
+static_assert(is_non_narrowing_convertible::value, 
"Expected lossless conversion from long long");  // NOLINT
+long long result;  // NOLINT
+auto len = safeCallConverter(std::strtoll, result);
+if ( len == 0 ) {
+  throw ParseException("Couldn't parse long long");
+}
+offset += len;
+out = {result};
+return *this;
+  }
+
+  template
+  ValueParser& parseUInt32(Out& out) {
+static_assert(is_non_narrowing_convertible::value, 
"Expected lossless conversion from uint32_t");
+parseSpace();
+if (offset < str.length() && str[offset] == '-') {
+  throw ParseException("Not an unsigned long");
+}
+unsigned long result;  // NOLINT
+auto len = safeCallConverter(std::strtoul, result);
+if ( len == 0 ) {
+  throw ParseException("Couldn't parse uint32_t");
+}
+if (result > (std::numeric_limits::max)()) {
+  throw ParseException("Cannot convert unsigned long to uint32_t");
+}
+offset += len;
+out = {static_cast(result)};
+return *this;
+  }
+
+  template
+  ValueParser& parseUnsignedLongLong(Out& out) {
+static_assert(is_non_narrowing_convertible::value, "Expected lossless conversion from unsigned long long");  // NOLINT
+parseSpace();
+if (offset < str.length() && str[offset] == '-') {
+  throw ParseException("Not an unsigned long");
+}
+unsigned long long result;  // NOLINT
+auto len = safeCallConverter(std::strtoull, result);
+if ( len == 0 ) {
+  throw ParseException("Couldn't parse unsigned long long");
+}
+offset += len;
+out = {result};
+return *this;
+  }
+
+  template
+  ValueParser& parseBool(Out& out) {
+parseSpace();
+if (std::strncmp(str.c_str() + offset, "false", std::strlen("false")) == 
0) {
+  offset += std::strlen("false");
+  out = false;
+} else if (std::strncmp(str.c_str() + offset, "true", std::strlen("true")) 
== 0) {
+  offset += std::strlen("true");
+  out = true;
+} else {
+  throw ParseException("Couldn't parse bool");
+}
+return *this;
+  }
+
+  void 

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-18 Thread GitBox


adamdebreceni commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r442076380



##
File path: libminifi/include/utils/PropertyErrors.h
##
@@ -0,0 +1,115 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBMINIFI_INCLUDE_UTILS_PROPERTYERRORS_H_
+#define LIBMINIFI_INCLUDE_UTILS_PROPERTYERRORS_H_
+
+#include 
+
+#include "Exception.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+
+namespace core {
+
+class PropertyValue;
+class ConfigurableComponent;
+class Property;
+
+} /* namespace core */
+
+namespace utils {
+
+class ValueException: public Exception{
+ private:
+  explicit ValueException(const std::string& err): 
Exception(ExceptionType::GENERAL_EXCEPTION, err) {}
+  explicit ValueException(const char* err): 
Exception(ExceptionType::GENERAL_EXCEPTION, err) {}
+
+  // base class already has a virtual destructor
+
+  friend class ParseException;
+  friend class ConversionException;
+  friend class InvalidValueException;
+};

Review comment:
   moved most stuff into the namespace `internal`
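   A minimal sketch of the layout described above (illustrative only; the base class and the exact set of exception types are simplified stand-ins, not the code from this PR): the value-related exceptions live in a nested `internal` namespace so they stay out of the public `utils` surface.

```cpp
// Illustrative sketch only: simplified stand-ins, not the classes from this PR.
// The value-related exceptions sit in utils::internal so they are not part of
// the public utils surface.
#include <stdexcept>
#include <string>

namespace org { namespace apache { namespace nifi { namespace minifi { namespace utils {
namespace internal {

class ValueException : public std::runtime_error {  // the real class derives from minifi's Exception
 public:
  explicit ValueException(const std::string& err) : std::runtime_error(err) {}
};

class ParseException : public ValueException {
 public:
  using ValueException::ValueException;
};

}  // namespace internal
}  // namespace utils
}  // namespace minifi
}  // namespace nifi
}  // namespace apache
}  // namespace org
```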





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-18 Thread GitBox


adamdebreceni commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r442075945



##
File path: libminifi/include/core/CachedValueValidator.h
##
@@ -0,0 +1,103 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef NIFI_MINIFI_CPP_CACHEDVALUEVALIDATOR_H
+#define NIFI_MINIFI_CPP_CACHEDVALUEVALIDATOR_H
+
+#include "PropertyValidation.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace core {
+
+class CachedValueValidator{
+ public:
+  enum class Result {
+FAILURE,
+SUCCESS,
+RECOMPUTE
+  };
+
+  CachedValueValidator() = default;
+  CachedValueValidator(const CachedValueValidator& other) : 
validator_(other.validator_) {}
+  CachedValueValidator(CachedValueValidator&& other) noexcept : 
validator_(std::move(other.validator_)) {}
+  CachedValueValidator& operator=(const CachedValueValidator& other) {
+validator_ = other.validator_;
+validation_result_ = Result::RECOMPUTE;
+return *this;
+  }
+  CachedValueValidator& operator=(CachedValueValidator&& other) {
+validator_ = std::move(other.validator_);
+validation_result_ = Result::RECOMPUTE;
+return *this;
+  }
+
+  CachedValueValidator(const std::shared_ptr& other) : 
validator_(other) {}
+  CachedValueValidator(std::shared_ptr&& other) : 
validator_(std::move(other)) {}
+  CachedValueValidator& operator=(const std::shared_ptr& 
new_validator) {
+validator_ = new_validator;
+validation_result_ = Result::RECOMPUTE;
+return *this;
+  }
+  CachedValueValidator& operator=(std::shared_ptr&& 
new_validator) {
+validator_ = std::move(new_validator);
+validation_result_ = Result::RECOMPUTE;
+return *this;
+  }
+
+  const std::shared_ptr& operator->() const {
+return validator_;
+  }
+
+  operator bool() const {
+return (bool)validator_;
+  }
+
+  const std::shared_ptr& operator*() const {
+return validator_;
+  }
+
+  void setValidationResult(bool success) const {
+validation_result_ = success ? Result::SUCCESS : Result::FAILURE;
+  }
+
+  void clearValidationResult() const {
+validation_result_ = Result::RECOMPUTE;
+  }

Review comment:
   clearValidationResult could be made non-const, but setValidationResult must 
be const because `PropertyValue::validate` is const, so the cached result has 
to be mutable
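   A minimal sketch of the const/mutable interplay described above, using illustrative names rather than the actual CachedValueValidator and PropertyValue classes:

```cpp
#include <string>

// Illustrative sketch, not the actual CachedValueValidator/PropertyValue code:
// validate() is called from a const context (PropertyValue::validate() is const),
// so it must be const itself and may only update state declared mutable.
class CachedValidator {
 public:
  enum class Result { FAILURE, SUCCESS, RECOMPUTE };

  Result validate(const std::string& value) const {
    if (validation_result_ == Result::RECOMPUTE) {
      // hypothetical check standing in for the real validator logic
      validation_result_ = value.empty() ? Result::FAILURE : Result::SUCCESS;
    }
    return validation_result_;
  }

  // Only called from non-const mutators (e.g. assignment), so it could stay non-const.
  void clearValidationResult() { validation_result_ = Result::RECOMPUTE; }

 private:
  mutable Result validation_result_{Result::RECOMPUTE};
};
```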





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-18 Thread GitBox


adamdebreceni commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r442075344



##
File path: extensions/libarchive/MergeContent.cpp
##
@@ -39,14 +39,29 @@ namespace nifi {
 namespace minifi {
 namespace processors {
 
-core::Property MergeContent::MergeStrategy("Merge Strategy", "Defragment or 
Bin-Packing Algorithm", MERGE_STRATEGY_DEFRAGMENT);
-core::Property MergeContent::MergeFormat("Merge Format", "Merge Format", 
MERGE_FORMAT_CONCAT_VALUE);
+core::Property MergeContent::MergeStrategy(
+  core::PropertyBuilder::createProperty("Merge Strategy")
+  ->withDescription("Defragment or Bin-Packing Algorithm")
+  ->withAllowableValues({MERGE_STRATEGY_DEFRAGMENT, 
MERGE_STRATEGY_BIN_PACK})
+  ->withDefaultValue(MERGE_STRATEGY_DEFRAGMENT)->build());
+core::Property MergeContent::MergeFormat(
+  core::PropertyBuilder::createProperty("Merge Format")
+  ->withDescription("Merge Format")
+  ->withAllowableValues({MERGE_FORMAT_CONCAT_VALUE, 
MERGE_FORMAT_TAR_VALUE, MERGE_FORMAT_ZIP_VALUE})
+  ->withDefaultValue(MERGE_FORMAT_CONCAT_VALUE)->build());
 core::Property MergeContent::CorrelationAttributeName("Correlation Attribute 
Name", "Correlation Attribute Name", "");
-core::Property MergeContent::DelimiterStratgey("Delimiter Strategy", 
"Determines if Header, Footer, and Demarcator should point to files", 
DELIMITER_STRATEGY_FILENAME);
+core::Property MergeContent::DelimiterStratgey(

Review comment:
   done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-18 Thread GitBox


fgerlits commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r442063207



##
File path: libminifi/include/utils/ValueParser.h
##
@@ -0,0 +1,193 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBMINIFI_INCLUDE_UTILS_VALUEUTILS_H_
+#define LIBMINIFI_INCLUDE_UTILS_VALUEUTILS_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "PropertyErrors.h"
+#include "GeneralUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+class ValueParser {
+ private:
+  template
+  struct is_non_narrowing_convertible: std::false_type {
+static_assert(std::is_integral::value && 
std::is_integral::value, "Checks only integral values");
+  };
+
+  template
+  struct is_non_narrowing_convertible()})>>: std::true_type {
+static_assert(std::is_integral::value && 
std::is_integral::value, "Checks only integral values");
+  };
+
+ public:
+  explicit ValueParser(const std::string& str, std::size_t offset = 0): 
str(str), offset(offset) {}
+
+  template
+  ValueParser& parseInt(Out& out) {
+static_assert(is_non_narrowing_convertible::value, "Expected 
lossless conversion from int");
+long result;  // NOLINT
+auto len = safeCallConverter(std::strtol, result);
+if ( len == 0 ) {
+  throw ParseException("Couldn't parse int");
+}
+if (result < (std::numeric_limits::min)() || result > 
(std::numeric_limits::max)()) {
+  throw ParseException("Cannot convert long to int");
+}
+offset += len;
+out = {static_cast(result)};
+return *this;
+  }
+
+  template
+  ValueParser& parseLong(Out& out) {
+static_assert(is_non_narrowing_convertible::value, "Expected 
lossless conversion from long");  // NOLINT
+long result;  // NOLINT
+auto len = safeCallConverter(std::strtol, result);
+if ( len == 0 ) {
+  throw ParseException("Couldn't parse long");
+}
+offset += len;
+out = {result};
+return *this;
+  }
+
+  template
+  ValueParser& parseLongLong(Out& out) {
+static_assert(is_non_narrowing_convertible::value, 
"Expected lossless conversion from long long");  // NOLINT
+long long result;  // NOLINT
+auto len = safeCallConverter(std::strtoll, result);
+if ( len == 0 ) {
+  throw ParseException("Couldn't parse long long");
+}
+offset += len;
+out = {result};
+return *this;
+  }
+
+  template
+  ValueParser& parseUInt32(Out& out) {
+static_assert(is_non_narrowing_convertible::value, 
"Expected lossless conversion from uint32_t");
+parseSpace();
+if (offset < str.length() && str[offset] == '-') {
+  throw ParseException("Not an unsigned long");
+}
+unsigned long result;  // NOLINT
+auto len = safeCallConverter(std::strtoul, result);
+if ( len == 0 ) {
+  throw ParseException("Couldn't parse uint32_t");
+}
+if (result > (std::numeric_limits::max)()) {
+  throw ParseException("Cannot convert unsigned long to uint32_t");
+}
+offset += len;
+out = {static_cast(result)};
+return *this;
+  }
+
+  template
+  ValueParser& parseUnsignedLongLong(Out& out) {
+static_assert(is_non_narrowing_convertible::value, "Expected lossless conversion from unsigned long long");  // NOLINT
+parseSpace();
+if (offset < str.length() && str[offset] == '-') {
+  throw ParseException("Not an unsigned long");
+}
+unsigned long long result;  // NOLINT
+auto len = safeCallConverter(std::strtoull, result);
+if ( len == 0 ) {
+  throw ParseException("Couldn't parse unsigned long long");
+}
+offset += len;
+out = {result};
+return *this;
+  }
+
+  template
+  ValueParser& parseBool(Out& out) {
+parseSpace();
+if (std::strncmp(str.c_str() + offset, "false", std::strlen("false")) == 
0) {
+  offset += std::strlen("false");
+  out = false;
+} else if (std::strncmp(str.c_str() + offset, "true", std::strlen("true")) 
== 0) {
+  offset += std::strlen("true");
+  out = true;
+} else {
+  throw ParseException("Couldn't parse bool");
+}
+return *this;
+  }
+
+  void 

[jira] [Updated] (NIFI-7549) Adding Hazelcast based implementation for DistributedMapCacheClient

2020-06-18 Thread Simon Bence (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Bence updated NIFI-7549:
--
Fix Version/s: 1.12.0
   Issue Type: New Feature  (was: Bug)

> Adding Hazelcast based implementation for DistributedMapCacheClient
> ---
>
> Key: NIFI-7549
> URL: https://issues.apache.org/jira/browse/NIFI-7549
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Major
> Fix For: 1.12.0
>
>
> Adding Hazelcast support in the same fashion as in case of Redis for example 
> would be useful. Even further: in order to make it easier to use, embedded 
> Hazelcast support should be added, which makes it unnecessary to start 
> Hazelcast cluster manually.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7549) Adding Hazelcast based implementation for DistributedMapCacheClient

2020-06-18 Thread Simon Bence (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Bence updated NIFI-7549:
--
Description: Adding Hazelcast support in the same fashion as in case of 
Redis for example would be useful. Even further: in order to make it easier to 
use, embedded Hazelcast support should be added, which makes it unnecessary to 
start Hazelcast cluster manually.

> Adding Hazelcast based implementation for DistributedMapCacheClient
> ---
>
> Key: NIFI-7549
> URL: https://issues.apache.org/jira/browse/NIFI-7549
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Major
>
> Adding Hazelcast support in the same fashion as in case of Redis for example 
> would be useful. Even further: in order to make it easier to use, embedded 
> Hazelcast support should be added, which makes it unnecessary to start 
> Hazelcast cluster manually.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support

2020-06-18 Thread GitBox


arpadboda commented on a change in pull request #784:
URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r442041040



##
File path: extensions/script/python/ExecutePythonProcessor.cpp
##
@@ -35,155 +35,184 @@ namespace python {
 namespace processors {
 
 core::Property ExecutePythonProcessor::ScriptFile("Script File",  // NOLINT
-R"(Path to script file to execute)", "");
+R"(Path to script file to execute. Only one of Script File or Script Body 
may be used)", "");
+core::Property ExecutePythonProcessor::ScriptBody("Script Body",  // NOLINT

Review comment:
   Using the property builder (withDescription, etc.) can help break these 
lines without linter errors, and improve readability as well. 
   You can also mark some of the properties as required.
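   For illustration, a hedged sketch of such a builder-style declaration, modelled on the PropertyBuilder calls in the MergeContent changes of this PR; the required flag and the default value below are assumptions, not taken from the actual ExecutePythonProcessor code:

```cpp
// Hedged sketch only: mirrors the PropertyBuilder usage from the MergeContent
// changes in this PR. The isRequired flag and the empty default value are
// assumptions for illustration, not the actual ExecutePythonProcessor code.
core::Property ExecutePythonProcessor::ScriptFile(
    core::PropertyBuilder::createProperty("Script File")
        ->withDescription("Path to script file to execute. Only one of Script File or Script Body may be used")
        ->isRequired(false)
        ->withDefaultValue<std::string>("")
        ->build());
```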

##
File path: extensions/script/python/ExecutePythonProcessor.cpp
##
@@ -35,155 +35,184 @@ namespace python {
 namespace processors {
 
 core::Property ExecutePythonProcessor::ScriptFile("Script File",  // NOLINT
-R"(Path to script file to execute)", "");
+R"(Path to script file to execute. Only one of Script File or Script Body 
may be used)", "");
+core::Property ExecutePythonProcessor::ScriptBody("Script Body",  // NOLINT
+R"(Script to execute. Only one of Script File or Script Body may be 
used)", "");
 core::Property ExecutePythonProcessor::ModuleDirectory("Module Directory",  // 
NOLINT
-R"(Comma-separated list of paths to files and/or directories which
- contain modules required by 
the script)", "");
+R"(Comma-separated list of paths to files and/or directories which contain 
modules required by the script)", "");
 
 core::Relationship ExecutePythonProcessor::Success("success", "Script 
successes");  // NOLINT
 core::Relationship ExecutePythonProcessor::Failure("failure", "Script 
failures");  // NOLINT
 
 void ExecutePythonProcessor::initialize() {
   // initialization requires that we do a little leg work prior to onSchedule
   // so that we can provide manifest our processor identity
-  std::set properties;
-
-  std::string prop;
-  getProperty(ScriptFile.getName(), prop);
-
-  properties.insert(ScriptFile);
-  properties.insert(ModuleDirectory);
-  setSupportedProperties(properties);
-
-  std::set relationships;
-  relationships.insert(Success);
-  relationships.insert(Failure);
-  setSupportedRelationships(std::move(relationships));
-  setAcceptAllProperties();
-  if (!prop.empty()) {
-setProperty(ScriptFile, prop);
-std::shared_ptr engine;
-python_logger_ = 
logging::LoggerFactory::getAliasedLogger(getName());
+  if (getProperties().empty()) {
+setSupportedProperties({
+  ScriptFile,
+  ScriptBody,
+  ModuleDirectory
+});
+setAcceptAllProperties();
+setSupportedRelationships({
+  Success,
+  Failure
+});
+valid_init_ = false;
+return;
+  }
 
-engine = createEngine();
+  python_logger_ = 
logging::LoggerFactory::getAliasedLogger(getName());
 
-if (engine == nullptr) {
-  throw std::runtime_error("No script engine available");
-}
+  getProperty(ModuleDirectory.getName(), module_directory_);
 
-try {
-  engine->evalFile(prop);
-  auto me = shared_from_this();
-  triggerDescribe(engine, me);
-  triggerInitialize(engine, me);
+  valid_init_ = false;
+  appendPathForImportModules();
+  loadScript();
+  try {
+if (script_to_exec_.size()) {
+  std::shared_ptr engine = getScriptEngine();
+  engine->eval(script_to_exec_);
+  auto shared_this = shared_from_this();
+  engine->describe(shared_this);
+  engine->onInitialize(shared_this);
+  handleEngineNoLongerInUse(std::move(engine));
   valid_init_ = true;
-} catch (std::exception ) {
-  logger_->log_error("Caught Exception %s", exception.what());
-  engine = nullptr;
-  std::rethrow_exception(std::current_exception());
-  valid_init_ = false;
-} catch (...) {
-  logger_->log_error("Caught Exception");
-  engine = nullptr;
-  std::rethrow_exception(std::current_exception());
-  valid_init_ = false;
 }
-
+  }
+  catch (const std::exception& exception) {
+logger_->log_error("Caught Exception: %s", exception.what());
+std::rethrow_exception(std::current_exception());
+  }
+  catch (...) {
+logger_->log_error("Caught Exception");
+std::rethrow_exception(std::current_exception());
   }
 }
 
 void ExecutePythonProcessor::onSchedule(const 
std::shared_ptr , const 
std::shared_ptr ) {
   if (!valid_init_) {
-throw std::runtime_error("Could not correctly in initialize " + getName());
-  }
-  context->getProperty(ScriptFile.getName(), script_file_);
-  context->getProperty(ModuleDirectory.getName(), module_directory_);
-  if (script_file_.empty() && script_engine_.empty()) {
-logger_->log_error("Script File must be defined");
-return;
+throw std::runtime_error("Could not correctly initialize " + 

[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-18 Thread GitBox


fgerlits commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r442050285



##
File path: extensions/libarchive/MergeContent.cpp
##
@@ -39,14 +39,29 @@ namespace nifi {
 namespace minifi {
 namespace processors {
 
-core::Property MergeContent::MergeStrategy("Merge Strategy", "Defragment or 
Bin-Packing Algorithm", MERGE_STRATEGY_DEFRAGMENT);
-core::Property MergeContent::MergeFormat("Merge Format", "Merge Format", 
MERGE_FORMAT_CONCAT_VALUE);
+core::Property MergeContent::MergeStrategy(
+  core::PropertyBuilder::createProperty("Merge Strategy")
+  ->withDescription("Defragment or Bin-Packing Algorithm")
+  ->withAllowableValues({MERGE_STRATEGY_DEFRAGMENT, 
MERGE_STRATEGY_BIN_PACK})
+  ->withDefaultValue(MERGE_STRATEGY_DEFRAGMENT)->build());
+core::Property MergeContent::MergeFormat(
+  core::PropertyBuilder::createProperty("Merge Format")
+  ->withDescription("Merge Format")
+  ->withAllowableValues({MERGE_FORMAT_CONCAT_VALUE, 
MERGE_FORMAT_TAR_VALUE, MERGE_FORMAT_ZIP_VALUE})
+  ->withDefaultValue(MERGE_FORMAT_CONCAT_VALUE)->build());
 core::Property MergeContent::CorrelationAttributeName("Correlation Attribute 
Name", "Correlation Attribute Name", "");
-core::Property MergeContent::DelimiterStratgey("Delimiter Strategy", 
"Determines if Header, Footer, and Demarcator should point to files", 
DELIMITER_STRATEGY_FILENAME);
+core::Property MergeContent::DelimiterStratgey(

Review comment:
   typo: stratgey -> strategy -- this was in the old code, but we might as 
well fix it now





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] pvillard31 commented on pull request #3887: NIFI-6032 - CDC for Oracle using xstream

2020-06-18 Thread GitBox


pvillard31 commented on pull request #3887:
URL: https://github.com/apache/nifi/pull/3887#issuecomment-645840355


   @shivamkh90 - it would greatly help if you could check out this pull request, 
give it a try, and report your results here. An extensive set of tests with your 
flow definitions and screenshots of the results and expectations would help 
others chime in as well. We have tons of PRs and we need help from the community 
to get them merged into NiFi.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #812: MINIFICPP-1247 - Enhance logging for CWEL

2020-06-18 Thread GitBox


arpadboda commented on a change in pull request #812:
URL: https://github.com/apache/nifi-minifi-cpp/pull/812#discussion_r442015928



##
File path: extensions/windows-event-log/ConsumeWindowsEventLog.cpp
##
@@ -381,23 +405,27 @@ void ConsumeWindowsEventLog::onTrigger(const 
std::shared_ptrlog_trace("Finish enumerating events.");

Review comment:
   This can mean that the processor gathered a bunch of events (and those 
events have been generated since the last onTrigger call), so I don't think 
waiting for a longer time helps in this case.
   Yield might be considered in case the onTrigger call generated 0 new flow 
files.
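   A rough sketch of that yield-on-empty-batch idea, using stand-in types instead of the real ConsumeWindowsEventLog and ProcessContext classes:

```cpp
#include <cstddef>
#include <memory>

// Rough sketch with stand-in types: the real code would use core::ProcessContext
// and count the flow files produced by ConsumeWindowsEventLog in this pass.
struct Context {
  bool yielded = false;
  void yield() { yielded = true; }
};

// Hypothetical helper standing in for draining the queued events into flow files.
std::size_t processQueuedEvents() { return 0; }

void onTrigger(const std::shared_ptr<Context>& context) {
  const std::size_t flow_files_created = processQueuedEvents();
  if (flow_files_created == 0) {
    context->yield();  // back off only when a whole onTrigger pass produced nothing new
  }
}
```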





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] shivamkh90 commented on pull request #3887: Nifi 6032

2020-06-18 Thread GitBox


shivamkh90 commented on pull request #3887:
URL: https://github.com/apache/nifi/pull/3887#issuecomment-645818736


   Eagerly waiting for this feature  



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-7523) Use SSL Context Service for Atlas HTTPS connection in Atlas reporting task

2020-06-18 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-7523:
--
Status: Patch Available  (was: In Progress)

> Use SSL Context Service for Atlas HTTPS connection in Atlas reporting task
> --
>
> Key: NIFI-7523
> URL: https://issues.apache.org/jira/browse/NIFI-7523
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> SSL Context Service property is currently used only for the Kafka connection.
> Use the truststore from SSL Context Service when connecting to Atlas via 
> HTTPS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] turcsanyip opened a new pull request #4348: NIFI-7523: Use SSL Context Service for Atlas HTTPS connection in Atla…

2020-06-18 Thread GitBox


turcsanyip opened a new pull request #4348:
URL: https://github.com/apache/nifi/pull/4348


   …s reporting task
   
   Also fixing ControllerServiceDisabledException-s when validating the 
Kerberos config
   
   https://issues.apache.org/jira/browse/NIFI-7523
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org