[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-29 Thread GitBox


adamdebreceni commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447453017



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map>;
+
+  class TestFlowFileRepository : public core::repository::FlowFileRepository {
+   public:
+    explicit TestFlowFileRepository(const std::string& name)
+        : core::SerializableComponent(name),
+          FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+                             MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+    void flush() override {
+      FlowFileRepository::flush();
+      if (onFlush_) {
+        onFlush_();
+      }
+    }
+    std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XX";

Review comment:
   Interesting, will roll back.
   - do we have a list of platforms we plan to support?
   - is `/var/tmp` the second-best place? According to [this](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s15.html), files there are preserved between reboots.
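As a side note on the temp-directory question: one way to avoid hard-coding `/tmp` entirely is to ask the platform for a unique temp directory and clean it up explicitly. A minimal Java sketch of that pattern (illustrative only; the test under review is C++ and uses a `mkdtemp`-style template):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempDirDemo {
    // Create a uniquely named directory under the platform temp location
    // (java.io.tmpdir, typically /tmp on Linux).
    static Path createRepoDir() throws IOException {
        return Files.createTempDirectory("testRepo.");
    }

    public static void main(String[] args) throws IOException {
        Path repoDir = createRepoDir();
        System.out.println(Files.isDirectory(repoDir)); // true
        // Clean up explicitly so the test does not depend on /tmp reboot semantics.
        Files.delete(repoDir);
        System.out.println(Files.exists(repoDir)); // false
    }
}
```

Deleting the directory at the end of the test sidesteps the `/tmp`-vs-`/var/tmp` persistence question altogether.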
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-7589) Unpack processor tar unpacker creates incorrect path value

2020-06-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamás Bunth updated NIFI-7589:
--
Priority: Minor  (was: Major)

> Unpack processor tar unpacker creates incorrect path value
> --
>
> Key: NIFI-7589
> URL: https://issues.apache.org/jira/browse/NIFI-7589
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamás Bunth
>Assignee: Tamás Bunth
>Priority: Minor
>
> *Steps to reproduce*:
>  * Create a tarball with one file in it.
>  * Create a flow:
> {code:java}
> Getfile -> UnpackContent -> logAttribute
> {code}
>  ** Set Getfile "directory" to read the tarball
>  ** Set UnpackContent "PackagingFormat" to tar.
>  * Examine the 'path' flowfile attribute in the logs.
> *Expected results*:
> Since the file is in the tarball root, "path" attribute should be "." or "/".
> *Current results*:
> "null/"
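The "null/" value is consistent with the path attribute being built from the parent of the tar entry name: for an entry at the archive root there is no parent, and naive string concatenation of a null parent produces "null/". A minimal, hypothetical Java sketch of that failure mode (illustrative only; not the actual NiFi unpacker code):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathAttributeDemo {
    // Buggy pattern: concatenating a possibly-null parent yields "null/".
    static String pathAttribute(String entryName) {
        Path parent = Paths.get(entryName).getParent();
        return parent + "/";
    }

    // Guarded variant: fall back to "." for root-level entries.
    static String pathAttributeFixed(String entryName) {
        Path parent = Paths.get(entryName).getParent();
        return (parent == null ? "." : parent.toString()) + "/";
    }

    public static void main(String[] args) {
        System.out.println(pathAttribute("file.txt"));          // null/
        System.out.println(pathAttributeFixed("file.txt"));     // ./
        System.out.println(pathAttributeFixed("dir/file.txt")); // dir/
    }
}
```

The guarded variant matches the expected behaviour described above, where a root-level entry yields "." rather than "null".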



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7589) Unpack processor tar unpacker creates incorrect path value

2020-06-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamás Bunth updated NIFI-7589:
--
Description: 
*Steps to reproduce*:
 * Create a tarball with one file in it.
 * Create a flow:
{code:java}
Getfile -> UnpackContent -> logAttribute
{code}

 ** Set Getfile "directory" to read the tarball
 ** Set UnpackContent "PackagingFormat" to tar.
 * Examine the 'path' flowfile attribute in the logs.

*Expected results*:

Since the file is in the tarball root, "path" attribute should be "." or "/".

*Current results*:

"null/"

  was:
Steps to reproduce:
 # Create a tarball with one file in it.
 # Create a flow:
{code:java}
Getfile -> UnpackContent -> logAttribute
{code}
- Set Getfile "directory" to read the tarball
- Set UnpackContent "PackagingFormat" to tar.
 # Examine the 'path' flowfile attribute in the logs.

Expected results:

Since the file is in the tarball root, "path" attribute should be "." or "/".

Current results:

"null/"


> Unpack processor tar unpacker creates incorrect path value
> --
>
> Key: NIFI-7589
> URL: https://issues.apache.org/jira/browse/NIFI-7589
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamás Bunth
>Assignee: Tamás Bunth
>Priority: Major
>
> *Steps to reproduce*:
>  * Create a tarball with one file in it.
>  * Create a flow:
> {code:java}
> Getfile -> UnpackContent -> logAttribute
> {code}
>  ** Set Getfile "directory" to read the tarball
>  ** Set UnpackContent "PackagingFormat" to tar.
>  * Examine the 'path' flowfile attribute in the logs.
> *Expected results*:
> Since the file is in the tarball root, "path" attribute should be "." or "/".
> *Current results*:
> "null/"





[jira] [Created] (NIFI-7589) Unpack processor tar unpacker creates incorrect path value

2020-06-29 Thread Jira
Tamás Bunth created NIFI-7589:
-

 Summary: Unpack processor tar unpacker creates incorrect path value
 Key: NIFI-7589
 URL: https://issues.apache.org/jira/browse/NIFI-7589
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Tamás Bunth
Assignee: Tamás Bunth


Steps to reproduce:
 # Create a tarball with one file in it.
 # Create a flow:
{code:java}
Getfile -> UnpackContent -> logAttribute
{code}
- Set Getfile "directory" to read the tarball
- Set UnpackContent "PackagingFormat" to tar.
 # Examine the 'path' flowfile attribute in the logs.

Expected results:

Since the file is in the tarball root, "path" attribute should be "." or "/".

Current results:

"null/"





[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-29 Thread GitBox


adamdebreceni commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447447817



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map>;

Review comment:
   Indeed, it is only used once (previously it was used multiple times); will remove.









[GitHub] [nifi] r65535 commented on pull request #4336: NIFI-7515 Added 7Zip support to UnpackContent

2020-06-29 Thread GitBox


r65535 commented on pull request #4336:
URL: https://github.com/apache/nifi/pull/4336#issuecomment-651577043


   @MikeThomsen - The only solution I've found is to write the file to disk and pull the entries out from there:
   ```
   SevenZFile sevenZFile = new SevenZFile(new File("archive.7z"));
   SevenZArchiveEntry entry = sevenZFile.getNextEntry();
   ```
   Is this a better way of handling the data, or should I add a warning in the description and add the tag `@SystemResourceConsideration(resource = SystemResource.MEMORY)`?







[jira] [Commented] (NIFI-7579) Create a GetS3Object Processor

2020-06-29 Thread Wouter de Vries (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148342#comment-17148342
 ] 

Wouter de Vries commented on NIFI-7579:
---

[~ArpStorm1] do you have a specific reason why that should not happen?

As far as I can see, the code of this new processor would be by and large identical to the existing FetchS3Object processor; is that not the case?

> Create a GetS3Object Processor
> --
>
> Key: NIFI-7579
> URL: https://issues.apache.org/jira/browse/NIFI-7579
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: ArpStorm1
>Assignee: YoungGyu Chun
>Priority: Major
>
> Sometimes the client needs to get only a specific object or a subset of objects 
> from its bucket. Currently, the only way to do this is using the ListS3 processor 
> followed by the FetchS3Object processor. Creating a GetS3Object processor for 
> such cases would be great.





[GitHub] [nifi] javiroman commented on pull request #4358: NIFI-7578 nifi-toolkit CLI Process Group Create command

2020-06-29 Thread GitBox


javiroman commented on pull request #4358:
URL: https://github.com/apache/nifi/pull/4358#issuecomment-651557340


   Unfortunately, I'm not getting checkstyle errors. The build command is:
   
   
![build-command](https://user-images.githubusercontent.com/1099214/86088318-64ff1e80-baa6-11ea-82be-9b239b5cb5a2.png)
   
   
![build-sucess](https://user-images.githubusercontent.com/1099214/86088389-8c55eb80-baa6-11ea-9e18-a40d8d97594b.png)
   
   Anyway, I've just removed some unused imports.







[GitHub] [nifi] esecules edited a comment on pull request #4286: NIFI-7386: Azurite emulator support

2020-06-29 Thread GitBox


esecules edited a comment on pull request #4286:
URL: https://github.com/apache/nifi/pull/4286#issuecomment-651455226


   Bump! Having this feature will cut down on how much time my test suite waits for network I/O by letting me target an emulator running on the same machine as the tests.







[GitHub] [nifi] esecules commented on pull request #4286: NIFI-7386: Azurite emulator support

2020-06-29 Thread GitBox


esecules commented on pull request #4286:
URL: https://github.com/apache/nifi/pull/4286#issuecomment-651455226


   Bump! Having this feature will cut down on how much time my test suite waits for network I/O.







[jira] [Commented] (NIFI-5595) Add filter to template endpoint

2020-06-29 Thread Lucas Moten (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148234#comment-17148234
 ] 

Lucas Moten commented on NIFI-5595:
---

I may have encountered a bug on this, or else I'm misconfigured and could benefit from the outcomes of NIFI-6080. But I have a feeling that the CORS check is not handling the port the call is made on when coming through a proxy. My current workaround is to force the Origin header in Nginx to be [https://127.0.0.1|https://127.0.0.1/] without specifying the port that NiFi itself is listening on.
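For readers hitting the same issue, the described workaround corresponds roughly to an Nginx proxy block like the following (a hypothetical sketch; the upstream address, port, and location are placeholders, not taken from the report):

```nginx
location / {
    proxy_pass https://127.0.0.1:8443;
    # Workaround: present an Origin without the backend port so the
    # CORS check accepts the proxied request (hypothetical config sketch).
    proxy_set_header Origin https://127.0.0.1;
    proxy_set_header Host $host;
}
```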

> Add filter to template endpoint
> ---
>
> Key: NIFI-5595
> URL: https://issues.apache.org/jira/browse/NIFI-5595
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.1
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Major
>  Labels: cors, security
> Fix For: 1.8.0
>
>
> The template endpoint needs a CORS filter applied.





[GitHub] [nifi] adamfisher commented on a change in pull request #3317: NIFI-6047 Add DetectDuplicateRecord Processor

2020-06-29 Thread GitBox


adamfisher commented on a change in pull request #3317:
URL: https://github.com/apache/nifi/pull/3317#discussion_r447339535



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/DetectDuplicateRecord.java
##
@@ -0,0 +1,620 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.google.common.base.Joiner;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnels;
+import org.apache.commons.codec.binary.Hex;
+import org.apache.commons.codec.digest.DigestUtils;
+import org.apache.commons.codec.digest.MessageDigestAlgorithms;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.*;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.distributed.cache.client.exception.DeserializationException;
+import org.apache.nifi.distributed.cache.client.exception.SerializationException;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.*;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.RecordPathResult;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathPropertyNameValidator;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.*;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.*;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.MessageDigest;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static java.util.stream.Collectors.toList;
+import static org.apache.commons.codec.binary.StringUtils.getBytesUtf8;
+import static org.apache.commons.lang3.StringUtils.*;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@SystemResourceConsideration(resource = SystemResource.MEMORY,
+    description = "Caches records from each incoming FlowFile and determines if the cached record has " +
+        "already been seen. The name of user-defined properties determines the RecordPath values used to " +
+        "determine if a record is unique. If no user-defined properties are present, the entire record is " +
+        "used as the input to determine uniqueness. All duplicate records are routed to 'duplicate'. " +
+        "If the record is not determined to be a duplicate, the Processor routes the record to 'non-duplicate'.")
+@Tags({"text", "record", "update", "change", "replace", "modify", "distinct", "unique",
+    "filter", "hash", "dupe", "duplicate", "dedupe"})
+@CapabilityDescription("Caches records from each incoming FlowFile and determines if the cached record has " +
+    "already been seen. The name of user-defined properties determines the RecordPath values used to " +
+    "determine if a record is unique. If no user-defined properties are present, the entire record is " +
+    "used as the input to determine uniqueness. All duplicate records are routed to 'duplicate'. " +
+    "If the record is not determined to be a dupli

[GitHub] [nifi] sjyang18 commented on pull request #4253: NIFI-7406: PutAzureCosmosRecord Processor

2020-06-29 Thread GitBox


sjyang18 commented on pull request #4253:
URL: https://github.com/apache/nifi/pull/4253#issuecomment-651441085


   @jfrazee I have rebased, resolved the merge conflict, and updated to the Cosmos SDK release version. I did a force-push since GitHub rejected my update with the following error: 'Updates were rejected because the tip of your current branch is behind its remote counterpart'. Will this work?







[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #816: MINIFICPP-1261 - Refactor non-trivial usages of ScopeGuard class

2020-06-29 Thread GitBox


arpadboda closed pull request #816:
URL: https://github.com/apache/nifi-minifi-cpp/pull/816


   







[GitHub] [nifi] adamfisher commented on a change in pull request #3317: NIFI-6047 Add DetectDuplicateRecord Processor

2020-06-29 Thread GitBox


adamfisher commented on a change in pull request #3317:
URL: https://github.com/apache/nifi/pull/3317#discussion_r447318893



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/DetectDuplicateRecord.java
##
@@ -0,0 +1,620 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.google.common.base.Joiner;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnels;
+import org.apache.commons.codec.binary.Hex;
+import org.apache.commons.codec.digest.DigestUtils;
+import org.apache.commons.codec.digest.MessageDigestAlgorithms;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.*;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.distributed.cache.client.exception.DeserializationException;
+import org.apache.nifi.distributed.cache.client.exception.SerializationException;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.*;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.RecordPathResult;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathPropertyNameValidator;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.*;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.*;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.MessageDigest;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static java.util.stream.Collectors.toList;
+import static org.apache.commons.codec.binary.StringUtils.getBytesUtf8;
+import static org.apache.commons.lang3.StringUtils.*;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@SystemResourceConsideration(resource = SystemResource.MEMORY,
+    description = "Caches records from each incoming FlowFile and determines if the cached record has " +
+        "already been seen. The name of user-defined properties determines the RecordPath values used to " +
+        "determine if a record is unique. If no user-defined properties are present, the entire record is " +
+        "used as the input to determine uniqueness. All duplicate records are routed to 'duplicate'. " +
+        "If the record is not determined to be a duplicate, the Processor routes the record to 'non-duplicate'.")
+@Tags({"text", "record", "update", "change", "replace", "modify", "distinct", "unique",
+    "filter", "hash", "dupe", "duplicate", "dedupe"})
+@CapabilityDescription("Caches records from each incoming FlowFile and determines if the cached record has " +

Review comment:
   File added in commits:
   
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org/apache/nifi/processors/standard/DetectDuplicateRecord/additionalDetails.html






[GitHub] [nifi] adamfisher commented on a change in pull request #3317: NIFI-6047 Add DetectDuplicateRecord Processor

2020-06-29 Thread GitBox


adamfisher commented on a change in pull request #3317:
URL: https://github.com/apache/nifi/pull/3317#discussion_r447317427



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestDetectDuplicateRecord.java
##
@@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.serialization.record.MockRecordParser;
+import org.apache.nifi.serialization.record.MockRecordWriter;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.util.*;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.nifi.processors.standard.DetectDuplicateRecord.*;
+import static org.junit.Assert.assertEquals;
+
+public class TestDetectDuplicateRecord {
+
+static {
+System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "info");

Review comment:
   @MikeThomsen What would the values be for the `@AfterClass`? That whole section was copied from another test and I don't see any `@BeforeClass` or `@AfterClass` annotations on them.
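For reference, a matching `@AfterClass` would simply undo the global state that `@BeforeClass` set, typically by clearing the same system properties. A minimal stand-alone sketch of the pattern (plain Java without the JUnit annotations; property names copied from the test above):

```java
public class LogPropsDemo {
    static final String[] PROPS = {
        "org.slf4j.simpleLogger.defaultLogLevel",
        "org.slf4j.simpleLogger.showDateTime",
    };

    // What a @BeforeClass method would do: set global logging properties.
    static void setUp() {
        System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "info");
        System.setProperty("org.slf4j.simpleLogger.showDateTime", "true");
    }

    // What a matching @AfterClass method would do: undo the global state
    // so other test classes in the same JVM are not affected.
    static void tearDown() {
        for (String p : PROPS) {
            System.clearProperty(p);
        }
    }

    public static void main(String[] args) {
        setUp();
        System.out.println(System.getProperty("org.slf4j.simpleLogger.defaultLogLevel")); // info
        tearDown();
        System.out.println(System.getProperty("org.slf4j.simpleLogger.defaultLogLevel")); // null
    }
}
```

Since system properties are JVM-global, skipping the teardown step is usually harmless but can leak logging configuration into unrelated tests.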









[GitHub] [nifi] adamfisher commented on a change in pull request #3317: NIFI-6047 Add DetectDuplicateRecord Processor

2020-06-29 Thread GitBox


adamfisher commented on a change in pull request #3317:
URL: https://github.com/apache/nifi/pull/3317#discussion_r447314976



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestDetectDuplicateRecord.java
##
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.serialization.record.MockRecordParser;
+import org.apache.nifi.serialization.record.MockRecordWriter;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.util.*;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.nifi.processors.standard.DetectDuplicateRecord.*;
+import static org.junit.Assert.assertEquals;
+
+public class TestDetectDuplicateRecord {
+
+private TestRunner runner;
+private MockCacheService cache;
+private MockRecordParser reader;
+private MockRecordWriter writer;
+
+@BeforeClass
+public static void beforeClass() {
+System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "info");
+System.setProperty("org.slf4j.simpleLogger.showDateTime", "true");
+System.setProperty("org.slf4j.simpleLogger.log.nifi.io.nio", "debug");
+System.setProperty("org.slf4j.simpleLogger.log.nifi.processors.standard.DetectDuplicateRecord", "debug");
+System.setProperty("org.slf4j.simpleLogger.log.nifi.processors.standard.TestDetectDuplicateRecord", "debug");
+}
+
+@Before
+public void setup() throws InitializationException {
+runner = TestRunners.newTestRunner(DetectDuplicateRecord.class);
+
+// RECORD_READER, RECORD_WRITER
+reader = new MockRecordParser();
+writer = new MockRecordWriter("header", false);
+
+runner.addControllerService("reader", reader);
+runner.enableControllerService(reader);
+runner.addControllerService("writer", writer);
+runner.enableControllerService(writer);
+
+runner.setProperty(RECORD_READER, "reader");
+runner.setProperty(RECORD_WRITER, "writer");
+
+reader.addSchemaField("firstName", RecordFieldType.STRING);
+reader.addSchemaField("middleName", RecordFieldType.STRING);
+reader.addSchemaField("lastName", RecordFieldType.STRING);
+
+// INCLUDE_ZERO_RECORD_FLOWFILES
+runner.setProperty(INCLUDE_ZERO_RECORD_FLOWFILES, "true");
+
+// CACHE_IDENTIFIER
+runner.setProperty(CACHE_IDENTIFIER, "true");
+
+// DISTRIBUTED_CACHE_SERVICE
+cache = new MockCacheService();
+runner.addControllerService("cache", cache);
+runner.setProperty(DISTRIBUTED_CACHE_SERVICE, "cache");
+runner.enableControllerService(cache);
+
+// CACHE_ENTRY_IDENTIFIER
+final Map<String, String> props = new HashMap<>();
+props.put("hash.value", "1000");
+runner.enqueue(new byte[]{}, props);
+
+// AGE_OFF_DURATION
+runner.setProperty(AGE_OFF_DURATION, "48 hours");
+
+runner.assertValid();
+}
+
+ @Test
+ public void testDetectDuplicatesHashSet() {
+runner.setProperty(FILTER_TYPE, HASH_SET_VALUE);
+runner.setProperty("/middleName", "${field.value}");
+reader.addRecord("John", "Q", "Smith");
+reader.addRecord("John", "Q", "Smith");
+reader.addRecord("Jane", "X", "Doe");
+
+runner.enqueue("");
+runner.run();
+
+doCountTests(0, 1, 1, 1, 2, 1);
+}
+
+@Test
+public void testDetectDuplicatesBloomFilter() {
+runner.setProperty(FILTER_TYPE, BLOOM_FILTER_VALUE);
+runner.setProperty(BLOOM_FILTER_FPP, "0.10");
+runner.setProperty("/middleName", "${field.value}");
+reader.addRecord("John", "Q", "Smith");
+reader.addRecord("John", "Q", "Smith");
+reader.addRecord("Jane", "X", "Doe");
+
+runner.enqueue("");
+runner.run();
+
+doCountTests(0, 1, 1, 1, 2, 1);
+}
+
+@Test
+public 
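Outside of NiFi, the HashSet filter strategy exercised by these tests can be modelled as routing on the first sighting of a derived key. A hypothetical stand-alone sketch (not the processor's actual code; the keys mirror the `/middleName` RecordPath used above):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupeDemo {
    // Returns {nonDuplicateCount, duplicateCount} for the given record keys.
    static int[] route(List<String> keys) {
        Set<String> seen = new HashSet<>();
        int nonDuplicates = 0;
        int duplicates = 0;
        for (String key : keys) {
            if (seen.add(key)) {
                nonDuplicates++;   // first sighting -> 'non-duplicate'
            } else {
                duplicates++;      // already seen -> 'duplicate'
            }
        }
        return new int[] {nonDuplicates, duplicates};
    }

    public static void main(String[] args) {
        // Keys derived from the three records above ("Q", "Q", "X").
        int[] counts = route(List.of("Q", "Q", "X"));
        System.out.println(counts[0]); // 2 non-duplicates
        System.out.println(counts[1]); // 1 duplicate
    }
}
```

A Bloom filter variant trades the exact `Set` for a probabilistic one, which is why the second test above also configures an acceptable false-positive rate.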

[GitHub] [nifi] jfrazee commented on a change in pull request #4352: NIFI-7563 Optimize the usage of JMS sessions and message producers

2020-06-29 Thread GitBox


jfrazee commented on a change in pull request #4352:
URL: https://github.com/apache/nifi/pull/4352#discussion_r447314728



##
File path: 
nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/test/java/org/apache/nifi/jms/processors/helpers/ConnectionFactoryInvocationHandler.java
##
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.jms.processors.helpers;
+
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Method;
+import java.lang.reflect.Proxy;
+import java.util.List;
+import java.util.Objects;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import javax.annotation.concurrent.ThreadSafe;
+import javax.jms.Connection;
+import javax.jms.ConnectionFactory;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * {@link ConnectionFactory}'s invocation handler to be used to create {@link 
Proxy} instances. This handler stores useful information to validate the proper 
resources handling of underlying
+ * connection factory.
+ */
+@ThreadSafe
+public final class ConnectionFactoryInvocationHandler implements 
InvocationHandler {
+
+private static final Logger LOGGER = 
LoggerFactory.getLogger(ConnectionFactoryInvocationHandler.class);
+
+private final ConnectionFactory connectionFactory;
+private final List<ConnectionInvocationHandler> handlers = new 
CopyOnWriteArrayList<>();
+private final AtomicInteger openedConnections = new AtomicInteger();
+
+public ConnectionFactoryInvocationHandler(ConnectionFactory 
connectionFactory) {
+this.connectionFactory = Objects.requireNonNull(connectionFactory);
+}
+
+@Override
+public Object invoke(Object proxy, Method method, Object[] args) throws 
Throwable {
+final Object o = 
connectionFactory.getClass().getMethod(method.getName(), 
method.getParameterTypes()).invoke(connectionFactory, args);
+LOGGER.debug("Method {} called on {}", method.getName(), 
connectionFactory);

Review comment:
   Minor nit. I know this is only used for a test, but can we wrap this in 
`LOGGER.isDebugEnabled()`? Same for `ConnectionInvocationHandler`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] jfrazee commented on a change in pull request #4352: NIFI-7563 Optimize the usage of JMS sessions and message producers

2020-06-29 Thread GitBox


jfrazee commented on a change in pull request #4352:
URL: https://github.com/apache/nifi/pull/4352#discussion_r447314098



##
File path: 
nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/test/java/org/apache/nifi/jms/processors/PublishJMSIT.java
##
@@ -385,6 +390,66 @@ protected TcpTransport createTcpTransport(WireFormat wf, 
SocketFactory socketFac
 }
 }
 
+/**
+ * 
+ * This test validates the optimal resources usage. To process one message 
is expected to create only one connection, one session and one message producer.
+ * 
+ * 
+ * See https://issues.apache.org/jira/browse/NIFI-7563 for 
details.
+ * 
+ * @throws Exception any error related to the broker.
+ */
+@Test(timeout = 1)
+public void validateNIFI7563() throws Exception {
+BrokerService broker = new BrokerService();
+try {
+broker.setPersistent(false);
+TransportConnector connector = 
broker.addConnector("tcp://127.0.0.1:0");
+int port = connector.getServer().getSocketAddress().getPort();
+broker.start();
+
+final ActiveMQConnectionFactory innerCf = new 
ActiveMQConnectionFactory("tcp://127.0.0.1:" + port);
+ConnectionFactoryInvocationHandler connectionFactoryProxy = new 
ConnectionFactoryInvocationHandler(innerCf);
+
+// Create a connection Factory proxy to catch metrics and usage.
+ConnectionFactory cf = (ConnectionFactory) 
Proxy.newProxyInstance(ConnectionFactory.class.getClassLoader(), new Class[] { 
ConnectionFactory.class }, connectionFactoryProxy);
+
+TestRunner runner = TestRunners.newTestRunner(new PublishJMS());
+JMSConnectionFactoryProviderDefinition cs = 
mock(JMSConnectionFactoryProviderDefinition.class);
+when(cs.getIdentifier()).thenReturn("cfProvider");
+when(cs.getConnectionFactory()).thenReturn(cf);
+runner.addControllerService("cfProvider", cs);
+runner.enableControllerService(cs);
+
+runner.setProperty(PublishJMS.CF_SERVICE, "cfProvider");
+
+String destinationName = "myDestinationName";
+// The destination option according current implementation should 
contain topic or queue to infer the destination type
+// from the name. Check 
https://issues.apache.org/jira/browse/NIFI-7561. Once that is fixed, the name 
can be
+// randomly created.
+String topicNameInHeader = "topic-foo";
+runner.setProperty(PublishJMS.DESTINATION, destinationName);
+runner.setProperty(PublishJMS.DESTINATION_TYPE, PublishJMS.QUEUE);
+
+Map<String, String> flowFileAttributes = new HashMap<>();
+// This method will be removed once 
https://issues.apache.org/jira/browse/NIFI-7564 is fixed.
+flowFileAttributes.put(JmsHeaders.DESTINATION, topicNameInHeader);
+flowFileAttributes.put(JmsHeaders.REPLY_TO, topicNameInHeader);
+runner.enqueue("hi".getBytes(), flowFileAttributes);
+runner.enqueue("h2".getBytes(), flowFileAttributes);
+runner.setThreadCount(1);

Review comment:
   Could we add a test that tests the scenario with more threads? I believe 
your comment in the JIRA is accurate but I think the threading is the only real 
concern in this PR.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] adamfisher commented on a change in pull request #3317: NIFI-6047 Add DetectDuplicateRecord Processor

2020-06-29 Thread GitBox


adamfisher commented on a change in pull request #3317:
URL: https://github.com/apache/nifi/pull/3317#discussion_r447314009



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/DetectDuplicateRecord.java
##
@@ -0,0 +1,646 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.google.common.base.Joiner;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnels;
+import org.apache.commons.codec.binary.Hex;
+import org.apache.commons.codec.digest.DigestUtils;
+import org.apache.commons.codec.digest.MessageDigestAlgorithms;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.*;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import 
org.apache.nifi.distributed.cache.client.exception.DeserializationException;
+import 
org.apache.nifi.distributed.cache.client.exception.SerializationException;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.*;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.RecordPathResult;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathPropertyNameValidator;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.*;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+
+import java.io.*;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.security.MessageDigest;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static java.util.stream.Collectors.toList;
+import static org.apache.commons.codec.binary.StringUtils.getBytesUtf8;
+import static org.apache.commons.lang3.StringUtils.*;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@SystemResourceConsideration(resource = SystemResource.MEMORY,
+description = "The HashSet filter type will grow memory space 
proportionate to the number of unique records processed. " +
+"The BloomFilter type will use constant memory regardless of the 
number of records processed.")
+@Tags({"text", "record", "update", "change", "replace", "modify", "distinct", 
"unique",
+"filter", "hash", "dupe", "duplicate", "dedupe"})
+@CapabilityDescription("Caches records from each incoming FlowFile and 
determines if the record " +
+"has already been seen. If so, routes the record to 'duplicate'. If the 
record is " +
+"not determined to be a duplicate, it is routed to 'non-duplicate'."
+)
+@WritesAttribute(attribute = "record.count", description = "The number of 
records processed.")
+@DynamicProperty(
+name = "RecordPath",
+value = "An expression language statement used to determine how the 
RecordPath is resolved. " +
+"The following variables are available: ${field.name}, 
${field.value}, ${field.type}",
+description = "The name of each user-defined property must be a valid 
RecordPath.")
+@SeeAlso(classNames = {
+
"org.apache.nifi.distributed.cache.client.DistributedMap

[GitHub] [nifi] turcsanyip commented on pull request #4368: NIFI-7586 Add socket-level timeout properties for CassandraSessionProvider

2020-06-29 Thread GitBox


turcsanyip commented on pull request #4368:
URL: https://github.com/apache/nifi/pull/4368#issuecomment-651396020


   After changing the timeouts, I get the following errors from QueryCassandra 
/ PutCassandraQL:
   ```
   ERROR [Timer-Driven Process Thread-10] o.a.n.p.cassandra.QueryCassandra 
QueryCassandra[id=01ad5a7c-0173-1000-7fb3-a34af5274655] No host in the 
Cassandra cluster can be contacted successfully to execute this query: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (no host was tried)
   ```
   ```
   com.datastax.driver.core.exceptions.DriverInternalError: Unexpected 
exception thrown
   ```
   
   Did it work for you?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-29 Thread GitBox


arpadboda commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447271999



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, 
MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+ MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared<minifi::Configure>();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, 
utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared<minifi::Connection>(nullptr, nullptr, 
"Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = 
std::make_shared<TestFlowFileRepository>("flowFileRepository");
+
+std::atomic<int> flush_counter{0};
+
+std::atomic<bool> stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}

Review comment:
   Maybe it's a bit too late, but it seems like a busy wait to me. 
   Which naturally works, but I would avoid that on our already overloaded CI 
infra. 
   
   I don't expect a CV here, sleeping 1ms in the body of the loop is good 
enough :)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-29 Thread GitBox


arpadboda commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447269648



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, 
MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+ MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared<minifi::Configure>();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, 
utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared<minifi::Connection>(nullptr, nullptr, 
"Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = 
std::make_shared<TestFlowFileRepository>("flowFileRepository");
+
+std::atomic<int> flush_counter{0};
+
+std::atomic<bool> stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}
+  ff_repository->stop();
+}};
+
+ff_repository->onFlush_ = [&] {
+  if (++flush_counter != 1) {
+return;
+  }
+  
+  for (int keyIdx = 0; keyIdx < 100; ++keyIdx) {
+auto file = std::make_shared<minifi::FlowFileRecord>(ff_repository, 
nullptr);
+file->setUuidConnection(connection->getUUIDStr());
+// Serialize is sync
+file->Serialize();
+if (keyIdx % 2 == 0) {
+  // delete every second flowFile
+  ff_repository->Delete(file->getUUIDStr());
+}
+  }
+  stop = true;
+  // wait for the shutdown thread to start waiting for the worker thread
+  std::this_thread::sleep_for(std::chrono::milliseconds{100});
+};
+
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);

Review comment:
   More like ```REQUIRE(ff_repository->initialize(config))``` imho :)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-29 Thread GitBox


arpadboda commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447269150



##
File path: libminifi/src/FlowFileRecord.cpp
##
@@ -366,7 +366,7 @@ bool FlowFileRecord::DeSerialize(const uint8_t *buffer, 
const int bufferSize) {
 return false;
   }
 
-  if (nullptr == claim_) {

Review comment:
   Yep, the goal would be to *always* have a claim, even if the content is 
empty.
   The reason behind is that we don't have to deal with failing streams and 
claims when handling flowfiles. A good example of this is PutFile or 
PublishKafka, where handling both types of "empty" flowfiles is a pain. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] bbende commented on pull request #4358: NIFI-7578 nifi-toolkit CLI Process Group Create command

2020-06-29 Thread GitBox


bbende commented on pull request #4358:
URL: https://github.com/apache/nifi/pull/4358#issuecomment-651336958


   Running a build inside `nifi-toolkit` with `-Pcontrib-check` shows some 
checkstyle errors that would need to be resolved. For some reason builds 
through Github Actions are not catching these, so everything shows as 
passing; we are looking into that.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7579) Create a GetS3Object Processor

2020-06-29 Thread Mark Payne (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148081#comment-17148081
 ] 

Mark Payne commented on NIFI-7579:
--

I'm not sure that we should introduce another processor just to avoid needing 
to connect a List/Fetch pair of processors. The pattern of GetXYZ is an older 
pattern and most of the newer processors that are responsible for gathering 
files/blobs of data and the like tend to follow the List/Fetch pattern. This 
pattern has proven to provide many advantages over the Get pattern. It allows 
for easy and powerful filtering of data before fetching the data. It separates 
the concerns of listing and maintaining state about what's been seen from 
actually gathering data. It provides a very powerful mechanism for distributing 
the data and processing load across the cluster. It makes it far easier to 
handle flows that are more batch-oriented, with the introduction of NIFI-7476.

I would be a -1 on adding a new processor just to avoid needing to connect an 
upstream List processor. It would mean additional code that must be maintained 
and would lead to confusion for users when trying to determine which Processor 
they need, especially for newer users.

> Create a GetS3Object Processor
> --
>
> Key: NIFI-7579
> URL: https://issues.apache.org/jira/browse/NIFI-7579
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: ArpStorm1
>Assignee: YoungGyu Chun
>Priority: Major
>
> Sometimes the client needs to get only specific object or a subset of objects 
> from its bucket. Now, the only way to do it is using ListS3 Processor and 
> after that using FetchS3Object processor. Creating a GetS3Object processor 
> for such cases can be great 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] MikeThomsen commented on pull request #4336: NIFI-7515 Added 7Zip support to UnpackContent

2020-06-29 Thread GitBox


MikeThomsen commented on pull request #4336:
URL: https://github.com/apache/nifi/pull/4336#issuecomment-651299876


   @r65535 checkin in... any updates? Do you need any help?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-7588) InvokeHTTP ignoring custom parameters when disabling sslcontext

2020-06-29 Thread Jira
Alejandro Fiel Martínez created NIFI-7588:
-

 Summary: InvokeHTTP ignoring custom parameters when disabling 
sslcontext
 Key: NIFI-7588
 URL: https://issues.apache.org/jira/browse/NIFI-7588
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.11.4
 Environment: Amazon Linux
Reporter: Alejandro Fiel Martínez


I have an InvokeHTTP processor with 3 custom parameters to be passed as 
headers. If I add an SSL Context Service and then remove it, the processor 
stops using those 3 parameters and I have to delete and recreate them: they 
are still there, but I see in DEBUG that they are not used in the GET request.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7587) PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers is unstable

2020-06-29 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-7587:
---

happened in Github Actions 
https://github.com/apache/nifi/runs/812306935?check_suite_focus=true

> PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers 
> is unstable
> -
>
> Key: NIFI-7587
> URL: https://issues.apache.org/jira/browse/NIFI-7587
> Project: Apache NiFi
>  Issue Type: Test
>Reporter: Joe Witt
>Priority: Major
> Fix For: 1.12.0
>
>
> Unstable and/or broken test or code
> [ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 22.765 s <<< FAILURE! - in org.apache.nifi.remote.client.PeerSelectorTest
> [ERROR] 
> testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers(org.apache.nifi.remote.client.PeerSelectorTest)
>   Time elapsed: 1.095 s  <<< FAILURE!
> org.codehaus.groovy.runtime.powerassert.PowerAssertionError: 
> assert count >= lowerBound && count <= upperBound
>| |  |  |  | |  |
>5103  |  4900.0 |  5103  |  5100.0
>  true  falsefalse
>   at 
> org.apache.nifi.remote.client.PeerSelectorTest.assertDistributionPercentages(PeerSelectorTest.groovy:147)
>   at 
> org.apache.nifi.remote.client.PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers(PeerSelectorTest.groovy:759)
> [ERROR] Failures: 
> [ERROR]   
> PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers:759->assertDistributionPercentages:147
>  assert count >= lowerBound && count <= upperBound
>| |  |  |  | |  |
>5103  |  4900.0 |  5103  |  5100.0
>  true  falsefalse
> [ERROR] Tests run: 92, Failures: 1, Errors: 0, Skipped: 2
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (default-test) on 
> project nifi-site-to-site-client: There are test failures.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7587) PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers is unstable

2020-06-29 Thread Joe Witt (Jira)
Joe Witt created NIFI-7587:
--

 Summary: 
PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers 
is unstable
 Key: NIFI-7587
 URL: https://issues.apache.org/jira/browse/NIFI-7587
 Project: Apache NiFi
  Issue Type: Test
Reporter: Joe Witt
 Fix For: 1.12.0


Unstable and/or broken test or code

[ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 22.765 
s <<< FAILURE! - in org.apache.nifi.remote.client.PeerSelectorTest
[ERROR] 
testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers(org.apache.nifi.remote.client.PeerSelectorTest)
  Time elapsed: 1.095 s  <<< FAILURE!
org.codehaus.groovy.runtime.powerassert.PowerAssertionError: 
assert count >= lowerBound && count <= upperBound
   | |  |  |  | |  |
   5103  |  4900.0 |  5103  |  5100.0
 true  falsefalse
at 
org.apache.nifi.remote.client.PeerSelectorTest.assertDistributionPercentages(PeerSelectorTest.groovy:147)
at 
org.apache.nifi.remote.client.PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers(PeerSelectorTest.groovy:759)

[ERROR] Failures: 
[ERROR]   
PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers:759->assertDistributionPercentages:147
 assert count >= lowerBound && count <= upperBound
   | |  |  |  | |  |
   5103  |  4900.0 |  5103  |  5100.0
 true  falsefalse
[ERROR] Tests run: 92, Failures: 1, Errors: 0, Skipped: 2
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (default-test) on 
project nifi-site-to-site-client: There are test failures.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #824: MINIFICPP-1256 - Apply fixes recommended by modernize-equals-default clang tidy check

2020-06-29 Thread GitBox


arpadboda closed pull request #824:
URL: https://github.com/apache/nifi-minifi-cpp/pull/824


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #821: MINIFICPP-1251 - Implement and test RetryFlowFile processor

2020-06-29 Thread GitBox


szaszm commented on a change in pull request #821:
URL: https://github.com/apache/nifi-minifi-cpp/pull/821#discussion_r447110192



##
File path: extensions/standard-processors/tests/unit/RetryFlowFileTests.cpp
##
@@ -0,0 +1,221 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_MAIN
+
+#include 
+#include 
+#include 
+#include 
+
+#include "TestBase.h"
+
+#include "processors/GenerateFlowFile.h"
+#include "processors/UpdateAttribute.h"
+#include "processors/RetryFlowFile.h"
+#include "processors/PutFile.h"
+#include "processors/LogAttribute.h"
+#include "utils/file/FileUtils.h"
+#include "utils/OptionalUtils.h"
+#include "utils/TestUtils.h"
+
+namespace {
+using org::apache::nifi::minifi::utils::createTempDir;
+using org::apache::nifi::minifi::utils::optional;
+
+std::vector<std::pair<std::string, std::string>> list_dir_all(const 
std::string& dir, const std::shared_ptr<logging::Logger>& logger, bool 
recursive = true) {
+  return org::apache::nifi::minifi::utils::file::FileUtils::list_dir_all(dir, 
logger, recursive);
+}
+
+class RetryFlowFileTest {
+ public:
+  using Processor = org::apache::nifi::minifi::core::Processor;
+  using GenerateFlowFile = 
org::apache::nifi::minifi::processors::GenerateFlowFile;
+  using UpdateAttribute = 
org::apache::nifi::minifi::processors::UpdateAttribute;
+  using RetryFlowFile = org::apache::nifi::minifi::processors::RetryFlowFile;
+  using PutFile = org::apache::nifi::minifi::processors::PutFile;
+  using LogAttribute = org::apache::nifi::minifi::processors::LogAttribute;
+  RetryFlowFileTest() :
+logTestController_(LogTestController::getInstance()),
+
logger_(logging::LoggerFactory<RetryFlowFileTest>::getLogger())
 {
+reInitialize();
+  }
+  virtual ~RetryFlowFileTest() {
+logTestController_.reset();
+  }
+
+ protected:
+  void reInitialize() {
+testController_.reset(new TestController());
+plan_ = testController_->createPlan();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+  }
+

Review comment:
   There are no rules against ASCII art that I know of. I previously 
expressed that I didn't like them personally, but at the end of the discussion 
the conclusion was that they provide value to the reader, therefore they are 
useful.
   
   I am just a contributor to the project and I happened to contribute many 
reviews lately, because I believe ensuring that no easily preventable issues 
are committed, and the code is of the highest quality obtainable with relative 
ease is usually the most efficient use of my time. I believe it provides more 
value on the long term by enabling contributors (myself and others) to work 
with the code more effectively.
   
   Having said that, my opinions and review suggestions are my attempt at being 
helpful, not rules. I think they become de jure rules when someone proposes it 
and the community agrees. On the other hand, they may easily become de facto 
"rules", that is, if I (or anybody) made a suggestion that improved code in 
some way, then I would expect the same contributors to try to submit an already 
"improved" version of similar code in their future contributions. I usually 
consider de facto "rules" conventions rather than rules.
   
   I'm writing this because I don't want you or any contributors to be 
discouraged by my past reviews.
   
   tl;dr: my comments are not rules





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #821: MINIFICPP-1251 - Implement and test RetryFlowFile processor

2020-06-29 Thread GitBox


szaszm commented on a change in pull request #821:
URL: https://github.com/apache/nifi-minifi-cpp/pull/821#discussion_r447110192



##
File path: extensions/standard-processors/tests/unit/RetryFlowFileTests.cpp
##
@@ -0,0 +1,221 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_MAIN
+
+#include 
+#include 
+#include 
+#include 
+
+#include "TestBase.h"
+
+#include "processors/GenerateFlowFile.h"
+#include "processors/UpdateAttribute.h"
+#include "processors/RetryFlowFile.h"
+#include "processors/PutFile.h"
+#include "processors/LogAttribute.h"
+#include "utils/file/FileUtils.h"
+#include "utils/OptionalUtils.h"
+#include "utils/TestUtils.h"
+
+namespace {
+using org::apache::nifi::minifi::utils::createTempDir;
+using org::apache::nifi::minifi::utils::optional;
+
+std::vector> list_dir_all(const 
std::string& dir, const std::shared_ptr& logger, bool 
recursive = true) {
+  return org::apache::nifi::minifi::utils::file::FileUtils::list_dir_all(dir, 
logger, recursive);
+}
+
+class RetryFlowFileTest {
+ public:
+  using Processor = org::apache::nifi::minifi::core::Processor;
+  using GenerateFlowFile = 
org::apache::nifi::minifi::processors::GenerateFlowFile;
+  using UpdateAttribute = 
org::apache::nifi::minifi::processors::UpdateAttribute;
+  using RetryFlowFile = org::apache::nifi::minifi::processors::RetryFlowFile;
+  using PutFile = org::apache::nifi::minifi::processors::PutFile;
+  using LogAttribute = org::apache::nifi::minifi::processors::LogAttribute;
+  RetryFlowFileTest() :
+logTestController_(LogTestController::getInstance()),
+
logger_(logging::LoggerFactory::getLogger())
 {
+reInitialize();
+  }
+  virtual ~RetryFlowFileTest() {
+logTestController_.reset();
+  }
+
+ protected:
+  void reInitialize() {
+testController_.reset(new TestController());
+plan_ = testController_->createPlan();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+  }
+

Review comment:
   There are no rules against ASCII art that I know of. I previously 
expressed that I didn't like them personally, but at the end of the discussion 
the conclusion was that they provide value to the reader, therefore they are 
useful.
   
   I am just a contributor to the project and I happened to contribute many 
reviews lately, because I believe ensuring that no easily preventable issues 
are submitted, and the code is of the highest quality obtainable with relative 
ease is usually the most efficient use of my time. I believe it provides more 
value on the long term by enabling contributors (myself and others) to work 
with the code more effectively.
   
   Having said that, my opinions and review suggestions are my attempt at being 
helpful, not rules. I think they become de jure rules when someone proposes it 
and the community agrees. On the other hand, they may easily become de facto 
"rules", that is, if I (or anybody) made a suggestion that improved code in 
some way, then I would expect the same contributors to try to submit an already 
"improved" version of similar code in their future contributions. I usually 
consider de facto "rules" conventions rather than rules.
   
   I'm writing this because I don't want you or any contributors to be 
discouraged by my past reviews.
   
   tl;dr: my comments are not rules





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7584) LOG OUT button does not work when OpenID Connect is used for authentication

2020-06-29 Thread W Chang (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17147917#comment-17147917
 ] 

W Chang commented on NIFI-7584:
---

Great!  Glad to hear that the issue is currently being investigated.

I expect that when a user logs out, either a NiFi page showing the logout status 
or the OIDC sign-in page is displayed.  Thank you.

> LOG OUT button does not work when OpenID Connect is used for authentication
> ---
>
> Key: NIFI-7584
> URL: https://issues.apache.org/jira/browse/NIFI-7584
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.11.4
> Environment: CentOS Linux 7
>Reporter: W Chang
>Priority: Critical
>  Labels: UI, bug, logout, oidc
>
> When nifi-1.11.4 is integrated with Okta OpenID Connect for authentication, 
> 'LOG OUT' button on the front page does not work.  It does not log a user out 
> properly without redirecting to the Logout Redirect URL.  
> When the button is clicked, the following message is displayed on the browser
> {code:java}
> {"errorCode":"invalid_client","errorSummary":"Invalid value for 'client_id' 
> parameter.","errorLink":"invalid_client","errorId":"oae_YfJRUHCQe-BqYnPw6opFg","errorCauses":[]}{code}
> The button makes a GET request to the following address.
> [https://\{hostname}.okta.com/oauth2/v1/logout?post_logout_redirect_uri=https%3A%2F%2F\{nifi
>  server dns name}%3A\{port 
> number}%2Fnifi-api%2F..%2Fnifi|https://dev-309877.okta.com/oauth2/v1/logout?post_logout_redirect_uri=https%3A%2F%2Fplanet-dl-dev-1.mitre.org%3A9443%2Fnifi-api%2F..%2Fnifi]
> According to Okta document 
> [https://developer.okta.com/blog/2020/03/27/spring-oidc-logout-options,] the 
> logout endpoint format should be as shown below:
> {{[https://dev-123456.okta.com/oauth2/default/v1/logout?id_token_hint=]&post_logout_redirect_uri=[http://localhost:8080/]}}
>  
> {{And it seems that post_logout_redirect_uri should be  "https://\{nifi 
> server dns name}:\{port number}/nifi-api/access/oidc/logout"}}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-29 Thread GitBox


szaszm commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447059548



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, 
MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+ MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, 
utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared(nullptr, nullptr, 
"Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr ff_repository = 
std::make_shared("flowFileRepository");
+
+std::atomic flush_counter{0};
+
+std::atomic stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}
+  ff_repository->stop();
+}};
+
+ff_repository->onFlush_ = [&] {
+  if (++flush_counter != 1) {
+return;
+  }
+  
+  for (int keyIdx = 0; keyIdx < 100; ++keyIdx) {
+auto file = std::make_shared(ff_repository, 
nullptr);
+file->setUuidConnection(connection->getUUIDStr());
+// Serialize is sync
+file->Serialize();
+if (keyIdx % 2 == 0) {
+  // delete every second flowFile
+  ff_repository->Delete(file->getUUIDStr());
+}
+  }
+  stop = true;
+  // wait for the shutdown thread to start waiting for the worker thread
+  std::this_thread::sleep_for(std::chrono::milliseconds{100});
+};
+
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);

Review comment:
   Errors should make the test fail early.
   ```suggestion
   const bool init_success = ff_repository->initialize(config);
   REQUIRE(init_success);
   ```

##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, 
MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+ MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, 
utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared(nullptr, nullptr, 
"Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr ff_repository = 
std::make_shared("flowFileRepository");
+
+std::atomic flush_counter{0};
+
+std::atomic stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}
+  ff_repository->stop();
+}};
+
+ff_repository->onFlush_ = [&] {
+  if (++flush_counter != 1) {
+return;
+  }
+  
+  for (int keyIdx = 0; keyIdx < 100; ++keyIdx) {
+auto file = std::make_shared(ff_repository, 
nullptr);
+file->setUuidConnection(connection->getUUIDStr());
+// Serialize is sync
+file->Serialize();
+if (keyIdx % 2 == 0) {
+  // delete every second flowFile
+  ff_repository->Delete(file->getUUIDStr());
+}
+  }
+  stop = true;
+  // wait for the shutdown thread to start waiting for the worker thread
+  std::this_thread::sleep_for(std::chrono::milliseconds{100});
+};
+
+ff_repository->setConnectionMap(connectionMap)

[GitHub] [nifi] tpalfy opened a new pull request #4369: NIFI-7581 Add CS-based credential settings support for ADLS processors

2020-06-29 Thread GitBox


tpalfy opened a new pull request #4369:
URL: https://github.com/apache/nifi/pull/4369


   https://issues.apache.org/jira/browse/NIFI-7581
   
   Added ControllerService-based credential settings support for ADLS processors
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (MINIFICPP-1276) CAPITests should not leave any trash after running

2020-06-29 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1276:

Priority: Minor  (was: Major)

> CAPITests should not leave any trash after running
> --
>
> Key: MINIFICPP-1276
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1276
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Ádám Markovics
>Assignee: Ádám Markovics
>Priority: Minor
>
> *Background:*
> Running tests on MiNiFi currently leaves untracked files in the working 
> directory.
> {code:bash|title=Test example generating untracked files}
> adam@amarkovics-Linux:~/work/minifi-cpp/build$ git st
>  On branch MINIFICPP-1183
> Your branch is up to date with 'origin/MINIFICPP-1183'.
>  nothing to commit, working tree clean
> adam@amarkovics-Linux:~/work/minifi-cpp/build$ ctest -j24 -R CAPITests
> Test project /home/adam/work/minifi-cpp/build
> Start 30: CAPITests
> 1/1 Test #30: CAPITests    Passed0.01 sec
>  100% tests passed, 0 tests failed out of 1
>  Total Test time (real) =   0.01
> secadam@amarkovics-Linux:~/work/minifi-cpp/build$ git st
> On branch MINIFICPP-1183Your branch is up to date with 
> 'origin/MINIFICPP-1183'.
>  Untracked files:  (use "git add ..." to include in what will be 
> committed)
>  \ ../libminifi/test/1593438783981-9
>  nothing added to commit but untracked files present (use "git add" to track)
> {code}
> *Proposal:*
> There are utility functions available that create temporary directories for 
> all target platforms. Change the tests so that they use this functionality.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7579) Create a GetS3Object Processor

2020-06-29 Thread ArpStorm1 (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17147862#comment-17147862
 ] 

ArpStorm1 commented on NIFI-7579:
-

I don't think this behavior should be merged into the FetchS3Object processor. 
My suggestion is to create a processor that doesn't need an upstream connection 
to work with S3

> Create a GetS3Object Processor
> --
>
> Key: NIFI-7579
> URL: https://issues.apache.org/jira/browse/NIFI-7579
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: ArpStorm1
>Assignee: YoungGyu Chun
>Priority: Major
>
> Sometimes the client needs to get only specific object or a subset of objects 
> from its bucket. Now, the only way to do it is using ListS3 Processor and 
> after that using FetchS3Object processor. Creating a GetS3Object processor 
> for such cases can be great 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] tpalfy opened a new pull request #4368: NIFI-7586 Add socket-level timeout properties for CassandraSessionProvider

2020-06-29 Thread GitBox


tpalfy opened a new pull request #4368:
URL: https://github.com/apache/nifi/pull/4368


   https://issues.apache.org/jira/browse/NIFI-7586
   
   In CassandraSessionProvider, added properties to set socket-level read timeout 
and connect timeout. In QueryCassandra, when writing the flowfile to the session 
it's done on the raw OutputStream. Wrapped it in a BufferedOutputStream for better 
performance.
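
A minimal sketch of the wrapping described above: many small writes land in an 
in-memory buffer instead of hitting the underlying stream one write at a time. 
The class and method names here are illustrative, not the actual QueryCassandra 
code.

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative only: shows the buffering pattern, with a
// ByteArrayOutputStream standing in for the session's raw stream.
public final class BufferedWriteDemo {
    public static byte[] writeRows(int rows) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // 8 KiB buffer; single-byte writes below only reach `sink`
        // when the buffer fills or is flushed.
        OutputStream out = new BufferedOutputStream(sink, 8192);
        for (int i = 0; i < rows; i++) {
            out.write('a' + (i % 26));
        }
        out.flush();  // without flush/close, buffered bytes would be lost
        return sink.toByteArray();
    }
}
```

The same pattern applies to any raw OutputStream: wrap once, write freely, and 
flush (or close) before the stream is handed back.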
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (MINIFICPP-1277) Create cross-platform API for querying memory usage

2020-06-29 Thread Jira
Ádám Markovics created MINIFICPP-1277:
-

 Summary: Create cross-platform API for querying memory usage
 Key: MINIFICPP-1277
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1277
 Project: Apache NiFi MiNiFi C++
  Issue Type: New Feature
Reporter: Ádám Markovics
Assignee: Ádám Markovics


We would like to monitor memory usage of MiNiFi process, therefore a 
cross-platform API is a necessity. It should work for Windows, Linux and Mac. 
Also a test is needed.
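
A Linux-only sketch of what such an API could look like; the function name is an 
assumption. A real cross-platform version would add GetProcessMemoryInfo() on 
Windows and task_info() on macOS behind the same interface.

```cpp
#include <cassert>
#include <fstream>
#include <sstream>
#include <string>

// Illustrative sketch: resident set size of the current process,
// read from /proc/self/status. Returns -1 if the field is missing
// (e.g. on a non-Linux platform).
long long getProcessResidentSetBytes() {
  std::ifstream status("/proc/self/status");
  std::string line;
  while (std::getline(status, line)) {
    // The kernel reports VmRSS in kB, e.g. "VmRSS:      1234 kB"
    if (line.rfind("VmRSS:", 0) == 0) {
      std::istringstream fields(line.substr(6));
      long long kb = 0;
      fields >> kb;
      return kb * 1024;
    }
  }
  return -1;
}
```

A test can simply assert that the returned value is positive for the running 
process.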



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7586) Add socket-level timeout properties for CassandraSessionProvider

2020-06-29 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy reassigned NIFI-7586:
-

Assignee: Tamas Palfy

> Add socket-level timeout properties for CassandraSessionProvider
> 
>
> Key: NIFI-7586
> URL: https://issues.apache.org/jira/browse/NIFI-7586
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
>
> The DataStax library used by NiFi to connect to Cassandra would allow the 
> setting of socket level read timeout and connect timeout but NiFi doesn't 
> expose them as properties or any other way.
> The default values are a couple of seconds which is probably enough most of 
> the time but not always.
> We should allow the users to provide their own configuration.
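
For reference, in the DataStax 3.x driver these knobs live on `SocketOptions` 
(`setConnectTimeoutMillis` / `setReadTimeoutMillis`, passed to 
`Cluster.builder().withSocketOptions(...)`). The runnable sketch below uses only 
the plain JDK socket API to illustrate what the two timeouts bound; the class 
name and values are illustrative.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative only. With the DataStax 3.x driver the equivalent
// configuration would be roughly (not compiled here):
//   Cluster.builder()
//       .withSocketOptions(new SocketOptions()
//           .setConnectTimeoutMillis(5000)
//           .setReadTimeoutMillis(12000))
public final class TimeoutDemo {
    public static boolean connectWithTimeout(String host, int port, int connectMillis) {
        try (Socket socket = new Socket()) {
            // The connect timeout bounds the TCP handshake; a read
            // timeout (socket.setSoTimeout) would bound each read.
            socket.connect(new InetSocketAddress(host, port), connectMillis);
            return true;
        } catch (Exception e) {
            return false;  // includes SocketTimeoutException
        }
    }
}
```

Exposing these as NiFi properties would let users raise the defaults for slow 
clusters instead of being stuck with the driver's built-in values.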



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (MINIFICPP-1276) CAPITests should not leave any trash after running

2020-06-29 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1276:

Description: 
*Background:*

Running tests on MiNiFi currently leaves untracked files in the working 
directory.
{code:bash|title=Test example generating untracked files}
adam@amarkovics-Linux:~/work/minifi-cpp/build$ git st
 On branch MINIFICPP-1183

Your branch is up to date with 'origin/MINIFICPP-1183'.
 nothing to commit, working tree clean

adam@amarkovics-Linux:~/work/minifi-cpp/build$ ctest -j24 -R CAPITests

Test project /home/adam/work/minifi-cpp/build

Start 30: CAPITests

1/1 Test #30: CAPITests    Passed0.01 sec
 100% tests passed, 0 tests failed out of 1
 Total Test time (real) =   0.01

secadam@amarkovics-Linux:~/work/minifi-cpp/build$ git st

On branch MINIFICPP-1183Your branch is up to date with 'origin/MINIFICPP-1183'.
 Untracked files:  (use "git add ..." to include in what will be 
committed)
 \ ../libminifi/test/1593438783981-9
 nothing added to commit but untracked files present (use "git add" to track)
{code}
*Proposal:*

There are utility functions available that create temporary directories for all 
target platforms. Change the tests so that they use this functionality.

  was:
*Background:*

Running tests on MiNiFi currently leaves untracked files in the working 
directory.
{code:bash|title=Test example generating untracked files}
adam@amarkovics-Linux:~/work/minifi-cpp/build$ git st
 On branch MINIFICPP-1183

Your branch is up to date with 'origin/MINIFICPP-1183'.
 nothing to commit, working tree clean

adam@amarkovics-Linux:~/work/minifi-cpp/build$ ctest -j24 -R CAPITests

Test project /home/adam/work/minifi-cpp/build

Start 30: CAPITests

1/1 Test #30: CAPITests    Passed0.01 sec
 100% tests passed, 0 tests failed out of 1
 Total Test time (real) =   0.01

secadam@amarkovics-Linux:~/work/minifi-cpp/build$ git st

On branch MINIFICPP-1183Your branch is up to date with 'origin/MINIFICPP-1183'.
 Untracked files:  (use "git add ..." to include in what will be 
committed)
 \ ../libminifi/test/1593438783981-9
 nothing added to commit but untracked files present (use "git add" to track)
{code}
*Proposal:*
 There are utility functions available that create temporary directories for 
all target platforms. Change the tests so that they use this functionality.


> CAPITests should not leave any trash after running
> --
>
> Key: MINIFICPP-1276
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1276
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Ádám Markovics
>Assignee: Ádám Markovics
>Priority: Major
>
> *Background:*
> Running tests on MiNiFi currently leaves untracked files in the working 
> directory.
> {code:bash|title=Test example generating untracked files}
> adam@amarkovics-Linux:~/work/minifi-cpp/build$ git st
>  On branch MINIFICPP-1183
> Your branch is up to date with 'origin/MINIFICPP-1183'.
>  nothing to commit, working tree clean
> adam@amarkovics-Linux:~/work/minifi-cpp/build$ ctest -j24 -R CAPITests
> Test project /home/adam/work/minifi-cpp/build
> Start 30: CAPITests
> 1/1 Test #30: CAPITests    Passed0.01 sec
>  100% tests passed, 0 tests failed out of 1
>  Total Test time (real) =   0.01
> secadam@amarkovics-Linux:~/work/minifi-cpp/build$ git st
> On branch MINIFICPP-1183Your branch is up to date with 
> 'origin/MINIFICPP-1183'.
>  Untracked files:  (use "git add ..." to include in what will be 
> committed)
>  \ ../libminifi/test/1593438783981-9
>  nothing added to commit but untracked files present (use "git add" to track)
> {code}
> *Proposal:*
> There are utility functions available that create temporary directories for 
> all target platforms. Change the tests so that they use this functionality.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (MINIFICPP-1276) CAPITests should not leave any trash after running

2020-06-29 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1276:

Description: 
*Background:*

Running tests on MiNiFi currently leaves untracked files in the working 
directory.
{code:bash|title=Test example generating untracked files}
adam@amarkovics-Linux:~/work/minifi-cpp/build$ git st
 On branch MINIFICPP-1183

Your branch is up to date with 'origin/MINIFICPP-1183'.
 nothing to commit, working tree clean

adam@amarkovics-Linux:~/work/minifi-cpp/build$ ctest -j24 -R CAPITests

Test project /home/adam/work/minifi-cpp/build

Start 30: CAPITests

1/1 Test #30: CAPITests    Passed0.01 sec
 100% tests passed, 0 tests failed out of 1
 Total Test time (real) =   0.01

secadam@amarkovics-Linux:~/work/minifi-cpp/build$ git st

On branch MINIFICPP-1183Your branch is up to date with 'origin/MINIFICPP-1183'.
 Untracked files:  (use "git add ..." to include in what will be 
committed)
 \ ../libminifi/test/1593438783981-9
 nothing added to commit but untracked files present (use "git add" to track)
{code}
*Proposal:*
 There are utility functions available that create temporary directories for 
all target platforms. Change the tests so that they use this functionality.

  was:
{{adam@amarkovics-Linux:~/work/minifi-cpp/build$ git st}}
{{On branch MINIFICPP-1183}}

{{Your branch is up to date with 'origin/MINIFICPP-1183'.}}
{{nothing to commit, working tree clean}}

{{adam@amarkovics-Linux:~/work/minifi-cpp/build$ ctest -j24 -R CAPITests}}

{{Test project /home/adam/work/minifi-cpp/build}}

{{Start 30: CAPITests}}

{{1/1 Test #30: CAPITests    Passed    0.01 sec}}
{{100% tests passed, 0 tests failed out of 1}}
{{Total Test time (real) =   0.01}}

{{secadam@amarkovics-Linux:~/work/minifi-cpp/build$ git st}}

{{On branch MINIFICPP-1183Your branch is up to date with 
'origin/MINIFICPP-1183'.}}
{{Untracked files:  (use "git add ..." to include in what will be 
committed)}}
{{ ../libminifi/test/1593438783981-9}}
{{nothing added to commit but untracked files present (use "git add" to track)}}


> CAPITests should not leave any trash after running
> --
>
> Key: MINIFICPP-1276
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1276
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Ádám Markovics
>Assignee: Ádám Markovics
>Priority: Major
>
> *Background:*
> Running tests on MiNiFi currently leaves untracked files in the working 
> directory.
> {code:bash|title=Test example generating untracked files}
> adam@amarkovics-Linux:~/work/minifi-cpp/build$ git st
>  On branch MINIFICPP-1183
> Your branch is up to date with 'origin/MINIFICPP-1183'.
>  nothing to commit, working tree clean
> adam@amarkovics-Linux:~/work/minifi-cpp/build$ ctest -j24 -R CAPITests
> Test project /home/adam/work/minifi-cpp/build
> Start 30: CAPITests
> 1/1 Test #30: CAPITests    Passed0.01 sec
>  100% tests passed, 0 tests failed out of 1
>  Total Test time (real) =   0.01
> secadam@amarkovics-Linux:~/work/minifi-cpp/build$ git st
> On branch MINIFICPP-1183Your branch is up to date with 
> 'origin/MINIFICPP-1183'.
>  Untracked files:  (use "git add ..." to include in what will be 
> committed)
>  \ ../libminifi/test/1593438783981-9
>  nothing added to commit but untracked files present (use "git add" to track)
> {code}
> *Proposal:*
>  There are utility functions available that create temporary directories for 
> all target platforms. Change the tests so that they use this functionality.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (MINIFICPP-1276) CAPITests should not leave any trash after running

2020-06-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ádám Markovics reassigned MINIFICPP-1276:
-

Assignee: Ádám Markovics

> CAPITests should not leave any trash after running
> --
>
> Key: MINIFICPP-1276
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1276
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Ádám Markovics
>Assignee: Ádám Markovics
>Priority: Major
>
> {{adam@amarkovics-Linux:~/work/minifi-cpp/build$ git st}}
> {{On branch MINIFICPP-1183}}
> {{Your branch is up to date with 'origin/MINIFICPP-1183'.}}
> {{nothing to commit, working tree clean}}
> {{adam@amarkovics-Linux:~/work/minifi-cpp/build$ ctest -j24 -R CAPITests}}
> {{Test project /home/adam/work/minifi-cpp/build}}
> {{Start 30: CAPITests}}
> {{1/1 Test #30: CAPITests    Passed    0.01 sec}}
> {{100% tests passed, 0 tests failed out of 1}}
> {{Total Test time (real) =   0.01}}
> {{secadam@amarkovics-Linux:~/work/minifi-cpp/build$ git st}}
> {{On branch MINIFICPP-1183Your branch is up to date with 
> 'origin/MINIFICPP-1183'.}}
> {{Untracked files:  (use "git add ..." to include in what will be 
> committed)}}
> {{ ../libminifi/test/1593438783981-9}}
> {{nothing added to commit but untracked files present (use "git add" to 
> track)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (MINIFICPP-1276) CAPITests should not leave any trash after running

2020-06-29 Thread Jira
Ádám Markovics created MINIFICPP-1276:
-

 Summary: CAPITests should not leave any trash after running
 Key: MINIFICPP-1276
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1276
 Project: Apache NiFi MiNiFi C++
  Issue Type: Bug
Reporter: Ádám Markovics


{{adam@amarkovics-Linux:~/work/minifi-cpp/build$ git st}}
{{On branch MINIFICPP-1183}}

{{Your branch is up to date with 'origin/MINIFICPP-1183'.}}
{{nothing to commit, working tree clean}}

{{adam@amarkovics-Linux:~/work/minifi-cpp/build$ ctest -j24 -R CAPITests}}

{{Test project /home/adam/work/minifi-cpp/build}}

{{Start 30: CAPITests}}

{{1/1 Test #30: CAPITests    Passed    0.01 sec}}
{{100% tests passed, 0 tests failed out of 1}}
{{Total Test time (real) =   0.01 sec}}

{{adam@amarkovics-Linux:~/work/minifi-cpp/build$ git st}}

{{On branch MINIFICPP-1183}}
{{Your branch is up to date with 'origin/MINIFICPP-1183'.}}
{{Untracked files:  (use "git add ..." to include in what will be 
committed)}}
{{ ../libminifi/test/1593438783981-9}}
{{nothing added to commit but untracked files present (use "git add" to track)}}





[jira] [Created] (NIFI-7586) Add socket-level timeout properties for CassandraSessionProvider

2020-06-29 Thread Tamas Palfy (Jira)
Tamas Palfy created NIFI-7586:
-

 Summary: Add socket-level timeout properties for 
CassandraSessionProvider
 Key: NIFI-7586
 URL: https://issues.apache.org/jira/browse/NIFI-7586
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Tamas Palfy


The DataStax library that NiFi uses to connect to Cassandra supports setting 
socket-level read and connect timeouts, but NiFi does not expose them as 
properties or in any other way.
The default values are a couple of seconds, which is probably enough most of the 
time, but not always.
We should allow users to provide their own configuration.





[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #816: MINIFICPP-1261 - Refactor non-trivial usages of ScopeGuard class

2020-06-29 Thread GitBox


szaszm commented on a change in pull request #816:
URL: https://github.com/apache/nifi-minifi-cpp/pull/816#discussion_r446966421



##
File path: libminifi/include/utils/ScopeGuard.h
##
@@ -33,10 +33,6 @@ namespace utils {
 
 struct ScopeGuard : ::gsl::final_action> {
   using ::gsl::final_action>::final_action;
-
-  void disable() noexcept {
-dismiss();
-  }

Review comment:
   Ok. I didn't think this was harmful, but if you think so, then 
deprecation sounds like a good idea.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #816: MINIFICPP-1261 - Refactor non-trivial usages of ScopeGuard class

2020-06-29 Thread GitBox


szaszm commented on a change in pull request #816:
URL: https://github.com/apache/nifi-minifi-cpp/pull/816#discussion_r446965308



##
File path: libminifi/include/io/tls/TLSSocket.h
##
@@ -80,10 +77,13 @@ class TLSContext : public SocketContext {
   int16_t initialize(bool server_method = false);
 
  private:
+  static void deleteContext(SSL_CTX* ptr) { SSL_CTX_free(ptr); }
+
   std::shared_ptr logger_;
   std::shared_ptr configure_;
   std::shared_ptr ssl_service_;
-  SSL_CTX *ctx;
+  using Context = std::unique_ptr;
+  Context ctx;

Review comment:
   Not a big deal, just my opinion/preference. I prefer to use convenience 
aliases only if they bring a large benefit. This is because in the past I was 
working with a codebase that really overused type aliases and I had no idea 
what's what while reading the code.









[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-29 Thread GitBox


adamdebreceni opened a new pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   







[jira] [Created] (MINIFICPP-1275) Remove deprecated ScopeGuard class

2020-06-29 Thread Adam Hunyadi (Jira)
Adam Hunyadi created MINIFICPP-1275:
---

 Summary: Remove deprecated ScopeGuard class
 Key: MINIFICPP-1275
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1275
 Project: Apache NiFi MiNiFi C++
  Issue Type: Task
Affects Versions: 0.7.0
Reporter: Adam Hunyadi
Assignee: Adam Hunyadi
 Fix For: 1.0.0


*Background:*

ScopeGuard was deprecated and marked for removal in 1.0.0 because equivalent 
functionality is now provided by gsl.

*Proposal:*

On the major version change we should remove this class.





[jira] [Updated] (MINIFICPP-1275) [Due to 1.0.0] Remove deprecated ScopeGuard class

2020-06-29 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1275:

Summary: [Due to 1.0.0] Remove deprecated ScopeGuard class  (was: Remove 
deprecated ScopeGuard class)

> [Due to 1.0.0] Remove deprecated ScopeGuard class
> -
>
> Key: MINIFICPP-1275
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1275
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Task
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Adam Hunyadi
>Priority: Trivial
> Fix For: 1.0.0
>
>
> *Background:*
> ScopeGuard was deprecated and marked for removal in 1.0.0 because equivalent 
> functionality is now provided by gsl.
> *Proposal:*
> On the major version change we should remove this class.





[jira] [Resolved] (MINIFICPP-1271) Disable linter check for [build/include_subdir]

2020-06-29 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi resolved MINIFICPP-1271.
-
Resolution: Done

> Disable linter check for [build/include_subdir]
> ---
>
> Key: MINIFICPP-1271
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1271
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Task
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Adam Hunyadi
>Priority: Trivial
> Fix For: 1.0.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> *Background:*
> Including some of our dependencies produces linter errors, both when they are 
> included in ""-s and in <>-s. For example:
> {code:c++}
> #include "concurrentqueue.h"
> {code}
> {quote}error cpplint Include the directory when naming .h files 
> [build/include_subdir] [4]
> {quote}
> {code:c++}
> #include 
> {code}
> {quote}error cpplint Found C system header after C++ system header. Should 
> be: ExampleHeader.h, c system, c++ system, other. [build/include_order] [4]
> {quote}
> The issue is that the Google style guide expects all dependency headers to be 
> included from subfolders, while our project does not use such a layout.
> *Proposal:*
> As we do not gain much by enforcing a subfolder rule for every header file 
> included, we agreed to disable this linter rule globally on the project.





[jira] [Commented] (NIFI-7579) Create a GetS3Object Processor

2020-06-29 Thread Wouter de Vries (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17147661#comment-17147661
 ] 

Wouter de Vries commented on NIFI-7579:
---

Would that not be solved with the GenerateFlowFile processor?

If that does not solve it, I would still argue that this behavior should be 
merged with the FetchS3Object processor, so that it supports being triggered 
with and without an upstream connection.

> Create a GetS3Object Processor
> --
>
> Key: NIFI-7579
> URL: https://issues.apache.org/jira/browse/NIFI-7579
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: ArpStorm1
>Assignee: YoungGyu Chun
>Priority: Major
>
> Sometimes the client needs to get only a specific object or a subset of objects 
> from its bucket. Currently, the only way to do this is to use the ListS3 
> processor followed by the FetchS3Object processor. Creating a GetS3Object 
> processor for such cases would be great.





[jira] [Assigned] (MINIFICPP-1274) Flow restart could double-spend flowfiles

2020-06-29 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni reassigned MINIFICPP-1274:
-

Assignee: Adam Debreceni

> Flow restart could double-spend flowfiles
> -
>
> Key: MINIFICPP-1274
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1274
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Adam Debreceni
>Assignee: Adam Debreceni
>Priority: Major
>
> Flowfiles are async-deleted from the FlowFileRepository, and no flush happens 
> at shutdown, leaving these marked files in the repository. If we restart 
> the agent, these zombie files can be resurrected and put back into their last 
> connections. This could cause files to be processed multiple times even though 
> we marked them for deletion.
> Solution proposal: flush the FlowFileRepository during shutdown, so all marked 
> files are actually deleted (this won't save us from double-processing 
> flowfiles after a crash).
> (Also make sure that the FlowFileRepository shutdown happens only after no more 
> processors are running.)





[jira] [Comment Edited] (NIFI-7555) LDAP user group provider throws PartialResults Exception when referral strategy = IGNORE

2020-06-29 Thread Melissa Clark (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17147636#comment-17147636
 ] 

Melissa Clark edited comment on NIFI-7555 at 6/29/20, 9:10 AM:
---

Hi [~pvillard] - added to the description


was (Author: mlclark):
Hi [~pvillard]!  Added to the description!

> LDAP user group provider throws PartialResults Exception when referral 
> strategy = IGNORE
> 
>
> Key: NIFI-7555
> URL: https://issues.apache.org/jira/browse/NIFI-7555
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.11.4
> Environment: AWS Instance running Ubuntu 18.04
> LDAPS user group provider
>Reporter: Melissa Clark
>Priority: Major
>
> We are using LDAPS for authentication and for user-group provision.
> At the level I need to search the LDAP tree, two referrals are returned. If I 
> tell Nifi to IGNORE referrals in the "referral strategy" option in 
> authorizers.xml, Nifi throws a "PartialResults" exception and stops.
> This is unexpected as, by saying that referrals should be ignored, I have 
> indicated that I am happy with the partial results.
>  
> {noformat}
> 2020-06-29 08:57:09,052 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setKnoxAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'knoxAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is org.springframework.ldap.PartialResultException: 
> Unprocessed Continuation Reference(s); nested exception is 
> javax.naming.PartialResultException: Unprocessed Continuation Reference(s); 
> remaining name 'dc=myroot, dc=com'
>   at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
>   at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
>   at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
>   at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
>   at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
>   at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
>   at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
>   at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
>   at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
>   at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
>   at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
>   at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
>   at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
>   at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
>   at 
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
>   at 
> org.springf

[jira] [Commented] (NIFI-7555) LDAP user group provider throws PartialResults Exception when referral strategy = IGNORE

2020-06-29 Thread Melissa Clark (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17147636#comment-17147636
 ] 

Melissa Clark commented on NIFI-7555:
-

Hi [~pvillard]!  Added to the description!

> LDAP user group provider throws PartialResults Exception when referral 
> strategy = IGNORE
> 
>
> Key: NIFI-7555
> URL: https://issues.apache.org/jira/browse/NIFI-7555
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.11.4
> Environment: AWS Instance running Ubuntu 18.04
> LDAPS user group provider
>Reporter: Melissa Clark
>Priority: Major
>
> We are using LDAPS for authentication and for user-group provision.
> At the level I need to search the LDAP tree, two referrals are returned. If I 
> tell Nifi to IGNORE referrals in the "referral strategy" option in 
> authorizers.xml, Nifi throws a "PartialResults" exception and stops.
> This is unexpected as, by saying that referrals should be ignored, I have 
> indicated that I am happy with the partial results.
>  
> {noformat}
> 2020-06-29 08:57:09,052 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setKnoxAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'knoxAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is org.springframework.ldap.PartialResultException: 
> Unprocessed Continuation Reference(s); nested exception is 
> javax.naming.PartialResultException: Unprocessed Continuation Reference(s); 
> remaining name 'dc=myroot, dc=com'
>   at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
>   at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
>   at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
>   at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
>   at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
>   at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
>   at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
>   at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
>   at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
>   at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
>   at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
>   at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
>   at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
>   at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
>   at 
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
>   at 
> org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107)
>   at 
> org.eclips

[jira] [Updated] (NIFI-7555) LDAP user group provider throws PartialResults Exception when referral strategy = IGNORE

2020-06-29 Thread Melissa Clark (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Melissa Clark updated NIFI-7555:

Description: 
We are using LDAPS for authentication and for user-group provision.

At the level I need to search the LDAP tree, two referrals are returned. If I 
tell Nifi to IGNORE referrals in the "referral strategy" option in 
authorizers.xml, Nifi throws a "PartialResults" exception and stops.

This is unexpected as, by saying that referrals should be ignored, I have 
indicated that I am happy with the partial results.

 
{noformat}
2020-06-29 08:57:09,052 ERROR [main] o.s.web.context.ContextLoader Context 
initialization failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 
'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
 Unsatisfied dependency expressed through method 
'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
org.springframework.beans.factory.BeanExpressionException: Expression parsing 
failed; nested exception is 
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': 
Unsatisfied dependency expressed through method 'setKnoxAuthenticationProvider' 
parameter 0; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'knoxAuthenticationProvider' defined in class path resource 
[nifi-web-security-context.xml]: Cannot resolve reference to bean 'authorizer' 
while setting constructor argument; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'authorizer': FactoryBean threw exception on object creation; nested 
exception is org.springframework.ldap.PartialResultException: Unprocessed 
Continuation Reference(s); nested exception is 
javax.naming.PartialResultException: Unprocessed Continuation Reference(s); 
remaining name 'dc=myroot, dc=com'
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
at 
org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
at 
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at 
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at 
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
at 
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
at 
org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
at 
org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
at 
org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107)
at 
org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:959)
at 
org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:553)
at 
org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:924)
at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:365)
at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:854)
at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletCo

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-29 Thread GitBox


adamdebreceni commented on a change in pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#discussion_r446873735



##
File path: libminifi/include/utils/ValueParser.h
##
@@ -0,0 +1,183 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBMINIFI_INCLUDE_UTILS_VALUEPARSER_H_
+#define LIBMINIFI_INCLUDE_UTILS_VALUEPARSER_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "PropertyErrors.h"
+#include "GeneralUtils.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+namespace internal {
+
+class ValueParser {
+ private:
+  template
+  struct is_non_narrowing_convertible : std::false_type {
+static_assert(std::is_integral::value && 
std::is_integral::value, "Checks only integral values");
+  };
+
+  template
+  struct is_non_narrowing_convertible()})>> : std::true_type {
+static_assert(std::is_integral::value && 
std::is_integral::value, "Checks only integral values");
+  };

Review comment:
   done









[GitHub] [nifi-minifi-cpp] adam-markovics commented on a change in pull request #820: MINIFICPP-1183 - Cleanup C2 Update tests

2020-06-29 Thread GitBox


adam-markovics commented on a change in pull request #820:
URL: https://github.com/apache/nifi-minifi-cpp/pull/820#discussion_r446872903



##
File path: extensions/http-curl/tests/C2UpdateTest.cpp
##
@@ -16,174 +16,24 @@
  * limitations under the License.
  */
 
-#include 
 #undef NDEBUG
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include "HTTPClient.h"
-#include "InvokeHTTP.h"
-#include "TestBase.h"
-#include "utils/StringUtils.h"
-#include "core/Core.h"
-#include "core/logging/Logger.h"
-#include "core/ProcessGroup.h"
-#include "core/yaml/YamlConfiguration.h"
-#include "FlowController.h"
-#include "properties/Configure.h"
-#include "unit/ProvenanceTestHelper.h"
-#include "io/StreamFactory.h"
-#include "c2/C2Agent.h"
-#include "CivetServer.h"
-#include 
-#include "protocols/RESTSender.h"
-
-void waitToVerifyProcessor() {
-  std::this_thread::sleep_for(std::chrono::seconds(10));
-}
-
-static std::vector responses;
-
-class ConfigHandler : public CivetHandler {
- public:
-  ConfigHandler() {
-calls_ = 0;
-  }
-  bool handlePost(CivetServer *server, struct mg_connection *conn) {
-calls_++;
-if (responses.size() > 0) {
-  std::string top_str = responses.back();
-  responses.pop_back();
-  mg_printf(conn, "HTTP/1.1 200 OK\r\nContent-Type: "
-"text/plain\r\nContent-Length: %lu\r\nConnection: 
close\r\n\r\n",
-top_str.length());
-  mg_printf(conn, "%s", top_str.c_str());
-} else {
-  mg_printf(conn, "HTTP/1.1 500 Internal Server Error\r\n");
-}
-
-return true;
-  }
-
-  bool handleGet(CivetServer *server, struct mg_connection *conn) {
-std::ifstream myfile(test_file_location_.c_str());
-
-if (myfile.is_open()) {
-  std::stringstream buffer;
-  buffer << myfile.rdbuf();
-  std::string str = buffer.str();
-  myfile.close();
-  mg_printf(conn, "HTTP/1.1 200 OK\r\nContent-Type: "
-"text/plain\r\nContent-Length: %lu\r\nConnection: 
close\r\n\r\n",
-str.length());
-  mg_printf(conn, "%s", str.c_str());
-} else {
-  mg_printf(conn, "HTTP/1.1 500 Internal Server Error\r\n");
-}
-
-return true;
-  }
-  std::string test_file_location_;
-  std::atomic calls_;
-};
+#include "HTTPIntegrationBase.h"
+#include "HTTPHandlers.h"
 
 int main(int argc, char **argv) {
-  mg_init_library(0);
-  LogTestController::getInstance().setInfo();
-  LogTestController::getInstance().setDebug();
-  LogTestController::getInstance().setDebug();
-  LogTestController::getInstance().setDebug();
-
-  const char *options[] = { "document_root", ".", "listening_ports", "0", 0 };
-  std::vector cpp_options;
-  for (int i = 0; i < (sizeof(options) / sizeof(options[0]) - 1); i++) {
-cpp_options.push_back(options[i]);
-  }
-
-  CivetServer server(cpp_options);
-
-  std::string port_str = std::to_string(server.getListeningPorts()[0]);
-
-  ConfigHandler h_ex;
-  server.addHandler("/update", h_ex);
-  std::string key_dir, test_file_location;
-  if (argc > 1) {
-h_ex.test_file_location_ = test_file_location = argv[1];
-key_dir = argv[2];
-  }
-  std::string heartbeat_response = "{\"operation\" : 
\"heartbeat\",\"requested_operations\": [  {"
-  "\"operation\" : \"update\", "
-  "\"operationid\" : \"8675309\", "
-  "\"name\": \"configuration\""
-  "}]}";
-
-  responses.push_back(heartbeat_response);
+  const cmd_args args = parse_cmdline_args(argc, argv, "update");
+  C2UpdateHandler handler(args.test_file);
+  VerifyC2Update harness(1);
+  harness.setKeyDir(args.key_dir);
+  harness.setUrl(args.url, &handler);
+  handler.setC2RestResponse(harness.getC2RestUrl(), "configuration");
 
-  std::ifstream myfile(test_file_location.c_str());
-
-  std::string c2_rest_url = "http://localhost:" + port_str + "/update";
-
-  if (myfile.is_open()) {
-std::stringstream buffer;
-buffer << myfile.rdbuf();
-std::string str = buffer.str();
-myfile.close();
-std::string response = "{\"operation\" : 
\"heartbeat\",\"requested_operations\": [  {"
-"\"operation\" : \"update\", "
-"\"operationid\" : \"8675309\", "
-"\"name\": \"configuration\", \"content\": { \"location\": \"" + 
c2_rest_url + "\"}}]}";
-responses.push_back(response);
-  }
-
-  std::shared_ptr configuration = 
std::make_shared();
-
-  configuration->set("nifi.c2.agent.protocol.class", "RESTSender");
-  configuration->set("nifi.c2.enable", "true");
-  configuration->set("nifi.c2.agent.class", "test");
-  configuration->set("nifi.c2.rest.url", c2_rest_url);
-  configuration->set("nifi.c2.agent.heartbeat.period", "1000");
-
-  std::shared_ptr test_repo = 
std::make_shared();
-  std::shared_ptr test_flow_repo = 
std::make_shared();
-
-  configuration->set(minifi::Configure::nifi_flow_configuration_file, 
test_file_location);
-
-  std::shared_ptr stream_factory = 
minifi::io::StreamFactory::g

[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #820: MINIFICPP-1183 - Cleanup C2 Update tests

2020-06-29 Thread GitBox


fgerlits commented on a change in pull request #820:
URL: https://github.com/apache/nifi-minifi-cpp/pull/820#discussion_r446859252



##
File path: extensions/http-curl/tests/C2UpdateTest.cpp
##
@@ -16,174 +16,24 @@
  * limitations under the License.
  */
 
-#include 
 #undef NDEBUG
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include "HTTPClient.h"
-#include "InvokeHTTP.h"
-#include "TestBase.h"
-#include "utils/StringUtils.h"
-#include "core/Core.h"
-#include "core/logging/Logger.h"
-#include "core/ProcessGroup.h"
-#include "core/yaml/YamlConfiguration.h"
-#include "FlowController.h"
-#include "properties/Configure.h"
-#include "unit/ProvenanceTestHelper.h"
-#include "io/StreamFactory.h"
-#include "c2/C2Agent.h"
-#include "CivetServer.h"
-#include 
-#include "protocols/RESTSender.h"
-
-void waitToVerifyProcessor() {
-  std::this_thread::sleep_for(std::chrono::seconds(10));
-}
-
-static std::vector responses;
-
-class ConfigHandler : public CivetHandler {
- public:
-  ConfigHandler() {
-calls_ = 0;
-  }
-  bool handlePost(CivetServer *server, struct mg_connection *conn) {
-calls_++;
-if (responses.size() > 0) {
-  std::string top_str = responses.back();
-  responses.pop_back();
-  mg_printf(conn, "HTTP/1.1 200 OK\r\nContent-Type: "
-"text/plain\r\nContent-Length: %lu\r\nConnection: 
close\r\n\r\n",
-top_str.length());
-  mg_printf(conn, "%s", top_str.c_str());
-} else {
-  mg_printf(conn, "HTTP/1.1 500 Internal Server Error\r\n");
-}
-
-return true;
-  }
-
-  bool handleGet(CivetServer *server, struct mg_connection *conn) {
-std::ifstream myfile(test_file_location_.c_str());
-
-if (myfile.is_open()) {
-  std::stringstream buffer;
-  buffer << myfile.rdbuf();
-  std::string str = buffer.str();
-  myfile.close();
-  mg_printf(conn, "HTTP/1.1 200 OK\r\nContent-Type: "
-"text/plain\r\nContent-Length: %lu\r\nConnection: 
close\r\n\r\n",
-str.length());
-  mg_printf(conn, "%s", str.c_str());
-} else {
-  mg_printf(conn, "HTTP/1.1 500 Internal Server Error\r\n");
-}
-
-return true;
-  }
-  std::string test_file_location_;
-  std::atomic calls_;
-};
+#include "HTTPIntegrationBase.h"
+#include "HTTPHandlers.h"
 
 int main(int argc, char **argv) {
-  mg_init_library(0);
-  LogTestController::getInstance().setInfo();
-  LogTestController::getInstance().setDebug();
-  LogTestController::getInstance().setDebug();
-  LogTestController::getInstance().setDebug();
-
-  const char *options[] = { "document_root", ".", "listening_ports", "0", 0 };
-  std::vector cpp_options;
-  for (int i = 0; i < (sizeof(options) / sizeof(options[0]) - 1); i++) {
-cpp_options.push_back(options[i]);
-  }
-
-  CivetServer server(cpp_options);
-
-  std::string port_str = std::to_string(server.getListeningPorts()[0]);
-
-  ConfigHandler h_ex;
-  server.addHandler("/update", h_ex);
-  std::string key_dir, test_file_location;
-  if (argc > 1) {
-h_ex.test_file_location_ = test_file_location = argv[1];
-key_dir = argv[2];
-  }
-  std::string heartbeat_response = "{\"operation\" : 
\"heartbeat\",\"requested_operations\": [  {"
-  "\"operation\" : \"update\", "
-  "\"operationid\" : \"8675309\", "
-  "\"name\": \"configuration\""
-  "}]}";
-
-  responses.push_back(heartbeat_response);
+  const cmd_args args = parse_cmdline_args(argc, argv, "update");
+  C2UpdateHandler handler(args.test_file);
+  VerifyC2Update harness(1);
+  harness.setKeyDir(args.key_dir);
+  harness.setUrl(args.url, &handler);
+  handler.setC2RestResponse(harness.getC2RestUrl(), "configuration");
 
-  std::ifstream myfile(test_file_location.c_str());
-
-  std::string c2_rest_url = "http://localhost:" + port_str + "/update";
-
-  if (myfile.is_open()) {
-std::stringstream buffer;
-buffer << myfile.rdbuf();
-std::string str = buffer.str();
-myfile.close();
-std::string response = "{\"operation\" : 
\"heartbeat\",\"requested_operations\": [  {"
-"\"operation\" : \"update\", "
-"\"operationid\" : \"8675309\", "
-"\"name\": \"configuration\", \"content\": { \"location\": \"" + 
c2_rest_url + "\"}}]}";
-responses.push_back(response);
-  }
-
-  std::shared_ptr configuration = 
std::make_shared();
-
-  configuration->set("nifi.c2.agent.protocol.class", "RESTSender");
-  configuration->set("nifi.c2.enable", "true");
-  configuration->set("nifi.c2.agent.class", "test");
-  configuration->set("nifi.c2.rest.url", c2_rest_url);
-  configuration->set("nifi.c2.agent.heartbeat.period", "1000");
-
-  std::shared_ptr test_repo = 
std::make_shared();
-  std::shared_ptr test_flow_repo = 
std::make_shared();
-
-  configuration->set(minifi::Configure::nifi_flow_configuration_file, 
test_file_location);
-
-  std::shared_ptr stream_factory = 
minifi::io::StreamFactory::getInst

[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #816: MINIFICPP-1261 - Refactor non-trivial usages of ScopeGuard class

2020-06-29 Thread GitBox


hunyadi-dev commented on a change in pull request #816:
URL: https://github.com/apache/nifi-minifi-cpp/pull/816#discussion_r446841984



##
File path: libminifi/include/io/tls/TLSSocket.h
##
@@ -80,10 +77,13 @@ class TLSContext : public SocketContext {
   int16_t initialize(bool server_method = false);
 
  private:
+  static void deleteContext(SSL_CTX* ptr) { SSL_CTX_free(ptr); }
+
   std::shared_ptr logger_;
   std::shared_ptr configure_;
   std::shared_ptr ssl_service_;
-  SSL_CTX *ctx;
+  using Context = std::unique_ptr;
+  Context ctx;

Review comment:
   I am using this for declaring a local context in `TLSContext::initialize`; that is why I aliased it. I will type it out instead, if you prefer.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
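The review thread above concerns replacing a raw `SSL_CTX*` member with a `std::unique_ptr` that carries a custom deleter. A minimal sketch of that pattern, with a hypothetical `FakeCtx` standing in for `SSL_CTX`/`SSL_CTX_free` (OpenSSL is not assumed here):

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for a C API such as SSL_CTX_new/SSL_CTX_free.
struct FakeCtx { int configured = 0; };
inline int g_free_calls = 0;
inline FakeCtx* fake_ctx_new() { return new FakeCtx(); }
inline void fake_ctx_free(FakeCtx* p) { ++g_free_calls; delete p; }

// Mirrors the diff's shape: a static wrapper gives the deleter a plain
// signature, and the alias makes the owning type read as a single name.
inline void deleteContext(FakeCtx* ptr) { fake_ctx_free(ptr); }
using Context = std::unique_ptr<FakeCtx, decltype(&deleteContext)>;

inline Context makeContext() {
  return Context(fake_ctx_new(), &deleteContext);
}
```

One trade-off of a function-pointer deleter: each `Context` stores the pointer, and the type cannot be default-constructed without supplying a deleter; a stateless functor type with `operator()` avoids both.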




[jira] [Updated] (NIFI-7057) Fix Script Permissions For Docker Integration Test

2020-06-29 Thread Makarov Vasiliy Nicolaevich (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Makarov Vasiliy Nicolaevich updated NIFI-7057:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix Script Permissions For Docker Integration Test
> --
>
> Key: NIFI-7057
> URL: https://issues.apache.org/jira/browse/NIFI-7057
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Assignee: Makarov Vasiliy Nicolaevich
>Priority: Trivial
>
> The following files need to have their permissions updated for the Docker 
> profile to run its tests. I'll add more files if I find them.
> chmod 755 ./nifi-toolkit-assembly/docker/tests/exit-codes.sh
> chmod 755 ./nifi-toolkit/nifi-toolkit-assembly/docker/tests/tls-toolkit.sh
> chmod 755 ./nifi-docker/dockermaven/integration-test.sh
> chmod 755 ./nifi-docker/dockermaven/sh/*.sh
> chmod 755 ./nifi-docker/dockerhub/sh/*.sh



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7057) Fix Script Permissions For Docker Integration Test

2020-06-29 Thread Makarov Vasiliy Nicolaevich (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17147612#comment-17147612
 ] 

Makarov Vasiliy Nicolaevich commented on NIFI-7057:
---

Moving to resolve status.

> Fix Script Permissions For Docker Integration Test
> --
>
> Key: NIFI-7057
> URL: https://issues.apache.org/jira/browse/NIFI-7057
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Shawn Weeks
>Assignee: Makarov Vasiliy Nicolaevich
>Priority: Trivial
>
> The following files need to have their permissions updated for the Docker 
> profile to run its tests. I'll add more files if I find them.
> chmod 755 ./nifi-toolkit-assembly/docker/tests/exit-codes.sh
> chmod 755 ./nifi-toolkit/nifi-toolkit-assembly/docker/tests/tls-toolkit.sh
> chmod 755 ./nifi-docker/dockermaven/integration-test.sh
> chmod 755 ./nifi-docker/dockermaven/sh/*.sh
> chmod 755 ./nifi-docker/dockerhub/sh/*.sh



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #816: MINIFICPP-1261 - Refactor non-trivial usages of ScopeGuard class

2020-06-29 Thread GitBox


hunyadi-dev commented on a change in pull request #816:
URL: https://github.com/apache/nifi-minifi-cpp/pull/816#discussion_r446854170



##
File path: libminifi/src/io/tls/TLSSocket.cpp
##
@@ -120,35 +115,35 @@ int16_t TLSContext::initialize(bool server_method) {
 file.close();
 passphrase = password;
   }
-  SSL_CTX_set_default_passwd_cb(ctx, io::tls::pemPassWordCb);
-  SSL_CTX_set_default_passwd_cb_userdata(ctx, &passphrase);
+  SSL_CTX_set_default_passwd_cb(local_context.get(), 
io::tls::pemPassWordCb);
+  SSL_CTX_set_default_passwd_cb_userdata(local_context.get(), &passphrase);
 }
 
-int retp = SSL_CTX_use_PrivateKey_file(ctx, privatekey.c_str(), 
SSL_FILETYPE_PEM);
+int retp = SSL_CTX_use_PrivateKey_file(local_context.get(), 
privatekey.c_str(), SSL_FILETYPE_PEM);
 if (retp != 1) {
   logger_->log_error("Could not create load private key,%i on %s error : 
%s", retp, privatekey, std::strerror(errno));
   error_value = TLS_ERROR_KEY_ERROR;
   return error_value;
 }
 // verify private key
-if (!SSL_CTX_check_private_key(ctx)) {
+if (!SSL_CTX_check_private_key(local_context.get())) {
   logger_->log_error("Private key does not match the public certificate, 
error : %s", std::strerror(errno));
   error_value = TLS_ERROR_KEY_ERROR;
   return error_value;
 }
 // load CA certificates
 if (ssl_service_ != nullptr || 
configure_->get(Configure::nifi_security_client_ca_certificate, caCertificate)) 
{
-  retp = SSL_CTX_load_verify_locations(ctx, caCertificate.c_str(), 0);
+  retp = SSL_CTX_load_verify_locations(local_context.get(), 
caCertificate.c_str(), 0);
   if (retp == 0) {
 logger_->log_error("Can not load CA certificate, Exiting, error : %s", 
std::strerror(errno));
 error_value = TLS_ERROR_CERT_ERROR;
 return error_value;
   }
 }
 
-logger_->log_debug("Load/Verify Client Certificate OK. for %X and %X", 
this, ctx);
+logger_->log_debug("Load/Verify Client Certificate OK. for %X and %X", 
this, local_context.get());
   }
-  ctxGuard.disable();
+  ctx.swap(local_context);

Review comment:
   Adding manually, as I do not want to split the change into two commits. 
The other suggestion has been applied.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
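The diff above swaps a ScopeGuard-based cleanup for a commit-on-success pattern: every setup step operates on a local owner, and the member only takes ownership once all steps have succeeded. A sketch of the idea, with a hypothetical resource in place of `SSL_CTX` (this is an illustration, not the project's code):

```cpp
#include <cassert>
#include <memory>

struct FakeCtx { bool key_loaded = false; };

class TlsContextSketch {
 public:
  // Returns 0 on success; on failure ctx_ is left untouched and the
  // half-initialized resource is freed when `local` goes out of scope.
  int initialize(bool fail_key_step) {
    std::unique_ptr<FakeCtx> local(new FakeCtx());
    // ... configure the resource step by step ...
    if (fail_key_step) {
      return 1;  // early return, no manual cleanup or guard needed
    }
    local->key_loaded = true;
    ctx_.swap(local);  // commit: member now owns the fully built resource
    return 0;
  }
  const FakeCtx* get() const { return ctx_.get(); }
 private:
  std::unique_ptr<FakeCtx> ctx_;
};
```

Note that `ctx_.swap(local)` also hands any previously held context to `local`, which frees it at end of scope; `ctx_ = std::move(local)` would free the old one immediately instead.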




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #816: MINIFICPP-1261 - Refactor non-trivial usages of ScopeGuard class

2020-06-29 Thread GitBox


hunyadi-dev commented on a change in pull request #816:
URL: https://github.com/apache/nifi-minifi-cpp/pull/816#discussion_r446848748



##
File path: libminifi/include/utils/ScopeGuard.h
##
@@ -33,10 +33,6 @@ namespace utils {
 
 struct ScopeGuard : ::gsl::final_action> {
   using ::gsl::final_action>::final_action;
-
-  void disable() noexcept {
-dismiss();
-  }

Review comment:
   I do not like this pattern being present in the codebase at all. Restored, but marked for deprecation ASAP (to be removed in 1.0).





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
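The removed `disable()` simply forwarded to `dismiss()` on the underlying `gsl::final_action`; the review restores it for compatibility but marks it for deprecation. A standalone sketch of what such a dismissable guard does (MiNiFi's ScopeGuard wraps gsl::final_action; this is an illustration, not the project's class):

```cpp
#include <cassert>
#include <functional>
#include <utility>

class ScopeGuardSketch {
 public:
  explicit ScopeGuardSketch(std::function<void()> cleanup)
      : cleanup_(std::move(cleanup)) {}
  ~ScopeGuardSketch() { if (armed_) cleanup_(); }  // runs unless dismissed
  void dismiss() noexcept { armed_ = false; }      // cancel on the success path
  ScopeGuardSketch(const ScopeGuardSketch&) = delete;
  ScopeGuardSketch& operator=(const ScopeGuardSketch&) = delete;
 private:
  std::function<void()> cleanup_;
  bool armed_ = true;
};
```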




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #816: MINIFICPP-1261 - Refactor non-trivial usages of ScopeGuard class

2020-06-29 Thread GitBox


hunyadi-dev commented on a change in pull request #816:
URL: https://github.com/apache/nifi-minifi-cpp/pull/816#discussion_r446841976



##
File path: libminifi/src/io/tls/TLSSocket.cpp
##
@@ -72,30 +72,25 @@ int16_t TLSContext::initialize(bool server_method) {
   }
   const SSL_METHOD *method;
   method = server_method ? TLSv1_2_server_method() : TLSv1_2_client_method();
-  ctx = SSL_CTX_new(method);
-  if (ctx == nullptr) {
+  Context local_context = Context(SSL_CTX_new(method), deleteContext);
+  if (local_context == nullptr) {
 logger_->log_error("Could not create SSL context, error: %s.", 
std::strerror(errno));
 error_value = TLS_ERROR_CONTEXT;
 return error_value;
   }
 
-  utils::ScopeGuard ctxGuard([this]() {
-SSL_CTX_free(ctx);
-ctx = nullptr;
-  });
-
   if (needClientCert) {
 std::string certificate;
 std::string privatekey;
 std::string passphrase;
 std::string caCertificate;
 
 if (ssl_service_ != nullptr) {
-  if (!ssl_service_->configure_ssl_context(ctx)) {
+  if (!ssl_service_->configure_ssl_context(local_context.get())) {
 error_value = TLS_ERROR_CERT_ERROR;
 return error_value;
   }
-  ctxGuard.disable();
+  ctx.swap(local_context);

Review comment:
   Adding manually, as I do not want to split the change into two commits. 
The other suggestion has been applied.

##
File path: libminifi/include/io/tls/TLSSocket.h
##
@@ -80,10 +77,13 @@ class TLSContext : public SocketContext {
   int16_t initialize(bool server_method = false);
 
  private:
+  static void deleteContext(SSL_CTX* ptr) { SSL_CTX_free(ptr); }
+
   std::shared_ptr logger_;
   std::shared_ptr configure_;
   std::shared_ptr ssl_service_;
-  SSL_CTX *ctx;
+  using Context = std::unique_ptr;
+  Context ctx;

Review comment:
   Adding manually, as I do not want to split the change into two commits. 
The other suggestion has been applied.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #824: MINIFICPP-1256 - Apply fixes recommended by modernize-equals-default clang tidy check

2020-06-29 Thread GitBox


hunyadi-dev commented on a change in pull request #824:
URL: https://github.com/apache/nifi-minifi-cpp/pull/824#discussion_r446839898



##
File path: libminifi/include/core/state/UpdatePolicy.h
##
@@ -69,11 +69,9 @@ class UpdatePolicy {
   }
 
  protected:
-  explicit UpdatePolicy(const UpdatePolicy &other)
-  : enable_all_(other.enable_all_), properties_(other.properties_) {
-  }
+  UpdatePolicy(const UpdatePolicy &other) = default;
 
-  explicit UpdatePolicy(const UpdatePolicy &&other)
+  UpdatePolicy(UpdatePolicy &&other)

Review comment:
   Changed the move constructor and the move assignment as well.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
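The change above applies modernize-equals-default: a constructor that only copies or moves each member can be defaulted, and dropping `explicit` allows ordinary copy-initialization. A sketch on a hypothetical `Policy` type (not the project's `UpdatePolicy`):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

class Policy {
 public:
  Policy(bool enable_all, std::vector<std::string> props)
      : enable_all_(enable_all), properties_(std::move(props)) {}

  // Memberwise copy/move: let the compiler generate them. The original move
  // constructor took `const Policy&&`, which can only copy its members; the
  // defaulted one takes `Policy&&` and can actually move.
  Policy(const Policy&) = default;
  Policy(Policy&&) noexcept = default;
  Policy& operator=(const Policy&) = default;
  Policy& operator=(Policy&&) noexcept = default;

  bool enableAll() const { return enable_all_; }
  const std::vector<std::string>& properties() const { return properties_; }

 private:
  bool enable_all_;
  std::vector<std::string> properties_;
};
```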




[jira] [Updated] (MINIFICPP-1273) Clear connections on flow update

2020-06-29 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni updated MINIFICPP-1273:
--
Issue Type: Bug  (was: Improvement)

> Clear connections on flow update
> 
>
> Key: MINIFICPP-1273
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1273
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Adam Debreceni
>Priority: Major
>
> Upon a flow update, all the previous connections and processors remain in 
> memory as they circularly reference each other through shared_ptr-s. This 
> wouldn't cause much of a problem as they are in the order of kb-s, but the 
> flowfiles in these connections are also leaked which could be a significant 
> sum. 
> We should drain the connections on flow shutdown. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (MINIFICPP-1274) Flow restart could double-spend flowfiles

2020-06-29 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni updated MINIFICPP-1274:
--
Issue Type: Bug  (was: Improvement)

> Flow restart could double-spend flowfiles
> -
>
> Key: MINIFICPP-1274
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1274
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Adam Debreceni
>Priority: Major
>
> Flowfiles are async-deleted from the FlowFileRepository and no flush happens 
> after shutdown, leaving these marked files in the repository. If we restart 
> the agent, this allows these zombie files to be resurrected and be put back 
> into their last connections. This could cause files to be processed multiple 
> times even if we marked them for deletion.
> Solution proposal: flush the FlowFileRepository after shutdown, so all marked 
> files are actually deleted (this won't save us from double-processing 
> flowfiles after a crash)
> (also make sure that the FlowFileRepository shutdown happens after no more 
> processors are running)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (MINIFICPP-1274) Flow restart could double-spend flowfiles

2020-06-29 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni updated MINIFICPP-1274:
--
Description: 
Flowfiles are async-deleted from the FlowFileRepository and no flush happens 
after shutdown, leaving these marked files in the repository. If we restart the 
agent, this allows these zombie files to be resurrected and be put back into 
their last connections. This could cause files to be processed multiple times 
even if we marked them for deletion.

Solution proposal: flush the FlowFileRepository after shutdown, so all marked 
files are actually deleted (this won't save us from double-processing flowfiles 
after a crash)

(also make sure that the FlowFileRepository shutdown happens after no more 
processors are running)

  was:
Flowfiles are async-deleted from the FlowFileRepository and no flush happens 
after shutdown leaving these marked files in the repository. If we restart the 
agent, this allows these zombie files to be resurrected and be put back into 
their last connections. This could cause files to be processed multiple times 
even if we marked them for deletion.

Solution proposal: flush the FlowFileRepository after shutdown, so all marked 
files are actually deleted (this won't save us from double-processing flowfiles 
after a crash)


> Flow restart could double-spend flowfiles
> -
>
> Key: MINIFICPP-1274
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1274
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Adam Debreceni
>Priority: Major
>
> Flowfiles are async-deleted from the FlowFileRepository and no flush happens 
> after shutdown, leaving these marked files in the repository. If we restart 
> the agent, this allows these zombie files to be resurrected and be put back 
> into their last connections. This could cause files to be processed multiple 
> times even if we marked them for deletion.
> Solution proposal: flush the FlowFileRepository after shutdown, so all marked 
> files are actually deleted (this won't save us from double-processing 
> flowfiles after a crash)
> (also make sure that the FlowFileRepository shutdown happens after no more 
> processors are running)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (MINIFICPP-1274) Flow restart could double-spend flowfiles

2020-06-29 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1274:
-

 Summary: Flow restart could double-spend flowfiles
 Key: MINIFICPP-1274
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1274
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Adam Debreceni


Flowfiles are async-deleted from the FlowFileRepository and no flush happens 
after shutdown leaving these marked files in the repository. If we restart the 
agent, this allows these zombie files to be resurrected and be put back into 
their last connections. This could cause files to be processed multiple times 
even if we marked them for deletion.

Solution proposal: flush the FlowFileRepository after shutdown, so all marked 
files are actually deleted (this won't save us from double-processing flowfiles 
after a crash)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7429) Add Status History capabilities for system level metrics

2020-06-29 Thread Simon Bence (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Bence reassigned NIFI-7429:
-

Assignee: Simon Bence

> Add Status History capabilities for system level metrics
> 
>
> Key: NIFI-7429
> URL: https://issues.apache.org/jira/browse/NIFI-7429
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Simon Bence
>Priority: Major
>
> In order to troubleshoot issues it'd be super useful to have the same Status 
> History capabilities for system level metrics that we have for processors. We 
> would need the ability to see the last 24 hours stats for:
>  * garbage collections
>  * repositories sizes
>  * repositories IO access
>  * load average
>  * etc
> At cluster level with min/max/mean as well as at node level.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (MINIFICPP-1273) Clear connections on flow update

2020-06-29 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1273:
-

 Summary: Clear connections on flow update
 Key: MINIFICPP-1273
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1273
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Adam Debreceni


Upon a flow update, all the previous connections and processors remain in memory 
as they circularly reference each other through shared_ptr-s. This wouldn't 
cause much of a problem as they are in the order of kb-s, but the flowfiles in 
these connections are also leaked which could be a significant sum. 

We should drain the connections on flow shutdown. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
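MINIFICPP-1273 describes objects kept alive by circular shared_ptr references. A minimal demonstration of the leak, with hypothetical node types standing in for processors and connections; the Jira proposes draining the connections on shutdown, while `std::weak_ptr` is the generic fix for such cycles shown here:

```cpp
#include <cassert>
#include <memory>

inline int g_alive = 0;

struct StrongNode {  // strong references in both directions form a cycle
  StrongNode() { ++g_alive; }
  ~StrongNode() { --g_alive; }
  std::shared_ptr<StrongNode> peer;
};

struct WeakNode {    // the back-edge is weak, so no cycle forms
  WeakNode() { ++g_alive; }
  ~WeakNode() { --g_alive; }
  std::weak_ptr<WeakNode> peer;
};
```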


[jira] [Created] (MINIFICPP-1272) Graceful shutdown on flow update

2020-06-29 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1272:
-

 Summary: Graceful shutdown on flow update
 Key: MINIFICPP-1272
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1272
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Adam Debreceni


Currently, when a flow is modified, the processors are terminated, and all flowfiles in the connections are lost if the new flow doesn't contain a connection with the same id.

We would like to immediately terminate all producer processors, but keep the others running for a period of time so that the remaining flowfiles can be written to disk or transferred over the network.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)