[jira] [Commented] (NIFI-4898) Remote Process Group in an SSL setup generates Java Exception

2018-02-21 Thread Josef Zahner (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372517#comment-16372517
 ] 

Josef Zahner commented on NIFI-4898:


You were right, I was using a lot of custom NARs/JARs. After removing them it 
turned out that the Splunk JAR (splunk-library-javalogging-1.5.2) caused the 
issue. I was confused because with NiFi 1.4.0 it worked without any issues, or 
at least without error messages in the log; to be honest I haven't checked 
whether Splunk actually received the log messages. I'll now investigate where 
the error comes from. Thanks a lot!
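A minimal diagnostic sketch, assuming the conflicting class is on the NiFi classpath (the class and the Splunk JAR are simply the suspects named above): it prints which JAR actually supplies HttpClientBuilder, since a NoSuchMethodError on setSSLContext() usually means an older httpclient bundled with another library shadows the expected one.

{code:java}
import org.apache.http.impl.client.HttpClientBuilder;

public class ClassOriginCheck {
    public static void main(String[] args) {
        // Prints the JAR that provides HttpClientBuilder on this classpath;
        // if it is not NiFi's own httpclient, that JAR is the likely culprit.
        System.out.println(HttpClientBuilder.class
                .getProtectionDomain()
                .getCodeSource()
                .getLocation());
    }
}
{code}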

> Remote Process Group in an SSL setup generates Java Exception
> 
>
> Key: NIFI-4898
> URL: https://issues.apache.org/jira/browse/NIFI-4898
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
> Environment: NiFi Version 1.5.0
> Java 1.8.0_161-b12
> CentOS Linux release 7.4.1708
>Reporter: Josef Zahner
>Priority: Major
>
> In an SSL secured NiFi setup, no matter whether it is a cluster or not, 
> NiFi creates a Java exception as soon as I create a "Remote Process Group". 
> It doesn't matter which URL I insert (even one which doesn't exist) or whether I 
> choose RAW or HTTP; the error is always the same and occurs as soon as I 
> click "add" on the "Remote Process Group".
> On NiFi 1.4.0 this works without any issues.
> Error:
> {code:java}
> 2018-02-21 10:42:10,006 ERROR [Remote Process Group 
> b7bde0cc-0161-1000-2e7f-3167a78d8386 Thread-1] 
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped 
> abnormally
> java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at java.util.concurrent.FutureTask.report(Unknown Source)
> at java.util.concurrent.FutureTask.get(Unknown Source)
> at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.setupClient(SiteToSiteRestApiClient.java:278)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getHttpClient(SiteToSiteRestApiClient.java:219)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1189)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1237)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.fetchController(SiteToSiteRestApiClient.java:419)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:394)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:361)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:346)
> at 
> org.apache.nifi.remote.StandardRemoteProcessGroup.refreshFlowContents(StandardRemoteProcessGroup.java:842)
> at 
> org.apache.nifi.remote.StandardRemoteProcessGroup.lambda$initialize$0(StandardRemoteProcessGroup.java:193)
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.run(Unknown Source)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
>  Source)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
>  Source)
> ... 3 common frames omitted
> 2018-02-21 10:42:10,009 ERROR [Remote Process Group 
> b7bde0cc-0161-1000-2e7f-3167a78d8386 Thread-1] 
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped 
> abnormally
> java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at java.util.concurrent.FutureTask.report(Unknown Source)
> at java.util.concurrent.FutureTask.get(Unknown Source)
> at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at 
> 

[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371889#comment-16371889
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2478#discussion_r169749383
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,562 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hbase", "scan", "fetch", "get"})
+@CapabilityDescription("Scans and fetches rows from an HBase table. This processor may be used to fetch rows from hbase table by specifying a range of rowkey values (start and/or end ),"
+        + "by time range, by filter expression, or any combination of them. \n"
+        + "Order of records can be controlled by a property Reversed"
+        + "Number of rows retrieved by the processor can be limited.")
+@WritesAttributes({
+        @WritesAttribute(attribute = "hbase.table", description = "The name of the HBase table that the row was fetched from"),
+        @WritesAttribute(attribute = "hbase.resultset", description = "A JSON document/s representing the row/s. This property is only written when a Destination of flowfile-attributes is selected."),
+        @WritesAttribute(attribute = "mime.type", description = "Set to application/json when using a Destination of flowfile-content, not set or modified otherwise"),
+        @WritesAttribute(attribute = "hbase.rows.count", description = "Number of rows in the content of given flow file"),
+        @WritesAttribute(attribute = "scanhbase.results.found", description = "Indicates whether at least one row has been found in given hbase table with provided conditions. "
+                + "Could be null (not present) if transfered to FAILURE")
+})
+public class 

[GitHub] nifi pull request #2478: NIFI-4833 Add scanHBase Processor

2018-02-21 Thread bdesert
Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2478#discussion_r169749383
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,562 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hbase", "scan", "fetch", "get"})
+@CapabilityDescription("Scans and fetches rows from an HBase table. This processor may be used to fetch rows from hbase table by specifying a range of rowkey values (start and/or end ),"
+        + "by time range, by filter expression, or any combination of them. \n"
+        + "Order of records can be controlled by a property Reversed"
+        + "Number of rows retrieved by the processor can be limited.")
+@WritesAttributes({
+        @WritesAttribute(attribute = "hbase.table", description = "The name of the HBase table that the row was fetched from"),
+        @WritesAttribute(attribute = "hbase.resultset", description = "A JSON document/s representing the row/s. This property is only written when a Destination of flowfile-attributes is selected."),
+        @WritesAttribute(attribute = "mime.type", description = "Set to application/json when using a Destination of flowfile-content, not set or modified otherwise"),
+        @WritesAttribute(attribute = "hbase.rows.count", description = "Number of rows in the content of given flow file"),
+        @WritesAttribute(attribute = "scanhbase.results.found", description = "Indicates whether at least one row has been found in given hbase table with provided conditions. "
+                + "Could be null (not present) if transfered to FAILURE")
+})
+public class ScanHBase extends AbstractProcessor {
+    //enhanced regex for columns to allow "-" in column qualifier names
+    static final Pattern COLUMNS_PATTERN = Pattern.compile("\\w+(:(\\w|-)+)?(?:,\\w+(:(\\w|-)+)?)*");
+static 

[GitHub] nifi issue #2443: NIFI-4827 Added support for reading queries from the flowf...

2018-02-21 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2443
  
@mattyb149 Passed the test and it says it's ready.


---


[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread devriesb
Github user devriesb commented on the issue:

https://github.com/apache/nifi/pull/2416
  
@markap14 yes... under this somewhat unusual circumstance, my proposed 
solution would sacrifice data for consistency.  However, if alwaysSync is set 
to false, there isn't a guarantee of no loss anyway.  So we'd really be no worse 
off than we are now, except that the plug getting kicked out of the wall wouldn't 
kill your repo.

Also, I fully agree that an implementation that DOES guarantee no loss of 
CREATEs would be for the best.  My proposal above doesn't address that in any 
way.  It lives inside the "all or nothing" guarantee of the current 
(/previous?) WriteAheadFFR.


---


[GitHub] nifi pull request #2478: NIFI-4833 Add scanHBase Processor

2018-02-21 Thread bdesert
Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2478#discussion_r169732184
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,562 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
--- End diff --

I was thinking about that before. Couldn't really decide, and then I took a 
look at DeleteHBaseRow and FetchHBaseRow and decided to keep it consistent.


---


[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread devriesb
Github user devriesb commented on the issue:

https://github.com/apache/nifi/pull/2416
  
Again, I would submit that it isn't violating the data loss guarantees of 
the original implementation.  Yes, some data could potentially be lost, but 
none that the original guaranteed to keep, and in doing so it prevents corruption... in a 
handful of easily understandable lines that don't fundamentally alter the 
implementation.  

I want to be clear... I'm in no way opposed to improvements.  However, for 
operational data flows there is going to be some trepidation about large 
changes to such a key component... especially one that we've had previous 
issues with.  The smaller / more incremental the changes, and the more time 
allowed for evaluation under load, the more comfortable I would imagine people 
would be.


---


[jira] [Updated] (NIFIREG-147) Add Keycloak authentication method

2018-02-21 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFIREG-147:

Issue Type: Improvement  (was: Bug)

> Add Keycloak authentication method
> --
>
> Key: NIFIREG-147
> URL: https://issues.apache.org/jira/browse/NIFIREG-147
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Gregory Reshetniak
>Priority: Major
>
> Keycloak does implement a lot of related functionality, including groups, 
> users and such. It would be great to have first-class integration available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4775) Allow FlowFile Repository to optionally perform fsync when writing CREATE events but not other events

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371874#comment-16371874
 ] 

ASF GitHub Bot commented on NIFI-4775:
--

Github user devriesb commented on the issue:

https://github.com/apache/nifi/pull/2416
  
I'll grant NIFI-4775 may raise issues with my proposed solution. However, 
there is a problem right now.  My proposed solution addresses the problem right 
now.  Future modification may require adjustments to previous assumptions.  
That, however, is a problem for the future.  

In any case, after doing some experimentation, I'm not sure the current 
version of NIFI-4775 is the correct approach.  And whatever the eventual 
approach is, it may more appropriately be a new implementation (as discussed 
above).  I don't think we should put off correcting current bugs because they 
may complicate potential future features.


> Allow FlowFile Repository to optionally perform fsync when writing CREATE 
> events but not other events
> -
>
> Key: NIFI-4775
> URL: https://issues.apache.org/jira/browse/NIFI-4775
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Priority: Major
>
> Currently, when a FlowFile is written to the FlowFile Repository, the repo 
> can either fsync or not, depending on nifi.properties. We should allow a 
> third option, of fsync only for CREATE events. In this case, if we receive 
> new data from a source we can fsync the update to the FlowFile Repository 
> before ACK'ing the data from the source. This allows us to guarantee data 
> persistence without the overhead of an fsync for every FlowFile Repository 
> update.
> It may make sense, though, to be a bit more selective about when do this. For 
> example if the source is a system that does not allow us to acknowledge the 
> receipt of data, such as a ListenUDP processor, this doesn't really buy us 
> much. In such a case, we could be smart about avoiding the high cost of an 
> fsync. However, for something like GetSFTP where we have to remove the file 
> in order to 'acknowledge receipt' we can ensure that we wait for the fsync 
> before proceeding.
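A minimal sketch of the selective-fsync idea described above, assuming a hypothetical syncOnCreateOnly flag next to the existing alwaysSync behaviour (names are illustrative, not the actual repository code):

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;

class SelectiveSyncSketch {
    private final boolean alwaysSync;        // existing behaviour: fsync every update
    private final boolean syncOnCreateOnly;  // hypothetical third option: fsync only CREATE events

    SelectiveSyncSketch(boolean alwaysSync, boolean syncOnCreateOnly) {
        this.alwaysSync = alwaysSync;
        this.syncOnCreateOnly = syncOnCreateOnly;
    }

    // Force the partition file to disk only when the update type warrants it,
    // so new data can be fsync'ed before ACK'ing the source without paying the
    // fsync cost for every other repository update.
    void maybeSync(FileChannel partitionChannel, boolean isCreateEvent) throws IOException {
        if (alwaysSync || (syncOnCreateOnly && isCreateEvent)) {
            partitionChannel.force(false);
        }
    }
}
{code}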



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4898) Remote Process Group in an SSL setup generates Java Exception

2018-02-21 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371958#comment-16371958
 ] 

Pierre Villard commented on NIFI-4898:
--

Hi [~jzahner], I'm not able to reproduce the issue on my side. Any custom NARs 
or non-default JARs in the /lib directory?

> Remote Process Group in an SSL setup generates Java Exception
> 
>
> Key: NIFI-4898
> URL: https://issues.apache.org/jira/browse/NIFI-4898
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
> Environment: NiFi Version 1.5.0
> Java 1.8.0_161-b12
> CentOS Linux release 7.4.1708
>Reporter: Josef Zahner
>Priority: Major
>
> In an SSL secured NiFi setup, no matter whether it is a cluster or not, 
> NiFi creates a Java exception as soon as I create a "Remote Process Group". 
> It doesn't matter which URL I insert (even one which doesn't exist) or whether I 
> choose RAW or HTTP; the error is always the same and occurs as soon as I 
> click "add" on the "Remote Process Group".
> On NiFi 1.4.0 this works without any issues.
> Error:
> {code:java}
> 2018-02-21 10:42:10,006 ERROR [Remote Process Group 
> b7bde0cc-0161-1000-2e7f-3167a78d8386 Thread-1] 
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped 
> abnormally
> java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at java.util.concurrent.FutureTask.report(Unknown Source)
> at java.util.concurrent.FutureTask.get(Unknown Source)
> at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.setupClient(SiteToSiteRestApiClient.java:278)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getHttpClient(SiteToSiteRestApiClient.java:219)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1189)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1237)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.fetchController(SiteToSiteRestApiClient.java:419)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:394)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:361)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:346)
> at 
> org.apache.nifi.remote.StandardRemoteProcessGroup.refreshFlowContents(StandardRemoteProcessGroup.java:842)
> at 
> org.apache.nifi.remote.StandardRemoteProcessGroup.lambda$initialize$0(StandardRemoteProcessGroup.java:193)
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.run(Unknown Source)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
>  Source)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
>  Source)
> ... 3 common frames omitted
> 2018-02-21 10:42:10,009 ERROR [Remote Process Group 
> b7bde0cc-0161-1000-2e7f-3167a78d8386 Thread-1] 
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped 
> abnormally
> java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at java.util.concurrent.FutureTask.report(Unknown Source)
> at java.util.concurrent.FutureTask.get(Unknown Source)
> at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.setupClient(SiteToSiteRestApiClient.java:278)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getHttpClient(SiteToSiteRestApiClient.java:219)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1189)
> at 
> 

[jira] [Commented] (NIFI-4899) Unable to find valid certification path to requested target

2018-02-21 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371966#comment-16371966
 ] 

Pierre Villard commented on NIFI-4899:
--

This looks like a truststore issue. Is it a cluster setup? How has SSL been 
enabled on the cluster, manually or using the toolkit?

Nevertheless, it's kind of weird that this is happening only once after a NiFi 
restart... [~alopresto] may have an idea about it.
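A minimal sketch of checking what a node's truststore actually trusts, assuming a JKS truststore and illustrative paths (each node's CA chain must be present, otherwise the "PKIX path building failed" error below is expected on replicated requests):

{code:java}
import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class TruststoreCheck {
    public static void main(String[] args) throws Exception {
        String path = args[0];                 // e.g. conf/truststore.jks (illustrative)
        char[] password = args[1].toCharArray();

        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(path)) {
            trustStore.load(in, password);
        }
        // List every trusted entry so missing node/CA certificates stand out.
        for (String alias : Collections.list(trustStore.aliases())) {
            System.out.println(alias + " -> " + trustStore.getCertificate(alias).getType());
        }
    }
}
{code}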

> Unable to find valid certification path to requested target
> ---
>
> Key: NIFI-4899
> URL: https://issues.apache.org/jira/browse/NIFI-4899
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0
> Environment: NiFi Version 1.5.0 
> Java 1.8.0_161-b12 
> CentOS Linux release 7.4.1708
>Reporter: Josef Zahner
>Priority: Minor
>  Labels: certificate, login, ssl
> Attachments: Screen Shot 2018-02-21 at 11.08.13.png
>
>
> In my clustered SSL environment, when I open the web GUI for the first time, enter 
> my login credentials (verified via LDAP) and click "LOG IN", I get 
> the error below:
> !Screen Shot 2018-02-21 at 11.08.13.png!
> {code:java}
> javax.ws.rs.ProcessingException: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
> at 
> org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:284)
> at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:278)
> at 
> org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:753)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
> at org.glassfish.jersey.internal.Errors.process(Errors.java:229)
> at 
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:414)
> at 
> org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:752)
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:661)
> at 
> org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:875)
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.run(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
> at sun.security.ssl.Alerts.getSSLException(Unknown Source)
> at sun.security.ssl.SSLSocketImpl.fatal(Unknown Source)
> at sun.security.ssl.Handshaker.fatalSE(Unknown Source)
> at sun.security.ssl.Handshaker.fatalSE(Unknown Source)
> at sun.security.ssl.ClientHandshaker.serverCertificate(Unknown Source)
> at sun.security.ssl.ClientHandshaker.processMessage(Unknown Source)
> at sun.security.ssl.Handshaker.processLoop(Unknown Source)
> at sun.security.ssl.Handshaker.process_record(Unknown Source)
> at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source)
> at sun.security.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
> at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
> at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
> at sun.net.www.protocol.https.HttpsClient.afterConnect(Unknown Source)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown 
> Source)
> at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(Unknown Source)
> at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
> at java.net.HttpURLConnection.getResponseCode(Unknown Source)
> at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(Unknown 
> Source)
> at 
> org.glassfish.jersey.client.internal.HttpUrlConnector._apply(HttpUrlConnector.java:390)
> at 
> org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:282)
> ... 14 common frames omitted
> Caused by: sun.security.validator.ValidatorException: PKIX path building 
> failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to 
> find valid certification path to requested target
> at sun.security.validator.PKIXValidator.doBuild(Unknown Source)
> at sun.security.validator.PKIXValidator.engineValidate(Unknown Source)
> at 

[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread devriesb
Github user devriesb commented on the issue:

https://github.com/apache/nifi/pull/2416
  
I'm glad there's support for making this opt in.  One point on @joewitt 's 
comment : "The claim of a simple fix being available to close the previous gaps 
doesn't appear to be backed with a suggested implementation though it does look 
like you received a good response to why that wasn't feasible." ... the reasons 
given were reasons why @markap14 's original proposed solution wouldn't work, 
not why another solution might not.  For example (as suggested previously in 
email):

> the immediate fix that I see needing is that when building the 
transaction map[1], we need to keep track of the *lowest* encountered corrupt 
transaction. Then, when iterating over the transaction map to rebuild the repo, 
we need to stop when that transaction is reached...  after that we can't trust 
that there are no missing transactions which could lead to repo corruption or 
incorrect state.

This is the relatively simple fix that I believe would be more appropriate 
to the WriteAheadFlowFileRepository, as it would only prevent a known bug from 
causing corruption, as opposed to a major rewrite.  I never got any response 
suggesting this wouldn't work, rather simply that another solution was 
preferred... which turned out not to be feasible.  At that point, instead of 
circling back and trying to fix the bug, it appears as though the rewrite began.

If there's a reason the author of the original implementation doesn't 
believe my suggestion would work to prevent the observed corruption, I'd be 
happy to take another look.

[1] 
https://github.com/apache/nifi/blob/0f2ac39f69c1a744f151f0d924c9978f6790b7f7/nifi-commons/nifi-write-ahead-log/src/main/java/org/wali/MinimalLockingWriteAheadLog.java#L444
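A minimal sketch of the recovery-stop idea quoted above, assuming a hypothetical transaction map keyed by transaction ID (illustrative names, not the actual MinimalLockingWriteAheadLog code):

{code:java}
import java.util.SortedMap;
import java.util.TreeMap;

class RecoveryStopSketch {
    // Replay recovered transactions in order, but stop at the lowest corrupt one:
    // past that point we can no longer trust that no transactions are missing.
    static <T> SortedMap<Long, T> trustedTransactions(SortedMap<Long, T> transactionMap,
                                                      Long lowestCorruptTransactionId) {
        if (lowestCorruptTransactionId == null) {
            return transactionMap; // nothing corrupt encountered; replay everything
        }
        return new TreeMap<>(transactionMap.headMap(lowestCorruptTransactionId));
    }
}
{code}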



---


[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2416
  
@devriesb I believe the issue with this proposed solution is that if we 
have 256 partitions (the default), for instance, then if one partition is not 
flushed to disk during a power failure, we would end up losing all data written 
to the other 255 partitions. Even if we were to use an fsync() for each CREATE 
event, that data would be lost with the proposed solution. So this could lead 
to quite a bit of data loss even with fsync enabled.


---


[GitHub] nifi issue #2448: NIFI-4838 Added configurable progressive commits to GetMon...

2018-02-21 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2448
  
@mattyb149 I'll rebase this one once #4827 is done. There's too much going 
on in the same processor right now between these and the one just merged.


---


[jira] [Updated] (NIFI-3502) Upgrade D3 to the latest 4.x version

2018-02-21 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-3502:
-
   Resolution: Fixed
Fix Version/s: 1.6.0
   Status: Resolved  (was: Patch Available)

> Upgrade D3 to the latest 4.x version
> 
>
> Key: NIFI-3502
> URL: https://issues.apache.org/jira/browse/NIFI-3502
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Scott Aslan
>Assignee: Matt Gilman
>Priority: Major
> Fix For: 1.6.0
>
>
> The NiFi canvas web application is using version 3.x of the D3 library, which 
> is a major version behind and is due to be upgraded. This will be a bit of an 
> effort as the APIs have changed in 4.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2416
  
I would not consider that circumstance to be unusual, but rather a common 
scenario if power is lost, after NIFI-4775 has been implemented. Given that 
NIFI-4775 was created and that there were no objections, I considered that 
verification that it is intended to be implemented in the future. Once this is 
done, it will guarantee no loss of data (though it would allow loss of 
processing). The proposed solution, however, still results in data loss if 
power is lost, and it also prevents us from implementing NIFI-4775 effectively: 
once NIFI-4775 is implemented, it would provide no real benefit under such a 
solution, as the repository would still throw out those fsync'ed CREATE events 
if another partition was not also fsync'ed.


---


[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2416
  
The proposed solution does address the issue that was raised in NIFI-4774, 
but in doing so introduces a new issue of data loss. This is why I provided the 
solution that I did in the PR, as I believe that it addresses both of these 
issues.

Again, I do not have a problem with making this new solution one that the 
user is able to opt out of, though. If you want to submit a PR that 
incorporates your proposed solution into the MinimalLockingWriteAheadLog, that 
is okay too, and we can review and incorporate that change as well.


---


[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread devriesb
Github user devriesb commented on the issue:

https://github.com/apache/nifi/pull/2416
  
I'll grant NIFI-4775 may raise issues with my proposed solution. However, 
there is a problem right now.  My proposed solution addresses the problem right 
now.  Future modification may require adjustments to previous assumptions.  
That, however, is a problem for the future.  

In any case, after doing some experimentation, I'm not sure the current 
version of NIFI-4775 is the correct approach.  And whatever the eventual 
approach is, it may more appropriately be a new implementation (as discussed 
above).  I don't think we should put off correcting current bugs because they 
may complicate potential future features.


---


[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371808#comment-16371808
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2478#discussion_r169732184
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -0,0 +1,562 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.hbase.io.JsonFullRowSerializer;
+import org.apache.nifi.hbase.io.JsonQualifierAndValueRowSerializer;
+import org.apache.nifi.hbase.io.RowSerializer;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.Tuple;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.regex.Pattern;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
--- End diff --

I was thinking about that before. Couldn't really decide, and then I took a 
look at DeleteHBaseRow and FetchHBaseRow and decided to keep it consistent.


> NIFI-4833 Add ScanHBase processor
> -
>
> Key: NIFI-4833
> URL: https://issues.apache.org/jira/browse/NIFI-4833
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
>
> Add ScanHBase (new) processor to retrieve records from HBase tables.
> Today there are GetHBase and FetchHBaseRow. GetHBase can pull the entire table or 
> only new rows added after the processor started; it also must be scheduled and doesn't 
> support incoming flowfiles. FetchHBaseRow can pull rows with known rowkeys only.
> This processor could provide functionality similar to what could be reached 
> by using hbase shell, defining following properties:
> -scan based on range of row key IDs 
> -scan based on range of time stamps
> -limit number of records pulled
> -use filters
> -reverse rows
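A minimal sketch of the equivalent scan using the plain HBase client API (1.x-era method names; the processor would expose these as properties rather than code):

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.util.Bytes;

class ScanSketch {
    static Scan buildScan() throws IOException {
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row-0001"));        // rowkey range: start
        scan.setStopRow(Bytes.toBytes("row-1000"));         // rowkey range: end
        scan.setTimeRange(1518912000000L, 1519171200000L);  // timestamp range
        scan.setFilter(new PageFilter(100));                // limit number of rows pulled
        scan.setReversed(true);                             // reverse row order
        return scan;
    }
}
{code}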



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4774) FlowFile Repository should write updates to the same FlowFile to the same partition

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371911#comment-16371911
 ] 

ASF GitHub Bot commented on NIFI-4774:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2416
  
The proposed solution does address the issue that was raised in NIFI-4774, 
but in doing so introduces a new issue of data loss. This is why I provided the 
solution that I did in the PR, as I believe that it addresses both of these 
issues.

Again, I do not have a problem with making this new solution one that the 
user is able to opt out of, though. If you want to submit a PR that 
incorporates your proposed solution into the MinimalLockingWriteAheadLog, that 
is okay too, and we can review and incorporate that change as well.


> FlowFile Repository should write updates to the same FlowFile to the same 
> partition
> ---
>
> Key: NIFI-4774
> URL: https://issues.apache.org/jira/browse/NIFI-4774
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.6.0
>
>
> As-is, in the case of power loss or Operating System crash, we could have an 
> update that is lost, and then an update for the same FlowFile that is not 
> lost, because the updates for a given FlowFile can span partitions. If an 
> update were written to Partition 1 and then to Partition 2 and Partition 2 is 
> flushed to disk by the Operating System and then the Operating System crashes 
> or power is lost before Partition 1 is flushed to disk, we could lose the 
> update to Partition 1.
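A minimal sketch of the core idea of the fix, pinning every update for a given FlowFile to one partition by hashing its ID (illustrative names, not the actual repository code):

{code:java}
class PartitionAssignmentSketch {
    private final int numPartitions; // e.g. 256

    PartitionAssignmentSketch(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    // All updates for the same FlowFile land on the same partition, so a partition
    // flushed to disk can no longer hold a newer update while an older update for
    // the same FlowFile sits unflushed in a different partition.
    int partitionFor(String flowFileUuid) {
        return Math.floorMod(flowFileUuid.hashCode(), numPartitions);
    }
}
{code}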



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4827) Make GetMongo able to use flowfiles for queries

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371887#comment-16371887
 ] 

ASF GitHub Bot commented on NIFI-4827:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2443
  
@mattyb149 Passed the test and it says it's ready.


> Make GetMongo able to use flowfiles for queries
> ---
>
> Key: NIFI-4827
> URL: https://issues.apache.org/jira/browse/NIFI-4827
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Minor
>
> GetMongo should be able to retrieve a valid query from the flowfile content 
> or allow the incoming flowfile to provide attributes to power EL statements 
> in the Query configuration field. Using the flowfile body this way would make 
> GetMongo usable in a much more generic way.
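A minimal sketch of the content-driven mode, assuming the standard ProcessSession API and the MongoDB driver's Document parser (illustrative, not the actual patch):

{code:java}
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.bson.Document;

class QueryFromFlowFileSketch {
    // Parse the Mongo query from the incoming FlowFile's content; the other mode
    // described above would instead evaluate EL over the FlowFile's attributes.
    static Document readQuery(ProcessSession session, FlowFile incoming) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        session.exportTo(incoming, bytes);
        return Document.parse(new String(bytes.toByteArray(), StandardCharsets.UTF_8));
    }
}
{code}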



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4775) Allow FlowFile Repository to optionally perform fsync when writing CREATE events but not other events

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371852#comment-16371852
 ] 

ASF GitHub Bot commented on NIFI-4775:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2416
  
I would not consider that circumstance to be unusual, but rather a common 
scenario if power is lost, after NIFI-4775 has been implemented. Given that 
NIFI-4775 was created and that there were no objections, I considered that 
verification that it is intended to be implemented in the future. Once this is 
done, it will guarantee no loss of data (though it would allow loss of 
processing). The proposed solution, however, still results in data loss if 
power is lost, and it also prevents us from implementing NIFI-4775 effectively: 
once NIFI-4775 is implemented, it would provide no real benefit under such a 
solution, as the repository would still throw out those fsync'ed CREATE events 
if another partition was not also fsync'ed.


> Allow FlowFile Repository to optionally perform fsync when writing CREATE 
> events but not other events
> -
>
> Key: NIFI-4775
> URL: https://issues.apache.org/jira/browse/NIFI-4775
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Priority: Major
>
> Currently, when a FlowFile is written to the FlowFile Repository, the repo 
> can either fsync or not, depending on nifi.properties. We should allow a 
> third option, of fsync only for CREATE events. In this case, if we receive 
> new data from a source we can fsync the update to the FlowFile Repository 
> before ACK'ing the data from the source. This allows us to guarantee data 
> persistence without the overhead of an fsync for every FlowFile Repository 
> update.
> It may make sense, though, to be a bit more selective about when do this. For 
> example if the source is a system that does not allow us to acknowledge the 
> receipt of data, such as a ListenUDP processor, this doesn't really buy us 
> much. In such a case, we could be smart about avoiding the high cost of an 
> fsync. However, for something like GetSFTP where we have to remove the file 
> in order to 'acknowledge receipt' we can ensure that we wait for the fsync 
> before proceeding.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2416
  
I am not opposed to providing a mechanism for opting out. I can look into 
creating a new PR that will provide that.


---


[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371819#comment-16371819
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2448
  
@mattyb149 I'll rebase this one once #4827 is done. There's too much going 
on in the same processor right now between these and the one just merged.


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call, because to a user who is 
> pulling a very large data set the effect is that GetMongo looks like it has 
> hung.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231
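A minimal sketch of the progressive-commit idea, assuming a hypothetical "Results Per Commit" property and the standard ProcessSession API (illustrative, not the actual patch):

{code:java}
import java.nio.charset.StandardCharsets;

import com.mongodb.client.MongoCursor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.bson.Document;

class ProgressiveCommitSketch {
    // Emit one FlowFile per document and commit every 'resultsPerCommit' documents,
    // so a long-running query shows progress instead of one commit at the very end.
    static void drain(MongoCursor<Document> cursor, ProcessSession session,
                      Relationship success, int resultsPerCommit) {
        int inBatch = 0;
        while (cursor.hasNext()) {
            byte[] json = cursor.next().toJson().getBytes(StandardCharsets.UTF_8);
            FlowFile out = session.create();
            out = session.write(out, rawOut -> rawOut.write(json));
            session.transfer(out, success);
            if (++inBatch >= resultsPerCommit) {
                session.commit();
                inBatch = 0;
            }
        }
        session.commit();
    }
}
{code}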



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4827) Make GetMongo able to use flowfiles for queries

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371801#comment-16371801
 ] 

ASF GitHub Bot commented on NIFI-4827:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2443
  
@mattyb149 Rebased and pushed. Building now.


> Make GetMongo able to use flowfiles for queries
> ---
>
> Key: NIFI-4827
> URL: https://issues.apache.org/jira/browse/NIFI-4827
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Minor
>
> GetMongo should be able to retrieve a valid query from the flowfile content 
> or allow the incoming flowfile to provide attributes to power EL statements 
> in the Query configuration field. Using the flowfile body this way would make 
> GetMongo usable in a much more generic way.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2443: NIFI-4827 Added support for reading queries from the flowf...

2018-02-21 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2443
  
@mattyb149 Rebased and pushed. Building now.


---


[jira] [Commented] (NIFI-4894) nf-canvas-utils#queryBulletins does not account for possible proxy path

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371788#comment-16371788
 ] 

ASF GitHub Bot commented on NIFI-4894:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2482
  
Thanks @jtstorck! Thanks @mcgilman this has been merged to master.


> nf-canvas-utils#queryBulletins does not account for possible proxy path
> ---
>
> Key: NIFI-4894
> URL: https://issues.apache.org/jira/browse/NIFI-4894
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.6.0
>
>
> The queryBulletins function in nf-canvas-utils does not account for possible 
> proxy paths when querying for bulletins. We cannot, unfortunately, use 
> relative paths here since we need to know the full URL (to know its length) 
> in order to break the query into multiple requests if necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4894) nf-canvas-utils#queryBulletins does not account for possible proxy path

2018-02-21 Thread Scott Aslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-4894:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> nf-canvas-utils#queryBulletins does not account for possible proxy path
> ---
>
> Key: NIFI-4894
> URL: https://issues.apache.org/jira/browse/NIFI-4894
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
>
> The queryBulletins function in nf-canvas-utils does not account for possible 
> proxy paths when querying for bulletins. We cannot, unfortunately, use 
> relative paths here since we need to know the full URL (to know its length) 
> in order to break the query into multiple requests if necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4894) nf-canvas-utils#queryBulletins does not account for possible proxy path

2018-02-21 Thread Scott Aslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-4894:
--
Fix Version/s: 1.6.0

> nf-canvas-utils#queryBulletins does not account for possible proxy path
> ---
>
> Key: NIFI-4894
> URL: https://issues.apache.org/jira/browse/NIFI-4894
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.6.0
>
>
> The queryBulletins function in nf-canvas-utils does not account for possible 
> proxy paths when querying for bulletins. We cannot, unfortunately, use 
> relative paths here since we need to know the full URL (to know its length) 
> in order to break the query into multiple requests if necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4894) nf-canvas-utils#queryBulletins does not account for possible proxy path

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371778#comment-16371778
 ] 

ASF GitHub Bot commented on NIFI-4894:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2482


> nf-canvas-utils#queryBulletins does not account for possible proxy path
> ---
>
> Key: NIFI-4894
> URL: https://issues.apache.org/jira/browse/NIFI-4894
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.6.0
>
>
> The queryBulletins function in nf-canvas-utils does not account for possible 
> proxy paths when querying for bulletins. We cannot, unfortunately, use 
> relative paths here since we need to know the full URL (to know its length) 
> in order to break the query into multiple requests if necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2482: NIFI-4894: Ensuring that any proxy paths are retained when...

2018-02-21 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2482
  
Thanks @jtstorck! Thanks @mcgilman this has been merged to master.


---


[GitHub] nifi pull request #2482: NIFI-4894: Ensuring that any proxy paths are retain...

2018-02-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2482


---


[jira] [Commented] (NIFI-4894) nf-canvas-utils#queryBulletins does not account for possible proxy path

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371772#comment-16371772
 ] 

ASF GitHub Bot commented on NIFI-4894:
--

Github user jtstorck commented on the issue:

https://github.com/apache/nifi/pull/2482
  
@mcgilman @scottyaslan I was able to reproduce the bug in master while 
proxying with Knox by creating a HandleHttpRequest processor with a 
StandardHttpContextMap controller service.  Attempting to enable the service 
through the UI resulted in the 404.  After applying the PR to master and 
restarting, I was able to go through the same steps and successfully enable the 
StandardHttpContextMap controller service.

+1 LGTM!


> nf-canvas-utils#queryBulletins does not account for possible proxy path
> ---
>
> Key: NIFI-4894
> URL: https://issues.apache.org/jira/browse/NIFI-4894
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
>
> The queryBulletins function in nf-canvas-utils does not account for possible 
> proxy paths when querying for bulletins. We cannot, unfortunately, use 
> relative paths here since we need to know the full URL (to know its length) 
> in order to break the query into multiple requests if necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2482: NIFI-4894: Ensuring that any proxy paths are retained when...

2018-02-21 Thread jtstorck
Github user jtstorck commented on the issue:

https://github.com/apache/nifi/pull/2482
  
@mcgilman @scottyaslan I was able to reproduce the bug in master while 
proxying with Knox by creating a HandleHttpRequest processor with a 
StandardHttpContextMap controller service.  Attempting to enable the service 
through the UI resulted in the 404.  After applying the PR to master and 
restarting, I was able to go through the same steps and successfully enable the 
StandardHttpContextMap controller service.

+1 LGTM!


---


[jira] [Updated] (NIFI-4815) Add EL support to ExecuteProcess

2018-02-21 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4815:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add EL support to ExecuteProcess
> 
>
> Key: NIFI-4815
> URL: https://issues.apache.org/jira/browse/NIFI-4815
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.6.0
>
>
> ExecuteProcess does not support EL for 'command' and 'working dir' 
> properties. That would be useful when promoting workflows between 
> environments.
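For context, this is roughly what opting a property into Expression Language looks like in the NiFi 1.x processor API. The descriptor below is a hedged sketch with illustrative names, not the exact code merged for ExecuteProcess:

{code:java}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

public final class ExecuteProcessElSketch {

    // Illustrative descriptor only; the real 'Command' property may differ.
    public static final PropertyDescriptor COMMAND = new PropertyDescriptor.Builder()
            .name("Command")
            .description("External command to run; may reference variables such as ${command.path}")
            .required(true)
            .expressionLanguageSupported(true) // allows ${...} in the property value
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();
}
{code}

At runtime the processor would resolve the value with something like context.getProperty(COMMAND).evaluateAttributeExpressions().getValue(), which is what makes per-environment values (e.g. from the variable registry) usable when promoting a flow.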



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2416
  
+1 to @joewitt's comments on allowing opt-out or selection of the 
implementation. I'm good with this being the default going forward, but I think 
the other impl(s) should be available to users.


---


[jira] [Commented] (NIFI-4815) Add EL support to ExecuteProcess

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371745#comment-16371745
 ] 

ASF GitHub Bot commented on NIFI-4815:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2432


> Add EL support to ExecuteProcess
> 
>
> Key: NIFI-4815
> URL: https://issues.apache.org/jira/browse/NIFI-4815
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.6.0
>
>
> ExecuteProcess does not support EL for 'command' and 'working dir' 
> properties. That would be useful when promoting workflows between 
> environments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4815) Add EL support to ExecuteProcess

2018-02-21 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4815:
---
Fix Version/s: 1.6.0

> Add EL support to ExecuteProcess
> 
>
> Key: NIFI-4815
> URL: https://issues.apache.org/jira/browse/NIFI-4815
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.6.0
>
>
> ExecuteProcess does not support EL for 'command' and 'working dir' 
> properties. That would be useful when promoting workflows between 
> environments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2416
  
It definitely requires considerable testing.  Burgess clearly did testing 
during his code review and had favorable findings.  Mark obviously did as well.  I 
personally spent multiple weeks in long-running, protracted tests on a couple of 
systems, one of which ran uninterrupted at a rate of 120,000 events per second 
(so the WALI update rate was far higher) for 11 days with zero full GCs and 
perfectly stable performance, and in before/after testing with a series of hard 
stops/restarts all behaviors were favorable.  Do you have tests underway with 
problematic findings? I'll state that, based on my experience with this project 
and my own personally verified observations and evaluation, I am very 
comfortable with this change.

It is critical code, so it does help that the author who wrote the previous 
implementation also authored this one, that the problems of the previous one 
were well stated, and that the path to write it was clear.  The fact that the 
changes were significant in terms of total lines/files impacted doesn't, I 
think, reflect the improved simplicity this brings, and as noted it reuses much 
of the core logic of the previous, more complicated approach.

The claim that a simple fix is available to close the previous gaps doesn't 
appear to be backed by a suggested implementation, though it does look like you 
received a good response explaining why that wasn't feasible.

You raised some good points and got a really detailed reply from Mark a 
month ago, after which others such as myself and Burgess stayed active on this 
important item.  So, in that sense, this all seems fine.

Now, having said all of this, I do share your view: I would have preferred 
to see this be something users could opt out of, since the old implementation, 
even with the known potential issue, has been extremely stable for a very long 
time and is relied upon heavily. I think using this new implementation as the 
default, which closes the gap and automatically handles reading partitions in 
the old repository format, is correct, and letting users that are on the old 
one stay on it as they wish is perfectly valid.

I don't think it is right to revert this change.  But I do think it is fair 
to reopen the JIRA and move toward a model allowing opt-out while keeping this 
as the default.  @markap14 do you agree?  @mattyb149 do you?

Thanks


---


[GitHub] nifi pull request #2432: NIFI-4815 - Add EL support to ExecuteProcess

2018-02-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2432


---


[jira] [Commented] (NIFI-4815) Add EL support to ExecuteProcess

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371741#comment-16371741
 ] 

ASF GitHub Bot commented on NIFI-4815:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2432
  
+1 from me as well, everything's working nicely. Thanks for the 
improvement! Merging to master


> Add EL support to ExecuteProcess
> 
>
> Key: NIFI-4815
> URL: https://issues.apache.org/jira/browse/NIFI-4815
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> ExecuteProcess does not support EL for 'command' and 'working dir' 
> properties. That would be useful when promoting workflows between 
> environments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-120) Basic Docker Image

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371742#comment-16371742
 ] 

ASF GitHub Bot commented on NIFIREG-120:


Github user Chaffelson commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/89#discussion_r169721507
  
--- Diff: nifi-registry-docker/dockerhub/README.md ---
@@ -0,0 +1,127 @@
+
+
+# Docker Image Quickstart
+
+## Capabilities
+This image currently supports running in standalone mode either unsecured 
or with user authentication provided through:
+   * [Two-Way SSL with Client 
Certificates](https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#security-configuration)
+   * [Lightweight Directory Access Protocol 
(LDAP)](https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#ldap_identity_provider)
+   
+## Building
+The Docker image can be built using the following command:
+
+. 
~/Projects/nifi-dev/nifi-registry/nifi-registry-docker/dockerhub/DockerBuild.sh
+
+This will attempt to build and tag an image matching the string in 
DockerImage.txt
+
+dockerhub dchaffey$ cat DockerImage.txt
+> apache/nifi-registry:0.1.0
+docker images
+> REPOSITORY             TAG      IMAGE ID        CREATED         SIZE
+> apache/nifi-registry   0.1.0    751428cbf631    15 minutes ago  342MB
+
+**Note**: The default version of NiFi-Registry specified by the Dockerfile 
is typically that of one that is unreleased if working from source.
+To build an image for a prior released version, one can override the 
`NIFI_REGISTRY_VERSION` build-arg with the following command:
+
+docker build --build-arg=NIFI_REGISRTY_VERSION={Desired NiFi-Registry 
Version} -t apache/nifi-registry:latest .
--- End diff --

Sorry I missed the notification of your review @kevdoran 
All your suggestions are perfectly reasonable. I think that the pattern of 
passing environment variables is what is useful here, and expanding it to cover 
other requirements is a good idea.


> Basic Docker Image
> --
>
> Key: NIFIREG-120
> URL: https://issues.apache.org/jira/browse/NIFIREG-120
> Project: NiFi Registry
>  Issue Type: Improvement
>Affects Versions: 0.1.0
>Reporter: Daniel Chaffelson
>Priority: Minor
> Fix For: 0.2.0
>
>
> It would be convenient if NiFi Registry had an integrated Docker image ready 
> for uploading to Dockerhub, similar to the main NiFi Project, for ease of 
> integration testing.
> This could probably be ported, with some changes, from the same approach used 
> in the main NiFi project for continuity.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-registry pull request #89: NIFIREG-120 Basic Docker Image Support

2018-02-21 Thread Chaffelson
Github user Chaffelson commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/89#discussion_r169721507
  
--- Diff: nifi-registry-docker/dockerhub/README.md ---
@@ -0,0 +1,127 @@
+
+
+# Docker Image Quickstart
+
+## Capabilities
+This image currently supports running in standalone mode either unsecured 
or with user authentication provided through:
+   * [Two-Way SSL with Client 
Certificates](https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#security-configuration)
+   * [Lightweight Directory Access Protocol 
(LDAP)](https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#ldap_identity_provider)
+   
+## Building
+The Docker image can be built using the following command:
+
+. 
~/Projects/nifi-dev/nifi-registry/nifi-registry-docker/dockerhub/DockerBuild.sh
+
+This will attempt to build and tag an image matching the string in 
DockerImage.txt
+
+dockerhub dchaffey$ cat DockerImage.txt
+> apache/nifi-registry:0.1.0
+docker images
+> REPOSITORY             TAG      IMAGE ID        CREATED         SIZE
+> apache/nifi-registry   0.1.0    751428cbf631    15 minutes ago  342MB
+
+**Note**: The default version of NiFi-Registry specified by the Dockerfile 
is typically that of one that is unreleased if working from source.
+To build an image for a prior released version, one can override the 
`NIFI_REGISTRY_VERSION` build-arg with the following command:
+
+docker build --build-arg=NIFI_REGISRTY_VERSION={Desired NiFi-Registry 
Version} -t apache/nifi-registry:latest .
--- End diff --

Sorry I missed the notification of your review @kevdoran 
All your suggestions are perfectly reasonable. I think that the pattern of 
passing environment variables is what is useful here, and expanding it to cover 
other requirements is a good idea.


---


[GitHub] nifi issue #2432: NIFI-4815 - Add EL support to ExecuteProcess

2018-02-21 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2432
  
+1 from me as well, everything's working nicely. Thanks for the 
improvement! Merging to master


---


[GitHub] nifi-minifi-cpp pull request #267: MINIFICPP-409 Removing unused constants r...

2018-02-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/267


---


[jira] [Updated] (MINIFICPP-409) Remove unused artifacts from Configuration Listener removal

2018-02-21 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated MINIFICPP-409:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove unused artifacts from Configuration Listener removal
> ---
>
> Key: MINIFICPP-409
> URL: https://issues.apache.org/jira/browse/MINIFICPP-409
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>Priority: Minor
> Fix For: 0.5.0
>
>
> With the introduction of C2 and related efforts, some artifacts were left 
> behind when the Configuration Listener was removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-408) Secure minifi controller

2018-02-21 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri updated MINIFICPP-408:
--
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Secure minifi controller
> 
>
> Key: MINIFICPP-408
> URL: https://issues.apache.org/jira/browse/MINIFICPP-408
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>
> MiNiFi Controller only works with non-secure sockets. Give users the option 
> to secure their instance. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-409) Remove unused artifacts from Configuration Listener removal

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371696#comment-16371696
 ] 

ASF GitHub Bot commented on MINIFICPP-409:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/267


> Remove unused artifacts from Configuration Listener removal
> ---
>
> Key: MINIFICPP-409
> URL: https://issues.apache.org/jira/browse/MINIFICPP-409
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>Priority: Minor
> Fix For: 0.5.0
>
>
> With the introduction of C2 and related efforts, some artifacts were left 
> behind when the Configuration Listener was removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-409) Remove unused artifacts from Configuration Listener removal

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371692#comment-16371692
 ] 

ASF GitHub Bot commented on MINIFICPP-409:
--

Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/267
  
Travis looks good outside of a hiccup on the one OS X run from the 
environment (all tests but the last executed and passed).  
https://travis-ci.org/apache/nifi-minifi-cpp/builds/343876312?utm_source=github_status&utm_medium=notification

Going to merge.


> Remove unused artifacts from Configuration Listener removal
> ---
>
> Key: MINIFICPP-409
> URL: https://issues.apache.org/jira/browse/MINIFICPP-409
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>Priority: Minor
> Fix For: 0.5.0
>
>
> With the introduction of C2 and related efforts, some artifacts were left 
> behind when the Configuration Listener was removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp issue #267: MINIFICPP-409 Removing unused constants relating...

2018-02-21 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/267
  
Travis looks good outside of a hiccup on the one OS X run from the 
environment (all tests but the last executed and passed).  
https://travis-ci.org/apache/nifi-minifi-cpp/builds/343876312?utm_source=github_status&utm_medium=notification

Going to merge.


---


[jira] [Resolved] (MINIFICPP-406) Change StreamFactory to use SSLContextService if one is available.

2018-02-21 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved MINIFICPP-406.
---
   Resolution: Fixed
Fix Version/s: 0.5.0

> Change StreamFactory to use SSLContextService if one is available. 
> ---
>
> Key: MINIFICPP-406
> URL: https://issues.apache.org/jira/browse/MINIFICPP-406
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>
> Currently socket coordination is done through minifi.properties and is 
> unaware of any context service. We should change this to make configuration 
> consistent. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MINIFICPP-406) Change StreamFactory to use SSLContextService if one is available.

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371682#comment-16371682
 ] 

ASF GitHub Bot commented on MINIFICPP-406:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/265


> Change StreamFactory to use SSLContextService if one is available. 
> ---
>
> Key: MINIFICPP-406
> URL: https://issues.apache.org/jira/browse/MINIFICPP-406
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
>
> Currently socket coordination is done through minifi.properties and is 
> unaware of any context service. We should change this to make configuration 
> consistent. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #265: MINIFICPP-406: Ensure that Context Servic...

2018-02-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/265


---


[jira] [Assigned] (NIFI-543) Provide extensions a way to indicate that they can run only on primary node, if clustered

2018-02-21 Thread Sivaprasanna Sethuraman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaprasanna Sethuraman reassigned NIFI-543:


Assignee: Sivaprasanna Sethuraman  (was: Andre F de Miranda)

> Provide extensions a way to indicate that they can run only on primary node, 
> if clustered
> -
>
> Key: NIFI-543
> URL: https://issues.apache.org/jira/browse/NIFI-543
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core Framework, Documentation & Website, Extensions
>Reporter: Mark Payne
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> There are Processors that are known to be problematic if run from multiple 
> nodes simultaneously. These processors should be able to use a 
> @PrimaryNodeOnly annotation (or something similar) to indicate that they can 
> be scheduled to run only on the primary node when running in a cluster.
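A hedged sketch of what the suggested marker annotation could look like; the name is taken from the ticket's wording and is not an existing API at this point:

{code:java}
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation the framework's scheduler could check before allowing a
// component to run on non-primary nodes (hypothetical, per the ticket's proposal).
@Documented
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@interface PrimaryNodeOnly {
}
{code}

A processor that must not run concurrently on several nodes would then simply be declared with @PrimaryNodeOnly, and the framework would restrict its scheduling to the elected primary node when clustered.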



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371675#comment-16371675
 ] 

ASF GitHub Bot commented on NIFI-4839:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2477
  
@bbende I agree with your points: we can keep things as-is for now and have a 
larger effort for documentation with follow-up JIRAs.

@aperepel I *really* believe that online documentation provides much better 
exposure. A lot of people I talk with are completely unaware of the existence 
of the toolkit binaries, and we have a lot of interesting stuff in it.

I think that providing a mechanism with annotations in the code and 
automatic generation of the documentation can address both the online 
documentation and the information in the command's output. That would be the 
best approach IMO.


> Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
> 
>
> Key: NIFI-4839
> URL: https://issues.apache.org/jira/browse/NIFI-4839
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>
> Now that we have NiFi Registry and the ability to import/upgrade flows in 
> NiFi, we should offer a command-line tool to interact with these REST 
> end-points. This could be part of the NiFi Toolkit and would help people potentially 
> automate some of these operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2477: NIFI-4839 Adding CLI to nifi-toolkit

2018-02-21 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2477
  
@bbende I agree with your points: we can keep things as-is for now and have a 
larger effort for documentation with follow-up JIRAs.

@aperepel I *really* believe that online documentation provides much better 
exposure. A lot of people I talk with are completely unaware of the existence 
of the toolkit binaries, and we have a lot of interesting stuff in it.

I think that providing a mechanism with annotations in the code and 
automatic generation of the documentation can address both the online 
documentation and the information in the command's output. That would be the 
best approach IMO.


---


[jira] [Commented] (NIFI-4774) FlowFile Repository should write updates to the same FlowFile to the same partition

2018-02-21 Thread Brandon DeVries (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371636#comment-16371636
 ] 

Brandon DeVries commented on NIFI-4774:
---

-1.  I was under the impression the PR for this was still a work in progress.  
I commented on the PR (https://github.com/apache/nifi/pull/2416).

> FlowFile Repository should write updates to the same FlowFile to the same 
> partition
> ---
>
> Key: NIFI-4774
> URL: https://issues.apache.org/jira/browse/NIFI-4774
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.6.0
>
>
> As-is, in the case of power loss or Operating System crash, we could have an 
> update that is lost, and then an update for the same FlowFile that is not 
> lost, because the updates for a given FlowFile can span partitions. If an 
> update were written to Partition 1 and then to Partition 2 and Partition 2 is 
> flushed to disk by the Operating System and then the Operating System crashes 
> or power is lost before Partition 1 is flushed to disk, we could lose the 
> update to Partition 1.
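A minimal sketch of the idea in the title, assuming each repository update carries the FlowFile's numeric id: mapping the id deterministically to a partition keeps every update for a given FlowFile in the same partition, so a crash can no longer persist a newer update while losing an older one for the same FlowFile.

{code:java}
public class FlowFilePartitioningSketch {

    // Deterministically choose a partition from the FlowFile id
    // (illustrative only, not the actual WALI implementation).
    static int partitionFor(long flowFileId, int partitionCount) {
        return (int) Math.floorMod(flowFileId, (long) partitionCount);
    }
}
{code}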



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4815) Add EL support to ExecuteProcess

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371626#comment-16371626
 ] 

ASF GitHub Bot commented on NIFI-4815:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2432
  
Thanks for the review @MikeThomsen!  I'll give it a try too and merge afterwards


> Add EL support to ExecuteProcess
> 
>
> Key: NIFI-4815
> URL: https://issues.apache.org/jira/browse/NIFI-4815
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> ExecuteProcess does not support EL for 'command' and 'working dir' 
> properties. That would be useful when promoting workflows between 
> environments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2432: NIFI-4815 - Add EL support to ExecuteProcess

2018-02-21 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2432
  
Thanks for the review @MikeThomsen!  I'll give it a try too and merge afterwards


---


[jira] [Updated] (NIFI-4814) Add distinctive attribute to S2S reporting tasks

2018-02-21 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4814:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add distinctive attribute to S2S reporting tasks
> 
>
> Key: NIFI-4814
> URL: https://issues.apache.org/jira/browse/NIFI-4814
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.6.0
>
>
> I'm currently using multiple S2S reporting tasks to send various monitoring 
> data about my workflows. However this forces me to use multiple input ports 
> (one per type of reporting task) as I'm not able to easily distinguish which 
> data comes from which reporting task.
> I'd like to add an attribute "reporting.task.name" set to the name of the 
> reporting task when sending flow files via S2S. This way I can use a single 
> input port and then use a RouteOnAttribute processor to split my data based 
> on the reporting task source. The objective to use a single input port is to 
> reduce the number of threads used for S2S operations.
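As a rough illustration of the request (hypothetical helper, not the merged patch), the reporting task would stamp its own name onto the FlowFiles it sends over Site-to-Site, and the receiving flow can then fan the data back out with RouteOnAttribute:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class S2SReportingAttributeSketch {

    // Build the attribute map for an outgoing Site-to-Site FlowFile. The attribute name
    // "reporting.task.name" is the one proposed in this ticket; the helper is hypothetical.
    static Map<String, String> buildAttributes(String reportingTaskName) {
        Map<String, String> attributes = new HashMap<>();
        attributes.put("reporting.task.name", reportingTaskName);
        return attributes;
    }
}
{code}

Downstream, a RouteOnAttribute property such as ${reporting.task.name:equals('SiteToSiteBulletinReportingTask')} (task name used here only as an example) would split the single input port's data per source task.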



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4814) Add distinctive attribute to S2S reporting tasks

2018-02-21 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4814:
---
Fix Version/s: 1.6.0

> Add distinctive attribute to S2S reporting tasks
> 
>
> Key: NIFI-4814
> URL: https://issues.apache.org/jira/browse/NIFI-4814
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.6.0
>
>
> I'm currently using multiple S2S reporting tasks to send various monitoring 
> data about my workflows. However this forces me to use multiple input ports 
> (one per type of reporting task) as I'm not able to easily distinguish which 
> data comes from which reporting task.
> I'd like to add an attribute "reporting.task.name" set to the name of the 
> reporting task when sending flow files via S2S. This way I can use a single 
> input port and then use a RouteOnAttribute processor to split my data based 
> on the reporting task source. The objective to use a single input port is to 
> reduce the number of threads used for S2S operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4814) Add distinctive attribute to S2S reporting tasks

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371607#comment-16371607
 ] 

ASF GitHub Bot commented on NIFI-4814:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2431


> Add distinctive attribute to S2S reporting tasks
> 
>
> Key: NIFI-4814
> URL: https://issues.apache.org/jira/browse/NIFI-4814
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> I'm currently using multiple S2S reporting tasks to send various monitoring 
> data about my workflows. However this forces me to use multiple input ports 
> (one per type of reporting task) as I'm not able to easily distinguish which 
> data comes from which reporting task.
> I'd like to add an attribute "reporting.task.name" set to the name of the 
> reporting task when sending flow files via S2S. This way I can use a single 
> input port and then use a RouteOnAttribute processor to split my data based 
> on the reporting task source. The objective to use a single input port is to 
> reduce the number of threads used for S2S operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2431: NIFI-4814 - Add distinctive attribute to S2S report...

2018-02-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2431


---


[GitHub] nifi issue #2431: NIFI-4814 - Add distinctive attribute to S2S reporting tas...

2018-02-21 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
+1 LGTM, ran all three reporting tasks and verified the attributes are 
present and correct. Thanks for the improvement! Merging to master


---


[jira] [Commented] (NIFI-4814) Add distinctive attribute to S2S reporting tasks

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371605#comment-16371605
 ] 

ASF GitHub Bot commented on NIFI-4814:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
+1 LGTM, ran all three reporting tasks and verified the attributes are 
present and correct. Thanks for the improvement! Merging to master


> Add distinctive attribute to S2S reporting tasks
> 
>
> Key: NIFI-4814
> URL: https://issues.apache.org/jira/browse/NIFI-4814
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> I'm currently using multiple S2S reporting tasks to send various monitoring 
> data about my workflows. However this forces me to use multiple input ports 
> (one per type of reporting task) as I'm not able to easily distinguish which 
> data comes from which reporting task.
> I'd like to add an attribute "reporting.task.name" set to the name of the 
> reporting task when sending flow files via S2S. This way I can use a single 
> input port and then use a RouteOnAttribute processor to split my data based 
> on the reporting task source. The objective to use a single input port is to 
> reduce the number of threads used for S2S operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2416: NIFI 4774: Provide alternate implementation of Write-Ahead...

2018-02-21 Thread devriesb
Github user devriesb commented on the issue:

https://github.com/apache/nifi/pull/2416
  
So, I was under the impression this was still a WIP.  I am a HUGE -1 on 
this change.  As @markap14 stated above, this is a critical section of code.  
And while the previous version has serious flaws, they are at least somewhat 
known, based on a long period of use.  In other words, we know there are 
problems, but they only bite us every so often.

This major rewrite of a critical piece being essentially forced on users, 
likely without their knowledge unless they are paying close attention, seems 
less than ideal.  This new implementation will need SIGNIFICANT testing before 
it can be trusted to the same degree as the previous one, even with its issues.

I would have greatly preferred, and will still advocate for, making this a 
new implementation vs. a change to the WriteAheadFlowFileRepository (e.g. 
SequentialWriteAheadFlowFileRepository).  Again, there are other issues, but 
avoiding repo corruption in the previous WriteAheadFlowFileRepository would 
have been a reasonably simple fix, not requiring this rewrite.  While the 
rewrite may have other benefits, making it a new implementation (even if you 
were to make it the default...) would give users the opportunity to evaluate 
and decide for themselves when they are ready to move to the new repo, without 
forcing them to postpone an upgrade to 1.6.0, which has other worthwhile changes.

I know critical sections are changed all the time, and that users won't 
always be aware of the changes.  However, the criticality of this section 
combined with its history means I think we should tread a little more lightly.


---


[GitHub] nifi issue #2482: NIFI-4894: Ensuring that any proxy paths are retained when...

2018-02-21 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2482
  
@mcgilman I have reviewed and built these changes and all looks good.


---


[jira] [Commented] (NIFI-4894) nf-canvas-utils#queryBulletins does not account for possible proxy path

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371585#comment-16371585
 ] 

ASF GitHub Bot commented on NIFI-4894:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2482
  
@mcgilman I have reviewed and built these changes and all looks good.


> nf-canvas-utils#queryBulletins does not account for possible proxy path
> ---
>
> Key: NIFI-4894
> URL: https://issues.apache.org/jira/browse/NIFI-4894
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
>
> The queryBulletins function in nf-canvas-utils does not account for possible 
> proxy paths when querying for bulletins. We cannot, unfortunately, use 
> relative paths here since we need to know the full URL (to know its length) 
> in order to break the query into multiple requests if necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371566#comment-16371566
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2478#discussion_r169680760
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/test/java/org/apache/nifi/hbase/TestScanHBase.java
 ---
@@ -0,0 +1,375 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+public class TestScanHBase {
+
+private ScanHBase proc;
+private MockHBaseClientService hBaseClientService;
+private TestRunner runner;
+
+@Before
+public void setup() throws InitializationException {
+proc = new ScanHBase();
+runner = TestRunners.newTestRunner(proc);
+
+hBaseClientService = new MockHBaseClientService();
+runner.addControllerService("hbaseClient", hBaseClientService);
+runner.enableControllerService(hBaseClientService);
+runner.setProperty(ScanHBase.HBASE_CLIENT_SERVICE, "hbaseClient");
+}
+
+@Test
+public void testColumnsValidation() {
+runner.setProperty(ScanHBase.TABLE_NAME, "table1");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1,cf2:cq2,cf3:cq3");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1,cf2:cq1,cf3");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1 cf2,cf3");
+runner.assertNotValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:,cf2,cf3");
+runner.assertNotValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1,");
+runner.assertNotValid();
+}
+
+@Test
+public void testNoIncomingFlowFile() {
+runner.setProperty(ScanHBase.TABLE_NAME, "table1");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+
+runner.run();
+runner.assertTransferCount(ScanHBase.REL_FAILURE, 0);
+runner.assertTransferCount(ScanHBase.REL_SUCCESS, 0);
+runner.assertTransferCount(ScanHBase.REL_ORIGINAL, 0);
+
+Assert.assertEquals(0, hBaseClientService.getNumScans());
+}
+
+@Test
+public void testInvalidTableName() {
+runner.setProperty(ScanHBase.TABLE_NAME, "${hbase.table}");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+
+runner.enqueue("trigger flow file");
+runner.run();
+
+runner.assertTransferCount(ScanHBase.REL_FAILURE, 1);
+runner.assertTransferCount(ScanHBase.REL_SUCCESS, 0);
+runner.assertTransferCount(ScanHBase.REL_ORIGINAL, 0);
+
+Assert.assertEquals(0, hBaseClientService.getNumScans());
+}
+
+@Test
+public void testResultsNotFound() {
+runner.setProperty(ScanHBase.TABLE_NAME, "table1");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+

[GitHub] nifi pull request #2478: NIFI-4833 Add scanHBase Processor

2018-02-21 Thread bdesert
Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2478#discussion_r169680760
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/test/java/org/apache/nifi/hbase/TestScanHBase.java
 ---
@@ -0,0 +1,375 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+public class TestScanHBase {
+
+private ScanHBase proc;
+private MockHBaseClientService hBaseClientService;
+private TestRunner runner;
+
+@Before
+public void setup() throws InitializationException {
+proc = new ScanHBase();
+runner = TestRunners.newTestRunner(proc);
+
+hBaseClientService = new MockHBaseClientService();
+runner.addControllerService("hbaseClient", hBaseClientService);
+runner.enableControllerService(hBaseClientService);
+runner.setProperty(ScanHBase.HBASE_CLIENT_SERVICE, "hbaseClient");
+}
+
+@Test
+public void testColumnsValidation() {
+runner.setProperty(ScanHBase.TABLE_NAME, "table1");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1,cf2:cq2,cf3:cq3");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1,cf2:cq1,cf3");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1 cf2,cf3");
+runner.assertNotValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:,cf2,cf3");
+runner.assertNotValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1,");
+runner.assertNotValid();
+}
+
+@Test
+public void testNoIncomingFlowFile() {
+runner.setProperty(ScanHBase.TABLE_NAME, "table1");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+
+runner.run();
+runner.assertTransferCount(ScanHBase.REL_FAILURE, 0);
+runner.assertTransferCount(ScanHBase.REL_SUCCESS, 0);
+runner.assertTransferCount(ScanHBase.REL_ORIGINAL, 0);
+
+Assert.assertEquals(0, hBaseClientService.getNumScans());
+}
+
+@Test
+public void testInvalidTableName() {
+runner.setProperty(ScanHBase.TABLE_NAME, "${hbase.table}");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+
+runner.enqueue("trigger flow file");
+runner.run();
+
+runner.assertTransferCount(ScanHBase.REL_FAILURE, 1);
+runner.assertTransferCount(ScanHBase.REL_SUCCESS, 0);
+runner.assertTransferCount(ScanHBase.REL_ORIGINAL, 0);
+
+Assert.assertEquals(0, hBaseClientService.getNumScans());
+}
+
+@Test
+public void testResultsNotFound() {
+runner.setProperty(ScanHBase.TABLE_NAME, "table1");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+
+runner.enqueue("trigger flow file");
+runner.run();
+
+runner.assertTransferCount(ScanHBase.REL_FAILURE, 0);
+

[jira] [Commented] (NIFI-4833) NIFI-4833 Add ScanHBase processor

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371558#comment-16371558
 ] 

ASF GitHub Bot commented on NIFI-4833:
--

Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2478#discussion_r169677516
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/test/java/org/apache/nifi/hbase/TestScanHBase.java
 ---
@@ -0,0 +1,375 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+public class TestScanHBase {
+
+private ScanHBase proc;
+private MockHBaseClientService hBaseClientService;
+private TestRunner runner;
+
+@Before
+public void setup() throws InitializationException {
+proc = new ScanHBase();
+runner = TestRunners.newTestRunner(proc);
+
+hBaseClientService = new MockHBaseClientService();
+runner.addControllerService("hbaseClient", hBaseClientService);
+runner.enableControllerService(hBaseClientService);
+runner.setProperty(ScanHBase.HBASE_CLIENT_SERVICE, "hbaseClient");
+}
+
+@Test
+public void testColumnsValidation() {
+runner.setProperty(ScanHBase.TABLE_NAME, "table1");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1,cf2:cq2,cf3:cq3");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1,cf2:cq1,cf3");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1 cf2,cf3");
+runner.assertNotValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:,cf2,cf3");
+runner.assertNotValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1,");
+runner.assertNotValid();
+}
+
+@Test
+public void testNoIncomingFlowFile() {
+runner.setProperty(ScanHBase.TABLE_NAME, "table1");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+
+runner.run();
+runner.assertTransferCount(ScanHBase.REL_FAILURE, 0);
+runner.assertTransferCount(ScanHBase.REL_SUCCESS, 0);
+runner.assertTransferCount(ScanHBase.REL_ORIGINAL, 0);
+
+Assert.assertEquals(0, hBaseClientService.getNumScans());
+}
+
+@Test
+public void testInvalidTableName() {
+runner.setProperty(ScanHBase.TABLE_NAME, "${hbase.table}");
--- End diff --

Not setting a value for "hbase.table" is intentional. This test is for 
failure handling when the expression is invalid (cannot be evaluated). You can 
see that the FlowFile is expected at REL_FAILURE without any scans. If I just 
didn't understand what you meant, please let me know.


> NIFI-4833 Add ScanHBase processor
> -
>
> Key: NIFI-4833
> URL: https://issues.apache.org/jira/browse/NIFI-4833
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
>
> Add ScanHBase (new) processor to retrieve 

[GitHub] nifi pull request #2478: NIFI-4833 Add scanHBase Processor

2018-02-21 Thread bdesert
Github user bdesert commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2478#discussion_r169677516
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/test/java/org/apache/nifi/hbase/TestScanHBase.java
 ---
@@ -0,0 +1,375 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.MockFlowFile;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+public class TestScanHBase {
+
+private ScanHBase proc;
+private MockHBaseClientService hBaseClientService;
+private TestRunner runner;
+
+@Before
+public void setup() throws InitializationException {
+proc = new ScanHBase();
+runner = TestRunners.newTestRunner(proc);
+
+hBaseClientService = new MockHBaseClientService();
+runner.addControllerService("hbaseClient", hBaseClientService);
+runner.enableControllerService(hBaseClientService);
+runner.setProperty(ScanHBase.HBASE_CLIENT_SERVICE, "hbaseClient");
+}
+
+@Test
+public void testColumnsValidation() {
+runner.setProperty(ScanHBase.TABLE_NAME, "table1");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1,cf2:cq2,cf3:cq3");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1,cf2:cq1,cf3");
+runner.assertValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1 cf2,cf3");
+runner.assertNotValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:,cf2,cf3");
+runner.assertNotValid();
+
+runner.setProperty(ScanHBase.COLUMNS, "cf1:cq1,");
+runner.assertNotValid();
+}
+
+@Test
+public void testNoIncomingFlowFile() {
+runner.setProperty(ScanHBase.TABLE_NAME, "table1");
+runner.setProperty(ScanHBase.START_ROW, "row1");
+runner.setProperty(ScanHBase.END_ROW, "row1");
+
+runner.run();
+runner.assertTransferCount(ScanHBase.REL_FAILURE, 0);
+runner.assertTransferCount(ScanHBase.REL_SUCCESS, 0);
+runner.assertTransferCount(ScanHBase.REL_ORIGINAL, 0);
+
+Assert.assertEquals(0, hBaseClientService.getNumScans());
+}
+
+@Test
+public void testInvalidTableName() {
+runner.setProperty(ScanHBase.TABLE_NAME, "${hbase.table}");
--- End diff --

Not setting a value for "hbase.table" is intentional. This test is for 
failure handling when the expression is invalid (cannot be evaluated). You can 
see that the FlowFile is expected at REL_FAILURE without any scans. If I just 
didn't understand what you meant, please let me know.


---


[jira] [Commented] (MINIFICPP-397) Implement RouteOnAttribute

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371554#comment-16371554
 ] 

ASF GitHub Bot commented on MINIFICPP-397:
--

GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/268

MINIFICPP-397 Added implementation of RouteOnAttribute

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [x] Does your PR title start with MINIFI- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-397

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/268.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #268


commit 8328f8e386e6a32253c0fc19c9769aad90d95281
Author: Andrew I. Christianson 
Date:   2018-02-20T16:22:18Z

MINIFICPP-397 Added implementation of RouteOnAttribute




> Implement RouteOnAttribute
> --
>
> Key: MINIFICPP-397
> URL: https://issues.apache.org/jira/browse/MINIFICPP-397
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> RouteOnAttribute is notably missing from MiNiFi - C++ and should be 
> implemented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #268: MINIFICPP-397 Added implementation of Rou...

2018-02-21 Thread achristianson
GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/268

MINIFICPP-397 Added implementation of RouteOnAttribute

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [x] Does your PR title start with MINIFI- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-397

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/268.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #268


commit 8328f8e386e6a32253c0fc19c9769aad90d95281
Author: Andrew I. Christianson 
Date:   2018-02-20T16:22:18Z

MINIFICPP-397 Added implementation of RouteOnAttribute




---


[jira] [Updated] (NIFI-4816) Changes to ReportingTask name are not available to the ReportingTask

2018-02-21 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4816:
---
Fix Version/s: 1.6.0

> Changes to ReportingTask name are not available to the ReportingTask
> 
>
> Key: NIFI-4816
> URL: https://issues.apache.org/jira/browse/NIFI-4816
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The Reporting Task name is only set on the ReportingTask itself during 
> initialize(), which is only called the first time the ReportingTask is 
> instantiated. This means if you change the name of the ReportingTask and 
> restart it, the ReportingTask still has its original name and the current name is 
> inaccessible via the ConfigurationContext passed to it later. If you restart 
> NiFi, the new name is set and stays that way.
> Rather than calling initialize() more than once, it is proposed to make the 
> current name (and any other appropriate properties) available perhaps via 
> ConfigurationContext which is passed to methods annotated with OnScheduled.
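
To make the proposal concrete, here is a minimal sketch of how a ReportingTask 
author could pick up the current name on each restart, assuming the fix exposes it 
through a getName() accessor on ConfigurationContext (that accessor name is an 
assumption based on the proposal above, not a confirmed API):

{code:java}
import org.apache.nifi.annotation.lifecycle.OnScheduled;
import org.apache.nifi.controller.ConfigurationContext;
import org.apache.nifi.reporting.AbstractReportingTask;
import org.apache.nifi.reporting.ReportingContext;

public class NameAwareReportingTask extends AbstractReportingTask {

    private volatile String currentName;

    // Assumption: ConfigurationContext exposes the component's current name,
    // as proposed above, so a restart picks up renames without re-initializing.
    @OnScheduled
    public void captureName(final ConfigurationContext context) {
        this.currentName = context.getName();
    }

    @Override
    public void onTrigger(final ReportingContext context) {
        getLogger().info("Reporting task '{}' triggered", new Object[]{currentName});
    }
}
{code}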



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4816) Changes to ReportingTask name are not available to the ReportingTask

2018-02-21 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4816:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Changes to ReportingTask name are not available to the ReportingTask
> 
>
> Key: NIFI-4816
> URL: https://issues.apache.org/jira/browse/NIFI-4816
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.6.0
>
>
> The Reporting Task name is only set on the ReportingTask itself during 
> initialize(), which is only called the first time the ReportingTask is 
> instantiated. This means if you change the name of the ReportingTask and 
> restart it, the ReportingTask still has its original name and the current name is 
> inaccessible via the ConfigurationContext passed to it later. If you restart 
> NiFi, the new name is set and stays that way.
> Rather than calling initialize() more than once, it is proposed to make the 
> current name (and any other appropriate properties) available perhaps via 
> ConfigurationContext which is passed to methods annotated with OnScheduled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4816) Changes to ReportingTask name are not available to the ReportingTask

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371495#comment-16371495
 ] 

ASF GitHub Bot commented on NIFI-4816:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2452


> Changes to ReportingTask name are not available to the ReportingTask
> 
>
> Key: NIFI-4816
> URL: https://issues.apache.org/jira/browse/NIFI-4816
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> The Reporting Task name is only set on the ReportingTask itself during 
> initialize(), which is only called the first time the ReportingTask is 
> instantiated. This means if you change the name of the ReportingTask and 
> restart it, the ReportingTask still has its original name and the current name is 
> inaccessible via the ConfigurationContext passed to it later. If you restart 
> NiFi, the new name is set and stays that way.
> Rather than calling initialize() more than once, it is proposed to make the 
> current name (and any other appropriate properties) available perhaps via 
> ConfigurationContext which is passed to methods annotated with OnScheduled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2452: NIFI-4816: Allow name to be updated for ReportingTa...

2018-02-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2452


---


[jira] [Updated] (MINIFICPP-410) MQTT should use controller service instead of a manual SSL configuration

2018-02-21 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-410:
-
Summary: MQTT should use controller service instead of a manual SSL 
configuration  (was: MQTT should use controller service instead of a manual SSL 
configuratino)

> MQTT should use controller service instead of a manual SSL configuration
> 
>
> Key: MINIFICPP-410
> URL: https://issues.apache.org/jira/browse/MINIFICPP-410
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-410) MQTT should use controller service instead of

2018-02-21 Thread marco polo (JIRA)
marco polo created MINIFICPP-410:


 Summary: MQTT should use controller service instead of 
 Key: MINIFICPP-410
 URL: https://issues.apache.org/jira/browse/MINIFICPP-410
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: marco polo
 Fix For: 0.5.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-410) MQTT should use controller service instead of a manual SSL configuratino

2018-02-21 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-410:
-
Summary: MQTT should use controller service instead of a manual SSL 
configuratino  (was: MQTT should use controller service instead of a manual SSL 
configuratin)

> MQTT should use controller service instead of a manual SSL configuratino
> 
>
> Key: MINIFICPP-410
> URL: https://issues.apache.org/jira/browse/MINIFICPP-410
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-410) MQTT should use controller service instead of a manual SSL configuratin

2018-02-21 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-410:
-
Summary: MQTT should use controller service instead of a manual SSL 
configuratin  (was: MQTT should use controller service instead of )

> MQTT should use controller service instead of a manual SSL configuratin
> ---
>
> Key: MINIFICPP-410
> URL: https://issues.apache.org/jira/browse/MINIFICPP-410
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371489#comment-16371489
 ] 

ASF GitHub Bot commented on NIFI-4901:
--

GitHub user gardellajuanpablo opened a pull request:

https://github.com/apache/nifi/pull/2486

NIFI-4901 Json to Avro using Record framework does not support union …

…types with boolean

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gardellajuanpablo/nifi OptionalBoolean

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2486.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2486


commit 286b9ac3012c6b9118f947663b7458b493658d87
Author: gardellajuanpablo 
Date:   2018-02-21T14:36:00Z

NIFI-4901 Json to Avro using Record framework does not support union types 
with boolean




> Json to Avro using Record framework does not support union types with boolean
> -
>
> Key: NIFI-4901
> URL: https://issues.apache.org/jira/browse/NIFI-4901
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: optiona-boolean.zip
>
>
> Given the following valid Avro Schema:
> {code}
> {
>"type":"record",
>"name":"foo",
>"fields":[
>   {
>  "name":"isSwap",
>  "type":[
> "boolean",
> "null"
>  ]
>   } 
>]
> }
> {code}
> And the following JSON:
> {code}
> {
>   "isSwap": {
> "boolean": true
>   }
> }
> {code}
> When it is converted to Avro using ConvertRecord, it fails with:
> {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed 
> a JSON object from input but failed to convert into a Record object with the 
> given schema}}
> Attached a repository to reproduce the issue and also included the fix:
> * Run {{mvn clean test}} to reproduce the issue.
> * Run {{mvn clean test -Ppatch}} to test the fix. 
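
For reference, the same schema and JSON decode cleanly with the plain Avro Java 
library, which uses the wrapped union encoding shown above. This is a 
self-contained sketch of the expected behavior, not NiFi's record-reader code:

{code:java}
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.JsonDecoder;

public class UnionBooleanDemo {
    public static void main(final String[] args) throws Exception {
        final String schemaJson =
                "{\"type\":\"record\",\"name\":\"foo\",\"fields\":["
              + "{\"name\":\"isSwap\",\"type\":[\"boolean\",\"null\"]}]}";
        // Avro's JSON encoding wraps union values, exactly as in the report.
        final String json = "{\"isSwap\":{\"boolean\":true}}";

        final Schema schema = new Schema.Parser().parse(schemaJson);
        final JsonDecoder decoder = DecoderFactory.get().jsonDecoder(schema, json);
        final GenericRecord record =
                new GenericDatumReader<GenericRecord>(schema).read(null, decoder);

        System.out.println(record.get("isSwap")); // prints: true
    }
}
{code}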



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2486: NIFI-4901 Json to Avro using Record framework does ...

2018-02-21 Thread gardellajuanpablo
GitHub user gardellajuanpablo opened a pull request:

https://github.com/apache/nifi/pull/2486

NIFI-4901 Json to Avro using Record framework does not support union …

…types with boolean

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gardellajuanpablo/nifi OptionalBoolean

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2486.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2486


commit 286b9ac3012c6b9118f947663b7458b493658d87
Author: gardellajuanpablo 
Date:   2018-02-21T14:36:00Z

NIFI-4901 Json to Avro using Record framework does not support union types 
with boolean




---


[jira] [Updated] (NIFI-4902) ConsumeAMQP and PublishAMQP use a single connection, which results in poor performance

2018-02-21 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4902:
-
Status: Patch Available  (was: Open)

> ConsumeAMQP and PublishAMQP use a single connection, which results in poor 
> performance
> --
>
> Key: NIFI-4902
> URL: https://issues.apache.org/jira/browse/NIFI-4902
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.6.0
>
>
> PublishAMQP and ConsumeAMQP both use a single underlying connection, 
> regardless of how many concurrent tasks are available. As a result, this 
> leads to poor performance when the network latency is not extremely small.
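
A minimal sketch of the "one connection per concurrent task" idea using the 
RabbitMQ Java client; the class and method names are illustrative and this is not 
the processor's actual implementation:

{code:java}
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Each concurrent task borrows its own Connection instead of every task
// sharing a single one, so tasks no longer serialize on one socket.
public class AmqpConnectionPool implements AutoCloseable {

    private final ConnectionFactory factory;
    private final BlockingQueue<Connection> idle = new LinkedBlockingQueue<>();

    public AmqpConnectionPool(final ConnectionFactory factory) {
        this.factory = factory;
    }

    public Connection borrow() throws Exception {
        final Connection existing = idle.poll();
        return existing != null && existing.isOpen() ? existing : factory.newConnection();
    }

    public void release(final Connection connection) {
        idle.offer(connection);
    }

    @Override
    public void close() throws Exception {
        Connection connection;
        while ((connection = idle.poll()) != null) {
            connection.close();
        }
    }
}
{code}

A task would call borrow() at the start of its work and release() in a finally 
block, so the number of live connections tracks the number of concurrent tasks.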



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4902) ConsumeAMQP and PublishAMQP use a single connection, which results in poor performance

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371488#comment-16371488
 ] 

ASF GitHub Bot commented on NIFI-4902:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2485

NIFI-4902: Updated ConsumeAMQP, PublishAMQP to use one connection per…

… concurrent task instead of a single connection shared by all concurrent 
tasks. This offers far better throughput when the network latency is 
non-trivial. Also refactored to simplify code

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4902

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2485.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2485


commit b9e801e889b32cde415920eda1c5cef2af5fd41e
Author: Mark Payne 
Date:   2018-02-21T14:31:36Z

NIFI-4902: Updated ConsumeAMQP, PublishAMQP to use one connection per 
concurrent task instead of a single connection shared by all concurrent tasks. 
This offers far better throughput when the network latency is non-trivial. Also 
refactored to simplify code




> ConsumeAMQP and PublishAMQP use a single connection, which results in poor 
> performance
> --
>
> Key: NIFI-4902
> URL: https://issues.apache.org/jira/browse/NIFI-4902
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.6.0
>
>
> PublishAMQP and ConsumeAMQP both use a single underlying connection, 
> regardless of how many concurrent tasks are available. As a result, this 
> leads to poor performance when the network latency is not extremely small.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2485: NIFI-4902: Updated ConsumeAMQP, PublishAMQP to use ...

2018-02-21 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2485

NIFI-4902: Updated ConsumeAMQP, PublishAMQP to use one connection per…

… concurrent task instead of a single connection shared by all concurrent 
tasks. This offers far better throughput when the network latency is 
non-trivial. Also refactored to simplify code

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4902

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2485.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2485


commit b9e801e889b32cde415920eda1c5cef2af5fd41e
Author: Mark Payne 
Date:   2018-02-21T14:31:36Z

NIFI-4902: Updated ConsumeAMQP, PublishAMQP to use one connection per 
concurrent task instead of a single connection shared by all concurrent tasks. 
This offers far better throughput when the network latency is non-trivial. Also 
refactored to simplify code




---


[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371483#comment-16371483
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2448
  
Will do.


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call, because to a user who is 
> pulling a very large data set the effect is that GetMongo looks like it has hung.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231
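
A rough sketch of the batching loop the ticket asks for, using the MongoDB Java 
driver and the NiFi ProcessSession API. The attribute names come from the example 
above; the method, relationship, and batch handling are illustrative assumptions 
rather than the processor's actual code:

{code:java}
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.bson.Document;
import org.bson.conversions.Bson;

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class ProgressiveCommitSketch {

    // Emits one FlowFile per batch, tags it with the progress attributes from
    // the ticket, and commits the session per batch instead of once at the end.
    static void exportInBatches(final MongoCollection<Document> collection, final Bson query,
                                final int batchSize, final ProcessSession session,
                                final Relationship success) {
        final long estimate = collection.count(query);
        long index = 0;

        try (MongoCursor<Document> cursor = collection.find(query).iterator()) {
            final List<Document> batch = new ArrayList<>();
            while (cursor.hasNext()) {
                batch.add(cursor.next());
                if (batch.size() >= batchSize || !cursor.hasNext()) {
                    final String payload = batch.stream().map(Document::toJson)
                            .collect(Collectors.joining(",", "[", "]"));
                    FlowFile flowFile = session.create();
                    flowFile = session.write(flowFile,
                            out -> out.write(payload.getBytes(StandardCharsets.UTF_8)));
                    flowFile = session.putAttribute(flowFile, "query.progress.point.start", String.valueOf(index));
                    flowFile = session.putAttribute(flowFile, "query.progress.point.end", String.valueOf(index + batch.size()));
                    flowFile = session.putAttribute(flowFile, "query.count.estimate", String.valueOf(estimate));
                    session.transfer(flowFile, success);
                    session.commit();   // progressive commit so output appears early
                    index += batch.size();
                    batch.clear();
                }
            }
        }
    }
}
{code}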



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4827) Make GetMongo able to use flowfiles for queries

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371481#comment-16371481
 ] 

ASF GitHub Bot commented on NIFI-4827:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2443
  
NP. I'll try to get that done in a little bit.


> Make GetMongo able to use flowfiles for queries
> ---
>
> Key: NIFI-4827
> URL: https://issues.apache.org/jira/browse/NIFI-4827
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Minor
>
> GetMongo should be able to retrieve a valid query from the flowfile content 
> or allow the incoming flowfile to provide attributes to power EL statements 
> in the Query configuration field. Allowing the body to be used would allow 
> GetMongo to be used in a much more generic way.
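
A hedged sketch of the two modes described above: take the query from the incoming 
FlowFile's content when one is present, otherwise evaluate the Query property's 
Expression Language against the FlowFile's attributes. The property descriptor and 
method names here are illustrative assumptions, not the processor's actual code:

{code:java}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.stream.io.StreamUtils;
import org.bson.Document;

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class QuerySourceSketch {

    // queryProperty stands in for a hypothetical "Query" descriptor that
    // supports Expression Language; the real descriptors may differ.
    static Document resolveQuery(final ProcessContext context, final ProcessSession session,
                                 final PropertyDescriptor queryProperty) {
        final FlowFile incoming = session.get();

        final String queryText;
        if (incoming != null && incoming.getSize() > 0) {
            // Option 1: the FlowFile body carries the full query document.
            final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            session.read(incoming, in -> StreamUtils.copy(in, bytes));
            queryText = new String(bytes.toByteArray(), StandardCharsets.UTF_8);
        } else {
            // Option 2: the Query property, with EL evaluated against the
            // incoming FlowFile's attributes (or none, if no FlowFile).
            queryText = incoming == null
                    ? context.getProperty(queryProperty).evaluateAttributeExpressions().getValue()
                    : context.getProperty(queryProperty).evaluateAttributeExpressions(incoming).getValue();
        }
        return Document.parse(queryText);
    }
}
{code}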



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2448: NIFI-4838 Added configurable progressive commits to GetMon...

2018-02-21 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2448
  
Will do.


---


[GitHub] nifi issue #2443: NIFI-4827 Added support for reading queries from the flowf...

2018-02-21 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2443
  
NP. I'll try to get that done in a little bit.


---


[jira] [Created] (NIFI-4902) ConsumeAMQP and PublishAMQP use a single connection, which results in poor performance

2018-02-21 Thread Mark Payne (JIRA)
Mark Payne created NIFI-4902:


 Summary: ConsumeAMQP and PublishAMQP use a single connection, 
which results in poor performance
 Key: NIFI-4902
 URL: https://issues.apache.org/jira/browse/NIFI-4902
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 1.6.0


PublishAMQP and ConsumeAMQP both use a single underlying connection, regardless 
of how many concurrent tasks are available. As a result, this leads to poor 
performance when the network latency is not extremely small.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2452: NIFI-4816: Allow name to be updated for ReportingTasks

2018-02-21 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2452
  
@mattyb149 this looks good to me now! Thanks for updating. I'm not in a 
position to merge right now, but I will give it a +1 if you want to merge 
yourself. Otherwise I will merge when I get my local branch squared away. 
Thanks!


---


[jira] [Commented] (NIFI-4816) Changes to ReportingTask name are not available to the ReportingTask

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371471#comment-16371471
 ] 

ASF GitHub Bot commented on NIFI-4816:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2452
  
@mattyb149 this looks good to me now! Thanks for updating. I'm not in a 
position to merge right now, but I will give it a +1 if you want to merge 
yourself. Otherwise I will merge when I get my local branch squared away. 
Thanks!


> Changes to ReportingTask name are not available to the ReportingTask
> 
>
> Key: NIFI-4816
> URL: https://issues.apache.org/jira/browse/NIFI-4816
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> The Reporting Task name is only set on the ReportingTask itself during 
> initialize(), which is only called the first time the ReportingTask is 
> instantiated. This means if you change the name of the ReportingTask and 
> restart it, the ReportingTask still has its original name and the current name is 
> inaccessible via the ConfigurationContext passed to it later. If you restart 
> NiFi, the new name is set and stays that way.
> Rather than calling initialize() more than once, it is proposed to make the 
> current name (and any other appropriate properties) available perhaps via 
> ConfigurationContext which is passed to methods annotated with OnScheduled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean

2018-02-21 Thread Gardella Juan Pablo (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-4901:
--
Description: 
Given the following valid Avro Schema:
{code}
{
   "type":"record",
   "name":"foo",
   "fields":[
  {
 "name":"isSwap",
 "type":[
"boolean",
"null"
 ]
  } 
   ]
}
{code}

And the following JSON:
{code}
{
  "isSwap": {
"boolean": true
  }
}
{code}
When it is converted to Avro using ConvertRecord, it fails with:
{{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed a 
JSON object from input but failed to convert into a Record object with the 
given schema}}

Attached a repository to reproduce the issue and also included the fix:
* Run {{mvn clean test}} to reproduce the issue.
* Run {{mvn clean test -Ppatch}} to test the fix.   

  was:
Given the following valid Avro Schema:
{code}
{
   "type":"record",
   "name":"foo",
   "fields":[
  {
 "name":"isSwap",
 "type":[
"boolean",
"null"
 ]
  } 
   ]
}
{code}

And the following JSON:
{code}
{
  "isSwap": {
"boolean": true
  }
}
{code}
When it is converted to Avro using ConvertRecord, it fails with:
{{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed a 
JSON object from input but failed to convert into a Record object with the 
given schema}}

Attached a repository to reproduce the issue and also included the fix:
* Run mvn clean test to reproduce the issue.
* Run mvn clean test -Ppatch to test the fix.   


> Json to Avro using Record framework does not support union types with boolean
> -
>
> Key: NIFI-4901
> URL: https://issues.apache.org/jira/browse/NIFI-4901
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: optiona-boolean.zip
>
>
> Given the following valid Avro Schema:
> {code}
> {
>"type":"record",
>"name":"foo",
>"fields":[
>   {
>  "name":"isSwap",
>  "type":[
> "boolean",
> "null"
>  ]
>   } 
>]
> }
> {code}
> And the following JSON:
> {code}
> {
>   "isSwap": {
> "boolean": true
>   }
> }
> {code}
> When it is converted to Avro using ConvertRecord, it fails with:
> {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed 
> a JSON object from input but failed to convert into a Record object with the 
> given schema}}
> Attached a repository to reproduce the issue and also included the fix:
> * Run {{mvn clean test}} to reproduce the issue.
> * Run {{mvn clean test -Ppatch}} to test the fix. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371459#comment-16371459
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2448
  
Since #2180 was merged, there are conflicts, mind rebasing against the 
latest master? Please and thanks!


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call, because to a user who is 
> pulling a very large data set the effect is that GetMongo looks like it has hung.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2448: NIFI-4838 Added configurable progressive commits to GetMon...

2018-02-21 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2448
  
Since #2180 was merged, there are conflicts, mind rebasing against the 
latest master? Please and thanks!


---


[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows

2018-02-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371457#comment-16371457
 ] 

ASF GitHub Bot commented on NIFI-4839:
--

Github user aperepel commented on the issue:

https://github.com/apache/nifi/pull/2477
  
@pvillard31 we were discussing a modular design for commands with dynamic 
discovery/loading (e.g. via Java's ServiceLoader mechanism). This is to support 
the idea that we should try to incorporate as much reference documentation as 
possible into each command's output, and only add doc pages focusing on the 
workflow, typical use cases and best practices instead. My $.02
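
For illustration, this is the generic ServiceLoader pattern the comment refers to, 
with a hypothetical Command interface; it is not the toolkit's actual design:

{code:java}
import java.util.Arrays;
import java.util.ServiceLoader;

public final class CommandRegistry {

    // Hypothetical command contract; the toolkit's real interface may differ.
    public interface Command {
        String getName();
        void execute(String[] args) throws Exception;
    }

    // Discovers Command implementations declared in a
    // META-INF/services provider-configuration file on the classpath.
    public static Command find(final String name) {
        for (final Command command : ServiceLoader.load(Command.class)) {
            if (command.getName().equals(name)) {
                return command;
            }
        }
        throw new IllegalArgumentException("Unknown command: " + name);
    }

    public static void main(final String[] args) throws Exception {
        find(args[0]).execute(Arrays.copyOfRange(args, 1, args.length));
    }
}
{code}

New commands can then be dropped in as separate modules and picked up at runtime, 
with each command printing its own reference documentation.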


> Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
> 
>
> Key: NIFI-4839
> URL: https://issues.apache.org/jira/browse/NIFI-4839
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>
> Now that we have NiFi Registry and the ability to import/upgrade flows in 
> NiFi, we should offer a command-line tool to interact with these REST 
> end-points. This could part of NiFi Toolkit and would help people potentially 
> automate some of these operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2477: NIFI-4839 Adding CLI to nifi-toolkit

2018-02-21 Thread aperepel
Github user aperepel commented on the issue:

https://github.com/apache/nifi/pull/2477
  
@pvillard31 we were discussing a modular design for commands with dynamic 
discovery/loading (e.g. via Java's ServiceLoader mechanism). This is to support 
the idea that we should try to incorporate as much reference documentation as 
possible into each command's output, and only add doc pages focusing on the 
workflow, typical use cases and best practices instead. My $.02


---


[jira] [Updated] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean

2018-02-21 Thread Gardella Juan Pablo (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated NIFI-4901:
--
Description: 
Given the following valid Avro Schema:
{code}
{
   "type":"record",
   "name":"foo",
   "fields":[
  {
 "name":"isSwap",
 "type":[
"boolean",
"null"
 ]
  } 
   ]
}
{code}

And the following JSON:
{code}
{
  "isSwap": {
"boolean": true
  }
}
{code}
When it is converted to Avro using ConvertRecord, it fails with:
{{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed a 
JSON object from input but failed to convert into a Record object with the 
given schema}}

Attached a repository to reproduce the issue and also included the fix:
* Run mvn clean test to reproduce the issue.
* Run mvn clean test -Ppatch to test the fix.   

  was:
Given the following valid Avro Schema:

{
   "type":"record",
   "name":"foo",
   "fields":[
  {
 "name":"isSwap",
 "type":[
"boolean",
"null"
 ]
  } 
   ]
}

And the following JSON:
{
  "isSwap": {
"boolean": true
  }
}

When it is converted to Avro using ConvertRecord, it fails with:
{{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed a 
JSON object from input but failed to convert into a Record object with the 
given schema}}

Attached a repository to reproduce the issue and also included the fix:
* Run mvn clean test to reproduce the issue.
* Run mvn clean test -Ppatch to test the fix.   


> Json to Avro using Record framework does not support union types with boolean
> -
>
> Key: NIFI-4901
> URL: https://issues.apache.org/jira/browse/NIFI-4901
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: optiona-boolean.zip
>
>
> Given the following valid Avro Schema:
> {code}
> {
>"type":"record",
>"name":"foo",
>"fields":[
>   {
>  "name":"isSwap",
>  "type":[
> "boolean",
> "null"
>  ]
>   } 
>]
> }
> {code}
> And the following JSON:
> {code}
> {
>   "isSwap": {
> "boolean": true
>   }
> }
> {code}
> When it is converted to Avro using ConvertRecord, it fails with:
> {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed 
> a JSON object from input but failed to convert into a Record object with the 
> given schema}}
> Attached a repository to reproduce the issue and also included the fix:
> * Run mvn clean test to reproduce the issue.
> * Run mvn clean test -Ppatch to test the fix. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

