[GitHub] nifi issue #1983: NiFi-2829: Add Date and Time Format Support for PutSQL

2017-07-17 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1983
  
As this PR is based on #1524, which was submitted by @patricker, I 
cherry-picked #1524 first and then cherry-picked this PR so that @patricker 
gets the credit. Thanks again!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #1983: NiFi-2829: Add Date and Time Format Support for PutSQL

2017-07-17 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1983
  
I've made a slight change to the documentation to state that MySQL and Derby 
are among the database engines that do not support milliseconds, since those 
may not be the only two. I am fine with not having a unit test for it, as we 
use Derby; clarifying it via the docs should be enough.
Thanks @yjhyjhyjh0 , this has been merged to master!
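
As an illustration of the milliseconds point above, here is a minimal, 
hypothetical Java sketch (not code from this PR): a java.sql.Time value keeps 
its millisecond component in memory, but a TIME column on engines without 
fractional-seconds support, such as the ones named above, keeps only whole 
seconds.

{code}
import java.sql.Time;
import java.text.SimpleDateFormat;

public class TimeMillisSketch {
    public static void main(String[] args) throws Exception {
        // Parse a time string that carries a millisecond component.
        SimpleDateFormat withMillis = new SimpleDateFormat("HH:mm:ss.SSS");
        Time time = new Time(withMillis.parse("12:34:56.789").getTime());

        // In memory the milliseconds are still present.
        System.out.println(withMillis.format(time)); // 12:34:56.789

        // A TIME column on an engine without fractional-seconds support
        // (e.g. Derby) truncates the value to whole seconds on write, so
        // the .789 would be lost in the database.
    }
}
{code}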




[jira] [Commented] (NIFI-2829) PutSQL assumes all Date and Time values are provided in Epoch

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091130#comment-16091130
 ] 

ASF GitHub Bot commented on NIFI-2829:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1524
  
I reviewed and confirmed that #1983 addresses the unit test timezone issue, so 
I merged both #1524 and #1983 to master. Thanks @patricker and @yjhyjhyjh0!


> PutSQL assumes all Date and Time values are provided in Epoch
> -
>
> Key: NIFI-2829
> URL: https://issues.apache.org/jira/browse/NIFI-2829
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Paul Gibeault
>Assignee: Peter Wicks
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> This bug is the same as NIFI-2576, only extended to the DATE and TIME data types.
> https://issues.apache.org/jira/browse/NIFI-2576
> When PutSQL sees a DATE or TIME data type, it assumes the value is provided 
> as a Long in epoch format.
> This doesn't make much sense, since the Query Database tools that return Avro 
> emit DATE and TIME values as strings; following the 
> Avro->JSON->JSON To SQL route therefore leaves DATE and TIME fields as strings.
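
A minimal sketch of the mismatch described above (illustrative only, not 
PutSQL's actual code): an epoch-style Long can be turned into a java.sql.Date 
directly, whereas the string values produced by the Avro/JSON route have to be 
parsed against a date format first.

{code}
import java.sql.Date;
import java.text.SimpleDateFormat;

public class EpochVersusStringSketch {
    public static void main(String[] args) throws Exception {
        // The old assumption: the parameter value is epoch milliseconds.
        Date fromEpoch = new Date(Long.parseLong("1500249600000"));

        // What the Avro -> JSON -> SQL route actually delivers: a string,
        // which must be parsed against a declared format instead.
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
        Date fromString = new Date(format.parse("2017-07-17").getTime());

        System.out.println(fromEpoch);
        System.out.println(fromString);
    }
}
{code}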



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1524: NIFI-2829 Date and Time Format Support for PutSQL

2017-07-17 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1524
  
I reviewed and confirmed that #1983 addresses the unit test timezone issue, so 
I merged both #1524 and #1983 to master. Thanks @patricker and @yjhyjhyjh0!
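
For context, the usual fix for such timezone-sensitive assertions is to parse 
the expected literal with an explicit TimeZone instead of the JVM default; a 
minimal sketch of that approach (assumed illustration, not the actual test 
code from #1983):

{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimezoneStableExpectation {
    public static void main(String[] args) throws Exception {
        // Pin the parse to UTC so the expected epoch value is identical on
        // every build machine, regardless of the local JVM time zone.
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        format.setTimeZone(TimeZone.getTimeZone("UTC"));

        Date expected = format.parse("2017-07-17 00:00:00");
        System.out.println(expected.getTime()); // 1500249600000 on any machine
    }
}
{code}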




[jira] [Commented] (NIFI-2829) PutSQL assumes all Date and Time values are provided in Epoch

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091127#comment-16091127
 ] 

ASF GitHub Bot commented on NIFI-2829:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1524


> PutSQL assumes all Date and Time values are provided in Epoch
> -
>
> Key: NIFI-2829
> URL: https://issues.apache.org/jira/browse/NIFI-2829
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Paul Gibeault
>Assignee: Peter Wicks
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> This bug is the same as NIFI-2576, only extended to the DATE and TIME data types.
> https://issues.apache.org/jira/browse/NIFI-2576
> When PutSQL sees a DATE or TIME data type, it assumes the value is provided 
> as a Long in epoch format.
> This doesn't make much sense, since the Query Database tools that return Avro 
> emit DATE and TIME values as strings; following the 
> Avro->JSON->JSON To SQL route therefore leaves DATE and TIME fields as strings.





[GitHub] nifi pull request #1524: NIFI-2829 Date and Time Format Support for PutSQL

2017-07-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1524




[jira] [Commented] (NIFI-2829) PutSQL assumes all Date and Time values are provided in Epoch

2017-07-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091126#comment-16091126
 ] 

ASF subversion and git services commented on NIFI-2829:
---

Commit a6e94de0bbea99561f0ff788b54db4d9af7e8f6a in nifi's branch 
refs/heads/master from deonhuang
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=a6e94de ]

NIFI-2829 - Add Date and Time Format Support for PutSQL

Fix unit test for Date and Time type time zone problem
Enhance Time type to record milliseconds

This closes #1983.

Signed-off-by: Koji Kawamura 


> PutSQL assumes all Date and Time values are provided in Epoch
> -
>
> Key: NIFI-2829
> URL: https://issues.apache.org/jira/browse/NIFI-2829
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Paul Gibeault
>Assignee: Peter Wicks
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> This bug is the same as NIFI-2576, only extended to the DATE and TIME data types.
> https://issues.apache.org/jira/browse/NIFI-2576
> When PutSQL sees a DATE or TIME data type, it assumes the value is provided 
> as a Long in epoch format.
> This doesn't make much sense, since the Query Database tools that return Avro 
> emit DATE and TIME values as strings; following the 
> Avro->JSON->JSON To SQL route therefore leaves DATE and TIME fields as strings.





[GitHub] nifi pull request #1983: NiFi-2829: Add Date and Time Format Support for Put...

2017-07-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1983




[jira] [Commented] (NIFI-2829) PutSQL assumes all Date and Time values are provided in Epoch

2017-07-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091125#comment-16091125
 ] 

ASF subversion and git services commented on NIFI-2829:
---

Commit 03bff7c9fc320a95dbacef1eb8390a1aae174dc4 in nifi's branch 
refs/heads/master from patricker
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=03bff7c ]

NIFI-2829 Date and Time Format Support for PutSQL

This closes #1524.

Signed-off-by: Koji Kawamura 


> PutSQL assumes all Date and Time values are provided in Epoch
> -
>
> Key: NIFI-2829
> URL: https://issues.apache.org/jira/browse/NIFI-2829
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Paul Gibeault
>Assignee: Peter Wicks
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> This bug is the same as NIFI-2576, only extended to the DATE and TIME data types.
> https://issues.apache.org/jira/browse/NIFI-2576
> When PutSQL sees a DATE or TIME data type, it assumes the value is provided 
> as a Long in epoch format.
> This doesn't make much sense, since the Query Database tools that return Avro 
> emit DATE and TIME values as strings; following the 
> Avro->JSON->JSON To SQL route therefore leaves DATE and TIME fields as strings.





[jira] [Commented] (NIFI-4188) Create RethinkDB get processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091119#comment-16091119
 ] 

ASF GitHub Bot commented on NIFI-4188:
--

Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2012#discussion_r127879683
  
--- Diff: 
nifi-nar-bundles/nifi-rethinkdb-bundle/nifi-rethinkdb-processors/src/main/java/org/apache/nifi/processors/rethinkdb/GetRethinkDB.java
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.rethinkdb;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import com.google.gson.Gson;
+
+import java.io.ByteArrayInputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"rethinkdb", "get", "read"})
+@CapabilityDescription("Processor to get a JSON document from RethinkDB 
(https://www.rethinkdb.com/) using the document id. The FlowFile will contain 
the retrieved document")
+@WritesAttributes({
+@WritesAttribute(attribute = GetRethinkDB.RETHINKDB_ERROR_MESSAGE, 
description = "RethinkDB error message"),
+})
+@SeeAlso({PutRethinkDB.class})
+public class GetRethinkDB extends AbstractRethinkDBProcessor {
+
+public static AllowableValue READ_MODE_SINGLE = new 
AllowableValue("single", "Single", "Read values from memory from primary 
replica (Default)");
+public static AllowableValue READ_MODE_MAJORITY = new 
AllowableValue("majority", "Majority", "Read values commited to disk on 
majority of replicas");
+public static AllowableValue READ_MODE_OUTDATED = new 
AllowableValue("outdated", "Outdated", "Read values from memory from an 
arbitrary replica ");
+
+protected static final PropertyDescriptor READ_MODE = new 
PropertyDescriptor.Builder()
+.name("rethinkdb-read-mode")
+.displayName("Read Mode")
+.description("Read mode used for consistency")
+.required(true)
+.defaultValue(READ_MODE_SINGLE.getValue())
+.allowableValues(READ_MODE_SINGLE, READ_MODE_MAJORITY, 
READ_MODE_OUTDATED)
+.expressionLanguageSupported(true)
+.build();
+
+public static final PropertyDescriptor RETHINKDB_DOCUMENT_ID = new 
PropertyDescriptor.Builder()
+.displayName("Document Identifier")
+.name("rethinkdb-document-identifier")
+.description("A FlowFile attribute, or attribute expression 
used " +
+"for determining RethinkDB key for the Flow File content")
+.required(true)
+

[jira] [Commented] (NIFI-4188) Create RethinkDB get processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091120#comment-16091120
 ] 

ASF GitHub Bot commented on NIFI-4188:
--

Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2012#discussion_r127879552
  
--- Diff: 
nifi-nar-bundles/nifi-rethinkdb-bundle/nifi-rethinkdb-processors/src/main/java/org/apache/nifi/processors/rethinkdb/GetRethinkDB.java
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.rethinkdb;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import com.google.gson.Gson;
+
+import java.io.ByteArrayInputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"rethinkdb", "get", "read"})
+@CapabilityDescription("Processor to get a JSON document from RethinkDB 
(https://www.rethinkdb.com/) using the document id. The FlowFile will contain 
the retrieved document")
+@WritesAttributes({
+@WritesAttribute(attribute = GetRethinkDB.RETHINKDB_ERROR_MESSAGE, 
description = "RethinkDB error message"),
+})
+@SeeAlso({PutRethinkDB.class})
+public class GetRethinkDB extends AbstractRethinkDBProcessor {
+
+public static AllowableValue READ_MODE_SINGLE = new 
AllowableValue("single", "Single", "Read values from memory from primary 
replica (Default)");
+public static AllowableValue READ_MODE_MAJORITY = new 
AllowableValue("majority", "Majority", "Read values commited to disk on 
majority of replicas");
--- End diff --

Spell checker says `committed` instead of `commited`.


> Create RethinkDB get processor
> --
>
> Key: NIFI-4188
> URL: https://issues.apache.org/jira/browse/NIFI-4188
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.3.0
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: document, get, rethinkdb
> Fix For: 1.4.0
>
>
> Create processor for getting documents by id from RethinkDB





[jira] [Commented] (NIFI-4188) Create RethinkDB get processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091122#comment-16091122
 ] 

ASF GitHub Bot commented on NIFI-4188:
--

Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2012#discussion_r127882745
  
--- Diff: 
nifi-nar-bundles/nifi-rethinkdb-bundle/nifi-rethinkdb-processors/src/main/java/org/apache/nifi/processors/rethinkdb/GetRethinkDB.java
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.rethinkdb;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import com.google.gson.Gson;
+
+import java.io.ByteArrayInputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"rethinkdb", "get", "read"})
+@CapabilityDescription("Processor to get a JSON document from RethinkDB 
(https://www.rethinkdb.com/) using the document id. The FlowFile will contain 
the retrieved document")
+@WritesAttributes({
+@WritesAttribute(attribute = GetRethinkDB.RETHINKDB_ERROR_MESSAGE, 
description = "RethinkDB error message"),
+})
+@SeeAlso({PutRethinkDB.class})
+public class GetRethinkDB extends AbstractRethinkDBProcessor {
+
+public static AllowableValue READ_MODE_SINGLE = new 
AllowableValue("single", "Single", "Read values from memory from primary 
replica (Default)");
+public static AllowableValue READ_MODE_MAJORITY = new 
AllowableValue("majority", "Majority", "Read values commited to disk on 
majority of replicas");
+public static AllowableValue READ_MODE_OUTDATED = new 
AllowableValue("outdated", "Outdated", "Read values from memory from an 
arbitrary replica ");
+
+protected static final PropertyDescriptor READ_MODE = new 
PropertyDescriptor.Builder()
+.name("rethinkdb-read-mode")
+.displayName("Read Mode")
+.description("Read mode used for consistency")
+.required(true)
+.defaultValue(READ_MODE_SINGLE.getValue())
+.allowableValues(READ_MODE_SINGLE, READ_MODE_MAJORITY, 
READ_MODE_OUTDATED)
+.expressionLanguageSupported(true)
+.build();
+
+public static final PropertyDescriptor RETHINKDB_DOCUMENT_ID = new 
PropertyDescriptor.Builder()
+.displayName("Document Identifier")
+.name("rethinkdb-document-identifier")
+.description("A FlowFile attribute, or attribute expression 
used " +
+"for determining RethinkDB key for the Flow File content")
+.required(true)
+

[jira] [Commented] (NIFI-4188) Create RethinkDB get processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091123#comment-16091123
 ] 

ASF GitHub Bot commented on NIFI-4188:
--

Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2012#discussion_r127880998
  
--- Diff: 
nifi-nar-bundles/nifi-rethinkdb-bundle/nifi-rethinkdb-processors/src/main/java/org/apache/nifi/processors/rethinkdb/GetRethinkDB.java
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.rethinkdb;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import com.google.gson.Gson;
+
+import java.io.ByteArrayInputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"rethinkdb", "get", "read"})
+@CapabilityDescription("Processor to get a JSON document from RethinkDB 
(https://www.rethinkdb.com/) using the document id. The FlowFile will contain 
the retrieved document")
+@WritesAttributes({
+@WritesAttribute(attribute = GetRethinkDB.RETHINKDB_ERROR_MESSAGE, 
description = "RethinkDB error message"),
+})
+@SeeAlso({PutRethinkDB.class})
+public class GetRethinkDB extends AbstractRethinkDBProcessor {
+
+public static AllowableValue READ_MODE_SINGLE = new 
AllowableValue("single", "Single", "Read values from memory from primary 
replica (Default)");
+public static AllowableValue READ_MODE_MAJORITY = new 
AllowableValue("majority", "Majority", "Read values commited to disk on 
majority of replicas");
+public static AllowableValue READ_MODE_OUTDATED = new 
AllowableValue("outdated", "Outdated", "Read values from memory from an 
arbitrary replica ");
+
+protected static final PropertyDescriptor READ_MODE = new 
PropertyDescriptor.Builder()
+.name("rethinkdb-read-mode")
+.displayName("Read Mode")
+.description("Read mode used for consistency")
+.required(true)
+.defaultValue(READ_MODE_SINGLE.getValue())
+.allowableValues(READ_MODE_SINGLE, READ_MODE_MAJORITY, 
READ_MODE_OUTDATED)
+.expressionLanguageSupported(true)
+.build();
+
+public static final PropertyDescriptor RETHINKDB_DOCUMENT_ID = new 
PropertyDescriptor.Builder()
+.displayName("Document Identifier")
+.name("rethinkdb-document-identifier")
+.description("A FlowFile attribute, or attribute expression 
used " +
+"for determining RethinkDB key for the Flow File content")
+.required(true)
+

[jira] [Commented] (NIFI-4188) Create RethinkDB get processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091121#comment-16091121
 ] 

ASF GitHub Bot commented on NIFI-4188:
--

Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2012#discussion_r127880492
  
--- Diff: 
nifi-nar-bundles/nifi-rethinkdb-bundle/nifi-rethinkdb-processors/src/main/java/org/apache/nifi/processors/rethinkdb/GetRethinkDB.java
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.rethinkdb;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import com.google.gson.Gson;
+
+import java.io.ByteArrayInputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"rethinkdb", "get", "read"})
+@CapabilityDescription("Processor to get a JSON document from RethinkDB 
(https://www.rethinkdb.com/) using the document id. The FlowFile will contain 
the retrieved document")
+@WritesAttributes({
+@WritesAttribute(attribute = GetRethinkDB.RETHINKDB_ERROR_MESSAGE, 
description = "RethinkDB error message"),
+})
+@SeeAlso({PutRethinkDB.class})
+public class GetRethinkDB extends AbstractRethinkDBProcessor {
+
+public static AllowableValue READ_MODE_SINGLE = new 
AllowableValue("single", "Single", "Read values from memory from primary 
replica (Default)");
+public static AllowableValue READ_MODE_MAJORITY = new 
AllowableValue("majority", "Majority", "Read values commited to disk on 
majority of replicas");
+public static AllowableValue READ_MODE_OUTDATED = new 
AllowableValue("outdated", "Outdated", "Read values from memory from an 
arbitrary replica ");
+
+protected static final PropertyDescriptor READ_MODE = new 
PropertyDescriptor.Builder()
+.name("rethinkdb-read-mode")
+.displayName("Read Mode")
+.description("Read mode used for consistency")
+.required(true)
+.defaultValue(READ_MODE_SINGLE.getValue())
+.allowableValues(READ_MODE_SINGLE, READ_MODE_MAJORITY, 
READ_MODE_OUTDATED)
+.expressionLanguageSupported(true)
+.build();
+
+public static final PropertyDescriptor RETHINKDB_DOCUMENT_ID = new 
PropertyDescriptor.Builder()
+.displayName("Document Identifier")
+.name("rethinkdb-document-identifier")
+.description("A FlowFile attribute, or attribute expression 
used " +
+"for determining RethinkDB key for the Flow File content")
+.required(true)
+

[GitHub] nifi pull request #2012: NIFI-4188 - Nifi RethinkDB Get processor

2017-07-17 Thread jvwing
Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2012#discussion_r127879683
  
--- Diff: 
nifi-nar-bundles/nifi-rethinkdb-bundle/nifi-rethinkdb-processors/src/main/java/org/apache/nifi/processors/rethinkdb/GetRethinkDB.java
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.rethinkdb;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import com.google.gson.Gson;
+
+import java.io.ByteArrayInputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"rethinkdb", "get", "read"})
+@CapabilityDescription("Processor to get a JSON document from RethinkDB 
(https://www.rethinkdb.com/) using the document id. The FlowFile will contain 
the retrieved document")
+@WritesAttributes({
+@WritesAttribute(attribute = GetRethinkDB.RETHINKDB_ERROR_MESSAGE, 
description = "RethinkDB error message"),
+})
+@SeeAlso({PutRethinkDB.class})
+public class GetRethinkDB extends AbstractRethinkDBProcessor {
+
+public static AllowableValue READ_MODE_SINGLE = new 
AllowableValue("single", "Single", "Read values from memory from primary 
replica (Default)");
+public static AllowableValue READ_MODE_MAJORITY = new 
AllowableValue("majority", "Majority", "Read values commited to disk on 
majority of replicas");
+public static AllowableValue READ_MODE_OUTDATED = new 
AllowableValue("outdated", "Outdated", "Read values from memory from an 
arbitrary replica ");
+
+protected static final PropertyDescriptor READ_MODE = new 
PropertyDescriptor.Builder()
+.name("rethinkdb-read-mode")
+.displayName("Read Mode")
+.description("Read mode used for consistency")
+.required(true)
+.defaultValue(READ_MODE_SINGLE.getValue())
+.allowableValues(READ_MODE_SINGLE, READ_MODE_MAJORITY, 
READ_MODE_OUTDATED)
+.expressionLanguageSupported(true)
+.build();
+
+public static final PropertyDescriptor RETHINKDB_DOCUMENT_ID = new 
PropertyDescriptor.Builder()
+.displayName("Document Identifier")
+.name("rethinkdb-document-identifier")
+.description("A FlowFile attribute, or attribute expression 
used " +
+"for determining RethinkDB key for the Flow File content")
+.required(true)
+
.addValidator(StandardValidators.createAttributeExpressionLanguageValidator(ResultType.STRING,
 true))
+.expressionLanguageSupported(true)
+.build();
+
+protected String READ_MODE_KEY = "read_mode";
+
+private static final 

[GitHub] nifi pull request #2012: NIFI-4188 - Nifi RethinkDB Get processor

2017-07-17 Thread jvwing
Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2012#discussion_r127882745
  
--- Diff: 
nifi-nar-bundles/nifi-rethinkdb-bundle/nifi-rethinkdb-processors/src/main/java/org/apache/nifi/processors/rethinkdb/GetRethinkDB.java
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.rethinkdb;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import com.google.gson.Gson;
+
+import java.io.ByteArrayInputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"rethinkdb", "get", "read"})
+@CapabilityDescription("Processor to get a JSON document from RethinkDB 
(https://www.rethinkdb.com/) using the document id. The FlowFile will contain 
the retrieved document")
+@WritesAttributes({
+@WritesAttribute(attribute = GetRethinkDB.RETHINKDB_ERROR_MESSAGE, 
description = "RethinkDB error message"),
+})
+@SeeAlso({PutRethinkDB.class})
+public class GetRethinkDB extends AbstractRethinkDBProcessor {
+
+public static AllowableValue READ_MODE_SINGLE = new 
AllowableValue("single", "Single", "Read values from memory from primary 
replica (Default)");
+public static AllowableValue READ_MODE_MAJORITY = new 
AllowableValue("majority", "Majority", "Read values commited to disk on 
majority of replicas");
+public static AllowableValue READ_MODE_OUTDATED = new 
AllowableValue("outdated", "Outdated", "Read values from memory from an 
arbitrary replica ");
+
+protected static final PropertyDescriptor READ_MODE = new 
PropertyDescriptor.Builder()
+.name("rethinkdb-read-mode")
+.displayName("Read Mode")
+.description("Read mode used for consistency")
+.required(true)
+.defaultValue(READ_MODE_SINGLE.getValue())
+.allowableValues(READ_MODE_SINGLE, READ_MODE_MAJORITY, 
READ_MODE_OUTDATED)
+.expressionLanguageSupported(true)
+.build();
+
+public static final PropertyDescriptor RETHINKDB_DOCUMENT_ID = new 
PropertyDescriptor.Builder()
+.displayName("Document Identifier")
+.name("rethinkdb-document-identifier")
+.description("A FlowFile attribute, or attribute expression 
used " +
+"for determining RethinkDB key for the Flow File content")
+.required(true)
+
.addValidator(StandardValidators.createAttributeExpressionLanguageValidator(ResultType.STRING,
 true))
+.expressionLanguageSupported(true)
+.build();
+
+protected String READ_MODE_KEY = "read_mode";
+
+private static final 

[GitHub] nifi pull request #2012: NIFI-4188 - Nifi RethinkDB Get processor

2017-07-17 Thread jvwing
Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2012#discussion_r127880492
  
--- Diff: 
nifi-nar-bundles/nifi-rethinkdb-bundle/nifi-rethinkdb-processors/src/main/java/org/apache/nifi/processors/rethinkdb/GetRethinkDB.java
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.rethinkdb;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import com.google.gson.Gson;
+
+import java.io.ByteArrayInputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"rethinkdb", "get", "read"})
+@CapabilityDescription("Processor to get a JSON document from RethinkDB 
(https://www.rethinkdb.com/) using the document id. The FlowFile will contain 
the retrieved document")
+@WritesAttributes({
+@WritesAttribute(attribute = GetRethinkDB.RETHINKDB_ERROR_MESSAGE, 
description = "RethinkDB error message"),
+})
+@SeeAlso({PutRethinkDB.class})
+public class GetRethinkDB extends AbstractRethinkDBProcessor {
+
+public static AllowableValue READ_MODE_SINGLE = new 
AllowableValue("single", "Single", "Read values from memory from primary 
replica (Default)");
+public static AllowableValue READ_MODE_MAJORITY = new 
AllowableValue("majority", "Majority", "Read values commited to disk on 
majority of replicas");
+public static AllowableValue READ_MODE_OUTDATED = new 
AllowableValue("outdated", "Outdated", "Read values from memory from an 
arbitrary replica ");
+
+protected static final PropertyDescriptor READ_MODE = new 
PropertyDescriptor.Builder()
+.name("rethinkdb-read-mode")
+.displayName("Read Mode")
+.description("Read mode used for consistency")
+.required(true)
+.defaultValue(READ_MODE_SINGLE.getValue())
+.allowableValues(READ_MODE_SINGLE, READ_MODE_MAJORITY, 
READ_MODE_OUTDATED)
+.expressionLanguageSupported(true)
+.build();
+
+public static final PropertyDescriptor RETHINKDB_DOCUMENT_ID = new 
PropertyDescriptor.Builder()
+.displayName("Document Identifier")
+.name("rethinkdb-document-identifier")
+.description("A FlowFile attribute, or attribute expression 
used " +
+"for determining RethinkDB key for the Flow File content")
+.required(true)
+
.addValidator(StandardValidators.createAttributeExpressionLanguageValidator(ResultType.STRING,
 true))
+.expressionLanguageSupported(true)
+.build();
+
+protected String READ_MODE_KEY = "read_mode";
+
+private static final 

[GitHub] nifi pull request #2012: NIFI-4188 - Nifi RethinkDB Get processor

2017-07-17 Thread jvwing
Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2012#discussion_r127880998
  
--- Diff: 
nifi-nar-bundles/nifi-rethinkdb-bundle/nifi-rethinkdb-processors/src/main/java/org/apache/nifi/processors/rethinkdb/GetRethinkDB.java
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.rethinkdb;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import com.google.gson.Gson;
+
+import java.io.ByteArrayInputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"rethinkdb", "get", "read"})
+@CapabilityDescription("Processor to get a JSON document from RethinkDB 
(https://www.rethinkdb.com/) using the document id. The FlowFile will contain 
the retrieved document")
+@WritesAttributes({
+@WritesAttribute(attribute = GetRethinkDB.RETHINKDB_ERROR_MESSAGE, 
description = "RethinkDB error message"),
+})
+@SeeAlso({PutRethinkDB.class})
+public class GetRethinkDB extends AbstractRethinkDBProcessor {
+
+public static AllowableValue READ_MODE_SINGLE = new 
AllowableValue("single", "Single", "Read values from memory from primary 
replica (Default)");
+public static AllowableValue READ_MODE_MAJORITY = new 
AllowableValue("majority", "Majority", "Read values commited to disk on 
majority of replicas");
+public static AllowableValue READ_MODE_OUTDATED = new 
AllowableValue("outdated", "Outdated", "Read values from memory from an 
arbitrary replica ");
+
+protected static final PropertyDescriptor READ_MODE = new 
PropertyDescriptor.Builder()
+.name("rethinkdb-read-mode")
+.displayName("Read Mode")
+.description("Read mode used for consistency")
+.required(true)
+.defaultValue(READ_MODE_SINGLE.getValue())
+.allowableValues(READ_MODE_SINGLE, READ_MODE_MAJORITY, 
READ_MODE_OUTDATED)
+.expressionLanguageSupported(true)
+.build();
+
+public static final PropertyDescriptor RETHINKDB_DOCUMENT_ID = new 
PropertyDescriptor.Builder()
+.displayName("Document Identifier")
+.name("rethinkdb-document-identifier")
+.description("A FlowFile attribute, or attribute expression 
used " +
+"for determining RethinkDB key for the Flow File content")
+.required(true)
+
.addValidator(StandardValidators.createAttributeExpressionLanguageValidator(ResultType.STRING,
 true))
+.expressionLanguageSupported(true)
+.build();
+
+protected String READ_MODE_KEY = "read_mode";
+
+private static final 

[GitHub] nifi pull request #2012: NIFI-4188 - Nifi RethinkDB Get processor

2017-07-17 Thread jvwing
Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2012#discussion_r127879552
  
--- Diff: 
nifi-nar-bundles/nifi-rethinkdb-bundle/nifi-rethinkdb-processors/src/main/java/org/apache/nifi/processors/rethinkdb/GetRethinkDB.java
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.rethinkdb;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.AttributeExpression.ResultType;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import com.google.gson.Gson;
+
+import java.io.ByteArrayInputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"rethinkdb", "get", "read"})
+@CapabilityDescription("Processor to get a JSON document from RethinkDB 
(https://www.rethinkdb.com/) using the document id. The FlowFile will contain 
the retrieved document")
+@WritesAttributes({
+@WritesAttribute(attribute = GetRethinkDB.RETHINKDB_ERROR_MESSAGE, 
description = "RethinkDB error message"),
+})
+@SeeAlso({PutRethinkDB.class})
+public class GetRethinkDB extends AbstractRethinkDBProcessor {
+
+public static AllowableValue READ_MODE_SINGLE = new 
AllowableValue("single", "Single", "Read values from memory from primary 
replica (Default)");
+public static AllowableValue READ_MODE_MAJORITY = new 
AllowableValue("majority", "Majority", "Read values commited to disk on 
majority of replicas");
--- End diff --

Spell checker says `committed` instead of `commited`.




[jira] [Commented] (NIFI-4169) PutWebSocket processor with blank WebSocket session id attribute cannot transfer to failure queue

2017-07-17 Thread Y Wikander (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091078#comment-16091078
 ] 

Y Wikander commented on NIFI-4169:
--

[~ijokarumawak], regarding your 
[comment|https://issues.apache.org/jira/browse/NIFI-4169?focusedCommentId=16087053=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16087053].

I think that adding a sendBroadcastMessage() function to WebSocketMessageRouter 
and removing broadcast support from sendMessage() (in the same class) would 
work better.

sendBroadcastMessage() would return a list that the PutWebSocket class would use 
to find out the session id, Exception, etc. -- for each failed session id. This 
would allow the PutWebSocket class to make better-informed decisions about 
cloning the flowfile and what attributes to set (and with what).

It seems to me that just changing the PutWebSocket class, as you suggested, 
would be hiding too much information from itself.
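
A rough sketch of the shape being proposed (class and method names here are 
hypothetical, not taken from the NiFi codebase):

{code}
import java.util.List;

// Hypothetical sketch of the proposal above; not actual NiFi code.
class BroadcastFailure {
    final String sessionId;   // the WebSocket session that failed
    final Exception cause;    // why the send to that session failed

    BroadcastFailure(String sessionId, Exception cause) {
        this.sessionId = sessionId;
        this.cause = cause;
    }
}

interface WebSocketMessageRouterSketch {
    /**
     * Sends the payload to every connected session and reports the sessions
     * that failed, so PutWebSocket can decide per failure whether to clone
     * the FlowFile and which attributes to set on it.
     */
    List<BroadcastFailure> sendBroadcastMessage(byte[] payload);
}
{code}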




> PutWebSocket processor with blank WebSocket session id attribute cannot 
> transfer to failure queue
> -
>
> Key: NIFI-4169
> URL: https://issues.apache.org/jira/browse/NIFI-4169
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Critical
>  Labels: patch
> Attachments: 
> 0001-websocket-when-sendMessage-fails-under-blank-session.patch
>
>
> If a PutWebSocket processor is set up with a blank WebSocket session id 
> attribute (see NIFI-3318; Send message from PutWebSocket to all connected 
> clients) and it is not connected to a websocket server, it will log the 
> failure and mark the flowfile with Success (rather than Failure) -- and the 
> data is effectively lost.
> If there are multiple connected clients, and some succeed and others fail, 
> routing Failure back into the PutWebSocket could result in duplicate data to 
> some clients.
> Other NiFi processors seem to err on the side of "at least once".





[jira] [Commented] (NIFI-4169) PutWebSocket processor with blank WebSocket session id attribute cannot transfer to failure queue

2017-07-17 Thread Y Wikander (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091061#comment-16091061
 ] 

Y Wikander commented on NIFI-4169:
--

[~ijokarumawak], 
[your|https://issues.apache.org/jira/browse/NIFI-4169?focusedCommentId=16087053=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16087053]
 suggestion.


> PutWebSocket processor with blank WebSocket session id attribute cannot 
> transfer to failure queue
> -
>
> Key: NIFI-4169
> URL: https://issues.apache.org/jira/browse/NIFI-4169
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Critical
>  Labels: patch
> Attachments: 
> 0001-websocket-when-sendMessage-fails-under-blank-session.patch
>
>
> If a PutWebSocket processor is set up with a blank WebSocket session id 
> attribute (see NIFI-3318; Send message from PutWebSocket to all connected 
> clients) and it is not connected to a websocket server, it will log the 
> failure and mark the flowfile with Success (rather than Failure) -- and the 
> data is effectively lost.
> If there are multiple connected clients, and some succeed and others fail, 
> routing Failure back into the PutWebSocket could result in duplicate data to 
> some clients.
> Other NiFi processors seem to err on the side of "at least once".





[jira] [Issue Comment Deleted] (NIFI-4169) PutWebSocket processor with blank WebSocket session id attribute cannot transfer to failure queue

2017-07-17 Thread Y Wikander (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Y Wikander updated NIFI-4169:
-
Comment: was deleted

(was: [~ijokarumawak], 
[your|https://issues.apache.org/jira/browse/NIFI-4169?focusedCommentId=16087053=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16087053]
 suggestion.
)

> PutWebSocket processor with blank WebSocket session id attribute cannot 
> transfer to failure queue
> -
>
> Key: NIFI-4169
> URL: https://issues.apache.org/jira/browse/NIFI-4169
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Critical
>  Labels: patch
> Attachments: 
> 0001-websocket-when-sendMessage-fails-under-blank-session.patch
>
>
> If a PutWebSocket processor is setup with a blank WebSocket session id 
> attribute (see NIFI-3318; Send message from PutWebSocket to all connected 
> clients) and it is not connected to a websocket server it will log the 
> failure and mark the flowfile with Success (rather than Failure) -- and the 
> data is effectively lost.
> If there are multiple connected clients, and some succeed and others fail, 
> routing Failure back into the PutWebSocket could result in duplicate data to 
> some clients.
> Other NiFi processors seem to err on the side of "at least once".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4194) NullPointerException in InvokeHTTP processor when trusted hostname property is used

2017-07-17 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4194:

Attachment: flow_20170717-192339_trusted_hostname.xml

Flow which demonstrates the issue. 

> NullPointerException in InvokeHTTP processor when trusted hostname property 
> is used
> ---
>
> Key: NIFI-4194
> URL: https://issues.apache.org/jira/browse/NIFI-4194
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>  Labels: hostname, okhttp, security, tls
> Attachments: flow_20170717-192339_trusted_hostname.xml
>
>
> When a user configures the {{InvokeHTTP}} processor with HTTPS (using an 
> {{SSLContextService}}) and populates the {{trustedHostname}} property, the 
> processor will throw a {{NullPointerException}} because the OkHttp client 
> does not have a valid {{HostnameVerifier}} configured when the 
> {{@OnScheduled}} method is called and that verifier is delegated to the 
> processor. This results in the stacktrace below:
> {code}
> 2017-07-17 19:15:49,341 ERROR [Timer-Driven Process Thread-6] 
> o.a.nifi.processors.standard.InvokeHTTP 
> InvokeHTTP[id=53784003-015d-1000-ffb0-e07173729c9c] Routing to Failure due to 
> exception: java.lang.NullPointerException: java.lang.NullPointerException
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.processors.standard.InvokeHTTP$OverrideHostnameVerifier.verify(InvokeHTTP.java:1050)
>   at 
> com.squareup.okhttp.internal.io.RealConnection.connectTls(RealConnection.java:192)
>   at 
> com.squareup.okhttp.internal.io.RealConnection.connectSocket(RealConnection.java:145)
>   at 
> com.squareup.okhttp.internal.io.RealConnection.connect(RealConnection.java:108)
>   at 
> com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:184)
>   at 
> com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
>   at 
> com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
>   at 
> com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:283)
>   at 
> com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
>   at com.squareup.okhttp.Call.getResponse(Call.java:286)
>   at 
> com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
>   at 
> com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
>   at com.squareup.okhttp.Call.execute(Call.java:80)
>   at 
> org.apache.nifi.processors.standard.InvokeHTTP.onTrigger(InvokeHTTP.java:630)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
>   at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>   at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> When that method is invoked, the existence of the {{HostnameVerifier}} should 
> be checked before it is provided to the {{OverrideHostnameVerifier}}, and a 
> default value of {{OkHostnameVerifier.INSTANCE}} (public static singleton) 
> should be provided if none is available.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4194) NullPointerException in InvokeHTTP processor when trusted hostname property is used

2017-07-17 Thread Andy LoPresto (JIRA)
Andy LoPresto created NIFI-4194:
---

 Summary: NullPointerException in InvokeHTTP processor when trusted 
hostname property is used
 Key: NIFI-4194
 URL: https://issues.apache.org/jira/browse/NIFI-4194
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.3.0
Reporter: Andy LoPresto


When a user configures the {{InvokeHTTP}} processor with HTTPS (using an 
{{SSLContextService}}) and populates the {{trustedHostname}} property, the 
processor will throw a {{NullPointerException}} because the OkHttp client does 
not have a valid {{HostnameVerifier}} configured when the {{@OnScheduled}} 
method is called and that verifier is delegated to the processor. This results 
in the stacktrace below:

{code}
2017-07-17 19:15:49,341 ERROR [Timer-Driven Process Thread-6] 
o.a.nifi.processors.standard.InvokeHTTP 
InvokeHTTP[id=53784003-015d-1000-ffb0-e07173729c9c] Routing to Failure due to 
exception: java.lang.NullPointerException: java.lang.NullPointerException
java.lang.NullPointerException: null
at 
org.apache.nifi.processors.standard.InvokeHTTP$OverrideHostnameVerifier.verify(InvokeHTTP.java:1050)
at 
com.squareup.okhttp.internal.io.RealConnection.connectTls(RealConnection.java:192)
at 
com.squareup.okhttp.internal.io.RealConnection.connectSocket(RealConnection.java:145)
at 
com.squareup.okhttp.internal.io.RealConnection.connect(RealConnection.java:108)
at 
com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:184)
at 
com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
at 
com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at 
com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:283)
at 
com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.squareup.okhttp.Call.getResponse(Call.java:286)
at 
com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at 
com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at 
org.apache.nifi.processors.standard.InvokeHTTP.onTrigger(InvokeHTTP.java:630)
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

When that method is invoked, the existence of the {{HostnameVerifier}} should 
be checked before it is provided to the {{OverrideHostnameVerifier}}, and a 
default value of {{OkHostnameVerifier.INSTANCE}} (public static singleton) 
should be provided if none is available.
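
A minimal sketch of that fallback, assuming the OkHttp 2.x package layout shown 
in the stack trace; the helper class name is made up for illustration, and the 
wiring into OverrideHostnameVerifier is not shown:

{code:java}
import javax.net.ssl.HostnameVerifier;
import com.squareup.okhttp.internal.tls.OkHostnameVerifier;

// Illustrative helper only: fall back to the public OkHostnameVerifier.INSTANCE
// singleton when no verifier has been configured, so the delegating
// OverrideHostnameVerifier never receives null.
final class HostnameVerifierDefaults {

    private HostnameVerifierDefaults() {
    }

    static HostnameVerifier orDefault(final HostnameVerifier configured) {
        return configured != null ? configured : OkHostnameVerifier.INSTANCE;
    }
}
{code}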



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4184) I needed to put some attributes on REMOTE_GROUP and REMOTE_OWNER in the PutHDFS Processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090987#comment-16090987
 ] 

ASF GitHub Bot commented on NIFI-4184:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2007
  
Hi @panelladavide thanks for your contribution!

In order to enable Expression Language, setting expressionLanguageSupported 
to true is not enough. You also need to evaluate the configured EL. Plus, if 
you need to support EL to use FlowFile attribute, you need to pass a FlowFile 
(the incoming FlowFile in most cases) when EL is evaluated.

Specifically, you need to modify changeOwner method:
- Add EL evaluation like:
 ```
 String owner = 
context.getProperty(REMOTE_OWNER).evaluateAttributeExpressions(flowFile).getValue();
 ```
- Add FlowFile argument, so that it can be passed to 
`evaluateAttributeExpressions(flowFile)`


https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java#L389


>  I needed to put some attributes on REMOTE_GROUP and REMOTE_OWNER in the 
> PutHDFS Processor
> --
>
> Key: NIFI-4184
> URL: https://issues.apache.org/jira/browse/NIFI-4184
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: dav
>
> I needed to put some attributes on REMOTE_GROUP and REMOTE_OWNER. In order to 
> achieve this, I put expressionLanguageSupported(true) on the PropertyDescriptor 
> of REMOTE_GROUP and REMOTE_OWNER.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2007: NIFI-4184: PutHDFS Processor Expression language TRUE on R...

2017-07-17 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2007
  
Hi @panelladavide thanks for your contribution!

In order to enable Expression Language, setting expressionLanguageSupported 
to true is not enough. You also need to evaluate the configured EL. Plus, if 
you need to support EL to use FlowFile attribute, you need to pass a FlowFile 
(the incoming FlowFile in most cases) when EL is evaluated.

Specifically, you need to modify changeOwner method:
- Add EL evaluation like:
 ```
 String owner = 
context.getProperty(REMOTE_OWNER).evaluateAttributeExpressions(flowFile).getValue();
 ```
- Add FlowFile argument, so that it can be passed to 
`evaluateAttributeExpressions(flowFile)`


https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java#L389
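
    For illustration only, the reworked method could look roughly like the sketch 
below. It assumes it lives inside PutHDFS (so REMOTE_OWNER, REMOTE_GROUP and 
getLogger() are in scope); the exact parameter list and error handling of the 
real changeOwner may differ.

 ```java
 // Sketch: assumed to sit inside PutHDFS; parameter list is illustrative.
 protected void changeOwner(final ProcessContext context, final FileSystem hdfs,
                            final Path path, final FlowFile flowFile) {
     try {
         // Evaluate Expression Language against the incoming FlowFile's attributes.
         final String owner = context.getProperty(REMOTE_OWNER)
                 .evaluateAttributeExpressions(flowFile).getValue();
         final String group = context.getProperty(REMOTE_GROUP)
                 .evaluateAttributeExpressions(flowFile).getValue();
         if (owner != null || group != null) {
             hdfs.setOwner(path, owner, group);
         }
     } catch (final Exception e) {
         getLogger().warn("Could not change owner or group of {} on HDFS due to {}",
                 new Object[]{path, e});
     }
 }
 ```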


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4174) GenerateTableFetch does not work with oracle on Nifi 1.2

2017-07-17 Thread Koji Kawamura (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090960#comment-16090960
 ] 

Koji Kawamura commented on NIFI-4174:
-

[~jomach] ExecuteSQL or PutSQL are generally used to execute a sql statement. 
Probably using SplitText or SplitContent before Execute/PutSQL would work if 
incoming data contains multiple lines of SQL.

> GenerateTableFetch does not work with oracle on Nifi 1.2
> 
>
> Key: NIFI-4174
> URL: https://issues.apache.org/jira/browse/NIFI-4174
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Jorge Machado
>Priority: Minor
>
> I'm trying to extract some data from an Oracle DB.  
> I'm getting: 
> {code:java}
> 2017-07-11 16:19:29,612 WARN [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Timed out while waiting for 
> OnScheduled of 'GenerateTableFetch' processor to finish. An attempt is made 
> to cancel the task via Thread.interrupt(). However it does not guarantee that 
> the task will be canceled since the code inside current OnScheduled operation 
> may have been written to ignore interrupts which may result in a runaway 
> thread. This could lead to more issues, eventually requiring NiFi to be 
> restarted. This is usually a bug in the target Processor 
> 'GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4]' that needs to 
> be documented, reported and eventually fixed.
> 2017-07-11 16:19:29,612 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.p.standard.GenerateTableFetch 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] failed to invoke 
> @OnScheduled method due to java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.; processor will not be 
> scheduled to run for 30 seconds: java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
>   ... 9 common frames omitted
> 2017-07-11 16:19:29,613 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Failed to invoke @OnScheduled method 
> due to java.lang.RuntimeException: Timed out while executing one of 
> processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> 

[jira] [Commented] (NIFI-4057) Docker Image is twice as large as necessary

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090650#comment-16090650
 ] 

ASF GitHub Bot commented on NIFI-4057:
--

Github user cricket007 commented on the issue:

https://github.com/apache/nifi/pull/1910
  
Is there any way to re-publish the 1.2.0 and 1.3.0 images so that those 
aren't as large?


> Docker Image is twice as large as necessary
> ---
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Jordan Moore
>Assignee: Jordan Moore
>Priority: Minor
> Fix For: 1.4.0
>
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double 
> the size of the image by creating a Docker layer of the same size as the 
> extracted binary. 
> See GitHub discussion: 
> https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size 
> required by extracting the Nifi binaries. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1910: NIFI-4057 Docker Image is twice as large as necessary

2017-07-17 Thread cricket007
Github user cricket007 commented on the issue:

https://github.com/apache/nifi/pull/1910
  
Is there any way to re-publish the 1.2.0 and 1.3.0 images so that those 
aren't as large?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4124) Add a Record API-based PutMongo clone

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090592#comment-16090592
 ] 

ASF GitHub Bot commented on NIFI-4124:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/1945
  
@markap14 are you in a position to help close this out since @bbende is 
offline for a bit?


> Add a Record API-based PutMongo clone
> -
>
> Key: NIFI-4124
> URL: https://issues.apache.org/jira/browse/NIFI-4124
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>  Labels: mongodb, putmongo, records
>
> A new processor that can use the Record API to put data into Mongo is needed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1945: NIFI-4124 Added org.apache.nifi.mongo.PutMongoRecord.

2017-07-17 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/1945
  
@markap14 are you in a position to help close this out since @bbende is 
offline for a bit?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4124) Add a Record API-based PutMongo clone

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090589#comment-16090589
 ] 

ASF GitHub Bot commented on NIFI-4124:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1945
  
@bbende @markap14 @pvillard31 Any chance of getting this merged?


> Add a Record API-based PutMongo clone
> -
>
> Key: NIFI-4124
> URL: https://issues.apache.org/jira/browse/NIFI-4124
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Priority: Minor
>  Labels: mongodb, putmongo, records
>
> A new processor that can use the Record API to put data into Mongo is needed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1945: NIFI-4124 Added org.apache.nifi.mongo.PutMongoRecord.

2017-07-17 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1945
  
@bbende @markap14 @pvillard31 Any chance of getting this merged?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (NIFI-4057) Docker Image is twice as large as necessary

2017-07-17 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri reassigned NIFI-4057:
-

Assignee: Jordan Moore

> Docker Image is twice as large as necessary
> ---
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Jordan Moore
>Assignee: Jordan Moore
>Priority: Minor
> Fix For: 1.4.0
>
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double 
> the size of the image by creating a Docker layer of the same size as the 
> extracted binary. 
> See GitHub discussion: 
> https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size 
> required by extracting the Nifi binaries. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4057) Docker Image is twice as large as necessary

2017-07-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090584#comment-16090584
 ] 

ASF subversion and git services commented on NIFI-4057:
---

Commit 3da8b94dddc3b08ecbf10f368240dd1b3e992bbf in nifi's branch 
refs/heads/master from [~cricket007]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3da8b94 ]

NIFI-4057 Docker Image is twice as large as necessary

  * Merging unnecessary layers
  * MAINTAINER is deprecated
  * Using JRE as base since JDK is not necessary
  * Set GID=1000 since openjdk image already defines 50
  * Add ability to specify Apache mirror site to reduce load on Apache Archive
  * Created templates directory since this is not included in the binary

This closes #1910.

Signed-off-by: Aldrin Piri 


> Docker Image is twice as large as necessary
> ---
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Jordan Moore
>Priority: Minor
> Fix For: 1.4.0
>
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double 
> the size of the image by creating a Docker layer of the same size as the 
> extracted binary. 
> See GitHub discussion: 
> https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size 
> required by extracting the Nifi binaries. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-4057) Docker Image is twice as large as necessary

2017-07-17 Thread Aldrin Piri (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldrin Piri resolved NIFI-4057.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

> Docker Image is twice as large as necessary
> ---
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Jordan Moore
>Priority: Minor
> Fix For: 1.4.0
>
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double 
> the size of the image by creating a Docker layer of the same size as the 
> extracted binary. 
> See GitHub discussion: 
> https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size 
> required by extracting the Nifi binaries. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4057) Docker Image is twice as large as necessary

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090586#comment-16090586
 ] 

ASF GitHub Bot commented on NIFI-4057:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1910


> Docker Image is twice as large as necessary
> ---
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Jordan Moore
>Priority: Minor
> Fix For: 1.4.0
>
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double 
> the size of the image by creating a Docker layer of the same size as the 
> extracted binary. 
> See GitHub discussion: 
> https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size 
> required by extracting the Nifi binaries. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1910: NIFI-4057 Docker Image is twice as large as necessa...

2017-07-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1910


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4057) Docker Image is twice as large as necessary

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090565#comment-16090565
 ] 

ASF GitHub Bot commented on NIFI-4057:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1910#discussion_r127826350
  
--- Diff: nifi-docker/dockermaven/Dockerfile ---
@@ -16,32 +16,33 @@
 # under the License.
 #
 
-FROM openjdk:8
-MAINTAINER Apache NiFi 
+FROM openjdk:8-jre
+LABEL maintainer "Apache NiFi "
 
 ARG UID=1000
-ARG GID=50
+ARG GID=1000
 ARG NIFI_VERSION
 ARG NIFI_BINARY
 
 ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION
 
 # Setup NiFi user
-RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1`
-RUN useradd --shell /bin/bash -u $UID -g $GID -m nifi
-RUN mkdir -p $NIFI_HOME 
+RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1` \
+&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \
+&& mkdir -p $NIFI_HOME/conf/templates
+&& chown -R nifi:nifi $NIFI_BASE_DIR
+
+USER nifi
 
 ADD $NIFI_BINARY $NIFI_BASE_DIR
 RUN chown -R nifi:nifi $NIFI_HOME
--- End diff --

Going to merge this in as I don't believe we have a better way to work 
around this for the local case and presumably there is not another way without 
getting overly complex.


> Docker Image is twice as large as necessary
> ---
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Jordan Moore
>Priority: Minor
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double 
> the size of the image by creating a Docker layer of the same size as the 
> extracted binary. 
> See GitHub discussion: 
> https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size 
> required by extracting the Nifi binaries. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1910: NIFI-4057 Docker Image is twice as large as necessa...

2017-07-17 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1910#discussion_r127826350
  
--- Diff: nifi-docker/dockermaven/Dockerfile ---
@@ -16,32 +16,33 @@
 # under the License.
 #
 
-FROM openjdk:8
-MAINTAINER Apache NiFi 
+FROM openjdk:8-jre
+LABEL maintainer "Apache NiFi "
 
 ARG UID=1000
-ARG GID=50
+ARG GID=1000
 ARG NIFI_VERSION
 ARG NIFI_BINARY
 
 ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION
 
 # Setup NiFi user
-RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1`
-RUN useradd --shell /bin/bash -u $UID -g $GID -m nifi
-RUN mkdir -p $NIFI_HOME 
+RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1` \
+&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \
+&& mkdir -p $NIFI_HOME/conf/templates
+&& chown -R nifi:nifi $NIFI_BASE_DIR
+
+USER nifi
 
 ADD $NIFI_BINARY $NIFI_BASE_DIR
 RUN chown -R nifi:nifi $NIFI_HOME
--- End diff --

Going to merge this in as I don't believe we have a better way to work 
around this for the local case and presumably there is not another way without 
getting overly complex.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4057) Docker Image is twice as large as necessary

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090550#comment-16090550
 ] 

ASF GitHub Bot commented on NIFI-4057:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1910#discussion_r127824242
  
--- Diff: nifi-docker/dockermaven/Dockerfile ---
@@ -16,32 +16,33 @@
 # under the License.
 #
 
-FROM openjdk:8
-MAINTAINER Apache NiFi 
+FROM openjdk:8-jre
+LABEL maintainer "Apache NiFi "
 
 ARG UID=1000
-ARG GID=50
+ARG GID=1000
 ARG NIFI_VERSION
 ARG NIFI_BINARY
 
 ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION
 
 # Setup NiFi user
-RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1`
-RUN useradd --shell /bin/bash -u $UID -g $GID -m nifi
-RUN mkdir -p $NIFI_HOME 
+RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1` \
+&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \
+&& mkdir -p $NIFI_HOME/conf/templates
+&& chown -R nifi:nifi $NIFI_BASE_DIR
+
+USER nifi
 
 ADD $NIFI_BINARY $NIFI_BASE_DIR
 RUN chown -R nifi:nifi $NIFI_HOME
--- End diff --

But this causes the duplicate layer issue again.  Bit of a different 
environment as we are not able to add & chmod the files in the same sequence 
given the nature of the ADD command.  May just have to be a concession we make 
for the local environment.


> Docker Image is twice as large as necessary
> ---
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Jordan Moore
>Priority: Minor
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double 
> the size of the image by creating a Docker layer of the same size as the 
> extracted binary. 
> See GitHub discussion: 
> https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size 
> required by extracting the Nifi binaries. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1910: NIFI-4057 Docker Image is twice as large as necessa...

2017-07-17 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1910#discussion_r127824242
  
--- Diff: nifi-docker/dockermaven/Dockerfile ---
@@ -16,32 +16,33 @@
 # under the License.
 #
 
-FROM openjdk:8
-MAINTAINER Apache NiFi 
+FROM openjdk:8-jre
+LABEL maintainer "Apache NiFi "
 
 ARG UID=1000
-ARG GID=50
+ARG GID=1000
 ARG NIFI_VERSION
 ARG NIFI_BINARY
 
 ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION
 
 # Setup NiFi user
-RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1`
-RUN useradd --shell /bin/bash -u $UID -g $GID -m nifi
-RUN mkdir -p $NIFI_HOME 
+RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1` \
+&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \
+&& mkdir -p $NIFI_HOME/conf/templates
+&& chown -R nifi:nifi $NIFI_BASE_DIR
+
+USER nifi
 
 ADD $NIFI_BINARY $NIFI_BASE_DIR
 RUN chown -R nifi:nifi $NIFI_HOME
--- End diff --

But this causes the duplicate layer issue again.  Bit of a different 
environment as we are not able to add & chmod the files in the same sequence 
given the nature of the ADD command.  May just have to be a concession we make 
for the local environment.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4057) Docker Image is twice as large as necessary

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090547#comment-16090547
 ] 

ASF GitHub Bot commented on NIFI-4057:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1910#discussion_r127823749
  
--- Diff: nifi-docker/dockermaven/Dockerfile ---
@@ -16,32 +16,33 @@
 # under the License.
 #
 
-FROM openjdk:8
-MAINTAINER Apache NiFi 
+FROM openjdk:8-jre
+LABEL maintainer "Apache NiFi "
 
 ARG UID=1000
-ARG GID=50
+ARG GID=1000
 ARG NIFI_VERSION
 ARG NIFI_BINARY
 
 ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION
 
 # Setup NiFi user
-RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1`
-RUN useradd --shell /bin/bash -u $UID -g $GID -m nifi
-RUN mkdir -p $NIFI_HOME 
+RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1` \
+&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \
+&& mkdir -p $NIFI_HOME/conf/templates
+&& chown -R nifi:nifi $NIFI_BASE_DIR
+
+USER nifi
 
 ADD $NIFI_BINARY $NIFI_BASE_DIR
 RUN chown -R nifi:nifi $NIFI_HOME
--- End diff --

Looks like the chown was an issue after user.  Moving USER below the chown 
seems to work appropriately.


> Docker Image is twice as large as necessary
> ---
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Jordan Moore
>Priority: Minor
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double 
> the size of the image by creating a Docker layer of the same size as the 
> extracted binary. 
> See GitHub discussion: 
> https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size 
> required by extracting the Nifi binaries. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1910: NIFI-4057 Docker Image is twice as large as necessa...

2017-07-17 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1910#discussion_r127823749
  
--- Diff: nifi-docker/dockermaven/Dockerfile ---
@@ -16,32 +16,33 @@
 # under the License.
 #
 
-FROM openjdk:8
-MAINTAINER Apache NiFi 
+FROM openjdk:8-jre
+LABEL maintainer "Apache NiFi "
 
 ARG UID=1000
-ARG GID=50
+ARG GID=1000
 ARG NIFI_VERSION
 ARG NIFI_BINARY
 
 ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION
 
 # Setup NiFi user
-RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1`
-RUN useradd --shell /bin/bash -u $UID -g $GID -m nifi
-RUN mkdir -p $NIFI_HOME 
+RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1` \
+&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \
+&& mkdir -p $NIFI_HOME/conf/templates
+&& chown -R nifi:nifi $NIFI_BASE_DIR
+
+USER nifi
 
 ADD $NIFI_BINARY $NIFI_BASE_DIR
 RUN chown -R nifi:nifi $NIFI_HOME
--- End diff --

Looks like the chown was an issue after user.  Moving USER below the chown 
seems to work appropriately.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4057) Docker Image is twice as large as necessary

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090510#comment-16090510
 ] 

ASF GitHub Bot commented on NIFI-4057:
--

Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1910#discussion_r127817762
  
--- Diff: nifi-docker/dockermaven/Dockerfile ---
@@ -16,32 +16,33 @@
 # under the License.
 #
 
-FROM openjdk:8
-MAINTAINER Apache NiFi 
+FROM openjdk:8-jre
+LABEL maintainer "Apache NiFi "
 
 ARG UID=1000
-ARG GID=50
+ARG GID=1000
 ARG NIFI_VERSION
 ARG NIFI_BINARY
 
 ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION
 
 # Setup NiFi user
-RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1`
-RUN useradd --shell /bin/bash -u $UID -g $GID -m nifi
-RUN mkdir -p $NIFI_HOME 
+RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1` \
+&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \
+&& mkdir -p $NIFI_HOME/conf/templates
--- End diff --

This needs a '\'

Everything else looks pretty good and just verifying successful build.  If 
so, I can adjust this on merge.


> Docker Image is twice as large as necessary
> ---
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Jordan Moore
>Priority: Minor
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double 
> the size of the image by creating a Docker layer of the same size as the 
> extracted binary. 
> See GitHub discussion: 
> https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size 
> required by extracting the Nifi binaries. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1910: NIFI-4057 Docker Image is twice as large as necessa...

2017-07-17 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1910#discussion_r127817762
  
--- Diff: nifi-docker/dockermaven/Dockerfile ---
@@ -16,32 +16,33 @@
 # under the License.
 #
 
-FROM openjdk:8
-MAINTAINER Apache NiFi 
+FROM openjdk:8-jre
+LABEL maintainer "Apache NiFi "
 
 ARG UID=1000
-ARG GID=50
+ARG GID=1000
 ARG NIFI_VERSION
 ARG NIFI_BINARY
 
 ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION
 
 # Setup NiFi user
-RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1`
-RUN useradd --shell /bin/bash -u $UID -g $GID -m nifi
-RUN mkdir -p $NIFI_HOME 
+RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: 
-f1` \
+&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \
+&& mkdir -p $NIFI_HOME/conf/templates
--- End diff --

This needs a '\'

Everything else looks pretty good and just verifying successful build.  If 
so, I can adjust this on merge.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp issue #114: site2site port negotiation

2017-07-17 Thread benqiu2016
Github user benqiu2016 commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/114
  
Thanks for the review. Can we merge the PR?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (NIFI-4193) Transition to Spotify dockerfile-maven plugin

2017-07-17 Thread Aldrin Piri (JIRA)
Aldrin Piri created NIFI-4193:
-

 Summary: Transition to Spotify dockerfile-maven plugin
 Key: NIFI-4193
 URL: https://issues.apache.org/jira/browse/NIFI-4193
 Project: Apache NiFi
  Issue Type: Task
Reporter: Aldrin Piri


As per 
https://github.com/spotify/docker-maven-plugin#the-future-of-docker-maven-plugin,
 it is preferred to make use of the dockerfile-maven plugin 
(https://github.com/spotify/dockerfile-maven). We should consider using this 
plugin instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1969: NIFI-4082 - Added EL on GetMongo properties

2017-07-17 Thread jfrazee
Github user jfrazee commented on the issue:

https://github.com/apache/nifi/pull/1969
  
@pvillard31 I think this needs more updates to the tests. Since you've 
added URI, collection, and DB, we should probably add that to the Get test. 
And, I think there are similar changes that make sense for the Put and Abstract 
processors.
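
    For instance, something along these lines could be added for the Get test (a 
sketch only; the property constants and the post-PR validation behaviour are 
assumptions on my part, not a verified test from the code base):

 ```java
 import org.apache.nifi.processors.mongodb.GetMongo;
 import org.apache.nifi.util.TestRunner;
 import org.apache.nifi.util.TestRunners;
 import org.junit.Test;

 public class GetMongoExpressionLanguageSketchTest {

     @Test
     public void queryWithExpressionLanguageShouldStillValidate() {
         final TestRunner runner = TestRunners.newTestRunner(GetMongo.class);
         // Literal connection settings so the processor is otherwise valid.
         runner.setProperty(GetMongo.URI, "mongodb://localhost:27017");
         runner.setProperty(GetMongo.DATABASE_NAME, "testDatabase");
         runner.setProperty(GetMongo.COLLECTION_NAME, "testCollection");
         // Expression Language in the Query property should not fail validation.
         runner.setProperty(GetMongo.QUERY, "{ \"name\": \"${name}\" }");
         runner.assertValid();
     }
 }
 ```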


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4082) Enable nifi expression language for GetMongo - Query property

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090470#comment-16090470
 ] 

ASF GitHub Bot commented on NIFI-4082:
--

Github user jfrazee commented on the issue:

https://github.com/apache/nifi/pull/1969
  
@pvillard31 I think this needs more updates to the tests. Since you've 
added URI, collection, and DB, we should probably add that to the Get test. 
And, I think there are similar changes that make sense for the Put and Abstract 
processors.


> Enable nifi expression language for GetMongo - Query property
> -
>
> Key: NIFI-4082
> URL: https://issues.apache.org/jira/browse/NIFI-4082
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Dmitry Lukyanov
>Assignee: Pierre Villard
>Priority: Trivial
>
> Currently the `Query` property of the  `GetMongo` processor does not support 
> expression language.
> That disables query parametrization.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3335) GenerateTableFetch should allow you to specify an initial Max Value

2017-07-17 Thread Matt Burgess (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090455#comment-16090455
 ] 

Matt Burgess commented on NIFI-3335:


[~patricker] Are you still working on this? If not, do you mind if I unassign 
it?  Thanks!

> GenerateTableFetch should allow you to specify an initial Max Value
> ---
>
> Key: NIFI-3335
> URL: https://issues.apache.org/jira/browse/NIFI-3335
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Peter Wicks
>
> NIFI-2583 added the ability (via dynamic properties) to specify initial Max 
> Values for columns, to enable the user to "pick up where they left off" if 
> something happened with a flow, a NiFi instance, etc. where the state was 
> stored but the processing did not complete successfully.
> This feature would also be helpful in GenerateTableFetch, which also supports 
> max-value columns.
> Since NIFI-2881 adds incoming flow file support, it's more useful if initial 
> max values can be specified via flow file attributes, because if a table name 
> is dynamically passed via a flow file attribute and Expression Language, the 
> user won't be able to configure a dynamic processor property in advance for 
> each possible table.
> Add dynamic properties ('initial.maxvalue.' same as 
> QueryDatabaseTable) to specify initial max values statically, and also use 
> incoming flow file attributes named 'initial.maxvalue.' if 
> any. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4082) Enable nifi expression language for GetMongo - Query property

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090447#comment-16090447
 ] 

ASF GitHub Bot commented on NIFI-4082:
--

Github user jfrazee commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1969#discussion_r127808068
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -59,14 +59,21 @@
 public static final Validator DOCUMENT_VALIDATOR = new Validator() {
 @Override
 public ValidationResult validate(final String subject, final 
String value, final ValidationContext context) {
+final ValidationResult.Builder builder = new 
ValidationResult.Builder();
+builder.subject(subject).input(value);
+
+if (context.isExpressionLanguageSupported(subject) && 
context.isExpressionLanguagePresent(value)) {
+return builder.valid(true).explanation("Contains 
Expression Language").build();
+}
+
 String reason = null;
 try {
 Document.parse(value);
 } catch (final RuntimeException e) {
 reason = e.getClass().getName();
--- End diff --

Seems like this should be `e.getMessage()` or `e.getClass().getName() + 
": " + e.getMessage()` if you want the underlying exception class.


> Enable nifi expression language for GetMongo - Query property
> -
>
> Key: NIFI-4082
> URL: https://issues.apache.org/jira/browse/NIFI-4082
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Dmitry Lukyanov
>Assignee: Pierre Villard
>Priority: Trivial
>
> Currently the `Query` property of the  `GetMongo` processor does not support 
> expression language.
> That disables query parametrization.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1969: NIFI-4082 - Added EL on GetMongo properties

2017-07-17 Thread jfrazee
Github user jfrazee commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1969#discussion_r127808068
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -59,14 +59,21 @@
 public static final Validator DOCUMENT_VALIDATOR = new Validator() {
 @Override
 public ValidationResult validate(final String subject, final 
String value, final ValidationContext context) {
+final ValidationResult.Builder builder = new 
ValidationResult.Builder();
+builder.subject(subject).input(value);
+
+if (context.isExpressionLanguageSupported(subject) && 
context.isExpressionLanguagePresent(value)) {
+return builder.valid(true).explanation("Contains 
Expression Language").build();
+}
+
 String reason = null;
 try {
 Document.parse(value);
 } catch (final RuntimeException e) {
 reason = e.getClass().getName();
--- End diff --

Seems like this should be `e.getMessage()` or `e.getClass().getName() + 
": " + e.getMessage()` if you want the underlying exception class.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4057) Docker Image is twice as large as necessary

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090336#comment-16090336
 ] 

ASF GitHub Bot commented on NIFI-4057:
--

Github user apiri commented on the issue:

https://github.com/apache/nifi/pull/1910
  
reviewing


> Docker Image is twice as large as necessary
> ---
>
> Key: NIFI-4057
> URL: https://issues.apache.org/jira/browse/NIFI-4057
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Jordan Moore
>Priority: Minor
>
> By calling {{chown}} as a secondary {{RUN}} command, you effectively double 
> the size of the image by creating a Docker layer of the same size as the 
> extracted binary. 
> See GitHub discussion: 
> https://github.com/apache/nifi/pull/1372#issuecomment-307592287
> *Expectation*
> The resultant Docker image should be no larger than the Base image + the size 
> required by extracting the Nifi binaries. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1910: NIFI-4057 Docker Image is twice as large as necessary

2017-07-17 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi/pull/1910
  
reviewing


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp issue #114: site2site port negotiation

2017-07-17 Thread kevdoran
Github user kevdoran commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/114
  
@benqiu2016 thanks for making that round of changes! They look good. I'll 
be happy to test the branch when I get a chance to set up a NiFi environment 
that I can use to test some of the more advanced configurations that are 
supported (clustered peers, secure client). I should have a chance to do that 
this week, hopefully in the next couple of days. As a general comment, it looks 
like there are quite a few cases that are not covered by corresponding 
automated tests, so a lot of this will rely on manual verification. You may want to 
open a JIRA ticket for adding some additional unit and integration tests 
as a way to verify this feature over time as changes are made to the codebase. 
Thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090058#comment-16090058
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1961
  
@bbende Ok, should be good to go now.


> Create EvaluateRecordPath processor
> ---
>
> Key: NIFI-4024
> URL: https://issues.apache.org/jira/browse/NIFI-4024
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Steve Champagne
>Priority: Minor
>
> With the new RecordPath DSL, it would be nice if there was a processor that 
> could pull fields into attributes of the flowfile based on a RecordPath. This 
> would be similar to the EvaluateJsonPath processor that currently exists, 
> except it could be used to pull fields from arbitrary record formats. My 
> current use case for it would be pulling fields out of Avro records while 
> skipping the steps of converting Avro to JSON, evaluating JsonPath, and 
> then converting back to Avro. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecord

2017-07-17 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1961
  
@bbende Ok, should be good to go now.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-106) Processor Counters should be included in the Status Reports

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089975#comment-16089975
 ] 

ASF GitHub Bot commented on NIFI-106:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r127741006
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FlowController.java
 ---
@@ -2867,6 +2867,7 @@ private ProcessorStatus getProcessorStatus(final 
RepositoryStatusReport report,
 status.setFlowFilesSent(entry.getFlowFilesSent());
 status.setBytesSent(entry.getBytesSent());
 status.setFlowFilesRemoved(entry.getFlowFilesRemoved());
+status.setCounters(entry.getCounters());
--- End diff --

This should be done conditionally based on `isProcessorAuthorized`. When 
captured for status history purposes that `Predicate` will always result in 
`true`.


> Processor Counters should be included in the Status Reports
> ---
>
> Key: NIFI-106
> URL: https://issues.apache.org/jira/browse/NIFI-106
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Minor
>
> This would allow a Processor's Status History to show counters that were 
> maintained over time periods instead of having only a single count since 
> system start.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-106) Processor Counters should be included in the Status Reports

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089976#comment-16089976
 ] 

ASF GitHub Bot commented on NIFI-106:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r127742238
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/controller/status/ProcessorStatus.java 
---
@@ -234,6 +245,7 @@ public ProcessorStatus clone() {
 clonedObj.flowFilesRemoved = flowFilesRemoved;
 clonedObj.runStatus = runStatus;
 clonedObj.type = type;
+clonedObj.counters = new HashMap<>(counters);
--- End diff --

May need to protect against NPE when `counters` is null.
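
A minimal sketch of a null-tolerant copy, assuming a plain Map field as in the diff:

    import java.util.HashMap;
    import java.util.Map;

    class CloneCountersSketch {
        private Map<String, Long> counters;

        // Defensive copy that tolerates a null counters map, per the review comment.
        Map<String, Long> copyCounters() {
            return counters == null ? null : new HashMap<>(counters);
        }
    }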


> Processor Counters should be included in the Status Reports
> ---
>
> Key: NIFI-106
> URL: https://issues.apache.org/jira/browse/NIFI-106
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Minor
>
> This would allow a Processor's Status History to show counters that were 
> maintained over time periods instead of having only a single count since 
> system start.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-106) Processor Counters should be included in the Status Reports

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089978#comment-16089978
 ] 

ASF GitHub Bot commented on NIFI-106:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r127741268
  
--- Diff: 
nifi-framework-api/src/main/java/org/apache/nifi/controller/status/history/StatusHistory.java
 ---
@@ -41,4 +41,9 @@
  * @return List of snapshots for a given component
  */
 List<StatusSnapshot> getStatusSnapshots();
+
+/**
+ * @return true if counter values are included in the 
Status History
+ */
+boolean isIncludeCounters();
--- End diff --

If we're able to remove the flag from the `StatusHistoryDTO`, we may also 
be able to remove this one.


> Processor Counters should be included in the Status Reports
> ---
>
> Key: NIFI-106
> URL: https://issues.apache.org/jira/browse/NIFI-106
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Minor
>
> This would allow a Processor's Status History to show counters that were 
> maintained over time periods instead of having only a single count since 
> system start.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-106) Processor Counters should be included in the Status Reports

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089977#comment-16089977
 ] 

ASF GitHub Bot commented on NIFI-106:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r127739416
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/StatusHistoryEndpointMerger.java
 ---
@@ -109,13 +119,49 @@ public NodeResponse merge(URI uri, String method, 
Set successfulRe
 noReadPermissionsComponentDetails = 
nodeStatus.getComponentDetails();
 }
 
+if (!nodeStatus.isIncludeCounters()) {
--- End diff --

I'm not sure we need to add a new field to the `nodeStatus` as the read 
permission is already present in the corresponding `nodeResponseEntity`.


> Processor Counters should be included in the Status Reports
> ---
>
> Key: NIFI-106
> URL: https://issues.apache.org/jira/browse/NIFI-106
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Minor
>
> This would allow a Processor's Status History to show counters that were 
> maintained over time periods instead of having only a single count since 
> system start.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1872: NIFI-106: Expose processors' counters in Stats Hist...

2017-07-17 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r127742238
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/controller/status/ProcessorStatus.java 
---
@@ -234,6 +245,7 @@ public ProcessorStatus clone() {
 clonedObj.flowFilesRemoved = flowFilesRemoved;
 clonedObj.runStatus = runStatus;
 clonedObj.type = type;
+clonedObj.counters = new HashMap<>(counters);
--- End diff --

May need to protect against NPE when `counters` is null.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1872: NIFI-106: Expose processors' counters in Stats Hist...

2017-07-17 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r127739416
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/StatusHistoryEndpointMerger.java
 ---
@@ -109,13 +119,49 @@ public NodeResponse merge(URI uri, String method, 
Set successfulRe
 noReadPermissionsComponentDetails = 
nodeStatus.getComponentDetails();
 }
 
+if (!nodeStatus.isIncludeCounters()) {
--- End diff --

I'm not sure we need to add a new field to the `nodeStatus` as the read 
permission is already present in the corresponding `nodeResponseEntity`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1872: NIFI-106: Expose processors' counters in Stats Hist...

2017-07-17 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r127741268
  
--- Diff: 
nifi-framework-api/src/main/java/org/apache/nifi/controller/status/history/StatusHistory.java
 ---
@@ -41,4 +41,9 @@
  * @return List of snapshots for a given component
  */
 List<StatusSnapshot> getStatusSnapshots();
+
+/**
+ * @return true if counter values are included in the 
Status History
+ */
+boolean isIncludeCounters();
--- End diff --

If we're able to remove the flag from the `StatusHistoryDTO`, we may also 
be able to remove this one.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1872: NIFI-106: Expose processors' counters in Stats Hist...

2017-07-17 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1872#discussion_r127741006
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FlowController.java
 ---
@@ -2867,6 +2867,7 @@ private ProcessorStatus getProcessorStatus(final 
RepositoryStatusReport report,
 status.setFlowFilesSent(entry.getFlowFilesSent());
 status.setBytesSent(entry.getBytesSent());
 status.setFlowFilesRemoved(entry.getFlowFilesRemoved());
+status.setCounters(entry.getCounters());
--- End diff --

This should be done conditionally based on `isProcessorAuthorized`. When 
captured for status history purposes that `Predicate` will always result in 
`true`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4142) Implement a ValidateRecord Processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089956#comment-16089956
 ] 

ASF GitHub Bot commented on NIFI-4142:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2015#discussion_r127738912
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/RecordReader.java
 ---
@@ -38,14 +38,35 @@
 public interface RecordReader extends Closeable {
 
 /**
- * Returns the next record in the stream or null if no 
more records are available.
+ * Returns the next record in the stream or null if no 
more records are available. Schema enforcement will be enabled.
  *
  * @return the next record in the stream or null if no 
more records are available.
  *
  * @throws IOException if unable to read from the underlying data
  * @throws MalformedRecordException if an unrecoverable failure occurs 
when trying to parse a record
+ * @throws SchemaValidationException if a Record contains a field that 
violates the schema and cannot be coerced into the appropriate field type.
  */
-Record nextRecord() throws IOException, MalformedRecordException;
+default Record nextRecord() throws IOException, 
MalformedRecordException {
--- End diff --

we should indicate whether the schema enforcement strictness is 'lenient' 
or 'strict'.
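
For illustration, a hedged sketch of one way to make the strictness explicit, 
using an enum and Object as a stand-in for the Record type; the names are 
invented and not the actual NiFi API:

    import java.io.Closeable;
    import java.io.IOException;

    // Trimmed-down reader interface: the no-arg variant documents its strictness
    // by delegating to an explicit SchemaEnforcement value.
    interface LenientOrStrictReaderSketch extends Closeable {

        enum SchemaEnforcement { LENIENT, STRICT }

        default Object nextRecord() throws IOException {
            return nextRecord(SchemaEnforcement.STRICT);
        }

        Object nextRecord(SchemaEnforcement enforcement) throws IOException;
    }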


> Implement a ValidateRecord Processor
> 
>
> Key: NIFI-4142
> URL: https://issues.apache.org/jira/browse/NIFI-4142
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> We need a processor that is capable of validating that all Records in a 
> FlowFile adhere to the proper schema.
> The Processor should be configured with a Record Reader and should route each 
> record to either 'valid' or 'invalid' based on whether or not the record 
> adheres to the reader's schema. A record would be invalid in any of the 
> following cases:
> - Missing field that is required according to the schema
> - Extra field that is not present in schema (it should be configurable 
> whether or not this is a failure)
> - Field requires coercion and strict type checking enabled (this should also 
> be configurable)
> - Field is invalid, such as the value "hello" when it should be an integer



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4142) Implement a ValidateRecord Processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089955#comment-16089955
 ] 

ASF GitHub Bot commented on NIFI-4142:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2015#discussion_r127738787
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/RecordReader.java
 ---
@@ -38,14 +38,35 @@
 public interface RecordReader extends Closeable {
 
 /**
- * Returns the next record in the stream or null if no 
more records are available.
+ * Returns the next record in the stream or null if no 
more records are available. Schema enforcement will be enabled.
  *
  * @return the next record in the stream or null if no 
more records are available.
  *
  * @throws IOException if unable to read from the underlying data
  * @throws MalformedRecordException if an unrecoverable failure occurs 
when trying to parse a record
+ * @throws SchemaValidationException if a Record contains a field that 
violates the schema and cannot be coerced into the appropriate field type.
  */
-Record nextRecord() throws IOException, MalformedRecordException;
+default Record nextRecord() throws IOException, 
MalformedRecordException {
+return nextRecord(true);
+}
+
+/**
+ * Reads the next record from the underlying stream. If schema 
enforcement is enabled, then any field in the Record whose type does not
+ * match the schema will be coerced to the correct type and a 
MalformedRecordException will be thrown if unable to coerce the data into
+ * the correct type. If schema enforcement is disabled, then no type 
coercion will occur. As a result, calling
+ * {@link 
Record#getValue(org.apache.nifi.serialization.record.RecordField)}
+ * may return any type of Object, such as a String or another Record, 
even though the schema indicates that the field must be an integer.
+ *
+ * @param enforceSchema whether or not fields in the Record should be 
validated against the schema and coerced when necessary
+ *
+ * @return the next record in the stream or null if no 
more records are available
+ * @throws IOException if unable to read from the underlying data
+ * @throws MalformedRecordException if an unrecoverable failure occurs 
when trying to parse a record, or a Record contains a field
+ * that violates the schema and cannot be coerced into the 
appropriate field type.
+ * @throws SchemaValidationException if a Record contains a field that 
violates the schema and cannot be coerced into the appropriate
+ * field type and schema enforcement is enabled
+ */
+Record nextRecord(boolean enforceSchema) throws IOException, 
MalformedRecordException;
--- End diff --

the schema had always been enforced, arguably just with a sense of leniency. 
I think this method parameter should be 'strictSchemaEnforcement' or 
'enforceStrictSchema'.
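
A minimal sketch of the suggested rename, again with Object standing in for the 
Record type so the example is self-contained; this is not the actual interface:

    import java.io.Closeable;
    import java.io.IOException;

    interface StrictSchemaReaderSketch extends Closeable {

        // No-arg variant keeps strict enforcement on, mirroring the existing default.
        default Object nextRecord() throws IOException {
            return nextRecord(true);
        }

        // Renamed boolean per the review: 'enforceStrictSchema' rather than 'enforceSchema'.
        Object nextRecord(boolean enforceStrictSchema) throws IOException;
    }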


> Implement a ValidateRecord Processor
> 
>
> Key: NIFI-4142
> URL: https://issues.apache.org/jira/browse/NIFI-4142
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> We need a processor that is capable of validating that all Records in a 
> FlowFile adhere to the proper schema.
> The Processor should be configured with a Record Reader and should route each 
> record to either 'valid' or 'invalid' based on whether or not the record 
> adheres to the reader's schema. A record would be invalid in any of the 
> following cases:
> - Missing field that is required according to the schema
> - Extra field that is not present in schema (it should be configurable 
> whether or not this is a failure)
> - Field requires coercion and strict type checking enabled (this should also 
> be configurable)
> - Field is invalid, such as the value "hello" when it should be an integer



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2015: NIFI-4142: Refactored Record Reader/Writer to allow...

2017-07-17 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2015#discussion_r127738912
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/RecordReader.java
 ---
@@ -38,14 +38,35 @@
 public interface RecordReader extends Closeable {
 
 /**
- * Returns the next record in the stream or null if no 
more records are available.
+ * Returns the next record in the stream or null if no 
more records are available. Schema enforcement will be enabled.
  *
  * @return the next record in the stream or null if no 
more records are available.
  *
  * @throws IOException if unable to read from the underlying data
  * @throws MalformedRecordException if an unrecoverable failure occurs 
when trying to parse a record
+ * @throws SchemaValidationException if a Record contains a field that 
violates the schema and cannot be coerced into the appropriate field type.
  */
-Record nextRecord() throws IOException, MalformedRecordException;
+default Record nextRecord() throws IOException, 
MalformedRecordException {
--- End diff --

we should indicate whether the schema enforcement strictness is 'lenient' 
or 'strict'.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #2015: NIFI-4142: Refactored Record Reader/Writer to allow...

2017-07-17 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2015#discussion_r127738787
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/RecordReader.java
 ---
@@ -38,14 +38,35 @@
 public interface RecordReader extends Closeable {
 
 /**
- * Returns the next record in the stream or null if no 
more records are available.
+ * Returns the next record in the stream or null if no 
more records are available. Schema enforcement will be enabled.
  *
  * @return the next record in the stream or null if no 
more records are available.
  *
  * @throws IOException if unable to read from the underlying data
  * @throws MalformedRecordException if an unrecoverable failure occurs 
when trying to parse a record
+ * @throws SchemaValidationException if a Record contains a field that 
violates the schema and cannot be coerced into the appropriate field type.
  */
-Record nextRecord() throws IOException, MalformedRecordException;
+default Record nextRecord() throws IOException, 
MalformedRecordException {
+return nextRecord(true);
+}
+
+/**
+ * Reads the next record from the underlying stream. If schema 
enforcement is enabled, then any field in the Record whose type does not
+ * match the schema will be coerced to the correct type and a 
MalformedRecordException will be thrown if unable to coerce the data into
+ * the correct type. If schema enforcement is disabled, then no type 
coercion will occur. As a result, calling
+ * {@link 
Record#getValue(org.apache.nifi.serialization.record.RecordField)}
+ * may return any type of Object, such as a String or another Record, 
even though the schema indicates that the field must be an integer.
+ *
+ * @param enforceSchema whether or not fields in the Record should be 
validated against the schema and coerced when necessary
+ *
+ * @return the next record in the stream or null if no 
more records are available
+ * @throws IOException if unable to read from the underlying data
+ * @throws MalformedRecordException if an unrecoverable failure occurs 
when trying to parse a record, or a Record contains a field
+ * that violates the schema and cannot be coerced into the 
appropriate field type.
+ * @throws SchemaValidationException if a Record contains a field that 
violates the schema and cannot be coerced into the appropriate
+ * field type and schema enforcement is enabled
+ */
+Record nextRecord(boolean enforceSchema) throws IOException, 
MalformedRecordException;
--- End diff --

the schema had always been enforced, arguably just with a sense of leniency. 
I think this method parameter should be 'strictSchemaEnforcement' or 
'enforceStrictSchema'.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4142) Implement a ValidateRecord Processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089954#comment-16089954
 ] 

ASF GitHub Bot commented on NIFI-4142:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2015#discussion_r127738279
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/SchemaValidationException.java
 ---
@@ -15,14 +15,16 @@
  * limitations under the License.
  */
 
-package org.apache.nifi.serialization.record;
+package org.apache.nifi.serialization;
 
-public class TypeMismatchException extends RuntimeException {
--- End diff --

I don't think it is OK to change this exception class name at this juncture, 
and even if it arguably is OK, the juice is probably not worth the squeeze. 
TypeMismatch and SchemaValidation are pretty much the same thing.


> Implement a ValidateRecord Processor
> 
>
> Key: NIFI-4142
> URL: https://issues.apache.org/jira/browse/NIFI-4142
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> We need a processor that is capable of validating that all Records in a 
> FlowFile adhere to the proper schema.
> The Processor should be configured with a Record Reader and should route each 
> record to either 'valid' or 'invalid' based on whether or not the record 
> adheres to the reader's schema. A record would be invalid in any of the 
> following cases:
> - Missing field that is required according to the schema
> - Extra field that is not present in schema (it should be configurable 
> whether or not this is a failure)
> - Field requires coercion and strict type checking enabled (this should also 
> be configurable)
> - Field is invalid, such as the value "hello" when it should be an integer



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2015: NIFI-4142: Refactored Record Reader/Writer to allow...

2017-07-17 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2015#discussion_r127738279
  
--- Diff: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/SchemaValidationException.java
 ---
@@ -15,14 +15,16 @@
  * limitations under the License.
  */
 
-package org.apache.nifi.serialization.record;
+package org.apache.nifi.serialization;
 
-public class TypeMismatchException extends RuntimeException {
--- End diff --

I don't think it is OK to change this exception class name at this juncture, 
and even if it arguably is OK, the juice is probably not worth the squeeze. 
TypeMismatch and SchemaValidation are pretty much the same thing.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (NIFI-4059) Implement LdapUserGroupProvider

2017-07-17 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-4059.
---
Resolution: Fixed

Resolving as the LdapUserGroupProvider is completed and merged in. Will create 
a new JIRA at a later date if support for listing users/groups by search base 
is desired.

> Implement LdapUserGroupProvider
> ---
>
> Key: NIFI-4059
> URL: https://issues.apache.org/jira/browse/NIFI-4059
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.4.0
>
>
> Implement an LDAP based UserGroupProvider to provide support for backing NiFi 
> users and groups in a directory server.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-106) Processor Counters should be included in the Status Reports

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089919#comment-16089919
 ] 

ASF GitHub Bot commented on NIFI-106:
-

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/1872
  
@markap14 this sounds like a good approach. Will review...


> Processor Counters should be included in the Status Reports
> ---
>
> Key: NIFI-106
> URL: https://issues.apache.org/jira/browse/NIFI-106
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Mark Payne
>Priority: Minor
>
> This would allow a Processor's Status History to show counters that were 
> maintained over time periods instead of having only a single count since 
> system start.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1872: NIFI-106: Expose processors' counters in Stats History

2017-07-17 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/1872
  
@markap14 this sounds like a good approach. Will review...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-4142) Implement a ValidateRecord Processor

2017-07-17 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4142:
-
Fix Version/s: 1.4.0
   Status: Patch Available  (was: Open)

> Implement a ValidateRecord Processor
> 
>
> Key: NIFI-4142
> URL: https://issues.apache.org/jira/browse/NIFI-4142
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> We need a processor that is capable of validating that all Records in a 
> FlowFile adhere to the proper schema.
> The Processor should be configured with a Record Reader and should route each 
> record to either 'valid' or 'invalid' based on whether or not the record 
> adheres to the reader's schema. A record would be invalid in any of the 
> following cases:
> - Missing field that is required according to the schema
> - Extra field that is not present in schema (it should be configurable 
> whether or not this is a failure)
> - Field requires coercion and strict type checking enabled (this should also 
> be configurable)
> - Field is invalid, such as the value "hello" when it should be an integer



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4142) Implement a ValidateRecord Processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089908#comment-16089908
 ] 

ASF GitHub Bot commented on NIFI-4142:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2015

NIFI-4142: Refactored Record Reader/Writer to allow for reading/writi…

…ng "raw records". Implemented ValidateRecord.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4142

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2015


commit ad5c46fe7646103021556178726254f2cbb0b8a0
Author: Mark Payne 
Date:   2017-06-30T12:32:01Z

NIFI-4142: Refactored Record Reader/Writer to allow for reading/writing 
"raw records". Implemented ValidateRecord.




> Implement a ValidateRecord Processor
> 
>
> Key: NIFI-4142
> URL: https://issues.apache.org/jira/browse/NIFI-4142
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> We need a processor that is capable of validating that all Records in a 
> FlowFile adhere to the proper schema.
> The Processor should be configured with a Record Reader and should route each 
> record to either 'valid' or 'invalid' based on whether or not the record 
> adheres to the reader's schema. A record would be invalid in any of the 
> following cases:
> - Missing field that is required according to the schema
> - Extra field that is not present in schema (it should be configurable 
> whether or not this is a failure)
> - Field requires coercion and strict type checking enabled (this should also 
> be configurable)
> - Field is invalid, such as the value "hello" when it should be an integer



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2015: NIFI-4142: Refactored Record Reader/Writer to allow...

2017-07-17 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2015

NIFI-4142: Refactored Record Reader/Writer to allow for reading/writi…

…ng "raw records". Implemented ValidateRecord.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4142

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2015


commit ad5c46fe7646103021556178726254f2cbb0b8a0
Author: Mark Payne 
Date:   2017-06-30T12:32:01Z

NIFI-4142: Refactored Record Reader/Writer to allow for reading/writing 
"raw records". Implemented ValidateRecord.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (NIFI-3591) Relationship should support a DisplayName and Name like PropertyDescriptor

2017-07-17 Thread Joseph Niemiec (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Niemiec reassigned NIFI-3591:


Assignee: (was: Joseph Niemiec)

> Relationship should support a DisplayName and Name like PropertyDescriptor
> --
>
> Key: NIFI-3591
> URL: https://issues.apache.org/jira/browse/NIFI-3591
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Niemiec
>Priority: Minor
>
> Today the PropertyDescriptor supports a display name which allows UI updates 
> without breaking the flow.xml.gz file in terms of downstream relationship 
> binding. If we attempted to just update the NAME, processors may break, but by 
> enabling a decoupled displayName this can get updated at will.
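
For reference, the decoupling the ticket wants Relationship to mirror already 
exists on PropertyDescriptor; a hedged sketch with invented property names:

    import org.apache.nifi.components.PropertyDescriptor;

    class DisplayNameSketch {
        // 'name' is what gets persisted in flow.xml.gz, so it must stay stable;
        // 'displayName' is only what the UI shows and can be changed freely.
        static final PropertyDescriptor EXAMPLE = new PropertyDescriptor.Builder()
                .name("example-property")              // stable, persisted identifier
                .displayName("Example Property (v2)")  // UI label, safe to rename
                .description("Illustrative property showing the name/displayName split")
                .required(false)
                .build();
    }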



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4192) Issue with the MergeContent Processor when processing Avro files

2017-07-17 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-4192:
--

 Summary: Issue with the MergeContent Processor when processing 
Avro files
 Key: NIFI-4192
 URL: https://issues.apache.org/jira/browse/NIFI-4192
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Matt Burgess


Currently the Avro Merge strategy in MergeContent requires that any key/value 
pairs in the Avro non-reserved metadata must match. This might have been done 
to support re-merging of split Avro files upstream, since the "fragment.*" 
capability was added afterward.

Now that the fragment.* attributes are available to help merge a batch of flow 
files, the user should be able to select the Avro Metadata Strategy the same 
way as the Attribute Strategy, with the additional option of "Ignore if 
unmatched", which should be default to maintain currently functionality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (NIFI-4192) Issue with the MergeContent Processor when processing Avro files

2017-07-17 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-4192:
--

Assignee: Matt Burgess

> Issue with the MergeContent Processor when processing Avro files
> 
>
> Key: NIFI-4192
> URL: https://issues.apache.org/jira/browse/NIFI-4192
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>
> Currently the Avro Merge strategy in MergeContent requires that any key/value 
> pairs in the Avro non-reserved metadata must match. This might have been done 
> to support re-merging of split Avro files upstream, since the "fragment.*" 
> capability was added afterward.
> Now that the fragment.* attributes are available to help merge a batch of 
> flow files, the user should be able to select the Avro Metadata Strategy the 
> same way as the Attribute Strategy, with the additional option of "Ignore if 
> unmatched", which should be default to maintain currently functionality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4156) SplitText processor Fragment.count does not match number of output FlowFiles.

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089817#comment-16089817
 ] 

ASF GitHub Bot commented on NIFI-4156:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2014

NIFI-4156: Fixed fragment.count in SplitText to equal emitted flow files

Tried a few different use cases / code paths, to check for correct change 
in behavior.


### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-4156

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2014


commit acc6725cd57a0976502e849ba4d12f50811eb8be
Author: Matt Burgess 
Date:   2017-07-17T13:23:55Z

NIFI-4156: Fixed fragment.count in SplitText to equal emitted flow files




> SplitText processor Fragment.count does not match number of output FlowFiles.
> -
>
> Key: NIFI-4156
> URL: https://issues.apache.org/jira/browse/NIFI-4156
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0
>Reporter: Matthew Clarke
>Assignee: Matt Burgess
>Priority: Minor
>
> SplitText processor configured as follows:
> - line split count = 1
> - Remove trailing Newlines = true
> If the FlowFile being split contains 1 or more blank lines, those blank lines 
> are not turned into split FlowFiles as expected based on above settings.
> However, those blank lines are still counted in the fragment.count attribute. 
> The fragment.count should actually match the number of emitted FlowFiles.
> Fragment count is likely to be used as a method to confirm all FlowFiles have 
> been processed further downstream of SplitText. 
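
For illustration, a hedged sketch of reproducing the mismatch with the nifi-mock 
TestRunner; the property and relationship constant names are assumed from the 
SplitText processor at the time and may differ:

    import java.nio.charset.StandardCharsets;

    import org.apache.nifi.processors.standard.SplitText;
    import org.apache.nifi.util.MockFlowFile;
    import org.apache.nifi.util.TestRunner;
    import org.apache.nifi.util.TestRunners;

    class FragmentCountCheckSketch {
        // Split input containing a blank line and print fragment.count next to the
        // number of splits actually emitted; with the bug the two values differ.
        public static void main(final String[] args) {
            final TestRunner runner = TestRunners.newTestRunner(new SplitText());
            runner.setProperty(SplitText.LINE_SPLIT_COUNT, "1");
            runner.setProperty(SplitText.REMOVE_TRAILING_NEWLINES, "true");

            runner.enqueue("line one\n\nline three\n".getBytes(StandardCharsets.UTF_8));
            runner.run();

            for (final MockFlowFile split : runner.getFlowFilesForRelationship(SplitText.REL_SPLITS)) {
                System.out.println("fragment.count=" + split.getAttribute("fragment.count"));
            }
            System.out.println("splits emitted=" +
                    runner.getFlowFilesForRelationship(SplitText.REL_SPLITS).size());
        }
    }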



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4156) SplitText processor Fragment.count does not match number of output FlowFiles.

2017-07-17 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4156:
---
Status: Patch Available  (was: Open)

> SplitText processor Fragment.count does not match number of output FlowFiles.
> -
>
> Key: NIFI-4156
> URL: https://issues.apache.org/jira/browse/NIFI-4156
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0
>Reporter: Matthew Clarke
>Assignee: Matt Burgess
>Priority: Minor
>
> SplitText processor configured as follows:
> - line split count = 1
> - Remove trailing Newlines = true
> If the FlowFile being split contains 1 or more blank lines, those blank lines 
> are not turned into split FlowFiles as expected based on above settings.
> However, those blank lines are still counted in the fragment.count attribute. 
> The fragment.count should actually match the number of emitted FlowFiles.
> Fragment count is likely to be used as a method to confirm all FlowFiles have 
> been processed further downstream of SplitText. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2014: NIFI-4156: Fixed fragment.count in SplitText to equ...

2017-07-17 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2014

NIFI-4156: Fixed fragment.count in SplitText to equal emitted flow files

Tried a few different use cases / code paths, to check for correct change 
in behavior.


### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-4156

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2014


commit acc6725cd57a0976502e849ba4d12f50811eb8be
Author: Matt Burgess 
Date:   2017-07-17T13:23:55Z

NIFI-4156: Fixed fragment.count in SplitText to equal emitted flow files




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (NIFI-4156) SplitText processor Fragment.count does not match number of output FlowFiles.

2017-07-17 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-4156:
--

Assignee: Matt Burgess

> SplitText processor Fragment.count does not match number of output FlowFiles.
> -
>
> Key: NIFI-4156
> URL: https://issues.apache.org/jira/browse/NIFI-4156
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0
>Reporter: Matthew Clarke
>Assignee: Matt Burgess
>Priority: Minor
>
> SplitText processor configured as follows:
> - line split count = 1
> - Remove trailing Newlines = true
> If the FlowFile being split contains 1 or more blank lines, those blank lines 
> are not turned into split FlowFiles as expected based on above settings.
> However, those blank lines are still counted in the fragment.count attribute. 
> The fragment.count should actually match the number of emitted FlowFiles.
> Fragment count is likely to be used as a method to confirm all FlowFiles have 
> been processed further downstream of SplitText. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4191) Display process-specific component icons

2017-07-17 Thread Rob Moran (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089791#comment-16089791
 ] 

Rob Moran commented on NIFI-4191:
-

I plan to start going through current offerings and grouping them by pattern as 
mentioned in the description.

> Display process-specific component icons
> 
>
> Key: NIFI-4191
> URL: https://issues.apache.org/jira/browse/NIFI-4191
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Rob Moran
>Priority: Minor
>
> It would be nice to expand the iconography used for components on the NiFi 
> canvas to more accurately describe the pattern type (e.g., split, route, 
> join, partition, etc.) or system being used (e.g., Kafka, HBase, etc.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4191) Display process-specific component icons

2017-07-17 Thread Rob Moran (JIRA)
Rob Moran created NIFI-4191:
---

 Summary: Display process-specific component icons
 Key: NIFI-4191
 URL: https://issues.apache.org/jira/browse/NIFI-4191
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core UI
Reporter: Rob Moran
Priority: Minor


It would be nice to expand the iconography used for components on the NiFi 
canvas to more accurately describe the pattern type (e.g., split, route, join, 
partition, etc.) or system being used (e.g., Kafka, HBase, etc.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2009: NIFI-1580 - Allow double-click to display config

2017-07-17 Thread yuri1969
Github user yuri1969 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2009#discussion_r127677617
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-graph.js
 ---
@@ -198,13 +201,13 @@
 var nfGraph = {
 init: function () {
 // initialize the object responsible for each type of component
-nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfFunnel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
-nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfRemoteProcessGroup.init(nfConnectable, nfDraggable, 
nfSelectable, nfContextMenu);
--- End diff --

I haven't thought about RPGs (the ticket mentions _processors_ only). So 
RPGs should feature this functionality too, right?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089623#comment-16089623
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user yuri1969 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2009#discussion_r127677617
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-graph.js
 ---
@@ -198,13 +201,13 @@
 var nfGraph = {
 init: function () {
 // initialize the object responsible for each type of component
-nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfLabel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfFunnel.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
-nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu);
+nfPort.init(nfConnectable, nfDraggable, nfSelectable, 
nfContextMenu, nfQuickSelect);
 nfRemoteProcessGroup.init(nfConnectable, nfDraggable, 
nfSelectable, nfContextMenu);
--- End diff --

I haven't thought about RPGs (the ticket mentions _processors_ only). So 
RPGs should feature this functionality too, right?


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing nifi flows. 
> Each time the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe the title area - to display the config dialog.
> This could also be designed as a configuration of the UI that the user can 
> define (whether double-clicking opens the config dialog, does something else, or 
> simply nothing).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)