[jira] [Commented] (NIFI-3376) Content repository disk usage is not close to reported size in Status Bar

2017-07-27 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16103952#comment-16103952
 ] 

Michael Moser commented on NIFI-3376:
-

I also modified the title to describe the observations rather than propose a 
solution.  Thanks to all who are interested in investigating this!

> Content repository disk usage is not close to reported size in Status Bar
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>Assignee: Michael Hogue
> Attachments: NIFI-3376_Content_Repo_size_demo.xml
>
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the total size of the FlowFiles that NiFi reports in its 
> queues.  As an example, NiFi may report "50,000 / 12.5 GB" while the 
> content_repository takes up 240 GB of its file system.  This leads to 
> scenarios where a 500 GB content_repository file system gets 100% full, but 
> "I only had 40 GB of data in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk.  A 
> background thread could examine all resource claims and, for those that grow 
> "old" and whose active content claim usage drops below a threshold, rewrite 
> the resource claim file.
> A potential work-around is to allow the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to be set to a smaller value.  This would 
> increase the probability that the content claim reference count in a 
> resource claim reaches 0 and the resource claim becomes eligible for 
> deletion/archive.  This lets users trade off performance for more accurate 
> agreement between NiFi queue size and content repository size.
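The arithmetic behind this mismatch can be sketched with a toy model (illustrative only; the fixed sizes and the all-or-nothing deletion rule are simplifications of the FileSystemRepository behavior described above):

```python
# Toy model (NOT NiFi code) of the accounting gap described above.
# Assumption: a resource claim file can only be deleted or archived once
# EVERY content claim inside it has been terminated.
RESOURCE_CLAIM_SIZE = 1 * 1024 * 1024   # one 1 MB resource claim file
CONTENT_CLAIM_SIZE = 10 * 1024          # one 10 KB flowfile content claim

def disk_vs_reported(num_claim_files, live_claims_per_file):
    """Return (actual disk usage, queue-reported size) when each resource
    claim file still holds `live_claims_per_file` un-terminated claims."""
    disk = num_claim_files * RESOURCE_CLAIM_SIZE   # whole file stays on disk
    reported = num_claim_files * live_claims_per_file * CONTENT_CLAIM_SIZE
    return disk, reported

# A single live 10 KB claim pins the entire 1 MB file, so disk usage runs
# roughly 100x ahead of what the Status Bar reports:
disk, reported = disk_vs_reported(num_claim_files=500_000,
                                  live_claims_per_file=1)
ratio = disk / reported   # 102.4 with these sizes
```

With these illustrative numbers, mostly-dead claim files occupy about 100 times more disk than the queued size NiFi reports, which is the same shape as the "12.5 GB reported, 240 GB on disk" example above.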



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-3376) Content repository disk usage is not close to reported size in Status Bar

2017-07-27 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3376:

Summary: Content repository disk usage is not close to reported size in 
Status Bar  (was: Implement content repository ResourceClaim compaction)



[jira] [Reopened] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-27 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt reopened NIFI-3736:
---

Reopening to keep track of the PR now tagged to this ticket, which adjusts 
the default to 1 MB.

> NiFi not honoring the "nifi.content.claim.max.appendable.size" and 
> "nifi.content.claim.max.flow.files" properties
> -
>
> Key: NIFI-3736
> URL: https://issues.apache.org/jira/browse/NIFI-3736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Michael Hogue
> Fix For: 1.4.0
>
>
> The nifi.properties file has two properties for controlling how many 
> FlowFiles to pack into one Content Claim. Unfortunately, it looks like these 
> are no longer honored in FileSystemRepository.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16103899#comment-16103899
 ] 

ASF GitHub Bot commented on NIFI-3736:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2041
  
@markap14  Is there anything moser and I might be overlooking?  This strikes 
me as a no-brainer to do.  Was there a strong intent behind 10 MB?





[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16103875#comment-16103875
 ] 

ASF GitHub Bot commented on NIFI-3736:
--

GitHub user mosermw opened a pull request:

https://github.com/apache/nifi/pull/2041

NIFI-3736 modify default nifi.content.claim.max.appendable.size

in nifi.properties to 1 MB

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mosermw/nifi NIFI-3736

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2041.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2041


commit 641fb4fbc2da1942fead30bf74a3d960769a12ff
Author: Mike Moser 
Date:   2017-07-27T20:42:09Z

NIFI-3736 modify default nifi.content.claim.max.appendable.size in 
nifi.properties to 1 MB








[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-27 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16103840#comment-16103840
 ] 

Joseph Witt commented on NIFI-3736:
---

Both seem like good points to me. 

#1 It makes sense that we ensure both the max number of claims and the max 
total size of claims can be tuned.
#2 I agree.  We should at least be consistent.  The smaller 1 MB value sounds 
about right.  We're just trying to help make writes/reads more efficient.  
Going much larger likely won't help much anyway.



[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-27 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16103804#comment-16103804
 ] 

Michael Moser commented on NIFI-3736:
-

While building a template to show the effect of NIFI-3376, I found two things 
about this issue.

1. The nifi.content.claim.max.flow.files property setting has no effect on 
claim size.  Shall I reopen this JIRA to address this, or make a new one?
2. In 1.3.0, the effective nifi.content.claim.max.appendable.size was 
hard-coded to 1 MB.  Now that the property is used, its default is 10 MB.  
This is a pretty significant change to the default behavior of the content 
repository.  I think we should set the default for this property to 1 MB to 
match the 1.3.0 behavior.  Shall I make a new JIRA to address this?
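For reference, the two nifi.properties entries under discussion look like this (the 1 MB value is the proposed default from this thread; the max.flow.files value shown is illustrative, not confirmed here):

```properties
# Content repository claim packing (nifi.properties)
nifi.content.claim.max.appendable.size=1 MB
nifi.content.claim.max.flow.files=100
```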



[GitHub] nifi pull request #2020: [NiFi-3973] Add PutKudu Processor for ingesting dat...

2017-07-27 Thread rickysaltzer
Github user rickysaltzer commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2020#discussion_r129937204
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -0,0 +1,191 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.kudu;
+
+import java.io.BufferedInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.kudu.client.KuduClient;
+import org.apache.kudu.client.KuduException;
+import org.apache.kudu.client.KuduSession;
+import org.apache.kudu.client.KuduTable;
+import org.apache.kudu.client.Insert;
+
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.FlowFileAccessException;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.hadoop.exception.RecordReaderFactoryException;
+
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.RecordSet;
+import org.apache.nifi.serialization.record.Record;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicReference;
+
+public abstract class AbstractKudu extends AbstractProcessor {
+
+    protected static final PropertyDescriptor KUDU_MASTERS = new PropertyDescriptor.Builder()
+            .name("KUDU Masters")
+            .description("Comma-separated list of Kudu master addresses, each with port (e.g. 7051)")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();
+
+    protected static final PropertyDescriptor TABLE_NAME = new PropertyDescriptor.Builder()
+            .name("Table Name")
+            .description("The name of the Kudu table to put data into")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor RECORD_READER = new PropertyDescriptor.Builder()
+            .name("record-reader")
+            .displayName("Record Reader")
+            .description("The service for reading records from incoming flow files.")
+            .identifiesControllerService(RecordReaderFactory.class)
+            .required(true)
+            .build();
+
+    protected static final PropertyDescriptor SKIP_HEAD_LINE = new PropertyDescriptor.Builder()
+            .name("Skip head line")
+            .description("Set to true if the first line is a header line, e.g. column names")
+            .allowableValues("true", "false")
+            .defaultValue("true")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();
+
+    protected static final Relationship REL_SUCCESS = new Relationship.Builder()
+            .name("success")
+            .description("A FlowFile is routed to this relationship after it has been successfully stored in Kudu")
+            .build();
+
+    protected static final Relationship REL_FAILURE = new Relationship.Builder()
+            .name("failure")
+            .description("A FlowFile is routed to this relationship if it cannot be sent to Kudu")
+            .build();
+
+    public static final String RECORD_COUNT_ATTR = "record.count";

[GitHub] nifi pull request #2020: [NiFi-3973] Add PutKudu Processor for ingesting dat...

2017-07-27 Thread rickysaltzer
Github user rickysaltzer commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2020#discussion_r129937107
  

[GitHub] nifi pull request #2020: [NiFi-3973] Add PutKudu Processor for ingesting dat...

2017-07-27 Thread rickysaltzer
Github user rickysaltzer commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2020#discussion_r129936165
  

[GitHub] nifi pull request #2020: [NiFi-3973] Add PutKudu Processor for ingesting dat...

2017-07-27 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2020#discussion_r129935601
  

[GitHub] nifi pull request #2020: [NiFi-3973] Add PutKudu Processor for ingesting dat...

2017-07-27 Thread rickysaltzer
Github user rickysaltzer commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2020#discussion_r129933513
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/PutKudu.java
 ---
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.kudu;
+
+import org.apache.kudu.Schema;
+import org.apache.kudu.Type;
+import org.apache.kudu.client.Insert;
+import org.apache.kudu.client.PartialRow;
+import org.apache.kudu.client.KuduTable;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.serialization.record.Record;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"put", "database", "NoSQL", "kudu", "HDFS"})
+@CapabilityDescription("Reads records from an incoming FlowFile using the provided Record Reader, and writes those records " +
+"to the specified Kudu's table. The schema for the table must be provided in the processor properties or from your source." +
+" If any error occurs while reading records from the input, or writing records to Kudu, the FlowFile will be routed to failure")
+
+public class PutKudu extends AbstractKudu {
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+final List<PropertyDescriptor> properties = new ArrayList<>();
+properties.add(KUDU_MASTERS);
+properties.add(TABLE_NAME);
+properties.add(SKIP_HEAD_LINE);
+properties.add(RECORD_READER);
+
+return properties;
+}
+
+@Override
+public Set<Relationship> getRelationships() {
+final Set<Relationship> rels = new HashSet<>();
+rels.add(REL_SUCCESS);
+rels.add(REL_FAILURE);
+return rels;
+}
+
+@Override
+protected Insert insertRecordToKudu(KuduTable kuduTable, Record record, List<String> fieldNames) throws IllegalStateException, Exception {
+Insert insert = kuduTable.newInsert();
+PartialRow row = insert.getRow();
+Schema colSchema = kuduTable.getSchema();
+
+for (String colName : fieldNames) {
+int colIdx = this.getColumnIndex(colSchema, colName);
+if (colIdx != -1) {
+Type colType = colSchema.getColumnByIndex(colIdx).getType();
+
+switch (colType.getDataType()) {
+case BOOL:
+row.addBoolean(colIdx, record.getAsBoolean(colName));
+break;
+case FLOAT:
+row.addFloat(colIdx, record.getAsFloat(colName));
+break;
+case DOUBLE:
+row.addDouble(colIdx, record.getAsDouble(colName));
+break;
+case BINARY:
+row.addBinary(colIdx, record.getAsString(colName).getBytes());
+break;
+case INT8:
+case INT16:
--- End diff --

Would be useful to allow users to {{Upsert}} as well as {{Insert}}. This 
would ideally be configurable via the processor's properties. 
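A sketch of how that suggestion could look. Everything below is illustrative, not the PR's code: the property value, enum, and method names are assumptions. In the processor, the chosen value would come from a `PropertyDescriptor` with allowable values and would decide between `kuduTable.newInsert()` and `kuduTable.newUpsert()`.

```java
// Hypothetical sketch only -- names are assumptions, not the PR's code.
// Models an "Operation Type" processor property that selects between a
// Kudu Insert and Upsert; in the real processor this value would come
// from context.getProperty(...) and pick newInsert()/newUpsert().
public class OperationChooser {
    public enum OperationType { INSERT, UPSERT }

    public static OperationType fromProperty(String propertyValue) {
        switch (propertyValue.trim().toLowerCase()) {
            case "insert": return OperationType.INSERT;
            case "upsert": return OperationType.UPSERT;
            default:
                throw new IllegalArgumentException(
                        "Unsupported operation type: " + propertyValue);
        }
    }

    public static void main(String[] args) {
        // Prints UPSERT for a property value of "Upsert"
        System.out.println(fromProperty("Upsert"));
    }
}
```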


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] nifi-minifi pull request #88: MINIFI-347: Adding tests for C2 file system ca...

2017-07-27 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/88#discussion_r129931863
  
--- Diff: minifi-assembly/NOTICE ---
@@ -681,6 +681,14 @@ The following binary components are provided under the Common Development and Di
 (CDDL 1.0) (GPL3) Streaming API For XML (javax.xml.stream:stax-api:jar:1.0-2 - no url provided)
 
 
+Common Public License 1.0
+
+
+The following binary components are provided under the Common Public License Version 1.0.  See project link for details.
--- End diff --

I believe this is unneeded as it is just a test dependency and neither the 
source of this project nor its binary representation make its way into any of 
the generated distributions (source or convenience binaries).




[GitHub] nifi-minifi pull request #87: MINIFI-310: Changing MiNiFi docker image to us...

2017-07-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi/pull/87




[GitHub] nifi pull request #2020: [NiFi-3973] Add PutKudu Processor for ingesting dat...

2017-07-27 Thread rickysaltzer
Github user rickysaltzer commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2020#discussion_r129928513
  
--- Diff: nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/PutKudu.java ---
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.kudu;
+
+import org.apache.kudu.Schema;
+import org.apache.kudu.Type;
+import org.apache.kudu.client.Insert;
+import org.apache.kudu.client.PartialRow;
+import org.apache.kudu.client.KuduTable;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.serialization.record.Record;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"put", "database", "NoSQL", "kudu", "HDFS"})
+@CapabilityDescription("Reads records from an incoming FlowFile using the provided Record Reader, and writes those records " +
+"to the specified Kudu's table. The schema for the table must be provided in the processor properties or from your source." +
+" If any error occurs while reading records from the input, or writing records to Kudu, the FlowFile will be routed to failure")
+
+public class PutKudu extends AbstractKudu {
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+final List<PropertyDescriptor> properties = new ArrayList<>();
+properties.add(KUDU_MASTERS);
+properties.add(TABLE_NAME);
+properties.add(SKIP_HEAD_LINE);
+properties.add(RECORD_READER);
+
+return properties;
+}
+
+@Override
+public Set<Relationship> getRelationships() {
+final Set<Relationship> rels = new HashSet<>();
+rels.add(REL_SUCCESS);
+rels.add(REL_FAILURE);
+return rels;
+}
+
+@Override
+protected Insert insertRecordToKudu(KuduTable kuduTable, Record record, List<String> fieldNames) throws IllegalStateException, Exception {
+Insert insert = kuduTable.newInsert();
+PartialRow row = insert.getRow();
+Schema colSchema = kuduTable.getSchema();
+
+for (String colName : fieldNames) {
+int colIdx = this.getColumnIndex(colSchema, colName);
+if (colIdx != -1) {
+Type colType = colSchema.getColumnByIndex(colIdx).getType();
+
+switch (colType.getDataType()) {
+case BOOL:
+row.addBoolean(colIdx, record.getAsBoolean(colName));
+break;
+case FLOAT:
+row.addFloat(colIdx, record.getAsFloat(colName));
+break;
+case DOUBLE:
+row.addDouble(colIdx, record.getAsDouble(colName));
+break;
+case BINARY:
+row.addBinary(colIdx, record.getAsString(colName).getBytes());
+break;
+case INT8:
+case INT16:
--- End diff --

Could we write this as a `short` using `row.addShort(..)`
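The narrowing the reviewer suggests could look like the sketch below: convert the record's integer value to a Java `short` before passing it to `row.addShort(colIdx, ...)` for INT8/INT16 columns. The helper name and range check are assumptions for illustration, not Kudu or NiFi API.

```java
// Hypothetical helper (not NiFi/Kudu API): narrow a record's integer
// value to a Java short so it can be passed to row.addShort(colIdx, v)
// for INT8/INT16 Kudu columns, failing loudly on overflow.
public class ShortNarrowing {
    public static short toKuduShort(int recordValue) {
        if (recordValue < Short.MIN_VALUE || recordValue > Short.MAX_VALUE) {
            throw new IllegalArgumentException(
                    "Value out of range for a 16-bit column: " + recordValue);
        }
        return (short) recordValue;
    }

    public static void main(String[] args) {
        System.out.println(toKuduShort(1000)); // 1000
    }
}
```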



[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103650#comment-16103650
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user yuri1969 commented on the issue:

https://github.com/apache/nifi/pull/2009
  
- I find it natural to double-click on the 'line' when I want to add a new bend.
- To check a connection's config, I would try double-clicking on its rectangular stats box.
- A connection's 'nodes' being able to open the config dialog feels OK to me. However, I wouldn't double-click on them intuitively.

I have a Mule Anypoint Studio background, so my opinion might be skewed.


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing nifi flows. 
> Each time the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - or
> maybe the title area - to display the config dialog.
> This could also be designed as a configuration option of the UI that the
> user can define (whether double-clicking opens the config dialog, does
> something else, or simply nothing).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2009: NIFI-1580 - Allow double-click to display config

2017-07-27 Thread yuri1969
Github user yuri1969 commented on the issue:

https://github.com/apache/nifi/pull/2009
  
- I find it natural to double-click on the 'line' when I want to add a new bend.
- To check a connection's config, I would try double-clicking on its rectangular stats box.
- A connection's 'nodes' being able to open the config dialog feels OK to me. However, I wouldn't double-click on them intuitively.

I have a Mule Anypoint Studio background, so my opinion might be skewed.




[GitHub] nifi-minifi-cpp pull request #120: MINIFI-354 Reverting default config.yml t...

2017-07-27 Thread apiri
GitHub user apiri opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/120

MINIFI-354 Reverting default config.yml to minimal configuration.

MINIFI-354 Reverting default config.yml to minimal configuration.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apiri/nifi-minifi-cpp minifi-354

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/120.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #120


commit 16879508820d5ff10447d17f40036f4b9cf0aa71
Author: Aldrin Piri 
Date:   2017-07-27T18:01:24Z

MINIFI-354 Reverting default config.yml to minimal configuration.






[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103616#comment-16103616
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2009
  
Ah, I see the problem now... I was thinking the issue was with the actual
"line". If you right-click on the connection's 'line' you get the context
menu and can then choose config/view config, so I was thinking we should add
that 'quickSelect' open config/view config to the 'line' as well as the
'start', 'end', or 'middle' nodes. The problem with this is that we already
have an action for double-click on the 'line' (that action adds a
bend). So maybe @moranr could chime in here.

Currently, with this PR, you can double-click the stats box or an existing
bend on a connection's 'line' to open the config dialog for the connection.
However, if you double-click on the 'line' you will add a bend to the
connection rather than opening the connection's config dialog. Is this the
expected behavior? Or do we want to change the current behavior so that
double-clicking a connection's 'line' opens the connection's config dialog
(just like double-click everywhere else on a connection), and then add an
action to the context menu to 'Add Bend' to make it more explicit to the
user... thoughts?


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing nifi flows. 
> Each time the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - or
> maybe the title area - to display the config dialog.
> This could also be designed as a configuration option of the UI that the
> user can define (whether double-clicking opens the config dialog, does
> something else, or simply nothing).





[GitHub] nifi issue #2009: NIFI-1580 - Allow double-click to display config

2017-07-27 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2009
  
Ah, I see the problem now... I was thinking the issue was with the actual
"line". If you right-click on the connection's 'line' you get the context
menu and can then choose config/view config, so I was thinking we should add
that 'quickSelect' open config/view config to the 'line' as well as the
'start', 'end', or 'middle' nodes. The problem with this is that we already
have an action for double-click on the 'line' (that action adds a
bend). So maybe @moranr could chime in here.

Currently, with this PR, you can double-click the stats box or an existing
bend on a connection's 'line' to open the config dialog for the connection.
However, if you double-click on the 'line' you will add a bend to the
connection rather than opening the connection's config dialog. Is this the
expected behavior? Or do we want to change the current behavior so that
double-clicking a connection's 'line' opens the connection's config dialog
(just like double-click everywhere else on a connection), and then add an
action to the context menu to 'Add Bend' to make it more explicit to the
user... thoughts?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #118: MINIFI-311 Move to alpine base for docker...

2017-07-27 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/118#discussion_r129909967
  
--- Diff: thirdparty/spdlog-20170710/tests/CMakeLists.txt ---
@@ -0,0 +1,19 @@
+#
--- End diff --

Should remove the tests subdirectory and skip this part of the build for 
the lib




[GitHub] nifi-minifi-cpp pull request #118: MINIFI-311 Move to alpine base for docker...

2017-07-27 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/118#discussion_r129904782
  
--- Diff: thirdparty/spdlog-20170710/bench/boost-bench-mt.cpp ---
@@ -0,0 +1,84 @@
+//
--- End diff --

Should remove the entirety of the bench subdirectory.  




[GitHub] nifi-minifi-cpp pull request #118: MINIFI-311 Move to alpine base for docker...

2017-07-27 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/118#discussion_r129905012
  
--- Diff: thirdparty/spdlog-20170710/example/CMakeLists.txt ---
@@ -0,0 +1,49 @@
+# 
*/
--- End diff --

Should remove the entirety of the example directory




[GitHub] nifi-minifi-cpp pull request #118: MINIFI-311 Move to alpine base for docker...

2017-07-27 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/118#discussion_r129904610
  
--- Diff: thirdparty/spdlog-20170710/.travis.yml ---
@@ -0,0 +1,90 @@
+# Adapted from various sources, including:
--- End diff --

We should remove this from the source inclusion.  Anything beyond the 
actual source should get stripped out to minimize LICENSE/NOTICE concerns.  




[GitHub] nifi issue #2009: NIFI-1580 - Allow double-click to display config

2017-07-27 Thread yuri1969
Github user yuri1969 commented on the issue:

https://github.com/apache/nifi/pull/2009
  
@scottyaslan Thanks. So the problem is just with the start and end nodes, 
right?




[GitHub] nifi-minifi-cpp pull request #113: MINIFI-337: Update configuration readme f...

2017-07-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/113




[GitHub] nifi-minifi-cpp issue #113: MINIFI-337: Update configuration readme for Repo...

2017-07-27 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/113
  
After rebasing, things look good.  Had some minor merge conflicts which 
were pretty straightforward to adjust.  Will get this merged.  Thanks!




[GitHub] nifi-minifi-cpp issue #113: MINIFI-337: Update configuration readme for Repo...

2017-07-27 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/113
  
rebasing/reviewing now that #110 was merged




[GitHub] nifi-minifi-cpp pull request #110: MINIFI-249: Update prov repo to better ab...

2017-07-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/110




[GitHub] nifi-minifi-cpp issue #110: MINIFI-249: Update prov repo to better abstract ...

2017-07-27 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/110
  
build with RAT and linter looks good
verified functionality
code changes look good in response to those items suggested prior.  will 
get this merged




[GitHub] nifi-minifi-cpp issue #110: MINIFI-249: Update prov repo to better abstract ...

2017-07-27 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/110
  
reviewing




[jira] [Created] (NIFI-4235) Provide the ability to enable multiple Controller Services at the same time

2017-07-27 Thread Andrew Lim (JIRA)
Andrew Lim created NIFI-4235:


 Summary: Provide the ability to enable multiple Controller 
Services at the same time
 Key: NIFI-4235
 URL: https://issues.apache.org/jira/browse/NIFI-4235
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core UI
Reporter: Andrew Lim
Priority: Minor


It would be nice to be able to start all or multiple controller services at the 
same time instead of the current method of doing them one by one.   NiFi should 
be smart enough to start the controller services that need to be enabled first 
if other controller services are dependent on them (i.e. Start an 
AvroSchemaRegistry CS first, so that any Record Reader/Writer CS that reference 
it can be enabled afterward).

Similar improvement could be done to provide the ability to start multiple 
Reporting Tasks at the same time.





[jira] [Updated] (NIFI-4234) The Filter in the Add Processor window should return valid results even if search terms do not exactly match tags

2017-07-27 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim updated NIFI-4234:
-
Summary: The Filter in the Add Processor window should return valid results 
even if search terms do not exactly match tags  (was: The Filter the Add 
Processor window should return valid results even if search terms do not 
exactly match tags)

> The Filter in the Add Processor window should return valid results even if 
> search terms do not exactly match tags
> -
>
> Key: NIFI-4234
> URL: https://issues.apache.org/jira/browse/NIFI-4234
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Andrew Lim
>Priority: Minor
>
> The filter field should be more forgiving.  For example, if you enter 
> "database, rdbms" you get 2 results (ConvertJSONToSQL and PutSQL processors). 
>  I would think the following would also return the same results, but you get 
> none:
> "database,rdbms"
> "database rdbms"
> "rdbms, database"





[jira] [Created] (NIFI-4234) The Filter the Add Processor window should return valid results even if search terms do not exactly match tags

2017-07-27 Thread Andrew Lim (JIRA)
Andrew Lim created NIFI-4234:


 Summary: The Filter the Add Processor window should return valid 
results even if search terms do not exactly match tags
 Key: NIFI-4234
 URL: https://issues.apache.org/jira/browse/NIFI-4234
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core UI
Reporter: Andrew Lim
Priority: Minor


The filter field should be more forgiving.  For example, if you enter 
"database, rdbms" you get 2 results (ConvertJSONToSQL and PutSQL processors).  
I would think the following would also return the same results, but you get 
none:

"database,rdbms"
"database rdbms"
"rdbms, database"
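One way to make the filter this forgiving is to tokenize the query on commas and/or whitespace and require each token to match, so all three variants behave like "database, rdbms". A sketch of that matching logic (illustrative only, not the actual NiFi UI filter code):

```java
import java.util.*;

// Illustrative sketch (not the NiFi UI code): a delimiter-insensitive
// tag filter. The query is split on commas and/or whitespace and every
// non-empty token must match one of the processor's tags.
public class TagFilter {
    static boolean matches(String query, Set<String> tags) {
        for (String token : query.toLowerCase().split("[,\\s]+")) {
            if (!token.isEmpty() && !tags.contains(token)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Set<String> tags = new HashSet<>(Arrays.asList("database", "rdbms", "sql"));
        System.out.println(matches("database,rdbms", tags));  // true
        System.out.println(matches("rdbms, database", tags)); // true
    }
}
```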





[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103293#comment-16103293
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1961
  
@MikeThomsen I should be able to take a look early next week


> Create EvaluateRecordPath processor
> ---
>
> Key: NIFI-4024
> URL: https://issues.apache.org/jira/browse/NIFI-4024
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Steve Champagne
>Priority: Minor
>
> With the new RecordPath DSL, it would be nice if there was a processor that 
> could pull fields into attributes of the flowfile based on a RecordPath. This 
> would be similar to the EvaluateJsonPath processor that currently exists, 
> except it could be used to pull fields from arbitrary record formats. My 
> current use case for it would be pulling fields out of Avro records while 
> skipping the steps of having to convert Avro to JSON, evaluate JsonPath, and 
> then converting back to Avro. 





[GitHub] nifi issue #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecord

2017-07-27 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1961
  
@MikeThomsen I should be able to take a look early next week




[jira] [Created] (NIFI-4233) MockFlowFile#getData is not public

2017-07-27 Thread Netanel Bitan (JIRA)
Netanel Bitan created NIFI-4233:
---

 Summary: MockFlowFile#getData is not public
 Key: NIFI-4233
 URL: https://issues.apache.org/jira/browse/NIFI-4233
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Netanel Bitan
Priority: Trivial








[jira] [Assigned] (NIFI-4022) Use SASL Auth Scheme For Secured Zookeeper Client Interaction

2017-07-27 Thread Yolanda M. Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yolanda M. Davis reassigned NIFI-4022:
--

Assignee: Yolanda M. Davis

> Use SASL Auth Scheme For Secured Zookeeper Client Interaction
> -
>
> Key: NIFI-4022
> URL: https://issues.apache.org/jira/browse/NIFI-4022
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
>
> NiFi uses Zookeeper to assist in cluster orchestration including leader 
> elections for Primary Node and Cluster Coordinator and to store state for 
> various processors (such as MonitorActivity). In secured Zookeeper 
> environments (supported by SASL + Kerberos) NiFi should protect the zNodes it 
> creates to prevent users or hosts, outside of a NiFi cluster, from accessing 
> or modifying entries.  In its current implementation security can be enforced 
> for processors that store state information in Zookeeper, however zNodes used 
> for managing Primary Node and Cluster Coordinator data are left open and 
> susceptible to change from any user.  Also when zNodes are secured for 
> processor state, a “Creator Only” policy is used which allows the system to 
> determine the identification of the NiFi node and protect any zNodes created 
> with that node id using Zookeeper’s “auth” scheme. The challenge with this 
> scheme is that it limits the ability for other NiFi nodes in the cluster to 
> access that zNode if needed (since it is specifically binds that zNode to the 
> unique id of its creator).
>  
> To best protect zNodes created in Zookeeper by NiFi while maximizing NiFi’s 
> ability to share information across the cluster I propose that we move to 
> using Zookeeper’s SASL authentication scheme, which will allow the use of 
> Kerberos principals for securing zNode with the appropriate permissions.  For 
> maximum flexibility, these principals can be mapped appropriately in 
> Zookeeper, using auth-to-local rules, to ensure that nodes across the cluster 
> can share zNodes as needed. 
>  
> Potential Concerns/Challenges for Discussion:
>  
> 1)  For existing NiFi users how will we migrate Zookeeper entries from 
> the old security scheme to the new scheme?
> 2)  How should zNodes be reverted to open if kerberos is disabled?
> 3)  What will the performance impact be on the cluster once SASL scheme 
> is enabled (since we’d be moving from open to protected)? Would require 
> investigation
> 4)  Currently users can control authentication scheme via state 
> management configuration for processors yet not for clusters.  Should we 
> still maintain the practice of allowing schemes to be configurable for 
> processors (with SASL being the new default)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-2903) GetKafka cannot handle null value in Kafka offset cause NullPointerException

2017-07-27 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103188#comment-16103188
 ] 

Joseph Witt commented on NIFI-2903:
---

[~milan.baran] it does not look like there has been a contribution for this 
issue yet.  However, I strongly recommend using the ConsumeKafka processors 
instead.  They are more efficient and better designed.  Not sure if this null 
issue exists (or even how such a thing could be null), but consider that as an 
option for you.

> GetKafka cannot handle null value in Kafka offset cause NullPointerException
> 
>
> Key: NIFI-2903
> URL: https://issues.apache.org/jira/browse/NIFI-2903
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Kurt Hung
>
> The GetKafka processor does not handle a null value in a Kafka offset, which 
> causes the processor to hang and generate a NullPointerException after 30 
> seconds. It's easy to reproduce this issue: I used kafka-python to insert a 
> key-value pair ("abc", None) and created a GetKafka processor to consume the 
> topic I'd created. The issue happens immediately, and moreover, the processor 
> couldn't consume the remaining offsets.
> As a temporary workaround, I customized a GetKafka processor with a "Failure" 
> relationship to handle null values in Kafka offsets and prevent this issue.
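The described workaround can be sketched with a hypothetical guard (illustrative helper names, not the actual GetKafka code): a null message value is detected up front and replaced or routed to a failure-style path rather than dereferenced:

```java
import java.nio.charset.StandardCharsets;

public class NullValueGuard {

    /** Returns the message value, or an empty payload when Kafka hands back null. */
    static byte[] safeValue(byte[] value) {
        return value != null ? value : new byte[0];
    }

    /** True when the message should be sent to a "Failure"-style relationship. */
    static boolean isFailure(byte[] value) {
        return value == null;
    }

    public static void main(String[] args) {
        byte[] ok = "abc".getBytes(StandardCharsets.UTF_8);
        System.out.println(isFailure(ok));          // false: normal message
        System.out.println(isFailure(null));        // true: route to failure
        System.out.println(safeValue(null).length); // 0: empty payload instead of NPE
    }
}
```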



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4077) o.a.n.p.index.lucene.LuceneEventIndex Failed to retrieve Provenance Events from store

2017-07-27 Thread David A. Wynne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103177#comment-16103177
 ] 

David A. Wynne commented on NIFI-4077:
--

I am seeing the same issue:
2017-07-26 12:00:10,753 ERROR [Provenance Query-2] 
o.a.n.provenance.index.lucene.QueryTask Failed to query events against index 
/usr/hdp/provenance_repository/index-1500466113184
org.apache.nifi.provenance.index.SearchFailedException: Unable to retrieve 
events from the Provenance Store
at 
org.apache.nifi.provenance.index.lucene.QueryTask.readDocuments(QueryTask.java:198)
at 
org.apache.nifi.provenance.index.lucene.QueryTask.run(QueryTask.java:149)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: Unable to locate file 
/usr/hdp/provenance_repository/11495395.prov
at 
org.apache.nifi.provenance.serialization.RecordReaders.newRecordReader(RecordReaders.java:119)
at 
org.apache.nifi.provenance.WriteAheadProvenanceRepository.lambda$initialize$1(WriteAheadProvenanceRepository.java:125)
at 
org.apache.nifi.provenance.store.iterator.SelectiveRecordReaderEventIterator.nextEvent(SelectiveRecordReaderEventIterator.java:137)
at 
org.apache.nifi.provenance.store.iterator.AuthorizingEventIterator.nextEvent(AuthorizingEventIterator.java:47)
at 
org.apache.nifi.provenance.store.PartitionedEventStore.getEvents(PartitionedEventStore.java:192)
at 
org.apache.nifi.provenance.store.PartitionedEventStore.getEvents(PartitionedEventStore.java:167)
at 
org.apache.nifi.provenance.index.lucene.QueryTask.readDocuments(QueryTask.java:196)
... 6 common frames omitted

> o.a.n.p.index.lucene.LuceneEventIndex Failed to retrieve Provenance Events 
> from store
> -
>
> Key: NIFI-4077
> URL: https://issues.apache.org/jira/browse/NIFI-4077
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Joseph Gresock
>
> I get this error when trying to display Data Provenance on a 4-node cluster:
> 2017-06-15 18:28:23,945 ERROR [Provenance Query-2] 
> o.a.n.p.index.lucene.LuceneEventIndex Failed to retrieve Provenance Events 
> from store
> java.io.FileNotFoundException: Unable to locate file 
> /data/nifi/provenance_repository/85309321.prov
> at 
> org.apache.nifi.provenance.serialization.RecordReaders.newRecordReader(RecordReaders.java:119)
> at 
> org.apache.nifi.provenance.WriteAheadProvenanceRepository.lambda$initialize$46(WriteAheadProvenanceRepository.java:125)
> at 
> org.apache.nifi.provenance.store.iterator.SelectiveRecordReaderEventIterator.nextEvent(SelectiveRecordReaderEventIterator.java:137)
> at 
> org.apache.nifi.provenance.store.iterator.AuthorizingEventIterator.nextEvent(AuthorizingEventIterator.java:47)
> at 
> org.apache.nifi.provenance.store.PartitionedEventStore.getEvents(PartitionedEventStore.java:192)
> at 
> org.apache.nifi.provenance.store.PartitionedEventStore.getEvents(PartitionedEventStore.java:167)
> at 
> org.apache.nifi.provenance.index.lucene.LuceneEventIndex.lambda$submitQuery$58(LuceneEventIndex.java:442)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> I don't know if this is related, but I also see this error in the same logs:
> 2017-06-15 18:28:42,923 ERROR [Compress Provenance Logs-1-thread-1] 
> o.a.n.p.s.EventFileCompressor Failed to read TOC File 
> /data/nifi/provenance_repository/toc/87247412.toc
> java.io.EOFException: null
> at 
> org.apache.nifi.provenance.toc.StandardTocReader.<init>(StandardTocReader.java:48)
> at 
> org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:93)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)




[jira] [Commented] (NIFI-4031) Nullable Array

2017-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103158#comment-16103158
 ] 

ASF GitHub Bot commented on NIFI-4031:
--

Github user champagst commented on the issue:

https://github.com/apache/nifi/pull/1919
  
It looks like #2040 should cover my issue. Thanks!


> Nullable Array
> --
>
> Key: NIFI-4031
> URL: https://issues.apache.org/jira/browse/NIFI-4031
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Steve Champagne
> Attachments: NullableArray.xml
>
>
> I'm getting an error when I try to use a nullable array. I've attached an 
> example template.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4031) Nullable Array

2017-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103159#comment-16103159
 ] 

ASF GitHub Bot commented on NIFI-4031:
--

Github user champagst closed the pull request at:

https://github.com/apache/nifi/pull/1919


> Nullable Array
> --
>
> Key: NIFI-4031
> URL: https://issues.apache.org/jira/browse/NIFI-4031
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Steve Champagne
> Attachments: NullableArray.xml
>
>
> I'm getting an error when I try to use a nullable array. I've attached an 
> example template.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1919: NIFI-4031: Fix Avro nullable arrays

2017-07-27 Thread champagst
Github user champagst commented on the issue:

https://github.com/apache/nifi/pull/1919
  
It looks like #2040 should cover my issue. Thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1919: NIFI-4031: Fix Avro nullable arrays

2017-07-27 Thread champagst
Github user champagst closed the pull request at:

https://github.com/apache/nifi/pull/1919


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (NIFI-3349) GetSplunk Should Periodically Re-Authenticate

2017-07-27 Thread Noe (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103153#comment-16103153
 ] 

Noe edited comment on NIFI-3349 at 7/27/17 12:35 PM:
-

GetSplunk is periodically having this issue and I think it is related:
ERROR [Timer-Driven Process Thread-29] o.a.nifi.processors.splunk.GetSplunk 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] failed to process due to 
com.splunk.HttpException: HTTP 401 -- {"messages":[{"type":"WARN","text":"call 
not properly authenticated"}]}
This happens on [line 
461|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L461]

I would like to catch the exception and call service.login() at least once, as well.




was (Author: ndet...@minerkasch.com):
GetSplunk is periodically having this issue and think it is related:
ERROR [Timer-Driven Process Thread-29] o.a.nifi.processors.splunk.GetSplunk 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] failed to process due to 
com.splunk.HttpException: HTTP 401 -- {"messages":[{"type":"WARN","text":"call 
not properly authenticated"}
This happens on [line 
461|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L461]

Would like to catch and call service.login() as well?



> GetSplunk Should Periodically Re-Authenticate
> -
>
> Key: NIFI-3349
> URL: https://issues.apache.org/jira/browse/NIFI-3349
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.1.1, 1.0.1
>Reporter: Bryan Bende
>Priority: Minor
>
> The first time the processor executes, it lazily initializes the Splunk 
> Service object:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L372-L377
> As part of this initialization, the Splunk service calls a login method like 
> this:
> {code}
> public Service login(String username, String password) {
>     this.username = username;
>     this.password = password;
>     Args args = new Args();
>     args.put("username", username);
>     args.put("password", password);
>     args.put("cookie", "1");
>     ResponseMessage response = post("/services/auth/login", args);
>     String sessionKey = Xml.parse(response.getContent())
>         .getElementsByTagName("sessionKey")
>         .item(0)
>         .getTextContent();
>     this.token = "Splunk " + sessionKey;
>     this.version = this.getInfo().getVersion();
>     if (versionCompare("4.3") >= 0)
>         this.passwordEndPoint = "storage/passwords";
>     return this;
> }
> {code}
> Since this only happens the first time the processor executes, it will only 
> happen again if you stop and start the processor. If the processor has been 
> running long enough, that session has probably expired while the processor 
> continues to attempt to execute.
> We should periodically call service.login() in a timer thread.
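The timer-based re-login could be sketched like this (a minimal illustration with a stand-in `Runnable` in place of `service.login(username, password)`; a real fix would also need to guard against concurrent use of the Splunk `Service` while the token is refreshed):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ReloginTimer {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger logins = new AtomicInteger();
        // Stand-in for service.login(username, password) in GetSplunk
        Runnable relogin = logins::incrementAndGet;

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Refresh the session well before a typical Splunk session timeout
        scheduler.scheduleAtFixedRate(relogin, 0, 50, TimeUnit.MILLISECONDS);

        Thread.sleep(200);
        scheduler.shutdown();
        scheduler.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println(logins.get() > 0); // true: session refreshed at least once
    }
}
```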



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (NIFI-3349) GetSplunk Should Periodically Re-Authenticate

2017-07-27 Thread Noe (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103153#comment-16103153
 ] 

Noe edited comment on NIFI-3349 at 7/27/17 12:34 PM:
-

GetSplunk is periodically having this issue and I think it is related:
ERROR [Timer-Driven Process Thread-29] o.a.nifi.processors.splunk.GetSplunk 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] failed to process due to 
com.splunk.HttpException: HTTP 401 -- {"messages":[{"type":"WARN","text":"call 
not properly authenticated"}]}; rolling back session: 

This happens on [line 
461|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L461]

I would like to catch the exception and call service.login() as well.




was (Author: ndet...@minerkasch.com):
GetSplunk is periodically having this issue and think it is related:
ERROR [Timer-Driven Process Thread-29] o.a.nifi.processors.splunk.GetSplunk 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] failed to process due to 
com.splunk.HttpException: HTTP 401 -- {"messages":[{"type":"WARN","text":"call 
not properly authenticated"}]}; rolling back session: com.splunk.HttpException: 
HTTP 401 -- {"messages":[{"type":"WARN","text":"call not properly 
authenticated"}]}
This happens on [line 
461|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L461]

Would like to catch and call service.login() as well?



> GetSplunk Should Periodically Re-Authenticate
> -
>
> Key: NIFI-3349
> URL: https://issues.apache.org/jira/browse/NIFI-3349
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.1.1, 1.0.1
>Reporter: Bryan Bende
>Priority: Minor
>
> The first time the processor executes, it lazily initializes the Splunk 
> Service object:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L372-L377
> As part of this initialization, the Splunk service calls a login method like 
> this:
> {code}
> public Service login(String username, String password) {
>     this.username = username;
>     this.password = password;
>     Args args = new Args();
>     args.put("username", username);
>     args.put("password", password);
>     args.put("cookie", "1");
>     ResponseMessage response = post("/services/auth/login", args);
>     String sessionKey = Xml.parse(response.getContent())
>         .getElementsByTagName("sessionKey")
>         .item(0)
>         .getTextContent();
>     this.token = "Splunk " + sessionKey;
>     this.version = this.getInfo().getVersion();
>     if (versionCompare("4.3") >= 0)
>         this.passwordEndPoint = "storage/passwords";
>     return this;
> }
> {code}
> Since this only happens the first time the processor executes, it will only 
> happen again if you stop and start the processor. If the processor has been 
> running long enough, that session has probably expired while the processor 
> continues to attempt to execute.
> We should periodically call service.login() in a timer thread.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (NIFI-3349) GetSplunk Should Periodically Re-Authenticate

2017-07-27 Thread Noe (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103153#comment-16103153
 ] 

Noe edited comment on NIFI-3349 at 7/27/17 12:34 PM:
-

GetSplunk is periodically having this issue and I think it is related:
ERROR [Timer-Driven Process Thread-29] o.a.nifi.processors.splunk.GetSplunk 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] failed to process due to 
com.splunk.HttpException: HTTP 401 -- {"messages":[{"type":"WARN","text":"call 
not properly authenticated"}]}
This happens on [line 
461|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L461]

I would like to catch the exception and call service.login() as well.




was (Author: ndet...@minerkasch.com):
GetSplunk is periodically having this issue and think it is related:
ERROR [Timer-Driven Process Thread-29] o.a.nifi.processors.splunk.GetSplunk 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] failed to process due to 
com.splunk.HttpException: HTTP 401 -- {"messages":[{"type":"WARN","text":"call 
not properly authenticated"}]}; rolling back session: 

This happens on [line 
461|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L461]

Would like to catch and call service.login() as well?



> GetSplunk Should Periodically Re-Authenticate
> -
>
> Key: NIFI-3349
> URL: https://issues.apache.org/jira/browse/NIFI-3349
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.1.1, 1.0.1
>Reporter: Bryan Bende
>Priority: Minor
>
> The first time the processor executes, it lazily initializes the Splunk 
> Service object:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L372-L377
> As part of this initialization, the Splunk service calls a login method like 
> this:
> {code}
> public Service login(String username, String password) {
>     this.username = username;
>     this.password = password;
>     Args args = new Args();
>     args.put("username", username);
>     args.put("password", password);
>     args.put("cookie", "1");
>     ResponseMessage response = post("/services/auth/login", args);
>     String sessionKey = Xml.parse(response.getContent())
>         .getElementsByTagName("sessionKey")
>         .item(0)
>         .getTextContent();
>     this.token = "Splunk " + sessionKey;
>     this.version = this.getInfo().getVersion();
>     if (versionCompare("4.3") >= 0)
>         this.passwordEndPoint = "storage/passwords";
>     return this;
> }
> {code}
> Since this only happens the first time the processor executes, it will only 
> happen again if you stop and start the processor. If the processor has been 
> running long enough, that session has probably expired while the processor 
> continues to attempt to execute.
> We should periodically call service.login() in a timer thread.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3349) GetSplunk Should Periodically Re-Authenticate

2017-07-27 Thread Noe (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103153#comment-16103153
 ] 

Noe commented on NIFI-3349:
---

GetSplunk is periodically having this issue and I think it is related:
ERROR [Timer-Driven Process Thread-29] o.a.nifi.processors.splunk.GetSplunk 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] 
GetSplunk[id=57083ec7-c0ef-152a-a952-2c31764dc675] failed to process due to 
com.splunk.HttpException: HTTP 401 -- {"messages":[{"type":"WARN","text":"call 
not properly authenticated"}]}; rolling back session: com.splunk.HttpException: 
HTTP 401 -- {"messages":[{"type":"WARN","text":"call not properly 
authenticated"}]}
This happens on [line 
461|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L461]

I would like to catch the exception and call service.login() as well.



> GetSplunk Should Periodically Re-Authenticate
> -
>
> Key: NIFI-3349
> URL: https://issues.apache.org/jira/browse/NIFI-3349
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.1.1, 1.0.1
>Reporter: Bryan Bende
>Priority: Minor
>
> The first time the processor executes, it lazily initializes the Splunk 
> Service object:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java#L372-L377
> As part of this initialization, the Splunk service calls a login method like 
> this:
> {code}
> public Service login(String username, String password) {
>     this.username = username;
>     this.password = password;
>     Args args = new Args();
>     args.put("username", username);
>     args.put("password", password);
>     args.put("cookie", "1");
>     ResponseMessage response = post("/services/auth/login", args);
>     String sessionKey = Xml.parse(response.getContent())
>         .getElementsByTagName("sessionKey")
>         .item(0)
>         .getTextContent();
>     this.token = "Splunk " + sessionKey;
>     this.version = this.getInfo().getVersion();
>     if (versionCompare("4.3") >= 0)
>         this.passwordEndPoint = "storage/passwords";
>     return this;
> }
> {code}
> Since this only happens the first time the processor executes, it will only 
> happen again if you stop and start the processor. If the processor has been 
> running long enough, that session has probably expired while the processor 
> continues to attempt to execute.
> We should periodically call service.login() in a timer thread.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-2903) GetKafka cannot handle null value in Kafka offset cause NullPointerException

2017-07-27 Thread Milan Baran (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103068#comment-16103068
 ] 

Milan Baran commented on NIFI-2903:
---

Any progress?

> GetKafka cannot handle null value in Kafka offset cause NullPointerException
> 
>
> Key: NIFI-2903
> URL: https://issues.apache.org/jira/browse/NIFI-2903
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Kurt Hung
>
> The GetKafka processor does not handle a null value in a Kafka offset, which 
> causes the processor to hang and generate a NullPointerException after 30 
> seconds. It's easy to reproduce this issue: I used kafka-python to insert a 
> key-value pair ("abc", None) and created a GetKafka processor to consume the 
> topic I'd created. The issue happens immediately, and moreover, the processor 
> couldn't consume the remaining offsets.
> As a temporary workaround, I customized a GetKafka processor with a "Failure" 
> relationship to handle null values in Kafka offsets and prevent this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4232) AvroRecordSetWriter not properly handling Arrays of Records

2017-07-27 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4232:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> AvroRecordSetWriter not properly handling Arrays of Records
> ---
>
> Key: NIFI-4232
> URL: https://issues.apache.org/jira/browse/NIFI-4232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have JSON coming in that has an Array of complex JSON objects. When I try 
> to convert it to Avro via ConvertRecord, it fails, with the following error:
> {code}
> ConvertRecord[id=4c8b14f0-1027-115d-2dd0-33fb39b2fc23] Failed to process 
> StandardFlowFileRecord[uuid=8506b97c-31fe-4598-b645-a526134b0f9f,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1501080084533-2, container=default, 
> section=2], offset=2192294, 
> length=5646],offset=0,name=286941224561187,size=5646]; will route to failure: 
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
> Cannot convert value [Ljava.lang.Object;@746d38cd of type class 
> [Ljava.lang.Object; because no compatible types exist in the UNION
> {code}
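For context, this kind of conversion involves an Avro union that must admit an array type. A nullable array of records is declared in an Avro schema roughly as follows (the field and record names here are illustrative, not taken from the reporter's data):

```json
{
  "name": "items",
  "type": ["null", {
    "type": "array",
    "items": {
      "type": "record",
      "name": "Item",
      "fields": [ { "name": "id", "type": "string" } ]
    }
  }],
  "default": null
}
```

The writer must match an incoming `Object[]` value against the array branch of such a union; the error above indicates that matching failed before the fix.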



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4232) AvroRecordSetWriter not properly handling Arrays of Records

2017-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102812#comment-16102812
 ] 

ASF GitHub Bot commented on NIFI-4232:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2040


> AvroRecordSetWriter not properly handling Arrays of Records
> ---
>
> Key: NIFI-4232
> URL: https://issues.apache.org/jira/browse/NIFI-4232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have JSON coming in that has an Array of complex JSON objects. When I try 
> to convert it to Avro via ConvertRecord, it fails, with the following error:
> {code}
> ConvertRecord[id=4c8b14f0-1027-115d-2dd0-33fb39b2fc23] Failed to process 
> StandardFlowFileRecord[uuid=8506b97c-31fe-4598-b645-a526134b0f9f,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1501080084533-2, container=default, 
> section=2], offset=2192294, 
> length=5646],offset=0,name=286941224561187,size=5646]; will route to failure: 
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
> Cannot convert value [Ljava.lang.Object;@746d38cd of type class 
> [Ljava.lang.Object; because no compatible types exist in the UNION
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4232) AvroRecordSetWriter not properly handling Arrays of Records

2017-07-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102811#comment-16102811
 ] 

ASF subversion and git services commented on NIFI-4232:
---

Commit 1d6b486b631b897626aee6872e39522eb77fc217 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=1d6b486 ]

NIFI-4232: Ensure that we handle conversions to Avro Arrays properly. Also, if 
unable to convert a value to the expected object, include in the log message 
the (fully qualified) name of the field that is problematic

Signed-off-by: Pierre Villard 

This closes #2040.


> AvroRecordSetWriter not properly handling Arrays of Records
> ---
>
> Key: NIFI-4232
> URL: https://issues.apache.org/jira/browse/NIFI-4232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have JSON coming in that has an Array of complex JSON objects. When I try 
> to convert it to Avro via ConvertRecord, it fails, with the following error:
> {code}
> ConvertRecord[id=4c8b14f0-1027-115d-2dd0-33fb39b2fc23] Failed to process 
> StandardFlowFileRecord[uuid=8506b97c-31fe-4598-b645-a526134b0f9f,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1501080084533-2, container=default, 
> section=2], offset=2192294, 
> length=5646],offset=0,name=286941224561187,size=5646]; will route to failure: 
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
> Cannot convert value [Ljava.lang.Object;@746d38cd of type class 
> [Ljava.lang.Object; because no compatible types exist in the UNION
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2040: NIFI-4232: Ensure that we handle conversions to Avr...

2017-07-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2040


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3335) GenerateTableFetch should allow you to specify an initial Max Value

2017-07-27 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102806#comment-16102806
 ] 

Pierre Villard commented on NIFI-3335:
--

OK, I don't know why there is an issue with this JIRA specifically... but the 
fact is: I cannot resolve/close it.
I think that's the same for you [~mattyb149]. Could you have a look [~joewitt]?

> GenerateTableFetch should allow you to specify an initial Max Value
> ---
>
> Key: NIFI-3335
> URL: https://issues.apache.org/jira/browse/NIFI-3335
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.4.0
>
>
> NIFI-2583 added the ability (via dynamic properties) to specify initial Max 
> Values for columns, to enable the user to "pick up where they left off" if 
> something happened with a flow, a NiFi instance, etc. where the state was 
> stored but the processing did not complete successfully.
> This feature would also be helpful in GenerateTableFetch, which also supports 
> max-value columns.
> Since NIFI-2881 adds incoming flow file support, it's more useful if Initial 
> max values can be specified via flow file attributes. Because if a table name 
> is dynamically passed via flow file attribute and Expression Language, user 
> won't be able to configure dynamic processor attribute in advance for each 
> possible table.
> Add dynamic properties ('initial.maxvalue.' same as 
> QueryDatabaseTable) to specify initial max values statically, and also use 
> incoming flow file attributes named 'initial.maxvalue.' if 
> any. 
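The precedence described above (an incoming flow file attribute named with the `initial.maxvalue.` prefix overriding a statically configured dynamic property of the same name) can be sketched as follows. This is a hypothetical illustration of the lookup order only, not the actual NiFi processor code; the class and method names here are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: resolve the initial max value for a column,
// preferring a flow file attribute over a static dynamic property.
// Names are hypothetical, not taken from the NiFi codebase.
public class InitialMaxValueResolver {

    static final String PREFIX = "initial.maxvalue.";

    // Returns the initial max value for the given column, or null
    // if neither an attribute nor a dynamic property supplies one.
    public static String resolve(Map<String, String> flowFileAttributes,
                                 Map<String, String> dynamicProperties,
                                 String column) {
        String key = PREFIX + column;
        String fromAttribute = flowFileAttributes.get(key);
        if (fromAttribute != null) {
            return fromAttribute;   // attribute wins when present
        }
        return dynamicProperties.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new HashMap<>();
        Map<String, String> props = new HashMap<>();
        props.put("initial.maxvalue.id", "100");

        // Only the static dynamic property is set:
        System.out.println(resolve(attrs, props, "id"));  // prints 100

        // An incoming flow file attribute overrides it:
        attrs.put("initial.maxvalue.id", "250");
        System.out.println(resolve(attrs, props, "id"));  // prints 250
    }
}
```

This attribute-over-property ordering is what makes the feature usable with dynamically passed table names: the upstream flow can supply a per-table starting point without the user pre-configuring a property for every table.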



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (NIFI-3335) GenerateTableFetch should allow you to specify an initial Max Value

2017-07-27 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-3335:


Assignee: Matt Burgess  (was: Pierre Villard)



[jira] [Updated] (NIFI-3335) GenerateTableFetch should allow you to specify an initial Max Value

2017-07-27 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-3335:
-
Fix Version/s: 1.4.0



[jira] [Commented] (NIFI-3335) GenerateTableFetch should allow you to specify an initial Max Value

2017-07-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102797#comment-16102797
 ] 

ASF GitHub Bot commented on NIFI-3335:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2039




[jira] [Commented] (NIFI-3335) GenerateTableFetch should allow you to specify an initial Max Value

2017-07-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16102795#comment-16102795
 ] 

ASF subversion and git services commented on NIFI-3335:
---

Commit dc4006f423b83e7819981569d17e8d90b9a9d1a4 in nifi's branch 
refs/heads/master from [~mattyb149]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=dc4006f ]

NIFI-3335: Add initial.maxvalue support to GenerateTableFetch

Signed-off-by: Pierre Villard 

This closes #2039.




[GitHub] nifi pull request #2039: NIFI-3335: Add initial.maxvalue support to Generate...

2017-07-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2039

