[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175716#comment-16175716
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user yjhyjhyjh0 commented on the issue:

https://github.com/apache/nifi/pull/2166
  
Thanks for the detailed comments and suggestions.
They helped me a lot.

Fixes made per the suggestions:
1 - Put back `String maxValueColumnNames` in GenerateTableFetch to avoid the
NPE unit-test failure.
 (I originally forgot to include this line.)
2 - Moved the re-cache call (super.setup()) before getColumnType to make the
code more readable.
3 - Removed the ternary operator, since evaluateAttributeExpressions already
handles a null flowfile.

Thanks


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced:
> once GenerateTableFetch has stored state (with dynamic table naming via
> Expression Language) and the NiFi instance reboots, the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
>     // This shouldn't happen as we are populating columnTypeMap when the
>     // processor is scheduled or when the first maximum is observed
>     throw new IllegalArgumentException("No column type found for: " + colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can never be
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap*
> across an instance reboot,
> and hence will inevitably hit this exception, roll back the FlowFile, and
> never succeed.
> The QueryDatabaseTable processor will not encounter this exception because it
> calls setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to
> fetch the column type from an empty columnTypeMap.
> ```
> if (!isDynamicTableName && !isDynamicMaxValues) {
>     super.setup(context);
> }
> ```
> I can take the issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2166: NIFI-4395 - GenerateTableFetch can't fetch column type by ...

2017-09-21 Thread yjhyjhyjh0
Github user yjhyjhyjh0 commented on the issue:

https://github.com/apache/nifi/pull/2166
  
Thanks for the detailed comments and suggestions.
They helped me a lot.

Fixes made per the suggestions:
1 - Put back `String maxValueColumnNames` in GenerateTableFetch to avoid the
NPE unit-test failure.
 (I originally forgot to include this line.)
2 - Moved the re-cache call (super.setup()) before getColumnType to make the
code more readable.
3 - Removed the ternary operator, since evaluateAttributeExpressions already
handles a null flowfile.

Thanks


---


[jira] [Commented] (NIFI-3915) Add Kerberos support to the Cassandra processors

2017-09-21 Thread Matt Burgess (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175600#comment-16175600
 ] 

Matt Burgess commented on NIFI-3915:


As far as I can tell, Kerberos support for Cassandra is only available via a 
vendor (DataStax Enterprise), which maintains its own fork of the Apache 
Cassandra driver; that fork does appear to be Apache 2.0-licensed. However, its 
transitive dependencies are older versions than those of the driver currently 
included with the NiFi Cassandra processors (driver version 3.0.2). I'm 
worried that switching to 1.0.6-dse or some other non-Apache-based version 
could cause problems for those using Apache Cassandra clusters.

Perhaps an initial approach is to parameterize the version for 
cassandra-driver-core in nifi-cassandra-processors/pom.xml. Then, as long as 
the driver is truly a drop-in replacement, you could build the 
nifi-cassandra-bundle overriding that property with a vendor-specific version.  
Having said that, you would also need to add a processor property (to 
AbstractCassandraProcessor) for the JAAS Client App name, and some way to use 
the alternative AuthProvider (perhaps an AuthProvider Class Name property, 
defaulted to empty or to the Apache Cassandra default AuthProvider class). I'm 
a bit leery of adding these properties to Apache NiFi when they can only be 
used with a vendor's product, but there is precedent for it (e.g., the 
Shield/X-Pack properties for the Elasticsearch processors).

Any thoughts on this? All comments, questions, and suggestions are welcome :)
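The version-parameterization idea above could look roughly like the following in nifi-cassandra-processors/pom.xml. This is a sketch only: the property name `cassandra.driver.version` is illustrative, not an actual property in the NiFi build.

```xml
<!-- Sketch only: the property name is hypothetical -->
<properties>
    <!-- Defaults to the Apache driver; a vendor build could override it,
         e.g. mvn -Dcassandra.driver.version=1.0.6-dse ... -->
    <cassandra.driver.version>3.0.2</cassandra.driver.version>
</properties>

<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>${cassandra.driver.version}</version>
</dependency>
```

Whether this works in practice depends on the vendor driver truly being a drop-in replacement, as noted above.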

> Add Kerberos support to the Cassandra processors
> 
>
> Key: NIFI-3915
> URL: https://issues.apache.org/jira/browse/NIFI-3915
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.1.0, 1.1.1
> Environment: RHEL 7
>Reporter: RAVINDRA
>Assignee: Matt Burgess
>
> Currently we use the PutCassandraQL processor to persist data into Cassandra.
> We have a requirement to Kerberize the Cassandra cluster.
> Since PutCassandraQL does not support Kerberos, we are having issues
> integrating Cassandra from NiFi.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread cammachusa
Github user cammachusa commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140365671
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +97,29 @@
         .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
         .build();

+    protected static final PropertyDescriptor FLUSH_MODE = new PropertyDescriptor.Builder()
+            .name("Flush Mode")
+            .description("Set the new flush mode for a kudu session\n" +
+                    "AUTO_FLUSH_SYNC: the call returns when the operation is persisted, else it throws an exception.\n" +
+                    "AUTO_FLUSH_BACKGROUND: the call returns when the operation has been added to the buffer. This call should normally perform only fast in-memory" +
+                    " operations but it may have to wait when the buffer is full and there's another buffer being flushed.\n" +
+                    "MANUAL_FLUSH: the call returns when the operation has been added to the buffer, else it throws a KuduException if the buffer is full.")
+            .allowableValues(SessionConfiguration.FlushMode.values())
+            .defaultValue(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND.toString())
+            .required(true)
+            .build();
+
+    protected static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
+            .name("Batch Size")
+            .description("Set the number of operations that can be buffered, between 2 - 10. " +
+                    "Depend on your memory size, and data size per row set an appropriate batch size. " +
+                    "Gradually increase this number to find out your best one for best performance")
+            .defaultValue("100")
--- End diff --

As I noted in the description, it depends on the memory size, the data rows 
being inserted, and also the cluster size. Setting the buffer size too big 
won't help, and neither will too small. As noted, the developer has to find 
this number for their own environment. A lot of people hit a performance peak 
at 50 with a single-machine Kudu cluster. My colleague hit a performance peak 
at 3500 with a 6-node cluster (10 CPUs, 64 GB memory each). I picked 100 
because I saw it in other Put-xxx processors, but I don't want to use 1000, 
since most developers test with a single machine and would leave this default 
value.


---


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175487#comment-16175487
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user paulgibeault commented on the issue:

https://github.com/apache/nifi/pull/2166
  
@ijokarumawak Thank you for the quick review.  We will get right on these 
changes and resubmit.


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced:
> once GenerateTableFetch has stored state (with dynamic table naming via
> Expression Language) and the NiFi instance reboots, the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
>     // This shouldn't happen as we are populating columnTypeMap when the
>     // processor is scheduled or when the first maximum is observed
>     throw new IllegalArgumentException("No column type found for: " + colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can never be
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap*
> across an instance reboot,
> and hence will inevitably hit this exception, roll back the FlowFile, and
> never succeed.
> The QueryDatabaseTable processor will not encounter this exception because it
> calls setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to
> fetch the column type from an empty columnTypeMap.
> ```
> if (!isDynamicTableName && !isDynamicMaxValues) {
>     super.setup(context);
> }
> ```
> I can take the issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2166: NIFI-4395 - GenerateTableFetch can't fetch column type by ...

2017-09-21 Thread paulgibeault
Github user paulgibeault commented on the issue:

https://github.com/apache/nifi/pull/2166
  
@ijokarumawak Thank you for the quick review.  We will get right on these 
changes and resubmit.


---


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread cammachusa
Github user cammachusa commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140362618
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +97,29 @@
         .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
         .build();

+    protected static final PropertyDescriptor FLUSH_MODE = new PropertyDescriptor.Builder()
+            .name("Flush Mode")
+            .description("Set the new flush mode for a kudu session\n" +
+                    "AUTO_FLUSH_SYNC: the call returns when the operation is persisted, else it throws an exception.\n" +
+                    "AUTO_FLUSH_BACKGROUND: the call returns when the operation has been added to the buffer. This call should normally perform only fast in-memory" +
+                    " operations but it may have to wait when the buffer is full and there's another buffer being flushed.\n" +
+                    "MANUAL_FLUSH: the call returns when the operation has been added to the buffer, else it throws a KuduException if the buffer is full.")
+            .allowableValues(SessionConfiguration.FlushMode.values())
+            .defaultValue(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND.toString())
+            .required(true)
+            .build();
+
+    protected static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
+            .name("Batch Size")
+            .description("Set the number of operations that can be buffered, between 2 - 10. " +
+                    "Depend on your memory size, and data size per row set an appropriate batch size. " +
+                    "Gradually increase this number to find out your best one for best performance")
+            .defaultValue("100")
+            .required(true)
+            .addValidator(StandardValidators.createLongValidator(2, 10, true))
--- End diff --

A value of 1 wouldn't make sense. If it is set to 1, the buffer will always 
hold just one item, even though its purpose is to queue up incoming items. 
Second, doing so would significantly degrade performance: the read is always 
faster than the write.


---


[jira] [Commented] (NIFIREG-23) Fix swagger.json output from Build

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-23?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175458#comment-16175458
 ] 

ASF GitHub Bot commented on NIFIREG-23:
---

GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/12

NIFIREG-23: Fix SortParamater and OperationID in swagger.json output

I spent a while trying to figure out how to change the swagger.json output 
without changing the type of the query param from List<SortParameter> to 
List<String>, but in the end this was the only thing that worked for me. So now 
the SortParameter.fromString method is called explicitly.
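A minimal, hypothetical sketch of the pattern described above (names invented; the real NiFi Registry code differs): accept the query parameter as plain strings and convert them explicitly with a fromString factory, so the generated swagger sees a simple type.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SortParamDemo {
    enum Order { ASC, DESC }

    // Stand-in for a sort query-parameter type with a fromString factory.
    static class SortParameter {
        final String field;
        final Order order;

        SortParameter(String field, Order order) {
            this.field = field;
            this.order = order;
        }

        // Parses "field,asc" / "field,desc" style query-param values.
        static SortParameter fromString(String value) {
            String[] parts = value.split(",");
            return new SortParameter(parts[0], Order.valueOf(parts[1].toUpperCase()));
        }
    }

    public static void main(String[] args) {
        // What a resource method might do with raw query-param strings,
        // instead of letting the framework bind List<SortParameter> directly:
        List<String> raw = Arrays.asList("name,asc", "modified,desc");
        List<SortParameter> sorts = raw.stream()
                .map(SortParameter::fromString)
                .collect(Collectors.toList());
        System.out.println(sorts.size() + " " + sorts.get(0).field); // prints: 2 name
    }
}
```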

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-23

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/12.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #12


commit 913912e5db37b07809c83c2f32398157a80eeb8a
Author: Kevin Doran 
Date:   2017-09-21T21:02:07Z

NIFIREG-23: Fix SortParamater and OperationID in swagger.json output




> Fix swagger.json output from Build
> --
>
> Key: NIFIREG-23
> URL: https://issues.apache.org/jira/browse/NIFIREG-23
> Project: NiFi Registry
>  Issue Type: Bug
>Reporter: Kevin Doran
>Assignee: Kevin Doran
>Priority: Minor
>
> The swagger.json that is generated from the build has some errors, such as
> duplicate operationIds and missing type references, when processed by a
> swagger parser.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-registry pull request #12: NIFIREG-23: Fix SortParamater and OperationI...

2017-09-21 Thread kevdoran
GitHub user kevdoran opened a pull request:

https://github.com/apache/nifi-registry/pull/12

NIFIREG-23: Fix SortParamater and OperationID in swagger.json output

I spent a while trying to figure out how to change the swagger.json output 
without changing the type of the query param from List<SortParameter> to 
List<String>, but in the end this was the only thing that worked for me. So now 
the SortParameter.fromString method is called explicitly.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevdoran/nifi-registry NIFIREG-23

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/12.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #12


commit 913912e5db37b07809c83c2f32398157a80eeb8a
Author: Kevin Doran 
Date:   2017-09-21T21:02:07Z

NIFIREG-23: Fix SortParamater and OperationID in swagger.json output




---


[jira] [Created] (NIFIREG-23) Fix swagger.json output from Build

2017-09-21 Thread Kevin Doran (JIRA)
Kevin Doran created NIFIREG-23:
--

 Summary: Fix swagger.json output from Build
 Key: NIFIREG-23
 URL: https://issues.apache.org/jira/browse/NIFIREG-23
 Project: NiFi Registry
  Issue Type: Bug
Reporter: Kevin Doran
Assignee: Kevin Doran
Priority: Minor


The swagger.json that is generated from the build has some errors, such as 
duplicate operationIds and missing type references, when processed by a 
swagger parser.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140355391
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +95,27 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+    protected static final PropertyDescriptor FLUSH_MODE = new PropertyDescriptor.Builder()
+            .name("Flush Mode")
+            .description("Set the new flush mode for a kudu session\n" +
--- End diff --

My only concern is: if we update the Kudu dependency and a new 
SessionConfiguration.FlushMode option is made available in the library, the 
contributor bumping the Kudu version will have to remember to update the 
description otherwise there will be an undocumented option. But I'm fine with 
this approach: it does not make a huge difference.


---


[GitHub] nifi issue #2160: [NiFi-4384] - Enhance PutKudu processor to support batch i...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2160
  
Another comment completely unrelated to this specific PR: could you add the 
tag ``"record"`` in PutKudu ``@Tags`` annotation to be in line with other 
record-oriented processors? It'll help users to list the record processors.


---


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140353856
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +97,29 @@
         .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
         .build();

+    protected static final PropertyDescriptor FLUSH_MODE = new PropertyDescriptor.Builder()
+            .name("Flush Mode")
+            .description("Set the new flush mode for a kudu session\n" +
+                    "AUTO_FLUSH_SYNC: the call returns when the operation is persisted, else it throws an exception.\n" +
+                    "AUTO_FLUSH_BACKGROUND: the call returns when the operation has been added to the buffer. This call should normally perform only fast in-memory" +
+                    " operations but it may have to wait when the buffer is full and there's another buffer being flushed.\n" +
+                    "MANUAL_FLUSH: the call returns when the operation has been added to the buffer, else it throws a KuduException if the buffer is full.")
+            .allowableValues(SessionConfiguration.FlushMode.values())
+            .defaultValue(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND.toString())
+            .required(true)
+            .build();
+
+    protected static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
+            .name("Batch Size")
+            .description("Set the number of operations that can be buffered, between 2 - 10. " +
+                    "Depend on your memory size, and data size per row set an appropriate batch size. " +
+                    "Gradually increase this number to find out your best one for best performance")
+            .defaultValue("100")
--- End diff --

Looking at the AsyncKuduSession class, it seems the default value is 1000. 
Any reason to set it to 100?


---


[jira] [Commented] (NIFI-4360) Add support for Azure Data Lake Store (ADLS)

2017-09-21 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175433#comment-16175433
 ] 

Joseph Witt commented on NIFI-4360:
---

java version "1.8.0_141"
Java(TM) SE Runtime Environment (build 1.8.0_141-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)

Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 
2017-04-03T15:39:06-04:00)
Maven home: /Users/jwitt/Applications/apache-maven-3.5.0
Java version: 1.8.0_141, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_141.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.12.6", arch: "x86_64", family: "mac"

This is the environmental info for where I ran the build/tests.  We need the 
tests to work on mac/win/lin or skip certain tests in environments they're not 
built for.

> Add support for Azure Data Lake Store (ADLS)
> 
>
> Key: NIFI-4360
> URL: https://issues.apache.org/jira/browse/NIFI-4360
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Milan Chandna
>Assignee: Milan Chandna
>  Labels: adls, azure, hdfs
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently, ingress and egress for an ADLS account are possible only via the
> HDFS processors.
> Opening this feature to support separate processors that interact with ADLS
> accounts directly.
> The benefits are many:
> - simpler configuration
> - helps users not familiar with HDFS
> - helps users who currently access ADLS accounts directly
> - uses the ADLS SDK rather than the HDFS client, one less layer to go through
> This can be achieved by adding separate ADLS processors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140353231
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +97,29 @@
         .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
         .build();

+    protected static final PropertyDescriptor FLUSH_MODE = new PropertyDescriptor.Builder()
+            .name("Flush Mode")
+            .description("Set the new flush mode for a kudu session\n" +
+                    "AUTO_FLUSH_SYNC: the call returns when the operation is persisted, else it throws an exception.\n" +
+                    "AUTO_FLUSH_BACKGROUND: the call returns when the operation has been added to the buffer. This call should normally perform only fast in-memory" +
+                    " operations but it may have to wait when the buffer is full and there's another buffer being flushed.\n" +
+                    "MANUAL_FLUSH: the call returns when the operation has been added to the buffer, else it throws a KuduException if the buffer is full.")
+            .allowableValues(SessionConfiguration.FlushMode.values())
+            .defaultValue(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND.toString())
+            .required(true)
+            .build();
+
+    protected static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
+            .name("Batch Size")
+            .description("Set the number of operations that can be buffered, between 2 - 10. " +
+                    "Depend on your memory size, and data size per row set an appropriate batch size. " +
+                    "Gradually increase this number to find out your best one for best performance")
+            .defaultValue("100")
+            .required(true)
+            .addValidator(StandardValidators.createLongValidator(2, 10, true))
--- End diff --

I'm not an expert, but any reason for setting the minimum value to 2? Does 
a value of 1 make sense?


---


[jira] [Commented] (NIFI-4360) Add support for Azure Data Lake Store (ADLS)

2017-09-21 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175431#comment-16175431
 ] 

Joseph Witt commented on NIFI-4360:
---

Understood on the test files being desirable. I'd like to avoid storing 
multi-MB test files; better to have them generated for the tests and then 
destroyed. Even if the text seems really benign, the rules for tracking the 
LICENSE and NOTICE also apply to test artifacts. These would need to be 
licensed in our source LICENSE, and even if they're Apache we'd cite them. It 
isn't clear to me that they are actually ASF-licensed, and from a quick Google 
search they do not appear to be original works of this PR. So, let's just keep 
this simple and not pull in test files from elsewhere.

Thanks
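The generate-and-destroy approach suggested above could be sketched like this (file names and sizes are illustrative, not from the actual PR):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;

public class GeneratedTestFileDemo {
    public static void main(String[] args) throws IOException {
        // Create a throwaway file instead of committing one to the repo;
        // synthetic content is original by construction, so no LICENSE/NOTICE
        // tracking is needed for it.
        Path testFile = Files.createTempFile("adls-test", ".dat");
        try {
            byte[] oneMegabyte = new byte[1024 * 1024];
            Arrays.fill(oneMegabyte, (byte) 'x');
            // Write ~5 MB of generated data for the test to upload.
            for (int i = 0; i < 5; i++) {
                Files.write(testFile, oneMegabyte, StandardOpenOption.APPEND);
            }
            System.out.println(Files.size(testFile)); // prints 5242880
        } finally {
            // Destroy the artifact so nothing lingers after the test run.
            Files.deleteIfExists(testFile);
        }
    }
}
```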

> Add support for Azure Data Lake Store (ADLS)
> 
>
> Key: NIFI-4360
> URL: https://issues.apache.org/jira/browse/NIFI-4360
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Milan Chandna
>Assignee: Milan Chandna
>  Labels: adls, azure, hdfs
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently, ingress and egress for an ADLS account are possible only via the
> HDFS processors.
> Opening this feature to support separate processors that interact with ADLS
> accounts directly.
> The benefits are many:
> - simpler configuration
> - helps users not familiar with HDFS
> - helps users who currently access ADLS accounts directly
> - uses the ADLS SDK rather than the HDFS client, one less layer to go through
> This can be achieved by adding separate ADLS processors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175429#comment-16175429
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2166#discussion_r140349370
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -221,8 +222,12 @@ protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String
         return super.customValidate(validationContext);
     }

-    public void setup(final ProcessContext context) {
-        final String maxValueColumnNames = context.getProperty(MAX_VALUE_COLUMN_NAMES).evaluateAttributeExpressions().getValue();
+    public void setup(final ProcessContext context) {
+        setup(context, true, null);
+    }
+
+    public void setup(final ProcessContext context, boolean shouldCleanCache, FlowFile flowFile) {
+        final String maxValueColumnNames = (flowFile == null) ? context.getProperty(MAX_VALUE_COLUMN_NAMES).evaluateAttributeExpressions().getValue()
+                : context.getProperty(MAX_VALUE_COLUMN_NAMES).evaluateAttributeExpressions(flowFile).getValue();
--- End diff --

I like the idea of being defensive, but flowFile can be null with 
`evaluateAttributeExpressions`, so we don't need this check using the ternary 
operator. Just passing the flowFile (possibly null) is fine, as the 
GenerateTableFetch [existing 
code](https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java#L185)
 does.
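A minimal sketch of the reviewer's point, with simplified stand-ins for NiFi's FlowFile and PropertyValue (not the real API): because the evaluation method itself tolerates a null FlowFile, the caller needs no ternary.

```java
class FlowFile {
    // Simplified stand-in; the real NiFi FlowFile carries attributes.
}

class PropertyValue {
    private final String raw;

    PropertyValue(String raw) {
        this.raw = raw;
    }

    // Null-safe, mirroring NiFi's behavior: with no FlowFile, the expression
    // is evaluated against an empty attribute scope rather than throwing.
    String evaluateAttributeExpressions(FlowFile flowFile) {
        // A real implementation would substitute ${...} expressions from the
        // FlowFile's attributes; here we just return the raw value either way.
        return raw;
    }
}

public class TernaryDemo {
    public static void main(String[] args) {
        PropertyValue prop = new PropertyValue("last_updated");
        FlowFile flowFile = null;

        // Instead of:
        //   (flowFile == null) ? prop.evaluateAttributeExpressions()
        //                      : prop.evaluateAttributeExpressions(flowFile)
        // just pass the possibly-null FlowFile straight through:
        String value = prop.evaluateAttributeExpressions(flowFile);
        System.out.println(value); // prints last_updated
    }
}
```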


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced:
> once GenerateTableFetch has stored state (with dynamic table naming via
> Expression Language) and the NiFi instance reboots, the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
>     // This shouldn't happen as we are populating columnTypeMap when the
>     // processor is scheduled or when the first maximum is observed
>     throw new IllegalArgumentException("No column type found for: " + colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can never be
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap*
> across an instance reboot,
> and hence will inevitably hit this exception, roll back the FlowFile, and
> never succeed.
> The QueryDatabaseTable processor will not encounter this exception because it
> calls setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to
> fetch the column type from an empty columnTypeMap.
> ```
> if (!isDynamicTableName && !isDynamicMaxValues) {
>     super.setup(context);
> }
> ```
> I can take the issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175430#comment-16175430
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2166#discussion_r140352593
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -243,14 +243,14 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
                 maxValueSelectColumns.add("MAX(" + colName + ") " + colName);
                 String maxValue = getColumnStateMaxValue(tableName, statePropertyMap, colName);
                 if (!StringUtils.isEmpty(maxValue)) {
-                    Integer type = getColumnType(tableName, colName);
+                    Integer type = getColumnType(context, tableName, colName, finalFileToProcess);
--- End diff --

Probably, instead of changing `getColumnType`, we can add a call to `setup` if 
`columnTypeMap.isEmpty()` before `getColumnType` here, which makes it more 
readable.
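A hedged sketch of the suggested shape (class, table, and column names invented for illustration): re-populate the cache on demand before the lookup, instead of changing getColumnType's signature.

```java
import java.util.HashMap;
import java.util.Map;

public class LazySetupDemo {
    static final Map<String, Integer> columnTypeMap = new HashMap<>();

    // Stand-in for super.setup(context): re-caches column types from the
    // database metadata.
    static void setup() {
        columnTypeMap.put("mytable.last_updated", java.sql.Types.TIMESTAMP);
    }

    static Integer getColumnType(String tableName, String colName) {
        // Re-cache on demand if the map was lost across a restart, per the
        // review suggestion above.
        if (columnTypeMap.isEmpty()) {
            setup();
        }
        Integer type = columnTypeMap.get(tableName + "." + colName);
        if (type == null) {
            throw new IllegalArgumentException("No column type found for: " + colName);
        }
        return type;
    }

    public static void main(String[] args) {
        // Simulates the post-reboot state: the cache starts empty, yet the
        // lookup succeeds because setup() runs lazily.
        System.out.println(getColumnType("mytable", "last_updated")); // prints 93
    }
}
```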


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced:
> once GenerateTableFetch has stored state (with dynamic table naming via
> Expression Language) and the NiFi instance reboots, the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
>     // This shouldn't happen as we are populating columnTypeMap when the
>     // processor is scheduled or when the first maximum is observed
>     throw new IllegalArgumentException("No column type found for: " + colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can never be
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap*
> across an instance reboot,
> and hence will inevitably hit this exception, roll back the FlowFile, and
> never succeed.
> The QueryDatabaseTable processor will not encounter this exception because it
> calls setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to
> fetch the column type from an empty columnTypeMap.
> ```
> if (!isDynamicTableName && !isDynamicMaxValues) {
>     super.setup(context);
> }
> ```
> I can take the issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2166: NIFI-4395 - GenerateTableFetch can't fetch column t...

2017-09-21 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2166#discussion_r140352593
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -243,14 +243,14 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
                 maxValueSelectColumns.add("MAX(" + colName + ") " + colName);
                 String maxValue = getColumnStateMaxValue(tableName, statePropertyMap, colName);
                 if (!StringUtils.isEmpty(maxValue)) {
-                    Integer type = getColumnType(tableName, colName);
+                    Integer type = getColumnType(context, tableName, colName, finalFileToProcess);
--- End diff --

Probably, instead of changing `getColumnType`, we can add a call to `setup` if 
`columnTypeMap.isEmpty()` before `getColumnType` here, which makes it more 
readable.


---


[GitHub] nifi pull request #2166: NIFI-4395 - GenerateTableFetch can't fetch column t...

2017-09-21 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2166#discussion_r140349370
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -221,8 +222,12 @@ protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String
 return super.customValidate(validationContext);
 }
 
-public void setup(final ProcessContext context) {
-final String maxValueColumnNames = 
context.getProperty(MAX_VALUE_COLUMN_NAMES).evaluateAttributeExpressions().getValue();
+public void setup(final ProcessContext context){
+setup(context,true,null);
+}
+
+public void setup(final ProcessContext context, boolean 
shouldCleanCache,FlowFile flowFile) {
+final String maxValueColumnNames = (flowFile == null) ? 
context.getProperty(MAX_VALUE_COLUMN_NAMES).evaluateAttributeExpressions().getValue()
 : 
context.getProperty(MAX_VALUE_COLUMN_NAMES).evaluateAttributeExpressions(flowFile).getValue();
--- End diff --

I like the idea of being defensive, but flowFile can be null with 
`evaluateAttributeExpressions`, so we don't need this check using ternary 
operator. Just passing the flowFile (possibly null) is fine, as 
GenerateTableFetch [existing 
code](https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java#L185)
 does.
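The contract this comment relies on can be shown with a minimal sketch. This mimics, but is not, the NiFi API: an evaluator that tolerates a null flow file lets callers pass the (possibly null) flow file through unconditionally, making the ternary redundant.

```java
import java.util.Collections;
import java.util.Map;

// Illustrative-only evaluator mimicking the contract of NiFi's
// evaluateAttributeExpressions(FlowFile): it accepts a null flow file,
// so callers need no ternary null-check before invoking it.
class EvalSketch {
    // Trivial stand-in "expression language": ${name} -> attribute value.
    static String evaluate(String rawValue, Map<String, String> flowFileAttrs) {
        Map<String, String> attrs = (flowFileAttrs == null)
                ? Collections.<String, String>emptyMap() : flowFileAttrs;
        if (rawValue.startsWith("${") && rawValue.endsWith("}")) {
            String key = rawValue.substring(2, rawValue.length() - 1);
            return attrs.getOrDefault(key, "");
        }
        return rawValue;
    }
}
```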


---


[jira] [Commented] (NIFI-4360) Add support for Azure Data Lake Store (ADLS)

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175423#comment-16175423
 ] 

ASF GitHub Bot commented on NIFI-4360:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2158
  
Lots of test failures.

Tests run: 19, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 0.847 sec 
<<< FAILURE! - in org.apache.nifi.processors.adls.TestPutADLSFile
testPutConflictReplace(org.apache.nifi.processors.adls.TestPutADLSFile)  
Time elapsed: 0.102 sec  <<< FAILURE!
java.lang.AssertionError:
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=3834110d-e3b1-41cc-9ff4-022da542ae4b=3834110d-e3b1-41cc-9ff4-022da542ae4b=2016-11-01"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.nifi.processors.adls.TestPutADLSFile.testPutConflictReplace(TestPutADLSFile.java:161)

testPutConflictAppend(org.apache.nifi.processors.adls.TestPutADLSFile)  
Time elapsed: 0.01 sec  <<< FAILURE!
java.lang.AssertionError:
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=0c697d9a-fa07-47b7-adfd-877b90e15717=0c697d9a-fa07-47b7-adfd-877b90e15717=2016-11-01"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.nifi.processors.adls.TestPutADLSFile.testPutConflictAppend(TestPutADLSFile.java:215)

testPutConflictFail(org.apache.nifi.processors.adls.TestPutADLSFile)  Time 
elapsed: 0 sec  <<< FAILURE!
java.lang.AssertionError:
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=a6fe439c-b2f1-41c2-ba5d-e7d75836d53b=a6fe439c-b2f1-41c2-ba5d-e7d75836d53b=2016-11-01"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.nifi.processors.adls.TestPutADLSFile.testPutConflictFail(TestPutADLSFile.java:108)

Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.249 sec - 
in org.apache.nifi.ranger.authorization.TestRangerBasePluginWithPolicies
Running org.apache.nifi.ranger.authorization.TestRangerNiFiAuthorizer

Results :

Failed tests:
  TestListADLSFile.testFlowFileAttributes:233 expected:<[\]test> but 
was:<[/]test>
  TestPutADLSFile.testPutConflictAppend:215
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=0c697d9a-fa07-47b7-adfd-877b90e15717=0c697d9a-fa07-47b7-adfd-877b90e15717=2016-11-01"
  TestPutADLSFile.testPutConflictFail:108
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=a6fe439c-b2f1-41c2-ba5d-e7d75836d53b=a6fe439c-b2f1-41c2-ba5d-e7d75836d53b=2016-11-01"
  TestPutADLSFile.testPutConflictReplace:161
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=3834110d-e3b1-41cc-9ff4-022da542ae4b=3834110d-e3b1-41cc-9ff4-022da542ae4b=2016-11-01"




> Add support for Azure Data Lake Store (ADLS)
> 
>
> Key: NIFI-4360
> URL: https://issues.apache.org/jira/browse/NIFI-4360
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Milan Chandna
>Assignee: Milan Chandna
>  Labels: adls, azure, hdfs
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently, ingress and egress for an ADLS account are possible only using HDFS 
> processors.
> Opening this feature to support separate processors for interacting with ADLS 
> accounts directly.
> Benefits are many, for example:
> - simpler configuration;
> - helping users not familiar with HDFS;
> - helping users who currently access ADLS accounts directly;
> - using the ADLS SDK rather than the HDFS client, one less layer to go through.
> This can be achieved by adding separate ADLS processors.





[GitHub] nifi issue #2158: NIFI-4360 Adding support for ADLS Processors. Feature incl...

2017-09-21 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2158
  
Lots of test failures.

Tests run: 19, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 0.847 sec 
<<< FAILURE! - in org.apache.nifi.processors.adls.TestPutADLSFile
testPutConflictReplace(org.apache.nifi.processors.adls.TestPutADLSFile)  
Time elapsed: 0.102 sec  <<< FAILURE!
java.lang.AssertionError:
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=3834110d-e3b1-41cc-9ff4-022da542ae4b=3834110d-e3b1-41cc-9ff4-022da542ae4b=2016-11-01"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.nifi.processors.adls.TestPutADLSFile.testPutConflictReplace(TestPutADLSFile.java:161)

testPutConflictAppend(org.apache.nifi.processors.adls.TestPutADLSFile)  
Time elapsed: 0.01 sec  <<< FAILURE!
java.lang.AssertionError:
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=0c697d9a-fa07-47b7-adfd-877b90e15717=0c697d9a-fa07-47b7-adfd-877b90e15717=2016-11-01"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.nifi.processors.adls.TestPutADLSFile.testPutConflictAppend(TestPutADLSFile.java:215)

testPutConflictFail(org.apache.nifi.processors.adls.TestPutADLSFile)  Time 
elapsed: 0 sec  <<< FAILURE!
java.lang.AssertionError:
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=a6fe439c-b2f1-41c2-ba5d-e7d75836d53b=a6fe439c-b2f1-41c2-ba5d-e7d75836d53b=2016-11-01"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.nifi.processors.adls.TestPutADLSFile.testPutConflictFail(TestPutADLSFile.java:108)

Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.249 sec - 
in org.apache.nifi.ranger.authorization.TestRangerBasePluginWithPolicies
Running org.apache.nifi.ranger.authorization.TestRangerNiFiAuthorizer

Results :

Failed tests:
  TestListADLSFile.testFlowFileAttributes:233 expected:<[\]test> but 
was:<[/]test>
  TestPutADLSFile.testPutConflictAppend:215
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=0c697d9a-fa07-47b7-adfd-877b90e15717=0c697d9a-fa07-47b7-adfd-877b90e15717=2016-11-01"
  TestPutADLSFile.testPutConflictFail:108
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=a6fe439c-b2f1-41c2-ba5d-e7d75836d53b=a6fe439c-b2f1-41c2-ba5d-e7d75836d53b=2016-11-01"
  TestPutADLSFile.testPutConflictReplace:161
Expected: a string containing "%5Csample%5Csample.txt.nifipart"
 but: was 
"/webhdfs/v1/sample/sample.txt.nifipart?op=CREATE=DATA=true=true=3834110d-e3b1-41cc-9ff4-022da542ae4b=3834110d-e3b1-41cc-9ff4-022da542ae4b=2016-11-01"




---


[jira] [Commented] (NIFI-4407) Non-EL statement processed as expression language

2017-09-21 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175418#comment-16175418
 ] 

Pierre Villard commented on NIFI-4407:
--

It seems to be the expected behaviour if we look at 
{{org.apache.nifi.attribute.expression.language.TestQuery}}.
Instead of a bug, it could be something to document in 
https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html

> Non-EL statement processed as expression language
> -
>
> Key: NIFI-4407
> URL: https://issues.apache.org/jira/browse/NIFI-4407
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0, 1.2.0, 1.1.1, 1.0.1, 1.3.0
>Reporter: Pierre Villard
>Priority: Critical
>
> If you take a GFF with custom text: {{test$$foo}}
> The generated text will be: {{test$foo}}
> The property supports expression language and one $ is removed during the EL 
> evaluation step. This can be an issue if a user wants to use a value 
> containing two consecutive $$ (such as in password fields).





[GitHub] nifi-minifi-cpp issue #133: MINIFICPP-67: Merge Content processor

2017-09-21 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker tested the site2site; NiFi receives the right 
header/footer/demarcator.


---


[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175409#comment-16175409
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker tested the site2site; NiFi receives the right 
header/footer/demarcator.


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the NiFi MergeContent processor. It should support 
> at least binary concatenation. It will basically allow a flow running in 
> MiNiFi to group several events at a time and send them to NiFi, to better 
> utilize the network bandwidth. 





[GitHub] nifi issue #2158: NIFI-4360 Adding support for ADLS Processors. Feature incl...

2017-09-21 Thread milanchandna
Github user milanchandna commented on the issue:

https://github.com/apache/nifi/pull/2158
  
Thanks @joewitt

These test resource files contain Lorem Ipsum text.
Anyway, I will create my own if required, but I don't want to delete them, as 
they are required for an important test case.

Thanks.


---


[jira] [Commented] (NIFI-4360) Add support for Azure Data Lake Store (ADLS)

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175402#comment-16175402
 ] 

ASF GitHub Bot commented on NIFI-4360:
--

Github user milanchandna commented on the issue:

https://github.com/apache/nifi/pull/2158
  
Thanks @joewitt

These test resource files contain Lorem Ipsum text.
Anyway, I will create my own if required, but I don't want to delete them, as 
they are required for an important test case.

Thanks.


> Add support for Azure Data Lake Store (ADLS)
> 
>
> Key: NIFI-4360
> URL: https://issues.apache.org/jira/browse/NIFI-4360
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Milan Chandna
>Assignee: Milan Chandna
>  Labels: adls, azure, hdfs
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently, ingress and egress for an ADLS account are possible only using HDFS 
> processors.
> Opening this feature to support separate processors for interacting with ADLS 
> accounts directly.
> Benefits are many, for example:
> - simpler configuration;
> - helping users not familiar with HDFS;
> - helping users who currently access ADLS accounts directly;
> - using the ADLS SDK rather than the HDFS client, one less layer to go through.
> This can be achieved by adding separate ADLS processors.





[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175389#comment-16175389
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2166#discussion_r140346314
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -245,7 +250,9 @@ public void setup(final ProcessContext context) {
 ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
 int numCols = resultSetMetaData.getColumnCount();
 if (numCols > 0) {
+if (shouldCleanCache){
 columnTypeMap.clear();
+}
--- End diff --

Need indentation at the line of `columnTypeMap.clear()`.


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced:
> once GenerateTableFetch has stored state and the NiFi instance reboots
> (with dynamic table naming via expression language),
> the exception will occur.
> The offending source code is listed below.
> ```
> if (type == null) {
> // This shouldn't happen as we are populating columnTypeMap when the 
> processor is scheduled or when the first maximum is observed
> throw new IllegalArgumentException("No column type found for: " + 
> colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can never be 
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap* across an 
> instance reboot.
> Hence it will inevitably get this exception, roll back the FlowFile, and never succeed.
> The QueryDatabaseTable processor does not encounter this exception because it calls 
> setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to fetch the 
> column type from an empty columnTypeMap.
> ---
> if (!isDynamicTableName && !isDynamicMaxValues) {
> super.setup(context);
> }
> ---
> I can take the issue if it is recognized as a bug.





[GitHub] nifi pull request #2166: NIFI-4395 - GenerateTableFetch can't fetch column t...

2017-09-21 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2166#discussion_r140346314
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -245,7 +250,9 @@ public void setup(final ProcessContext context) {
 ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
 int numCols = resultSetMetaData.getColumnCount();
 if (numCols > 0) {
+if (shouldCleanCache){
 columnTypeMap.clear();
+}
--- End diff --

Need indentation at the line of `columnTypeMap.clear()`.


---


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175388#comment-16175388
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2166#discussion_r140344171
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -153,13 +153,11 @@ public GenerateTableFetch() {
 
 @Override
 @OnScheduled
-public void setup(final ProcessContext context) {
-maxValueProperties = 
getDefaultMaxValueProperties(context.getProperties());
--- End diff --

We should keep this line to initialize `maxValueProperties`. I got 
following NPE when I restarted NiFi. Unit tests failed, too:

```
2017-09-22 04:55:35,898 WARN [Timer-Driven Process Thread-1] 
o.a.n.c.t.ContinuallyRunProcessorTask
java.lang.NullPointerException: null
at 
org.apache.nifi.processors.standard.GenerateTableFetch.onTrigger(GenerateTableFetch.java:208)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
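The fix this comment asks for can be sketched like so. This is an illustrative mock with names assumed from the diff, not the real processor: the point is that the overridden `setup()` must keep initializing `maxValueProperties` before any conditional early return, otherwise `onTrigger` dereferences null after a restart (the NPE quoted above).

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: keep the maxValueProperties initialization at the top
// of the overridden setup(); removing it is what caused the quoted NPE.
class SetupSketch {
    Map<String, String> maxValueProperties;

    // Stand-in for getDefaultMaxValueProperties(context.getProperties())
    Map<String, String> getDefaultMaxValueProperties(Map<String, String> props) {
        return new HashMap<>(props);
    }

    void setup(Map<String, String> contextProperties, boolean isDynamicTableName) {
        // Keep this line so the field is never left null:
        maxValueProperties = getDefaultMaxValueProperties(contextProperties);
        if (isDynamicTableName) {
            return; // dynamic table name: defer column-type caching
        }
        // ... column-type caching (super.setup) would happen here ...
    }
}
```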


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced:
> once GenerateTableFetch has stored state and the NiFi instance reboots
> (with dynamic table naming via expression language),
> the exception will occur.
> The offending source code is listed below.
> ```
> if (type == null) {
> // This shouldn't happen as we are populating columnTypeMap when the 
> processor is scheduled or when the first maximum is observed
> throw new IllegalArgumentException("No column type found for: " + 
> colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can never be 
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap* across an 
> instance reboot.
> Hence it will inevitably get this exception, roll back the FlowFile, and never succeed.
> The QueryDatabaseTable processor does not encounter this exception because it calls 
> setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to fetch the 
> column type from an empty columnTypeMap.
> ---
> if (!isDynamicTableName && !isDynamicMaxValues) {
> super.setup(context);
> }
> ---
> I can take the issue if it is recognized as a bug.





[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175387#comment-16175387
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2166#discussion_r140341776
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -402,16 +402,20 @@ private String getColumnStateMaxValue(String 
tableName, Map stat
 return maxValue;
 }
 
-private Integer getColumnType(String tableName, String colName) {
+private Integer getColumnType(final ProcessContext context, String 
tableName, String colName, FlowFile flowFile) {
 final String fullyQualifiedStateKey = getStateKey(tableName, 
colName);
 Integer type = columnTypeMap.get(fullyQualifiedStateKey);
 if (type == null && !isDynamicTableName) {
 // If the table name is static and the fully-qualified key was 
not found, try just the column name
 type = columnTypeMap.get(getStateKey(null, colName));
 }
+if (type == null || columnTypeMap.size() == 0) {
+// This means column type cache is clean after instance 
reboot. We should re-cache column type
+super.setup(context, false, flowFile);
--- End diff --

Calling `setup()` only updates `columnTypeMap`. The `type` variable will 
stay being null here. Doesn't it throw ProcessException? Shouldn't we add `type 
= columnTypeMap.get` after calling setup?
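The reviewer's point can be sketched as follows. This is an illustrative mock, not the real processor code; the class name and the `@!@` state-key delimiter are assumed from the diff context. Repopulating the cache is not enough: the already-null local `type` must be looked up again afterwards, or the caller still sees null.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: after re-running setup() to repopulate the cache,
// re-read `type` from the map, as the reviewer suggests; otherwise the
// local variable stays null and the caller still fails.
class GetColumnTypeSketch {
    final Map<String, Integer> columnTypeMap = new HashMap<>();

    void setup() { // stand-in for super.setup(context, false, flowFile)
        columnTypeMap.put("mytable" + "@!@" + "id", java.sql.Types.INTEGER);
    }

    Integer getColumnType(String tableName, String colName) {
        String key = tableName + "@!@" + colName;
        Integer type = columnTypeMap.get(key);
        if (type == null || columnTypeMap.isEmpty()) {
            setup();                       // re-cache after reboot
            type = columnTypeMap.get(key); // re-read the freshly cached type
        }
        return type;
    }
}
```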


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced:
> once GenerateTableFetch has stored state and the NiFi instance reboots
> (with dynamic table naming via expression language),
> the exception will occur.
> The offending source code is listed below.
> ```
> if (type == null) {
> // This shouldn't happen as we are populating columnTypeMap when the 
> processor is scheduled or when the first maximum is observed
> throw new IllegalArgumentException("No column type found for: " + 
> colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can never be 
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap* across an 
> instance reboot.
> Hence it will inevitably get this exception, roll back the FlowFile, and never succeed.
> The QueryDatabaseTable processor does not encounter this exception because it calls 
> setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to fetch the 
> column type from an empty columnTypeMap.
> ---
> if (!isDynamicTableName && !isDynamicMaxValues) {
> super.setup(context);
> }
> ---
> I can take the issue if it is recognized as a bug.





[GitHub] nifi pull request #2166: NIFI-4395 - GenerateTableFetch can't fetch column t...

2017-09-21 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2166#discussion_r140344171
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -153,13 +153,11 @@ public GenerateTableFetch() {
 
 @Override
 @OnScheduled
-public void setup(final ProcessContext context) {
-maxValueProperties = 
getDefaultMaxValueProperties(context.getProperties());
--- End diff --

We should keep this line to initialize `maxValueProperties`. I got 
following NPE when I restarted NiFi. Unit tests failed, too:

```
2017-09-22 04:55:35,898 WARN [Timer-Driven Process Thread-1] 
o.a.n.c.t.ContinuallyRunProcessorTask
java.lang.NullPointerException: null
at 
org.apache.nifi.processors.standard.GenerateTableFetch.onTrigger(GenerateTableFetch.java:208)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1119)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```


---


[GitHub] nifi pull request #2166: NIFI-4395 - GenerateTableFetch can't fetch column t...

2017-09-21 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2166#discussion_r140341776
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -402,16 +402,20 @@ private String getColumnStateMaxValue(String 
tableName, Map stat
 return maxValue;
 }
 
-private Integer getColumnType(String tableName, String colName) {
+private Integer getColumnType(final ProcessContext context, String 
tableName, String colName, FlowFile flowFile) {
 final String fullyQualifiedStateKey = getStateKey(tableName, 
colName);
 Integer type = columnTypeMap.get(fullyQualifiedStateKey);
 if (type == null && !isDynamicTableName) {
 // If the table name is static and the fully-qualified key was 
not found, try just the column name
 type = columnTypeMap.get(getStateKey(null, colName));
 }
+if (type == null || columnTypeMap.size() == 0) {
+// This means column type cache is clean after instance 
reboot. We should re-cache column type
+super.setup(context, false, flowFile);
--- End diff --

Calling `setup()` only updates `columnTypeMap`. The `type` variable will 
stay being null here. Doesn't it throw ProcessException? Shouldn't we add `type 
= columnTypeMap.get` after calling setup?


---


[jira] [Commented] (NIFI-4360) Add support for Azure Data Lake Store (ADLS)

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175379#comment-16175379
 ] 

ASF GitHub Bot commented on NIFI-4360:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2158
  
@milanchandna am checking into the dependency tree and L&N.  Looks good so 
far.

However, please get rid of the davinci and davinci4MB files.  Their origin 
is unclear and it looks like they come from common ADLS test files.  We 
have to cite them as source material in such cases.  We're better off just 
making up our own original test files OR removing them altogether.

Thanks


> Add support for Azure Data Lake Store (ADLS)
> 
>
> Key: NIFI-4360
> URL: https://issues.apache.org/jira/browse/NIFI-4360
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Milan Chandna
>Assignee: Milan Chandna
>  Labels: adls, azure, hdfs
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently, ingress and egress for an ADLS account are possible only using HDFS 
> processors.
> Opening this feature to support separate processors for interacting with ADLS 
> accounts directly.
> Benefits are many, for example:
> - simpler configuration;
> - helping users not familiar with HDFS;
> - helping users who currently access ADLS accounts directly;
> - using the ADLS SDK rather than the HDFS client, one less layer to go through.
> This can be achieved by adding separate ADLS processors.





[GitHub] nifi issue #2158: NIFI-4360 Adding support for ADLS Processors. Feature incl...

2017-09-21 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2158
  
@milanchandna am checking into the dependency tree and L&N.  Looks good so 
far.

However, please get rid of the davinci and davinci4MB files.  Their origin 
is unclear and it looks like they come from common ADLS test files.  We 
have to cite them as source material in such cases.  We're better off just 
making up our own original test files OR removing them altogether.

Thanks


---


[jira] [Created] (NIFI-4407) Non-EL statement processed as expression language

2017-09-21 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-4407:


 Summary: Non-EL statement processed as expression language
 Key: NIFI-4407
 URL: https://issues.apache.org/jira/browse/NIFI-4407
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.3.0, 1.0.1, 1.1.1, 1.2.0, 1.1.0, 1.0.0
Reporter: Pierre Villard
Priority: Critical


If you take a GFF with custom text: {{test$$foo}}
The generated text will be: {{test$foo}}

The property supports expression language and one $ is removed during the EL 
evaluation step. This can be an issue if a user wants to use a value containing 
two consecutive $$ (such as in password fields).
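The behaviour described matches the Expression Language's escaping rule, where `$$` denotes a literal `$`. A minimal, illustrative-only reproduction of that unescape step (this is a simplification, not the real NiFi EL parser):

```java
// Illustrative sketch: in NiFi Expression Language, "$$" escapes a literal
// "$", so "test$$foo" evaluates to "test$foo" even with no ${...} expression.
class DollarEscapeSketch {
    static String unescapeDollars(String s) {
        return s.replace("$$", "$"); // simplified; real EL parsing is richer
    }
}
```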





[jira] [Commented] (NIFI-4403) Add group name in bulletin data

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175343#comment-16175343
 ] 

ASF GitHub Bot commented on NIFI-4403:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2167
  
Thanks @mcgilman - it should be OK now: group name is only on server side 
and only exposed through the reporting task.


> Add group name in bulletin data
> ---
>
> Key: NIFI-4403
> URL: https://issues.apache.org/jira/browse/NIFI-4403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> At the moment a bulletin includes the following information:
> - timestamp
> - id
> - nodeAddress
> - level
> - category
>   - message
> - groupId
>  - sourceId
>  - sourceName
>  - sourceType
> When S2S is used to redirect bulletins to external monitoring tools it'd be 
> useful to also indicate the group name in addition to the group id.





[GitHub] nifi issue #2167: NIFI-4403 - add group name to bulletins model & S2S bullet...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2167
  
Thanks @mcgilman - it should be OK now: group name is only on server side 
and only exposed through the reporting task.


---


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread cammachusa
Github user cammachusa commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140340082
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -124,11 +148,14 @@ public void OnScheduled(final ProcessContext context) {
    kuduTable = this.getKuduTable(kuduClient, tableName);
    getLogger().debug("Kudu connection successfully initialized");
}
+
--- End diff --

And, feel free to let me know what else should be adjusted


---


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread cammachusa
Github user cammachusa commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140339764
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -124,11 +148,14 @@ public void OnScheduled(final ProcessContext context) {
    kuduTable = this.getKuduTable(kuduClient, tableName);
    getLogger().debug("Kudu connection successfully initialized");
}
+
--- End diff --

Got it, and thank you for your notes @pvillard31 . I will push a fix for
those soon.


---


[jira] [Commented] (NIFIREG-22) Add a count field to VersionedFlow to be populated when retrieving items

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-22?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175321#comment-16175321
 ] 

ASF GitHub Bot commented on NIFIREG-22:
---

GitHub user bbende opened a pull request:

https://github.com/apache/nifi-registry/pull/11

NIFIREG-22 Adding versionCount to VersionedFlow with back-end support…

… for populating it

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi-registry NIFIREG-22

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/11.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #11


commit 07a757bc63c282ad4373f6b1ad160830fe8be64d
Author: Bryan Bende 
Date:   2017-09-21T17:02:26Z

NIFIREG-22 Adding versionCount to VersionedFlow with back-end support for 
populating it




> Add a count field to VersionedFlow to be populated when retrieving items
> 
>
> Key: NIFIREG-22
> URL: https://issues.apache.org/jira/browse/NIFIREG-22
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 0.0.1
>
>
> We should be able to display the number of versions of a flow without 
> returning the list of all the versions. We can add a "versionCount" field to 
> VersionedFlow that can be populated by the database service.





[GitHub] nifi-registry pull request #11: NIFIREG-22 Adding versionCount to VersionedF...

2017-09-21 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi-registry/pull/11

NIFIREG-22 Adding versionCount to VersionedFlow with back-end support…

… for populating it

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi-registry NIFIREG-22

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/11.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #11


commit 07a757bc63c282ad4373f6b1ad160830fe8be64d
Author: Bryan Bende 
Date:   2017-09-21T17:02:26Z

NIFIREG-22 Adding versionCount to VersionedFlow with back-end support for 
populating it




---


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140331750
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -124,11 +148,14 @@ public void OnScheduled(final ProcessContext context) {
    kuduTable = this.getKuduTable(kuduClient, tableName);
    getLogger().debug("Kudu connection successfully initialized");
}
+
--- End diff --

I would add

```java
.expressionLanguageSupported(true)
```

to the table name, Kudu masters, and batch size properties.

Then, when you retrieve the value, you add
``.evaluateAttributeExpressions()`` before ``getValue`` or ``asInteger``.
Example:

```java
tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions().getValue();
```


This way a user can use expression language in the property value to
reference externalized variables: in the variable registry, environment
variables, etc. It's particularly useful when moving a workflow from a
development environment to a production environment: you can set your table
name in a variable "myTable" and keep the same workflow in both environments.
It's just a matter of setting a different value for this variable (easier
than modifying the workflow). And if you have multiple instances of the
processor, you can update all of them by changing the value in one place only.

TBH expression language should be enabled on almost all the properties, as
it really helps the continuous deployment process.
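The externalized-variable idea above can be sketched in plain Java (hypothetical names; NiFi itself resolves ${...} placeholders through its expression language and variable registry):

```java
import java.util.Map;

// Resolves "${name}" placeholders against a variable map, falling back to
// the raw value -- a rough stand-in for what NiFi's expression language
// does with the variable registry when a property has EL enabled.
public class PlaceholderDemo {
    static String resolve(String value, Map<String, String> vars) {
        if (value.startsWith("${") && value.endsWith("}")) {
            String name = value.substring(2, value.length() - 1);
            return vars.getOrDefault(name, value);
        }
        return value;
    }

    public static void main(String[] args) {
        // Same flow definition, different environments: only the variable changes.
        Map<String, String> dev = Map.of("myTable", "orders_dev");
        Map<String, String> prod = Map.of("myTable", "orders");
        System.out.println(resolve("${myTable}", dev));  // orders_dev
        System.out.println(resolve("${myTable}", prod)); // orders
    }
}
```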


---


[jira] [Commented] (NIFI-4360) Add support for Azure Data Lake Store (ADLS)

2017-09-21 Thread Atul Sikaria (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175292#comment-16175292
 ] 

Atul Sikaria commented on NIFI-4360:


+1 for the ADLS interaction of the code.

> Add support for Azure Data Lake Store (ADLS)
> 
>
> Key: NIFI-4360
> URL: https://issues.apache.org/jira/browse/NIFI-4360
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Milan Chandna
>Assignee: Milan Chandna
>  Labels: adls, azure, hdfs
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently, ingress and egress for an ADLS account are possible only using the
> HDFS processors.
> Opening this feature to support separate processors that interact with ADLS
> accounts directly.
> The benefits are many, for example:
> - simpler configuration
> - helping users not familiar with HDFS
> - helping users who currently access ADLS accounts directly
> - using the ADLS SDK rather than the HDFS client: one less layer to go through
> Can be achieved by adding separate ADLS processors.





[jira] [Created] (NIFI-4406) ExecuteScript should implement Searchable interface

2017-09-21 Thread Mark Payne (JIRA)
Mark Payne created NIFI-4406:


 Summary: ExecuteScript should implement Searchable interface
 Key: NIFI-4406
 URL: https://issues.apache.org/jira/browse/NIFI-4406
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Mark Payne


When a user searches in the NiFi UI, he/she will get results that match the 
script being run in ExecuteScript if the script is configured explicitly in the 
processor. However, if the Processor is instead configured to point to a file, 
the search will not search the contents of the file. The ExecuteScript 
processor should be updated to implement Searchable and should search the 
contents of the file also (which is already held as a String in the processor). 
This would allow searching the UI to show results that match the scripts that 
are being run.
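A rough model of what implementing Searchable adds (plain Java with hypothetical names; the real interface lives in the NiFi API): the processor exposes its held script text to UI search and returns labeled matches for a term.

```java
import java.util.ArrayList;
import java.util.List;

// Models the Searchable idea: the component contributes extra text (the
// script body it holds) to UI search and reports labeled matches.
public class ScriptSearchDemo {
    static List<String> search(String term, String scriptBody) {
        List<String> results = new ArrayList<>();
        if (scriptBody != null && scriptBody.toLowerCase().contains(term.toLowerCase())) {
            results.add("Script Body: matched term '" + term + "'");
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(search("flowfile", "flowFile = session.get()"));
    }
}
```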





[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread cammachusa
Github user cammachusa commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140320346
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -124,11 +148,14 @@ public void OnScheduled(final ProcessContext context) {
    kuduTable = this.getKuduTable(kuduClient, tableName);
    getLogger().debug("Kudu connection successfully initialized");
}
+
--- End diff --

@pvillard31 , I'm not quite sure I understand it. What would you suggest
I change?


---


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-21 Thread Paul Gibeault (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175198#comment-16175198
 ] 

Paul Gibeault commented on NIFI-4395:
-

[~ijokarumawak] Are you able to review this change in time for the 1.4.0 
release?
https://github.com/apache/nifi/pull/2166

> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced:
> once GenerateTableFetch has stored state and the NiFi instance reboots
> (with dynamic table naming via expression language), the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
>     // This shouldn't happen as we are populating columnTypeMap when the
>     // processor is scheduled or when the first maximum is observed
>     throw new IllegalArgumentException("No column type found for: " + colName);
> }
> ```
> When this happens, the FlowFile is also grabbed and can never be released or
> observed. The processor can't recover the existing column types in
> *columnTypeMap* across an instance reboot, hence it inevitably hits this
> exception, rolls back the FlowFile, and never succeeds.
> QueryDatabaseTable does not encounter this exception because it calls
> setup(context) every time, while GenerateTableFetch will not pass the condition
> below and thus tries to fetch the column type from a zero-length columnTypeMap.
> ```
> if (!isDynamicTableName && !isDynamicMaxValues) {
>     super.setup(context);
> }
> ```
> I can take the issue if it is recognized as a bug.
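The fix direction discussed in the PR can be sketched in plain Java (simplified, hypothetical names; the real processor re-reads column metadata from the database in setup()):

```java
import java.sql.Types;
import java.util.HashMap;
import java.util.Map;

// If the cached column-type map is empty (e.g. after a restart), re-run
// setup() before looking up a column type instead of failing immediately.
public class ColumnTypeCacheDemo {
    final Map<String, Integer> columnTypeMap = new HashMap<>();

    void setup() {
        // Stand-in for re-reading column metadata from the database.
        columnTypeMap.put("id", Types.INTEGER);
    }

    int getColumnType(String colName) {
        if (columnTypeMap.isEmpty()) {
            setup(); // re-cache rather than throwing on an empty map
        }
        Integer type = columnTypeMap.get(colName);
        if (type == null) {
            throw new IllegalArgumentException("No column type found for: " + colName);
        }
        return type;
    }

    public static void main(String[] args) {
        // A fresh instance models a rebooted node whose in-memory cache is empty.
        System.out.println(new ColumnTypeCacheDemo().getColumnType("id")); // 4 (Types.INTEGER)
    }
}
```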





[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread cammachusa
Github user cammachusa commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140314637
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +95,27 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+protected static final PropertyDescriptor FLUSH_MODE = new PropertyDescriptor.Builder()
+        .name("Flush Mode")
+        .description("Set the new flush mode for a kudu session\n" +
+                "AUTO_FLUSH_SYNC: the call returns when the operation is persisted, else it throws an exception.\n" +
+                "AUTO_FLUSH_BACKGROUND: the call returns when the operation has been added to the buffer. This call should normally perform only fast in-memory" +
+                " operations but it may have to wait when the buffer is full and there's another buffer being flushed.\n" +
+                "MANUAL_FLUSH: the call returns when the operation has been added to the buffer, else it throws a KuduException if the buffer is full.")
+        .allowableValues(SessionConfiguration.FlushMode.values())
+        .defaultValue(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND.toString())
+        .required(true)
+        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+        .build();
+
+protected static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
+        .name("Batch Size")
+        .description("Set the number of operations that can be buffered")
+        .defaultValue("100")
+        .required(true)
+        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
--- End diff --

Excellent!


---


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread cammachusa
Github user cammachusa commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140314337
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +95,27 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+protected static final PropertyDescriptor FLUSH_MODE = new PropertyDescriptor.Builder()
+        .name("Flush Mode")
+        .description("Set the new flush mode for a kudu session\n" +
+                "AUTO_FLUSH_SYNC: the call returns when the operation is persisted, else it throws an exception.\n" +
+                "AUTO_FLUSH_BACKGROUND: the call returns when the operation has been added to the buffer. This call should normally perform only fast in-memory" +
+                " operations but it may have to wait when the buffer is full and there's another buffer being flushed.\n" +
+                "MANUAL_FLUSH: the call returns when the operation has been added to the buffer, else it throws a KuduException if the buffer is full.")
+        .allowableValues(SessionConfiguration.FlushMode.values())
+        .defaultValue(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND.toString())
+        .required(true)
+        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
--- End diff --

Agree!


---


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread cammachusa
Github user cammachusa commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140314045
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +95,27 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+protected static final PropertyDescriptor FLUSH_MODE = new PropertyDescriptor.Builder()
+        .name("Flush Mode")
+        .description("Set the new flush mode for a kudu session\n" +
--- End diff --

@pvillard31 , different methods achieving the same purpose :-) I don't
have a strong opinion. It was the way Ricky suggested in his initial PR #2020 .
I would leave it like that since it looks straightforward :-)


---


[jira] [Created] (NIFIREG-22) Add a count field to VersionedFlow to be populated when retrieving items

2017-09-21 Thread Bryan Bende (JIRA)
Bryan Bende created NIFIREG-22:
--

 Summary: Add a count field to VersionedFlow to be populated when 
retrieving items
 Key: NIFIREG-22
 URL: https://issues.apache.org/jira/browse/NIFIREG-22
 Project: NiFi Registry
  Issue Type: Improvement
Reporter: Bryan Bende
Assignee: Bryan Bende
Priority: Minor


We should be able to display the number of versions of a flow without returning 
the list of all the versions. We can add a "versionCount" field to 
VersionedFlow that can be populated by the database service.





[jira] [Updated] (NIFIREG-22) Add a count field to VersionedFlow to be populated when retrieving items

2017-09-21 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFIREG-22:
---
Fix Version/s: 0.0.1

> Add a count field to VersionedFlow to be populated when retrieving items
> 
>
> Key: NIFIREG-22
> URL: https://issues.apache.org/jira/browse/NIFIREG-22
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 0.0.1
>
>
> We should be able to display the number of versions of a flow without 
> returning the list of all the versions. We can add a "versionCount" field to 
> VersionedFlow that can be populated by the database service.





[jira] [Commented] (NIFI-4403) Add group name in bulletin data

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175094#comment-16175094
 ] 

ASF GitHub Bot commented on NIFI-4403:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2167
  
That is correct. Knowledge of the components (UUID, status, etc) is 
allowed. Knowledge of what those components are/how they are configured is not.


> Add group name in bulletin data
> ---
>
> Key: NIFI-4403
> URL: https://issues.apache.org/jira/browse/NIFI-4403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> At the moment a bulletin includes the following information:
> - timestamp
> - id
> - nodeAddress
> - level
> - category
>   - message
> - groupId
>  - sourceId
>  - sourceName
>  - sourceType
> When S2S is used to redirect bulletins to external monitoring tools it'd be 
> useful to also indicate the group name in addition to the group id.





[GitHub] nifi issue #2167: NIFI-4403 - add group name to bulletins model & S2S bullet...

2017-09-21 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2167
  
That is correct. Knowledge of the components (UUID, status, etc) is 
allowed. Knowledge of what those components are/how they are configured is not.


---


[jira] [Commented] (NIFI-4403) Add group name in bulletin data

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175027#comment-16175027
 ] 

ASF GitHub Bot commented on NIFI-4403:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2167
  
Good point. Just to be sure: we allow the user to see the group UUID, but we
don't want to make the associated group name readable, right? I'll remove the
group name reference in the bulletin entity and bulletin DTO.


> Add group name in bulletin data
> ---
>
> Key: NIFI-4403
> URL: https://issues.apache.org/jira/browse/NIFI-4403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> At the moment a bulletin includes the following information:
> - timestamp
> - id
> - nodeAddress
> - level
> - category
>   - message
> - groupId
>  - sourceId
>  - sourceName
>  - sourceType
> When S2S is used to redirect bulletins to external monitoring tools it'd be 
> useful to also indicate the group name in addition to the group id.





[GitHub] nifi issue #2167: NIFI-4403 - add group name to bulletins model & S2S bullet...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2167
  
Good point. Just to be sure: we allow the user to see the group UUID, but we
don't want to make the associated group name readable, right? I'll remove the
group name reference in the bulletin entity and bulletin DTO.


---


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174995#comment-16174995
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2166
  
Hey @yjhyjhyjh0 - Don't worry about the CI builds - they have been failing for
a while now. We should really find out why and fix it... but I haven't
looked into it yet. Regarding your PR, I think @ijokarumawak is going to review
it when he has time.


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced:
> once GenerateTableFetch has stored state and the NiFi instance reboots
> (with dynamic table naming via expression language), the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
>     // This shouldn't happen as we are populating columnTypeMap when the
>     // processor is scheduled or when the first maximum is observed
>     throw new IllegalArgumentException("No column type found for: " + colName);
> }
> ```
> When this happens, the FlowFile is also grabbed and can never be released or
> observed. The processor can't recover the existing column types in
> *columnTypeMap* across an instance reboot, hence it inevitably hits this
> exception, rolls back the FlowFile, and never succeeds.
> QueryDatabaseTable does not encounter this exception because it calls
> setup(context) every time, while GenerateTableFetch will not pass the condition
> below and thus tries to fetch the column type from a zero-length columnTypeMap.
> ```
> if (!isDynamicTableName && !isDynamicMaxValues) {
>     super.setup(context);
> }
> ```
> I can take the issue if it is recognized as a bug.





[GitHub] nifi issue #2166: NIFI-4395 - GenerateTableFetch can't fetch column type by ...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2166
  
Hey @yjhyjhyjh0 - Don't worry about the CI builds - they have been failing for
a while now. We should really find out why and fix it... but I haven't
looked into it yet. Regarding your PR, I think @ijokarumawak is going to review
it when he has time.


---


[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174976#comment-16174976
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker let me test the NiFi site2site to double check


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the nifi merge content processor. It should support 
> at least binary concatenation. it will basically allow a flow running in 
> minifi to group several events at a time and send them to nifi, to better 
> utilize the network bandwidth. 
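A minimal sketch of the binary-concatenation merge strategy described above (illustrative plain Java, not the MiNiFi C++ implementation), with header/demarcator/footer wrapping:

```java
import java.io.ByteArrayOutputStream;

// Concatenates several payloads into one buffer, wrapping them with a
// header, demarcator, and footer -- the simple "binary concatenation"
// merge strategy described above.
public class BinaryConcatDemo {
    static byte[] merge(byte[] header, byte[] demarcator, byte[] footer, byte[]... payloads) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(header, 0, header.length);
        for (int i = 0; i < payloads.length; i++) {
            if (i > 0) {
                out.write(demarcator, 0, demarcator.length);
            }
            out.write(payloads[i], 0, payloads[i].length);
        }
        out.write(footer, 0, footer.length);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] merged = merge("[".getBytes(), ",".getBytes(), "]".getBytes(),
                "a".getBytes(), "b".getBytes());
        System.out.println(new String(merged)); // [a,b]
    }
}
```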





[GitHub] nifi-minifi-cpp issue #133: MINIFICPP-67: Merge Content processor

2017-09-21 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker let me test the NiFi site2site to double check


---


[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174966#comment-16174966
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker I tied the merge processor to a PutFile processor to save the
content.


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the nifi merge content processor. It should support 
> at least binary concatenation. it will basically allow a flow running in 
> minifi to group several events at a time and send them to nifi, to better 
> utilize the network bandwidth. 





[GitHub] nifi-minifi-cpp issue #133: MINIFICPP-67: Merge Content processor

2017-09-21 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker I tied the merge processor to a PutFile processor to save the
content.


---


[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174959#comment-16174959
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@minifirocks What did you use to open it?  The procedure is the same, but 
the interpretation of that data is not. Seems that we need to at least verify 
that NiFi can interpret what we send it.


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the nifi merge content processor. It should support 
> at least binary concatenation. it will basically allow a flow running in 
> minifi to group several events at a time and send them to nifi, to better 
> utilize the network bandwidth. 





[GitHub] nifi-minifi-cpp issue #133: MINIFICPP-67: Merge Content processor

2017-09-21 Thread phrocker
Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@minifirocks What did you use to open it?  The procedure is the same, but 
the interpretation of that data is not. Seems that we need to at least verify 
that NiFi can interpret what we send it.


---


[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174956#comment-16174956
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker I did not send the file over site2site; I saved it to a file and I
can open and read it OK. Sending these flow files uses the same procedure
that we use to send normal flow files.


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the nifi merge content processor. It should support 
> at least binary concatenation. it will basically allow a flow running in 
> minifi to group several events at a time and send them to nifi, to better 
> utilize the network bandwidth. 





[GitHub] nifi-minifi-cpp issue #133: MINIFICPP-67: Merge Content processor

2017-09-21 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker I did not send the file over site2site; I saved it to a file and I
can open and read it OK. Sending these flow files uses the same procedure
that we use to send normal flow files.


---


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174945#comment-16174945
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user yjhyjhyjh0 commented on the issue:

https://github.com/apache/nifi/pull/2166
  
I don't quite understand the result from travis-ci.
It seems to get stuck at the ANTLR AttributeExpressionParser every time?


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced:
> once GenerateTableFetch has stored state and the NiFi instance reboots
> (with dynamic table naming via expression language), the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
>     // This shouldn't happen as we are populating columnTypeMap when the
>     // processor is scheduled or when the first maximum is observed
>     throw new IllegalArgumentException("No column type found for: " + colName);
> }
> ```
> When this happens, the FlowFile is also grabbed and can never be released or
> observed. The processor can't recover the existing column types in
> *columnTypeMap* across an instance reboot, hence it inevitably hits this
> exception, rolls back the FlowFile, and never succeeds.
> QueryDatabaseTable does not encounter this exception because it calls
> setup(context) every time, while GenerateTableFetch will not pass the condition
> below and thus tries to fetch the column type from a zero-length columnTypeMap.
> ```
> if (!isDynamicTableName && !isDynamicMaxValues) {
>     super.setup(context);
> }
> ```
> I can take the issue if it is recognized as a bug.





[GitHub] nifi issue #2166: NIFI-4395 - GenerateTableFetch can't fetch column type by ...

2017-09-21 Thread yjhyjhyjh0
Github user yjhyjhyjh0 commented on the issue:

https://github.com/apache/nifi/pull/2166
  
I don't quite understand the result from travis-ci.
It seems to get stuck at the ANTLR AttributeExpressionParser every time?


---


[jira] [Updated] (NIFI-4405) GenerateFlowFile should allow charset for custom text

2017-09-21 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4405:
-
Component/s: Extensions

> GenerateFlowFile should allow charset for custom text
> -
>
> Key: NIFI-4405
> URL: https://issues.apache.org/jira/browse/NIFI-4405
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.1.0, 1.2.0, 1.3.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.4.0
>
>
> The user should be allowed to configure a charset used to get the bytes of 
> the custom text.





[jira] [Resolved] (NIFI-4405) GenerateFlowFile should allow charset for custom text

2017-09-21 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-4405.
--
   Resolution: Fixed
Fix Version/s: 1.4.0

> GenerateFlowFile should allow charset for custom text
> -
>
> Key: NIFI-4405
> URL: https://issues.apache.org/jira/browse/NIFI-4405
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.1.0, 1.2.0, 1.3.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.4.0
>
>
> The user should be allowed to configure a charset used to get the bytes of 
> the custom text.





[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174921#comment-16174921
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@minifirocks I did have a question that popped up after I hit approve. 
With the header and footer, you're serializing the data directly. Did 
you have any issues when opening those merged content files in NiFi after they 
were sent via Site To Site?


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the NiFi MergeContent processor. It should support 
> at least binary concatenation. It will basically allow a flow running in 
> MiNiFi to group several events at a time and send them to NiFi, to better 
> utilize the network bandwidth. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)



[jira] [Commented] (NIFI-4403) Add group name in bulletin data

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174911#comment-16174911
 ] 

ASF GitHub Bot commented on NIFI-4403:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2167
  
I don't think we can always add the group name to the BulletinDTO. A user 
could have permissions to the component generating the bulletin but not the 
Process Group the component lives in. Adding it to the Bulletin model 
server-side and exposing it via the ReportingContext should be ok though.

FYI - The source name is fair game, as the BulletinDTO is not set in the 
BulletinEntity when the user lacks permissions to the source. 


> Add group name in bulletin data
> ---
>
> Key: NIFI-4403
> URL: https://issues.apache.org/jira/browse/NIFI-4403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> At the moment a bulletin includes the following information:
> - timestamp
> - id
> - nodeAddress
> - level
> - category
> - message
> - groupId
> - sourceId
> - sourceName
> - sourceType
> When S2S is used to redirect bulletins to external monitoring tools it'd be 
> useful to also indicate the group name in addition to the group id.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
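The permission constraint described in the comment above can be sketched as a simple guard: populate the group name only when the user is authorized to read the Process Group itself, not merely the component that emitted the bulletin. This is illustrative plain Java, not the actual NiFi DTO or authorization API; all names here are hypothetical:

```java
public class BulletinGroupNameSketch {
    // Expose the group name in the DTO only when the user can read the
    // Process Group; otherwise leave it unset (null).
    static String visibleGroupName(boolean canReadGroup, String groupName) {
        return canReadGroup ? groupName : null;
    }

    public static void main(String[] args) {
        System.out.println(visibleGroupName(true, "Ingest"));   // Ingest
        System.out.println(visibleGroupName(false, "Ingest"));  // null
    }
}
```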




[jira] [Commented] (NIFI-4405) GenerateFlowFile should allow charset for custom text

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174909#comment-16174909
 ] 

ASF GitHub Bot commented on NIFI-4405:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2168


> GenerateFlowFile should allow charset for custom text
> -
>
> Key: NIFI-4405
> URL: https://issues.apache.org/jira/browse/NIFI-4405
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.1.0, 1.2.0, 1.3.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> The user should be allowed to configure a charset used to get the bytes of 
> the custom text.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (NIFI-4405) GenerateFlowFile should allow charset for custom text

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174905#comment-16174905
 ] 

ASF GitHub Bot commented on NIFI-4405:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2168
  
+1, merging to master, build with contrib-check OK and confirmed expected 
behavior, thanks @bbende 


> GenerateFlowFile should allow charset for custom text
> -
>
> Key: NIFI-4405
> URL: https://issues.apache.org/jira/browse/NIFI-4405
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.1.0, 1.2.0, 1.3.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> The user should be allowed to configure a charset used to get the bytes of 
> the custom text.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4405) GenerateFlowFile should allow charset for custom text

2017-09-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174908#comment-16174908
 ] 

ASF subversion and git services commented on NIFI-4405:
---

Commit 329dbe3a64f70e283ee131f8313dadf8ec96f3d4 in nifi's branch 
refs/heads/master from [~bbende]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=329dbe3 ]

NIFI-4405 Adding charset property to GenerateFlowFile

Signed-off-by: Pierre Villard 

This closes #2168.


> GenerateFlowFile should allow charset for custom text
> -
>
> Key: NIFI-4405
> URL: https://issues.apache.org/jira/browse/NIFI-4405
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.1.0, 1.2.0, 1.3.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> The user should be allowed to configure a charset used to get the bytes of 
> the custom text.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174902#comment-16174902
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker @apiri please let me know whether it can be merged to master.


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the NiFi MergeContent processor. It should support 
> at least binary concatenation. It will basically allow a flow running in 
> MiNiFi to group several events at a time and send them to NiFi, to better 
> utilize the network bandwidth. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (NIFI-4405) GenerateFlowFile should allow charset for custom text

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174898#comment-16174898
 ] 

ASF GitHub Bot commented on NIFI-4405:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2168
  
reviewing...


> GenerateFlowFile should allow charset for custom text
> -
>
> Key: NIFI-4405
> URL: https://issues.apache.org/jira/browse/NIFI-4405
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.1.0, 1.2.0, 1.3.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> The user should be allowed to configure a charset used to get the bytes of 
> the custom text.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140265302
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -124,11 +148,14 @@ public void OnScheduled(final ProcessContext context) 
{
 kuduTable = this.getKuduTable(kuduClient, tableName);
 getLogger().debug("Kudu connection successfully 
initialized");
 }
+
--- End diff --

I know this is not part of this PR, but would not it make sense to allow 
expression language on table name and Kudu masters? (not with an evaluation 
against flow files but just against the variable registry and such in case 
someone wants to externalize the values between environments). Same remark for 
batch size?


---
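For illustration, evaluating a property against the variable registry only (no flowfile) boils down to substituting `${...}` references from a fixed map. In NiFi this would be done by marking the descriptor with `.expressionLanguageSupported(true)` and calling `context.getProperty(...).evaluateAttributeExpressions()`; the sketch below merely mimics the substitution step in plain Java, with hypothetical variable names:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VariableRegistrySketch {
    // Replace each ${var} reference with its value from the registry map,
    // or the empty string when the variable is undefined.
    static String evaluate(String value, Map<String, String> registry) {
        Matcher m = Pattern.compile("\\$\\{([^}]+)}").matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(
                    registry.getOrDefault(m.group(1), "")));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // Externalizing Kudu masters per environment, as the review suggests.
        Map<String, String> registry = Map.of("kudu.masters", "kudu-prod:7051");
        System.out.println(evaluate("${kudu.masters}", registry)); // kudu-prod:7051
    }
}
```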


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140267236
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +95,27 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+protected static final PropertyDescriptor FLUSH_MODE = new 
PropertyDescriptor.Builder()
+.name("Flush Mode")
+.description("Set the new flush mode for a kudu session\n" +
+"AUTO_FLUSH_SYNC: the call returns when the operation 
is persisted, else it throws an exception.\n" +
+"AUTO_FLUSH_BACKGROUND: the call returns when the 
operation has been added to the buffer. This call should normally perform only 
fast in-memory" +
+" operations but it may have to wait when the buffer 
is full and there's another buffer being flushed.\n" +
+"MANUAL_FLUSH: the call returns when the operation has 
been added to the buffer, else it throws a KuduException if the buffer is 
full.")
+.allowableValues(SessionConfiguration.FlushMode.values())
+
.defaultValue(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND.toString())
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
--- End diff --

I don't think the validator is required when you allow a list of values and 
define a default one.


---


[jira] [Commented] (NIFI-4405) GenerateFlowFile should allow charset for custom text

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174885#comment-16174885
 ] 

ASF GitHub Bot commented on NIFI-4405:
--

GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/2168

NIFI-4405 Adding charset property to GenerateFlowFile

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-4405

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2168.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2168


commit f5230b9c45a6f24659efdc4e8c9edc8bdcfecb20
Author: Bryan Bende 
Date:   2017-09-21T14:57:03Z

NIFI-4405 Adding charset property to GenerateFlowFile




> GenerateFlowFile should allow charset for custom text
> -
>
> Key: NIFI-4405
> URL: https://issues.apache.org/jira/browse/NIFI-4405
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.1.0, 1.2.0, 1.3.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> The user should be allowed to configure a charset used to get the bytes of 
> the custom text.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140266879
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +95,27 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+protected static final PropertyDescriptor FLUSH_MODE = new 
PropertyDescriptor.Builder()
+.name("Flush Mode")
+.description("Set the new flush mode for a kudu session\n" +
--- End diff --

Would it be cleaner to define AllowableValue objects and set the 
description of each allowable value instead of doing it in the property 
description? (example in AbstractPutHBase). It's just a general question... I 
don't have a strong opinion on this.


---
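The suggestion above — a description per allowable value instead of packing all three explanations into the property description — can be sketched with a stand-in enum. NiFi's actual mechanism would be the AllowableValue(value, displayName, description) constructor as referenced from AbstractPutHBase; this plain-Java version just shows the shape without depending on nifi-api:

```java
public class FlushModeSketch {
    // Stand-in for NiFi's AllowableValue: each option carries its own
    // description so the UI can render it next to the value.
    enum FlushMode {
        AUTO_FLUSH_SYNC("The call returns when the operation is persisted, else it throws an exception."),
        AUTO_FLUSH_BACKGROUND("The call returns when the operation has been added to the buffer; it may wait while a full buffer is flushed."),
        MANUAL_FLUSH("The call returns when the operation has been added to the buffer, else it throws a KuduException if the buffer is full.");

        final String description;
        FlushMode(String description) { this.description = description; }
    }

    public static void main(String[] args) {
        for (FlushMode mode : FlushMode.values()) {
            System.out.println(mode.name() + ": " + mode.description);
        }
    }
}
```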


[GitHub] nifi pull request #2160: [NiFi-4384] - Enhance PutKudu processor to support ...

2017-09-21 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2160#discussion_r140263816
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -94,6 +95,27 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+protected static final PropertyDescriptor FLUSH_MODE = new 
PropertyDescriptor.Builder()
+.name("Flush Mode")
+.description("Set the new flush mode for a kudu session\n" +
+"AUTO_FLUSH_SYNC: the call returns when the operation 
is persisted, else it throws an exception.\n" +
+"AUTO_FLUSH_BACKGROUND: the call returns when the 
operation has been added to the buffer. This call should normally perform only 
fast in-memory" +
+" operations but it may have to wait when the buffer 
is full and there's another buffer being flushed.\n" +
+"MANUAL_FLUSH: the call returns when the operation has 
been added to the buffer, else it throws a KuduException if the buffer is 
full.")
+.allowableValues(SessionConfiguration.FlushMode.values())
+
.defaultValue(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND.toString())
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("Batch Size")
+.description("Set the number of operations that can be 
buffered")
+.defaultValue("100")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
--- End diff --

I'd suggest an integer validator with a range. And also add in the 
description what would be the behaviour if a user sets the value 0.


---
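A sketch of what an integer range validator would check for Batch Size. NiFi provides StandardValidators.createLongValidator(min, max, inclusive) for this purpose; the bounds below are illustrative assumptions, not values from the PR:

```java
public class BatchSizeValidatorSketch {
    // Accept only integers within [min, max]; reject non-numeric input,
    // mirroring what a range validator would report as invalid.
    static boolean isValidBatchSize(String input, long min, long max) {
        try {
            long v = Long.parseLong(input);
            return v >= min && v <= max;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidBatchSize("100", 1, 100000)); // true
        System.out.println(isValidBatchSize("0", 1, 100000));   // false: below range
        System.out.println(isValidBatchSize("abc", 1, 100000)); // false: not a number
    }
}
```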




[jira] [Created] (NIFI-4405) GenerateFlowFile should allow charset for custom text

2017-09-21 Thread Bryan Bende (JIRA)
Bryan Bende created NIFI-4405:
-

 Summary: GenerateFlowFile should allow charset for custom text
 Key: NIFI-4405
 URL: https://issues.apache.org/jira/browse/NIFI-4405
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 1.3.0, 1.2.0, 1.1.0
Reporter: Bryan Bende
Assignee: Bryan Bende
Priority: Minor


The user should be allowed to configure a charset used to get the bytes of the 
custom text.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4403) Add group name in bulletin data

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174873#comment-16174873
 ] 

ASF GitHub Bot commented on NIFI-4403:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2167

NIFI-4403 - add group name to bulletins model & S2S bulletin reporting task

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-4403

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2167.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2167


commit f93ea715eb186519579d3bbfa8ab230aa4e28143
Author: Pierre Villard 
Date:   2017-09-21T12:12:29Z

NIFI-4403 - add group name to bulletins model




> Add group name in bulletin data
> ---
>
> Key: NIFI-4403
> URL: https://issues.apache.org/jira/browse/NIFI-4403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> At the moment a bulletin includes the following information:
> - timestamp
> - id
> - nodeAddress
> - level
> - category
> - message
> - groupId
> - sourceId
> - sourceName
> - sourceType
> When S2S is used to redirect bulletins to external monitoring tools it'd be 
> useful to also indicate the group name in addition to the group id.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4403) Add group name in bulletin data

2017-09-21 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4403:
-
Status: Patch Available  (was: Open)

> Add group name in bulletin data
> ---
>
> Key: NIFI-4403
> URL: https://issues.apache.org/jira/browse/NIFI-4403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> At the moment a bulletin includes the following information:
> - timestamp
> - id
> - nodeAddress
> - level
> - category
> - message
> - groupId
> - sourceId
> - sourceName
> - sourceType
> When S2S is used to redirect bulletins to external monitoring tools it'd be 
> useful to also indicate the group name in addition to the group id.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Resolved] (NIFI-4396) PutElasticsearchHttp Processor: Type property isn't expanding nifi language expressions

2017-09-21 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall resolved NIFI-4396.

Resolution: Cannot Reproduce

> PutElasticsearchHttp Processor: Type property isn't expanding nifi language 
> expressions
> ---
>
> Key: NIFI-4396
> URL: https://issues.apache.org/jira/browse/NIFI-4396
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Joe Warner
>Priority: Minor
>
> Despite the documentation saying that the PutElasticsearchHttp processor 
> should support expression language, this isn't happening. The Index property 
> works correctly, however. Looking at the source, I notice that these two 
> properties use different validators and wonder if the issue could be related.
> Technically I guess this isn't a 'major' issue, but it is quite painful to 
> use it without the replacement happening.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4396) PutElasticsearchHttp Processor: Type property isn't expanding nifi language expressions

2017-09-21 Thread Joseph Percivall (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174860#comment-16174860
 ] 

Joseph Percivall commented on NIFI-4396:


Not a problem, I'm glad it's working for you. I'm going to close the ticket.

> PutElasticsearchHttp Processor: Type property isn't expanding nifi language 
> expressions
> ---
>
> Key: NIFI-4396
> URL: https://issues.apache.org/jira/browse/NIFI-4396
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Joe Warner
>Priority: Minor
>
> Despite the documentation saying that the PutElasticsearchHttp processor 
> should support expression language, this isn't happening. The Index property 
> works correctly, however. Looking at the source, I notice that these two 
> properties use different validators and wonder if the issue could be related.
> Technically I guess this isn't a 'major' issue, but it is quite painful to 
> use it without the replacement happening.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4391) PutTCP not properly closing connections

2017-09-21 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4391:
-
Component/s: Extensions

> PutTCP not properly closing connections
> ---
>
> Key: NIFI-4391
> URL: https://issues.apache.org/jira/browse/NIFI-4391
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
> Fix For: 1.4.0
>
>
> Thread from the mailing list...
> We are using NiFi PutTCP processors to send messages to a number of Moxa
> onCell ip gateway devices.
> These Moxa devices are running on a cellular network with not always the
> most ideal connection. The Moxa only allows for a maximum of 2 simultaneous
> client connections.
> What we notice is that although we specify connection / read timeouts on
> both PutTCP and the Moxa, that sometimes a connection get "stuck". (In the
> moxa network monitoring we see 2 client sockets coming from PutTCP in the
> ESTABLISHED state that never go away).
> This doesn't always happen, but often enough for it to be considered a
> problem, as it requires a restart of the moxa ports to clear the connections
> (manual step). It typically happens when PutTCP experiences a Timeout.
> On the PutTCP processors we have the following settings :
> - Idle Connection Expiration : 30 seconds  (we've set this higher due to bad
> gprs connection)
> - Timeout : 10 seconds (this is only used as a timeout for establishing the
> connection)
> On the Moxas we have
> - TCP alive check time : 2min (this should force the Moxa to close the
> socket)
> Yet for some reason the connection remains established.
> Here's what I found out :
> On the moxa I noticed a connection (with client port 48440) that is in
> ESTABLISHED mode for 4+ hours. (blocking other connections). On the Moxa I
> can see when the connection was established :
> 2017/09/17 14:20:29 [OpMode] Port01 Connect 10.192.2.90:48440
> I can track that down in Nifi via the logs (unfortunately PutTCP doesn't log
> client ports, but from the timestamp  I'm sure it's this connection :
> {code}
> 2017-09-17 14:20:10,837 DEBUG [Timer-Driven Process Thread-10]
> o.apache.nifi.processors.standard.PutTCP
> PutTCP[id=80231a39-1008-1159-a6fa-1f9e3751d608] No available connections,
> creating a new one...
> 2017-09-17 14:20:20,860 ERROR [Timer-Driven Process Thread-10]
> o.apache.nifi.processors.standard.PutTCP
> PutTCP[id=80231a39-1008-1159-a6fa-1f9e3751d608] No available connections,
> and unable to create a new one, transferring
> StandardFlowFileRecord[uuid=79f2a166-5211-4d2d-9275-03f0ce4d5b29,claim=StandardContentClaim
> [resourceClaim=StandardResourceClaim[id=1505641210025-1, container=default,
> section=1], offset=84519, length=9],offset=0,name=23934743676390659,size=9]
> to failure: java.net.SocketTimeoutException: Timed out connecting to
> 10.32.133.40:4001
> 2017-09-17 14:20:20,860 ERROR [Timer-Driven Process Thread-10]
> o.apache.nifi.processors.standard.PutTCP
> java.net.SocketTimeoutException: Timed out connecting to 10.32.133.40:4001
> at
> org.apache.nifi.processor.util.put.sender.SocketChannelSender.open(SocketChannelSender.java:66)
> ~[nifi-processor-utils-1.1.0.jar:1.1.0]
> at
> org.apache.nifi.processor.util.put.AbstractPutEventProcessor.createSender(AbstractPutEventProcessor.java:312)
> ~[nifi-processor-utils-1.1.0.jar:1.1.0]
> at
> org.apache.nifi.processors.standard.PutTCP.createSender(PutTCP.java:121)
> [nifi-standard-processors-1.1.0.jar:1.1.0]
> at
> org.apache.nifi.processor.util.put.AbstractPutEventProcessor.acquireSender(AbstractPutEventProcessor.java:334)
> ~[nifi-processor-utils-1.1.0.jar:1.1.0]
> at
> org.apache.nifi.processors.standard.PutTCP.onTrigger(PutTCP.java:176)
> [nifi-standard-processors-1.1.0.jar:1.1.0]
> at
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
> [nifi-framework-core-1.1.0.jar:1.1.0]
> at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
> [nifi-framework-core-1.1.0.jar:1.1.0]
> at
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> [nifi-framework-core-1.1.0.jar:1.1.0]
> at
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
> [nifi-framework-core-1.1.0.jar:1.1.0]
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [na:1.8.0_111]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> [na:1.8.0_111]
> at
> 
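The leak described above follows a common pattern: a blocking connect with a timeout fails, but the socket that was partially opened is never closed, so the remote end keeps the connection in ESTABLISHED. A minimal sketch of the safe version of that pattern (this is an illustrative helper, not NiFi's actual code; the class and method names are hypothetical):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SafeConnect {
    // Hypothetical helper: open a socket with a connect timeout, making sure
    // the underlying descriptor is released if the connect attempt fails.
    // Without the close() in the catch block, a timed-out connect can leave a
    // half-open connection behind -- the symptom described in this thread.
    public static Socket openWithTimeout(String host, int port, int timeoutMillis)
            throws IOException {
        Socket socket = new Socket();
        try {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return socket;
        } catch (IOException e) {
            socket.close(); // release the descriptor before propagating
            throw e;
        }
    }
}
```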

[jira] [Commented] (NIFI-4391) PutTCP not properly closing connections

2017-09-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174853#comment-16174853
 ] 

ASF subversion and git services commented on NIFI-4391:
---

Commit a813ae113e4d1dfb797845136d65521eb8dc60bb in nifi's branch 
refs/heads/master from [~bbende]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=a813ae1 ]

NIFI-4391 Ensuring channel is closed when unable to connect in 
SocketChannelSender
NIFI-4391 Adding debug logging of client port upon connection

Signed-off-by: Pierre Villard 

This closes #2159.
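The commit message above describes closing the channel when the connect attempt fails. A sketch of that pattern using NIO (illustrative only, assuming the general shape of the fix; this is not the actual SocketChannelSender source):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class ChannelOpener {
    // Sketch of the pattern named in the commit message: if connect() fails,
    // close the channel before rethrowing so no half-open connection remains
    // on the remote side.
    public static SocketChannel open(String host, int port) throws IOException {
        SocketChannel channel = SocketChannel.open();
        try {
            channel.connect(new InetSocketAddress(host, port));
            return channel;
        } catch (IOException e) {
            channel.close(); // the fix: release the channel on failure
            throw e;
        }
    }
}
```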



[jira] [Updated] (NIFI-4391) PutTCP not properly closing connections

2017-09-21 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4391:
-
   Resolution: Fixed
Fix Version/s: 1.4.0
   Status: Resolved  (was: Patch Available)


[jira] [Commented] (NIFI-4391) PutTCP not properly closing connections

2017-09-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174854#comment-16174854
 ] 

ASF subversion and git services commented on NIFI-4391:
---

Commit a813ae113e4d1dfb797845136d65521eb8dc60bb in nifi's branch 
refs/heads/master from [~bbende]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=a813ae1 ]

NIFI-4391 Ensuring channel is closed when unable to connect in 
SocketChannelSender
NIFI-4391 Adding debug logging of client port upon connection

Signed-off-by: Pierre Villard 

This closes #2159.



[jira] [Commented] (NIFI-4391) PutTCP not properly closing connections

2017-09-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16174855#comment-16174855
 ] 

ASF GitHub Bot commented on NIFI-4391:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2159

