[jira] [Commented] (NIFI-4174) GenerateTableFetch does not work with oracle on Nifi 1.2

2017-07-12 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085196#comment-16085196
 ] 

Jorge Machado commented on NIFI-4174:
-------------------------------------

Yeah, you are right. 

One question: is there a processor that can just execute a list of SQL 
statements? That would be nice.
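No such stock processor is named in this thread, but the first step such a "run a list of SQL statements" processor would need is splitting a script into individual statements. The sketch below is a simplified illustration (it does not handle semicolons inside string literals); the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class SqlStatementListDemo {
    // Splits a semicolon-separated SQL script into individual statements.
    // Simplified: semicolons inside quoted string literals are not handled.
    static List<String> splitStatements(String script) {
        List<String> statements = new ArrayList<>();
        for (String part : script.split(";")) {
            String sql = part.trim();
            if (!sql.isEmpty()) {
                statements.add(sql);
            }
        }
        return statements;
    }

    public static void main(String[] args) {
        List<String> stmts = splitStatements(
                "CREATE TABLE t (id INT); INSERT INTO t VALUES (1);");
        // Each entry could then be executed in turn with plain JDBC, e.g.:
        //   try (Statement st = connection.createStatement()) {
        //       for (String sql : stmts) st.execute(sql);
        //   }
        System.out.println(stmts);
    }
}
```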

> GenerateTableFetch does not work with oracle on Nifi 1.2
> 
>
> Key: NIFI-4174
> URL: https://issues.apache.org/jira/browse/NIFI-4174
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Jorge Machado
>Priority: Minor
>
> I'm trying to extract some data from an Oracle DB.
> I'm getting:
> {code:java}
> 2017-07-11 16:19:29,612 WARN [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Timed out while waiting for 
> OnScheduled of 'GenerateTableFetch' processor to finish. An attempt is made 
> to cancel the task via Thread.interrupt(). However it does not guarantee that 
> the task will be canceled since the code inside current OnScheduled operation 
> may have been written to ignore interrupts which may result in a runaway 
> thread. This could lead to more issues, eventually requiring NiFi to be 
> restarted. This is usually a bug in the target Processor 
> 'GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4]' that needs to 
> be documented, reported and eventually fixed.
> 2017-07-11 16:19:29,612 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.p.standard.GenerateTableFetch 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] failed to invoke 
> @OnScheduled method due to java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.; processor will not be 
> scheduled to run for 30 seconds: java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
>   ... 9 common frames omitted
> 2017-07-11 16:19:29,613 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Failed to invoke @OnScheduled method 
> due to java.lang.RuntimeException: Timed out while executing one of 
> processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> 

[jira] [Resolved] (NIFI-4174) GenerateTableFetch does not work with oracle on Nifi 1.2

2017-07-12 Thread Jorge Machado (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorge Machado resolved NIFI-4174.
-------------------------------------
Resolution: Won't Fix

> GenerateTableFetch does not work with oracle on Nifi 1.2
> 
>
> Key: NIFI-4174
> URL: https://issues.apache.org/jira/browse/NIFI-4174
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Jorge Machado
>Priority: Minor
>
> I'm trying to extract some data from an Oracle DB.
> I'm getting:
> {code:java}
> 2017-07-11 16:19:29,612 WARN [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Timed out while waiting for 
> OnScheduled of 'GenerateTableFetch' processor to finish. An attempt is made 
> to cancel the task via Thread.interrupt(). However it does not guarantee that 
> the task will be canceled since the code inside current OnScheduled operation 
> may have been written to ignore interrupts which may result in a runaway 
> thread. This could lead to more issues, eventually requiring NiFi to be 
> restarted. This is usually a bug in the target Processor 
> 'GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4]' that needs to 
> be documented, reported and eventually fixed.
> 2017-07-11 16:19:29,612 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.p.standard.GenerateTableFetch 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] failed to invoke 
> @OnScheduled method due to java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.; processor will not be 
> scheduled to run for 30 seconds: java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
>   ... 9 common frames omitted
> 2017-07-11 16:19:29,613 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Failed to invoke @OnScheduled method 
> due to java.lang.RuntimeException: Timed out while executing one of 
> processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
>   ... 9 common frames omitted
> {code}
> Database Connection Pooling Service:
> 
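The warning in the log above describes NiFi's pattern of running @OnScheduled with a deadline and attempting cancellation via Thread.interrupt(), which only works if the task honors interruption. This is a simplified model of that pattern using plain java.util.concurrent, not NiFi's actual invokeTaskAsCancelableFuture; the class and method names are hypothetical.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class OnScheduledTimeoutDemo {
    // Runs a task with a deadline; on timeout, cancel(true) delivers an
    // interrupt, which (as the log warns) is only best-effort: a task that
    // ignores interrupts becomes a runaway thread.
    static String invokeWithTimeout(Callable<String> task, long millis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(task);
        try {
            return future.get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // best-effort interrupt of the running task
            return "timed out";
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A task that sleeps past the deadline, like a slow @OnScheduled method.
        String result = invokeWithTimeout(() -> {
            Thread.sleep(5_000);
            return "done";
        }, 100);
        System.out.println(result);
    }
}
```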

[GitHub] nifi issue #1983: NiFi-2829: Add Date and Time Format Support for PutSQL

2017-07-12 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1983
  
@yjhyjhyjh0 Thanks for the updates!

Regarding the AppVeyor test, it has been failing for a while, and the Travis 
failure is caused by nifi-persistent-provenance-repository, which has also been 
failing occasionally, so please don't worry about those.

I'll continue reviewing.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085168#comment-16085168
 ] 

ASF GitHub Bot commented on NIFI-1613:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1976
  
@mattyb149 I confirmed that the changes are merged; closing. Thanks for 
reviewing!


> ConvertJSONToSQL Drops Type Information
> ---
>
> Key: NIFI-1613
> URL: https://issues.apache.org/jira/browse/NIFI-1613
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1, 0.5.1
> Environment: Ubuntu 14.04 LTS
>Reporter: Aaron Stephens
>Assignee: Toivo Adams
>  Labels: ConvertJSONToSQL, Phoenix, SQL
> Fix For: 1.4.0
>
>
> It appears that the ConvertJSONToSQL processor is turning Boolean (and 
> possibly Integer and Float) values into Strings.  This is okay for some 
> drivers (like PostgreSQL) which can coerce a String back into a Boolean, but 
> it causes issues for others (specifically Phoenix in my case).
> {noformat}
> org.apache.phoenix.schema.ConstraintViolationException: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282)
>  ~[na:na]
> at 
> org.apache.phoenix.schema.types.PBoolean.toObject(PBoolean.java:136) ~[na:na]
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.setObject(PhoenixPreparedStatement.java:442)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameter(PutSQL.java:728) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameters(PutSQL.java:606) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:223) ~[na:na]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  ~[nifi-api-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1146)
>  ~[nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:139)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_79]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
> [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_79]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 
> (22005): Type mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:71)
>  ~[na:na]
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>  ~[na:na]
> ... 20 common frames omitted
> {noformat}
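The coercion difference the reporter describes can be modeled with a toy binder: a lenient driver (PostgreSQL-like) converts a String back to the column type, while a strict one (Phoenix-like) rejects a String bound to a BOOLEAN column. This is not Phoenix's or PostgreSQL's actual code, just an illustration of the behavior; all names here are hypothetical.

```java
import java.sql.Types;

public class CoercionDemo {
    // Lenient binding (PostgreSQL-like): a String is coerced to the column type.
    static Object lenientBind(Object value, int sqlType) {
        if (sqlType == Types.BOOLEAN && value instanceof String) {
            return Boolean.valueOf((String) value);
        }
        return value;
    }

    // Strict binding (Phoenix-like): a non-Boolean bound to BOOLEAN is rejected,
    // mirroring the TypeMismatchException in the stack trace above.
    static Object strictBind(Object value, int sqlType) {
        if (sqlType == Types.BOOLEAN && !(value instanceof Boolean)) {
            throw new IllegalArgumentException(
                    "Type mismatch. VARCHAR cannot be coerced to BOOLEAN");
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(lenientBind("true", Types.BOOLEAN));
        try {
            strictBind("true", Types.BOOLEAN);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why ConvertJSONToSQL emitting every value as a String only "works" with drivers that silently coerce.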



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1976: NIFI-1613: Make use of column type correctly at ConvertJSO...

2017-07-12 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1976
  
@mattyb149 I confirmed that the changes are merged; closing. Thanks for 
reviewing!




[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085169#comment-16085169
 ] 

ASF GitHub Bot commented on NIFI-1613:
--

Github user ijokarumawak closed the pull request at:

https://github.com/apache/nifi/pull/1976


> ConvertJSONToSQL Drops Type Information
> ---
>
> Key: NIFI-1613
> URL: https://issues.apache.org/jira/browse/NIFI-1613
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1, 0.5.1
> Environment: Ubuntu 14.04 LTS
>Reporter: Aaron Stephens
>Assignee: Toivo Adams
>  Labels: ConvertJSONToSQL, Phoenix, SQL
> Fix For: 1.4.0
>
>
> It appears that the ConvertJSONToSQL processor is turning Boolean (and 
> possibly Integer and Float) values into Strings.  This is okay for some 
> drivers (like PostgreSQL) which can coerce a String back into a Boolean, but 
> it causes issues for others (specifically Phoenix in my case).
> {noformat}
> org.apache.phoenix.schema.ConstraintViolationException: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282)
>  ~[na:na]
> at 
> org.apache.phoenix.schema.types.PBoolean.toObject(PBoolean.java:136) ~[na:na]
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.setObject(PhoenixPreparedStatement.java:442)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameter(PutSQL.java:728) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameters(PutSQL.java:606) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:223) ~[na:na]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  ~[nifi-api-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1146)
>  ~[nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:139)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_79]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
> [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_79]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 
> (22005): Type mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:71)
>  ~[na:na]
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>  ~[na:na]
> ... 20 common frames omitted
> {noformat}





[GitHub] nifi pull request #1976: NIFI-1613: Make use of column type correctly at Con...

2017-07-12 Thread ijokarumawak
Github user ijokarumawak closed the pull request at:

https://github.com/apache/nifi/pull/1976




[jira] [Commented] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085132#comment-16085132
 ] 

ASF GitHub Bot commented on NIFI-4181:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2003#discussion_r127120413
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/csv/CSVRecordSetWriter.java
 ---
@@ -48,6 +54,7 @@
 @Override
 protected List getSupportedPropertyDescriptors() {
 final List properties = new 
ArrayList<>(super.getSupportedPropertyDescriptors());
+properties.add(CSVUtils.EXPLICIT_COLUMNS);
--- End diff --

Sorry this is the wrong spot to leave the comment but since 
@CapabilityDescription (line 40/46) wasn't part of the diff, I couldn't leave 
the comment there. Setting Explicit Columns can affect the first line written, 
so the CapabilityDescription text should be updated to include that.

In addition, explicitly setting the output columns is subject to the same 
rules as if you had an output Avro schema; namely, the output field/column 
names have to match the input names or else there will be empty columns/fields 
in the output. In the general case this is covered by processor or 
reader/writer doc, but since this is CSV-specific I think we should make this 
clear. On one hand, there is the interesting feature that columns can be 
re-arranged by specifying the input fields in a different order in the explicit 
output columns; but on the other hand, if the user expects to use the writer to 
rename the fields (because the names are positional), that won't work.

In general, I'd like this extra flexibility/power to not be too confusing 
for the user, or its usefulness will be overshadowed by its complexity. For 
example, you can use the Explicit Columns in the CSVReader to rename the 
columns, and in the CSVRecordSetWriter to reorder the columns, but the inverse 
is not true. These could remain undocumented/unsupported features, and/or we'd 
need very clear documentation explaining their use.
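The reorder-vs-rename asymmetry described above can be shown with a toy writer: output fields are matched by name (as with an output Avro schema), so reordering the explicit columns works, but a column name absent from the record comes out empty. This is a simplified sketch, not the actual CSVRecordSetWriter; the names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ExplicitColumnsWriterDemo {
    // Writes one record against an explicit column list. Fields are matched by
    // NAME: unknown column names yield empty fields rather than renamed ones.
    static String writeRow(Map<String, String> record, List<String> explicitColumns) {
        StringBuilder row = new StringBuilder();
        for (String col : explicitColumns) {
            if (row.length() > 0) {
                row.append(',');
            }
            row.append(record.getOrDefault(col, ""));
        }
        return row.toString();
    }

    public static void main(String[] args) {
        Map<String, String> rec = new LinkedHashMap<>();
        rec.put("id", "1");
        rec.put("name", "nifi");
        // Reordering works, because the names still match:
        System.out.println(writeRow(rec, List.of("name", "id")));
        // "Renaming" does not: the unknown column "ident" comes out empty:
        System.out.println(writeRow(rec, List.of("ident", "name")));
    }
}
```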


> CSVReader and CSVRecordSetWriter services should be able to work given an 
> explicit list of columns.
> ---
>
> Key: NIFI-4181
> URL: https://issues.apache.org/jira/browse/NIFI-4181
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4181.patch
>
>
> Currently, to read or write a CSV file with *Record processors, the CSVReader 
> and CSVRecordSetWriters need to be given an avro schema. For CSV, a simple 
> column definition can also work.
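On the reader side, "a simple column definition" amounts to zipping each CSV line with the given column names instead of deriving a schema from Avro or a header row. A minimal sketch (no quoting/escaping handled; names are hypothetical, this is not the actual CSVReader):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ExplicitColumnsReaderDemo {
    // Parses one header-less CSV line using an explicit column list in place
    // of an Avro schema. Missing trailing values become empty strings.
    static Map<String, String> readRow(String line, List<String> columns) {
        String[] values = line.split(",", -1);
        Map<String, String> record = new LinkedHashMap<>();
        for (int i = 0; i < columns.size(); i++) {
            record.put(columns.get(i), i < values.length ? values[i] : "");
        }
        return record;
    }

    public static void main(String[] args) {
        System.out.println(readRow("1,nifi,true", List.of("id", "name", "active")));
    }
}
```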





[jira] [Commented] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085130#comment-16085130
 ] 

ASF GitHub Bot commented on NIFI-4181:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2003#discussion_r127120939
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/csv/CSVReader.java
 ---
@@ -49,7 +51,7 @@
 + "the values. See Controller Service's Usage for further 
documentation.")
 public class CSVReader extends SchemaRegistryService implements 
RecordReaderFactory {
 
-private final AllowableValue headerDerivedAllowableValue = new 
AllowableValue("csv-header-derived", "Use String Fields From Header",
+static final AllowableValue HEADER_DERIVED_ALLOWABLE_VALUE = new 
AllowableValue("csv-header-derived", "Use String Fields From Header",
--- End diff --

Making this static (and capitalizing the name) aligns with the common 
constant / property pattern, thanks! However it also looks like it can remain 
private (at least that's what IntelliJ tells me ;)


> CSVReader and CSVRecordSetWriter services should be able to work given an 
> explicit list of columns.
> ---
>
> Key: NIFI-4181
> URL: https://issues.apache.org/jira/browse/NIFI-4181
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4181.patch
>
>
> Currently, to read or write a CSV file with *Record processors, the CSVReader 
> and CSVRecordSetWriters need to be given an avro schema. For CSV, a simple 
> column definition can also work.





[jira] [Commented] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085131#comment-16085131
 ] 

ASF GitHub Bot commented on NIFI-4181:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2003#discussion_r127116107
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/csv/CSVUtils.java
 ---
@@ -37,6 +45,16 @@
 "The format used by Informix when issuing the UNLOAD TO file_name 
command with escaping disabled");
 static final AllowableValue MYSQL = new AllowableValue("mysql", "MySQL 
Format", "CSV data follows the format used by MySQL");
 
+static final AllowableValue SCHEMA_ACCESS_STRATEGY_EXPLICIT_COLUMNS = 
new AllowableValue("csv-explicit-columns", "Use '" + 
EXPLICIT_COLUMNS_DISPLAY_NAME + "' Property",
+"Takes the '" + EXPLICIT_COLUMNS_DISPLAY_NAME + "' property 
value as the explicit definition of the CSV columns.");
+
+static final PropertyDescriptor EXPLICIT_COLUMNS = new 
PropertyDescriptor.Builder()
+.name(EXPLICIT_COLUMNS_DISPLAY_NAME)
+.description("Specifies the CSV columns expected as a comma 
separated list. Only used with the Schema Access Strategy '" + 
SCHEMA_ACCESS_STRATEGY_EXPLICIT_COLUMNS.getDisplayName() + "'.")
+.expressionLanguageSupported(false)
--- End diff --

Is there any reason why expression language should not be supported? Using 
a Variable Registry, for example, the header list could be set externally.
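What expression-language support would enable here is substituting a value like `${csv.columns}` from a registry of externally defined variables. This toy resolver only models that substitution step, not NiFi's actual expression language; the names are hypothetical.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ElSubstitutionDemo {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)}");

    // Resolves ${var} references against a variable-registry-like map;
    // unknown variables resolve to the empty string.
    static String resolve(String value, Map<String, String> registry) {
        Matcher m = VAR.matcher(value);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            m.appendReplacement(out,
                    Matcher.quoteReplacement(registry.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> registry = Map.of("csv.columns", "id,name,active");
        System.out.println(resolve("${csv.columns}", registry));
    }
}
```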


> CSVReader and CSVRecordSetWriter services should be able to work given an 
> explicit list of columns.
> ---
>
> Key: NIFI-4181
> URL: https://issues.apache.org/jira/browse/NIFI-4181
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4181.patch
>
>
> Currently, to read or write a CSV file with *Record processors, the CSVReader 
> and CSVRecordSetWriters need to be given an avro schema. For CSV, a simple 
> column definition can also work.





[jira] [Commented] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085129#comment-16085129
 ] 

ASF GitHub Bot commented on NIFI-4181:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2003#discussion_r127115982
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/csv/CSVUtils.java
 ---
@@ -37,6 +45,16 @@
 "The format used by Informix when issuing the UNLOAD TO file_name 
command with escaping disabled");
 static final AllowableValue MYSQL = new AllowableValue("mysql", "MySQL 
Format", "CSV data follows the format used by MySQL");
 
+static final AllowableValue SCHEMA_ACCESS_STRATEGY_EXPLICIT_COLUMNS = 
new AllowableValue("csv-explicit-columns", "Use '" + 
EXPLICIT_COLUMNS_DISPLAY_NAME + "' Property",
+"Takes the '" + EXPLICIT_COLUMNS_DISPLAY_NAME + "' property 
value as the explicit definition of the CSV columns.");
+
+static final PropertyDescriptor EXPLICIT_COLUMNS = new 
PropertyDescriptor.Builder()
+.name(EXPLICIT_COLUMNS_DISPLAY_NAME)
--- End diff --

The common convention here is to set .name() to a machine-friendly name 
(like 'csv-explicit-columns') and set .displayName() to the user-friendly name 
(EXPLICIT_COLUMNS_DISPLAY_NAME)
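The convention can be sketched with a minimal stand-in for NiFi's PropertyDescriptor.Builder (hypothetical; only the two methods under discussion): `.name()` holds a stable machine-friendly identifier, while `.displayName()` carries the user-facing label that can change without breaking saved flows.

```java
// Minimal stand-in for NiFi's PropertyDescriptor.Builder (hypothetical).
final class MiniDescriptor {
    final String name;        // stable machine-friendly identifier
    final String displayName; // user-facing label, safe to change later

    MiniDescriptor(String name, String displayName) {
        this.name = name;
        this.displayName = displayName;
    }

    static final class Builder {
        private String name;
        private String displayName;

        Builder name(String n) { this.name = n; return this; }
        Builder displayName(String d) { this.displayName = d; return this; }

        MiniDescriptor build() {
            // Fall back to name when no displayName is set.
            return new MiniDescriptor(name, displayName != null ? displayName : name);
        }
    }
}

public class DescriptorConventionDemo {
    // The review suggestion applied: machine id in name(), label in displayName().
    static MiniDescriptor explicitColumns() {
        return new MiniDescriptor.Builder()
                .name("csv-explicit-columns")    // machine-friendly, stays stable
                .displayName("Explicit Columns") // shown in the UI
                .build();
    }

    public static void main(String[] args) {
        MiniDescriptor p = explicitColumns();
        System.out.println(p.name + " / " + p.displayName);
    }
}
```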


> CSVReader and CSVRecordSetWriter services should be able to work given an 
> explicit list of columns.
> ---
>
> Key: NIFI-4181
> URL: https://issues.apache.org/jira/browse/NIFI-4181
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4181.patch
>
>
> Currently, to read or write a CSV file with *Record processors, the CSVReader 
> and CSVRecordSetWriters need to be given an avro schema. For CSV, a simple 
> column definition can also work.





[GitHub] nifi pull request #2003: NIFI-4181 CSVReader and CSVRecordSetWriter can be u...

2017-07-12 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2003#discussion_r127115982
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/csv/CSVUtils.java
 ---
@@ -37,6 +45,16 @@
 "The format used by Informix when issuing the UNLOAD TO file_name 
command with escaping disabled");
 static final AllowableValue MYSQL = new AllowableValue("mysql", "MySQL 
Format", "CSV data follows the format used by MySQL");
 
+static final AllowableValue SCHEMA_ACCESS_STRATEGY_EXPLICIT_COLUMNS = 
new AllowableValue("csv-explicit-columns", "Use '" + 
EXPLICIT_COLUMNS_DISPLAY_NAME + "' Property",
+"Takes the '" + EXPLICIT_COLUMNS_DISPLAY_NAME + "' property 
value as the explicit definition of the CSV columns.");
+
+static final PropertyDescriptor EXPLICIT_COLUMNS = new 
PropertyDescriptor.Builder()
+.name(EXPLICIT_COLUMNS_DISPLAY_NAME)
--- End diff --

The common convention here is to set .name() to a machine-friendly name 
(like 'csv-explicit-columns') and set .displayName() to the user-friendly name 
(EXPLICIT_COLUMNS_DISPLAY_NAME)




[GitHub] nifi pull request #2003: NIFI-4181 CSVReader and CSVRecordSetWriter can be u...

2017-07-12 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2003#discussion_r127120413
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/csv/CSVRecordSetWriter.java
 ---
@@ -48,6 +54,7 @@
 @Override
 protected List getSupportedPropertyDescriptors() {
 final List properties = new 
ArrayList<>(super.getSupportedPropertyDescriptors());
+properties.add(CSVUtils.EXPLICIT_COLUMNS);
--- End diff --

Sorry this is the wrong spot to leave the comment but since 
@CapabilityDescription (line 40/46) wasn't part of the diff, I couldn't leave 
the comment there. Setting Explicit Columns can affect the first line written, 
so the CapabilityDescription text should be updated to include that.

In addition, explicitly setting the output columns is subject to the same 
rules as if you had an output Avro schema; namely, the output field/column 
names have to match the input names or else there will be empty columns/fields 
in the output. In the general case this is covered by processor or 
reader/writer doc, but since this is CSV-specific I think we should make this 
clear. On one hand, there is the interesting feature that columns can be 
re-arranged by specifying the input fields in a different order in the explicit 
output columns; but on the other hand, if the user expects to use the writer to 
rename the fields (because the names are positional), that won't work.

In general, I'd like this extra flexibility/power to not be too confusing 
for the user, or its usefulness will be overshadowed by its complexity. For 
example, you can use the Explicit Columns in the CSVReader to rename the 
columns, and in the CSVRecordSetWriter to reorder the columns, but the inverse 
is not true. These could remain undocumented/unsupported features, and/or we'd 
need very clear documentation explaining their use.
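To make the match-by-name rule above concrete, here is a stand-alone sketch in plain Java (not the NiFi writer itself; `project` is an illustrative helper): output columns are matched to record fields by name, so reordering works, but a "renamed" column just comes out empty.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ExplicitColumnsSketch {

    // Project a record (field name -> value) onto an explicit output column list.
    static List<String> project(Map<String, String> record, List<String> outputColumns) {
        List<String> row = new ArrayList<>();
        for (String col : outputColumns) {
            // An output column with no matching input field yields an empty
            // cell, mirroring the behavior described for the writer.
            row.add(record.getOrDefault(col, ""));
        }
        return row;
    }

    public static void main(String[] args) {
        Map<String, String> record = new LinkedHashMap<>();
        record.put("id", "1");
        record.put("name", "alice");

        // Reordering the explicit columns reorders the output...
        System.out.println(project(record, List.of("name", "id")));     // [alice, 1]
        // ...but "renaming" a column does not rename the field; it just
        // leaves that cell empty.
        System.out.println(project(record, List.of("id", "fullName"))); // [1, ]
    }
}
```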




[GitHub] nifi pull request #2003: NIFI-4181 CSVReader and CSVRecordSetWriter can be u...

2017-07-12 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2003#discussion_r127116107
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/csv/CSVUtils.java
 ---
@@ -37,6 +45,16 @@
         "The format used by Informix when issuing the UNLOAD TO file_name command with escaping disabled");
 static final AllowableValue MYSQL = new AllowableValue("mysql", "MySQL Format", "CSV data follows the format used by MySQL");
 
+static final AllowableValue SCHEMA_ACCESS_STRATEGY_EXPLICIT_COLUMNS = new AllowableValue("csv-explicit-columns", "Use '" + EXPLICIT_COLUMNS_DISPLAY_NAME + "' Property",
+        "Takes the '" + EXPLICIT_COLUMNS_DISPLAY_NAME + "' property value as the explicit definition of the CSV columns.");
+
+static final PropertyDescriptor EXPLICIT_COLUMNS = new PropertyDescriptor.Builder()
+        .name(EXPLICIT_COLUMNS_DISPLAY_NAME)
+        .description("Specifies the CSV columns expected as a comma separated list. Only used with the Schema Access Strategy '" + SCHEMA_ACCESS_STRATEGY_EXPLICIT_COLUMNS.getDisplayName() + "'.")
+        .expressionLanguageSupported(false)
--- End diff --

Is there any reason why expression language should not be supported? Using the Variable Registry, for example, the header list could be set externally.




[GitHub] nifi pull request #2003: NIFI-4181 CSVReader and CSVRecordSetWriter can be u...

2017-07-12 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2003#discussion_r127120939
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/csv/CSVReader.java
 ---
@@ -49,7 +51,7 @@
 + "the values. See Controller Service's Usage for further 
documentation.")
 public class CSVReader extends SchemaRegistryService implements 
RecordReaderFactory {
 
-private final AllowableValue headerDerivedAllowableValue = new AllowableValue("csv-header-derived", "Use String Fields From Header",
+static final AllowableValue HEADER_DERIVED_ALLOWABLE_VALUE = new AllowableValue("csv-header-derived", "Use String Fields From Header",
--- End diff --

Making this static (and capitalizing the name) aligns with the common 
constant/property pattern, thanks! However, it also looks like it can remain 
private (at least that's what IntelliJ tells me ;)




[jira] [Commented] (NIFI-2162) InvokeHttp's underlying library for Digest Auth uses the Android logger

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085097#comment-16085097
 ] 

ASF GitHub Bot commented on NIFI-2162:
--

GitHub user JPercivall opened a pull request:

https://github.com/apache/nifi/pull/2004

NIFI-2162 Updating OkHttp to 3.8.1 and OkHttp-Digest to 1.13 and refactoring InvokeHttp to adjust for changes

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [X] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [X] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [X] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [X] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/JPercivall/nifi NIFI-2162

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2004.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2004


commit b67ca1532765e8296442e78e7d6a781bed1167d9
Author: Joe Percivall 
Date:   2017-07-13T03:16:15Z

NIFI-2162 Updating OkHttp to 3.8.1 and OkHttp-Digest to 1.13 and 
refactoring InvokeHttp to adjust for changes




> InvokeHttp's underlying library for Digest Auth uses the Android logger
> ---
>
> Key: NIFI-2162
> URL: https://issues.apache.org/jira/browse/NIFI-2162
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joseph Percivall
>Assignee: Joseph Percivall
>
> A user emailed the User mailing list with an issue that InvokeHttp was 
> failing due to not being able to find "android/util/Log"[1]. InvokeHttp uses 
> OkHttp and the library they recommend for digest authentication is 
> okhttp-digest[2]. Currently okhttp-digest assumes it's running on an Android 
> device and has access to the Android logger (OkHttp does not assume it's on 
> an Android device). 
> I raised an issue about it on the project's github page[3] and the creator 
> said he "Will change this soonish."
> Once that is addressed, InvokeHttp will need to update the versions of OkHttp 
> and okhttp-digest. 
> [1] http://mail-archives.apache.org/mod_mbox/nifi-users/201606.mbox/browser
> [2] https://github.com/square/okhttp/issues/205
> [3] https://github.com/rburgst/okhttp-digest/issues/13



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2004: NIFI-2162 Updating OkHttp to 3.8.1 and OkHttp-Diges...

2017-07-12 Thread JPercivall
GitHub user JPercivall opened a pull request:

https://github.com/apache/nifi/pull/2004

NIFI-2162 Updating OkHttp to 3.8.1 and OkHttp-Digest to 1.13 and refactoring InvokeHttp to adjust for changes

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [X] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [X] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [X] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [X] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [X] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/JPercivall/nifi NIFI-2162

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2004.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2004


commit b67ca1532765e8296442e78e7d6a781bed1167d9
Author: Joe Percivall 
Date:   2017-07-13T03:16:15Z

NIFI-2162 Updating OkHttp to 3.8.1 and OkHttp-Digest to 1.13 and 
refactoring InvokeHttp to adjust for changes






[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information

2017-07-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084701#comment-16084701
 ] 

ASF subversion and git services commented on NIFI-1613:
---

Commit 3844a821f18e6a76e44e289c923be333e49d5251 in nifi's branch 
refs/heads/master from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3844a82 ]

NIFI-1613: ConvertJSONToSQL truncates numeric value wrongly.

- Changed boolean value conversion to use Boolean.valueOf.
- Updated comments in source code to reflect current situation more clearly.
- Updated tests that have been added since the original commits were made.
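For reference, the `Boolean.valueOf` semantics the commit relies on, shown in a plain JDK sketch (outside the processor itself):

```java
public class BooleanValueOfDemo {
    public static void main(String[] args) {
        // Boolean.valueOf is case-insensitive and never throws:
        System.out.println(Boolean.valueOf("true"));        // true
        System.out.println(Boolean.valueOf("TRUE"));        // true
        // Anything that is not (case-insensitively) "true" is false,
        // including null:
        System.out.println(Boolean.valueOf("yes"));         // false
        System.out.println(Boolean.valueOf((String) null)); // false
    }
}
```

This is why it is a safer choice than hand-rolled string comparison: malformed input degrades to false instead of raising an exception.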


> ConvertJSONToSQL Drops Type Information
> ---
>
> Key: NIFI-1613
> URL: https://issues.apache.org/jira/browse/NIFI-1613
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1, 0.5.1
> Environment: Ubuntu 14.04 LTS
>Reporter: Aaron Stephens
>Assignee: Toivo Adams
>  Labels: ConvertJSONToSQL, Phoenix, SQL
> Fix For: 1.4.0
>
>
> It appears that the ConvertJSONToSQL processor is turning Boolean (and 
> possibly Integer and Float) values into Strings.  This is okay for some 
> drivers (like PostgreSQL) which can coerce a String back into a Boolean, but 
> it causes issues for others (specifically Phoenix in my case).
> {noformat}
> org.apache.phoenix.schema.ConstraintViolationException: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282)
>  ~[na:na]
> at 
> org.apache.phoenix.schema.types.PBoolean.toObject(PBoolean.java:136) ~[na:na]
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.setObject(PhoenixPreparedStatement.java:442)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameter(PutSQL.java:728) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameters(PutSQL.java:606) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:223) ~[na:na]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  ~[nifi-api-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1146)
>  ~[nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:139)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_79]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
> [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_79]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 
> (22005): Type mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:71)
>  ~[na:na]
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>  ~[na:na]
> ... 20 common frames omitted
> {noformat}





[jira] [Commented] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084698#comment-16084698
 ] 

ASF GitHub Bot commented on NIFI-4181:
--

GitHub user Wesley-Lawrence opened a pull request:

https://github.com/apache/nifi/pull/2003

NIFI-4181 CSVReader and CSVRecordSetWriter can be used by just explicitly 
declaring their columns.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [✓] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?

- [✓] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [✓] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [✓] Is your initial contribution a single, squashed commit?

### For code changes:
- [✓] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [✓] Have you written or updated unit tests to verify your changes?
- [N/A] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [N/A] If applicable, have you updated the LICENSE file, including the 
main LICENSE file under nifi-assembly?
- [N/A] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [✓] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [N/A] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Wesley-Lawrence/nifi NIFI-4181

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2003.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2003


commit cc3f6af7c4d751e813384e166aa87821ace42273
Author: Wesley-Lawrence 
Date:   2017-07-12T21:12:55Z

NIFI-4181 CSVReader and CSVRecordSetWriter can be used by just explicitly 
declaring their columns.




> CSVReader and CSVRecordSetWriter services should be able to work given an 
> explicit list of columns.
> ---
>
> Key: NIFI-4181
> URL: https://issues.apache.org/jira/browse/NIFI-4181
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4181.patch
>
>
> Currently, to read or write a CSV file with *Record processors, the CSVReader 
> and CSVRecordSetWriters need to be given an avro schema. For CSV, a simple 
> column definition can also work.





[jira] [Updated] (NIFI-1613) ConvertJSONToSQL Drops Type Information

2017-07-12 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-1613:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ConvertJSONToSQL Drops Type Information
> ---
>
> Key: NIFI-1613
> URL: https://issues.apache.org/jira/browse/NIFI-1613
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1, 0.5.1
> Environment: Ubuntu 14.04 LTS
>Reporter: Aaron Stephens
>Assignee: Toivo Adams
>  Labels: ConvertJSONToSQL, Phoenix, SQL
> Fix For: 1.4.0
>
>





[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084715#comment-16084715
 ] 

ASF GitHub Bot commented on NIFI-1613:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1976
  
@ijokarumawak For some reason my rebase/squash didn't work so it also 
didn't close this PR. But it has been merged, can you close this PR? Please and 
thanks!


> ConvertJSONToSQL Drops Type Information
> ---
>
> Key: NIFI-1613
> URL: https://issues.apache.org/jira/browse/NIFI-1613
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1, 0.5.1
> Environment: Ubuntu 14.04 LTS
>Reporter: Aaron Stephens
>Assignee: Toivo Adams
>  Labels: ConvertJSONToSQL, Phoenix, SQL
> Fix For: 1.4.0
>
>





[jira] [Updated] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-07-12 Thread Wesley L Lawrence (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wesley L Lawrence updated NIFI-4181:

Attachment: NIFI-4181.patch

Slight patch update based on running contrib checks.

> CSVReader and CSVRecordSetWriter services should be able to work given an 
> explicit list of columns.
> ---
>
> Key: NIFI-4181
> URL: https://issues.apache.org/jira/browse/NIFI-4181
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4181.patch
>
>
> Currently, to read or write a CSV file with *Record processors, the CSVReader 
> and CSVRecordSetWriters need to be given an avro schema. For CSV, a simple 
> column definition can also work.





[jira] [Updated] (NIFI-1613) ConvertJSONToSQL Drops Type Information

2017-07-12 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-1613:
---
Fix Version/s: 1.4.0

> ConvertJSONToSQL Drops Type Information
> ---
>
> Key: NIFI-1613
> URL: https://issues.apache.org/jira/browse/NIFI-1613
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1, 0.5.1
> Environment: Ubuntu 14.04 LTS
>Reporter: Aaron Stephens
>Assignee: Toivo Adams
>  Labels: ConvertJSONToSQL, Phoenix, SQL
> Fix For: 1.4.0
>
>





[GitHub] nifi issue #1976: NIFI-1613: Make use of column type correctly at ConvertJSO...

2017-07-12 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1976
  
@ijokarumawak For some reason my rebase/squash didn't work, so it also 
didn't close this PR. But it has been merged; can you close it? Please and 
thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information

2017-07-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084702#comment-16084702
 ] 

ASF subversion and git services commented on NIFI-1613:
---

Commit 8acee02393f9557b9679038b933ba49705984cf8 in nifi's branch 
refs/heads/master from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=8acee02 ]

NIFI-1613

- Truncate text data types only.
- Added conversion from a boolean to number.


> ConvertJSONToSQL Drops Type Information
> ---
>
> Key: NIFI-1613
> URL: https://issues.apache.org/jira/browse/NIFI-1613
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1, 0.5.1
> Environment: Ubuntu 14.04 LTS
>Reporter: Aaron Stephens
>Assignee: Toivo Adams
>  Labels: ConvertJSONToSQL, Phoenix, SQL
> Fix For: 1.4.0
>
>
> It appears that the ConvertJSONToSQL processor is turning Boolean (and 
> possibly Integer and Float) values into Strings.  This is okay for some 
> drivers (like PostgreSQL) which can coerce a String back into a Boolean, but 
> it causes issues for others (specifically Phoenix in my case).
> {noformat}
> org.apache.phoenix.schema.ConstraintViolationException: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282)
>  ~[na:na]
> at 
> org.apache.phoenix.schema.types.PBoolean.toObject(PBoolean.java:136) ~[na:na]
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.setObject(PhoenixPreparedStatement.java:442)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.commons.dbcp.DelegatingPreparedStatement.setObject(DelegatingPreparedStatement.java:166)
>  ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameter(PutSQL.java:728) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.setParameters(PutSQL.java:606) 
> ~[na:na]
> at 
> org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:223) ~[na:na]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  ~[nifi-api-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1146)
>  ~[nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:139)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119)
>  [nifi-framework-core-0.4.1.jar:0.4.1]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_79]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
> [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_79]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_79]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 
> (22005): Type mismatch. VARCHAR cannot be coerced to BOOLEAN
> at 
> org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:71)
>  ~[na:na]
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>  ~[na:na]
> ... 20 common frames omitted
> {noformat}
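The root cause above is PutSQL handing Phoenix a String for a BOOLEAN column via `PreparedStatement.setObject`. As a rough illustration of the fix direction (coercing the string value to the JDBC column type before binding it), here is a minimal, hypothetical sketch; the class and method names are illustrative and not NiFi's actual code:

```java
import java.sql.Types;

// Hypothetical sketch: coerce the string values that ConvertJSONToSQL emits
// into Java objects matching the target column's JDBC type, so that
// PreparedStatement.setObject() no longer hands a VARCHAR to a BOOLEAN column.
public class SqlTypeCoercion {

    static Object coerce(String value, int sqlType) {
        if (value == null) {
            return null;
        }
        switch (sqlType) {
            case Types.BOOLEAN:
            case Types.BIT:
                // Accept common textual and numeric boolean encodings.
                return "1".equals(value) || Boolean.parseBoolean(value);
            case Types.INTEGER:
                return Integer.valueOf(value);
            case Types.BIGINT:
                return Long.valueOf(value);
            case Types.FLOAT:
            case Types.DOUBLE:
                return Double.valueOf(value);
            default:
                // Leave everything else as a string (VARCHAR and friends).
                return value;
        }
    }

    public static void main(String[] args) {
        // A BOOLEAN column now receives a Boolean, not a String.
        if (!coerce("true", Types.BOOLEAN).equals(Boolean.TRUE)) throw new AssertionError();
        if (!coerce("0", Types.BIT).equals(Boolean.FALSE)) throw new AssertionError();
        if (!coerce("42", Types.INTEGER).equals(42)) throw new AssertionError();
        if (!coerce("plain", Types.VARCHAR).equals("plain")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Drivers like PostgreSQL happen to coerce the VARCHAR themselves, which is why the bug only surfaces with stricter drivers such as Phoenix.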





[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084705#comment-16084705
 ] 

ASF GitHub Bot commented on NIFI-1613:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1976
  
+1 LGTM, tested with MySQL, Oracle, and Postgres (and ran unit tests which 
use Derby). Thanks for the fix/improvement @ijokarumawak and @ToivoAdams! 
Merging to master







[jira] [Commented] (NIFI-1613) ConvertJSONToSQL Drops Type Information

2017-07-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084700#comment-16084700
 ] 

ASF subversion and git services commented on NIFI-1613:
---

Commit 3b2e43b75c80be854cc854c3941e882c794c1d76 in nifi's branch 
refs/heads/master from [~Toivo Adams]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3b2e43b ]

NIFI-1613 Initial version, try to improve conversion for different SQL types. 
New test and refactored existing test to reuse DBCP service.

nifi-1613 Adding numeric and Date/time types conversion and test.







[jira] [Updated] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-07-12 Thread Wesley L Lawrence (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wesley L Lawrence updated NIFI-4181:

Attachment: (was: NIFI-4181.patch)

> CSVReader and CSVRecordSetWriter services should be able to work given an 
> explicit list of columns.
> ---
>
> Key: NIFI-4181
> URL: https://issues.apache.org/jira/browse/NIFI-4181
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4181.patch
>
>
> Currently, to read or write a CSV file with *Record processors, the CSVReader 
> and CSVRecordSetWriters need to be given an avro schema. For CSV, a simple 
> column definition can also work.





[GitHub] nifi issue #1976: NIFI-1613: Make use of column type correctly at ConvertJSO...

2017-07-12 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1976
  
+1 LGTM, tested with MySQL, Oracle, and Postgres (and ran unit tests which 
use Derby). Thanks for the fix/improvement @ijokarumawak and @ToivoAdams! 
Merging to master




[GitHub] nifi pull request #2003: NIFI-4181 CSVReader and CSVRecordSetWriter can be u...

2017-07-12 Thread Wesley-Lawrence
GitHub user Wesley-Lawrence opened a pull request:

https://github.com/apache/nifi/pull/2003

NIFI-4181 CSVReader and CSVRecordSetWriter can be used by just explicitly 
declaring their columns.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [✓] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [✓] Does your PR title start with NIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [✓] Has your PR been rebased against the latest commit within the 
target branch (typically master)?

- [✓] Is your initial contribution a single, squashed commit?

### For code changes:
- [✓] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [✓] Have you written or updated unit tests to verify your changes?
- [N/A] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [N/A] If applicable, have you updated the LICENSE file, including the 
main LICENSE file under nifi-assembly?
- [N/A] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [✓] If adding new Properties, have you added .displayName in addition 
to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [N/A] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Wesley-Lawrence/nifi NIFI-4181

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2003.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2003


commit cc3f6af7c4d751e813384e166aa87821ace42273
Author: Wesley-Lawrence 
Date:   2017-07-12T21:12:55Z

NIFI-4181 CSVReader and CSVRecordSetWriter can be used by just explicitly 
declaring their columns.






[jira] [Updated] (NIFI-4060) Create a MergeRecord Processor

2017-07-12 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4060:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Create a MergeRecord Processor
> --
>
> Key: NIFI-4060
> URL: https://issues.apache.org/jira/browse/NIFI-4060
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> When record-oriented data is received one record at a time or needs to be 
> split into small chunks for one reason or another, it will be helpful to be 
> able to combine those records into a single FlowFile that is made up of many 
> records, for efficiency purposes or to deliver larger batches to downstream 
> systems. This processor should function similarly to MergeContent but 
> make use of Record Readers and Record Writers so that users don't have to deal 
> with headers, footers, demarcators, etc.
> The Processor will also need to ensure that records only get merged into the 
> same FlowFile if they have compatible schemas.
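The binning idea described above can be sketched in a few lines: records are grouped by schema so that only compatible records end up in the same merged batch, and a bin is flushed once it reaches a target size. This is a hypothetical illustration (the `Record`, `binRecords`, and `flushThreshold` names are invented), not NiFi's actual RecordBin implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the binning idea behind MergeRecord: records are
// grouped by schema id so only records with the same schema are merged into
// the same FlowFile; a bin is flushed once it reaches the target size.
public class SchemaBins {

    record Record(String schemaId, String payload) {}

    static Map<String, List<Record>> binRecords(List<Record> records, int flushThreshold,
                                                List<List<Record>> flushed) {
        Map<String, List<Record>> bins = new HashMap<>();
        for (Record r : records) {
            List<Record> bin = bins.computeIfAbsent(r.schemaId(), k -> new ArrayList<>());
            bin.add(r);
            if (bin.size() >= flushThreshold) {
                flushed.add(new ArrayList<>(bin));  // emit one merged batch
                bin.clear();
            }
        }
        return bins;  // bins still holding records below the threshold
    }

    public static void main(String[] args) {
        List<List<Record>> flushed = new ArrayList<>();
        List<Record> in = List.of(
                new Record("A", "a1"), new Record("B", "b1"),
                new Record("A", "a2"), new Record("A", "a3"));
        Map<String, List<Record>> open = binRecords(in, 2, flushed);
        // One full bin of schema A was flushed; one A and one B record remain open.
        if (flushed.size() != 1 || flushed.get(0).size() != 2) throw new AssertionError();
        if (open.get("A").size() != 1 || open.get("B").size() != 1) throw new AssertionError();
        System.out.println("ok");
    }
}
```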





[jira] [Commented] (NIFI-4060) Create a MergeRecord Processor

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084675#comment-16084675
 ] 

ASF GitHub Bot commented on NIFI-4060:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1958







[jira] [Commented] (NIFI-4060) Create a MergeRecord Processor

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084669#comment-16084669
 ] 

ASF GitHub Bot commented on NIFI-4060:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1958
  
+1 LGTM, verified most recent updates and re-ran my functional tests, also 
verified updated doc. Thanks much! Merging to master







[jira] [Commented] (NIFI-4060) Create a MergeRecord Processor

2017-07-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084673#comment-16084673
 ] 

ASF subversion and git services commented on NIFI-4060:
---

Commit b603cb955dcd1d3d9b5e374e5760f2f9b047bda9 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b603cb9 ]

NIFI-4060: Initial implementation of MergeRecord

NIFI-4060: Addressed threading issue with RecordBin being updated after it is 
completed; fixed issue that caused mime.type attribute not to be written 
properly if all incoming flowfiles already have a different value for that 
attribute

NIFI-4060: Bug fixes; improved documentation; added a lot of debug information; 
updated StandardProcessSession to produce more accurate logs in case of a 
session being committed/rolled back with open input/output streams
Signed-off-by: Matt Burgess 

This closes #1958







[jira] [Commented] (NIFI-4060) Create a MergeRecord Processor

2017-07-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084671#comment-16084671
 ] 

ASF subversion and git services commented on NIFI-4060:
---

Commit b603cb955dcd1d3d9b5e374e5760f2f9b047bda9 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b603cb9 ]








[jira] [Commented] (NIFI-4060) Create a MergeRecord Processor

2017-07-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084672#comment-16084672
 ] 

ASF subversion and git services commented on NIFI-4060:
---

Commit b603cb955dcd1d3d9b5e374e5760f2f9b047bda9 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b603cb9 ]








[GitHub] nifi pull request #1958: NIFI-4060: Initial implementation of MergeRecord

2017-07-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1958




[GitHub] nifi issue #1958: NIFI-4060: Initial implementation of MergeRecord

2017-07-12 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1958
  
+1 LGTM, verified most recent updates and re-ran my functional tests, also 
verified updated doc. Thanks much! Merging to master




[jira] [Created] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-07-12 Thread Wesley L Lawrence (JIRA)
Wesley L Lawrence created NIFI-4181:
---

 Summary: CSVReader and CSVRecordSetWriter services should be able 
to work given an explicit list of columns.
 Key: NIFI-4181
 URL: https://issues.apache.org/jira/browse/NIFI-4181
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Wesley L Lawrence
Priority: Minor


Currently, to read or write a CSV file with *Record processors, the CSVReader 
and CSVRecordSetWriters need to be given an avro schema. For CSV, a simple 
column definition can also work.
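As a sketch of what the improvement implies, a plain comma-separated column list could be expanded into the Avro schema JSON that CSVReader/CSVRecordSetWriter currently require (treating every column as a nullable string). This is illustrative only and not the code in the attached patch:

```java
import java.util.StringJoiner;

// Hypothetical sketch: expand a simple comma-separated column definition
// into the Avro record schema JSON that the CSV services currently require.
// All columns are treated as nullable strings for simplicity.
public class ColumnsToAvro {

    static String toAvroSchema(String recordName, String columns) {
        StringJoiner fields = new StringJoiner(",");
        for (String col : columns.split(",")) {
            fields.add("{\"name\":\"" + col.trim()
                    + "\",\"type\":[\"null\",\"string\"]}");
        }
        return "{\"type\":\"record\",\"name\":\"" + recordName
                + "\",\"fields\":[" + fields + "]}";
    }

    public static void main(String[] args) {
        String schema = toAvroSchema("csvRow", "id, name, city");
        if (!schema.contains("\"name\":\"id\"")) throw new AssertionError();
        if (!schema.contains("\"name\":\"city\"")) throw new AssertionError();
        System.out.println(schema);
    }
}
```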





[jira] [Updated] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-07-12 Thread Wesley L Lawrence (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wesley L Lawrence updated NIFI-4181:

Status: Patch Available  (was: Open)

GH PR coming too, if that's easier to review.






[jira] [Updated] (NIFI-4180) Site To Site Port Math Seemingly Incorrect

2017-07-12 Thread Tommy Young (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommy Young updated NIFI-4180:
--
Description: 
It seems as though NiFi is either using something other than flow file count 
when logging the "will receive x% of data" message (in which case it would be 
nice to print the value it is using in the logs) OR there is a math 
error, based on the log messages below.

PeerStatus[hostname=node1,port=9090,secure=true,flowFileCount=0] will receive 
0.78125% of data
PeerStatus[hostname=node2,port=9090,secure=true,flowFileCount=2] will receive 
28.125% of data
PeerStatus[hostname=node3,port=9090,secure=true,flowFileCount=5] will receive 
71.09375% of data 

PeerStatus[hostname=node1,port=9090,secure=true,flowFileCount=0] will receive 
0.9615384615384616% of data
PeerStatus[hostname=node3,port=9090,secure=true,flowFileCount=0] will receive 
0.9615384615384616% of data
PeerStatus[hostname=node2,port=9090,secure=true,flowFileCount=1] will receive 
98.07692307692308% of data 

  was:
It seems as though Nifi is either using something other than flow file count 
when logging the "will receive x% of data" message (in which case it would be 
nice to print the value of what it is using in the logs) OR there is a math 
error, based on the below log messages.

PeerStatus[hostname=node1,port=9090,secure=true,flowFileCount=0] will receive 
0.78125% of data
PeerStatus[hostname=node2,port=9090,secure=true,flowFileCount=2] will receive 
28.125% of data
PeerStatus[hostname=node1,port=9090,secure=true,flowFileCount=5] will receive 
71.09375% of data 

PeerStatus[hostname=node1,port=9090,secure=true,flowFileCount=0] will receive 
0.9615384615384616% of data
PeerStatus[hostname=node3,port=9090,secure=true,flowFileCount=0] will receive 
0.9615384615384616% of data
PeerStatus[hostname=node2,port=9090,secure=true,flowFileCount=1] will receive 
98.07692307692308% of data 


> Site To Site Port Math Seemingly Incorrect
> --
>
> Key: NIFI-4180
> URL: https://issues.apache.org/jira/browse/NIFI-4180
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.2.0
> Environment: SLES 11
>Reporter: Tommy Young
>Priority: Minor
>
> It seems as though NiFi is either using something other than flow file count 
> when logging the "will receive x% of data" message (in which case it would be 
> nice to print the value of what it is using in the logs) OR there is a math 
> error, based on the below log messages.
> PeerStatus[hostname=node1,port=9090,secure=true,flowFileCount=0] will receive 
> 0.78125% of data
> PeerStatus[hostname=node2,port=9090,secure=true,flowFileCount=2] will receive 
> 28.125% of data
> PeerStatus[hostname=node3,port=9090,secure=true,flowFileCount=5] will receive 
> 71.09375% of data 
> PeerStatus[hostname=node1,port=9090,secure=true,flowFileCount=0] will receive 
> 0.9615384615384616% of data
> PeerStatus[hostname=node3,port=9090,secure=true,flowFileCount=0] will receive 
> 0.9615384615384616% of data
> PeerStatus[hostname=node2,port=9090,secure=true,flowFileCount=1] will receive 
> 98.07692307692308% of data 





[jira] [Created] (NIFI-4180) Site To Site Port Math Seemingly Incorrect

2017-07-12 Thread Tommy Young (JIRA)
Tommy Young created NIFI-4180:
-

 Summary: Site To Site Port Math Seemingly Incorrect
 Key: NIFI-4180
 URL: https://issues.apache.org/jira/browse/NIFI-4180
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.2.0
 Environment: SLES 11
Reporter: Tommy Young
Priority: Minor


It seems as though NiFi is either using something other than flow file count 
when logging the "will receive x% of data" message (in which case it would be 
nice to print the value of what it is using in the logs) OR there is a math 
error, based on the below log messages.

PeerStatus[hostname=node1,port=9090,secure=true,flowFileCount=0] will receive 
0.78125% of data
PeerStatus[hostname=node2,port=9090,secure=true,flowFileCount=2] will receive 
28.125% of data
PeerStatus[hostname=node1,port=9090,secure=true,flowFileCount=5] will receive 
71.09375% of data 

PeerStatus[hostname=node1,port=9090,secure=true,flowFileCount=0] will receive 
0.9615384615384616% of data
PeerStatus[hostname=node3,port=9090,secure=true,flowFileCount=0] will receive 
0.9615384615384616% of data
PeerStatus[hostname=node2,port=9090,secure=true,flowFileCount=1] will receive 
98.07692307692308% of data 
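
To make the concern concrete, here is a hypothetical comparison (this is NOT
NiFi's actual site-to-site algorithm, just a naive baseline): if peers were
weighted inversely by their flowFileCount, the least-loaded node would receive
the largest share, yet the logged values above do the opposite.

```python
# Hypothetical sketch, not NiFi code: weight each peer by how far below the
# total flowfile count it sits, so less-loaded peers receive a larger share.
def inverse_weighting(counts):
    total = sum(counts)
    weights = [total - c for c in counts]
    return [100.0 * w / sum(weights) for w in weights]

shares = inverse_weighting([0, 2, 5])   # node1, node2, node3
# Under this naive scheme node1 (0 flowfiles) gets the largest share and
# node3 (5 flowfiles) the smallest, whereas the log above reports
# 0.78% for node1 and 71.09% for node3.
```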





[jira] [Commented] (NIFI-4149) Indicate if EL is evaluated against FFs or not

2017-07-12 Thread Joey Frazee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084405#comment-16084405
 ] 

Joey Frazee commented on NIFI-4149:
---

It'd definitely be useful to enable EL on any property by default with the 
assumption/clear indication that what it's eval'ing is Java properties, 
variable registry properties or the environment. Are there any circumstances 
where you'd want to prohibit EL from these sources? I can't think of any 
(assuming NIFI-2767 never goes in).

So the important thing then is how to clearly indicate the behavior in the UI 
for the user and walk back all the supportsExpressionLanguage() methods on the 
builders. For the former, I assume it'd be a tooltip update. For the latter, we 
could either say no harm, no foul or deprecate the existing method and add 
something like supportsFlowFileExpressions() so people will go clean stuff up.

Since I mentioned NIFI-2767, I'll add that the observation's been made that 
what counts as a valid attribute expression is also entangled with the 
processor lifecycle. So a more general solution would include a way to indicate 
at what parts of the lifecycle the expressions can be evaluated against which 
sources.

> Indicate if EL is evaluated against FFs or not
> --
>
> Key: NIFI-4149
> URL: https://issues.apache.org/jira/browse/NIFI-4149
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>
> With the addition of EL in a lot of places to improve SDLC and workflow 
> staging, it becomes important to indicate to users if the expression language 
> enabled on a property will be evaluated against the attributes of incoming 
> flow files or if it will only be evaluated against various variable stores 
> (env variables, variable registry, etc).
> Actually, the expression language (without evaluation against flow files) 
> could be allowed on any property by default, and evaluation against flow 
> files would be what is actually indicated in the UI as we are doing today. 
> Adopting this approach could solve a lot of JIRA/PRs we are seeing to add EL 
> on some specific properties (without evaluation against FFs).
> Having expression language to access external values could make sense on any 
> property for any user. However evaluating the expression language against FFs 
> is clearly a more complex challenge when it comes to session management and 
> such.
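
The distinction can be sketched as a chain of lookup scopes (a conceptual
model only, not NiFi's implementation): a property supporting EL without
flowfile evaluation resolves only against environment-like stores, while a
flowfile-aware property also consults the incoming flowfile's attributes.

```python
# Conceptual model, not NiFi code: resolve an EL-style reference against an
# ordered list of scopes. A property without flowfile support simply omits
# the flowfile-attribute scope from its chain.
def resolve(name, scopes):
    for scope in scopes:
        if name in scope:
            return scope[name]
    raise KeyError(name)

variable_registry = {"db.host": "prod-db"}      # env-like store
flowfile_attrs = {"filename": "data.csv"}       # per-flowfile attributes

# Environment-only property: flowfile attributes are invisible.
host = resolve("db.host", [variable_registry])
# Flowfile-aware property: attributes are consulted first.
name = resolve("filename", [flowfile_attrs, variable_registry])
```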





[jira] [Updated] (NIFI-4179) Enabling existing Processor to support more features (GetFTP/GetSFTP) and FetchFTP/FetchSFTO

2017-07-12 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4179:
--
Fix Version/s: (was: 1.3.0)

> Enabling existing Processor to support more features (GetFTP/GetSFTP) and 
> FetchFTP/FetchSFTO
> 
>
> Key: NIFI-4179
> URL: https://issues.apache.org/jira/browse/NIFI-4179
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
> Environment: Windows/Unix
>Reporter: Vijaya Kumar Reddy Maddela
>  Labels: features
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Hi all,
> We are looking for dynamic behavior in the FTP/SFTP processors.
> 1) GetFTP should be used only as the starting point of a flow (it is not 
> supported in the middle of a flow).
> 2) FetchFTP: this can be used in the middle of a flow, but it does not 
> support passing dynamic properties.
> Looking into the code, we identified the reasons for this:
> In GetFTP/GetSFTP:
> @InputRequirement(Requirement.INPUT_FORBIDDEN) is used instead of 
> @InputRequirement(Requirement.INPUT_REQUIRED)
> In FetchFTP/FetchSFTP:
> the code does not contain @DynamicProperties()
> Could someone please help us with how to build and deploy the code ourselves? 
> A link or tutorial for building the code after a modification would be appreciated.





[jira] [Created] (NIFI-4179) Enabling existing Processor to support more features (GetFTP/GetSFTP) and FetchFTP/FetchSFTO

2017-07-12 Thread Vijaya Kumar Reddy Maddela (JIRA)
Vijaya Kumar Reddy Maddela created NIFI-4179:


 Summary: Enabling existing Processor to support more features 
(GetFTP/GetSFTP) and FetchFTP/FetchSFTO
 Key: NIFI-4179
 URL: https://issues.apache.org/jira/browse/NIFI-4179
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.3.0
 Environment: Windows/Unix
Reporter: Vijaya Kumar Reddy Maddela
 Fix For: 1.3.0


Hi all,
We are looking for dynamic behavior in the FTP/SFTP processors.
1)  GetFTP should be used only as the starting point of a flow (it is not 
supported in the middle of a flow).
2)  FetchFTP: this can be used in the middle of a flow, but it does not 
support passing dynamic properties.

Looking into the code, we identified the reasons for this:

In GetFTP/GetSFTP:
@InputRequirement(Requirement.INPUT_FORBIDDEN) is used instead of 
@InputRequirement(Requirement.INPUT_REQUIRED)

In FetchFTP/FetchSFTP:
the code does not contain @DynamicProperties()


Could someone please help us with how to build and deploy the code ourselves? 
A link or tutorial for building the code after a modification would be appreciated.






[jira] [Commented] (NIFI-4082) Enable nifi expression language for GetMongo - Query property

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084293#comment-16084293
 ] 

ASF GitHub Bot commented on NIFI-4082:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1969
  
@jfrazee - I just pushed a new commit and added support for expression 
language with FFs evaluation on DB and Collection name, and without FFs 
evaluation on URI. Regarding this very subject, what are your thoughts about 
NIFI-4149 and how this could solve this kind of need?


> Enable nifi expression language for GetMongo - Query property
> -
>
> Key: NIFI-4082
> URL: https://issues.apache.org/jira/browse/NIFI-4082
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Dmitry Lukyanov
>Assignee: Pierre Villard
>Priority: Trivial
>
> Currently the `Query` property of the  `GetMongo` processor does not support 
> expression language.
> That disables query parametrization.





[GitHub] nifi issue #1969: NIFI-4082 - Added EL on GetMongo properties

2017-07-12 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1969
  
@jfrazee - I just pushed a new commit and added support for expression 
language with FFs evaluation on DB and Collection name, and without FFs 
evaluation on URI. Regarding this very subject, what are your thoughts about 
NIFI-4149 and how this could solve this kind of need?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #118: MINIFI-311 Move to alpine base for docker...

2017-07-12 Thread achristianson
GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/118

MINIFI-311 Move to alpine base for docker image.

This change takes the 1.5 GB Ubuntu MiNiFi image and reduces it to a 39 MB 
image by using a multi-stage Alpine build.

MINIFI-311 ported dockerfile to alpine
MINIFI-311 allow warnings on Linux for compatibility with Alpine
MINIFI-311 fixed spdlog path in BuildTests cmake file
MINIFI-349 moved to multi-stage docker build

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFI-311

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/118.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #118


commit 95e58d0fba850f9643bf13774863124fc4b0659c
Author: Andrew Christianson 
Date:   2017-07-11T16:05:33Z

MINIFI-311 upgraded spdlog to snapshot of master
MINIFI-311 ported dockerfile to alpine
MINIFI-311 allow warnings on Linux for compatibility with Alpine
MINIFI-311 fixed spdlog path in BuildTests cmake file
MINIFI-349 moved to multi-stage docker build






[jira] [Comment Edited] (NIFI-4169) PutWebSocket processor with blank WebSocket session id attribute cannot transfer to failure queue

2017-07-12 Thread Y Wikander (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084238#comment-16084238
 ] 

Y Wikander edited comment on NIFI-4169 at 7/12/17 4:09 PM:
---

Had this crazy idea.

In the spirit of "simplifying" PutWebSocket...

What if there was a different processor - BroadcastWebSocket - whose job it was 
to attach websocket.session.id and such. Namely all the things that 
PutWebSocket needs to send a message (like when non-broadcast is involved).
The broadcast logic would be removed from PutWebSocket.

You'd route the message flow from x --> BroadcastWebSocket --> PutWebSocket.

BroadcastWebSocket would add an attribute noting that it's a broadcast style 
message -- in case you wanted to handle errors in PutWebSocket differently.

BroadcastWebSocket would transfer to 'failure' if the broadcast list was empty 
- so one could point it back in (to BroadcastWebSocket) to try again -- if you 
couldn't afford the data loss.

When the user routes PutWebSocket failures they would have the option to route 
that back to itself (to retry the same sessionId) *or* to BroadcastWebSocket 
to get the _current_ list of broadcast recipients.
The downside of being routed back to BroadcastWebSocket is that there could be 
x number of flowfiles with the same data contents coming back (because 
BroadcastWebSocket created x number of flowfiles). Transmission problems could 
grow the number of flowfiles exponentially. And think of the poor data 
recipient -- getting the same message multiple times.



In my use case I was using ConnectWebSocket and PutWebSocket; such that - I 
presume that - the sessionId would change when ConnectWebSocket  closed and 
opened the connection. Such that retrying the same sessionId would always fail. 
Hence my interest in being able to get the current list of broadcast recipients 
-- which in my case would be 1.



was (Author: ywik):
Had this crazy idea.

In the spirit of "simplifying" PutWebSocket...

What if there was a different processor - BroadcastWebSocket - whose job it was 
to attach websocket.session.id and such. Namely all the things that 
PutWebSocket needs to send a message (like when non-broadcast is involved).
The broadcast logic would be removed from PutWebSocket.

You'd route the message flow from x --> BroadcastWebSocket --> PutWebSocket.

BroadcastWebSocket would add an attribute noting that it's a broadcast style 
message -- in case you wanted to handle errors in PutWebSocket differently.

BroadcastWebSocket would transfer to 'failure' if the broadcast list was empty 
- so one could point it back in (to BroadcastWebSocket) to try again -- if you 
couldn't afford the data loss.

When the user routes PutWebSocket failures they would have the option to route 
that back to itself (to retry the same sessionId) *or* to BroadcastWebSocket 
to get the _current_ list of broadcast recipients.
The downside of being routed back to BroadcastWebSocket is that there could be 
x number of flowfiles with the same data contents coming back (because 
BroadcastWebSocket created x number of flowfiles). Transmission problems could 
grow the number of flowfiles exponentially. And think of the poor data 
recipient -- getting the same message multiple times.



In my use case I was using ConnectWebSocket and PutWebSocket; such that - I 
presume - that the sessionId would change when ConnectWebSocket  closed and 
opened the connection. Such that retrying the same sessionId would always fail. 
Hence my interest in being able to get the current list of broadcast recipients 
-- which in my case would be 1.


> PutWebSocket processor with blank WebSocket session id attribute cannot 
> transfer to failure queue
> -
>
> Key: NIFI-4169
> URL: https://issues.apache.org/jira/browse/NIFI-4169
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Critical
>  Labels: patch
> Attachments: 
> 0001-websocket-when-sendMessage-fails-under-blank-session.patch
>
>
> If a PutWebSocket processor is set up with a blank WebSocket session id 
> attribute (see NIFI-3318; Send message from PutWebSocket to all connected 
> clients) and it is not connected to a websocket server it will log the 
> failure and mark the flowfile with Success (rather than Failure) -- and the 
> data is effectively lost.
> If there are multiple connected clients, and some succeed and others fail, 
> routing Failure back into the PutWebSocket could result in duplicate data to 
> some clients.
> Other NiFi processors seem to err on the side of "at least once".





[jira] [Comment Edited] (NIFI-4169) PutWebSocket processor with blank WebSocket session id attribute cannot transfer to failure queue

2017-07-12 Thread Y Wikander (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084238#comment-16084238
 ] 

Y Wikander edited comment on NIFI-4169 at 7/12/17 4:08 PM:
---

Had this crazy idea.

In the spirit of "simplifying" PutWebSocket...

What if there was a different processor - BroadcastWebSocket - whose job it was 
to attach websocket.session.id and such. Namely all the things that 
PutWebSocket needs to send a message (like when non-broadcast is involved).
The broadcast logic would be removed from PutWebSocket.

You'd route the message flow from x --> BroadcastWebSocket --> PutWebSocket.

BroadcastWebSocket would add an attribute noting that it's a broadcast style 
message -- in case you wanted to handle errors in PutWebSocket differently.

BroadcastWebSocket would transfer to 'failure' if the broadcast list was empty 
- so one could point it back in (to BroadcastWebSocket) to try again -- if you 
couldn't afford the data loss.

When the user routes PutWebSocket failures they would have the option to route 
that back to itself (to retry the same sessionId) *or* to BroadcastWebSocket 
to get the _current_ list of broadcast recipients.
The downside of being routed back to BroadcastWebSocket is that there could be 
x number of flowfiles with the same data contents coming back (because 
BroadcastWebSocket created x number of flowfiles). Transmission problems could 
grow the number of flowfiles exponentially. And think of the poor data 
recipient -- getting the same message multiple times.


In my use case I was using ConnectWebSocket and PutWebSocket; such that - I 
presume - that the sessionId would change when ConnectWebSocket  closed and 
opened the connection. Such that retrying the same sessionId would always fail. 
Hence my interest in being able to get the current list of broadcast recipients 
-- which in my case would be 1.



was (Author: ywik):
Had this crazy idea.

In the spirit of "simplifying" PutWebSocket...

What if there was a different processor - BroadcastWebSocket - whose job it was 
to attach websocket.session.id and such. Namely all the things that 
PutWebSocket needs to send a message (like when non-broadcast is involved).
The broadcast logic would be removed from PutWebSocket.

You'd route the message flow from x --> BroadcastWebSocket --> PutWebSocket.

BroadcastWebSocket would add an attribute noting that it's a broadcast style 
message -- in case you wanted to handle errors in PutWebSocket differently.

BroadcastWebSocket would transfer to 'failure' if the broadcast list was empty 
- so one could point it back in (to BroadcastWebSocket) to try again -- if you 
couldn't afford the data loss.

When the user routes PutWebSocket failures they would have the option to route 
that back to itself (to retry the same sessionId) *or* to BroadcastWebSocket 
to get the _current_ list of broadcast recipients.
The downside of being routed back to BroadcastWebSocket is that there could be 
x number of flowfiles with the same data contents coming back (because 
BroadcastWebSocket created x number of flowfiles). Transmission problems could 
grow the number of flowfiles exponentially. And think of the poor data 
recipient -- getting the same message multiple times.

- - - -
In my use case I was using ConnectWebSocket and PutWebSocket; such that - I 
presume - that the sessionId would change when ConnectWebSocket  closed and 
opened the connection. Such that retrying the same sessionId would always fail. 
Hence my interest in being able to get the current list of broadcast recipients 
-- which in my case would be 1.


> PutWebSocket processor with blank WebSocket session id attribute cannot 
> transfer to failure queue
> -
>
> Key: NIFI-4169
> URL: https://issues.apache.org/jira/browse/NIFI-4169
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Critical
>  Labels: patch
> Attachments: 
> 0001-websocket-when-sendMessage-fails-under-blank-session.patch
>
>
> If a PutWebSocket processor is set up with a blank WebSocket session id 
> attribute (see NIFI-3318; Send message from PutWebSocket to all connected 
> clients) and it is not connected to a websocket server it will log the 
> failure and mark the flowfile with Success (rather than Failure) -- and the 
> data is effectively lost.
> If there are multiple connected clients, and some succeed and others fail, 
> routing Failure back into the PutWebSocket could result in duplicate data to 
> some clients.
> Other NiFi processors seem to err on the side of "at least once".





[jira] [Comment Edited] (NIFI-4169) PutWebSocket processor with blank WebSocket session id attribute cannot transfer to failure queue

2017-07-12 Thread Y Wikander (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084238#comment-16084238
 ] 

Y Wikander edited comment on NIFI-4169 at 7/12/17 4:08 PM:
---

Had this crazy idea.

In the spirit of "simplifying" PutWebSocket...

What if there was a different processor - BroadcastWebSocket - whose job it was 
to attach websocket.session.id and such. Namely all the things that 
PutWebSocket needs to send a message (like when non-broadcast is involved).
The broadcast logic would be removed from PutWebSocket.

You'd route the message flow from x --> BroadcastWebSocket --> PutWebSocket.

BroadcastWebSocket would add an attribute noting that it's a broadcast style 
message -- in case you wanted to handle errors in PutWebSocket differently.

BroadcastWebSocket would transfer to 'failure' if the broadcast list was empty 
- so one could point it back in (to BroadcastWebSocket) to try again -- if you 
couldn't afford the data loss.

When the user routes PutWebSocket failures they would have the option to route 
that back to itself (to retry the same sessionId) *or* to BroadcastWebSocket 
to get the _current_ list of broadcast recipients.
The downside of being routed back to BroadcastWebSocket is that there could be 
x number of flowfiles with the same data contents coming back (because 
BroadcastWebSocket created x number of flowfiles). Transmission problems could 
grow the number of flowfiles exponentially. And think of the poor data 
recipient -- getting the same message multiple times.



In my use case I was using ConnectWebSocket and PutWebSocket; such that - I 
presume - that the sessionId would change when ConnectWebSocket  closed and 
opened the connection. Such that retrying the same sessionId would always fail. 
Hence my interest in being able to get the current list of broadcast recipients 
-- which in my case would be 1.



was (Author: ywik):
Had this crazy idea.

In the spirit of "simplifying" PutWebSocket...

What if there was a different processor - BroadcastWebSocket - whose job it was 
to attach websocket.session.id and such. Namely all the things that 
PutWebSocket needs to send a message (like when non-broadcast is involved).
The broadcast logic would be removed from PutWebSocket.

You'd route the message flow from x --> BroadcastWebSocket --> PutWebSocket.

BroadcastWebSocket would add an attribute noting that it's a broadcast style 
message -- in case you wanted to handle errors in PutWebSocket differently.

BroadcastWebSocket would transfer to 'failure' if the broadcast list was empty 
- so one could point it back in (to BroadcastWebSocket) to try again -- if you 
couldn't afford the data loss.

When the user routes PutWebSocket failures they would have the option to route 
that back to itself (to retry the same sessionId) *or* to BroadcastWebSocket 
to get the _current_ list of broadcast recipients.
The downside of being routed back to BroadcastWebSocket is that there could be 
x number of flowfiles with the same data contents coming back (because 
BroadcastWebSocket created x number of flowfiles). Transmission problems could 
grow the number of flowfiles exponentially. And think of the poor data 
recipient -- getting the same message multiple times.


In my use case I was using ConnectWebSocket and PutWebSocket; such that - I 
presume - that the sessionId would change when ConnectWebSocket  closed and 
opened the connection. Such that retrying the same sessionId would always fail. 
Hence my interest in being able to get the current list of broadcast recipients 
-- which in my case would be 1.


> PutWebSocket processor with blank WebSocket session id attribute cannot 
> transfer to failure queue
> -
>
> Key: NIFI-4169
> URL: https://issues.apache.org/jira/browse/NIFI-4169
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Critical
>  Labels: patch
> Attachments: 
> 0001-websocket-when-sendMessage-fails-under-blank-session.patch
>
>
> If a PutWebSocket processor is set up with a blank WebSocket session id 
> attribute (see NIFI-3318; Send message from PutWebSocket to all connected 
> clients) and it is not connected to a websocket server it will log the 
> failure and mark the flowfile with Success (rather than Failure) -- and the 
> data is effectively lost.
> If there are multiple connected clients, and some succeed and others fail, 
> routing Failure back into the PutWebSocket could result in duplicate data to 
> some clients.
> Other NiFi processors seem to err on the side of "at least once".





[jira] [Commented] (NIFI-4169) PutWebSocket processor with blank WebSocket session id attribute cannot transfer to failure queue

2017-07-12 Thread Y Wikander (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084238#comment-16084238
 ] 

Y Wikander commented on NIFI-4169:
--

Had this crazy idea.

In the spirit of "simplifying" PutWebSocket...

What if there was a different processor - BroadcastWebSocket - whose job it was 
to attach websocket.session.id and such. Namely all the things that 
PutWebSocket needs to send a message (like when non-broadcast is involved).
The broadcast logic would be removed from PutWebSocket.

You'd route the message flow from x --> BroadcastWebSocket --> PutWebSocket.

BroadcastWebSocket would add an attribute noting that it's a broadcast style 
message -- in case you wanted to handle errors in PutWebSocket differently.

BroadcastWebSocket would transfer to 'failure' if the broadcast list was empty 
- so one could point it back in (to BroadcastWebSocket) to try again -- if you 
couldn't afford the data loss.

When the user routes PutWebSocket failures they would have the option to route 
that back to itself (to retry the same sessionId) *or* to BroadcastWebSocket 
to get the _current_ list of broadcast recipients.
The downside of being routed back to BroadcastWebSocket is that there could be 
x number of flowfiles with the same data contents coming back (because 
BroadcastWebSocket created x number of flowfiles). Transmission problems could 
grow the number of flowfiles exponentially. And think of the poor data 
recipient -- getting the same message multiple times.

- - - -
In my use case I was using ConnectWebSocket and PutWebSocket; such that - I 
presume - that the sessionId would change when ConnectWebSocket  closed and 
opened the connection. Such that retrying the same sessionId would always fail. 
Hence my interest in being able to get the current list of broadcast recipients 
-- which in my case would be 1.


> PutWebSocket processor with blank WebSocket session id attribute cannot 
> transfer to failure queue
> -
>
> Key: NIFI-4169
> URL: https://issues.apache.org/jira/browse/NIFI-4169
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Critical
>  Labels: patch
> Attachments: 
> 0001-websocket-when-sendMessage-fails-under-blank-session.patch
>
>
> If a PutWebSocket processor is set up with a blank WebSocket session id 
> attribute (see NIFI-3318; Send message from PutWebSocket to all connected 
> clients) and it is not connected to a websocket server it will log the 
> failure and mark the flowfile with Success (rather than Failure) -- and the 
> data is effectively lost.
> If there are multiple connected clients, and some succeed and others fail, 
> routing Failure back into the PutWebSocket could result in duplicate data to 
> some clients.
> Other NiFi processors seem to err on the side of "at least once".





[jira] [Updated] (NIFI-4177) MergeContent - Tar - Save modification timestamp like Tar does

2017-07-12 Thread Wayne Steel (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wayne Steel updated NIFI-4177:
--
Status: Patch Available  (was: Open)

> MergeContent - Tar - Save modification timestamp like Tar does
> --
>
> Key: NIFI-4177
> URL: https://issues.apache.org/jira/browse/NIFI-4177
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Wayne Steel
>Priority: Trivial
> Fix For: 1.4.0
>
>
> Tar by default saves the modification timestamp of entries.
> This mainly affects file based entries so could be done on reading the 
> attribute file.lastModifiedTime, if it exists, which is written to the 
> flowfile by GetFile or ListFile processors.
> Otherwise just leave it out as it does now.
> I propose a property with the default expression ${file.lastModifiedTime}, but 
> the value must resolve to a date format of "yyyy-MM-dd'T'HH:mm:ssZ". It 
> should only be enabled when MERGE_FORMAT is set to MERGE_FORMAT_TAR.





[jira] [Updated] (NIFI-2923) Add expression language support for Kerberos parameters used by processors

2017-07-12 Thread Maurizio Colleluori (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maurizio Colleluori updated NIFI-2923:
--
Summary: Add expression language support for Kerberos parameters used by 
processors  (was: Add expression language support to Kerberos parameters used 
by processors)

> Add expression language support for Kerberos parameters used by processors
> --
>
> Key: NIFI-2923
> URL: https://issues.apache.org/jira/browse/NIFI-2923
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Maurizio Colleluori
>Priority: Minor
> Fix For: 1.4.0
>
>
> Kerberos properties (e.g. principal, keytab) available as attributes in 
> certain processors (e.g. HDFS processors) only accept a constant value. 
> Support for expression language could be enabled and allow for 
> parameterisation.





[jira] [Commented] (NIFI-4082) Enable nifi expression language for GetMongo - Query property

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084184#comment-16084184
 ] 

ASF GitHub Bot commented on NIFI-4082:
--

Github user jfrazee commented on the issue:

https://github.com/apache/nifi/pull/1969
  
@pvillard31 I think we can add EL to collection without having to do 
anything fancy since getCollection() in AbstractMongoProcessor re-uses the 
mongoClient already.

That said, is it worth also adding EL support for URI and DB in this PR? 
The simple case will mean it'll get eval'd against the variable registry, which 
is definitely useful for migrating from env to env.

There's another case though in being able to eval against a FlowFile for 
PutMongo so you can route to different DBs if you're doing some 
application-level sharding. But that would mean we'd have to manage a pool/map 
of connections since the mongoClient currently is set with @OnScheduled. Makes 
sense if you think that's for another PR though.
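The pool/map idea mentioned above can be illustrated with a small self-contained sketch. This is only an illustration of the concept (the class and names below are made up, not NiFi or MongoDB driver code): one client is cached per EL-evaluated connection string, so routing to different DBs per FlowFile re-uses existing clients instead of opening a new connection every time.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of a client pool keyed on the evaluated URI/DB.
public class ClientPool<C> {
    private final Map<String, C> clients = new ConcurrentHashMap<>();
    private final Function<String, C> factory;

    public ClientPool(Function<String, C> factory) {
        this.factory = factory;
    }

    // Create the client on first use of this evaluated URI, then re-use it.
    public C get(String evaluatedUri) {
        return clients.computeIfAbsent(evaluatedUri, factory);
    }

    public int size() {
        return clients.size();
    }
}
```

A processor's @OnStopped hook would then be responsible for closing every cached client.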


> Enable nifi expression language for GetMongo - Query property
> -
>
> Key: NIFI-4082
> URL: https://issues.apache.org/jira/browse/NIFI-4082
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Dmitry Lukyanov
>Assignee: Pierre Villard
>Priority: Trivial
>
> Currently the `Query` property of the  `GetMongo` processor does not support 
> expression language.
> That disables query parametrization.






[jira] [Commented] (NIFI-4177) MergeContent - Tar - Save modification timestamp like Tar does

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084170#comment-16084170
 ] 

ASF GitHub Bot commented on NIFI-4177:
--

GitHub user makosteel opened a pull request:

https://github.com/apache/nifi/pull/2002

NIFI-4177 MergeContent - Tar - Save modification timestamp like Tar does

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/makosteel/nifi NIFI-4177

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2002.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2002


commit 958843cb19a279734aee7cceae3615962f4f5cf0
Author: Wayne Steel 
Date:   2017-07-12T13:52:52Z

NIFI-4177 MergeContent - Tar - Save modification timestamp like Tar does




> MergeContent - Tar - Save modification timestamp like Tar does
> --
>
> Key: NIFI-4177
> URL: https://issues.apache.org/jira/browse/NIFI-4177
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.3.0
>Reporter: Wayne Steel
>Priority: Trivial
>
> Tar by default saves the modification timestamp of entries.
> This mainly affects file-based entries, so it could be done by reading the 
> attribute file.lastModifiedTime, if it exists, which is written to the 
> flowfile by the GetFile or ListFile processors.
> Otherwise just leave it out, as it does now.
> I propose a property with the default expression ${file.lastModifiedTime}, but 
> the value must resolve to the date format "yyyy-MM-dd'T'HH:mm:ssZ". It 
> should only be enabled when MERGE_FORMAT is set to MERGE_FORMAT_TAR.







[jira] [Commented] (NIFI-1623) ConvertXMLToJSON addition

2017-07-12 Thread Claudiu Stanciu (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084117#comment-16084117
 ] 

Claudiu Stanciu commented on NIFI-1623:
---

[~trixpan] the TransformXml processor is not up to the task for some XMLs that 
can easily be transformed to JSON, while this processor has no issues with them.

> ConvertXMLToJSON addition
> -
>
> Key: NIFI-1623
> URL: https://issues.apache.org/jira/browse/NIFI-1623
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.5.1
>Reporter: Einsteins Do
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-nifi-1623-Adding-a-processor-to-convert-XML-to-JSON.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Add a new processor for XML to JSON conversion:
> * Takes an XML Flow File
> * Performs conversion from XML to JSON
> * Outputs the JSON Flow File as well as the Original XML Flow File





[GitHub] nifi issue #1983: NiFi-2829: Add Date and Time Format Support for PutSQL

2017-07-12 Thread yjhyjhyjh0
Github user yjhyjhyjh0 commented on the issue:

https://github.com/apache/nifi/pull/1983
  
Thanks for the reply and detailed review.
I've just updated the commit title and the documentation, and removed the 
unnecessary if condition.

AppVeyor seems to fail at the same part.
Travis CI only fails on build 2 but passes builds 4 and 5.
Not sure why. 
I'll keep track of it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4178) Validation error tool tip too big on ConvertJSONToAvro processor

2017-07-12 Thread Wil Selwood (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084040#comment-16084040
 ] 

Wil Selwood commented on NIFI-4178:
---

Note for anyone who finds this later: I have been able to extract the message 
by looking in Chrome's dev tools. 

Mouse over the element to make the popup appear. 
Press F12 to open the dev tools. 
Find the CanvasToolTips div under the canvas container.
Under processor tooltips there will hopefully be only one entry.
Inside that should be the full text of the tool tip.

Unfortunately in my case it's two copies of the schema separated by some text 
that says "' is invalid because Failed to parse schema:"

Good luck.

> Validation error tool tip too big on ConvertJSONToAvro processor
> 
>
> Key: NIFI-4178
> URL: https://issues.apache.org/jira/browse/NIFI-4178
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Wil Selwood
> Attachments: toobigerror_nifi_cutdown.png
>
>
> We are trying to process the Twitter stream and have an Avro schema to convert 
> their large JSON objects. Unfortunately we have an error in the schema 
> somewhere.
> The schema is a little over 300 lines. The validation fails and the little 
> warning triangle appears on the processor as expected. However, the pop-up 
> text is so big that I can't actually see the error because it prints out the 
> schema first. See the attached screenshot.





[jira] [Created] (NIFI-4178) Validation error tool tip too big on ConvertJSONToAvro processor

2017-07-12 Thread Wil Selwood (JIRA)
Wil Selwood created NIFI-4178:
-

 Summary: Validation error tool tip too big on ConvertJSONToAvro 
processor
 Key: NIFI-4178
 URL: https://issues.apache.org/jira/browse/NIFI-4178
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Wil Selwood
 Attachments: toobigerror_nifi_cutdown.png

We are trying to process the Twitter stream and have an Avro schema to convert 
their large JSON objects. Unfortunately we have an error in the schema 
somewhere.

The schema is a little over 300 lines. The validation fails and the little 
warning triangle appears on the processor as expected. However, the pop-up text 
is so big that I can't actually see the error because it prints out the schema 
first. See the attached screenshot.





[jira] [Commented] (NIFI-4130) TransformXml - provide a way to define XSLT without external files

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083987#comment-16083987
 ] 

ASF GitHub Bot commented on NIFI-4130:
--

Github user jfrazee commented on the issue:

https://github.com/apache/nifi/pull/1953
  
@pvillard31 Right, wouldn't want to do that. I misunderstood what you were 
doing. This makes sense.


> TransformXml - provide a way to define XSLT without external files
> --
>
> Key: NIFI-4130
> URL: https://issues.apache.org/jira/browse/NIFI-4130
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> In cluster deployments the need to reference external configuration files can 
> be annoying since it requires access to all the NiFi nodes and correct 
> deployment of the files. It would be interesting to leverage the lookup 
> controller services in TransformXml to provide a way to define XSLT directly 
> from the UI without external configuration files.





[jira] [Updated] (NIFI-4177) MergeContent - Tar - Save modification timestamp like Tar does

2017-07-12 Thread Wayne Steel (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wayne Steel updated NIFI-4177:
--
Description: 
Tar by default saves the modification timestamp of entries.
This mainly affects file-based entries, so it could be done by reading the 
attribute file.lastModifiedTime, if it exists, which is written to the flowfile 
by the GetFile or ListFile processors.
Otherwise just leave it out, as it does now.

I propose a property with the default expression ${file.lastModifiedTime}, but 
the value must resolve to the date format "yyyy-MM-dd'T'HH:mm:ssZ". It should 
only be enabled when MERGE_FORMAT is set to MERGE_FORMAT_TAR.

  was:
Tar by default saves the modification timestamp of entries.
This mainly affects file-based entries, so it could be done by reading the 
attribute file.lastModifiedTime, if it exists, which is written to the flowfile 
by the GetFile or ListFile processors.
Otherwise just leave it out, as it does now.

I propose a property with the default expression ${file.lastModifiedTime}, but 
the value must resolve to the date format "yyyy-MM-dd'T'HH:mm:ssZ". It should 
only be visible and enabled when MERGE_FORMAT is set to MERGE_FORMAT_TAR.


> MergeContent - Tar - Save modification timestamp like Tar does
> --
>
> Key: NIFI-4177
> URL: https://issues.apache.org/jira/browse/NIFI-4177
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.3.0
>Reporter: Wayne Steel
>Priority: Trivial
>
> Tar by default saves the modification timestamp of entries.
> This mainly affects file-based entries, so it could be done by reading the 
> attribute file.lastModifiedTime, if it exists, which is written to the 
> flowfile by the GetFile or ListFile processors.
> Otherwise just leave it out, as it does now.
> I propose a property with the default expression ${file.lastModifiedTime}, but 
> the value must resolve to the date format "yyyy-MM-dd'T'HH:mm:ssZ". It 
> should only be enabled when MERGE_FORMAT is set to MERGE_FORMAT_TAR.





[jira] [Created] (NIFI-4177) MergeContent - Tar - Save modification timestamp like Tar does

2017-07-12 Thread Wayne Steel (JIRA)
Wayne Steel created NIFI-4177:
-

 Summary: MergeContent - Tar - Save modification timestamp like Tar 
does
 Key: NIFI-4177
 URL: https://issues.apache.org/jira/browse/NIFI-4177
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 1.3.0
Reporter: Wayne Steel
Priority: Trivial


Tar by default saves the modification timestamp of entries.
This mainly affects file-based entries, so it could be done by reading the 
attribute file.lastModifiedTime, if it exists, which is written to the flowfile 
by the GetFile or ListFile processors.
Otherwise just leave it out, as it does now.

I propose a property with the default expression ${file.lastModifiedTime}, but 
the value must resolve to the date format "yyyy-MM-dd'T'HH:mm:ssZ". It should 
only be visible and enabled when MERGE_FORMAT is set to MERGE_FORMAT_TAR.
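Parsing the proposed format is straightforward with SimpleDateFormat. A rough sketch under the assumption that the resulting Date would then be handed to TarArchiveEntry#setModTime (a method on the commons-compress class MergeContent already uses); the sample value below is only illustrative:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class TarModTime {
    // Parse the proposed attribute value (e.g. the EL result of
    // ${file.lastModifiedTime}) using the format from the description above.
    public static Date parseLastModified(String value) throws ParseException {
        return new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ").parse(value);
    }

    public static void main(String[] args) throws ParseException {
        Date modTime = parseLastModified("2017-07-12T13:52:52+0000");
        // MergeContent would then call tarEntry.setModTime(modTime)
        // when the merge format is TAR and the attribute is present.
        System.out.println(modTime.getTime()); // epoch milliseconds
    }
}
```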





[jira] [Commented] (NIFI-4174) GenerateTableFetch does not work with oracle on Nifi 1.2

2017-07-12 Thread Koji Kawamura (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083644#comment-16083644
 ] 

Koji Kawamura commented on NIFI-4174:
-

There are two things here: one is the connection timeout setting, and the other 
is the error message.

In your case, the OracleDriver kept waiting to make a connection and the NiFi 
framework decided to time it out. In this case I don't think we can produce a 
better error message. By default, no connection timeout is set and the driver 
keeps waiting.

In order to set a connection timeout, we need to set different JDBC properties 
for different drivers:
For the [Oracle Thin 
driver|http://docs.oracle.com/cd/E18283_01/appdev.112/e13995/constant-values.html#oracle_jdbc_OracleConnection_CONNECTION_PROPERTY_THIN_READ_TIMEOUT],
 use "oracle.net.CONNECT_TIMEOUT";
[MySQL|https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-configuration-properties.html]
 uses "connectTimeout"; 
[PostgreSQL|https://jdbc.postgresql.org/documentation/91/connect.html] seems to 
use "loginTimeout".

By setting these JDBC properties as 'User Defined properties' (dynamic 
processor properties), the driver can throw an exception when it times out.

Even so, since the NiFi framework uses an async call and the root cause is 
buried deep in the exception that NiFi catches, we would need to change NiFi 
framework code to surface the meaningful exception message as a bulletin 
message.
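For reference, a minimal sketch of what setting such a driver-specific property looks like with plain JDBC. The credentials and connection URL in the comments are placeholders; the property names are the ones listed above:

```java
import java.util.Properties;

public class DriverTimeoutProperties {
    // Map a driver to its connect-timeout property name (from the list above).
    public static String timeoutPropertyFor(String driver) {
        switch (driver) {
            case "oracle-thin": return "oracle.net.CONNECT_TIMEOUT"; // milliseconds
            case "mysql":       return "connectTimeout";             // milliseconds
            case "postgresql":  return "loginTimeout";               // seconds
            default: throw new IllegalArgumentException("unknown driver: " + driver);
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("user", "nifi");       // placeholder credentials
        props.setProperty("password", "secret");
        props.setProperty(timeoutPropertyFor("oracle-thin"), "5000");
        // DriverManager.getConnection("jdbc:oracle:thin:@//host:1521/SID", props)
        // would now fail with a driver exception after ~5 seconds instead of hanging.
        System.out.println(props.getProperty("oracle.net.CONNECT_TIMEOUT"));
    }
}
```

In NiFi, the same effect comes from adding the property name as a dynamic property on the DBCPConnectionPool controller service, as described above.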

By setting the Oracle timeout, I got the following stack trace:
{code}
2017-07-12 17:06:09,483 ERROR [StandardProcessScheduler Thread-1] 
o.a.n.controller.StandardProcessorNode Failed to invoke @OnScheduled method due 
to java.lang.RuntimeException: Failed while executing one
of processor's OnScheduled task.
java.lang.RuntimeException: Failed while executing one of processor's 
OnScheduled task.
at 
org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1482)
at 
org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
at 
org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: 
java.lang.reflect.InvocationTargetException
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
... 9 common frames omitted
Caused by: java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
at 
org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
at 
org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1307)
at 
org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1303)
... 6 common frames omitted
Caused by: org.apache.nifi.processor.exception.ProcessException: 
org.apache.commons.dbcp.SQLNestedException: Cannot create 
PoolableConnectionFactory (IO Error: The Network Adapter could not establish 
the connection)
at 
org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:275)
at sun.reflect.GeneratedMethodAccessor434.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:89)
at 

[jira] [Commented] (NIFI-4174) GenerateTableFetch does not work with oracle on Nifi 1.2

2017-07-12 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083577#comment-16083577
 ] 

Jorge Machado commented on NIFI-4174:
-

After some tests I found out that I don't have a connection to the DB. 
This should throw an error like 'Cannot connect to database' instead of a 
scheduler error. 

> GenerateTableFetch does not work with oracle on Nifi 1.2
> 
>
> Key: NIFI-4174
> URL: https://issues.apache.org/jira/browse/NIFI-4174
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Jorge Machado
>Priority: Minor
>
> I'm trying to extract some data from a oracle DB.  
> I'm getting : 
> {code:java}
> 2017-07-11 16:19:29,612 WARN [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Timed out while waiting for 
> OnScheduled of 'GenerateTableFetch' processor to finish. An attempt is made 
> to cancel the task via Thread.interrupt(). However it does not guarantee that 
> the task will be canceled since the code inside current OnScheduled operation 
> may have been written to ignore interrupts which may result in a runaway 
> thread. This could lead to more issues, eventually requiring NiFi to be 
> restarted. This is usually a bug in the target Processor 
> 'GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4]' that needs to 
> be documented, reported and eventually fixed.
> 2017-07-11 16:19:29,612 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.p.standard.GenerateTableFetch 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] 
> GenerateTableFetch[id=f08a3acd-ac7e-17d7-598b-8f9720fd92d4] failed to invoke 
> @OnScheduled method due to java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.; processor will not be 
> scheduled to run for 30 seconds: java.lang.RuntimeException: Timed out while 
> executing one of processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1465)
>   ... 9 common frames omitted
> 2017-07-11 16:19:29,613 ERROR [StandardProcessScheduler Thread-7] 
> o.a.n.controller.StandardProcessorNode Failed to invoke @OnScheduled method 
> due to java.lang.RuntimeException: Timed out while executing one of 
> processor's OnScheduled task.
> java.lang.RuntimeException: Timed out while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1480)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.TimeoutException: null
>   at java.util.concurrent.FutureTask.get(FutureTask.java:205)
>   at 
> 

[jira] [Commented] (NIFI-4130) TransformXml - provide a way to define XSLT without external files

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083571#comment-16083571
 ] 

ASF GitHub Bot commented on NIFI-4130:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1953
  
Yes @jfrazee, that could be an option. However, do you think it's a good 
idea to have the XSLT as an attribute of the flow files? The XSLT could be 
really big and I'm not sure users will have the reflex to use an 
UpdateAttribute to remove it after the TransformXml.


> TransformXml - provide a way to define XSLT without external files
> --
>
> Key: NIFI-4130
> URL: https://issues.apache.org/jira/browse/NIFI-4130
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> In cluster deployments the need to reference external configuration files can 
> be annoying since it requires access to all the NiFi nodes and correct 
> deployment of the files. It would be interesting to leverage the lookup 
> controller services in TransformXml to provide a way to define XSLT directly 
> from the UI without external configuration files.







[jira] [Commented] (NIFI-4169) PutWebSocket processor with blank WebSocket session id attribute cannot transfer to failure queue

2017-07-12 Thread Koji Kawamura (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083547#comment-16083547
 ] 

Koji Kawamura commented on NIFI-4169:
-

Yes, I was suggesting cloning each flowfile and setting a single 
'websocket.session.id' attribute.
With this approach, users can route the failed FlowFiles back to PutWebSocket 
if necessary to retry sending the message to only the failed peers, without 
duplicating the same message to the peers that already succeeded.
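The routing logic being suggested can be illustrated with a small self-contained sketch (the names are illustrative, not PutWebSocket's actual code): one clone per session id, with only the failed clones going to the failure relationship, so a retry never re-sends to peers that already succeeded.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class PerSessionRouting {
    // For each connected session, "clone" the message with a single
    // websocket.session.id and collect the ids whose send failed; the caller
    // would route those clones to the failure relationship for retry.
    public static List<String> failedSessions(List<String> sessionIds,
                                              Predicate<String> send) {
        List<String> failed = new ArrayList<>();
        for (String id : sessionIds) {
            if (!send.test(id)) {
                failed.add(id);
            }
        }
        return failed;
    }
}
```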

> PutWebSocket processor with blank WebSocket session id attribute cannot 
> transfer to failure queue
> -
>
> Key: NIFI-4169
> URL: https://issues.apache.org/jira/browse/NIFI-4169
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Y Wikander
>Priority: Critical
>  Labels: patch
> Attachments: 
> 0001-websocket-when-sendMessage-fails-under-blank-session.patch
>
>
> If a PutWebSocket processor is set up with a blank WebSocket session id 
> attribute (see NIFI-3318; Send message from PutWebSocket to all connected 
> clients) and it is not connected to a websocket server it will log the 
> failure and mark the flowfile with Success (rather than Failure) -- and the 
> data is effectively lost.
> If there are multiple connected clients, and some succeed and others fail, 
> routing Failure back into the PutWebSocket could result in duplicate data to 
> some clients.
> Other NiFi processors seem to err on the side of "at least once".


