[jira] [Commented] (NIFI-4004) Refactor RecordReaderFactory and SchemaAccessStrategy to be used without incoming FlowFile

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131676#comment-16131676
 ] 

ASF GitHub Bot commented on NIFI-4004:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1877
  
@markap14 Thanks for the suggestion, I agree with that. I've made the following 
changes:

- Added a default method to RecordReaderFactory so that existing processors 
needn't change
- Reverted changes to classes that can utilize the newly added default 
method: PutElasticsearchHttpRecord, AbstractPutHDFSRecord, PutParquetTest, 
PutDatabaseRecord, FlowFileEnumerator and FlowFileTable. This reduced the 
volume of this PR a little.
- Rebased on the latest master and updated a few new classes to match the 
new RecordSetWriterFactory method signatures.

Local contrib check passed without issue. Also tested a live flow with 
various record readers and writers. 
https://gist.github.com/ijokarumawak/a6c33eef30d0cd9786eab7eeacccb7ff

I hope it's now ready to be merged, thanks!
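
For reference, a minimal sketch of the default-method approach described 
above (signatures approximate; not the exact interface committed in this PR):

```
// Sketch only: approximate signatures, not the committed NiFi interface.
// New callers pass a variable map (e.g. Kafka topic attributes) instead of a
// FlowFile; the default method keeps old FlowFile-based call sites compiling.
import java.io.IOException;
import java.io.InputStream;
import java.util.Collections;
import java.util.Map;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.logging.ComponentLog;
import org.apache.nifi.schema.access.SchemaNotFoundException;
import org.apache.nifi.serialization.MalformedRecordException;
import org.apache.nifi.serialization.RecordReader;

public interface RecordReaderFactorySketch {

    // FlowFile-free entry point: schema resolution uses the supplied variables.
    RecordReader createRecordReader(Map<String, String> variables, InputStream in, ComponentLog logger)
            throws IOException, SchemaNotFoundException, MalformedRecordException;

    // Default method so existing processors needn't change: delegates with the
    // FlowFile's attributes as the variable map.
    default RecordReader createRecordReader(FlowFile flowFile, InputStream in, ComponentLog logger)
            throws IOException, SchemaNotFoundException, MalformedRecordException {
        return createRecordReader(
                flowFile == null ? Collections.emptyMap() : flowFile.getAttributes(),
                in, logger);
    }
}
```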


> Refactor RecordReaderFactory and SchemaAccessStrategy to be used without 
> incoming FlowFile
> --
>
> Key: NIFI-4004
> URL: https://issues.apache.org/jira/browse/NIFI-4004
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.2.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> The current RecordReaderFactory and SchemaAccessStrategy implementations 
> assume there is always an incoming FlowFile available and use it to resolve 
> the Record Schema.
> That is fine for components that convert or update incoming FlowFiles; 
> however, other components, such as ConsumeKafkaRecord_0_10, do not have any 
> incoming FlowFiles. Typically, components that fetch data from an external 
> system have no incoming FlowFile, and the current API does not fit them well 
> because it requires one.
> In fact, [ConsumeKafkaRecord creates a temporary 
> FlowFile|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-0-10-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/ConsumerLease.java#L426]
>  only to get a RecordSchema. This should be avoided as we expect more 
> components to start using the record reader mechanism.
> This JIRA proposes refactoring the current API to allow accessing 
> RecordReaders without needing an incoming FlowFile.
> Additionally, since there is a Schema Access Strategy that requires an 
> incoming FlowFile containing attribute values to access the schema registry, 
> it would be useful to tell the user, when such a RecordReader is specified, 
> that it cannot be used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1877: NIFI-4004: Use RecordReaderFactory without FlowFile.

2017-08-17 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1877
  
@markap14 Thanks for the suggestion, I agree with that. I've made the following 
changes:

- Added a default method to RecordReaderFactory so that existing processors 
needn't change
- Reverted changes to classes that can utilize the newly added default 
method: PutElasticsearchHttpRecord, AbstractPutHDFSRecord, PutParquetTest, 
PutDatabaseRecord, FlowFileEnumerator and FlowFileTable. This reduced the 
volume of this PR a little.
- Rebased on the latest master and updated a few new classes to match the 
new RecordSetWriterFactory method signatures.

Local contrib check passed without issue. Also tested a live flow with 
various record readers and writers. 
https://gist.github.com/ijokarumawak/a6c33eef30d0cd9786eab7eeacccb7ff

I hope it's now ready to be merged, thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3484) GenerateTableFetch Should Allow for Right Boundary

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131659#comment-16131659
 ] 

ASF GitHub Bot commented on NIFI-3484:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2091#discussion_r133872895
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -112,6 +112,17 @@
 
.addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor RIGHT_BOUND_WHERE = new 
PropertyDescriptor.Builder()
--- End diff --

I am OK with proceeding that way, though I'd feel better if I knew how many 
databases this has been tested on. When I wrote it, my focus was on one 
relatively uncommon system (SAP HANA). It tests out fine, but I just worry 
about making it the default.


> GenerateTableFetch Should Allow for Right Boundary
> --
>
> Key: NIFI-3484
> URL: https://issues.apache.org/jira/browse/NIFI-3484
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
>
> When using GenerateTableFetch, it places no right-hand boundary on pages of 
> data. This can lead to issues when the statement says to get the next 1000 
> records greater than a specific key, but records were added to the table 
> between the time the processor executed and the time the SQL is executed. 
> As a result, it pulls in records that did not exist when the processor was 
> run. On the next execution of the processor, these records will be pulled in 
> a second time.
> Example:
> Partition Size = 1000
> First run (no state): Count(*)=4700 and MAX(ID)=4700.
> 5 FlowFiles are generated; the last one will say to fetch 1000 rows, not 700. 
> (But I don't think this is really a bug, just an observation.)
> 5 FlowFiles are now in the queue to be executed by ExecuteSQL. Before the 5th 
> file can execute, 400 new rows are added to the table. When the final SQL 
> statement is executed, 300 extra records, with higher ID values, will also be 
> pulled into NiFi.
> Second run (state: ID=4700): Count(*) where ID>4700 = 400 and MAX(ID)=5100.
> 1 FlowFile is generated, but it includes the 300 records already pulled into 
> NiFi.
> The solution is to have an optional property that lets users use the new 
> MAX(ID) as a right boundary when generating queries.
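
A minimal sketch of the right-bounded paging this implies (illustrative, not 
the actual GenerateTableFetch change): MAX(ID) is captured once per run and 
used as an upper bound, so rows inserted after the run cannot leak into the 
generated pages.

```
import java.util.ArrayList;
import java.util.List;

public class RightBoundSketch {

    // Build one right-bounded query per page; maxId is the MAX(ID) observed
    // at the start of the run.
    static List<String> generatePages(String table, String idCol,
                                      long lastId, long maxId, long pageSize) {
        List<String> queries = new ArrayList<>();
        for (long lower = lastId; lower < maxId; lower += pageSize) {
            long upper = Math.min(lower + pageSize, maxId);
            queries.add("SELECT * FROM " + table
                    + " WHERE " + idCol + " > " + lower
                    + " AND " + idCol + " <= " + upper);
        }
        return queries;
    }

    public static void main(String[] args) {
        // First run from the example (MAX(ID)=4700, partition size 1000):
        // 5 queries, the last covering IDs 4001..4700, so the 400 rows
        // inserted later are left for the next run.
        generatePages("mytable", "ID", 0, 4700, 1000).forEach(System.out::println);
    }
}
```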



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2091: NIFI-3484 GenerateTableFetch Should Allow for Right...

2017-08-17 Thread patricker
Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2091#discussion_r133872895
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -112,6 +112,17 @@
 
.addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor RIGHT_BOUND_WHERE = new 
PropertyDescriptor.Builder()
--- End diff --

I am OK with proceeding that way, though I'd feel better if I knew how many 
databases this has been tested on. When I wrote it, my focus was on one 
relatively uncommon system (SAP HANA). It tests out fine, but I just worry 
about making it the default.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3927) Extract HL7 Attributes throwing NullPointerException

2017-08-17 Thread Douglas Moore (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131329#comment-16131329
 ] 

Douglas Moore commented on NIFI-3927:
-

[~rajkaran] Yes, we can confirm Z segments were causing the problem. We'd need 
to get that fixed.

> Extract HL7 Attributes throwing NullPointerException
> 
>
> Key: NIFI-3927
> URL: https://issues.apache.org/jira/browse/NIFI-3927
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
> Environment: NiFi with HortonWorks
>Reporter: Raj karan
>Assignee: Joey Frazee
> Attachments: null pointer.png, pipe_ended_charcter_encoded_ascii.png, 
> resultWithDefault.txt, source.txt, when not ended with pipe.png
>
>
> I have an HL7 file that I want to put in HBase, so I am parsing this file 
> through the ExtractHL7Attributes processor. With the default value for every 
> property, the processor works with no error, but the resulting FlowFile's 
> attributes only cover one segment. When I set the `Use Segment Names` 
> property to true, it throws a NullPointerException.
> Stack trace:
> 2017-05-17 11:11:58,390 INFO [Heartbeat Monitor Thread-1] 
> o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 1 heartbeats in 4756 
> nanos
> 2017-05-17 11:11:58,847 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.hl7.ExtractHL7Attributes 
> ExtractHL7Attributes[id=bea89fef-86db-1094--81c2e524] Failed to 
> extract attributes from 
> StandardFlowFileRecord[uuid=73a649fe-261c-40d2-bad7-b0bc595c0158,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1495030753601-25550, 
> container=default, section=974], offset=912561, 
> length=288],offset=0,name=source.txt,size=288] due to 
> ca.uhn.hl7v2.HL7Exception: The HL7 version 2.3
> EVN is not recognized: ca.uhn.hl7v2.HL7Exception: The HL7 version 2.3
> EVN is not recognized
> 2017-05-17 11:11:58,848 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.hl7.ExtractHL7Attributes 
> ca.uhn.hl7v2.HL7Exception: The HL7 version 2.3
> EVN is not recognized
>   at ca.uhn.hl7v2.parser.Parser.assertVersionExists(Parser.java:527) 
> ~[hapi-base-2.2.jar:na]
>   at ca.uhn.hl7v2.parser.Parser.parse(Parser.java:208) 
> ~[hapi-base-2.2.jar:na]
>   at ca.uhn.hl7v2.parser.PipeParser.parse(PipeParser.java:1018) 
> ~[hapi-base-2.2.jar:na]
>   at 
> org.apache.nifi.processors.hl7.ExtractHL7Attributes.onTrigger(ExtractHL7Attributes.java:195)
>  ~[nifi-hl7-processors-1.0.0.2.0.2.0-17.jar:1.0.0.2.0.2.0-17]
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  [nifi-api-1.0.0.2.0.2.0-17.jar:1.0.0.2.0.2.0-17]
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064)
>  [nifi-framework-core-1.0.0.2.0.2.0-17.jar:1.0.0.2.0.2.0-17]
>   at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.0.0.2.0.2.0-17.jar:1.0.0.2.0.2.0-17]
>   at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-1.0.0.2.0.2.0-17.jar:1.0.0.2.0.2.0-17]
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  [nifi-framework-core-1.0.0.2.0.2.0-17.jar:1.0.0.2.0.2.0-17]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_77]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_77]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_77]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_77]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_77]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_77]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> 2017-05-17 11:11:58,852 ERROR [Timer-Driven Process Thread-1] 
> o.a.n.p.hl7.ExtractHL7Attributes 
> ExtractHL7Attributes[id=bea89fef-86db-1094--81c2e524] 
> ExtractHL7Attributes[id=bea89fef-86db-1094--81c2e524] failed to 
> process due to java.lang.NullPointerException; rolling back session: 
> java.lang.NullPointerException
> 2017-05-17 11:11:58,852 ERROR [Timer-Driven Process Thread-1] 
> o.a.n.p.hl7.ExtractHL7Attributes 
> java.lang.NullPointerException: null
> 2017-05-17 11:11:58,852 ERROR [Timer-Driven Process Thread-1] 
> o.a.n.p.hl7.ExtractHL7Attributes 
> ExtractHL7Attributes[id=bea89fef-86db-1094--81c2e524] 
> 
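
For reference, a minimal parsing sketch with the HAPI library from the stack 
trace above (hapi-base; message content illustrative). HL7 v2 separates 
segments with carriage returns, and the "The HL7 version 2.3 / EVN is not 
recognized" error is what HAPI reports when the segment delimiter bleeds into 
the version field, e.g. when a message uses newlines instead:

```
import ca.uhn.hl7v2.DefaultHapiContext;
import ca.uhn.hl7v2.HL7Exception;
import ca.uhn.hl7v2.HapiContext;
import ca.uhn.hl7v2.model.Message;
import ca.uhn.hl7v2.validation.impl.ValidationContextFactory;

public class Hl7ParseSketch {
    public static void main(String[] args) throws HL7Exception {
        // Segments must be separated by \r, not \n; ZXY is a site-defined Z segment.
        final String msg =
                "MSH|^~\\&|APP|FAC|APP2|FAC2|20170517||ADT^A01|123|P|2.3\r"
              + "EVN|A01|20170517\r"
              + "ZXY|custom|z-segment\r";

        HapiContext ctx = new DefaultHapiContext();
        ctx.setValidationContext(ValidationContextFactory.noValidation());
        Message parsed = ctx.getPipeParser().parse(msg);
        System.out.println("Parsed structure: " + parsed.getName());
    }
}
```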

[jira] [Commented] (NIFI-3484) GenerateTableFetch Should Allow for Right Boundary

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131224#comment-16131224
 ] 

ASF GitHub Bot commented on NIFI-3484:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2091#discussion_r133817061
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -112,6 +112,17 @@
 
.addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor RIGHT_BOUND_WHERE = new 
PropertyDescriptor.Builder()
--- End diff --

Is there any situation where we'd want this false? I'm thinking this might 
be another case where changing the behavior is a fix and we don't necessarily 
need to offer another property to maintain the original behavior.


> GenerateTableFetch Should Allow for Right Boundary
> --
>
> Key: NIFI-3484
> URL: https://issues.apache.org/jira/browse/NIFI-3484
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
>
> When using GenerateTableFetch, it places no right-hand boundary on pages of 
> data. This can lead to issues when the statement says to get the next 1000 
> records greater than a specific key, but records were added to the table 
> between the time the processor executed and the time the SQL is executed. 
> As a result, it pulls in records that did not exist when the processor was 
> run. On the next execution of the processor, these records will be pulled in 
> a second time.
> Example:
> Partition Size = 1000
> First run (no state): Count(*)=4700 and MAX(ID)=4700.
> 5 FlowFiles are generated; the last one will say to fetch 1000 rows, not 700. 
> (But I don't think this is really a bug, just an observation.)
> 5 FlowFiles are now in the queue to be executed by ExecuteSQL. Before the 5th 
> file can execute, 400 new rows are added to the table. When the final SQL 
> statement is executed, 300 extra records, with higher ID values, will also be 
> pulled into NiFi.
> Second run (state: ID=4700): Count(*) where ID>4700 = 400 and MAX(ID)=5100.
> 1 FlowFile is generated, but it includes the 300 records already pulled into 
> NiFi.
> The solution is to have an optional property that lets users use the new 
> MAX(ID) as a right boundary when generating queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4250) Create support for deleting document by id from elasticsearch 5

2017-08-17 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4250:
---
Status: Patch Available  (was: Open)

> Create support for deleting document by id from elasticsearch 5
> ---
>
> Key: NIFI-4250
> URL: https://issues.apache.org/jira/browse/NIFI-4250
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: delete, elasticsearch
>
> Create a processor to delete documents from Elasticsearch 5 based on document 
> id.
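
For context, the underlying Elasticsearch 5 REST call such a processor would 
issue (host, index, type and id illustrative; not the processor implementation):

```
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsDeleteSketch {
    public static void main(String[] args) throws IOException {
        // Elasticsearch 5 deletes a document via DELETE /{index}/{type}/{id};
        // requires a reachable cluster at the given address.
        URL url = new URL("http://localhost:9200/myindex/mytype/doc-1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("DELETE");
        System.out.println("HTTP status: " + conn.getResponseCode()); // 200 deleted, 404 missing
        conn.disconnect();
    }
}
```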



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2091: NIFI-3484 GenerateTableFetch Should Allow for Right...

2017-08-17 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2091#discussion_r133817061
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -112,6 +112,17 @@
 
.addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor RIGHT_BOUND_WHERE = new 
PropertyDescriptor.Builder()
--- End diff --

Is there any situation where we'd want this false? I'm thinking this might 
be another case where changing the behavior is a fix and we don't necessarily 
need to offer another property to maintain the original behavior.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3612) Add support for Parquet to Nifi-Registry-Bundle

2017-08-17 Thread Daniel Chaffelson (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131220#comment-16131220
 ] 

Daniel Chaffelson commented on NIFI-3612:
-

Sounds perfectly reasonable to me.

> Add support for Parquet to Nifi-Registry-Bundle
> ---
>
> Key: NIFI-3612
> URL: https://issues.apache.org/jira/browse/NIFI-3612
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Daniel Chaffelson
>Assignee: Oleg Zhurakousky
>
> This bundle could potentially be extended to include a Parquet transform by 
> leveraging the Apache 2.0 licensed parquet-mr/avro libraries:
> https://github.com/apache/parquet-mr/tree/master/parquet-avro
> This would provide coverage of this popular format to complement the ORC 
> support in the Hive Bundle and the other schema-dependent formats already in 
> this bundle.
> The existing NiFi Parquet support in the Kite bundle can only write to a 
> non-Kerberized Kite Dataset, which prevents usage in secured environments or 
> writing to a FlowFile.
> As Parquet is the main competitor to ORC, providing more generic Parquet 
> transform support would greatly widen the pool of potential NiFi adopters, 
> particularly in the Spark community.
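
A minimal writing sketch with the parquet-avro library linked above (schema 
and output path illustrative):

```
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;

public class ParquetAvroSketch {
    public static void main(String[] args) throws IOException {
        // The Avro schema drives the Parquet file's columnar layout.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":"
          + "[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"name\",\"type\":\"string\"}]}");

        try (ParquetWriter<GenericRecord> writer =
                 AvroParquetWriter.<GenericRecord>builder(new Path("/tmp/users.parquet"))
                                  .withSchema(schema)
                                  .build()) {
            GenericRecord rec = new GenericData.Record(schema);
            rec.put("id", 1L);
            rec.put("name", "jane");
            writer.write(rec);
        }
    }
}
```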



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4304) Create build property for HWX schema registry client version

2017-08-17 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4304:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Create build property for HWX schema registry client version
> 
>
> Key: NIFI-4304
> URL: https://issues.apache.org/jira/browse/NIFI-4304
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.4.0
>
>
> Currently, the version of the HWX schema registry client is defined directly 
> in the pom of the service module:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hwx-schema-registry-bundle/nifi-hwx-schema-registry-service/pom.xml#L61
> We should define a property in the root pom for the version and reference it 
> there, as we do for many other versions; this would let someone easily 
> override it at build time. 
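
A sketch of the pattern being proposed (property name and dependency 
coordinates illustrative; the real ones are in the linked pom):

```
<!-- Root pom.xml: define the version once; a command-line
     -Dhwx.schema.registry.version=... overrides it at build time. -->
<properties>
    <hwx.schema.registry.version>0.3.0</hwx.schema.registry.version>
</properties>

<!-- nifi-hwx-schema-registry-service/pom.xml: reference the property. -->
<dependency>
    <groupId>com.hortonworks.registries</groupId>
    <artifactId>schema-registry-serdes</artifactId>
    <version>${hwx.schema.registry.version}</version>
</dependency>
```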



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4304) Create build property for HWX schema registry client version

2017-08-17 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4304:
---
Fix Version/s: 1.4.0

> Create build property for HWX schema registry client version
> 
>
> Key: NIFI-4304
> URL: https://issues.apache.org/jira/browse/NIFI-4304
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.4.0
>
>
> Currently, the version of the HWX schema registry client is defined directly 
> in the pom of the service module:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hwx-schema-registry-bundle/nifi-hwx-schema-registry-service/pom.xml#L61
> We should define a property in the root pom for the version and reference it 
> there, as we do for many other versions; this would let someone easily 
> override it at build time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4304) Create build property for HWX schema registry client version

2017-08-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131216#comment-16131216
 ] 

ASF subversion and git services commented on NIFI-4304:
---

Commit 60d4672195b89cb12e1f75f0a537c9adbab654bd in nifi's branch 
refs/heads/master from [~bbende]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=60d4672 ]

NIFI-4304 Extracting HWX Schema Registry client version to a property and 
bumping to latest 0.3.0 release

Signed-off-by: Matthew Burgess 

This closes #2096


> Create build property for HWX schema registry client version
> 
>
> Key: NIFI-4304
> URL: https://issues.apache.org/jira/browse/NIFI-4304
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> Currently, the version of the HWX schema registry client is defined directly 
> in the pom of the service module:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hwx-schema-registry-bundle/nifi-hwx-schema-registry-service/pom.xml#L61
> We should define a property in the root pom for the version and reference it 
> there, as we do for many other versions; this would let someone easily 
> override it at build time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2096: NIFI-4304 Extracting HWX Schema Registry client ver...

2017-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2096


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4304) Create build property for HWX schema registry client version

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131217#comment-16131217
 ] 

ASF GitHub Bot commented on NIFI-4304:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2096


> Create build property for HWX schema registry client version
> 
>
> Key: NIFI-4304
> URL: https://issues.apache.org/jira/browse/NIFI-4304
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> Currently, the version of the HWX schema registry client is defined directly 
> in the pom of the service module:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hwx-schema-registry-bundle/nifi-hwx-schema-registry-service/pom.xml#L61
> We should define a property in the root pom for the version and reference it 
> there, as we do for many other versions; this would let someone easily 
> override it at build time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4304) Create build property for HWX schema registry client version

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131215#comment-16131215
 ] 

ASF GitHub Bot commented on NIFI-4304:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2096
  
+1 LGTM, ran tests and verified version upgrade, thanks! Merging to master


> Create build property for HWX schema registry client version
> 
>
> Key: NIFI-4304
> URL: https://issues.apache.org/jira/browse/NIFI-4304
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> Currently, the version of the HWX schema registry client is defined directly 
> in the pom of the service module:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hwx-schema-registry-bundle/nifi-hwx-schema-registry-service/pom.xml#L61
> We should define a property in the root pom for the version and reference it 
> there, as we do for many other versions; this would let someone easily 
> override it at build time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2096: NIFI-4304 Extracting HWX Schema Registry client version to...

2017-08-17 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2096
  
+1 LGTM, ran tests and verified version upgrade, thanks! Merging to master


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130982#comment-16130982
 ] 

ASF GitHub Bot commented on NIFI-4181:
--

Github user Wesley-Lawrence commented on the issue:

https://github.com/apache/nifi/pull/2003
  
Yea, I think we're on the same page.


> CSVReader and CSVRecordSetWriter services should be able to work given an 
> explicit list of columns.
> ---
>
> Key: NIFI-4181
> URL: https://issues.apache.org/jira/browse/NIFI-4181
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4181.patch
>
>
> Currently, to read or write a CSV file with *Record processors, the CSVReader 
> and CSVRecordSetWriters need to be given an Avro schema. For CSV, a simple 
> column definition can also work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2003: NIFI-4181 CSVReader and CSVRecordSetWriter can be used by ...

2017-08-17 Thread Wesley-Lawrence
Github user Wesley-Lawrence commented on the issue:

https://github.com/apache/nifi/pull/2003
  
Yea, I think we're on the same page.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-4218) ElasticsearchHttp processors should support dynamic properties as query parameters

2017-08-17 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4218:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ElasticsearchHttp processors should support dynamic properties as query 
> parameters
> --
>
> Key: NIFI-4218
> URL: https://issues.apache.org/jira/browse/NIFI-4218
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.4.0
>
>
> The Elasticsearch HTTP API has a number of fields that can be specified as 
> query parameters in the URL, such as support for 
> [pipelines|https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html].
>  Rather than including all possibilities as processor properties, it might be 
> more flexible to allow the user to specify dynamic properties on 
> ElasticsearchHttp processors, and then use those to set query parameters on 
> the API URL.
> Documentation should include a note that not all features are available to 
> all versions of Elasticsearch, and thus the ES documentation should be 
> consulted before adding dynamic properties. For example, pipelines were 
> introduced in ES 5.x, so using pipeline parameters in an ElasticsearchHttp 
> processor will not work if connecting to an ES 2.x cluster.
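
A minimal sketch of the proposed mapping (illustrative, not the merged 
implementation): each dynamic property name/value pair becomes a query 
parameter on the API URL.

```
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class EsQueryParamSketch {

    // Append dynamic properties as URL query parameters.
    static String appendQueryParams(String baseUrl, Map<String, String> dynamicProps) {
        StringBuilder url = new StringBuilder(baseUrl);
        char sep = baseUrl.contains("?") ? '&' : '?';
        for (Map.Entry<String, String> e : dynamicProps.entrySet()) {
            url.append(sep)
               .append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))   // Java 10+ overload
               .append('=')
               .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
            sep = '&';
        }
        return url.toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("pipeline", "my-pipeline"); // pipelines are ES 5.x+, per the note above
        System.out.println(appendQueryParams("http://localhost:9200/idx/type/_bulk", props));
    }
}
```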



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4218) ElasticsearchHttp processors should support dynamic properties as query parameters

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130971#comment-16130971
 ] 

ASF GitHub Bot commented on NIFI-4218:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2049


> ElasticsearchHttp processors should support dynamic properties as query 
> parameters
> --
>
> Key: NIFI-4218
> URL: https://issues.apache.org/jira/browse/NIFI-4218
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.4.0
>
>
> The Elasticsearch HTTP API has a number of fields that can be specified as 
> query parameters in the URL, such as support for 
> [pipelines|https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html].
>  Rather than including all possibilities as processor properties, it might be 
> more flexible to allow the user to specify dynamic properties on 
> ElasticsearchHttp processors, and then use those to set query parameters on 
> the API URL.
> Documentation should include a note that not all features are available to 
> all versions of Elasticsearch, and thus the ES documentation should be 
> consulted before adding dynamic properties. For example, pipelines were 
> introduced in ES 5.x, so using pipeline parameters in an ElasticsearchHttp 
> processor will not work if connecting to an ES 2.x cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4218) ElasticsearchHttp processors should support dynamic properties as query parameters

2017-08-17 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4218:
--
Fix Version/s: 1.4.0

> ElasticsearchHttp processors should support dynamic properties as query 
> parameters
> --
>
> Key: NIFI-4218
> URL: https://issues.apache.org/jira/browse/NIFI-4218
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.4.0
>
>
> The Elasticsearch HTTP API has a number of fields that can be specified as 
> query parameters in the URL, such as support for 
> [pipelines|https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html].
>  Rather than including all possibilities as processor properties, it might be 
> more flexible to allow the user to specify dynamic properties on 
> ElasticsearchHttp processors, and then use those to set query parameters on 
> the API URL.
> Documentation should include a note that not all features are available to 
> all versions of Elasticsearch, and thus the ES documentation should be 
> consulted before adding dynamic properties. For example, pipelines were 
> introduced in ES 5.x, so using pipeline parameters in an ElasticsearchHttp 
> processor will not work if connecting to an ES 2.x cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4218) ElasticsearchHttp processors should support dynamic properties as query parameters

2017-08-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130970#comment-16130970
 ] 

ASF subversion and git services commented on NIFI-4218:
---

Commit 6b5015e39b4233cf230151fb45bebcb21df03730 in nifi's branch 
refs/heads/master from [~mattyb149]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=6b5015e ]

NIFI-4218: Dynamic properties as query parameters in ESHttp processors

This closes #2049.

Signed-off-by: Bryan Bende 


> ElasticsearchHttp processors should support dynamic properties as query 
> parameters
> --
>
> Key: NIFI-4218
> URL: https://issues.apache.org/jira/browse/NIFI-4218
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.4.0
>
>
> The Elasticsearch HTTP API has a number of fields that can be specified as 
> query parameters in the URL, such as support for 
> [pipelines|https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html].
>  Rather than including all possibilities as processor properties, it might be 
> more flexible to allow the user to specify dynamic properties on 
> ElasticsearchHttp processors, and then use those to set query parameters on 
> the API URL.
> Documentation should include a note that not all features are available to 
> all versions of Elasticsearch, and thus the ES documentation should be 
> consulted before adding dynamic properties. For example, pipelines were 
> introduced in ES 5.x, so using pipeline parameters in an ElasticsearchHttp 
> processor will not work if connecting to an ES 2.x cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2049: NIFI-4218: Dynamic properties as query parameters i...

2017-08-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2049


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4218) ElasticsearchHttp processors should support dynamic properties as query parameters

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130966#comment-16130966
 ] 

ASF GitHub Bot commented on NIFI-4218:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2049
  
+1 Looks good, tested passing params from dynamic properties and seems to 
be working well, will go ahead and merge to master, thanks!


> ElasticsearchHttp processors should support dynamic properties as query 
> parameters
> --
>
> Key: NIFI-4218
> URL: https://issues.apache.org/jira/browse/NIFI-4218
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
>
> The Elasticsearch HTTP API has a number of fields that can be specified as 
> query parameters in the URL, such as support for 
> [pipelines|https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html].
>  Rather than including all possibilities as processor properties, it might be 
> more flexible to allow the user to specify dynamic properties on 
> ElasticsearchHttp processors, and then use those to set query parameters on 
> the API URL.
> Documentation should include a note that not all features are available to 
> all versions of Elasticsearch, and thus the ES documentation should be 
> consulted before adding dynamic properties. For example, pipelines were 
> introduced in ES 5.x, so using pipeline parameters in an ElasticsearchHttp 
> processor will not work if connecting to an ES 2.x cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2049: NIFI-4218: Dynamic properties as query parameters in ESHtt...

2017-08-17 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2049
  
+1 Looks good, tested passing params from dynamic properties and seems to 
be working well, will go ahead and merge to master, thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4081) GrokReader - add the option to keep raw message in a dedicated field

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130935#comment-16130935
 ] 

ASF GitHub Bot commented on NIFI-4081:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1921
  
@markap14 done and updated a bit to match the recent refactoring.


> GrokReader - add the option to keep raw message in a dedicated field
> 
>
> Key: NIFI-4081
> URL: https://issues.apache.org/jira/browse/NIFI-4081
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> In some use cases, it can be useful to keep the raw message in the record. I 
> propose to add a parameter to the GrokReader allowing the user to store the 
> raw content of the record (stack trace included, if any) in the 
> {{rawMessage}} field.
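
A minimal sketch of the option's effect (hypothetical handling, not the 
merged GrokReader change): the unparsed line is kept in a {{rawMessage}} 
field alongside the grok-extracted fields.

```
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import org.apache.nifi.serialization.SimpleRecordSchema;
import org.apache.nifi.serialization.record.MapRecord;
import org.apache.nifi.serialization.record.Record;
import org.apache.nifi.serialization.record.RecordField;
import org.apache.nifi.serialization.record.RecordFieldType;
import org.apache.nifi.serialization.record.RecordSchema;

public class RawMessageSketch {

    static Record toRecord(RecordSchema schema, Map<String, Object> grokFields,
                           String rawLine, boolean keepRawMessage) {
        Map<String, Object> values = new HashMap<>(grokFields);
        if (keepRawMessage) {
            values.put("rawMessage", rawLine); // raw content, stack trace included
        }
        return new MapRecord(schema, values);
    }

    public static void main(String[] args) {
        RecordSchema schema = new SimpleRecordSchema(Arrays.asList(
                new RecordField("level", RecordFieldType.STRING.getDataType()),
                new RecordField("rawMessage", RecordFieldType.STRING.getDataType())));
        Map<String, Object> grokked = new HashMap<>();
        grokked.put("level", "ERROR");
        System.out.println(toRecord(schema, grokked, "ERROR something failed", true));
    }
}
```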



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1921: NIFI-4081 - Added raw message option in GrokReader

2017-08-17 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1921
  
@markap14 done and updated a bit to match the recent refactoring.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #2020: [NiFi-3973] Add PutKudu Processor for ingesting data to Ku...

2017-08-17 Thread rickysaltzer
Github user rickysaltzer commented on the issue:

https://github.com/apache/nifi/pull/2020
  
You could force push my commit to this PR and that'll be fine, too. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130856#comment-16130856
 ] 

ASF GitHub Bot commented on NIFI-4181:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2003
  
@Wesley-Lawrence sorry - I must have missed the email that you'd commented 
on the PR. Sorry I didn't notice until now. So I think that what you're 
proposing here is that we should have a property named "Schema Text Format" on 
the readers/writers, in addition to "Schema Text", and the Schema Text Format 
would tell the service how to parse the "Schema Text." For example, there would 
be two options: "Avro" and "String Column Names". In addition to this, we could 
potentially also introduce a new Schema Registry as I described above. This 
means that for one-off types of schemas, such as when we have a CSV file and we 
only need to use the schema once, we don't have to go to the trouble of 
creating a Schema Registry and adding entries to it - we'd just set the column 
names (optionally using Expression Language) in the Reader. And we would add 
this capability to JSON, etc. as well.

Does that all sound accurate? Just want to make sure that we are on the 
same page here.
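
A minimal sketch of the "String Column Names" idea (hypothetical, not a final 
API): derive a RecordSchema from a comma-separated column list, treating every 
column as a string field.

```
import java.util.ArrayList;
import java.util.List;
import org.apache.nifi.serialization.SimpleRecordSchema;
import org.apache.nifi.serialization.record.RecordField;
import org.apache.nifi.serialization.record.RecordFieldType;
import org.apache.nifi.serialization.record.RecordSchema;

public class ColumnListSchemaSketch {

    static RecordSchema schemaFromColumnNames(String columns) {
        List<RecordField> fields = new ArrayList<>();
        for (String name : columns.split(",")) {
            fields.add(new RecordField(name.trim(), RecordFieldType.STRING.getDataType()));
        }
        return new SimpleRecordSchema(fields);
    }

    public static void main(String[] args) {
        // One-off CSV schema: no registry entry needed, just the column names.
        System.out.println(schemaFromColumnNames("id, name, balance").getFieldNames());
    }
}
```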


> CSVReader and CSVRecordSetWriter services should be able to work given an 
> explicit list of columns.
> ---
>
> Key: NIFI-4181
> URL: https://issues.apache.org/jira/browse/NIFI-4181
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4181.patch
>
>
> Currently, to read or write a CSV file with *Record processors, the CSVReader 
> and CSVRecordSetWriters need to be given an Avro schema. For CSV, a simple 
> column definition can also work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2003: NIFI-4181 CSVReader and CSVRecordSetWriter can be used by ...

2017-08-17 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2003
  
@Wesley-Lawrence sorry - I must have missed the email that you'd commented 
on the PR. Sorry I didn't notice until now. So I think that what you're 
proposing here is that we should have a property named "Schema Text Format" on 
the readers/writers, in addition to "Schema Text", and the Schema Text Format 
would tell the service how to parse the "Schema Text." For example, there would 
be two options: "Avro" and "String Column Names". In addition to this, we could 
potentially also introduce a new Schema Registry as I described above. This 
means that for one-off types of schemas, such as when we have a CSV file and we 
only need to use the schema once, we don't have to go to the trouble of 
creating a Schema Registry and adding entries to it - we'd just set the column 
names (optionally using Expression Language) in the Reader. And we would add 
this capability to JSON, etc. as well.

Does that all sound accurate? Just want to make sure that we are on the 
same page here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-4304) Create build property for HWX schema registry client version

2017-08-17 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4304:
--
Status: Patch Available  (was: Open)

> Create build property for HWX schema registry client version
> 
>
> Key: NIFI-4304
> URL: https://issues.apache.org/jira/browse/NIFI-4304
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> Currently, the version of the HWX schema registry client is defined directly 
> in the pom of the service module:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hwx-schema-registry-bundle/nifi-hwx-schema-registry-service/pom.xml#L61
> We should define a property in the root pom for the version and reference it 
> there, as we do for many other versions; this would let someone easily 
> override it at build time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4304) Create build property for HWX schema registry client version

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130843#comment-16130843
 ] 

ASF GitHub Bot commented on NIFI-4304:
--

GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/2096

NIFI-4304 Extracting HWX Schema Registry client version to a property…

… and bumping to latest 0.3.0 release

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-4304

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2096.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2096


commit 147b5280fbd7f70886eecdd051d014d2099a7760
Author: Bryan Bende 
Date:   2017-08-17T17:08:33Z

NIFI-4304 Extracting HWX Schema Registry client version to a property and 
bumping to latest 0.3.0 release




> Create build property for HWX schema registry client version
> 
>
> Key: NIFI-4304
> URL: https://issues.apache.org/jira/browse/NIFI-4304
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> Currently, the version of the HWX schema registry client is defined directly 
> in the pom of the service module:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hwx-schema-registry-bundle/nifi-hwx-schema-registry-service/pom.xml#L61
> We should define a property in the root pom for the version and reference it 
> there, as we do for many other versions; this would let someone easily 
> override it at build time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2096: NIFI-4304 Extracting HWX Schema Registry client ver...

2017-08-17 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/2096

NIFI-4304 Extracting HWX Schema Registry client version to a property…

… and bumping to latest 0.3.0 release

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-4304

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2096.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2096


commit 147b5280fbd7f70886eecdd051d014d2099a7760
Author: Bryan Bende 
Date:   2017-08-17T17:08:33Z

NIFI-4304 Extracting HWX Schema Registry client version to a property and 
bumping to latest 0.3.0 release




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4081) GrokReader - add the option to keep raw message in a dedicated field

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130830#comment-16130830
 ] 

ASF GitHub Bot commented on NIFI-4081:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1921
  
@pvillard31 looks like this PR has some conflicts. Do you mind rebasing?


> GrokReader - add the option to keep raw message in a dedicated field
> 
>
> Key: NIFI-4081
> URL: https://issues.apache.org/jira/browse/NIFI-4081
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>
> In some use cases, it can be useful to keep the raw message in the record. I 
> propose to add a parameter to the GrokReader allowing the user to store the 
> raw content of the record (stack trace included, if any) in the 
> {{rawMessage}} field.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1921: NIFI-4081 - Added raw message option in GrokReader

2017-08-17 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1921
  
@pvillard31 looks like this PR has some conflicts. Do you mind rebasing?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-4290) PublishKafkaRecord_0_10: failed to process due to java.lang.NullPointerException

2017-08-17 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-4290:
-
Fix Version/s: 1.4.0

> PublishKafkaRecord_0_10: failed to process due to 
> java.lang.NullPointerException
> 
>
> Key: NIFI-4290
> URL: https://issues.apache.org/jira/browse/NIFI-4290
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.4.0
> Environment: NiFi 1.4 Master
> Confluent Kafka 3.2
>Reporter: Mayank Rathi
>Assignee: Mark Payne
> Fix For: 1.4.0
>
> Attachments: Kafka_Record_1.4_Bug.docx, Kafka_Record_1.4_Bug.xml, 
> nifi-app_latest.log, nifi-app.log
>
>
> Hello All,
> I am moving data to Kafka using NiFi's PublishKafkaRecord processor. I am 
> using the ConfluentSchemaRegistry controller service and getting the error below:
> 2017-08-11 20:54:25,937 ERROR [Timer-Driven Process Thread-4] 
> o.a.n.p.k.pubsub.PublishKafkaRecord_0_10 
> PublishKafkaRecord_0_10[id=b3c03961-015d-1000-0946-79ccbe2ffbbd] 
> PublishKafkaRecord_0_10[id=b3c03961-015d-1000-0946-79ccbe2ffbbd] failed to 
> process due to java.lang.NullPointerException; rolling back session: {}
> java.lang.NullPointerException: null
> I do not see any error on the Kafka side. 
> Attached are the logs after setting the processors to DEBUG mode. Here is the 
> flow:
> ExecuteSQL --> SplitAvro --> PublishKafkaRecord_0_10
> Thanks!!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4227) Create a ForkRecord processor

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130808#comment-16130808
 ] 

ASF GitHub Bot commented on NIFI-4227:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2037#discussion_r133768739
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ForkRecord/additionalDetails.html
 ---
@@ -0,0 +1,365 @@
+
+
+
+
+
+ForkRecord
+
+
+
+
+
+   
+   ForkRecord allows the user to fork a record into 
multiple records. To do that, the user must specify
+   a RecordPath pointing to a 
field of type 
+   ARRAY containing RECORD elements. The generated flow 
file will contain the records from the specified 
+   array. It is also possible to add in each record all 
the fields of the parent records from the root 
+   level to the record element being forked. However it 
supposes the fields to add are defined in the 
+   schema of the Record Writer controller service.
+   
+   
+   Examples
+   
+   
+   To better understand how this Processor works, we will 
lay out a few examples. For the sake of these examples, let's assume that our 
input
+   data is JSON formatted and looks like this:
+   
+
+
+
+[{
+   "id": 1,
+   "name": "John Doe",
+   "address": "123 My Street",
+   "city": "My City", 
+   "state": "MS",
+   "zipCode": "1",
+   "country": "USA",
+   "accounts": [{
+   "id": 42,
+   "balance": 4750.89
+   }, {
+   "id": 43,
+   "balance": 48212.38
+   }]
+}, 
+{
+   "id": 2,
+   "name": "Jane Doe",
+   "address": "345 My Street",
+   "city": "Her City", 
+   "state": "NY",
+   "zipCode": "2",
+   "country": "USA",
+   "accounts": [{
+   "id": 45,
+   "balance": 6578.45
+   }, {
+   "id": 46,
+   "balance": 34567.21
+   }]
+}]
+
+
+
+
+   Example 1 - Fork without parent fields
+   
+   
+   For this case, we want to create one record per 
account and we don't care about 
+   the other fields. We'll set the Record path property to 
/accounts. The resulting 
+   flow file will contain 4 records and will look like 
(assuming the Record Writer schema is 
+   correctly set):
+   
+
+
+
+[{
+   "id": 42,
+   "balance": 4750.89
+}, {
+   "id": 43,
+   "balance": 48212.38
+}, {
+   "id": 45,
+   "balance": 6578.45
+}, {
+   "id": 46,
+   "balance": 34567.21
+}]
+
+
+
+   
+   Example 2 - Fork with parent fields
+   
+   
+   Now, if we set the property "Include parent fields" to 
true, this will recursively include 
--- End diff --

In such a case, I would have actually expected the result to have an 
'accounts' field that is a 1-element array. 
If we wanted to promote that up to the top level, an UpdateRecord processor 
could be used. 
So in general, the way that I would expect the processor to work is to 
create, for each element in the array denoted by the RecordPath, a copy of the 
Record that has the exact same structure as the original, except that the 
array contains only that single element. For example, if the input looked like:

```
{
"id": 1,
"members": [{
"id": 42,
"name": "John Doe",
"accounts": [
{
   "id": 382,
   "name": "first account",
   "balance": 17.82
}, {
   "id": 482,
   "name": "other account",
   "balance": 182.34
}
]
}, {
"id": 43,
"name": "Jane Doe",
"accounts": [
{
   "id": 492,
   "name": "yet another account",
   "balance": 21.12
}, {
   "id": 513,
   "name": "final account",
   "balance": 142.22
}
 

[GitHub] nifi pull request #2037: NIFI-4227 - add a ForkRecord processor

2017-08-17 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2037#discussion_r133768739
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/resources/docs/org.apache.nifi.processors.standard.ForkRecord/additionalDetails.html
 ---
@@ -0,0 +1,365 @@
+
+
+
+
+
+ForkRecord
+
+
+
+
+
+   
+   ForkRecord allows the user to fork a record into 
multiple records. To do that, the user must specify
+   a RecordPath pointing to a 
field of type 
+   ARRAY containing RECORD elements. The generated flow 
file will contain the records from the specified 
+   array. It is also possible to include in each record all 
the fields of the parent records, from the root 
+   level down to the record element being forked. Note, however, that 
this assumes the fields to add are defined in the 
+   schema of the Record Writer controller service.
+   
+   
+   Examples
+   
+   
+   To better understand how this Processor works, we will 
lay out a few examples. For the sake of these examples, let's assume that our 
input
+   data is JSON formatted and looks like this:
+   
+
+
+
+[{
+   "id": 1,
+   "name": "John Doe",
+   "address": "123 My Street",
+   "city": "My City", 
+   "state": "MS",
+   "zipCode": "1",
+   "country": "USA",
+   "accounts": [{
+   "id": 42,
+   "balance": 4750.89
+   }, {
+   "id": 43,
+   "balance": 48212.38
+   }]
+}, 
+{
+   "id": 2,
+   "name": "Jane Doe",
+   "address": "345 My Street",
+   "city": "Her City", 
+   "state": "NY",
+   "zipCode": "2",
+   "country": "USA",
+   "accounts": [{
+   "id": 45,
+   "balance": 6578.45
+   }, {
+   "id": 46,
+   "balance": 34567.21
+   }]
+}]
+
+
+
+
+   Example 1 - Fork without parent fields
+   
+   
+   For this case, we want to create one record per 
account and we don't care about 
+   the other fields. We'll set the Record path property to 
/accounts. The resulting 
+   flow file will contain 4 records and will look like 
(assuming the Record Writer schema is 
+   correctly set):
+   
+
+
+
+[{
+   "id": 42,
+   "balance": 4750.89
+}, {
+   "id": 43,
+   "balance": 48212.38
+}, {
+   "id": 45,
+   "balance": 6578.45
+}, {
+   "id": 46,
+   "balance": 34567.21
+}]
+
+
+
+   
+   Example 2 - Fork with parent fields
+   
+   
+   Now, if we set the property "Include parent fields" to 
true, this will recursively include 
--- End diff --

In such a case, I would have actually expected the result to have an 
'accounts' field that is a 1-element array. 
If we wanted to promote that up to the top level, an UpdateRecord processor 
could be used. 
So in general, the way that I would expect the processor to work is to 
create, for each element in the array denoted by the RecordPath, a copy of the 
Record that has the exact same structure as the original, except that the 
array contains only that single element. For example, if the input looked like:

```
{
"id": 1,
"members": [{
"id": 42,
"name": "John Doe",
"accounts": [
{
   "id": 382,
   "name": "first account",
   "balance": 17.82
}, {
   "id": 482,
   "name": "other account",
   "balance": 182.34
}
]
}, {
"id": 43,
"name": "Jane Doe",
"accounts": [
{
   "id": 492,
   "name": "yet another account",
   "balance": 21.12
}, {
   "id": 513,
   "name": "final account",
   "balance": 142.22
}
]
}]
 }
```

Then, if we set the Record Path to `/members`, I would expect output of:

```
[{
"id": 1,
"members": [{
  "id": 42,
  "name": "John Doe",
  
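```

To make the expected semantics concrete, here is a minimal, self-contained 
sketch in plain Java. It models records as ordinary Maps rather than the NiFi 
Record API (an assumption made purely for illustration; `ForkSketch` and 
`fork` are hypothetical names): for each element of the targeted array, it 
emits a copy of the parent record whose array contains only that one element.

```
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ForkSketch {

    // For each element of the array stored under 'arrayField', produce a copy
    // of the parent record in which that array holds only the single element.
    static List<Map<String, Object>> fork(Map<String, Object> record, String arrayField) {
        List<Map<String, Object>> results = new ArrayList<>();
        @SuppressWarnings("unchecked")
        List<Object> elements = (List<Object>) record.get(arrayField);
        for (Object element : elements) {
            Map<String, Object> copy = new LinkedHashMap<>(record); // parent fields kept
            copy.put(arrayField, List.of(element));                 // 1-element array
            results.add(copy);
        }
        return results;
    }

    public static void main(String[] args) {
        Map<String, Object> group = new LinkedHashMap<>();
        group.put("id", 1);
        group.put("members", List.of(
                Map.of("id", 42, "name", "John Doe"),
                Map.of("id", 43, "name", "Jane Doe")));
        // Prints two copies of the group record, each with a 1-element 'members' array.
        fork(group, "members").forEach(System.out::println);
    }
}
```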

[jira] [Commented] (NIFI-4224) Add Variable Registry at Process Group level

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130804#comment-16130804
 ] 

ASF GitHub Bot commented on NIFI-4224:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2051


> Add Variable Registry at Process Group level
> 
>
> Key: NIFI-4224
> URL: https://issues.apache.org/jira/browse/NIFI-4224
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> Currently, NiFi exposes a variable registry that is configurable by adding 
> the name of a properties file to nifi.properties and then treating the 
> referenced properties file as key/value pairs for the variable registry. 
> This, however, is very limiting, as it provides a global scope for all 
> variables, and it requires a restart of NiFi in order to pick up any updates 
> to the file. We should expose a Process Group-level Variable Registry.
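
A group-scoped registry implies hierarchical resolution: a component consults 
its own group's variables first, then each ancestor group, with the file-based 
registry as the final, global fallback. Below is a minimal sketch of that 
lookup order; it is a model of the expected behavior, not NiFi's actual 
implementation, and all class and variable names here are illustrative.

```
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// A scope consults its own variables first, then delegates up the chain;
// the root of the chain stands in for the global, file-based registry.
class VariableScope {
    private final VariableScope parent; // null for the global scope
    private final Map<String, String> variables = new HashMap<>();

    VariableScope(VariableScope parent) { this.parent = parent; }

    void set(String name, String value) { variables.put(name, value); }

    Optional<String> resolve(String name) {
        String value = variables.get(name);
        if (value != null) return Optional.of(value);
        return parent == null ? Optional.empty() : parent.resolve(name);
    }
}

class VariableScopeDemo {
    public static void main(String[] args) {
        VariableScope global = new VariableScope(null);
        global.set("kafka.broker", "broker-a:9092");
        VariableScope group = new VariableScope(global);
        group.set("kafka.broker", "broker-b:9092"); // group-level override
        System.out.println(group.resolve("kafka.broker").orElse("?"));  // broker-b:9092
        System.out.println(global.resolve("kafka.broker").orElse("?")); // broker-a:9092
    }
}
```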



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4295) When selecting a controller service for a processor, services that belong to the wrong scope are shown

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130805#comment-16130805
 ] 

ASF GitHub Bot commented on NIFI-4295:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2087


> When selecting a controller service for a processor, services that belong to 
> the wrong scope are shown
> --
>
> Key: NIFI-4295
> URL: https://issues.apache.org/jira/browse/NIFI-4295
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have a Process Group, and in that Process Group I have defined a Controller 
> Service. If I then move up to the root group and add a Processor that takes 
> a Controller Service of that type, I can now choose the Controller Service 
> that is defined in a child group. When I do so, my processor becomes invalid 
> and the reason given for the processor to be invalid is a NullPointer:
> {code}
> 2017-08-14 12:38:24,016 WARN [NiFi Web Server-76] 
> o.a.n.controller.StandardProcessorNode Failed during validation
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.processor.StandardValidationContext.isValidationRequired(StandardValidationContext.java:143)
>   at 
> org.apache.nifi.components.PropertyDescriptor.validate(PropertyDescriptor.java:150)
>   at 
> org.apache.nifi.components.AbstractConfigurableComponent.validate(AbstractConfigurableComponent.java:103)
>   at 
> org.apache.nifi.controller.AbstractConfiguredComponent.validate(AbstractConfiguredComponent.java:329)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.isValid(StandardProcessorNode.java:968)
>   at 
> org.apache.nifi.controller.FlowController.getProcessorStatus(FlowController.java:2964)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2559)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2518)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2485)
>   at 
> org.apache.nifi.web.controller.ControllerFacade.getProcessGroupStatus(ControllerFacade.java:599)
>   at 
> org.apache.nifi.web.controller.ControllerFacade$$FastClassBySpringCGLIB$$5a42ba54.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
>   at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
>   at 
> org.apache.nifi.web.controller.ControllerFacade$$EnhancerBySpringCGLIB$$344e3ba2.getProcessGroupStatus()
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade.getProcessGroupFlow(StandardNiFiServiceFacade.java:3054)
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithReadLock(NiFiServiceFacadeLock.java:137)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.getLock(NiFiServiceFacadeLock.java:108)
>   at sun.reflect.GeneratedMethodAccessor168.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621)
>   at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610)
>   at 
> org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:68)
>   at 
> 

[jira] [Updated] (NIFI-4295) When selecting a controller service for a processor, services that belong to the wrong scope are shown

2017-08-17 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-4295:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> When selecting a controller service for a processor, services that belong to 
> the wrong scope are shown
> --
>
> Key: NIFI-4295
> URL: https://issues.apache.org/jira/browse/NIFI-4295
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have a Process Group, and in that Process Group I have defined a Controller 
> Service. If I then move up to the root group and add a Processor that takes 
> a Controller Service of that type, I can now choose the Controller Service 
> that is defined in a child group. When I do so, my processor becomes invalid 
> and the reason given for the processor to be invalid is a NullPointer:
> {code}
> 2017-08-14 12:38:24,016 WARN [NiFi Web Server-76] 
> o.a.n.controller.StandardProcessorNode Failed during validation
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.processor.StandardValidationContext.isValidationRequired(StandardValidationContext.java:143)
>   at 
> org.apache.nifi.components.PropertyDescriptor.validate(PropertyDescriptor.java:150)
>   at 
> org.apache.nifi.components.AbstractConfigurableComponent.validate(AbstractConfigurableComponent.java:103)
>   at 
> org.apache.nifi.controller.AbstractConfiguredComponent.validate(AbstractConfiguredComponent.java:329)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.isValid(StandardProcessorNode.java:968)
>   at 
> org.apache.nifi.controller.FlowController.getProcessorStatus(FlowController.java:2964)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2559)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2518)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2485)
>   at 
> org.apache.nifi.web.controller.ControllerFacade.getProcessGroupStatus(ControllerFacade.java:599)
>   at 
> org.apache.nifi.web.controller.ControllerFacade$$FastClassBySpringCGLIB$$5a42ba54.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
>   at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
>   at 
> org.apache.nifi.web.controller.ControllerFacade$$EnhancerBySpringCGLIB$$344e3ba2.getProcessGroupStatus()
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade.getProcessGroupFlow(StandardNiFiServiceFacade.java:3054)
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithReadLock(NiFiServiceFacadeLock.java:137)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.getLock(NiFiServiceFacadeLock.java:108)
>   at sun.reflect.GeneratedMethodAccessor168.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621)
>   at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610)
>   at 
> org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:68)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
>   at 
> 

[jira] [Commented] (NIFI-4295) When selecting a controller service for a processor, services that belong to the wrong scope are shown

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130806#comment-16130806
 ] 

ASF GitHub Bot commented on NIFI-4295:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2087
  
Thanks @markap14! This has been merged to master.


> When selecting a controller service for a processor, services that belong to 
> the wrong scope are shown
> --
>
> Key: NIFI-4295
> URL: https://issues.apache.org/jira/browse/NIFI-4295
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have a Process Group, and in that Process Group I have defined a Controller 
> Service. If I then move up to the root group and add a Processor that takes 
> a Controller Service of that type, I can now choose the Controller Service 
> that is defined in a child group. When I do so, my processor becomes invalid 
> and the reason given for the processor to be invalid is a NullPointer:
> {code}
> 2017-08-14 12:38:24,016 WARN [NiFi Web Server-76] 
> o.a.n.controller.StandardProcessorNode Failed during validation
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.processor.StandardValidationContext.isValidationRequired(StandardValidationContext.java:143)
>   at 
> org.apache.nifi.components.PropertyDescriptor.validate(PropertyDescriptor.java:150)
>   at 
> org.apache.nifi.components.AbstractConfigurableComponent.validate(AbstractConfigurableComponent.java:103)
>   at 
> org.apache.nifi.controller.AbstractConfiguredComponent.validate(AbstractConfiguredComponent.java:329)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.isValid(StandardProcessorNode.java:968)
>   at 
> org.apache.nifi.controller.FlowController.getProcessorStatus(FlowController.java:2964)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2559)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2518)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2485)
>   at 
> org.apache.nifi.web.controller.ControllerFacade.getProcessGroupStatus(ControllerFacade.java:599)
>   at 
> org.apache.nifi.web.controller.ControllerFacade$$FastClassBySpringCGLIB$$5a42ba54.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
>   at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
>   at 
> org.apache.nifi.web.controller.ControllerFacade$$EnhancerBySpringCGLIB$$344e3ba2.getProcessGroupStatus()
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade.getProcessGroupFlow(StandardNiFiServiceFacade.java:3054)
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithReadLock(NiFiServiceFacadeLock.java:137)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.getLock(NiFiServiceFacadeLock.java:108)
>   at sun.reflect.GeneratedMethodAccessor168.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621)
>   at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610)
>   at 
> org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:68)
>   at 
> 

[jira] [Updated] (NIFI-4304) Create build property for HWX schema registry client version

2017-08-17 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-4304:
--
Fix Version/s: (was: 1.4.0)

> Create build property for HWX schema registry client version
> 
>
> Key: NIFI-4304
> URL: https://issues.apache.org/jira/browse/NIFI-4304
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> Currently the version of the HWX schema registry client is defined directly 
> in the pom of the service module:
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hwx-schema-registry-bundle/nifi-hwx-schema-registry-service/pom.xml#L61
> We should define a property in the root pom for the version and reference it 
> there, like we do for many other versions; this would let someone easily 
> override it at build time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4304) Create build property for HWX schema registry client version

2017-08-17 Thread Bryan Bende (JIRA)
Bryan Bende created NIFI-4304:
-

 Summary: Create build property for HWX schema registry client 
version
 Key: NIFI-4304
 URL: https://issues.apache.org/jira/browse/NIFI-4304
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Bryan Bende
Assignee: Bryan Bende
Priority: Minor
 Fix For: 1.4.0


Currently the version of the HWX schema registry client is defined directly in 
the pom of the service module:

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hwx-schema-registry-bundle/nifi-hwx-schema-registry-service/pom.xml#L61

We should define a property in the root pom for the version and reference it 
there, like we do for many other versions; this would let someone easily 
override it at build time. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4295) When selecting a controller service for a processor, services that belong to the wrong scope are shown

2017-08-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130803#comment-16130803
 ] 

ASF subversion and git services commented on NIFI-4295:
---

Commit 69a08e78c2cd47661e3d775ceece94ae82ac567e in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=69a08e7 ]

NIFI-4295:
- When determining which controller services to return for a component, ensure 
that we don't show services that belong to 'child groups'
- Fixed a logic bug that determined which process group to use for obtaining 
controller services
- This closes #2087
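
The scoping rule this fix enforces can be stated simply: a component may 
reference controller services defined in its own process group or in any 
ancestor group, never in a child group. Below is a small sketch of that 
visibility walk; it is a simplified model for illustration only, not the 
FlowController code, and the names are hypothetical.

```
import java.util.ArrayList;
import java.util.List;

// Groups form a tree; a component placed in a group sees the services defined
// in that group and in every ancestor, but never those of child groups.
class Group {
    final Group parent;                              // null for the root group
    final List<String> services = new ArrayList<>(); // services defined at this level

    Group(Group parent) { this.parent = parent; }

    List<String> visibleServices() {
        List<String> visible = new ArrayList<>();
        for (Group g = this; g != null; g = g.parent) {
            visible.addAll(g.services);
        }
        return visible;
    }
}

class ScopeDemo {
    public static void main(String[] args) {
        Group root = new Group(null);
        Group child = new Group(root);
        root.services.add("RootLevelService");
        child.services.add("ChildLevelService");
        System.out.println(root.visibleServices());  // [RootLevelService] only
        System.out.println(child.visibleServices()); // child's service plus the root's
    }
}
```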


> When selecting a controller service for a processor, services that belong to 
> the wrong scope are shown
> --
>
> Key: NIFI-4295
> URL: https://issues.apache.org/jira/browse/NIFI-4295
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have a Process Group, and in that Process Group I have defined a Controller 
> Service. If I then move up to the root group and add a Processor that takes 
> a Controller Service of that type, I can now choose the Controller Service 
> that is defined in a child group. When I do so, my processor becomes invalid 
> and the reason given for the processor to be invalid is a NullPointer:
> {code}
> 2017-08-14 12:38:24,016 WARN [NiFi Web Server-76] 
> o.a.n.controller.StandardProcessorNode Failed during validation
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.processor.StandardValidationContext.isValidationRequired(StandardValidationContext.java:143)
>   at 
> org.apache.nifi.components.PropertyDescriptor.validate(PropertyDescriptor.java:150)
>   at 
> org.apache.nifi.components.AbstractConfigurableComponent.validate(AbstractConfigurableComponent.java:103)
>   at 
> org.apache.nifi.controller.AbstractConfiguredComponent.validate(AbstractConfiguredComponent.java:329)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.isValid(StandardProcessorNode.java:968)
>   at 
> org.apache.nifi.controller.FlowController.getProcessorStatus(FlowController.java:2964)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2559)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2518)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2485)
>   at 
> org.apache.nifi.web.controller.ControllerFacade.getProcessGroupStatus(ControllerFacade.java:599)
>   at 
> org.apache.nifi.web.controller.ControllerFacade$$FastClassBySpringCGLIB$$5a42ba54.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
>   at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
>   at 
> org.apache.nifi.web.controller.ControllerFacade$$EnhancerBySpringCGLIB$$344e3ba2.getProcessGroupStatus()
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade.getProcessGroupFlow(StandardNiFiServiceFacade.java:3054)
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithReadLock(NiFiServiceFacadeLock.java:137)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.getLock(NiFiServiceFacadeLock.java:108)
>   at sun.reflect.GeneratedMethodAccessor168.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> 

[jira] [Commented] (NIFI-3612) Add support for Parquet to Nifi-Registry-Bundle

2017-08-17 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130793#comment-16130793
 ] 

Bryan Bende commented on NIFI-3612:
---

Now that we have added PutParquet and FetchParquet, which both support writing 
and reading records using the RecordWriter and RecordReader, can this JIRA be 
closed?

> Add support for Parquet to Nifi-Registry-Bundle
> ---
>
> Key: NIFI-3612
> URL: https://issues.apache.org/jira/browse/NIFI-3612
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Daniel Chaffelson
>Assignee: Oleg Zhurakousky
>
> This bundle could potentially be extended to include a Parquet transform by 
> leveraging the Apache 2.0 licensed parquet-mr/avro libraries:
> https://github.com/apache/parquet-mr/tree/master/parquet-avro
> This would provide coverage of this popular format to complement the ORC 
> support in the Hive Bundle and the other schema-dependent formats already in 
> this bundle.
> Existing NiFi Parquet support in the kite bundle can only write to a 
> non-kerberised Kite Dataset, which prevents usage in secured environments or 
> writing to a FlowFile.
> As the main competitor to ORC, providing more generic Parquet Transform 
> support will greatly widen the pool of potential NiFi adopters, particularly 
> in the Spark community.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4227) Create a ForkRecord processor

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130734#comment-16130734
 ] 

ASF GitHub Bot commented on NIFI-4227:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2037#discussion_r133756360
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ForkRecord.java
 ---
@@ -0,0 +1,293 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NoSuchElementException;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.DataType;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.type.ArrayDataType;
+
+@SideEffectFree
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"fork", "record", "content", "array", "stream", "event"})
+@CapabilityDescription("This processor allows the user to fork a record 
into multiple records. The user must specify a RecordPath pointing "
++ "to a field of type ARRAY containing RECORD elements. The 
generated flow file will contain the records from the specified array. "
++ "It is also possible to add in each record all the fields of the 
parent records from the root level to the record element being "
++ "forked. However it supposes the fields to add are defined in 
the schema of the Record Writer controller service. See examples in "
++ "the additional details documentation of the processor.")
+@WritesAttributes({
+@WritesAttribute(attribute = "record.count", description = "The merged 
FlowFile will have a 'record.count' attribute indicating the number of records "
++ "that were written to the FlowFile."),
+ 

[jira] [Commented] (NIFI-4227) Create a ForkRecord processor

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130722#comment-16130722
 ] 

ASF GitHub Bot commented on NIFI-4227:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2037#discussion_r133754979
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ForkRecord.java
 ---
@@ -0,0 +1,293 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NoSuchElementException;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.DataType;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.type.ArrayDataType;
+
+@SideEffectFree
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"fork", "record", "content", "array", "stream", "event"})
+@CapabilityDescription("This processor allows the user to fork a record 
into multiple records. The user must specify a RecordPath pointing "
++ "to a field of type ARRAY containing RECORD elements. The 
generated flow file will contain the records from the specified array. "
++ "It is also possible to add in each record all the fields of the 
parent records from the root level to the record element being "
++ "forked. However it supposes the fields to add are defined in 
the schema of the Record Writer controller service. See examples in "
++ "the additional details documentation of the processor.")
+@WritesAttributes({
+@WritesAttribute(attribute = "record.count", description = "The merged 
FlowFile will have a 'record.count' attribute indicating the number of records "
++ "that were written to the FlowFile."),
+   

[jira] [Commented] (NIFI-4227) Create a ForkRecord processor

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130721#comment-16130721
 ] 

ASF GitHub Bot commented on NIFI-4227:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2037#discussion_r133754811
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ForkRecord.java
 ---
@@ -0,0 +1,293 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NoSuchElementException;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.DataType;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.type.ArrayDataType;
+
+@SideEffectFree
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"fork", "record", "content", "array", "stream", "event"})
+@CapabilityDescription("This processor allows the user to fork a record 
into multiple records. The user must specify a RecordPath pointing "
++ "to a field of type ARRAY containing RECORD elements. The 
generated flow file will contain the records from the specified array. "
++ "It is also possible to add in each record all the fields of the 
parent records from the root level to the record element being "
++ "forked. However it supposes the fields to add are defined in 
the schema of the Record Writer controller service. See examples in "
++ "the additional details documentation of the processor.")
+@WritesAttributes({
+@WritesAttribute(attribute = "record.count", description = "The merged 
FlowFile will have a 'record.count' attribute indicating the number of records "
++ "that were written to the FlowFile."),
+   

[GitHub] nifi pull request #2037: NIFI-4227 - add a ForkRecord processor

2017-08-17 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2037#discussion_r133754811
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ForkRecord.java
 ---
@@ -0,0 +1,293 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NoSuchElementException;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.DataType;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.type.ArrayDataType;
+
+@SideEffectFree
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"fork", "record", "content", "array", "stream", "event"})
+@CapabilityDescription("This processor allows the user to fork a record 
into multiple records. The user must specify a RecordPath pointing "
++ "to a field of type ARRAY containing RECORD elements. The 
generated flow file will contain the records from the specified array. "
++ "It is also possible to add in each record all the fields of the 
parent records from the root level to the record element being "
++ "forked. However it supposes the fields to add are defined in 
the schema of the Record Writer controller service. See examples in "
++ "the additional details documentation of the processor.")
+@WritesAttributes({
+@WritesAttribute(attribute = "record.count", description = "The merged 
FlowFile will have a 'record.count' attribute indicating the number of records "
++ "that were written to the FlowFile."),
+@WritesAttribute(attribute = "mime.type", description = "The MIME Type 
indicated by the Record Writer"),
+@WritesAttribute(attribute = "", 
description = "Any Attribute that the configured Record Writer returns will be 
added 

[jira] [Commented] (NIFI-4004) Refactor RecordReaderFactory and SchemaAccessStrategy to be used without incoming FlowFile

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130698#comment-16130698
 ] 

ASF GitHub Bot commented on NIFI-4004:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1877
  
@ijokarumawak it does look good now. Unfortunately, though, it needs to be 
rebased. This is a rather large PR that touches a lot of different points, and 
several new processors, controller services, etc. have been merged to master 
since the PR was created. Instead of changing RecordReaderFactory to take 
`Map<String, String> variables` instead of `FlowFile flowFile`, I wonder if it 
makes sense to add both methods to the interface, and have a default method in 
the interface, such as:

```
default RecordReader createRecordReader(FlowFile flowFile, InputStream in, ComponentLog logger)
        throws MalformedRecordException, IOException, SchemaNotFoundException {
    return createRecordReader(flowFile == null ? Collections.emptyMap() : flowFile.getAttributes(), in, logger);
}

RecordReader createRecordReader(Map<String, String> variables, InputStream in, ComponentLog logger)
        throws MalformedRecordException, IOException, SchemaNotFoundException;

```

Thoughts?
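
For illustration, a source-style processor with no incoming FlowFile could then call the map-based variant directly, along these lines (a hypothetical caller sketch, not code from this PR; `RECORD_READER` and `messageBytes` are placeholders):

```
// Hypothetical caller: a consume-style processor with no incoming FlowFile
// passes an empty variables map instead of fabricating a temporary FlowFile.
final RecordReaderFactory readerFactory = context.getProperty(RECORD_READER)
        .asControllerService(RecordReaderFactory.class);
try (final InputStream in = new ByteArrayInputStream(messageBytes);
        final RecordReader reader = readerFactory.createRecordReader(Collections.emptyMap(), in, getLogger())) {
    Record record;
    while ((record = reader.nextRecord()) != null) {
        // hand each record to the configured RecordSetWriter, etc.
    }
}
```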


> Refactor RecordReaderFactory and SchemaAccessStrategy to be used without 
> incoming FlowFile
> --
>
> Key: NIFI-4004
> URL: https://issues.apache.org/jira/browse/NIFI-4004
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.2.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> The current RecordReaderFactory and SchemaAccessStrategy implementations assume 
> there's always an incoming FlowFile available, and use it to resolve the Record 
> Schema.
> That is fine for components that convert or update incoming FlowFiles; however, 
> there are other components that do not have any incoming FlowFiles, for example 
> ConsumeKafkaRecord_0_10. Typically, components that fetch data from an external 
> system do not have an incoming FlowFile, and the current API doesn't fit well 
> with these as it requires a FlowFile.
> In fact, [ConsumeKafkaRecord creates a temporary 
> FlowFile|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-0-10-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/ConsumerLease.java#L426]
>  only to get the RecordSchema. This should be avoided as we expect more 
> components to start using the record reader mechanism.
> This JIRA proposes refactoring the current API to allow accessing RecordReaders 
> without needing an incoming FlowFile.
> Additionally, since there's a Schema Access Strategy that requires an incoming 
> FlowFile containing attribute values to access the schema registry, it'd be 
> useful if we could tell the user, when such a RecordReader is specified, that it 
> can't be used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1877: NIFI-4004: Use RecordReaderFactory without FlowFile.

2017-08-17 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1877
  
@ijokarumawak it does look good now. Unfortunately, though, it needs to be 
rebased. This is a rather large PR that touches a lot of different points, and 
several new processors, controller services, etc. have been merged to master 
since the PR was created. Instead of changing RecordReaderFactory to take 
`Map<String, String> variables` instead of `FlowFile flowFile`, I wonder if it 
makes sense to add both methods to the interface, and have a default method in 
the interface, such as:

```
default RecordReader createRecordReader(FlowFile flowFile, InputStream in, ComponentLog logger)
        throws MalformedRecordException, IOException, SchemaNotFoundException {
    return createRecordReader(flowFile == null ? Collections.emptyMap() : flowFile.getAttributes(), in, logger);
}

RecordReader createRecordReader(Map<String, String> variables, InputStream in, ComponentLog logger)
        throws MalformedRecordException, IOException, SchemaNotFoundException;

```

Thoughts?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-4301) ExecuteScript Processor executing Python Script fails at os.getpid()

2017-08-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4301:
-
   Priority: Major  (was: Blocker)
Component/s: (was: Core Framework)
 Extensions

Updating priority as this is clearly not a blocker.
PR submitted.

> ExecuteScript Processor executing Python Script fails at os.getpid()
> 
>
> Key: NIFI-4301
> URL: https://issues.apache.org/jira/browse/NIFI-4301
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Will Lieu
>Assignee: Pierre Villard
>
> Currently NiFi version 1.3.0 uses jython-shaded 2.7.0, which contains a bug where 
> the os.getpid() method is not implemented. Is there any way you guys can 
> rev this jar to 2.7.1? 
> See: [Jython Issue 2405|http://bugs.jython.org/issue2405]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4301) ExecuteScript Processor executing Python Script fails at os.getpid()

2017-08-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4301:
-
 Assignee: Pierre Villard
Fix Version/s: (was: 1.4.0)
   (was: 1.3.0)
   Status: Patch Available  (was: Open)

> ExecuteScript Processor executing Python Script fails at os.getpid()
> 
>
> Key: NIFI-4301
> URL: https://issues.apache.org/jira/browse/NIFI-4301
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Will Lieu
>Assignee: Pierre Villard
>Priority: Blocker
>
> Currently NiFi version 1.3.0 uses jython-shaded 2.7.0, which contains a bug where 
> the os.getpid() method is not implemented. Is there any way you guys can 
> rev this jar to 2.7.1? 
> See: [Jython Issue 2405|http://bugs.jython.org/issue2405]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2095: NIFI-4301 - bumped jython-shaded version to 2.7.1

2017-08-17 Thread pvillard31
GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2095

NIFI-4301 - bumped jython-shaded version to 2.7.1

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-4301

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2095.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2095


commit 554322f334d65df4c367375510a2d23f5c5598ff
Author: Pierre Villard 
Date:   2017-08-17T15:11:45Z

NIFI-4301 - bumped jython-shaded version to 2.7.1




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4301) ExecuteScript Processor executing Python Script fails at os.getpid()

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130649#comment-16130649
 ] 

ASF GitHub Bot commented on NIFI-4301:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2095

NIFI-4301 - bumped jython-shaded version to 2.7.1

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-4301

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2095.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2095


commit 554322f334d65df4c367375510a2d23f5c5598ff
Author: Pierre Villard 
Date:   2017-08-17T15:11:45Z

NIFI-4301 - bumped jython-shaded version to 2.7.1




> ExecuteScript Processor executing Python Script fails at os.getpid()
> 
>
> Key: NIFI-4301
> URL: https://issues.apache.org/jira/browse/NIFI-4301
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Will Lieu
>Priority: Blocker
> Fix For: 1.3.0, 1.4.0
>
>
> Currently NiFi version 1.3.0 uses jython-shaded 2.7.0, which contains a bug where 
> the os.getpid() method is not implemented. Is there any way you guys can 
> rev this jar to 2.7.1? 
> See: [Jython Issue 2405|http://bugs.jython.org/issue2405]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-3866) Create PartitionRecord processor to complement UpdateRecord and RecordPath

2017-08-17 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne resolved NIFI-3866.
--
Resolution: Duplicate

> Create PartitionRecord processor to complement UpdateRecord and RecordPath
> --
>
> Key: NIFI-3866
> URL: https://issues.apache.org/jira/browse/NIFI-3866
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Witt
>
> In https://issues.apache.org/jira/browse/NIFI-3838 an UpdateRecord and 
> RecordPath construct is introduced.  We should also build a PartitionRecord 
> processor which will allow format/schema aware partitioning of record streams 
> along identified matching column values.  The matching value should show up 
> as a FlowFile attribute on a given bundle of records such that it can be used 
> for merging and other cases later.  That also highlights the need for a 
> MergeRecord processor that is format/schema aware.  We can do merge later on if 
> appropriate.  This should all work as well with a LookupRecord processor 
> which can interact with a controller service to look up a value found in a 
> record using RecordPath; the value can then be turned into a flowfile 
> attribute, used for routing, or, using RecordPath, placed back into the record 
> to replace an existing value or make a new one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-385) Add Kerberos support in nifi-kite-nar

2017-08-17 Thread Didier Petiron (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130506#comment-16130506
 ] 

Didier Petiron commented on NIFI-385:
-

Is there any chance of getting this JIRA implemented in a future release?

> Add Kerberos support in nifi-kite-nar
> -
>
> Key: NIFI-385
> URL: https://issues.apache.org/jira/browse/NIFI-385
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Ryan Blue
>
> Kite should be able to connect to a Kerberized Hadoop cluster to store data. 
> Kite's Flume connector has working code. The Kite dataset needs to be 
> instantiated in a {{doPrivileged}} block and its internal {{FileSystem}} 
> object will hold the credentials after that.
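
For reference, the privileged instantiation described above might look roughly like this (a sketch assuming Hadoop's UserGroupInformation and Kite's Datasets API; principal, keytabPath, and datasetUri are placeholders):

{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.security.UserGroupInformation;
import org.kitesdk.data.Dataset;
import org.kitesdk.data.Datasets;

public class KerberizedKiteSketch {
    public static Dataset<GenericRecord> loadDataset(final String principal,
            final String keytabPath, final String datasetUri) throws Exception {
        // Log in from the keytab, then load the dataset inside the resulting
        // security context so its internal FileSystem holds the credentials.
        final UserGroupInformation ugi =
                UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytabPath);
        return ugi.doAs((PrivilegedExceptionAction<Dataset<GenericRecord>>) () ->
                Datasets.load(datasetUri, GenericRecord.class));
    }
}
{code}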



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-3074) Upgrade logback to 1.1.7

2017-08-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-3074.
--
   Resolution: Duplicate
Fix Version/s: 1.2.0

> Upgrade logback to 1.1.7
> 
>
> Key: NIFI-3074
> URL: https://issues.apache.org/jira/browse/NIFI-3074
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.0.0, 0.7.1
>Reporter: Aldrin Piri
>Priority: Minor
> Fix For: 1.2.0
>
>
> [~JDP10101] pointed out in 
> https://github.com/apache/nifi-minifi/pull/56#issuecomment-261640697 that 
> there seem to be some inconsistencies in how logback handles its 
> configuration.  
> As highlighted in the linked comment, the functionality needed for what I 
> believe is the intended configuration is only available in 1.1.7.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-4224) Add Variable Registry at Process Group level

2017-08-17 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-4224.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

> Add Variable Registry at Process Group level
> 
>
> Key: NIFI-4224
> URL: https://issues.apache.org/jira/browse/NIFI-4224
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> Currently, NiFi exposes a variable registry that is configurable by adding 
> the name of a properties file to nifi.properties and then treating the 
> referenced properties file as key/value pairs for the variable registry. 
> This, however, is very limiting, as it provides a global scope for all 
> variables, and it requires a restart of NiFi in order to pick up any updates 
> to the file. We should expose a Process Group-level Variable Registry.
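
For reference, the file-based registry mentioned above is wired in through an entry of this shape in nifi.properties (the file name is illustrative):

{noformat}
nifi.variable.registry.properties=./conf/variable-registry.properties
{noformat}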



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4224) Add Variable Registry at Process Group level

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130503#comment-16130503
 ] 

ASF GitHub Bot commented on NIFI-4224:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2051
  
Thanks @markap14! This has been merged to master.


> Add Variable Registry at Process Group level
> 
>
> Key: NIFI-4224
> URL: https://issues.apache.org/jira/browse/NIFI-4224
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> Currently, NiFi exposes a variable registry that is configurable by adding 
> the name of a properties file to nifi.properties and then treating the 
> referenced properties file as key/value pairs for the variable registry. 
> This, however, is very limiting, as it provides a global scope for all 
> variables, and it requires a restart of NiFi in order to pick up any updates 
> to the file. We should expose a Process Group-level Variable Registry.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2051: NIFI-4224: Initial implementation of Process Group level V...

2017-08-17 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2051
  
Thanks @markap14! This has been merged to master.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (NIFI-4303) RabbitMQConsumer: Publish the message routing key as an attribute to flowfile

2017-08-17 Thread Mehdi Avdi (JIRA)
Mehdi Avdi created NIFI-4303:


 Summary: RabbitMQConsumer: Publish the message routing key as an 
attribute to flowfile
 Key: NIFI-4303
 URL: https://issues.apache.org/jira/browse/NIFI-4303
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mehdi Avdi
 Attachments: Screen Shot 2017-08-17 at 15.35.57.png

When retrieving messages from a queue that is bound to a topic exchange, 
messages come with a routing key that might contain crucial identifying 
information about where the message came from.

The current implementation of the RabbitMQConsumer processor doesn't publish this 
property of the message to the attributes of the flowfile it creates. Can this 
be added?

I found the relevant code here: 
https://github.com/apache/nifi/blob/d838f61291d2582592754a37314911b701c6891b/nifi-nar-bundles/nifi-amqp-bundle/nifi-amqp-processors/src/main/java/org/apache/nifi/amqp/processors/AMQPUtils.java#L95

I have very sketchy Java skills, otherwise I would submit a PR.
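
For anyone picking this up, the change might be as small as copying the envelope's routing key into the FlowFile attribute map. A rough sketch (the attribute name amqp$routingKey follows the amqp$ prefix used for other message properties, but the names here are illustrative, not the actual AMQPUtils code):

{code:java}
import java.util.HashMap;
import java.util.Map;

import com.rabbitmq.client.Envelope;

public final class RoutingKeyAttributeSketch {

    // Illustrative attribute name, following the amqp$ prefix convention.
    static final String ROUTING_KEY_ATTRIBUTE = "amqp$routingKey";

    // Build FlowFile attributes from the delivery envelope of a consumed message.
    public static Map<String, String> fromEnvelope(final Envelope envelope) {
        final Map<String, String> attributes = new HashMap<>();
        if (envelope != null && envelope.getRoutingKey() != null) {
            attributes.put(ROUTING_KEY_ATTRIBUTE, envelope.getRoutingKey());
        }
        return attributes;
    }
}
{code}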



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-2916) processor specific kerberos credentials for publishKafka and consumeKafka

2017-08-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-2916.
--
   Resolution: Fixed
 Assignee: Pierre Villard
Fix Version/s: 1.2.0

> processor specific kerberos credentials for publishKafka and consumeKafka
> -
>
> Key: NIFI-2916
> URL: https://issues.apache.org/jira/browse/NIFI-2916
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: William S Cochran
>Assignee: Pierre Villard
> Fix For: 1.2.0
>
>
> Please add the capability and/or documentation to configure 
> processor-specific Kerberos credentials for publishKafka and consumeKafka. 
> Currently the only method for Kerberos authentication appears to be via 
> java.arg options in bootstrap.conf:
> java.arg.15=-Djava.security.krb5.conf=/etc/krb5.conf
> java.arg.16=-Djava.security.auth.login.config=./conf/kafka.client.jaas.conf
> This means all NiFi Kafka processors currently must share the same global 
> credential.
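
For context, the kafka.client.jaas.conf referenced above is typically of this shape (illustrative keytab and principal), which is why the credential ends up global to the JVM:

{noformat}
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/nifi.keytab"
    principal="nifi@EXAMPLE.COM";
};
{noformat}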



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-962) Create a processor to evaluate Avro paths

2017-08-17 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-962.
--
Resolution: Won't Do

> Create a processor to evaluate Avro paths
> -
>
> Key: NIFI-962
> URL: https://issues.apache.org/jira/browse/NIFI-962
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> Evaluate a set of Avro paths against an incoming file, and extract the 
> results to FlowFile attributes, or to the content of the FlowFile, similar to 
> EvaluateJsonPath. This would allow downstream processors to easily make 
> decisions based on values in an Avro record, such as RouteOnAttribute.
> This would be particularly useful to use in conjunction with SplitAvro 
> (NIFI-919) to make routing decisions on bare avro records.
> Flume has a similar concept in Morphlines that may be useful to look at:
> https://github.com/cloudera/cdk/blob/master/cdk-morphlines/cdk-morphlines-avro/src/main/java/com/cloudera/cdk/morphline/avro/ExtractAvroPathsBuilder.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-962) Create a processor to evaluate Avro paths

2017-08-17 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130473#comment-16130473
 ] 

Bryan Bende commented on NIFI-962:
--

Yup I agree, I'll close this.

> Create a processor to evaluate Avro paths
> -
>
> Key: NIFI-962
> URL: https://issues.apache.org/jira/browse/NIFI-962
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> Evaluate a set of Avro paths against an incoming file, and extract the 
> results to FlowFile attributes, or to the content of the FlowFile, similar to 
> EvaluateJsonPath. This would allow downstream processors to easily make 
> decisions based on values in an Avro record, such as RouteOnAttribute.
> This would be particularly useful to use in conjunction with SplitAvro 
> (NIFI-919) to make routing decisions on bare avro records.
> Flume has a similar concept in Morphlines that may be useful to look at:
> https://github.com/cloudera/cdk/blob/master/cdk-morphlines/cdk-morphlines-avro/src/main/java/com/cloudera/cdk/morphline/avro/ExtractAvroPathsBuilder.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-962) Create a processor to evaluate Avro paths

2017-08-17 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130465#comment-16130465
 ] 

Mark Payne commented on NIFI-962:
-

[~bbende] - I think this is probably OBE due to the new record-oriented 
processors such as UpdateRecord. Do you agree?

> Create a processor to evaluate Avro paths
> -
>
> Key: NIFI-962
> URL: https://issues.apache.org/jira/browse/NIFI-962
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>
> Evaluate a set of Avro paths against an incoming file, and extract the 
> results to FlowFile attributes, or to the content of the FlowFile, similar to 
> EvaluateJsonPath. This would allow downstream processors to easily make 
> decisions based on values in an Avro record, such as RouteOnAttribute.
> This would be particularly useful to use in conjunction with SplitAvro 
> (NIFI-919) to make routing decisions on bare avro records.
> Flume has a similar concept in Morphlines that may be useful to look at:
> https://github.com/cloudera/cdk/blob/master/cdk-morphlines/cdk-morphlines-avro/src/main/java/com/cloudera/cdk/morphline/avro/ExtractAvroPathsBuilder.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-2793) Add documentation for primary node only scheduling strategy

2017-08-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-2793.
--
   Resolution: Duplicate
Fix Version/s: 1.1.0

> Add documentation for primary node only scheduling strategy
> ---
>
> Key: NIFI-2793
> URL: https://issues.apache.org/jira/browse/NIFI-2793
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Arpit Gupta
>Assignee: Sarah Olson
>Priority: Minor
> Fix For: 1.1.0
>
>
> https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#scheduling-tab 
> does not cover the primary node only option. This option is only available 
> when the user is running in clustered mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4295) When selecting a controller service for a processor, services that belong to the wrong scope are shown

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130457#comment-16130457
 ] 

ASF GitHub Bot commented on NIFI-4295:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2087
  
@mcgilman wow! Sorry about that. Pushed a new commit to address.


> When selecting a controller service for a processor, services that belong to 
> the wrong scope are shown
> --
>
> Key: NIFI-4295
> URL: https://issues.apache.org/jira/browse/NIFI-4295
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> I have a Process Group, and in that Process Group I have defined a Controller 
> Service. If I then move up to the root group and add a Processor that takes 
> a Controller Service of that type, I can now choose the Controller Service 
> that is defined in a child group. When I do so, my processor becomes invalid 
> and the reason given for the processor to be invalid is a NullPointer:
> {code}
> 2017-08-14 12:38:24,016 WARN [NiFi Web Server-76] 
> o.a.n.controller.StandardProcessorNode Failed during validation
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.processor.StandardValidationContext.isValidationRequired(StandardValidationContext.java:143)
>   at 
> org.apache.nifi.components.PropertyDescriptor.validate(PropertyDescriptor.java:150)
>   at 
> org.apache.nifi.components.AbstractConfigurableComponent.validate(AbstractConfigurableComponent.java:103)
>   at 
> org.apache.nifi.controller.AbstractConfiguredComponent.validate(AbstractConfiguredComponent.java:329)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.isValid(StandardProcessorNode.java:968)
>   at 
> org.apache.nifi.controller.FlowController.getProcessorStatus(FlowController.java:2964)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2559)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2518)
>   at 
> org.apache.nifi.controller.FlowController.getGroupStatus(FlowController.java:2485)
>   at 
> org.apache.nifi.web.controller.ControllerFacade.getProcessGroupStatus(ControllerFacade.java:599)
>   at 
> org.apache.nifi.web.controller.ControllerFacade$$FastClassBySpringCGLIB$$5a42ba54.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
>   at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
>   at 
> org.apache.nifi.web.controller.ControllerFacade$$EnhancerBySpringCGLIB$$344e3ba2.getProcessGroupStatus()
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade.getProcessGroupFlow(StandardNiFiServiceFacade.java:3054)
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
>   at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithReadLock(NiFiServiceFacadeLock.java:137)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.getLock(NiFiServiceFacadeLock.java:108)
>   at sun.reflect.GeneratedMethodAccessor168.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621)
>   at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610)
>   at 
> org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:68)
>   at 
> 

[GitHub] nifi issue #2087: NIFI-4295: When determining which controller services to r...

2017-08-17 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2087
  
@mcgilman wow! Sorry about that. Pushed a new commit to address.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Reopened] (NIFI-2379) Enable 'site to site' to support multiple destination sites for failover and possibly load-balancing

2017-08-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reopened NIFI-2379:
--

Re-opening as the comment from Joe suggests a bigger picture for the intent of 
this JIRA.

> Enable 'site to site' to support multiple destination sites for failover and 
> possibly load-balancing
> 
>
> Key: NIFI-2379
> URL: https://issues.apache.org/jira/browse/NIFI-2379
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: n h
>  Labels: Core, Site-to-Site
>
> Add support for multiple paths (IPs) to site-to-site, so that in failure cases 
> it can elect to send the data down an alternative path (IP).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-2379) Enable 'site to site' to support multiple destination sites for failover and possibly load-balancing

2017-08-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-2379.
--
Resolution: Duplicate

> Enable 'site to site' to support multiple destination sites for failover and 
> possibly load-balancing
> 
>
> Key: NIFI-2379
> URL: https://issues.apache.org/jira/browse/NIFI-2379
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: n h
>  Labels: Core, Site-to-Site
>
> Add support for multiple paths (IPs) to site-to-site, so that in failure cases 
> it can elect to send the data down an alternative path (IP).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (NIFI-2377) Help docs have too much weight to the listing vs the content

2017-08-17 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-2377.
--
Resolution: Duplicate
  Assignee: Pierre Villard

This has been improved while also resolving NIFI-917.

> Help docs have too much weight to the listing vs the content
> 
>
> Key: NIFI-2377
> URL: https://issues.apache.org/jira/browse/NIFI-2377
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Joseph Witt
>Assignee: Pierre Villard
>Priority: Minor
>
> the help docs are much more responsive now, which is great.  But there is a 
> large amount of wasted space due to the weight given to the component 
> listing section vs. the content.  The content should be given more weight.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-2375) Bump tika dependency version in nifi-media-nar

2017-08-17 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130414#comment-16130414
 ] 

Pierre Villard commented on NIFI-2375:
--

Just to let you know that with the release of Apache Tika 1.16 (last July) we 
should be good to go (based on TIKA-1804). However, just bumping the version 
causes a lot of unit test failures. If someone with Tika experience could have a 
look, that would be great.

> Bump tika dependency version in nifi-media-nar
> --
>
> Key: NIFI-2375
> URL: https://issues.apache.org/jira/browse/NIFI-2375
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Andre F de Miranda
>Assignee: Joseph Witt
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4302) QueryDatabaseTable bug

2017-08-17 Thread Valeriy Plyashko (JIRA)
Valeriy Plyashko created NIFI-4302:
--

 Summary: QueryDatabaseTable bug
 Key: NIFI-4302
 URL: https://issues.apache.org/jira/browse/NIFI-4302
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.3.0
Reporter: Valeriy Plyashko


Hello! Please fix

{noformat}
org.apache.nifi.processors.standard.QueryDatabaseTable#onTrigger
org.apache.nifi.processors.standard.util.JdbcCommon#convertToAvroStream(java.sql.ResultSet, java.io.OutputStream, org.apache.nifi.processors.standard.util.JdbcCommon.AvroConversionOptions, org.apache.nifi.processors.standard.util.JdbcCommon.ResultSetRowCallback)
{noformat}

to prevent calling java.sql.ResultSet#next after it already returned false.
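
A minimal sketch of the requested guard, with illustrative names (not the actual NiFi code):

{code:java}
// Track whether next() has already returned false so it is never called again
// on a TYPE_FORWARD_ONLY result set. fragmentIndex, maxFragments,
// maxRowsPerFlowFile and writeRowToAvro are illustrative names.
boolean exhausted = false;
while (!exhausted && fragmentIndex < maxFragments) {        // per-fragment loop
    int rowsInFragment = 0;
    while (rowsInFragment < maxRowsPerFlowFile && !(exhausted = !resultSet.next())) {
        writeRowToAvro(resultSet);                          // per-row conversion
        rowsInFragment++;
    }
    fragmentIndex++;
}
{code}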

Motivation:

I found that usage of java.sql.ResultSet at 
org/apache/nifi/processors/standard/QueryDatabaseTable.java:287
doesn't fully match the java.sql.ResultSet documentation.

Docs for java.sql.ResultSet#next:
{noformat}
 * If the result set type is TYPE_FORWARD_ONLY, it is vendor specified
 * whether their JDBC driver implementation will return false or
 * throw an SQLException on a subsequent call to next.
{noformat}
But the loop at org/apache/nifi/processors/standard/QueryDatabaseTable.java:278 
doesn't check whether resultSet has already returned false. If the past iteration 
produced records and not enough fragments have been emitted yet, the loop continues 
and calls java.sql.ResultSet#next at least once more, ignoring that 
java.sql.ResultSet#next returned false in the previous iteration.

With some JDBC drivers this is OK. But, for example, using

{code:xml}
<dependency>
    <groupId>ru.yandex.clickhouse</groupId>
    <artifactId>clickhouse-jdbc</artifactId>
    <version>0.1.26</version>
</dependency>
{code}

causes an SQLException when you call java.sql.ResultSet#next after it already 
returned false.

The result is:

{noformat}
2017-08-16 15:47:56,478 ERROR [Timer-Driven Process Thread-3] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e4f48906-015d-1000-a318-4584f2755c75] Unable to execute 
SQL select query SELECT * FROM abc due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:305)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:299)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: java.io.IOException: Attempted read on closed 
stream.
at 
ru.yandex.clickhouse.response.ClickHouseResultSet.hasNext(ClickHouseResultSet.java:114)
at 
ru.yandex.clickhouse.response.ClickHouseResultSet.next(ClickHouseResultSet.java:124)
at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
at 
org.apache.commons.dbcp.DelegatingResultSet.next(DelegatingResultSet.java:207)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:252)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:303)
... 13 common frames omitted
Caused by: java.io.IOException: Attempted read on closed stream.
at 
org.apache.http.conn.EofSensorInputStream.isReadAllowed(EofSensorInputStream.java:109)
at 
org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:118)
at 
ru.yandex.clickhouse.response.ClickHouseLZ4Stream.readNextBlock(ClickHouseLZ4Stream.java:82)
at 
ru.yandex.clickhouse.response.ClickHouseLZ4Stream.checkNext(ClickHouseLZ4Stream.java:74)
at