[jira] [Commented] (NIFI-5834) Restore default PutHiveQL error handling behavior

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694298#comment-16694298
 ] 

ASF GitHub Bot commented on NIFI-5834:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/3179
  
Reviewing..


> Restore default PutHiveQL error handling behavior
> -
>
> Key: NIFI-5834
> URL: https://issues.apache.org/jira/browse/NIFI-5834
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> As part of adding Rollback On Failure to PutHiveQL (via NIFI-3415), the code 
> was refactored to allow failures to be rolled back rather than transferred to 
> the failure relationship (if Rollback On Failure is set). As part of that, 
> all transient SQLExceptions were declared to be of type "TemporalFailure". 
> This (along with the refactor) allowed the failures to be handled as 
> rollbacks or transfers as specified.
> Hive returns all exceptions as transient SQLExceptions, with an error code 
> that better indicates the outcome of the operation. This, via the discovery 
> of NIFI-5045, resulted in special handling of error codes within the Hive 
> error code range. However, the default behavior when the error code is not 
> in the Hive-valid range is to roll back regardless of whether Rollback On 
> Failure is true or not. This was done as a "better safe than sorry" approach, 
> but it made the behavior inconsistent with earlier versions of the processor, 
> where failures were simply routed to failure rather than rolled back.
> This issue proposes to restore the default behavior for unknown SQLExceptions 
> to "TemporalFailure", which will make the behavior consistent with previous 
> versions of the processor: unknown errors will be transferred to failure 
> unless Rollback On Failure is true.
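The proposed default can be sketched as follows. The Hive-valid error-code bounds and the enum below are illustrative stand-ins, not the actual NiFi error-handling constants, and the in-range handling is collapsed to a single outcome for brevity.

```java
import java.sql.SQLException;

// Illustrative sketch of the proposed default: SQLExceptions whose error
// codes fall outside the Hive-valid range are treated as TemporalFailure,
// so they route to the 'failure' relationship unless Rollback On Failure
// is enabled. The range bounds and enum are stand-ins for illustration.
class HiveErrorSketch {
    enum ErrorType { TEMPORAL_FAILURE, PERSISTENT_FAILURE }

    // Hypothetical Hive-valid error code range, for illustration only.
    static final int HIVE_CODE_MIN = 10_000;
    static final int HIVE_CODE_MAX = 49_999;

    static ErrorType classify(SQLException e) {
        final int code = e.getErrorCode();
        if (code >= HIVE_CODE_MIN && code <= HIVE_CODE_MAX) {
            // Codes in the Hive range keep their code-specific handling
            // (collapsed to one outcome here for brevity).
            return ErrorType.PERSISTENT_FAILURE;
        }
        // Proposed default for unknown codes: TemporalFailure, matching
        // the pre-refactor behavior of routing to 'failure'.
        return ErrorType.TEMPORAL_FAILURE;
    }
}
```

The key point is only the final branch: an unrecognized vendor code no longer forces a rollback by default.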



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #3179: NIFI-5834: Restore default PutHiveQL error handling behavi...

2018-11-20 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/3179
  
Reviewing..


---


[jira] [Created] (NIFI-5835) Add support to use SQL Server rowversion as maximum value column

2018-11-20 Thread Charlie Meyer (JIRA)
Charlie Meyer created NIFI-5835:
---

 Summary: Add support to use SQL Server rowversion as maximum value 
column
 Key: NIFI-5835
 URL: https://issues.apache.org/jira/browse/NIFI-5835
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.8.0
Reporter: Charlie Meyer


I'm attempting to do incremental fetch from a Microsoft SQL Server database and 
would like to use rowversion [1] as my maximum value column. When I configured 
the processor to use that column, it threw an exception [2].
 
[1] 
[https://docs.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql?view=sql-server-2017]
 
[2] 
[https://github.com/apache/nifi/blob/d8d220ccb86d1797f56f34649d70a1acff278eb5/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java#L456]
 
[~alopresto]'s reply on the mailing list:
 
Looking at this issue briefly, it seems that the NiFi code explicitly 
enumerates the accepted datatypes, and rowversion is not among them, so it 
throws an exception. I suggest you open a feature request on our Jira page to 
support this. While rowversion appears proprietary to Microsoft SQL Server, 
the documentation page says:
 
Is a data type that exposes automatically generated, unique binary numbers 
within a database. rowversion is generally used as a mechanism for 
version-stamping table rows. The storage size is 8 bytes. The rowversion data 
type is just an incrementing number and does not preserve a date or a time.
 
I think we could handle this datatype the same way we handle INTEGER, SMALLINT, 
TINYINT (or TIMESTAMP, as that is the functional equivalent from MS SQL, now 
deprecated) in that switch statement, as it is simply an incrementing 8-byte 
natural number. However, I would welcome input from someone like [~mattyb149] 
to see if there is a translation that can be done in the Microsoft-specific 
driver to a generic integer datatype before it reaches this logic. I would 
expect SQLServerResultSetMetaData#getColumnType(int column) to perform this 
translation; perhaps the version of the driver needs to be updated?
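One way the suggestion above could work in practice is to compare rowversion values as unsigned 8-byte counters. This is a standalone sketch, not NiFi code; the class and method names are illustrative.

```java
import java.math.BigInteger;

// Standalone sketch (not NiFi code): compare SQL Server rowversion values
// as unsigned 8-byte counters, the way an incremental-fetch processor
// compares INTEGER/BIGINT maximum-value columns.
class RowVersionSketch {
    // signum = 1 interprets the raw bytes as a non-negative number,
    // avoiding sign problems when the high bit is set.
    static BigInteger toCounter(byte[] rowversion) {
        return new BigInteger(1, rowversion);
    }

    // True when 'candidate' is a later rowversion than 'currentMax'.
    static boolean isNewer(byte[] candidate, byte[] currentMax) {
        return toCounter(candidate).compareTo(toCounter(currentMax)) > 0;
    }
}
```

Using an unsigned interpretation matters because a raw 8-byte comparison via signed long would order values incorrectly once the counter's high bit is set.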



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5833) Treat Twitter tokens as sensitive values in GetTwitter

2018-11-20 Thread Andy LoPresto (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-5833:

Status: Patch Available  (was: In Progress)

> Treat Twitter tokens as sensitive values in GetTwitter
> --
>
> Key: NIFI-5833
> URL: https://issues.apache.org/jira/browse/NIFI-5833
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: api, key, properties, security, sensitive, token, twitter
>
> The {{GetTwitter}} processor marks properties {{Consumer Secret}} and 
> {{Access Token Secret}} as *sensitive*, but {{Consumer Key}} and {{Access 
> Token}} are not marked as such. The [Twitter API 
> documentation|https://developer.twitter.com/en/docs/basics/authentication/guides/securing-keys-and-tokens]
>  says: 
> {quote}
> Your applications’ API keys should be guarded very closely. They represent 
> your unique access to the API and if leaked/used by other parties, this could 
> lead to abuse and restrictions placed on your application. *User access 
> tokens are even more sensitive*. When access tokens are generated, the user 
> they represent is trusting your application to keep them secure. If the 
> security of both API keys and user access tokens are compromised, your 
> application would potentially expose access to private information and 
> account functionality.
> {quote}
> Once the processor code is updated to treat these properties as sensitive, 
> there may need to be backward-compatibility changes added to ensure that 
> existing flows and templates do not break when deployed on the "new" system 
> (following, marked as *1.X*). The following scenarios should be tested:
> * 1.8.0 flow (unencrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.8.0 template (unencrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.X flow (encrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.X template (no {{CK}} and {{AT}}) deployed on 1.X
> The component documentation should also be appropriately updated to note that 
> a 1.X flow (encrypted {{CK}} and {{AT}}) will not work (immediately) on a 
> <=1.8.0 instance. Rather, manual intervention will be required to re-enter 
> the {{Consumer Key}} and {{Access Token}}, as the processor will attempt to 
> use the raw value {code} enc{ABCDEF...} {code} from the {{flow.xml.gz}} file 
> as the literal {{CK}} and {{AT}}. 
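For context, the change amounts to building the two credential properties with the sensitive flag set. The descriptor class below is a simplified stand-in for nifi-api's PropertyDescriptor.Builder so the snippet is self-contained.

```java
// Simplified stand-in for nifi-api's PropertyDescriptor, illustrating the
// proposed change: Consumer Key and Access Token become sensitive, joining
// the already-sensitive Consumer Secret and Access Token Secret.
class Descriptor {
    final String name;
    final boolean sensitive;

    Descriptor(String name, boolean sensitive) {
        this.name = name;
        this.sensitive = sensitive;
    }
}

class GetTwitterPropsSketch {
    static final Descriptor CONSUMER_KEY        = new Descriptor("Consumer Key", true);  // was non-sensitive pre-change
    static final Descriptor CONSUMER_SECRET     = new Descriptor("Consumer Secret", true);
    static final Descriptor ACCESS_TOKEN        = new Descriptor("Access Token", true);  // was non-sensitive pre-change
    static final Descriptor ACCESS_TOKEN_SECRET = new Descriptor("Access Token Secret", true);
}
```

Once a property is sensitive, NiFi encrypts its value in flow.xml.gz and excludes it from templates, which is what drives the compatibility scenarios listed above.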



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5833) Treat Twitter tokens as sensitive values in GetTwitter

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694233#comment-16694233
 ] 

ASF GitHub Bot commented on NIFI-5833:
--

GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/3180

NIFI-5833 Treat GetTwitter API properties as sensitive

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alopresto/nifi NIFI-5833

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3180.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3180


commit 9c051560f801867d5fb5c70a16813f990ee54f4f
Author: Andy LoPresto 
Date:   2018-11-21T02:15:40Z

NIFI-5833 Marked GetTwitter Consumer Key and Access Token processor 
properties as sensitive.

commit 32a4f3187120dedde6d89f3d02747079e24d1bbd
Author: Andy LoPresto 
Date:   2018-11-21T04:32:08Z

NIFI-5833 Added unit test to demonstrate arbitrary decryption of sensitive 
values regardless of processor property sensitive status.

commit 9d1e2be41801703478a5d937a10f7829b89f3252
Author: Andy LoPresto 
Date:   2018-11-21T04:48:50Z

NIFI-5833 Updated GetTwitter documentation with note about 1.9.0+ marking 
Consumer Key and Access Token as sensitive.




> Treat Twitter tokens as sensitive values in GetTwitter
> --
>
> Key: NIFI-5833
> URL: https://issues.apache.org/jira/browse/NIFI-5833
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: api, key, properties, security, sensitive, token, twitter
>
> The {{GetTwitter}} processor marks properties {{Consumer Secret}} and 
> {{Access Token Secret}} as *sensitive*, but {{Consumer Key}} and {{Access 
> Token}} are not marked as such. The [Twitter API 
> documentation|https://developer.twitter.com/en/docs/basics/authentication/guides/securing-keys-and-tokens]
>  says: 
> {quote}
> Your applications’ API keys should be guarded very closely. They represent 
> your unique access to the API and if leaked/used by other parties, this could 
> lead to abuse and restrictions placed on your application. *User access 
> tokens are even more sensitive*. When access tokens are generated, the user 
> they represent is trusting your application to keep them secure. If the 
> security of both API keys and user access tokens are compromised, your 
> application would potentially expose access to private information and 
> account functionality.
> {quote}
> Once the processor code is updated to treat these properties as sensitive, 
> there may need to be backward-compatibility changes added to ensure that 
> existing flows and templates do not break when deployed on the "new" system 
> (following, marked as *1.X*). The following scenarios should be 

[GitHub] nifi pull request #3180: NIFI-5833 Treat GetTwitter API properties as sensit...

2018-11-20 Thread alopresto
GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/3180

NIFI-5833 Treat GetTwitter API properties as sensitive

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alopresto/nifi NIFI-5833

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3180.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3180


commit 9c051560f801867d5fb5c70a16813f990ee54f4f
Author: Andy LoPresto 
Date:   2018-11-21T02:15:40Z

NIFI-5833 Marked GetTwitter Consumer Key and Access Token processor 
properties as sensitive.

commit 32a4f3187120dedde6d89f3d02747079e24d1bbd
Author: Andy LoPresto 
Date:   2018-11-21T04:32:08Z

NIFI-5833 Added unit test to demonstrate arbitrary decryption of sensitive 
values regardless of processor property sensitive status.

commit 9d1e2be41801703478a5d937a10f7829b89f3252
Author: Andy LoPresto 
Date:   2018-11-21T04:48:50Z

NIFI-5833 Updated GetTwitter documentation with note about 1.9.0+ marking 
Consumer Key and Access Token as sensitive.




---


[jira] [Commented] (NIFI-5833) Treat Twitter tokens as sensitive values in GetTwitter

2018-11-20 Thread Andy LoPresto (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694175#comment-16694175
 ] 

Andy LoPresto commented on NIFI-5833:
-

Upon further manual testing, deploying a 1.X flow on a previous instance (1.7.1 
in this case) results in the encrypted values being decrypted in 
[{{FlowFromDOMFactory#getProperties()}}|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/serialization/FlowFromDOMFactory.java#L483]
 when the flow is initially loaded, regardless of the processor property's 
*sensitive* marking. 

> Treat Twitter tokens as sensitive values in GetTwitter
> --
>
> Key: NIFI-5833
> URL: https://issues.apache.org/jira/browse/NIFI-5833
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: api, key, properties, security, sensitive, token, twitter
>
> The {{GetTwitter}} processor marks properties {{Consumer Secret}} and 
> {{Access Token Secret}} as *sensitive*, but {{Consumer Key}} and {{Access 
> Token}} are not marked as such. The [Twitter API 
> documentation|https://developer.twitter.com/en/docs/basics/authentication/guides/securing-keys-and-tokens]
>  says: 
> {quote}
> Your applications’ API keys should be guarded very closely. They represent 
> your unique access to the API and if leaked/used by other parties, this could 
> lead to abuse and restrictions placed on your application. *User access 
> tokens are even more sensitive*. When access tokens are generated, the user 
> they represent is trusting your application to keep them secure. If the 
> security of both API keys and user access tokens are compromised, your 
> application would potentially expose access to private information and 
> account functionality.
> {quote}
> Once the processor code is updated to treat these properties as sensitive, 
> there may need to be backward-compatibility changes added to ensure that 
> existing flows and templates do not break when deployed on the "new" system 
> (following, marked as *1.X*). The following scenarios should be tested:
> * 1.8.0 flow (unencrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.8.0 template (unencrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.X flow (encrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.X template (no {{CK}} and {{AT}}) deployed on 1.X
> The component documentation should also be appropriately updated to note that 
> a 1.X flow (encrypted {{CK}} and {{AT}}) will not work (immediately) on a 
> <=1.8.0 instance. Rather, manual intervention will be required to re-enter 
> the {{Consumer Key}} and {{Access Token}}, as the processor will attempt to 
> use the raw value {code} enc{ABCDEF...} {code} from the {{flow.xml.gz}} file 
> as the literal {{CK}} and {{AT}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-5833) Treat Twitter tokens as sensitive values in GetTwitter

2018-11-20 Thread Andy LoPresto (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto reassigned NIFI-5833:
---

Assignee: Andy LoPresto

> Treat Twitter tokens as sensitive values in GetTwitter
> --
>
> Key: NIFI-5833
> URL: https://issues.apache.org/jira/browse/NIFI-5833
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: api, key, properties, security, sensitive, token, twitter
>
> The {{GetTwitter}} processor marks properties {{Consumer Secret}} and 
> {{Access Token Secret}} as *sensitive*, but {{Consumer Key}} and {{Access 
> Token}} are not marked as such. The [Twitter API 
> documentation|https://developer.twitter.com/en/docs/basics/authentication/guides/securing-keys-and-tokens]
>  says: 
> {quote}
> Your applications’ API keys should be guarded very closely. They represent 
> your unique access to the API and if leaked/used by other parties, this could 
> lead to abuse and restrictions placed on your application. *User access 
> tokens are even more sensitive*. When access tokens are generated, the user 
> they represent is trusting your application to keep them secure. If the 
> security of both API keys and user access tokens are compromised, your 
> application would potentially expose access to private information and 
> account functionality.
> {quote}
> Once the processor code is updated to treat these properties as sensitive, 
> there may need to be backward-compatibility changes added to ensure that 
> existing flows and templates do not break when deployed on the "new" system 
> (following, marked as *1.X*). The following scenarios should be tested:
> * 1.8.0 flow (unencrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.8.0 template (unencrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.X flow (encrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.X template (no {{CK}} and {{AT}}) deployed on 1.X
> The component documentation should also be appropriately updated to note that 
> a 1.X flow (encrypted {{CK}} and {{AT}}) will not work (immediately) on a 
> <=1.8.0 instance. Rather, manual intervention will be required to re-enter 
> the {{Consumer Key}} and {{Access Token}}, as the processor will attempt to 
> use the raw value {code} enc{ABCDEF...} {code} from the {{flow.xml.gz}} file 
> as the literal {{CK}} and {{AT}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5833) Treat Twitter tokens as sensitive values in GetTwitter

2018-11-20 Thread Andy LoPresto (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694110#comment-16694110
 ] 

Andy LoPresto commented on NIFI-5833:
-

I confirmed that saving a flow in 1.9.0-SNAPSHOT (pre-*sensitive* marking) and 
then making the code change and restarting the instance with the same 
flow.xml.gz resulted in the values automatically being encrypted successfully 
(scenario 1 above): 

{code}
Encrypted value: 
enc{6e992728732d9321ee703a5de216c2eb55b86d74fb671810cacecab46502df2b}
 Stripped value: 
6e992728732d9321ee703a5de216c2eb55b86d74fb671810cacecab46502df2b
Decrypted value: consumerSecret
{code}
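The output above follows NiFi's enc{...} wrapper for encrypted property values in flow.xml.gz. A minimal sketch of recognizing and stripping that wrapper (the class and method names here are made up for illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal illustration (names are made up): recognize NiFi's enc{...}
// wrapper around encrypted property values and strip it to recover the
// hex ciphertext, as shown in the log output above.
class EncWrapperSketch {
    private static final Pattern ENC = Pattern.compile("enc\\{([0-9a-fA-F]+)\\}");

    static boolean isWrapped(String value) {
        return value != null && ENC.matcher(value).matches();
    }

    // Returns the hex payload for wrapped values, or the input unchanged.
    static String strip(String value) {
        final Matcher m = ENC.matcher(value);
        return m.matches() ? m.group(1) : value;
    }
}
```

This also illustrates the failure mode described in the issue: an older instance that does not expect the wrapper would pass the literal enc{...} string through as the property value.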

> Treat Twitter tokens as sensitive values in GetTwitter
> --
>
> Key: NIFI-5833
> URL: https://issues.apache.org/jira/browse/NIFI-5833
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Priority: Major
>  Labels: api, key, properties, security, sensitive, token, twitter
>
> The {{GetTwitter}} processor marks properties {{Consumer Secret}} and 
> {{Access Token Secret}} as *sensitive*, but {{Consumer Key}} and {{Access 
> Token}} are not marked as such. The [Twitter API 
> documentation|https://developer.twitter.com/en/docs/basics/authentication/guides/securing-keys-and-tokens]
>  says: 
> {quote}
> Your applications’ API keys should be guarded very closely. They represent 
> your unique access to the API and if leaked/used by other parties, this could 
> lead to abuse and restrictions placed on your application. *User access 
> tokens are even more sensitive*. When access tokens are generated, the user 
> they represent is trusting your application to keep them secure. If the 
> security of both API keys and user access tokens are compromised, your 
> application would potentially expose access to private information and 
> account functionality.
> {quote}
> Once the processor code is updated to treat these properties as sensitive, 
> there may need to be backward-compatibility changes added to ensure that 
> existing flows and templates do not break when deployed on the "new" system 
> (following, marked as *1.X*). The following scenarios should be tested:
> * 1.8.0 flow (unencrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.8.0 template (unencrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.X flow (encrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.X template (no {{CK}} and {{AT}}) deployed on 1.X
> The component documentation should also be appropriately updated to note that 
> a 1.X flow (encrypted {{CK}} and {{AT}}) will not work (immediately) on a 
> <=1.8.0 instance. Rather, manual intervention will be required to re-enter 
> the {{Consumer Key}} and {{Access Token}}, as the processor will attempt to 
> use the raw value {code} enc{ABCDEF...} {code} from the {{flow.xml.gz}} file 
> as the literal {{CK}} and {{AT}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFIREG-212) Add detailed proxy policy configuration

2018-11-20 Thread Koji Kawamura (JIRA)
Koji Kawamura created NIFIREG-212:
-

 Summary: Add detailed proxy policy configuration
 Key: NIFIREG-212
 URL: https://issues.apache.org/jira/browse/NIFIREG-212
 Project: NiFi Registry
  Issue Type: Improvement
Reporter: Koji Kawamura


NiFi Registry should support a more granular 'proxy' policy so that different 
NiFi clusters can share the same NiFi Registry instance while limiting mutation 
requests (e.g. committing a new flow version) from certain environments.

For example, if there are dev and prod NiFi clusters, only the dev NiFi cluster 
would be allowed to update versioned flows; the prod NiFi cluster could only 
download them.

A detailed use-case was reported to NiFi user mailing list.
https://mail-archives.apache.org/mod_mbox/nifi-users/201811.mbox/%3cdfeada92-56d6-480c-b3f9-b101edee4...@ncr.com%3E
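The requested policy could be modeled as a mapping from proxy identity to permitted actions. This is a hypothetical sketch only; the identities, enum, and class names are illustrative and not NiFi Registry APIs.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of a granular proxy policy: each proxy identity
// (e.g. a NiFi cluster node's certificate DN) maps to the actions it may
// perform against the registry. All names are illustrative.
class ProxyPolicySketch {
    enum Action { READ, WRITE }

    private final Map<String, Set<Action>> allowedActions;

    ProxyPolicySketch(Map<String, Set<Action>> allowedActions) {
        this.allowedActions = allowedActions;
    }

    // A request is permitted only if the proxy identity is granted the action.
    boolean isAllowed(String proxyIdentity, Action action) {
        return allowedActions.getOrDefault(proxyIdentity, Set.of()).contains(action);
    }
}
```

Under such a policy, a dev cluster identity would hold READ and WRITE while a prod identity holds only READ, matching the dev/prod example above.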



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5834) Restore default PutHiveQL error handling behavior

2018-11-20 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5834:
---
Status: Patch Available  (was: In Progress)

> Restore default PutHiveQL error handling behavior
> -
>
> Key: NIFI-5834
> URL: https://issues.apache.org/jira/browse/NIFI-5834
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> As part of adding Rollback On Failure to PutHiveQL (via NIFI-3415), the code 
> was refactored to allow failures to be rolled back rather than transferred to 
> the failure relationship (if Rollback On Failure is set). As part of that, 
> all transient SQLExceptions were declared to be of type "TemporalFailure". 
> This (along with the refactor) allowed the failures to be handled as 
> rollbacks or transfers as specified.
> Hive returns all exceptions as transient SQLExceptions, with an error code 
> that better indicates the outcome of the operation. This, via the discovery 
> of NIFI-5045, resulted in special handling of error codes within the Hive 
> error code range. However, the default behavior when the error code is not 
> in the Hive-valid range is to roll back regardless of whether Rollback On 
> Failure is true or not. This was done as a "better safe than sorry" approach, 
> but it made the behavior inconsistent with earlier versions of the processor, 
> where failures were simply routed to failure rather than rolled back.
> This issue proposes to restore the default behavior for unknown SQLExceptions 
> to "TemporalFailure", which will make the behavior consistent with previous 
> versions of the processor: unknown errors will be transferred to failure 
> unless Rollback On Failure is true.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5834) Restore default PutHiveQL error handling behavior

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16693875#comment-16693875
 ] 

ASF GitHub Bot commented on NIFI-5834:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/3179

NIFI-5834: Restore default PutHiveQL error handling behavior

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-5834

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3179.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3179


commit 3e1b98b74476bc534f7039580cfde25bd733e773
Author: Matthew Burgess 
Date:   2018-11-20T22:58:59Z

NIFI-5834: Restore default PutHiveQL error handling behavior




> Restore default PutHiveQL error handling behavior
> -
>
> Key: NIFI-5834
> URL: https://issues.apache.org/jira/browse/NIFI-5834
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> As part of adding Rollback On Failure to PutHiveQL (via NIFI-3415), the code 
> was refactored to allow failures to be rolled back rather than transferred to 
> the failure relationship (if Rollback On Failure is set). As part of that, 
> all transient SQLExceptions were declared to be of type "TemporalFailure". 
> This (along with the refactor) allowed the failures to be handled as 
> rollbacks or transfers as specified.
> Hive returns all exceptions as transient SQLExceptions, with an error code 
> that better indicates the outcome of the operation. This, via the discovery 
> of NIFI-5045, resulted in special handling of error codes within the Hive 
> error code range. However, the default behavior when the error code is not 
> in the Hive-valid range is to roll back regardless of whether Rollback On 
> Failure is true or not. This was done as a "better safe than sorry" approach, 
> but it made the behavior inconsistent with earlier versions of the processor, 
> where failures were simply routed to failure rather than rolled back.
> This issue proposes to restore the default behavior for unknown SQLExceptions 
> to "TemporalFailure", which will make the behavior consistent with previous 
> versions of the processor: unknown errors will be transferred to failure 
> unless Rollback On Failure is true.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3179: NIFI-5834: Restore default PutHiveQL error handling...

2018-11-20 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/3179

NIFI-5834: Restore default PutHiveQL error handling behavior

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-5834

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/3179.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3179


commit 3e1b98b74476bc534f7039580cfde25bd733e773
Author: Matthew Burgess 
Date:   2018-11-20T22:58:59Z

NIFI-5834: Restore default PutHiveQL error handling behavior




---


[jira] [Commented] (NIFI-5834) Restore default PutHiveQL error handling behavior

2018-11-20 Thread Matt Burgess (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693842#comment-16693842
 ] 

Matt Burgess commented on NIFI-5834:


Also, the changes in NIFI-5045 were not applied to PutHive3QL, as they were 
developed concurrently. This Jira should copy the updated logic to PutHive3QL 
once it is working for PutHiveQL.

> Restore default PutHiveQL error handling behavior
> -
>
> Key: NIFI-5834
> URL: https://issues.apache.org/jira/browse/NIFI-5834
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Priority: Major
>





[jira] [Assigned] (NIFI-5834) Restore default PutHiveQL error handling behavior

2018-11-20 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-5834:
--

Assignee: Matt Burgess

> Restore default PutHiveQL error handling behavior
> -
>
> Key: NIFI-5834
> URL: https://issues.apache.org/jira/browse/NIFI-5834
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>





[jira] [Created] (NIFI-5834) Restore default PutHiveQL error handling behavior

2018-11-20 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-5834:
--

 Summary: Restore default PutHiveQL error handling behavior
 Key: NIFI-5834
 URL: https://issues.apache.org/jira/browse/NIFI-5834
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Matt Burgess


As part of adding Rollback On Failure to PutHiveQL (via NIFI-3415), the code 
was refactored to allow failures to be rolled back rather than transferred to 
the failure relationship (when Rollback On Failure is set). As part of that 
refactor, all transient SQLExceptions were declared to be of type 
"TemporalFailure", which allowed failures to be handled as rollbacks or 
transfers, as specified.

Hive returns all exceptions as transient SQLExceptions, with an error code that 
better indicates the nature of the failure. This, via the discovery of 
NIFI-5045, resulted in special handling of error codes within the Hive error 
code range. However, the default behavior when the error code is not in the 
Hive-valid range is to roll back regardless of whether Rollback On Failure is 
true. This was done as a "better safe than sorry" approach, but it made the 
behavior inconsistent with earlier versions of the processor, where failures 
were simply routed to the failure relationship rather than rolled back.

This case proposes to return the default behavior for unknown SQLExceptions to 
"TemporalFailure", which will make the behavior consistent with previous 
versions of the processor: unknown errors will be transferred to failure 
unless Rollback On Failure is true.
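The proposed default can be sketched as a small classification helper. This is a hypothetical illustration, not the actual PutHiveQL code: the ErrorType names only mirror the idea of NiFi's error categories, and the Hive error-code bounds are assumed purely for the example.

```java
// Hypothetical sketch of the proposed default: SQLException error codes inside
// the Hive-valid range keep their specific handling, while any unknown code
// falls back to TemporalFailure (routed to the failure relationship unless
// Rollback On Failure is set). The range bounds below are illustrative only.
public class HiveErrorSketch {

    enum ErrorType { TEMPORAL_FAILURE, INVALID_INPUT }

    // Assumed illustrative bounds for "Hive-valid" error codes.
    static final int HIVE_ERROR_MIN = 10000;
    static final int HIVE_ERROR_MAX = 29999;

    static ErrorType classify(int sqlErrorCode) {
        if (sqlErrorCode >= HIVE_ERROR_MIN && sqlErrorCode <= HIVE_ERROR_MAX) {
            // In-range codes get Hive-specific handling (simplified here).
            return ErrorType.INVALID_INPUT;
        }
        // Proposed default: unknown codes are temporal failures, so the
        // flow file is transferred to failure instead of rolled back.
        return ErrorType.TEMPORAL_FAILURE;
    }

    public static void main(String[] args) {
        System.out.println(classify(10044));  // inside the assumed Hive range
        System.out.println(classify(99999));  // unknown code
    }
}
```

With this fallback, an out-of-range code is transferred to failure by default and only causes a rollback when Rollback On Failure is explicitly enabled.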





[jira] [Updated] (NIFI-5176) NiFi needs to be buildable on Java 11

2018-11-20 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5176:
--
Summary: NiFi needs to be buildable on Java 11  (was: NiFi needs to be 
buildable on Java 9+)

> NiFi needs to be buildable on Java 11
> -
>
> Key: NIFI-5176
> URL: https://issues.apache.org/jira/browse/NIFI-5176
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> While retaining a source/target compatibility of 1.8, NiFi needs to be 
> buildable on Java 9.
> The following issues have been encountered while attempting to run a Java 
> 1.8-built NiFi on Java 9:
> ||Issue||Solution||
> |groovy-eclipse-compiler not working with Java 10|-Switched to gmaven-plus- 
> Updated to maven-compiler-plugin:3.7.0, groovy-eclipse-compiler:2.9.3-01, and 
> groovy-eclipse-batch:2.4.15-01|
> |ANTLR 3 issue with ambiguous method calls|Explicit cast to ValidationContext 
> needed in TestHL7Query.java|
> |jaxb2-maven-plugin not compatible with Java 9|Switched to maven-jaxb-plugin|
> |-nifi-enrich-processors uses package com.sun.jndi.dns, which does not 
> exist-|-Required addition of- -add-modules=jdk.naming.dns 
> --add-exports=jdk.naming.dns/com.sun.jndi.dns=ALL-UNNAMED, which prevents the 
> usage of compiling with the --release option (to compile only against public 
> APIs in the JDK) from being used. Not an optimal solution.-|
> |groovy-eclipse-batch:2.4.13-01 could not find JDK base classes|Updated to 
> groovy-eclipse-batch:2.4.15-01 and groovy-all:2.4.15|
> |maven-surefire-plugin:2.20.1 throws null pointer exceptions|Updated to 
> maven-surefire-plugin:2.22.0|
> |okhttp client builder requires X509TrustManager on Java 9+|Added methods to 
> return TrustManager instances with the SSLContext created by 
> SSLContextFactory and updated HttpNotificationService to use the paired 
> TrustManager|
> |nifi-runtime groovy tests aren't running|Added usage of 
> build-helper-maven-plugin to explicitly add src/test/groovy to force groovy 
> compilation of test sources. groovy-eclipse-compiler skips src/test/groovy if 
> src/test/java doesn't exist, which is the case for nifi-runtime. (See 
> NIFI-5341 for reference)|
> |hbase-client depends on jdk.tools:jdk.tools|Excluded this dependency 
> {color:#f79232}*(needs live testing)* {color}|
> |HBase client 1.1.2 does not allow running on Java 9+|Updated to HBase client 
> 1.1.11, passes unit tests (See HBASE-17944 for reference) 
> *{color:#f79232}(needs live testing){color}*|
> |powermock:1.6.5 does not support Java 10|Updated to powermock:2.0.0-beta.5 
> and mockito:2.19|
> |com.yammer.metrics:metrics-core:2.2.0 does not support Java 10|Upgrading 
> com.yammer.metrics:metrics-core:2.2.0 to 
> io.dropwizard.metrics:metrics-jvm:4.0.0 (See NIFI-5373 for reference)|





[jira] [Updated] (NIFI-5174) NiFi Compatibility with Java 9/10/11

2018-11-20 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5174:
--
Summary: NiFi Compatibility with Java 9/10/11  (was: NiFi Compatibility 
with Java 9+)

> NiFi Compatibility with Java 9/10/11
> 
>
> Key: NIFI-5174
> URL: https://issues.apache.org/jira/browse/NIFI-5174
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>






[jira] [Updated] (NIFI-5176) NiFi needs to be buildable on Java 11

2018-11-20 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5176:
--
Description: 
While retaining a source/target compatibility of 1.8, NiFi needs to be buildable 
on Java 11.

The following issues have been encountered while attempting to run a Java 
1.8-built NiFi on Java 11:
||Issue||Solution||
|groovy-eclipse-compiler not working with Java 10|-Switched to gmaven-plus- 
Updated to maven-compiler-plugin:3.7.0, groovy-eclipse-compiler:2.9.3-01, and 
groovy-eclipse-batch:2.4.15-01|
|ANTLR 3 issue with ambiguous method calls|Explicit cast to ValidationContext 
needed in TestHL7Query.java|
|jaxb2-maven-plugin not compatible with Java 9|Switched to maven-jaxb-plugin|
|-nifi-enrich-processors uses package com.sun.jndi.dns, which does not 
exist-|-Required addition of- -add-modules=jdk.naming.dns 
--add-exports=jdk.naming.dns/com.sun.jndi.dns=ALL-UNNAMED, which prevents the 
usage of compiling with the --release option (to compile only against public 
APIs in the JDK) from being used. Not an optimal solution.-|
|groovy-eclipse-batch:2.4.13-01 could not find JDK base classes|Updated to 
groovy-eclipse-batch:2.4.15-01 and groovy-all:2.4.15|
|maven-surefire-plugin:2.20.1 throws null pointer exceptions|Updated to 
maven-surefire-plugin:2.22.0|
|okhttp client builder requires X509TrustManager on Java 9+|Added methods to 
return TrustManager instances with the SSLContext created by SSLContextFactory 
and updated HttpNotificationService to use the paired TrustManager|
|nifi-runtime groovy tests aren't running|Added usage of 
build-helper-maven-plugin to explicitly add src/test/groovy to force groovy 
compilation of test sources. groovy-eclipse-compiler skips src/test/groovy if 
src/test/java doesn't exist, which is the case for nifi-runtime. (See NIFI-5341 
for reference)|
|hbase-client depends on jdk.tools:jdk.tools|Excluded this dependency 
{color:#f79232}*(needs live testing)* {color}|
|HBase client 1.1.2 does not allow running on Java 9+|Updated to HBase client 
1.1.11, passes unit tests (See HBASE-17944 for reference) 
*{color:#f79232}(needs live testing){color}*|
|powermock:1.6.5 does not support Java 10|Updated to powermock:2.0.0-beta.5 and 
mockito:2.19|
|com.yammer.metrics:metrics-core:2.2.0 does not support Java 10|Upgrading 
com.yammer.metrics:metrics-core:2.2.0 to 
io.dropwizard.metrics:metrics-jvm:4.0.0 (See NIFI-5373 for reference)|

  was:
While retaining a source/target compatibility of 1.8, NiFi needs to be buildable 
on Java 9.



> NiFi needs to be buildable on Java 11
> -
>
> Key: NIFI-5176
> URL: https://issues.apache.org/jira/browse/NIFI-5176
>  

[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693755#comment-16693755
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user david-streamlio commented on the issue:

https://github.com/apache/nifi/pull/3178
  
@joewitt What was the command you used to generate the error above?  I want 
to attempt to reproduce it locally if possible.

The basic issue is that since this test message was sent asynchronously, 
there is a lag between when it is processed (and throws the error), moved to 
the failureQueue, and then eventually routed to the FAILURE relationship. 

For my local tests, I had to introduce a lag in the process by setting the 
number of iterations on the run command to 100. Apparently, this value needs to 
be larger for a parallel build, so I will increase the number of iterations to 
5K. But I would like to test it using the values you used to ensure that value 
is sufficient.
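The race described here, where the assertion runs before the asynchronous failure has been routed, can be avoided with a bounded poll rather than a guessed iteration count. This is a generic sketch in plain Java, not the actual TestRunner-based test:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncAwaitSketch {

    // Poll until the counter reaches the expected value or the deadline
    // passes; returns whether the expectation was met in time.
    static boolean awaitCount(AtomicInteger counter, int expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (counter.get() < expected && System.currentTimeMillis() < deadline) {
            Thread.sleep(10);
        }
        return counter.get() >= expected;
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger failures = new AtomicInteger();
        // Simulate the asynchronous failure path: the flow file reaches the
        // failure queue some time after the publish call returns.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            failures.incrementAndGet();
        }).start();
        System.out.println(awaitCount(failures, 1, 2000));
    }
}
```

Polling against a deadline keeps the test deterministic on both fast and slow (parallel) builds, at the cost of a small worst-case wait.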


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (NIFI-5833) Treat Twitter tokens as sensitive values in GetTwitter

2018-11-20 Thread Joseph Witt (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693704#comment-16693704
 ] 

Joseph Witt commented on NIFI-5833:
---

Andy - I believe old flows will work right away, as they'll take the unprotected 
value in and then encrypt it.  *I think*.  If not, then we can flag it in the 
migration guide.

> Treat Twitter tokens as sensitive values in GetTwitter
> --
>
> Key: NIFI-5833
> URL: https://issues.apache.org/jira/browse/NIFI-5833
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Priority: Major
>  Labels: api, key, properties, security, sensitive, token, twitter
>
> The {{GetTwitter}} processor marks properties {{Consumer Secret}} and 
> {{Access Token Secret}} as *sensitive*, but {{Consumer Key}} and {{Access 
> Token}} are not marked as such. The [Twitter API 
> documentation|https://developer.twitter.com/en/docs/basics/authentication/guides/securing-keys-and-tokens]
>  says: 
> {quote}
> Your applications’ API keys should be guarded very closely. They represent 
> your unique access to the API and if leaked/used by other parties, this could 
> lead to abuse and restrictions placed on your application. *User access 
> tokens are even more sensitive*. When access tokens are generated, the user 
> they represent is trusting your application to keep them secure. If the 
> security of both API keys and user access tokens are compromised, your 
> application would potentially expose access to private information and 
> account functionality.
> {quote}
> Once the processor code is updated to treat these properties as sensitive, 
> there may need to be backward-compatibility changes added to ensure that 
> existing flows and templates do not break when deployed on the "new" system 
> (following, marked as *1.X*). The following scenarios should be tested:
> * 1.8.0 flow (unencrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.8.0 template (unencrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.X flow (encrypted {{CK}} and {{AT}}) deployed on 1.X
> * 1.X template (no {{CK}} and {{AT}}) deployed on 1.X
> The component documentation should also be appropriately updated to note that 
> a 1.X flow (encrypted {{CK}} and {{AT}}) will not work (immediately) on a 
> <=1.8.0 instance. Rather, manual intervention will be required to re-enter 
> the {{Consumer Key}} and {{Access Token}}, as the processor will attempt to 
> use the raw value {code} enc{ABCDEF...} {code} from the {{flow.xml.gz}} file 
> as the literal {{CK}} and {{AT}}. 





[jira] [Updated] (NIFI-5833) Treat Twitter tokens as sensitive values in GetTwitter

2018-11-20 Thread Andy LoPresto (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-5833:

Description: 
The {{GetTwitter}} processor marks properties {{Consumer Secret}} and {{Access 
Token Secret}} as *sensitive*, but {{Consumer Key}} and {{Access Token}} are 
not marked as such. The [Twitter API 
documentation|https://developer.twitter.com/en/docs/basics/authentication/guides/securing-keys-and-tokens]
 says: 

{quote}
Your applications’ API keys should be guarded very closely. They represent your 
unique access to the API and if leaked/used by other parties, this could lead 
to abuse and restrictions placed on your application. *User access tokens are 
even more sensitive*. When access tokens are generated, the user they represent 
is trusting your application to keep them secure. If the security of both API 
keys and user access tokens are compromised, your application would potentially 
expose access to private information and account functionality.
{quote}

Once the processor code is updated to treat these properties as sensitive, 
there may need to be backward-compatibility changes added to ensure that 
existing flows and templates do not break when deployed on the "new" system 
(following, marked as *1.X*). The following scenarios should be tested:

* 1.8.0 flow (unencrypted {{CK}} and {{AT}}) deployed on 1.X
* 1.8.0 template (unencrypted {{CK}} and {{AT}}) deployed on 1.X
* 1.X flow (encrypted {{CK}} and {{AT}}) deployed on 1.X
* 1.X template (no {{CK}} and {{AT}}) deployed on 1.X

The component documentation should also be appropriately updated to note that a 
1.X flow (encrypted {{CK}} and {{AT}}) will not work (immediately) on a <=1.8.0 
instance. Rather, manual intervention will be required to re-enter the 
{{Consumer Key}} and {{Access Token}}, as the processor will attempt to use the 
raw value {code} enc{ABCDEF...} {code} from the {{flow.xml.gz}} file as the 
literal {{CK}} and {{AT}}. 

  was:
The {{GetTwitter}} processor marks properties {{Consumer Secret}} and {{Access 
Token Secret}} as *sensitive*, but {{Consumer Key}} and {{Access Token}} are 
not marked as such. The [Twitter API 
documentation|https://developer.twitter.com/en/docs/basics/authentication/guides/securing-keys-and-tokens]
 says: 

{quote}
Your applications’ API keys should be guarded very closely. They represent your 
unique access to the API and if leaked/used by other parties, this could lead 
to abuse and restrictions placed on your application. *User access tokens are 
even more sensitive*. When access tokens are generated, the user they represent 
is trusting your application to keep them secure. If the security of both API 
keys and user access tokens are compromised, your application would potentially 
expose access to private information and account functionality.
{quote}

Once the processor code is updated to treat these properties as sensitive, 
there may need to be backward-compatibility changes added to ensure that 
existing flows and templates do not break when deployed on the "new" system 
(following, marked as *1.X*). The following scenarios should be tested:

* 1.8.0 flow (unencrypted {{CK}} and {{AT}}) deployed on 1.X
* 1.8.0 template (unencrypted {{CK}} and {{AT}}) deployed on 1.X
* 1.X flow (encrypted {{CK}} and {{AT}}) deployed on 1.X
* 1.X template (no {{CK}} and {{AT}}) deployed on 1.X

The component documentation should also be appropriately updated to note that a 
1.X flow (encrypted {{CK}} and {{AT}}) will not work (immediately) on a <=1.8.0 
instance. Rather, manual intervention will be required to re-enter the 
{{Consumer Key}} and {{Access Token}}, as the processor will attempt to use the 
raw value {{enc{ABCDEF...}}} from the {{flow.xml.gz}} file as the literal 
{{CK}} and {{AT}}. 


> Treat Twitter tokens as sensitive values in GetTwitter
> --
>
> Key: NIFI-5833
> URL: https://issues.apache.org/jira/browse/NIFI-5833
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Priority: Major
>  Labels: api, key, properties, security, sensitive, token, twitter

[jira] [Created] (NIFI-5833) Treat Twitter tokens as sensitive values in GetTwitter

2018-11-20 Thread Andy LoPresto (JIRA)
Andy LoPresto created NIFI-5833:
---

 Summary: Treat Twitter tokens as sensitive values in GetTwitter
 Key: NIFI-5833
 URL: https://issues.apache.org/jira/browse/NIFI-5833
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.8.0
Reporter: Andy LoPresto


The {{GetTwitter}} processor marks properties {{Consumer Secret}} and {{Access 
Token Secret}} as *sensitive*, but {{Consumer Key}} and {{Access Token}} are 
not marked as such. The [Twitter API 
documentation|https://developer.twitter.com/en/docs/basics/authentication/guides/securing-keys-and-tokens]
 says: 

{quote}
Your applications’ API keys should be guarded very closely. They represent your 
unique access to the API and if leaked/used by other parties, this could lead 
to abuse and restrictions placed on your application. *User access tokens are 
even more sensitive*. When access tokens are generated, the user they represent 
is trusting your application to keep them secure. If the security of both API 
keys and user access tokens are compromised, your application would potentially 
expose access to private information and account functionality.
{quote}

Once the processor code is updated to treat these properties as sensitive, 
there may need to be backward-compatibility changes added to ensure that 
existing flows and templates do not break when deployed on the "new" system 
(following, marked as *1.X*). The following scenarios should be tested:

* 1.8.0 flow (unencrypted {{CK}} and {{AT}}) deployed on 1.X
* 1.8.0 template (unencrypted {{CK}} and {{AT}}) deployed on 1.X
* 1.X flow (encrypted {{CK}} and {{AT}}) deployed on 1.X
* 1.X template (no {{CK}} and {{AT}}) deployed on 1.X

The component documentation should also be appropriately updated to note that a 
1.X flow (encrypted {{CK}} and {{AT}}) will not work (immediately) on a <=1.8.0 
instance. Rather, manual intervention will be required to re-enter the 
{{Consumer Key}} and {{Access Token}}, as the processor will attempt to use the 
raw value {{enc{ABCDEF...}}} from the {{flow.xml.gz}} file as the literal 
{{CK}} and {{AT}}. 
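The shape of the proposed change can be illustrated with a standalone sketch. The builder below is a simplified stand-in for NiFi's PropertyDescriptor.Builder, not the real API; it only shows what marking Consumer Key (and Access Token) sensitive, to match Consumer Secret and Access Token Secret, would look like.

```java
// Standalone sketch (not the real NiFi API) of marking GetTwitter's
// Consumer Key as a sensitive property, with a displayName alongside
// the programmatic name.
public class SensitivePropertySketch {

    static final class Property {
        final String name;
        final String displayName;
        final boolean sensitive;
        Property(String name, String displayName, boolean sensitive) {
            this.name = name;
            this.displayName = displayName;
            this.sensitive = sensitive;
        }
    }

    static final class Builder {
        private String name, displayName;
        private boolean sensitive;
        Builder name(String n) { this.name = n; return this; }
        Builder displayName(String d) { this.displayName = d; return this; }
        Builder sensitive(boolean s) { this.sensitive = s; return this; }
        Property build() { return new Property(name, displayName, sensitive); }
    }

    public static void main(String[] args) {
        // After the change, Consumer Key would be sensitive, so NiFi would
        // encrypt its value in flow.xml.gz and mask it in the UI.
        Property consumerKey = new Builder()
                .name("Consumer Key")
                .displayName("Consumer Key")
                .sensitive(true)
                .build();
        System.out.println(consumerKey.sensitive);
    }
}
```

A sensitive property's value is stored encrypted (the enc{...} form described above), which is exactly why older instances that treat the property as plain text would need manual re-entry.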





[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693683#comment-16693683
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/3178
  
the output log had "14:11:44.805 [pool-67-thread-1] ERROR 
org.apache.nifi.processors.pulsar.pubsub.PublishPulsarRecord - 
PublishPulsarRecord[id=9c298cea-0e2a-4a59-8864-92e95563d7f4] Unable to publish 
to topic "

not sure if that is from the same test or not


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 







[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693679#comment-16693679
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/3178
  
build on large machine now has a unit test failure

[ERROR] Failures: 
[ERROR]   TestAsyncPublishPulsarRecord.pulsarClientExceptionTest:56 
expected:<1> but was:<0>
[INFO] 
[ERROR] Tests run: 70, Failures: 1, Errors: 0, Skipped: 0



> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 







[jira] [Commented] (NIFI-5758) Duplicate Variables

2018-11-20 Thread Joseph Witt (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693665#comment-16693665
 ] 

Joseph Witt commented on NIFI-5758:
---

...this is a good point.  We need to find a good approach here

> Duplicate Variables
> ---
>
> Key: NIFI-5758
> URL: https://issues.apache.org/jira/browse/NIFI-5758
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Variable Registry
>Affects Versions: 1.7.1
>Reporter: Joseph Rosado
>Priority: Major
> Attachments: nifi-duplicate-variables.PNG
>
>
> We have a process group with variables defined for the entire flow.  However, 
> the variables are duplicated in each nested process group, which makes them 
> difficult to manage and makes it hard to identify issues. (see attached)





[jira] [Commented] (NIFI-5758) Duplicate Variables

2018-11-20 Thread Joseph Rosado (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693661#comment-16693661
 ] 

Joseph Rosado commented on NIFI-5758:
-

The problem is that when I make a change to the variable at the parent level, those 
changes are not replicated downstream to embedded process groups.  In fact, each 
embedded process group will reference its local variable, as per the design, 
thus overriding the parent.  As a result, I need to manually delete all scoped 
variables from embedded process groups whenever I import a process group from 
the Registry.  Unfortunately, this negates the concept of a "global" variable.  
Instead, we should have the option to choose whether a variable should be 
scoped to embedded process groups.



[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693652#comment-16693652
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user david-streamlio commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235123524
  
--- Diff: 
nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service/src/main/java/org/apache/nifi/pulsar/StandardPulsarClientService.java
 ---
@@ -0,0 +1,303 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.pulsar;
+
+import java.net.MalformedURLException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.pulsar.client.api.ClientBuilder;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.PulsarClientException;
+import 
org.apache.pulsar.client.api.PulsarClientException.UnsupportedAuthenticationException;
+import org.apache.pulsar.client.impl.auth.AuthenticationTls;
+
+public class StandardPulsarClientService extends AbstractControllerService 
implements PulsarClientService {
+
+public static final PropertyDescriptor PULSAR_SERVICE_URL = new 
PropertyDescriptor.Builder()
+.name("PULSAR_SERVICE_URL")
+.displayName("Pulsar Service URL")
+.description("URL for the Pulsar cluster, e.g localhost:6650")
+.required(true)
+.addValidator(StandardValidators.HOSTNAME_PORT_LIST_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.build();
+
+public static final PropertyDescriptor 
ACCEPT_UNTRUSTED_TLS_CERTIFICATE_FROM_BROKER = new PropertyDescriptor.Builder()
+.name("ACCEPT_UNTRUSTED_TLS_CERTIFICATE_FROM_BROKER")
+.displayName("Allow TLS insecure connection")
+.description("If a valid trusted certificate is provided in 
the 'TLS Trust Certs File Path' property of this controller service,"
++ " then, by default, all communication between this 
controller service and the Apache Pulsar broker will be secured via"
++ " TLS and only use the trusted TLS certificate from 
broker. Setting this property to 'false' will allow this controller"
++ " service to accept an untrusted TLS certificate 
from broker as well. This property should only be set to false if you trust"
++ " the broker you are connecting to, but do not have 
access to the TLS certificate file.")
+.required(false)
+.allowableValues("true", "false")
+.defaultValue("false")
+.build();
+
+public static final PropertyDescriptor CONCURRENT_LOOKUP_REQUESTS = 
new PropertyDescriptor.Builder()
+.name("CONCURRENT_LOOKUP_REQUESTS")
+.displayName("Maximum concurrent lookup-requests")
+.description("Number of concurrent lookup-requests allowed on 
each broker-connection.")
+.required(false)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)


[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693651#comment-16693651
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user david-streamlio commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235123261
  
--- Diff: 
nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/src/main/java/org/apache/nifi/processors/pulsar/AbstractPulsarProducerProcessor.java
 ---
@@ -0,0 +1,472 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.pulsar;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.pulsar.PulsarClientService;
+import org.apache.nifi.pulsar.cache.LRUCache;
+import org.apache.nifi.util.StringUtils;
+import org.apache.pulsar.client.api.CompressionType;
+import org.apache.pulsar.client.api.MessageRoutingMode;
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.ProducerBuilder;
+import org.apache.pulsar.client.api.PulsarClientException;
+import 
org.apache.pulsar.shade.org.apache.commons.collections.CollectionUtils;
+
+public abstract class AbstractPulsarProducerProcessor extends 
AbstractProcessor {
+
+public static final String MSG_COUNT = "msg.count";
+public static final String TOPIC_NAME = "topic.name";
+
+static final AllowableValue COMPRESSION_TYPE_NONE = new 
AllowableValue("NONE", "None", "No compression");
+static final AllowableValue COMPRESSION_TYPE_LZ4 = new 
AllowableValue("LZ4", "LZ4", "Compress with LZ4 algorithm.");
+static final AllowableValue COMPRESSION_TYPE_ZLIB = new 
AllowableValue("ZLIB", "ZLIB", "Compress with ZLib algorithm");
+
+static final AllowableValue MESSAGE_ROUTING_MODE_CUSTOM_PARTITION = 
new AllowableValue("CustomPartition", "Custom Partition", "Route messages to a 
custom partition");
+static final AllowableValue MESSAGE_ROUTING_MODE_ROUND_ROBIN_PARTITION 
= new AllowableValue("RoundRobinPartition", "Round Robin Partition", "Route 
messages to all "
+   
+ "partitions in a round robin 
manner");
+static final AllowableValue MESSAGE_ROUTING_MODE_SINGLE_PARTITION = 
new AllowableValue("SinglePartition", "Single Partition", "Route messages to a 
single partition");
+
+public static final Relationship REL_SUCCESS = new 
Relationship.Builder()
+.name("success")
+.description("FlowFiles for which all content was sent to 
Pulsar.")
+.build();
+
+public static final Relationship REL_FAILURE = new 
Relationship.Builder()
+.name("failure")
+.description("Any FlowFile that cannot be sent to 


[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693593#comment-16693593
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235107847
  
--- Diff: 
nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/src/main/java/org/apache/nifi/processors/pulsar/pubsub/ConsumePulsar.java
 ---
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.pulsar.pubsub;
+
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processors.pulsar.AbstractPulsarConsumerProcessor;
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClientException;
+
+@SeeAlso({PublishPulsar.class, ConsumePulsarRecord.class, 
PublishPulsarRecord.class})
+@Tags({"Pulsar", "Get", "Ingest", "Ingress", "Topic", "PubSub", "Consume"})
+@CapabilityDescription("Consumes messages from Apache Pulsar. The 
complementary NiFi processor for sending messages is PublishPulsar.")
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+public class ConsumePulsar extends AbstractPulsarConsumerProcessor 
{
+
+@Override
+public void onTrigger(ProcessContext context, ProcessSession session) 
throws ProcessException {
+try {
+Consumer consumer = getConsumer(context, 
getConsumerId(context, session.get()));
+
+if (consumer == null) {
+context.yield();
+return;
+}
+
+if (context.getProperty(ASYNC_ENABLED).asBoolean()) {
+consumeAsync(consumer, context, session);
+handleAsync(consumer, context, session);
+} else {
+consume(consumer, context, session);
+}
+} catch (PulsarClientException e) {
+getLogger().error("Unable to consume from Pulsar Topic ", e);
+context.yield();
+throw new ProcessException(e);
+}
+}
+
+private void handleAsync(final Consumer consumer, 
ProcessContext context, ProcessSession session) {
+try {
+Future> done = getConsumerService().poll(50, 
TimeUnit.MILLISECONDS);
+
+if (done != null) {
+   Message msg = done.get();
+
+   if (msg != null) {
+  FlowFile flowFile = null;
+  final byte[] value = msg.getData();
+  if (value != null && value.length > 0) {
+  flowFile = session.create();
+  flowFile = session.write(flowFile, out -> {
+  out.write(value);
+  });
+
+ session.getProvenanceReporter().receive(flowFile, 
getPulsarClientService().getPulsarBrokerRootURL() + "/" + consumer.getTopic());
+ session.transfer(flowFile, REL_SUCCESS);
+ session.commit();
+  }
+  // Acknowledge consuming the message
+  getAckService().submit(new Callable() {
+  @Override
+   

[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693588#comment-16693588
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user david-streamlio commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235107339
  
--- Diff: 
nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service-api/src/main/java/org/apache/nifi/pulsar/cache/LRUCache.java
 ---
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.pulsar.cache;
+
+import java.io.Closeable;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+public class LRUCache {
--- End diff --

@joewitt The LRU cache code wasn't copied from anywhere; it is just my 
naive implementation of an LRU cache.  Most Java-based examples I see utilize 
a LinkedHashMap, and I do not. 

I do appreciate your need to ensure 100% Apache compliance with the code, 
so I can provide an implementation of the LRU cache based on the 
https://commons.apache.org/proper/commons-collections/apidocs/org/apache/commons/collections4/map/LRUMap.html
class instead, if you think that would be better/safer from a licensing 
perspective.
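For reference, the LinkedHashMap-based approach mentioned above can be sketched in a few lines. This is an illustrative sketch only, not the implementation in this PR: the class name LruCacheSketch is made up, and it omits the Closeable handling the PR's cache needs. It relies on LinkedHashMap's access-order mode plus an overridden removeEldestEntry to evict the least recently used entry.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch using java.util.LinkedHashMap in access order.
// Hypothetical example class; not the LRUCache class from this pull request.
public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {

    private final int maxSize;

    public LruCacheSketch(int maxSize) {
        // initialCapacity=16, loadFactor=0.75, accessOrder=true so that
        // get() and put() move the touched entry to the end of the iteration order.
        super(16, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by put(); returning true evicts the least recently used entry.
        return size() > maxSize;
    }

    public static void main(String[] args) {
        LruCacheSketch<String, Integer> cache = new LruCacheSketch<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");     // touch "a", so "b" becomes the eldest entry
        cache.put("c", 3);  // exceeds maxSize, evicting "b"
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

The commons-collections4 LRUMap linked above offers the same semantics without subclassing, which may be the safer choice from a provenance perspective.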




[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693595#comment-16693595
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235108402
  
--- Diff: 
nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/src/main/java/org/apache/nifi/processors/pulsar/pubsub/ConsumePulsarRecord.java
 ---
@@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.pulsar.pubsub;
+
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.pulsar.AbstractPulsarConsumerProcessor;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClientException;
+
+@CapabilityDescription("Consumes messages from Apache Pulsar. "
++ "The complementary NiFi processor for sending messages is 
PublishPulsarRecord. Please note that, at this time, "
++ "the Processor assumes that all records that are retrieved have 
the same schema. If any of the Pulsar messages "
++ "that are pulled but cannot be parsed or written with the 
configured Record Reader or Record Writer, the contents "
++ "of the message will be written to a separate FlowFile, and that 
FlowFile will be transferred to the 'parse.failure' "
++ "relationship. Otherwise, each FlowFile is sent to the 'success' 
relationship and may contain many individual "
++ "messages within the single FlowFile. A 'record.count' attribute 
is added to indicate how many messages are contained in the "
++ "FlowFile. No two Pulsar messages will be placed into the same 
FlowFile if they have different schemas.")
+@Tags({"Pulsar", "Get", "Record", "csv", "avro", "json", "Ingest", 
"Ingress", "Topic", "PubSub", "Consume"})
+@WritesAttributes({
+@WritesAttribute(attribute = "record.count", description = "The number 
of records received")
+})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@SeeAlso({PublishPulsar.class, 

[GitHub] nifi pull request #3178: NIFI-4914: Add Apache Pulsar processors

2018-11-20 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235108402
  
--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/src/main/java/org/apache/nifi/processors/pulsar/pubsub/ConsumePulsarRecord.java ---
@@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.pulsar.pubsub;
+
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.pulsar.AbstractPulsarConsumerProcessor;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClientException;
+
+@CapabilityDescription("Consumes messages from Apache Pulsar. "
++ "The complementary NiFi processor for sending messages is PublishPulsarRecord. Please note that, at this time, "
++ "the Processor assumes that all records that are retrieved have the same schema. If any of the Pulsar messages "
++ "that are pulled but cannot be parsed or written with the configured Record Reader or Record Writer, the contents "
++ "of the message will be written to a separate FlowFile, and that FlowFile will be transferred to the 'parse.failure' "
++ "relationship. Otherwise, each FlowFile is sent to the 'success' relationship and may contain many individual "
++ "messages within the single FlowFile. A 'record.count' attribute is added to indicate how many messages are contained in the "
++ "FlowFile. No two Pulsar messages will be placed into the same FlowFile if they have different schemas.")
+@Tags({"Pulsar", "Get", "Record", "csv", "avro", "json", "Ingest", "Ingress", "Topic", "PubSub", "Consume"})
+@WritesAttributes({
+@WritesAttribute(attribute = "record.count", description = "The number of records received")
+})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@SeeAlso({PublishPulsar.class, ConsumePulsar.class, PublishPulsarRecord.class})
+public class ConsumePulsarRecord extends AbstractPulsarConsumerProcessor {
+
+public static final String MSG_COUNT = "record.count";
+
+public static final PropertyDescriptor 

[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693590#comment-16693590
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user david-streamlio commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235107594
  
--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service/src/main/java/org/apache/nifi/pulsar/StandardPulsarClientService.java ---
@@ -0,0 +1,303 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.pulsar;
+
+import java.net.MalformedURLException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.pulsar.client.api.ClientBuilder;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.PulsarClientException;
+import org.apache.pulsar.client.api.PulsarClientException.UnsupportedAuthenticationException;
+import org.apache.pulsar.client.impl.auth.AuthenticationTls;
+
+public class StandardPulsarClientService extends AbstractControllerService implements PulsarClientService {
+
+public static final PropertyDescriptor PULSAR_SERVICE_URL = new PropertyDescriptor.Builder()
+.name("PULSAR_SERVICE_URL")
+.displayName("Pulsar Service URL")
+.description("URL for the Pulsar cluster, e.g localhost:6650")
+.required(true)
+.addValidator(StandardValidators.HOSTNAME_PORT_LIST_VALIDATOR)
+.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.build();
+
+public static final PropertyDescriptor ACCEPT_UNTRUSTED_TLS_CERTIFICATE_FROM_BROKER = new PropertyDescriptor.Builder()
--- End diff --

Ok, I will remove that property


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #3178: NIFI-4914: Add Apache Pulsar processors

2018-11-20 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235107847
  
--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/src/main/java/org/apache/nifi/processors/pulsar/pubsub/ConsumePulsar.java ---
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.pulsar.pubsub;
+
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processors.pulsar.AbstractPulsarConsumerProcessor;
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.PulsarClientException;
+
+@SeeAlso({PublishPulsar.class, ConsumePulsarRecord.class, PublishPulsarRecord.class})
+@Tags({"Pulsar", "Get", "Ingest", "Ingress", "Topic", "PubSub", "Consume"})
+@CapabilityDescription("Consumes messages from Apache Pulsar. The complementary NiFi processor for sending messages is PublishPulsar.")
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+public class ConsumePulsar extends AbstractPulsarConsumerProcessor {
+
+@Override
+public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
+try {
+Consumer consumer = getConsumer(context, getConsumerId(context, session.get()));
+
+if (consumer == null) {
+context.yield();
+return;
+}
+
+if (context.getProperty(ASYNC_ENABLED).asBoolean()) {
+consumeAsync(consumer, context, session);
+handleAsync(consumer, context, session);
+} else {
+consume(consumer, context, session);
+}
+} catch (PulsarClientException e) {
+getLogger().error("Unable to consume from Pulsar Topic ", e);
+context.yield();
+throw new ProcessException(e);
+}
+}
+
+private void handleAsync(final Consumer consumer, ProcessContext context, ProcessSession session) {
+try {
+Future> done = getConsumerService().poll(50, TimeUnit.MILLISECONDS);
+
+if (done != null) {
+   Message msg = done.get();
+
+   if (msg != null) {
+  FlowFile flowFile = null;
+  final byte[] value = msg.getData();
+  if (value != null && value.length > 0) {
+  flowFile = session.create();
+  flowFile = session.write(flowFile, out -> {
+  out.write(value);
+  });
+
+ session.getProvenanceReporter().receive(flowFile, getPulsarClientService().getPulsarBrokerRootURL() + "/" + consumer.getTopic());
+ session.transfer(flowFile, REL_SUCCESS);
+ session.commit();
+  }
+  // Acknowledge consuming the message
+  getAckService().submit(new Callable() {
+  @Override
+  public Object call() throws Exception {
+ return consumer.acknowledgeAsync(msg).get();
+  }
+   });
+  }
+} else {
+  
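The poll-then-acknowledge hand-off quoted above can be sketched with standard `java.util.concurrent` types. This is a hedged, self-contained illustration only: the `AsyncHandoffSketch` class, `drainOne` method, and the string stand-in for a Pulsar `Message` are all hypothetical, and the real processor would write a FlowFile and call `consumer.acknowledgeAsync(msg)` where the comments indicate.

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class AsyncHandoffSketch {

    // Poll the completion service briefly instead of blocking, as the
    // processor's handleAsync does, then hand the value off and submit
    // the acknowledgement to a separate executor.
    public static String drainOne(CompletionService<String> consumerService,
                                  ExecutorService ackService) throws Exception {
        Future<String> done = consumerService.poll(500, TimeUnit.MILLISECONDS);
        if (done == null) {
            return null; // nothing ready; the processor would yield here
        }
        String msg = done.get();
        if (msg != null && !msg.isEmpty()) {
            // In the processor, the message bytes become FlowFile content here;
            // the ack runs on its own executor so onTrigger is not blocked.
            ackService.submit(() -> { /* consumer.acknowledgeAsync(msg) would go here */ });
        }
        return msg;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletionService<String> consumerService = new ExecutorCompletionService<>(pool);
        ExecutorService ackService = Executors.newSingleThreadExecutor();

        consumerService.submit(() -> "hello"); // simulate one consumed message
        System.out.println(drainOne(consumerService, ackService));

        pool.shutdown();
        ackService.shutdown();
    }
}
```

The 50 ms poll in the quoted code keeps `onTrigger` responsive when no message has completed yet; the sketch uses a longer timeout only so the simulated task reliably finishes.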



[GitHub] nifi pull request #3178: NIFI-4914: Add Apache Pulsar processors

2018-11-20 Thread david-streamlio
Github user david-streamlio commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235107339
  
--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service-api/src/main/java/org/apache/nifi/pulsar/cache/LRUCache.java ---
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.pulsar.cache;
+
+import java.io.Closeable;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+public class LRUCache {
--- End diff --

@joewitt The LRU Cache code wasn't copied from anywhere, and is just my naive implementation of the LRU Cache. Most java-based examples I see utilize a LinkedHashMap and I do not.

I do appreciate your need to ensure 100% Apache compliance with the code, so I can provide an implementation of the LRU cache based on the https://commons.apache.org/proper/commons-collections/apidocs/org/apache/commons/collections4/map/LRUMap.html class instead, if you think that would be better/safer from a licensing perspective.


---
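For reference, the LinkedHashMap-based approach that "most java-based examples" use is a few lines. This is a hedged sketch, not the PR's `LRUCache`: the class name `SimpleLruCache` and the capacity handling are illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public SimpleLruCache(int capacity) {
        // accessOrder = true keeps entries ordered least-recently-used first
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once capacity is exceeded
        return size() > capacity;
    }

    public static void main(String[] args) {
        SimpleLruCache<String, Integer> cache = new SimpleLruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touching "a" makes "b" the eldest entry
        cache.put("c", 3); // exceeds capacity, so "b" is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Commons Collections' `LRUMap`, mentioned above, packages the same eviction policy behind a `Map` interface and would avoid any originality question around a hand-rolled implementation.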


[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693586#comment-16693586
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235106784
  
--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/src/main/java/org/apache/nifi/processors/pulsar/AbstractPulsarProducerProcessor.java ---
@@ -0,0 +1,472 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.pulsar;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.pulsar.PulsarClientService;
+import org.apache.nifi.pulsar.cache.LRUCache;
+import org.apache.nifi.util.StringUtils;
+import org.apache.pulsar.client.api.CompressionType;
+import org.apache.pulsar.client.api.MessageRoutingMode;
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.ProducerBuilder;
+import org.apache.pulsar.client.api.PulsarClientException;
+import org.apache.pulsar.shade.org.apache.commons.collections.CollectionUtils;
+
+public abstract class AbstractPulsarProducerProcessor extends AbstractProcessor {
+
+public static final String MSG_COUNT = "msg.count";
+public static final String TOPIC_NAME = "topic.name";
+
+static final AllowableValue COMPRESSION_TYPE_NONE = new AllowableValue("NONE", "None", "No compression");
+static final AllowableValue COMPRESSION_TYPE_LZ4 = new AllowableValue("LZ4", "LZ4", "Compress with LZ4 algorithm.");
+static final AllowableValue COMPRESSION_TYPE_ZLIB = new AllowableValue("ZLIB", "ZLIB", "Compress with ZLib algorithm");
+
+static final AllowableValue MESSAGE_ROUTING_MODE_CUSTOM_PARTITION = new AllowableValue("CustomPartition", "Custom Partition", "Route messages to a custom partition");
+static final AllowableValue MESSAGE_ROUTING_MODE_ROUND_ROBIN_PARTITION = new AllowableValue("RoundRobinPartition", "Round Robin Partition", "Route messages to all "
+ + "partitions in a round robin manner");
+static final AllowableValue MESSAGE_ROUTING_MODE_SINGLE_PARTITION = new AllowableValue("SinglePartition", "Single Partition", "Route messages to a single partition");
+
+public static final Relationship REL_SUCCESS = new Relationship.Builder()
+.name("success")
+.description("FlowFiles for which all content was sent to Pulsar.")
+.build();
+
+public static final Relationship REL_FAILURE = new Relationship.Builder()
+.name("failure")
+.description("Any FlowFile that cannot be sent to Pulsar will be routed to this Relationship")
+.build();
+
+public static final PropertyDescriptor PULSAR_CLIENT_SERVICE = new PropertyDescriptor.Builder()
+.name("PULSAR_CLIENT_SERVICE")
+

[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693566#comment-16693566
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235102831
  
--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/pom.xml ---
@@ -0,0 +1,84 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-pulsar-bundle</artifactId>
+        <version>1.9.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-pulsar-processors</artifactId>
+    <packaging>jar</packaging>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-record-serialization-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-record</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-utils</artifactId>
+            <version>1.9.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-ssl-context-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-pulsar-client-service-api</artifactId>
+            <version>1.9.0-SNAPSHOT</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.pulsar</groupId>
+            <artifactId>pulsar-client</artifactId>
+            <version>${pulsar.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <scope>test</scope>
+            <version>1.9.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-lang3</artifactId>
+            <version>3.7</version>
--- End diff --

this should move up to current release which I think is 3.8.1
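The suggested change amounts to bumping the pinned version in the processors pom. A sketch of the updated entry, assuming 3.8.1 is indeed the then-current commons-lang3 release:

```xml
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.8.1</version>
</dependency>
```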




[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693567#comment-16693567
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235102894
  
--- Diff: nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/pom.xml ---
@@ -0,0 +1,84 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-pulsar-bundle</artifactId>
+        <version>1.9.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-pulsar-processors</artifactId>
+    <packaging>jar</packaging>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-record-serialization-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-record</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-utils</artifactId>
+            <version>1.9.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-ssl-context-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-pulsar-client-service-api</artifactId>
+            <version>1.9.0-SNAPSHOT</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.pulsar</groupId>
+            <artifactId>pulsar-client</artifactId>
+            <version>${pulsar.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <scope>test</scope>
+            <version>1.9.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-lang3</artifactId>
+            <version>3.7</version>
--- End diff --

assuming we really need it






[GitHub] nifi pull request #3178: NIFI-4914: Add Apache Pulsar processors

2018-11-20 Thread joewitt
Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235102831
  
--- Diff: 
nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-processors/pom.xml ---
@@ -0,0 +1,84 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-pulsar-bundle</artifactId>
+        <version>1.9.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>nifi-pulsar-processors</artifactId>
+    <packaging>jar</packaging>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-record-serialization-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-record</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-utils</artifactId>
+            <version>1.9.0-SNAPSHOT</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-ssl-context-service-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-pulsar-client-service-api</artifactId>
+            <version>1.9.0-SNAPSHOT</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.pulsar</groupId>
+            <artifactId>pulsar-client</artifactId>
+            <version>${pulsar.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <version>1.9.0-SNAPSHOT</version>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-lang3</artifactId>
+            <version>3.7</version>
--- End diff --

this should move up to current release which I think is 3.8.1
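If the dependency is kept, the suggested bump is a one-line version change in the processors POM (3.8.1 per the comment above; the current release should be re-checked at merge time):

```xml
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.8.1</version>
</dependency>
```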


---


[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693561#comment-16693561
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235102049
  
--- Diff: 
nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service/src/main/java/org/apache/nifi/pulsar/StandardPulsarClientService.java
 ---
@@ -0,0 +1,303 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.pulsar;
+
+import java.net.MalformedURLException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.pulsar.client.api.ClientBuilder;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.PulsarClientException;
+import 
org.apache.pulsar.client.api.PulsarClientException.UnsupportedAuthenticationException;
+import org.apache.pulsar.client.impl.auth.AuthenticationTls;
+
+public class StandardPulsarClientService extends AbstractControllerService 
implements PulsarClientService {
+
+public static final PropertyDescriptor PULSAR_SERVICE_URL = new 
PropertyDescriptor.Builder()
+.name("PULSAR_SERVICE_URL")
+.displayName("Pulsar Service URL")
+.description("URL for the Pulsar cluster, e.g localhost:6650")
+.required(true)
+.addValidator(StandardValidators.HOSTNAME_PORT_LIST_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.build();
+
+public static final PropertyDescriptor 
ACCEPT_UNTRUSTED_TLS_CERTIFICATE_FROM_BROKER = new PropertyDescriptor.Builder()
+.name("ACCEPT_UNTRUSTED_TLS_CERTIFICATE_FROM_BROKER")
+.displayName("Allow TLS insecure connection")
+.description("If a valid trusted certificate is provided in 
the 'TLS Trust Certs File Path' property of this controller service,"
++ " then, by default, all communication between this 
controller service and the Apache Pulsar broker will be secured via"
++ " TLS and only use the trusted TLS certificate from 
broker. Setting this property to 'false' will allow this controller"
++ " service to accept an untrusted TLS certificate 
from broker as well. This property should only be set to false if you trust"
++ " the broker you are connecting to, but do not have 
access to the TLS certificate file.")
+.required(false)
+.allowableValues("true", "false")
+.defaultValue("false")
+.build();
+
+public static final PropertyDescriptor CONCURRENT_LOOKUP_REQUESTS = 
new PropertyDescriptor.Builder()
+.name("CONCURRENT_LOOKUP_REQUESTS")
+.displayName("Maximum concurrent lookup-requests")
+.description("Number of concurrent lookup-requests allowed on 
each broker-connection.")
+.required(false)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+

[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693557#comment-16693557
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235101491
  
--- Diff: 
nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service/src/main/java/org/apache/nifi/pulsar/StandardPulsarClientService.java
 ---
@@ -0,0 +1,303 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.pulsar;
+
+import java.net.MalformedURLException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.pulsar.client.api.ClientBuilder;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.PulsarClientException;
+import 
org.apache.pulsar.client.api.PulsarClientException.UnsupportedAuthenticationException;
+import org.apache.pulsar.client.impl.auth.AuthenticationTls;
+
+public class StandardPulsarClientService extends AbstractControllerService 
implements PulsarClientService {
+
+public static final PropertyDescriptor PULSAR_SERVICE_URL = new 
PropertyDescriptor.Builder()
+.name("PULSAR_SERVICE_URL")
+.displayName("Pulsar Service URL")
+.description("URL for the Pulsar cluster, e.g localhost:6650")
+.required(true)
+.addValidator(StandardValidators.HOSTNAME_PORT_LIST_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.build();
+
+public static final PropertyDescriptor 
ACCEPT_UNTRUSTED_TLS_CERTIFICATE_FROM_BROKER = new PropertyDescriptor.Builder()
--- End diff --

@david-streamlio this is the property I am saying we should eliminate. It's 
fine that Pulsar supports this, but we don't need to from NiFi. We spend a 
ton of time/energy on security and we need to get more serious about it. I see no 
reason to keep this here. If we later find this to be a real problem then we 
could talk about ways to improve the situation by helping them generate proper 
certs, etc. Also, I don't think we can claim this is ok 'if you trust the broker 
you're connecting to' - the point is it could be any broker... This is 
inherently not a good thing to use. We should solve the problem. We can do 
that later.


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693545#comment-16693545
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3178#discussion_r235099937
  
--- Diff: 
nifi-nar-bundles/nifi-pulsar-bundle/nifi-pulsar-client-service-api/src/main/java/org/apache/nifi/pulsar/cache/LRUCache.java
 ---
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.pulsar.cache;
+
+import java.io.Closeable;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+public class LRUCache {
--- End diff --

@david-streamlio can you confirm this is a uniquely written class for this 
purpose and not taken from elsewhere?  I ask because I've had the displeasure 
of finding copied code before and it makes for some build/release messes and 
this is the type of thing that often hits that trigger.  This is fresh/clean 
room stuff?
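Using only the JDK, clean-room LRU semantics are small enough to verify by eye, which is one way to address the provenance concern: a `LinkedHashMap` constructed in access order, with `removeEldestEntry` overridden, yields the eviction policy. A minimal sketch (the class name and shape here are illustrative, not necessarily the PR's `LRUCache`):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Clean-room LRU cache sketch: a LinkedHashMap in access order evicts the
// least-recently-used entry via removeEldestEntry once capacity is exceeded.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    SimpleLruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true -> iteration order is LRU
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the eldest entry when over capacity
    }

    public static void main(String[] args) {
        SimpleLruCache<String, Integer> cache = new SimpleLruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a": "b" is now least recently used
        cache.put("c", 3); // exceeds capacity, evicts "b"
        if (!cache.containsKey("a") || cache.containsKey("b") || !cache.containsKey("c")) {
            throw new AssertionError("unexpected eviction order");
        }
        System.out.println("evicted b, kept a and c");
    }
}
```

The `get` before the third `put` is what distinguishes LRU from plain FIFO eviction: touching an entry moves it to the back of the eviction order.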


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693535#comment-16693535
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/3178
  
cool thanks.  can you please take a look at my comment regarding the 
security related property.


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFIREG-211) Add extension bundles as a type of versioned item

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693487#comment-16693487
 ] 

ASF GitHub Bot commented on NIFIREG-211:


Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/148
  
Before releasing any of this we would definitely have sections in the user 
guide and admin guide related to extension bundles, but we can't really write 
all that until the work is done, and this is only a starting point for the work.

Most of this is just integrating a new type of versioned item into the 
existing registry framework, so if you aren't already familiar with how 
registry works, then a starting point would probably be to play around with 
version controlling flows, and go through all the existing documentation, and 
then think of extension bundles as just another versioned thing like flows.

You can access the swagger documentation for the REST API from the running 
application:

http://localhost:18080/nifi-registry-api/swagger/ui.html

You can also get a feel for how the current API works from looking at this 
integration test:


https://github.com/apache/nifi-registry/blob/7744072b5f059fe5240746e968493eddefa719a6/nifi-registry-core/nifi-registry-web-api/src/test/java/org/apache/nifi/registry/web/api/UnsecuredNiFiRegistryClientIT.java#L320-L469


> Add extension bundles as a type of versioned item
> -
>
> Key: NIFIREG-211
> URL: https://issues.apache.org/jira/browse/NIFIREG-211
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 0.4.0
>
>
> This ticket is to capture the work for adding extension bundles to NiFi 
> Registry.
> This work may require several follow on tickets, but at a high-level will 
> include some of the following:
> - Add a new type of item called an extension bundle, where each bundle
>  can contain one or more extensions or APIs
>  
>  - Support bundles for traditional NiFi (aka NARs) and also bundles for
>  MiNiFi CPP
>  
>  - Ability to upload the binary artifact for a bundle and extract the
>  metadata about the bundle, and metadata about the extensions contained
>  in the bundle (more on this later)
>  
>  - Provide a pluggable storage provider for saving the content of each
>  extension bundle so that we can have different implementations like
>  local filesystem, S3, and other object stores
>  
>  - Provide a REST API for listing and retrieving available bundles,
>  integrate this into the registry Java client and NiFi CLI
> - Security considerations such as checksums and cryptographic signatures for 
> bundles



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFIREG-211) Add extension bundles as a type of versioned item

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693476#comment-16693476
 ] 

ASF GitHub Bot commented on NIFIREG-211:


Github user ottobackwards commented on the issue:

https://github.com/apache/nifi-registry/pull/148
  
I don't want to pepper you with questions on this, is there some 
documentation that goes along with this? 


> Add extension bundles as a type of versioned item
> -
>
> Key: NIFIREG-211
> URL: https://issues.apache.org/jira/browse/NIFIREG-211
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 0.4.0
>
>
> This ticket is to capture the work for adding extension bundles to NiFi 
> Registry.
> This work may require several follow on tickets, but at a high-level will 
> include some of the following:
> - Add a new type of item called an extension bundle, where each bundle
>  can contain one or more extensions or APIs
>  
>  - Support bundles for traditional NiFi (aka NARs) and also bundles for
>  MiNiFi CPP
>  
>  - Ability to upload the binary artifact for a bundle and extract the
>  metadata about the bundle, and metadata about the extensions contained
>  in the bundle (more on this later)
>  
>  - Provide a pluggable storage provider for saving the content of each
>  extension bundle so that we can have different implementations like
>  local filesystem, S3, and other object stores
>  
>  - Provide a REST API for listing and retrieving available bundles,
>  integrate this into the registry Java client and NiFi CLI
> - Security considerations such as checksums and cryptographic signatures for 
> bundles



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFIREG-211) Add extension bundles as a type of versioned item

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693460#comment-16693460
 ] 

ASF GitHub Bot commented on NIFIREG-211:


Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/148
  
All versioned items (flows and now bundles) live in a bucket, and a bucket 
is where security policies are applied. So each bucket can act as a kind of 
mini extension repo; there could be a bucket for project NARs and another 
bucket for other NARs. 

The main rules are the following...

- Within a bucket, the bundle coordinate is unique, so you can't upload a 
NAR with the same group+artifact+version to the same bucket twice

- Across buckets you CAN upload NARs with the same group+artifact+version, 
BUT they must have the same checksum (currently SHA-256) which ensures they are 
actually the same bundle.
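The two rules above can be sketched in a few lines of plain Java. Everything here (`BundleIndex`, `upload`, the exception types) is an illustrative model, not the registry's actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Sketch of the two upload rules:
//   1) a bundle coordinate (group+artifact+version) is unique within a bucket;
//   2) the same coordinate may appear in other buckets only with an
//      identical SHA-256 checksum, i.e. it must be the same bundle.
class BundleIndex {
    private final Map<String, Map<String, String>> buckets = new HashMap<>(); // bucket -> (coordinate -> sha256)
    private final Map<String, String> knownChecksums = new HashMap<>();       // coordinate -> sha256 seen anywhere

    static String sha256Hex(byte[] content) throws Exception {
        StringBuilder hex = new StringBuilder();
        for (byte b : MessageDigest.getInstance("SHA-256").digest(content)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    void upload(String bucket, String coordinate, byte[] content) throws Exception {
        String sha = sha256Hex(content);
        Map<String, String> inBucket = buckets.computeIfAbsent(bucket, b -> new HashMap<>());
        if (inBucket.containsKey(coordinate)) {            // rule 1: unique per bucket
            throw new IllegalStateException(coordinate + " already exists in bucket " + bucket);
        }
        String known = knownChecksums.get(coordinate);
        if (known != null && !known.equals(sha)) {         // rule 2: same checksum across buckets
            throw new IllegalStateException(coordinate + " exists elsewhere with a different checksum");
        }
        inBucket.put(coordinate, sha);
        knownChecksums.put(coordinate, sha);
    }

    public static void main(String[] args) throws Exception {
        BundleIndex index = new BundleIndex();
        byte[] nar = "nar-bytes".getBytes(StandardCharsets.UTF_8);
        index.upload("bucketA", "org.example:foo-nar:1.0.0", nar);
        index.upload("bucketB", "org.example:foo-nar:1.0.0", nar); // same checksum: allowed
        try {
            index.upload("bucketA", "org.example:foo-nar:1.0.0", nar);
            throw new AssertionError("duplicate coordinate in a bucket must be rejected");
        } catch (IllegalStateException expected) { }
        try {
            index.upload("bucketC", "org.example:foo-nar:1.0.0",
                    "different-bytes".getBytes(StandardCharsets.UTF_8));
            throw new AssertionError("checksum mismatch across buckets must be rejected");
        } catch (IllegalStateException expected) { }
        System.out.println("bundle upload rules hold");
    }
}
```

Keying the cross-bucket check on the checksum rather than the coordinate alone is what guarantees that two buckets advertising the same group+artifact+version are serving byte-identical bundles.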


> Add extension bundles as a type of versioned item
> -
>
> Key: NIFIREG-211
> URL: https://issues.apache.org/jira/browse/NIFIREG-211
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 0.4.0
>
>
> This ticket is to capture the work for adding extension bundles to NiFi 
> Registry.
> This work may require several follow on tickets, but at a high-level will 
> include some of the following:
> - Add a new type of item called an extension bundle, where each bundle
>  can contain one or more extensions or APIs
>  
>  - Support bundles for traditional NiFi (aka NARs) and also bundles for
>  MiNiFi CPP
>  
>  - Ability to upload the binary artifact for a bundle and extract the
>  metadata about the bundle, and metadata about the extensions contained
>  in the bundle (more on this later)
>  
>  - Provide a pluggable storage provider for saving the content of each
>  extension bundle so that we can have different implementations like
>  local filesystem, S3, and other object stores
>  
>  - Provide a REST API for listing and retrieving available bundles,
>  integrate this into the registry Java client and NiFi CLI
> - Security considerations such as checksums and cryptographic signatures for 
> bundles



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFIREG-211) Add extension bundles as a type of versioned item

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693444#comment-16693444
 ] 

ASF GitHub Bot commented on NIFIREG-211:


Github user ottobackwards commented on the issue:

https://github.com/apache/nifi-registry/pull/148
  
What are some of the rules that can be tested?   Like uploading with 
duplicate names etc?


> Add extension bundles as a type of versioned item
> -
>
> Key: NIFIREG-211
> URL: https://issues.apache.org/jira/browse/NIFIREG-211
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 0.4.0
>
>
> This ticket is to capture the work for adding extension bundles to NiFi 
> Registry.
> This work may require several follow on tickets, but at a high-level will 
> include some of the following:
> - Add a new type of item called an extension bundle, where each bundle
>  can contain one or more extensions or APIs
>  
>  - Support bundles for traditional NiFi (aka NARs) and also bundles for
>  MiNiFi CPP
>  
>  - Ability to upload the binary artifact for a bundle and extract the
>  metadata about the bundle, and metadata about the extensions contained
>  in the bundle (more on this later)
>  
>  - Provide a pluggable storage provider for saving the content of each
>  extension bundle so that we can have different implementations like
>  local filesystem, S3, and other object stores
>  
>  - Provide a REST API for listing and retrieving available bundles,
>  integrate this into the registry Java client and NiFi CLI
> - Security considerations such as checksums and cryptographic signatures for 
> bundles



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4731) BigQuery processors

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693454#comment-16693454
 ] 

ASF GitHub Bot commented on NIFI-4731:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/3019
  
rebased against master if anyone wants to give it a try ;)


> BigQuery processors
> ---
>
> Key: NIFI-4731
> URL: https://issues.apache.org/jira/browse/NIFI-4731
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mikhail Sosonkin
>Priority: Major
>
> NIFI should have processors for putting data into BigQuery (Streaming and 
> Batch).
> Initial working processors can be found this repository: 
> https://github.com/nologic/nifi/tree/NIFI-4731/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/bigquery
> I'd like to get them into Nifi proper.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (NIFIREG-211) Add extension bundles as a type of versioned item

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693442#comment-16693442
 ] 

ASF GitHub Bot commented on NIFIREG-211:


Github user ottobackwards commented on the issue:

https://github.com/apache/nifi-registry/pull/148
  
Would it make sense to have separate areas to store the project nars vs 
other nars?


> Add extension bundles as a type of versioned item
> -
>
> Key: NIFIREG-211
> URL: https://issues.apache.org/jira/browse/NIFIREG-211
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 0.4.0
>
>
> This ticket is to capture the work for adding extension bundles to NiFi 
> Registry.
> This work may require several follow on tickets, but at a high-level will 
> include some of the following:
> - Add a new type of item called an extension bundle, where each bundle
>  can contain one or more extensions or APIs
>  
>  - Support bundles for traditional NiFi (aka NARs) and also bundles for
>  MiNiFi CPP
>  
>  - Ability to upload the binary artifact for a bundle and extract the
>  metadata about the bundle, and metadata about the extensions contained
>  in the bundle (more on this later)
>  
>  - Provide a pluggable storage provider for saving the content of each
>  extension bundle so that we can have different implementations like
>  local filesystem, S3, and other object stores
>  
>  - Provide a REST API for listing and retrieving available bundles,
>  integrate this into the registry Java client and NiFi CLI
> - Security considerations such as checksums and cryptographic signatures for 
> bundles



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693443#comment-16693443
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user david-streamlio commented on the issue:

https://github.com/apache/nifi/pull/3178
  
@joewitt I dug into the build logs and found that the issue with the 
parallel build was due to having the wrong version for both of the Pulsar NAR 
files defined in the NiFi-assembly/pom.xml file. Which I fixed and pushed in 
the second 
[commit](https://github.com/apache/nifi/pull/3178/commits/0c85ce122b1bf06149320a2475aa396838de9493).
 
```xml
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-pulsar-client-service-nar</artifactId>
    <version>1.9.0-SNAPSHOT</version>
    <type>nar</type>
</dependency>
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-pulsar-nar</artifactId>
    <version>1.9.0-SNAPSHOT</version>
    <type>nar</type>
</dependency>
```


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693424#comment-16693424
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/3178
  
just did a full clean build with 6 threads on an older macbook and no 
problems.  i'll try to reproduce my build issue from yesterday on a much higher 
thread build/machine and advise if issue i saw remains


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-211) Add extension bundles as a type of versioned item

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFIREG-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693418#comment-16693418
 ] 

ASF GitHub Bot commented on NIFIREG-211:


GitHub user bbende opened a pull request:

https://github.com/apache/nifi-registry/pull/148

NIFIREG-211 Initial work for adding extension bundles to NiFi Registry

This PR includes the foundational work for adding extension bundles as a 
new type of versioned item. With this PR it is possible to upload NARs via the 
REST API with them being stored in the local filesystem persistence provider, 
and then interact with them through various REST APIs.

In the current state of this PR, registry does not have a way to know 
anything about the actual extensions contained in a bundle. This will need to 
come from a component descriptor provided within the NAR which will then be 
parsed and extracted on registry side.

We can leave NIFIREG-211 open after merging this so that we can continue to 
work against it and build out the additional functionality.

A NAR can be uploaded by making a multi-part form POST to the end-point:

_buckets/{bucketId}/extensions/bundles/{bundleType}_

An example using curl would be:

`curl -v -F file=@./nifi-example-processors-nar-1.0.0.nar 
http://localhost:18080/nifi-registry-api/buckets/65d6d70d-ecb7-478f-bbc4-1e378e52827c/extensions/bundles/nifi-nar`
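
For illustration, the multipart/form-data body that such a POST carries can be sketched as follows (a generic builder, not registry or NiFi code; a real client would send it with a matching `Content-Type: multipart/form-data; boundary=...` header, e.g. via `java.net.http.HttpClient`):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Generic sketch of a multipart/form-data body like the one the curl
// command above sends: one file part with a Content-Disposition header,
// the raw file bytes, and a closing boundary.
public class MultipartBody {
    public static byte[] build(String fieldName, String fileName,
                               byte[] content, String boundary) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        String header = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"" + fieldName
                + "\"; filename=\"" + fileName + "\"\r\n"
                + "Content-Type: application/octet-stream\r\n\r\n";
        out.write(header.getBytes(StandardCharsets.UTF_8));
        out.write(content); // the NAR bytes go here
        out.write(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));
        return out.toByteArray();
    }
}
```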

From there you can interact with the REST API from the swagger docs.

I've also started a branch of NiFi with CLI commands related to extension 
bundles:
https://github.com/bbende/nifi/tree/extensions

If you run a full build of this PR first with mvn clean install, and then a 
build of the NiFi branch, you can launch the CLI in 
nifi-toolkit/nifi-toolkit-assembly/target/nifi-toolkit-1.9.0-SNAPSHOT-bin/nifi-toolkit-1.9.0-SNAPSHOT/bin/cli.sh
 and run the "upload-nars" command:

`registry upload-nars -b 024dba45-95c2-4a54-b79d-0abe1bb1122a -ebd 
/path/to/nifi-releases/nifi-1.8.0/lib -u http://localhost:18080 -verbose`

This should upload all of the NARs from the 1.8.0 release to your local 
registry.

Here is the commit history for all the changes made in this PR:

- Setting up DB tables and entities for extensions
- Updated MetadataService and DatabaseMetadataService with new methods for 
extension entities
- Added ON DELETE CASCADE to existing tables and simplified delete logic 
for buckets and flows
- Created data model for extension bundles and mapping to/from DB entities
- Created ExtensionBundleExtractor with an implementation for NARs
- Setup LinkService and LinkBuilder for extension bundles
- Setup pluggable persistence provider for extension bundles and 
implemented a local file-system provider.
- Refactored LinkService and add links for all extension related items
- Changed extension service to write out bundles to a temp directory before 
extracting and passing to persistence provider
- Implemented multi-part form upload for extension bundles
- Upgraded to spring-boot 2.1.0
- Added SHA-256 checksums for bundle versions
- Initial client methods for uploading and retrieving bundles
- Configuring NiFi Registry Jersey client to use chunked entity processing 
so we don't load the entire bundle content into memory during an upload
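
As a rough sketch of the SHA-256 checksum step mentioned above (not the registry's actual implementation; just the standard JDK way to digest a bundle file without loading it fully into memory):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Streams a file through MessageDigest and returns the hex-encoded
// SHA-256, the kind of per-version checksum the registry stores.
public class BundleChecksum {
    public static String sha256Hex(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read); // digest incrementally, chunk by chunk
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```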

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi-registry extensions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-registry/pull/148.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #148


commit 7744072b5f059fe5240746e968493eddefa719a6
Author: Bryan Bende 
Date:   2018-11-20T15:57:08Z

NIFIREG-211 Initial work for adding extension bundles to NiFi Registry
- Setting up DB tables and entities for extensions
- Updated MetadataService and DatabaseMetadataService with new methods for 
extension entities
- Added ON DELETE CASCADE to existing tables and simplified delete logic 
for buckets and flows
- Created data model for extension bundles and mapping to/from DB entities
- Created ExtensionBundleExtractor with an implementation for NARs
- Setup LinkService and LinkBuilder for extension bundles
- Setup pluggable persistence provider for extension bundles and 
implemented a local file-system provider.
- Refactored LinkService and add links for all extension related items
- Changed extension service to write out bundles to a temp directory before 
extracting and passing to persistence provider
- Implemented multi-part form upload for extension bundles
- Upgraded to spring-boot 2.1.0
- Added SHA-256 checksums for bundle versions
- Initial client methods for uploading and retrieving bundles
- Configuring NiFi Registry Jersey client to use chunked entity processing 
so we don't load the entire bundle content into memory during an upload


[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693412#comment-16693412
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user david-streamlio commented on the issue:

https://github.com/apache/nifi/pull/3178
  
@joewitt I have incorporated all of the changes / corrections that came 
from both your comments as well as @pvillard31's 
[changes](https://github.com/apache/nifi/pull/2882#issuecomment-431839080) from 
September 21st.  

I am more of a git novice than a git expert, so the last two times I tried 
to rebase, squash, etc. I ended up either in a state where the PR wouldn't 
merge, or I had brought in several hundred commits from other people, or the 
branch was rejected. Since this module is self-contained, I found it 
easier to just start with a fresh branch off of the latest master, so I have 
been doing that for the 1.6.x, 1.7.x, 1.8.x, and now 1.9.x releases.


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-5744) Put exception message to attribute while ExecuteSQL fail

2018-11-20 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693406#comment-16693406
 ] 

ASF subversion and git services commented on NIFI-5744:
---

Commit 3c7012ffda777b4692e4ea8db042ee1943a7a66a in nifi's branch 
refs/heads/master from yjhyjhyjh0
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3c7012f ]

NIFI-5744: Put exception message to attribute while ExecuteSQL fail

This closes #3107.

Signed-off-by: Peter Wicks 


> Put exception message to attribute while ExecuteSQL fail
> 
>
> Key: NIFI-5744
> URL: https://issues.apache.org/jira/browse/NIFI-5744
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.1
>Reporter: Deon Huang
>Assignee: Deon Huang
>Priority: Minor
>
> In some scenarios, it would be great if we could have different behavior based 
> on the exception, with better error tracking afterwards via an attribute 
> instead of only in the log.
> For example, if it's a connection-refused exception due to a wrong URL, 
> we won't want to retry, and an error-message attribute would be helpful to 
> keep track of.
> In another scenario, where the database is temporarily unavailable, we should 
> retry based on a retryable exception.
> Should be a quick fix in AbstractExecuteSQL before transferring the FlowFile 
> to the failure relationship
> {code:java}
>  session.transfer(fileToProcess, REL_FAILURE);
> {code}
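
A minimal sketch of the proposed improvement (illustrative only; the attribute key and helper class are assumptions, not NiFi's actual code):

```java
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: attach the SQL exception's message as a flow-file
// attribute before routing to the failure relationship, so downstream
// processors can branch on the error text instead of reading the log.
public class SqlFailureAttributes {
    // Attribute key is an assumption for illustration.
    static final String ERROR_MESSAGE_ATTR = "executesql.error.message";

    public static Map<String, String> withErrorAttribute(Map<String, String> attributes, SQLException e) {
        Map<String, String> updated = new HashMap<>(attributes);
        updated.put(ERROR_MESSAGE_ATTR, e.getMessage());
        return updated; // in NiFi this would be session.putAttribute(...) before REL_FAILURE
    }
}
```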



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5744) Put exception message to attribute while ExecuteSQL fail

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693408#comment-16693408
 ] 

ASF GitHub Bot commented on NIFI-5744:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/3107


> Put exception message to attribute while ExecuteSQL fail
> 
>
> Key: NIFI-5744
> URL: https://issues.apache.org/jira/browse/NIFI-5744
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.1
>Reporter: Deon Huang
>Assignee: Deon Huang
>Priority: Minor
>
> In some scenarios, it would be great if we could have different behavior based 
> on the exception, with better error tracking afterwards via an attribute 
> instead of only in the log.
> For example, if it's a connection-refused exception due to a wrong URL, 
> we won't want to retry, and an error-message attribute would be helpful to 
> keep track of.
> In another scenario, where the database is temporarily unavailable, we should 
> retry based on a retryable exception.
> Should be a quick fix in AbstractExecuteSQL before transferring the FlowFile 
> to the failure relationship
> {code:java}
>  session.transfer(fileToProcess, REL_FAILURE);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-5744) Put exception message to attribute while ExecuteSQL fail

2018-11-20 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693402#comment-16693402
 ] 

ASF subversion and git services commented on NIFI-5744:
---

Commit 9358a60d33be966233d80b00c07928ccb17c4059 in nifi's branch 
refs/heads/NIFI-5744_a from yjhyjhyjh0
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=9358a60 ]

NIFI-5744: Put exception message to attribute while ExecuteSQL fail

This closes #3107.

Signed-off-by: Peter Wicks 


> Put exception message to attribute while ExecuteSQL fail
> 
>
> Key: NIFI-5744
> URL: https://issues.apache.org/jira/browse/NIFI-5744
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.1
>Reporter: Deon Huang
>Assignee: Deon Huang
>Priority: Minor
>
> In some scenarios, it would be great if we could have different behavior based 
> on the exception, with better error tracking afterwards via an attribute 
> instead of only in the log.
> For example, if it's a connection-refused exception due to a wrong URL, 
> we won't want to retry, and an error-message attribute would be helpful to 
> keep track of.
> In another scenario, where the database is temporarily unavailable, we should 
> retry based on a retryable exception.
> Should be a quick fix in AbstractExecuteSQL before transferring the FlowFile 
> to the failure relationship
> {code:java}
>  session.transfer(fileToProcess, REL_FAILURE);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693397#comment-16693397
 ] 

ASF GitHub Bot commented on NIFI-4914:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/3178
  
@david-streamlio i had comments on the previous PR.  Can you please review 
those/pull them forward to this.  In the future you could rebase, squash, and 
force push the PR if you want to get back to a clean state but the history 
could all stay on the same PR.  It helps both you and reviewer.  I'd love to 
help you get this thing in but it is a bit elusive to get the timing right 
between contrib, review cycles, pr resets, etc..  


> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, 
> PublishPulsarRecord
> --
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: David Kjerrumgaard
>Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Updated] (MINIFICPP-621) Create log aggregator ECU for CAPI

2018-11-20 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-621:
-
Summary: Create log aggregator ECU for CAPI   (was: Crate log aggregator 
ECU for CAPI )

> Create log aggregator ECU for CAPI 
> ---
>
> Key: MINIFICPP-621
> URL: https://issues.apache.org/jira/browse/MINIFICPP-621
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: Mr TheSegfault
>Priority: Major
>  Labels: CAPI, ECU, nanofi
>
> Create examples that can tail log files (perhaps using TailFile) that work 
> in Windows and *nix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-673) Define ECU (Edge Collector Unit )

2018-11-20 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-673:
-
Description: 
Edge Collector Units ( ECU ) will be minimized agents whose sole focus is 
collection and retrieval of data. Defined as feature facing constructs, ECUs 
will define functionality that is agnostic of system. We'll build the 
portability behind the library. 

 

[~aboda] This ticket encompasses building software from individual units of 
functionality. So take the log aggregation example in MINIFICPP-621 . There we 
have an instance of "tail file"  linked to site to site. To do this will 
require the ability to link these units ( we've called them building blocks, 
but that's more documentation ), stream data between them, and then define 
success and failure conditions ( or relationships ). 

  was:Edge Collector Units ( ECU ) will be minimized agents whose sole focus is 
collection and retrieval of data. Defined as feature facing constructs, ECUs 
will define functionality that is agnostic of system. We'll build the 
portability behind the library. 


> Define ECU (Edge Collector Unit )
> -
>
> Key: MINIFICPP-673
> URL: https://issues.apache.org/jira/browse/MINIFICPP-673
> Project: NiFi MiNiFi C++
>  Issue Type: Epic
>Reporter: Mr TheSegfault
>Priority: Major
>  Labels: ECU, nanofi
>
> Edge Collector Units (ECUs) will be minimized agents whose sole focus is 
> collection and retrieval of data. Defined as feature-facing constructs, ECUs 
> will define functionality that is agnostic of the underlying system. We'll 
> build the portability behind the library. 
>  
> [~aboda] This ticket encompasses building software from individual units of 
> functionality. Take the log aggregation example in MINIFICPP-621: there 
> we have an instance of "tail file" linked to site-to-site. Doing this will 
> require the ability to link these units (we've called them building blocks, 
> but that's more documentation), stream data between them, and then define 
> success and failure conditions (or relationships). 





[jira] [Commented] (NIFI-4862) Copy original FlowFile attributes to output FlowFiles at SelectHiveQL processor

2018-11-20 Thread patrick white (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693310#comment-16693310
 ] 

patrick white commented on NIFI-4862:
-

Thanks much [~ijokarumawak], went with that, 1.6.0 built and unit tested 
successfully with #2605.

[~mattyb149], thanks much for the PR.

 

> Copy original FlowFile attributes to output FlowFiles at SelectHiveQL 
> processor
> ---
>
> Key: NIFI-4862
> URL: https://issues.apache.org/jira/browse/NIFI-4862
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Jakub Leś
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.7.0
>
> Attachments: 
> 0001-NIFI-4862-Add-Copy-original-attributtes-to-SelectHiv.patch
>
>
> Hi, 
> Please add "Copy original attributes" to the SelectHiveQL processor, so that 
> we can use HttpRequest and HttpResponse to synchronize fetching query results 
> in a web service.
>  
> UPDATED:
> SelectHiveQL creates new FlowFiles from Hive query result sets. When it also 
> has incoming FlowFiles, it should create new FlowFiles from the input 
> FlowFile, so that it can copy original FlowFile attributes to output 
> FlowFiles.





[jira] [Commented] (NIFI-5812) Make database processors as 'PrimaryNodeOnly'

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693226#comment-16693226
 ] 

ASF GitHub Bot commented on NIFI-5812:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/3167
  
@zenfenan If you're too busy, I can make that change and merge.


> Make database processors as 'PrimaryNodeOnly'
> -
>
> Key: NIFI-5812
> URL: https://issues.apache.org/jira/browse/NIFI-5812
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Affects Versions: 1.7.0, 1.8.0, 1.7.1
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> With NIFI-543, we introduced a behavior annotation to mark a particular 
> processor to run only on the Primary Node. It is recommended to mark the 
> following database-related processors as 'PrimaryNodeOnly':
>  * QueryDatabaseTable
>  * GenerateTableFetch
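The effect of such a marker annotation can be illustrated with a plain-JDK sketch. The `@PrimaryNodeOnly` annotation below is a hypothetical stand-in declared locally for the demo, not the real class from `nifi-api`; in NiFi the framework, not reflection in user code, enforces the scheduling restriction:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in marker annotation (assumption: a simple runtime-retained type
// annotation, mirroring the idea of NiFi's PrimaryNodeOnly).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface PrimaryNodeOnly {}

// Illustrative processor stub; the real GenerateTableFetch extends
// AbstractProcessor and lives in nifi-standard-processors.
@PrimaryNodeOnly
class GenerateTableFetchStub {}

public class PrimaryNodeOnlyDemo {
    public static void main(String[] args) {
        // A scheduler could inspect the marker like this before deciding
        // whether a node other than the primary may run the processor.
        boolean marked = GenerateTableFetchStub.class.isAnnotationPresent(PrimaryNodeOnly.class);
        System.out.println("primary-node-only: " + marked); // prints "primary-node-only: true"
    }
}
```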



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5812) Make database processors as 'PrimaryNodeOnly'

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693225#comment-16693225
 ] 

ASF GitHub Bot commented on NIFI-5812:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3167#discussion_r234999192
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -114,6 +114,7 @@
 + "max value for max value columns. Properties should be added in 
the format `initial.maxvalue.`. This value is only used the 
first time "
 + "the table is accessed (when a Maximum Value Column is 
specified). In the case of incoming connections, the value is only used the 
first time for each table "
 + "specified in the flow files.")
+@PrimaryNodeOnly
--- End diff --

Agreed


> Make database processors as 'PrimaryNodeOnly'
> -
>
> Key: NIFI-5812
> URL: https://issues.apache.org/jira/browse/NIFI-5812
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Affects Versions: 1.7.0, 1.8.0, 1.7.1
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> With NIFI-543, we introduced a behavior annotation to mark a particular 
> processor to run only on the Primary Node. It is recommended to mark the 
> following database-related processors as 'PrimaryNodeOnly':
>  * QueryDatabaseTable
>  * GenerateTableFetch





[GitHub] nifi issue #3167: NIFI-5812: Marked Database processors as 'PrimaryNodeOnly'

2018-11-20 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/3167
  
@zenfenan If you're too busy, I can make that change and merge.


---


[GitHub] nifi pull request #3167: NIFI-5812: Marked Database processors as 'PrimaryNo...

2018-11-20 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/3167#discussion_r234999192
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java
 ---
@@ -114,6 +114,7 @@
 + "max value for max value columns. Properties should be added in 
the format `initial.maxvalue.`. This value is only used the 
first time "
 + "the table is accessed (when a Maximum Value Column is 
specified). In the case of incoming connections, the value is only used the 
first time for each table "
 + "specified in the flow files.")
+@PrimaryNodeOnly
--- End diff --

Agreed


---


[jira] [Commented] (MINIFICPP-684) ExtractText processor doesn't handle "Size limit" property

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693196#comment-16693196
 ] 

ASF GitHub Bot commented on MINIFICPP-684:
--

GitHub user arpadboda opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/446

MINIFICPP-684 - ExtractText processor doesn't handle "Size limit" property

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arpadboda/nifi-minifi-cpp MINIFICPP-684

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/446.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #446


commit a04e6c7975e164e7631aa06c77d144b862cfaf15
Author: Arpad Boda 
Date:   2018-11-20T12:56:48Z

MINIFICPP-684 - ExtractText processor doesn't handle "Size limit" property




> ExtractText processor doesn't handle "Size limit" property
> --
>
> Key: MINIFICPP-684
> URL: https://issues.apache.org/jira/browse/MINIFICPP-684
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Arpad Boda
>Assignee: Arpad Boda
>Priority: Major
> Fix For: 0.6.0
>
>
> Size limit property is ignored (it cannot even be set); the whole content is 
> extracted to the attribute.
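The behavior the "Size limit" property is supposed to enforce can be sketched in a few lines. This is an illustrative assumption about the intended semantics (cap the number of content bytes copied into the attribute), not the processor's actual C++ implementation:

```java
import java.nio.charset.StandardCharsets;

// Sketch: extract at most `limit` bytes of the content into the attribute
// value, instead of copying the whole content.
public class SizeLimitSketch {
    static String extractWithLimit(byte[] content, int limit) {
        int n = Math.min(content.length, limit); // cap at the size limit
        return new String(content, 0, n, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] content = "0123456789".getBytes(StandardCharsets.UTF_8);
        System.out.println(extractWithLimit(content, 4)); // prints "0123"
    }
}
```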





[GitHub] nifi-minifi-cpp pull request #446: MINIFICPP-684 - ExtractText processor doe...

2018-11-20 Thread arpadboda
GitHub user arpadboda opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/446

MINIFICPP-684 - ExtractText processor doesn't handle "Size limit" property

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arpadboda/nifi-minifi-cpp MINIFICPP-684

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/446.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #446


commit a04e6c7975e164e7631aa06c77d144b862cfaf15
Author: Arpad Boda 
Date:   2018-11-20T12:56:48Z

MINIFICPP-684 - ExtractText processor doesn't handle "Size limit" property




---


[jira] [Created] (MINIFICPP-684) ExtractText processor doesn't handle "Size limit" property

2018-11-20 Thread Arpad Boda (JIRA)
Arpad Boda created MINIFICPP-684:


 Summary: ExtractText processor doesn't handle "Size limit" property
 Key: MINIFICPP-684
 URL: https://issues.apache.org/jira/browse/MINIFICPP-684
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: Arpad Boda
Assignee: Arpad Boda
 Fix For: 0.6.0


Size limit property is ignored (it cannot even be set); the whole content is 
extracted to the attribute.





[jira] [Updated] (NIFI-5832) PutHiveQL - Flowfile isn't transferred to failure rel on actual failure

2018-11-20 Thread Ed Berezitsky (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ed Berezitsky updated NIFI-5832:

Description: 
PutHiveQL gets stuck if an error occurs while processing a flow file that 
contains multiple statements.

Example:

 
{code:java}
set tez.queue.name=qwe;
create table t_table1 (s string) stored as orc;{code}
 

This will fail if such a queue doesn't exist, but the flow file will be stuck 
in the incoming connection forever without even emitting a bulletin (a bulletin 
appears only when the processor is in debug mode).

Another example:
{code:java}
insert into table t_table1 select 'test' from test limit 1;
insert into table non_existing_table select * from another_table;{code}
Note: the first statement is correct; the second should fail.

  was:
PutHiveQL is stuck if error occurred when flow file contains multiple 
statements.

Example:

 
{code:java}
 
set tez.queue.name=qwe;
create table t_table1 (s string) stored as orc;{code}
 

This will fail if such queue doesn't exist. But FF will be stuck in incoming 
connection forever without even emitting bulletin (bulletin will appear only 
when the processor is in debug mode).

Another example:
{code:java}
insert into table t_table1 select 'test' from test limit 1;
insert into table non_existing_table select * from another_table;{code}
Note, first statement is correct one, second should fail.


> PutHiveQL - Flowfile isn't transferred to failure rel on actual failure
> ---
>
> Key: NIFI-5832
> URL: https://issues.apache.org/jira/browse/NIFI-5832
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Ed Berezitsky
>Assignee: Ed Berezitsky
>Priority: Major
>
> PutHiveQL gets stuck if an error occurs while processing a flow file that 
> contains multiple statements.
> Example:
>  
> {code:java}
> set tez.queue.name=qwe;
> create table t_table1 (s string) stored as orc;{code}
>  
> This will fail if such a queue doesn't exist, but the flow file will be stuck 
> in the incoming connection forever without even emitting a bulletin (a bulletin 
> appears only when the processor is in debug mode).
> Another example:
> {code:java}
> insert into table t_table1 select 'test' from test limit 1;
> insert into table non_existing_table select * from another_table;{code}
> Note: the first statement is correct; the second should fail.





[jira] [Commented] (MINIFICPP-681) Add content hash processor

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693137#comment-16693137
 ] 

ASF GitHub Bot commented on MINIFICPP-681:
--

GitHub user arpadboda opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/445

MINIFICPP-681 - Add content hash processor

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arpadboda/nifi-minifi-cpp MINIFICPP-681

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/445.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #445


commit 3e1d31bc9f379bf46f29278ed5bd395d337771c5
Author: Arpad Boda 
Date:   2018-11-19T12:49:39Z

MINIFICPP-681 - Add content hash processor




> Add content hash processor
> --
>
> Key: MINIFICPP-681
> URL: https://issues.apache.org/jira/browse/MINIFICPP-681
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Arpad Boda
>Assignee: Arpad Boda
>Priority: Major
> Fix For: 0.6.0
>
>
> Add a new processor that supports hashing content and add the checksum to the 
> flowfile as an attribute.
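What the new processor does can be sketched with the JDK's `MessageDigest`. The attribute name `hash.value` and the SHA-256 algorithm below are illustrative assumptions for the sketch, not the processor's documented defaults:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Sketch of a content-hash step: digest the flowfile content and attach
// the hex checksum as an attribute on the flowfile's attribute map.
public class ContentHashSketch {
    static String hashContent(byte[] content) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(content)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> attributes = new HashMap<>(); // stand-in for flowfile attributes
        attributes.put("hash.value", hashContent("hello".getBytes(StandardCharsets.UTF_8)));
        System.out.println(attributes.get("hash.value"));
    }
}
```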





[GitHub] nifi-minifi-cpp pull request #445: MINIFICPP-681 - Add content hash processo...

2018-11-20 Thread arpadboda
GitHub user arpadboda opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/445

MINIFICPP-681 - Add content hash processor

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arpadboda/nifi-minifi-cpp MINIFICPP-681

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/445.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #445


commit 3e1d31bc9f379bf46f29278ed5bd395d337771c5
Author: Arpad Boda 
Date:   2018-11-19T12:49:39Z

MINIFICPP-681 - Add content hash processor




---


[GitHub] nifi issue #2866: NIFI-4710 Kerberos support for user auth in Docker instanc...

2018-11-20 Thread SarthakSahu
Github user SarthakSahu commented on the issue:

https://github.com/apache/nifi/pull/2866
  
I had configured my own KDC server and followed  
https://bryanbende.com/development/2016/08/31/apache-nifi-1.0.0-kerberos-authentication.
Please have a look at this.


---


[jira] [Commented] (NIFI-4710) Kerberos

2018-11-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693128#comment-16693128
 ] 

ASF GitHub Bot commented on NIFI-4710:
--

Github user SarthakSahu commented on the issue:

https://github.com/apache/nifi/pull/2866
  
I had configured my own KDC server and followed  
https://bryanbende.com/development/2016/08/31/apache-nifi-1.0.0-kerberos-authentication.
Please have a look at this.


> Kerberos
> 
>
> Key: NIFI-4710
> URL: https://issues.apache.org/jira/browse/NIFI-4710
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Docker
>Reporter: Aldrin Piri
>Assignee: Sarthak
>Priority: Major
>



