[jira] [Commented] (NIFI-5583) Make Change Data Capture (CDC) processor for MySQL refer to GTID

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ https://issues.apache.org/jira/browse/NIFI-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16610115#comment-16610115 ]

ASF GitHub Bot commented on NIFI-5583:
--

Github user yoshiata commented on the issue:

https://github.com/apache/nifi/pull/2997
  
I will add unit tests later.


> Make Change Data Capture (CDC) processor for MySQL refer to GTID
> 
>
> Key: NIFI-5583
> URL: https://issues.apache.org/jira/browse/NIFI-5583
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.7.1
>Reporter: Yoshiaki Takahashi
>Priority: Major
>
> When connecting to a MySQL cluster consisting of multiple hosts, the existing 
> CDC processor cannot automatically switch to another host when the connected 
> host goes down.
> The reason is that the file names and positions of the binary log may differ 
> between hosts in the same cluster.
> In such a case it is impossible to continue reading from the same position on 
> another host.
> To continue reading in such cases, a processor that refers to the GTID 
> (Global Transaction ID) is necessary.
> A GTID is an ID that uniquely identifies a transaction, and it is recorded in 
> the binary log of MySQL.
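For context, a minimal sketch of why a GTID survives a failover while a (binlog file, position) pair does not. This is not the NiFi implementation; the class and method names are hypothetical, and only the GTID string format comes from MySQL itself:

```java
// Hypothetical sketch, not NiFi code: a MySQL GTID has the form
// "<source-server-UUID>:<transaction-number>". Because that pair is identical
// on every replica that applied the transaction, a client can resume from it
// on any host, unlike a (binlog file name, byte position) pair, which is
// specific to one host's log files.
public final class GtidSketch {

    // Splits "3E11FA47-71CA-11E1-9E33-C80AA9429562:23" into its two parts.
    static String[] parse(String gtid) {
        int sep = gtid.lastIndexOf(':');
        return new String[] { gtid.substring(0, sep), gtid.substring(sep + 1) };
    }

    public static void main(String[] args) {
        String[] parts = parse("3E11FA47-71CA-11E1-9E33-C80AA9429562:23");
        System.out.println("source UUID:   " + parts[0]);
        System.out.println("transaction #: " + parts[1]);
    }
}
```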



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-5583) Make Change Data Capture (CDC) processor for MySQL refer to GTID

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ https://issues.apache.org/jira/browse/NIFI-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16610113#comment-16610113 ]

ASF GitHub Bot commented on NIFI-5583:
--

GitHub user yoshiata opened a pull request:

https://github.com/apache/nifi/pull/2997

NIFI-5583: Add cdc processor for MySQL referring to GTID.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yoshiata/nifi nifi-5583

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2997.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2997


commit 662df7a7b7dc0f4529ac9cf10dfe4e6e300acde2
Author: yoshiata 
Date:   2018-07-02T06:55:50Z

NIFI-5583: Add cdc processor for MySQL referring to GTID.








[jira] [Commented] (NIFI-5474) ReplaceText RegexReplace evaluates payload as Expression language

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ https://issues.apache.org/jira/browse/NIFI-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16610035#comment-16610035 ]

ASF GitHub Bot commented on NIFI-5474:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2951
  
I'll take a look here too, and will merge after my +1 unless there are 
objections


> ReplaceText RegexReplace evaluates payload as Expression language
> -
>
> Key: NIFI-5474
> URL: https://issues.apache.org/jira/browse/NIFI-5474
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.7.0, 1.7.1
>Reporter: Joseph Percivall
>Assignee: Mark Payne
>Priority: Major
>
> To reproduce, add "${this will fail}" to the ReplaceText unit test resource 
> "hello.txt" and run one of the tests (like testRegexWithExpressionLanguage). 
> You'll end up seeing an error message like this: 
> {quote}java.lang.AssertionError: 
> org.apache.nifi.attribute.expression.language.exception.AttributeExpressionLanguageException:
>  Invalid Expression: ${replaceValue}, World! ${this will fail} due to 
> Unexpected token 'will' at line 1, column 7. Query: ${this will fail}
> at 
> org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:201)
> at 
> org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:160)
> at 
> org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:155)
> at 
> org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:150)
> at 
> org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:145)
> at 
> org.apache.nifi.processors.standard.TestReplaceText.testRegexWithExpressionLanguage(TestReplaceText.java:382)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: 
> org.apache.nifi.attribute.expression.language.exception.AttributeExpressionLanguageException:
>  Invalid Expression: ${replaceValue}, World! ${this will fail} due to 
> Unexpected token 'will' at line 1, column 7. Query: ${this will fail}
> at 
> org.apache.nifi.attribute.expression.language.InvalidPreparedQuery.evaluateExpressions(InvalidPreparedQuery.java:49)
> at 
> org.apache.nifi.attribute.expression.language.StandardPropertyValue.evaluateAttributeExpressions(StandardPropertyValue.java:160)
> at 
> org.apache.nifi.util.MockPropertyValue.evaluateAttributeExpressions(MockPropertyValue.java:257)
> at 
> org.apache.nifi.util.MockPropertyValue.evaluateAttributeExpressions(MockPropertyValue.java:244)
> at 
> org.apache.nifi.processors.standard.ReplaceText$RegexReplace.replace(ReplaceText.java:564)
> at 
> org.apache.nifi.processors.standard.ReplaceText.onTrigger(ReplaceText.java:299)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> 
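The failure above belongs to a general class of bug: user content spliced into a string that is later parsed as a template. A hypothetical illustration (not the actual NiFi fix) using Java's regex replacement syntax as a stand-in for NiFi Expression Language — all class and method names here are mine:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical illustration of the failure class, not the NiFi fix: when
// content containing "$..." is handed to an evaluator that treats "$" as
// template syntax, parsing fails. Escaping the content before the second
// evaluation makes it literal again.
public final class TemplateInjectionSketch {

    static String unsafeReplace(String content) {
        // Content is parsed as replacement syntax: "${" starts a group reference,
        // so "${this will fail}" throws IllegalArgumentException.
        return Pattern.compile("X").matcher("X").replaceAll(content);
    }

    static String safeReplace(String content) {
        // quoteReplacement escapes "$" and "\" so the content is taken literally.
        return Pattern.compile("X").matcher("X")
                .replaceAll(Matcher.quoteReplacement(content));
    }

    public static void main(String[] args) {
        System.out.println(safeReplace("${this will fail}")); // literal text survives
        try {
            unsafeReplace("${this will fail}");
        } catch (IllegalArgumentException e) {
            System.out.println("unsafe path throws: " + e.getMessage());
        }
    }
}
```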



[jira] [Updated] (NIFI-5474) ReplaceText RegexReplace evaluates payload as Expression language

2018-09-10 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5474:
---
Status: Patch Available  (was: Open)


[jira] [Created] (NIFI-5586) Add capability to generate ECDSA keys to TLS Toolkit

2018-09-10 Thread Andy LoPresto (JIRA)
Andy LoPresto created NIFI-5586:
---

 Summary: Add capability to generate ECDSA keys to TLS Toolkit
 Key: NIFI-5586
 URL: https://issues.apache.org/jira/browse/NIFI-5586
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Tools and Build
Affects Versions: 1.7.1
Reporter: Andy LoPresto


The TLS Toolkit should be able to generate ECDSA keys to enable NiFi to support 
ECDSA cipher suites. 

Currently, ECDSA keys can be manually generated using external tools and loaded 
into a keystore and truststore that are compatible with NiFi. 

{code}
keytool -genkeypair -alias ec -keyalg EC -keysize 256 -sigalg SHA256withECDSA -validity 365 -storetype JKS -keystore ec-keystore.jks -storepass passwordpassword
keytool -export -alias ec -keystore ec-keystore.jks -file ec-public.pem
keytool -import -alias ec -file ec-public.pem -keystore ec-truststore.jks -storepass passwordpassword
{code}
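The same key generation can be done programmatically with only the JDK's built-in providers; a minimal sketch (not toolkit code, all names mine) showing that an EC keypair generated this way works with the SHA256withECDSA signature algorithm:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Hypothetical sketch of the programmatic equivalent of the keytool commands
// above: generate a 256-bit EC keypair, then sign and verify a message with
// SHA256withECDSA, using only standard JDK security providers.
public final class EcdsaSketch {

    static boolean signAndVerify(byte[] message) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256); // 256-bit curve, matching -keysize 256 above
        KeyPair pair = kpg.generateKeyPair();

        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(pair.getPrivate());
        signer.update(message);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("verified: " + signAndVerify("hello".getBytes()));
    }
}
```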





[jira] [Commented] (MINIFICPP-604) Convert C++ namespace operator to Java packing to keep responses aligned.

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ https://issues.apache.org/jira/browse/MINIFICPP-604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609944#comment-16609944 ]

ASF GitHub Bot commented on MINIFICPP-604:
--

GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/396

MINIFICPP-604: Convert C++ namespace operators to Java packing operators

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-604

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/396.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #396


commit 5b95bb9a00dc2ccd48d882cea6b1d9abfe7bbfc7
Author: Marc Parisi 
Date:   2018-09-11T00:21:33Z

MINIFICPP-604: Convert C++ namespace operators to Java packing operators




> Convert C++ namespace operator to Java packing to keep responses aligned. 
> --
>
> Key: MINIFICPP-604
> URL: https://issues.apache.org/jira/browse/MINIFICPP-604
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Mr TheSegfault
>Assignee: Mr TheSegfault
>Priority: Major
>
> This ensures that we have a uniform response from agents. 
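The conversion the ticket describes can be sketched in a few lines; this is a hypothetical illustration (names are mine, not from the MiNiFi code base) of mapping a C++ namespace path to Java-style package notation:

```java
// Hypothetical sketch: translate a C++ namespace path such as
// "org::apache::nifi::minifi" into Java package notation so that C++ and
// Java agents report class names in the same form.
public final class NamespaceSketch {

    static String toJavaPackaging(String cppName) {
        // The C++ scope operator "::" becomes the Java package separator ".".
        return cppName.replace("::", ".");
    }

    public static void main(String[] args) {
        // → org.apache.nifi.minifi.FlowController
        System.out.println(toJavaPackaging("org::apache::nifi::minifi::FlowController"));
    }
}
```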







[jira] [Created] (MINIFICPP-604) Convert C++ namespace operator to Java packing to keep responses aligned.

2018-09-10 Thread Mr TheSegfault (JIRA)
Mr TheSegfault created MINIFICPP-604:


 Summary: Convert C++ namespace operator to Java packing to keep 
responses aligned. 
 Key: MINIFICPP-604
 URL: https://issues.apache.org/jira/browse/MINIFICPP-604
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: Mr TheSegfault
Assignee: Mr TheSegfault


This ensures that we have a uniform response from agents. 







[jira] [Commented] (NIFI-5566) Bring HashContent inline with HashService and rename legacy components

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ https://issues.apache.org/jira/browse/NIFI-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609934#comment-16609934 ]

ASF GitHub Bot commented on NIFI-5566:
--

Github user thenatog commented on the issue:

https://github.com/apache/nifi/pull/2983
  
I am finding that the files fail for Processor Configuration 1 (Allow 
partial) with Flowfile B which is why I'm confused. 


> Bring HashContent inline with HashService and rename legacy components
> --
>
> Key: NIFI-5566
> URL: https://issues.apache.org/jira/browse/NIFI-5566
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: backwards-compatibility, hash, security
>
> As documented in [NIFI-5147|https://issues.apache.org/jira/browse/NIFI-5147] 
> and [PR 2980|https://github.com/apache/nifi/pull/2980], the {{HashAttribute}} 
> processor and {{HashContent}} processor are lacking some features, do not 
> offer consistent algorithms across platforms, etc. 
> I propose the following:
> * Rename {{HashAttribute}} (which does not provide the service of calculating 
> a hash over one or more attributes) to {{HashAttributeLegacy}}
> * Rename {{CalculateAttributeHash}} to {{HashAttribute}} to make semantic 
> sense
> * Rename {{HashContent}} to {{HashContentLegacy}} for users who need obscure 
> digest algorithms which may or may not have been offered on their platform
> * Implement a processor {{HashContent}} with similar semantics to the 
> existing processor but with consistent algorithm offerings and using the 
> common {{HashService}} offering
> With the new component versioning features provided as part of the flow 
> versioning behavior, silently disrupting existing flows which use these 
> processors is no longer a concern. Rather, any flow currently using the 
> existing processors will either:
> 1. continue normal operation
> 1. require flow manager interaction and provide documentation about the change
>   1. migration notes and upgrade instructions will be provided
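On the consistent-algorithms point above, a minimal sketch (hypothetical names, not the {{HashService}} API) of why pinning to the algorithms every compliant JVM must provide yields reproducible hashes across platforms:

```java
import java.security.MessageDigest;

// Hypothetical sketch: MessageDigest.getInstance() only guarantees a small set
// of algorithms (e.g. SHA-256) on every JVM. A processor that exposes whatever
// the local provider happens to offer behaves differently across platforms;
// restricting to the guaranteed set makes results reproducible.
public final class DigestSketch {

    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b & 0xff));
        return sb.toString();
    }

    static String sha256Hex(byte[] input) throws Exception {
        return hex(MessageDigest.getInstance("SHA-256").digest(input));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sha256Hex("hello".getBytes()));
    }
}
```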





[jira] [Updated] (NIFI-5585) Decommission Nodes from Cluster

2018-09-10 Thread Jeff Storck (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-5585:
--
Description: Allow a node in the cluster to be decommissioned, rebalancing 
flowfiles on the node to be decommissioned to the other active nodes.  This 
work depends on NIFI-5516.  (was: Decommission nodes from the cluster, 
rebalancing flowfiles on the decommissioned nodes.  This work is based off of 
the work done in NIFI-5516.)

> Decommission Nodes from Cluster
> --
>
> Key: NIFI-5585
> URL: https://issues.apache.org/jira/browse/NIFI-5585
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.1
>Reporter: Jeff Storck
>Assignee: Jeff Storck
>Priority: Major
>
> Allow a node in the cluster to be decommissioned, rebalancing flowfiles on 
> the node to be decommissioned to the other active nodes.  This work depends 
> on NIFI-5516.





[jira] [Commented] (NIFI-5318) Implement NiFi test harness

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ https://issues.apache.org/jira/browse/NIFI-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609835#comment-16609835 ]

ASF GitHub Bot commented on NIFI-5318:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2872#discussion_r216485286
  
--- Diff: nifi-testharness/src/test/java/org/apache/nifi/testharness/samples/Constants.java ---
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.nifi.testharness.samples;
+
+import java.io.File;
+
+public final class Constants {
+
+static final File OUTPUT_DIR = new File("./NiFiTest/NiFiReadTest");
+static final File NIFI_ZIP_DIR = new File("../../nifi-assembly/target");
--- End diff --

Do these constants assume a particular location for the code that will 
instantiate the test harness? That `NIFI_ZIP_DIR` one looks particularly 
brittle.
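One way to address the brittleness the reviewer points out — sketched here as a hypothetical alternative, not part of the PR, with a made-up property name — is to resolve the location from a system property and fall back to the relative default:

```java
import java.io.File;

// Hypothetical alternative to the hardcoded relative path: let the build (or
// the developer) override the NiFi assembly location via a system property,
// so the harness does not depend on the JVM's working directory.
public final class PathResolutionSketch {

    static File nifiZipDir(String propertyValue) {
        // propertyValue would come from System.getProperty("nifi.zip.dir");
        // the property name is illustrative only.
        return new File(propertyValue != null ? propertyValue
                                              : "../../nifi-assembly/target");
    }

    public static void main(String[] args) {
        System.out.println(nifiZipDir(System.getProperty("nifi.zip.dir")));
    }
}
```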


> Implement NiFi test harness
> ---
>
> Key: NIFI-5318
> URL: https://issues.apache.org/jira/browse/NIFI-5318
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Peter Horvath
>Priority: Major
>
> Currently, it is not really possible to automatically test the behaviour of a 
> specific NiFi flow and make unit test type asserts if it works as expected. 
> For example, if the expected behaviour of a NiFi flow is that a file placed 
> in a specific directory will trigger some operation after which some output 
> file will appear in another directory, one can currently only do one thing: 
> test the NiFi flow manually. 
> Manual testing is especially hard to manage if a NiFi flow is being actively 
> developed: any change to a complex, existing NiFi flow might require a lot of 
> manual testing just to ensure no regressions are introduced. 
> Some kind of Java API that allows managing a NiFi instance and manipulating 
> flow deployments, like for example [Codehaus 
> Cargo|https://codehaus-cargo.github.io/], would be of great help. 
>  
>  
>  
>  
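The file-in, file-out scenario described above can be turned into a unit-test style check; a hypothetical sketch of that shape (no such NiFi API is implied, all names are mine):

```java
import java.io.File;

// Hypothetical shape of the assertion the ticket asks for: after a file is
// dropped into the flow's input directory, poll the output directory until a
// result appears or a timeout elapses, replacing manual inspection.
public final class HarnessSketch {

    static boolean waitForOutput(File outputDir, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            File[] found = outputDir.listFiles();
            if (found != null && found.length > 0) {
                return true; // the flow produced at least one output file
            }
            Thread.sleep(50); // poll rather than rely on manual checking
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        File outputDir = new File("./NiFiTest/NiFiReadTest"); // dir from the sample Constants
        System.out.println("output appeared: " + waitForOutput(outputDir, 1000));
    }
}
```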





[jira] [Commented] (NIFI-5318) Implement NiFi test harness

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ https://issues.apache.org/jira/browse/NIFI-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609837#comment-16609837 ]

ASF GitHub Bot commented on NIFI-5318:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2872#discussion_r216484221
  
--- Diff: nifi-testharness/pom.xml ---
@@ -0,0 +1,176 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi</artifactId>
+        <version>1.8.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-testharness</artifactId>
+    <description>A test harness for running NiFi flow tests</description>
+    <packaging>pom</packaging>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.rat</groupId>
+                <artifactId>apache-rat-plugin</artifactId>
+                <configuration>
+                    <excludes>
+                        <exclude>nifi_testharness_nifi_home/NIFI_TESTHARNESS_README.txt</exclude>
+                        <exclude>src/test/resources/sample_technology_rss.xml</exclude>
+                        <exclude>src/test/resources/logback-test.xml</exclude>
+                        <exclude>src/test/resources/flow.xml</exclude>
+                    </excludes>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>compile</goal>
+                            <goal>testCompile</goal>
+                        </goals>
+                    </execution>
+                </executions>
+                <configuration>
+                    <source>1.8</source>
+                    <target>1.8</target>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-surefire-plugin</artifactId>
+                <version>2.20.1</version>
+                <configuration>
+                    <forkCount>1</forkCount>
+                    <reuseForks>false</reuseForks>
+                    <workingDirectory>nifi_testharness_nifi_home</workingDirectory>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <org.slf4j.version>1.7.25</org.slf4j.version>
+        <jetty.version>9.4.3.v20170317</jetty.version>
+    </properties>
+</project>
--- End diff --

L&N looks good here.


> Implement NiFi test harness
> ---
>
> Key: NIFI-5318
> URL: https://issues.apache.org/jira/browse/NIFI-5318
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Peter Horvath
>Priority: Major
>
> Currently, it is not really possible to automatically test the behaviour of a 
> specific NiFi flow and make unit test type asserts if it works as expected. 
> For example, if the expected behaviour of a NiFi flow is that a file placed 
> to a specific directory will trigger some operation after which some output 
> file will appear at another directory, one can currently only do one thing: 
> test the NiFi flow manually. 
> Manual testing is especially hard to manage if a NiFi flow is being actively 
> developed: any change to a complex, existing NiFi flow might require a lot of 
> manual testing just to ensure there are no regressions introduced. 
> Some kind of Java API that allows managing a NiFi instance and manipulating 
> flow deployments like, for example, [Codehaus 
> Cargo|https://codehaus-cargo.github.io/] would be of great help. 
>  
>  
>  
>  





[jira] [Commented] (NIFI-5318) Implement NiFi test harness

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609836#comment-16609836
 ] 

ASF GitHub Bot commented on NIFI-5318:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2872#discussion_r216484785
  
--- Diff: 
nifi-testharness/src/main/java/org/apache/nifi/testharness/util/FileUtils.java 
---
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+
+package org.apache.nifi.testharness.util;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.FileVisitResult;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.nio.file.SimpleFileVisitor;
+import java.nio.file.attribute.BasicFileAttributes;
+import java.util.Arrays;
+
+public final class FileUtils {
+
+
+private static final String MAC_DS_STORE_NAME = ".DS_Store";
+
+private FileUtils() {
+// no instances
+}
+
+public static void deleteDirectoryRecursive(Path directory) throws 
IOException {
+Files.walkFileTree(directory, new SimpleFileVisitor<Path>() {
+@Override
+public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IOException {
+Files.delete(file);
+return FileVisitResult.CONTINUE;
+}
+
+@Override
+public FileVisitResult postVisitDirectory(Path dir, 
IOException exc) throws IOException {
+Files.delete(dir);
+return FileVisitResult.CONTINUE;
+}
+});
+}
+
+public static void deleteDirectoryRecursive(File dir) {
+try {
+deleteDirectoryRecursive(dir.toPath());
+} catch (IOException e) {
+throw new RuntimeException(e);
+}
+}
+
+public static void createLink(Path newLink, Path existingFile)  {
+try {
+Files.createSymbolicLink(newLink, existingFile);
--- End diff --

Have you tried this on Windows?
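Regarding the Windows question above: `Files.createSymbolicLink` commonly fails on Windows unless the process holds the symlink-creation privilege, so a bare call like the one in the diff can break the harness there. A hedged sketch of one possible fallback — the class name and the copy-on-failure strategy are illustrative, not the PR's actual code:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative sketch only: try a symlink first, and if the platform refuses
// (UnsupportedOperationException, or an IOException such as "access denied"
// on Windows without SeCreateSymbolicLinkPrivilege), fall back to copying so
// the test harness still sees the file.
public final class PortableLink {

    public static void linkOrCopy(Path newLink, Path existingFile) {
        try {
            Files.createSymbolicLink(newLink, existingFile);
        } catch (UnsupportedOperationException | IOException linkFailure) {
            try {
                Files.copy(existingFile, newLink, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException copyFailure) {
                throw new UncheckedIOException(copyFailure);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("portable-link-src", ".txt");
        Files.write(src, "hello".getBytes());
        Path dst = src.resolveSibling("portable-link-dst-" + System.nanoTime());
        linkOrCopy(dst, src);
        // Whether dst ended up a symlink or a copy, the content is the same.
        System.out.println(new String(Files.readAllBytes(dst)));
    }
}
```

The trade-off is that a copy does not track later changes to the original file the way a symlink would, which may or may not matter for the harness's use case.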


> Implement NiFi test harness
> ---
>
> Key: NIFI-5318
> URL: https://issues.apache.org/jira/browse/NIFI-5318
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Peter Horvath
>Priority: Major
>
> Currently, it is not really possible to automatically test the behaviour of a 
> specific NiFi flow and make unit test type asserts if it works as expected. 
> For example, if the expected behaviour of a NiFi flow is that a file placed 
> to a specific directory will trigger some operation after which some output 
> file will appear at another directory, one can currently only do one thing: 
> test the NiFi flow manually. 
> Manual testing is especially hard to manage if a NiFi flow is being actively 
> developed: any change to a complex, existing NiFi flow might require a lot of 
> manual testing just to ensure there are no regressions introduced. 
> Some kind of Java API that allows managing a NiFi instance and manipulating 
> flow deployments like, for example, [Codehaus 
> Cargo|https://codehaus-cargo.github.io/] would be of great help. 
>  
>  
>  
>  







[GitHub] nifi pull request #2872: NIFI-5318 Implement NiFi test harness: initial comm...

2018-09-10 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2872#discussion_r216485286
  
--- Diff: 
nifi-testharness/src/test/java/org/apache/nifi/testharness/samples/Constants.java
 ---
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.nifi.testharness.samples;
+
+import java.io.File;
+
+public final class Constants {
+
+static final File OUTPUT_DIR = new File("./NiFiTest/NiFiReadTest");
+static final File NIFI_ZIP_DIR = new 
File("../../nifi-assembly/target");
--- End diff --

Do these constants assume a particular location for the code that will 
instantiate the test harness? That `NIFI_ZIP_DIR` one looks particularly 
brittle.


---


[jira] [Commented] (NIFI-5537) Create Neo4J cypher execution processor

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609720#comment-16609720
 ] 

ASF GitHub Bot commented on NIFI-5537:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2956
  
Did a `mvn dependency:tree` on the bundle and only `neo4j-java-driver` was 
new for the whole build. Checked it out so we seem to be good on the 
dependencies being at least compatibly licensed.


> Create Neo4J cypher execution processor
> ---
>
> Key: NIFI-5537
> URL: https://issues.apache.org/jira/browse/NIFI-5537
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.7.1
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: graph, neo4j, node, relationship
> Fix For: 1.8.0
>
>
> Create Nifi Neo4J cypher queries processor






[jira] [Created] (NIFI-5585) Decommission Nodes from Cluster

2018-09-10 Thread Jeff Storck (JIRA)
Jeff Storck created NIFI-5585:
-

 Summary: Decommission Nodes from Cluster
 Key: NIFI-5585
 URL: https://issues.apache.org/jira/browse/NIFI-5585
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.7.1
Reporter: Jeff Storck
Assignee: Jeff Storck


Decommission nodes from the cluster, rebalancing flowfiles on the 
decommissioned nodes.  This work is based on the work done in NIFI-5516.





[jira] [Commented] (NIFI-5566) Bring HashContent inline with HashService and rename legacy components

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609665#comment-16609665
 ] 

ASF GitHub Bot commented on NIFI-5566:
--

Github user ottobackwards commented on the issue:

https://github.com/apache/nifi/pull/2983
  
https://github.com/apache/nifi/pull/2802#discussion_r199346844


> Bring HashContent inline with HashService and rename legacy components
> --
>
> Key: NIFI-5566
> URL: https://issues.apache.org/jira/browse/NIFI-5566
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: backwards-compatibility, hash, security
>
> As documented in [NIFI-5147|https://issues.apache.org/jira/browse/NIFI-5147] 
> and [PR 2980|https://github.com/apache/nifi/pull/2980], the {{HashAttribute}} 
> processor and {{HashContent}} processor are lacking some features, do not 
> offer consistent algorithms across platforms, etc. 
> I propose the following:
> * Rename {{HashAttribute}} (which does not provide the service of calculating 
> a hash over one or more attributes) to {{HashAttributeLegacy}}
> * Renamed {{CalculateAttributeHash}} to {{HashAttribute}} to make semantic 
> sense
> * Rename {{HashContent}} to {{HashContentLegacy}} for users who need obscure 
> digest algorithms which may or may not have been offered on their platform
> * Implement a processor {{HashContent}} with similar semantics to the 
> existing processor but with consistent algorithm offerings and using the 
> common {{HashService}} offering
> With the new component versioning features provided as part of the flow 
> versioning behavior, silently disrupting existing flows which use these 
> processors is no longer a concern. Rather, Any flow currently using the 
> existing processors will either:
> 1. continue normal operation
> 1. require flow manager interaction and provide documentation about the change
>   1. migration notes and upgrade instructions will be provided






[jira] [Commented] (NIFI-5510) Allow records to be put directly into Cassandra

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609600#comment-16609600
 ] 

ASF GitHub Bot commented on NIFI-5510:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2992
  

[PutCassandraRecord_Test.xml.txt](https://github.com/apache/nifi/files/2367829/PutCassandraRecord_Test.xml.txt)

Validated that it works with that simple flow. I was able to use that to 
insert a few records and see them with `cqlsh`.


> Allow records to be put directly into Cassandra
> ---
>
> Key: NIFI-5510
> URL: https://issues.apache.org/jira/browse/NIFI-5510
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> Currently the standard way of getting data into Cassandra through NiFi is to 
> use PutCassandraQL, which often means raw data needs to be converted to CQL 
> statements, usually done (with modifications) via ConvertJSONToSQL.
> It would be better to have something closer to PutDatabaseRecord, a processor 
> called PutCassandraRecord perhaps, that would take the raw data and input it 
> into Cassandra "directly", without the need for the user to convert the data 
> into CQL statements.






[jira] [Commented] (NIFI-3425) Cache prepared statements in PutCassandraQL

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609581#comment-16609581
 ] 

ASF GitHub Bot commented on NIFI-3425:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2986#discussion_r216411646
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraQL.java
 ---
@@ -399,11 +434,13 @@ protected void setStatementObject(final 
BoundStatement statement, final int para
 @OnUnscheduled
 public void stop() {
 super.stop();
+statementCache.clear();
 }
 
 @OnShutdown
 public void shutdown() {
 super.stop();
+statementCache.clear();
--- End diff --

Given that we're calling this when the processor is stopped, I don't think 
there's a need for it here.


> Cache prepared statements in PutCassandraQL
> ---
>
> Key: NIFI-3425
> URL: https://issues.apache.org/jira/browse/NIFI-3425
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Attachments: NIFI-3425.patch
>
>
> Currently PutCassandraQL supports prepared statements (using ? parameters 
> much like JDBC SQL statements) via flow file attributes specifying the type 
> and value of the parameters.
> However, the prepared statement is created (and thus possibly re-created) for 
> each flow file, which neuters its effectiveness over literal CQL statements. 
> The driver warns of this:
> 2017-01-31 14:30:54,287 WARN [cluster1-worker-1] 
> com.datastax.driver.core.Cluster Re-preparing already prepared query insert 
> into test_table (id, timestamp, id1, timestamp1, id
> 2, timestamp2, id3, timestamp3, id4, timestamp4, id5, timestamp5, id6, 
> timestamp6) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);. Please note 
> that preparing the same query
>  more than once is generally an anti-pattern and will likely affect 
> performance. Consider preparing the statement only once.
> Prepared statements should be cached and reused where possible.





[jira] [Commented] (NIFI-5579) Allow ExecuteGroovyScript to take a SQL property that is a DBCPConnectionPoolLookup

2018-09-10 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609567#comment-16609567
 ] 

Colin Dean commented on NIFI-5579:
--

The absence of this has a *nasty side effect*: when I reference a CS in my 
{{ExecuteGroovyScript}} processor by its name and do the lookup for its ID 
manually, that CS is _not_ exported in a template because it seems that NiFi 
does not export unreferenced controller services.


> Allow ExecuteGroovyScript to take a SQL property that is a 
> DBCPConnectionPoolLookup
> ---
>
> Key: NIFI-5579
> URL: https://issues.apache.org/jira/browse/NIFI-5579
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core Framework
>Affects Versions: 1.7.0, 1.7.1
> Environment: Any
>Reporter: Colin Dean
>Priority: Major
>  Labels: groovy
>
> I would like to reference a {{DBCPConnectionPoolLookup}} controller service 
> from within an {{ExecuteGroovyScript}} processor. If I create a property 
> {{SQL.lookup}} and set its value to an existing {{DBCPConnectionPoolLookup}} 
> controller service, when I start the processor, it fails immediately because 
> the creation of the "connection" by the {{DBCPConnectionPoolLookup}} 
> controller service fails because it was not passed the expected 
> {{database.name}} attribute.
> {code}
> 2018-09-07 16:04:28,229 ERROR [Timer-Driven Process Thread-227] 
> o.a.n.p.groovyx.ExecuteGroovyScript 
> ExecuteGroovyScript[id=684100f5-78cf-35f9-28db-0fa4d1d30c13] 
> org.apache.nifi.processor.exception.ProcessException: Attributes must contain
>  an attribute name 'database.name': 
> org.apache.nifi.processor.exception.ProcessException: Attributes must contain 
> an attribute name 'database.name'
> org.apache.nifi.processor.exception.ProcessException: Attributes must contain 
> an attribute name 'database.name'
> at 
> org.apache.nifi.dbcp.DBCPConnectionPoolLookup.getConnection(DBCPConnectionPoolLookup.java:124)
> at sun.reflect.GeneratedMethodAccessor507.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
> at com.sun.proxy.$Proxy150.getConnection(Unknown Source)
> at 
> org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onInitSQL(ExecuteGroovyScript.java:339)
> at 
> org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onTrigger(ExecuteGroovyScript.java:439)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
> at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> 2018-09-07 16:04:28,232 ERROR [Timer-Driven Process Thread-227] 
> o.a.n.p.groovyx.ExecuteGroovyScript 
> ExecuteGroovyScript[id=684100f5-78cf-35f9-28db-0fa4d1d30c13] 
> ExecuteGroovyScript[id=684100f5-78cf-35f9-28db-0fa4d1d30c13] failed to 
> process session due to java.lang.ClassCastException: com.sun.proxy.$Proxy150 
> cannot be cast to org.apache.nifi.processors.groovyx.sql.OSql; Processor 
> Administratively Yielded for 1 sec: java.lang.ClassCastException: 
> com.sun.proxy.$Proxy150 cannot be cast to 
> org.apache.nifi.processors.groovyx.sql.OSql
> java.lang.ClassCastException: com.sun.proxy.$Proxy150 cannot be cast to 
> org.apache.nifi.processors.groovyx.sql.OSql
> at 
> org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onFinitSQL(ExecuteGroovyScript.java:371)
> at 
> org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onTrigger(ExecuteGroovyScript.java:464)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> 

[GitHub] nifi pull request #2986: NIFI-3425: Provide ability to cache CQL statements

2018-09-10 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2986#discussion_r21640
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/test/java/org/apache/nifi/processors/cassandra/PutCassandraQLTest.java
 ---
@@ -171,6 +171,98 @@ public void testProcessorHappyPathELConfig() {
 testRunner.clearTransferState();
 }
 
+@Test
+public void testMultipleQuery() {
+setUpStandardTestConfig();
+testRunner.setProperty(PutCassandraQL.STATEMENT_CACHE_SIZE, "1");
+
+testRunner.enqueue("INSERT INTO users (user_id, first_name, 
last_name, properties, bits, scaleset, largenum, scale, byteobject, ts) VALUES 
?, ?, ?, ?, ?, ?, ?, ?, ?, ?",
+new HashMap<String, String>() {
--- End diff --

It is probably best to avoid the anonymous inner class and just use a HashMap 
and then call put() methods, no? It also appears to be the same values each 
time, so we can create just a single HashMap and then pass it each time.


---
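The review suggestion above in sketch form — build one plain map with put() and reuse it, rather than an anonymous `HashMap` subclass per enqueue call. The attribute keys are illustrative stand-ins, not necessarily the test's exact ones:

```java
import java.util.HashMap;
import java.util.Map;

public class EnqueueStyleSketch {

    // Preferred style: one plain HashMap, populated once via put(), reusable
    // across multiple enqueue() calls.
    static Map<String, String> statementArgs() {
        Map<String, String> args = new HashMap<>();
        args.put("cql.args.1.type", "int");   // illustrative key/value
        args.put("cql.args.1.value", "1");
        return args;
    }

    public static void main(String[] args) {
        // Anonymous-subclass ("double brace") style the reviewer advises against:
        // it defines a throwaway HashMap subclass and repeats the same literal
        // values at every call site.
        Map<String, String> anonymous = new HashMap<String, String>() {{
            put("cql.args.1.type", "int");
            put("cql.args.1.value", "1");
        }};

        Map<String, String> shared = statementArgs();
        // Map.equals compares entries, so both styles hold identical data.
        System.out.println(anonymous.equals(shared));
    }
}
```

Besides the readability point, the double-brace idiom can also pin the enclosing instance in memory when used inside a non-static context, which is another reason reviewers tend to flag it.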


[jira] [Commented] (NIFI-3425) Cache prepared statements in PutCassandraQL

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609578#comment-16609578
 ] 

ASF GitHub Bot commented on NIFI-3425:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2986#discussion_r216411528
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraQL.java
 ---
@@ -399,11 +434,13 @@ protected void setStatementObject(final 
BoundStatement statement, final int para
 @OnUnscheduled
 public void stop() {
 super.stop();
+statementCache.clear();
--- End diff --

Would recommend putting this into an @OnStopped method instead of 
@OnUnscheduled. As-is, we could get into a race condition where line 225 calls 
statementCache.get(), which returns null. Then the @OnUnscheduled method clears 
the cache, and then line 228 adds the value to the cache. Then, the next time 
this processor is run, the cache is already populated. Given that we are 
calling `clear()` here I'm assuming we expect it clear when the processor 
starts.


> Cache prepared statements in PutCassandraQL
> ---
>
> Key: NIFI-3425
> URL: https://issues.apache.org/jira/browse/NIFI-3425
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Attachments: NIFI-3425.patch
>
>
> Currently PutCassandraQL supports prepared statements (using ? parameters 
> much like JDBC SQL statements) via flow file attributes specifying the type 
> and value of the parameters.
> However, the prepared statement is created (and thus possibly re-created) for 
> each flow file, which neuters its effectiveness over literal CQL statements. 
> The driver warns of this:
> 2017-01-31 14:30:54,287 WARN [cluster1-worker-1] 
> com.datastax.driver.core.Cluster Re-preparing already prepared query insert 
> into test_table (id, timestamp, id1, timestamp1, id
> 2, timestamp2, id3, timestamp3, id4, timestamp4, id5, timestamp5, id6, 
> timestamp6) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);. Please note 
> that preparing the same query
>  more than once is generally an anti-pattern and will likely affect 
> performance. Consider preparing the statement only once.
> Prepared statements should be cached and reused where possible.





[jira] [Commented] (NIFI-5566) Bring HashContent inline with HashService and rename legacy components

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609560#comment-16609560
 ] 

ASF GitHub Bot commented on NIFI-5566:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2983
  
Those attributes were added by @ottobackwards in the original PR for this 
issue. My understanding of the scenarios is as follows:

**Flowfile A**
*username*: `alopresto`
*email*: `alopre...@apache.org`

**Flowfile B**
*username*: `alopresto`

**Flowfile C**
*no attributes*

### Processor Configuration 1 (Allow partial):

**Fail when no attributes present**: `true`
**Missing attribute policy**: `Allow missing attributes`

**Flowfile A** -> *success*
**Flowfile B** -> *success*
**Flowfile C** -> *failure*

### Processor Configuration 2 (Fail on partial):

**Fail when no attributes present**: `true`
**Missing attribute policy**: `Fail if missing attributes`

**Flowfile A** -> *success*
**Flowfile B** -> *failure*
**Flowfile C** -> *failure*

### Processor Configuration 3 (Allow empty):

**Fail when no attributes present**: `false`
**Missing attribute policy**: `Allow missing attributes`

**Flowfile A** -> *success*
**Flowfile B** -> *success*
**Flowfile C** -> *success*

### Processor Configuration 4 (Allow empty but fail partial):

**Fail when no attributes present**: `false`
**Missing attribute policy**: `Fail if missing attributes`

**Flowfile A** -> *success*
**Flowfile B** -> *failure*
**Flowfile C** -> *success*
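The four configurations above collapse into a small decision function. A hedged sketch — the method, enum, and parameter names are illustrative, not the processor's real API:

```java
public class MissingAttributePolicySketch {

    enum Route { SUCCESS, FAILURE }

    /**
     * Mirrors the matrix in the comment above.
     * failWhenEmpty - "Fail when no attributes present" is true
     * failOnMissing - "Missing attribute policy" is "Fail if missing attributes"
     * presentCount  - how many configured attributes the flowfile carries
     * totalCount    - how many attributes the processor is configured to hash
     */
    static Route route(boolean failWhenEmpty, boolean failOnMissing,
                       int presentCount, int totalCount) {
        if (presentCount == 0) {
            // Flowfile C: only the "no attributes" switch applies.
            return failWhenEmpty ? Route.FAILURE : Route.SUCCESS;
        }
        if (presentCount < totalCount && failOnMissing) {
            // Flowfile B under configurations 2 and 4.
            return Route.FAILURE;
        }
        return Route.SUCCESS;
    }

    public static void main(String[] args) {
        // Configuration 2 (fail on partial): flowfile B has 1 of 2 attributes.
        System.out.println(route(true, true, 1, 2));
        // Configuration 3 (allow empty): flowfile C has no attributes.
        System.out.println(route(false, false, 0, 2));
    }
}
```

Encoding the policy this way also makes the one subtle case explicit: under configuration 4, an empty flowfile still succeeds because the "no attributes" switch, not the missing-attribute policy, governs it.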


> Bring HashContent inline with HashService and rename legacy components
> --
>
> Key: NIFI-5566
> URL: https://issues.apache.org/jira/browse/NIFI-5566
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: backwards-compatibility, hash, security
>
> As documented in [NIFI-5147|https://issues.apache.org/jira/browse/NIFI-5147] 
> and [PR 2980|https://github.com/apache/nifi/pull/2980], the {{HashAttribute}} 
> processor and {{HashContent}} processor are lacking some features, do not 
> offer consistent algorithms across platforms, etc. 
> I propose the following:
> * Rename {{HashAttribute}} (which does not provide the service of calculating 
> a hash over one or more attributes) to {{HashAttributeLegacy}}
> * Renamed {{CalculateAttributeHash}} to {{HashAttribute}} to make semantic 
> sense
> * Rename {{HashContent}} to {{HashContentLegacy}} for users who need obscure 
> digest algorithms which may or may not have been offered on their platform
> * Implement a processor {{HashContent}} with similar semantics to the 
> existing processor but with consistent algorithm offerings and using the 
> common {{HashService}} offering
> With the new component versioning features provided as part of the flow 
> versioning behavior, silently disrupting existing flows which use these 
> processors is no longer a concern. Rather, Any flow currently using the 
> existing processors will either:
> 1. continue normal operation
> 1. require flow manager interaction and provide documentation about the change
>   1. migration notes and upgrade instructions will be provided





[GitHub] nifi pull request #2986: NIFI-3425: Provide ability to cache CQL statements

2018-09-10 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2986#discussion_r216411528
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraQL.java
 ---
@@ -399,11 +434,13 @@ protected void setStatementObject(final 
BoundStatement statement, final int para
 @OnUnscheduled
 public void stop() {
 super.stop();
+statementCache.clear();
--- End diff --

Would recommend putting this into an @OnStopped method instead of 
@OnUnscheduled. As-is, we could get into a race condition where line 225 calls 
statementCache.get(), which returns null. Then the @OnUnscheduled method clears 
the cache, and then line 228 adds the value to the cache. Then, the next time 
this processor is run, the cache is already populated. Given that we are 
calling `clear()` here, I'm assuming we expect the cache to be empty when the 
processor starts.
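
The lifecycle ordering described above can be sketched with a plain stand-in 
for the processor (no NiFi classes; `StatementCacheHolder`, `getOrPrepare`, and 
the "prepared:" values are illustrative only, not the PR's code):

```java
import java.util.concurrent.ConcurrentHashMap;

// Plain stand-in (no NiFi classes) for the processor's statement cache.
// Clearing in onStopped(), after all onTrigger threads have returned,
// avoids the race where onUnscheduled() clears the cache between a
// cache miss and the subsequent put() in a still-running onTrigger.
class StatementCacheHolder {
    private final ConcurrentHashMap<String, String> statementCache = new ConcurrentHashMap<>();

    String getOrPrepare(String cql) {
        // computeIfAbsent makes the check-then-put atomic per key
        return statementCache.computeIfAbsent(cql, q -> "prepared:" + q);
    }

    void onUnscheduled() {
        // intentionally no cache mutation: onTrigger may still be executing
    }

    void onStopped() {
        // safe point: the framework guarantees no onTrigger is active
        statementCache.clear();
    }

    int size() {
        return statementCache.size();
    }
}
```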


---


[GitHub] nifi pull request #2986: NIFI-3425: Provide ability to cache CQL statements

2018-09-10 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2986#discussion_r216411646
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraQL.java
 ---
@@ -399,11 +434,13 @@ protected void setStatementObject(final 
BoundStatement statement, final int para
 @OnUnscheduled
 public void stop() {
 super.stop();
+statementCache.clear();
 }
 
 @OnShutdown
 public void shutdown() {
 super.stop();
+statementCache.clear();
--- End diff --

Given that we're calling this when the processor is stopped, I don't think 
there's a need for it here.


---


[jira] [Commented] (NIFI-3425) Cache prepared statements in PutCassandraQL

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609579#comment-16609579
 ] 

ASF GitHub Bot commented on NIFI-3425:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2986#discussion_r21640
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/test/java/org/apache/nifi/processors/cassandra/PutCassandraQLTest.java
 ---
@@ -171,6 +171,98 @@ public void testProcessorHappyPathELConfig() {
 testRunner.clearTransferState();
 }
 
+@Test
+public void testMultipleQuery() {
+setUpStandardTestConfig();
+testRunner.setProperty(PutCassandraQL.STATEMENT_CACHE_SIZE, "1");
+
+testRunner.enqueue("INSERT INTO users (user_id, first_name, 
last_name, properties, bits, scaleset, largenum, scale, byteobject, ts) VALUES 
?, ?, ?, ?, ?, ?, ?, ?, ?, ?",
+new HashMap() {
--- End diff --

It is probably best to avoid the anonymous inner class and just use a HashMap 
and then call its put() methods, no? It also appears to be the same values each 
time, so you can create just a single HashMap and then pass it each time.
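
A minimal sketch of the suggestion: build the map once with put() calls and 
reuse it on every enqueue. The `cql.args.N.type`/`cql.args.N.value` keys and 
the values below are illustrative, not the PR's actual test data:

```java
import java.util.HashMap;
import java.util.Map;

// Build the attribute map once and pass the same instance on each
// enqueue, instead of allocating an anonymous HashMap subclass per call.
class EnqueueAttrs {
    static Map<String, String> cassandraArgs() {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("cql.args.1.type", "int");   // illustrative parameter typing
        attrs.put("cql.args.1.value", "0");
        attrs.put("cql.args.2.type", "text");
        attrs.put("cql.args.2.value", "Joe");
        // e.g. testRunner.enqueue(cql, attrs) with this same map each time
        return attrs;
    }
}
```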


> Cache prepared statements in PutCassandraQL
> ---
>
> Key: NIFI-3425
> URL: https://issues.apache.org/jira/browse/NIFI-3425
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Attachments: NIFI-3425.patch
>
>
> Currently PutCassandraQL supports prepared statements (using ? parameters 
> much like JDBC SQL statements) via flow file attributes specifying the type 
> and value of the parameters.
> However, the prepared statement is created (and thus possibly re-created) for 
> each flow file, which neuters its effectiveness over literal CQL statements. 
> The driver warns of this:
> 2017-01-31 14:30:54,287 WARN [cluster1-worker-1] 
> com.datastax.driver.core.Cluster Re-preparing already prepared query insert 
> into test_table (id, timestamp, id1, timestamp1, id
> 2, timestamp2, id3, timestamp3, id4, timestamp4, id5, timestamp5, id6, 
> timestamp6) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);. Please note 
> that preparing the same query
>  more than once is generally an anti-pattern and will likely affect 
> performance. Consider preparing the statement only once.
> Prepared statements should be cached and reused where possible.
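
One common way to bound such a cache is an access-ordered LinkedHashMap; the 
following is only a sketch of the caching idea under an assumed capacity 
setting, not NiFi's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch: an access-ordered LinkedHashMap that evicts the
// least recently used prepared statement once a configured capacity
// is exceeded, so hot statements stay prepared only once.
class PreparedStatementCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    PreparedStatementCache(int capacity) {
        super(16, 0.75f, true);  // true = access order, giving LRU eviction
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```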





[GitHub] nifi issue #2983: NIFI-5566 Improve HashContent processor and standardize Ha...

2018-09-10 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2983
  
Those attributes were added by @ottobackwards in the original PR for this 
issue. My understanding of the scenarios is as follows:

**Flowfile A**
*username*: `alopresto`
*email*: `alopre...@apache.org`

**Flowfile B**
*username*: `alopresto`

**Flowfile C**
*no attributes*

### Processor Configuration 1 (Allow partial):

**Fail when no attributes present**: `true`
**Missing attribute policy**: `Allow missing attributes`

**Flowfile A** -> *success*
**Flowfile B** -> *success*
**Flowfile C** -> *failure*

### Processor Configuration 2 (Fail on partial):

**Fail when no attributes present**: `true`
**Missing attribute policy**: `Fail if missing attributes`

**Flowfile A** -> *success*
**Flowfile B** -> *failure*
**Flowfile C** -> *failure*

### Processor Configuration 3 (Allow empty):

**Fail when no attributes present**: `false`
**Missing attribute policy**: `Allow missing attributes`

**Flowfile A** -> *success*
**Flowfile B** -> *success*
**Flowfile C** -> *success*

### Processor Configuration 4 (Allow empty but fail partial):

**Fail when no attributes present**: `false`
**Missing attribute policy**: `Fail if missing attributes`

**Flowfile A** -> *success*
**Flowfile B** -> *failure*
**Flowfile C** -> *success*
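
The four configurations above reduce to a small decision rule; the following 
is an assumed model of that behavior (`routeToSuccess` and its parameters are 
hypothetical names, not the processor's API):

```java
// present = number of configured attributes found on the flow file,
// configured = number of attributes the processor is set to hash.
class AttributeHashRouting {
    static boolean routeToSuccess(int present, int configured,
                                  boolean failWhenNonePresent,
                                  boolean failIfMissing) {
        if (present == 0) {
            return !failWhenNonePresent;  // Flowfile C cases
        }
        if (present < configured) {
            return !failIfMissing;        // Flowfile B cases
        }
        return true;                      // all present: always success
    }
}
```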


---


[jira] [Commented] (NIFI-5566) Bring HashContent inline with HashService and rename legacy components

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609517#comment-16609517
 ] 

ASF GitHub Bot commented on NIFI-5566:
--

Github user thenatog commented on the issue:

https://github.com/apache/nifi/pull/2983
  
How do "Fail when no attributes present" and "Missing attribute policy" 
work?

Using your template, "CryptographicHashAttribute (New)" processor fails 
files because the "static_sha256" attribute is not present.

"Fail when no attributes present" sounds like it implies that it will fail 
if ALL attributes are not present, but it seems that it will fail if at least 
one is not present. From this, I take it that "Missing attribute policy" 
applies to null attributes and not attributes that are not present. I also 
noticed that if "Fail when no attributes present" is false, it will pass to 
success but none of the present attributes (say 2 out of 3 were present) will 
be hashed. 

I suggest maybe that these config properties could be combined into "If 
missing attribute(s): { Route to Failure, Route to Success}", where missing 
means either the attribute is null or not present. If it is configured as route 
to success, it will still hash any present attributes that were configured.




> Bring HashContent inline with HashService and rename legacy components
> --
>
> Key: NIFI-5566
> URL: https://issues.apache.org/jira/browse/NIFI-5566
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.1
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: backwards-compatibility, hash, security
>
> As documented in [NIFI-5147|https://issues.apache.org/jira/browse/NIFI-5147] 
> and [PR 2980|https://github.com/apache/nifi/pull/2980], the {{HashAttribute}} 
> processor and {{HashContent}} processor are lacking some features, do not 
> offer consistent algorithms across platforms, etc. 
> I propose the following:
> * Rename {{HashAttribute}} (which does not provide the service of calculating 
> a hash over one or more attributes) to {{HashAttributeLegacy}}
> * Rename {{CalculateAttributeHash}} to {{HashAttribute}} to make semantic 
> sense
> * Rename {{HashContent}} to {{HashContentLegacy}} for users who need obscure 
> digest algorithms which may or may not have been offered on their platform
> * Implement a processor {{HashContent}} with similar semantics to the 
> existing processor but with consistent algorithm offerings and using the 
> common {{HashService}} offering
> With the new component versioning features provided as part of the flow 
> versioning behavior, silently disrupting existing flows which use these 
> processors is no longer a concern. Rather, any flow currently using the 
> existing processors will either:
> 1. continue normal operation
> 1. require flow manager interaction and provide documentation about the change
>   1. migration notes and upgrade instructions will be provided







[jira] [Commented] (NIFI-4974) Uncaught AbstractMethodError when converting SQL results to Avro with JTDS driver

2018-09-10 Thread Colin Dean (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609510#comment-16609510
 ] 

Colin Dean commented on NIFI-4974:
--

The workaround for this is to add {{useLOBs=false;}} to the connection string.
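
Assuming a standard jTDS URL, the property goes at the end of the connection 
string; host, port, and database name below are placeholders, not a real 
deployment:

```java
// useLOBs=false makes the driver return String/byte[] instead of
// Clob/Blob objects, sidestepping the unimplemented Clob.free().
class JtdsUrl {
    static String workaroundUrl(String host, int port, String db) {
        return "jdbc:jtds:sqlserver://" + host + ":" + port + "/" + db + ";useLOBs=false";
    }
}
```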

> Uncaught AbstractMethodError when converting SQL results to Avro with JTDS 
> driver
> -
>
> Key: NIFI-4974
> URL: https://issues.apache.org/jira/browse/NIFI-4974
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: Windows Server 2018, MS SQL Server
>Reporter: Colin Dean
>Priority: Major
>
> I'm using the [jtds|https://jtds.sourceforge.net] driver to retrieve data 
> from a Microsoft SQL Server installation. This is the first time I've come 
> upon this kind of error while using JTDS so I'm not sure if it's a defect in 
> JTDS or in NiFi.
> {code:java}
> java.lang.AbstractMethodError: null
>     at net.sourceforge.jtds.jdbc.ClobImpl.free(ClobImpl.java:219)
>     at 
> org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:288)
>     at 
> org.apache.nifi.processors.standard.ExecuteSQL$2.process(ExecuteSQL.java:230)
>     at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
>     at 
> org.apache.nifi.processors.standard.ExecuteSQL.onTrigger(ExecuteSQL.java:218)
>     at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>     at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
>     at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>     at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>     at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
> Looking at the JTDS source code, provided that the Github repo I'm looking at 
> is up to date with the SF repo, it appears that [this call to 
> {{free()}}|https://github.com/milesibastos/jTDS/blob/master/src/main/net/sourceforge/jtds/jdbc/ClobImpl.java#L219]
>  is intended to generate an error from JTDS:
> {code}
> /// JDBC4 demarcation, do NOT put any JDBC3 code below this line ///
> public void free() throws SQLException {
> // TODO Auto-generated method stub
> throw new AbstractMethodError();
> }
> {code}
> We're going to try to use the official MS SQL JDBC driver as a workaround. I 
> minimally want to document this in case someone else encounters this problem 
> or there's a way to workaround it within NiFi.
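
One defensive pattern for callers (a sketch only, not NiFi's actual fix) is to 
treat `free()` as best-effort, since pre-JDBC4 drivers such as jTDS throw 
`AbstractMethodError` instead of implementing it:

```java
import java.sql.Clob;
import java.sql.SQLException;

// Best-effort free(): swallow the AbstractMethodError a pre-JDBC4
// driver throws, so one unimplemented method does not fail the whole
// result-set conversion.
class LobCleanup {
    static boolean safeFree(Clob clob) {
        try {
            clob.free();
            return true;
        } catch (AbstractMethodError | SQLException e) {
            return false;  // driver cannot free; treat as already released
        }
    }
}
```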





[jira] [Commented] (NIFI-5510) Allow records to be put directly into Cassandra

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609347#comment-16609347
 ] 

ASF GitHub Bot commented on NIFI-5510:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216353534
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.cassandra;
+
+import com.datastax.driver.core.BatchStatement;
+import com.datastax.driver.core.ConsistencyLevel;
+import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.AuthenticationException;
+import com.datastax.driver.core.exceptions.NoHostAvailableException;
+import com.datastax.driver.core.querybuilder.Insert;
+import com.datastax.driver.core.querybuilder.QueryBuilder;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StopWatch;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+@Tags({"cassandra", "cql", "put", "insert", "update", "set", "record"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Writes the content of the incoming FlowFile as 
individual records to Apache Cassandra using native protocol version 3 or 
higher.")
+public class PutCassandraRecord extends AbstractCassandraProcessor {
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new 
PropertyDescriptor.Builder()
+.name("put-cassandra-record-reader")
+.displayName("Record Reader")
+.description("Specifies the type of Record Reader controller 
service to use for parsing the incoming data " +
+"and determining the schema")
+.identifiesControllerService(RecordReaderFactory.class)
+.required(true)
+.build();
+
+static final PropertyDescriptor TABLE = new 
PropertyDescriptor.Builder()
+.name("put-cassandra-record-table")
+.displayName("Table name")
+.description("The name of the Cassandra table to which the 
records have to be written.")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
+  

[jira] [Commented] (NIFI-5510) Allow records to be put directly into Cassandra

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609346#comment-16609346
 ] 

ASF GitHub Bot commented on NIFI-5510:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216353686
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.cassandra;
+
+import com.datastax.driver.core.BatchStatement;
+import com.datastax.driver.core.ConsistencyLevel;
+import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.AuthenticationException;
+import com.datastax.driver.core.exceptions.NoHostAvailableException;
+import com.datastax.driver.core.querybuilder.Insert;
+import com.datastax.driver.core.querybuilder.QueryBuilder;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StopWatch;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+@Tags({"cassandra", "cql", "put", "insert", "update", "set", "record"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Writes the content of the incoming FlowFile as 
individual records to Apache Cassandra using native protocol version 3 or 
higher.")
+public class PutCassandraRecord extends AbstractCassandraProcessor {
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new 
PropertyDescriptor.Builder()
+.name("put-cassandra-record-reader")
+.displayName("Record Reader")
+.description("Specifies the type of Record Reader controller 
service to use for parsing the incoming data " +
+"and determining the schema")
+.identifiesControllerService(RecordReaderFactory.class)
+.required(true)
+.build();
+
+static final PropertyDescriptor TABLE = new 
PropertyDescriptor.Builder()
+.name("put-cassandra-record-table")
+.displayName("Table name")
+.description("The name of the Cassandra table to which the 
records have to be written.")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
+  

[jira] [Commented] (NIFI-5510) Allow records to be put directly into Cassandra

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609348#comment-16609348
 ] 

ASF GitHub Bot commented on NIFI-5510:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216070172
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.cassandra;
+
+import com.datastax.driver.core.BatchStatement;
+import com.datastax.driver.core.ConsistencyLevel;
+import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.AuthenticationException;
+import com.datastax.driver.core.exceptions.NoHostAvailableException;
+import com.datastax.driver.core.querybuilder.Insert;
+import com.datastax.driver.core.querybuilder.QueryBuilder;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StopWatch;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+@Tags({"cassandra", "cql", "put", "insert", "update", "set", "record"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Writes the content of the incoming FlowFile as 
individual records to Apache Cassandra using native protocol version 3 or 
higher.")
+public class PutCassandraRecord extends AbstractCassandraProcessor {
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new 
PropertyDescriptor.Builder()
+.name("put-cassandra-record-reader")
+.displayName("Record Reader")
+.description("Specifies the type of Record Reader controller 
service to use for parsing the incoming data " +
+"and determining the schema")
+.identifiesControllerService(RecordReaderFactory.class)
+.required(true)
+.build();
+
+static final PropertyDescriptor TABLE = new 
PropertyDescriptor.Builder()
+.name("put-cassandra-record-table")
+.displayName("Table name")
+.description("The name of the Cassandra table to which the 
records have to be written.")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
+  

[jira] [Commented] (NIFI-5510) Allow records to be put directly into Cassandra

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609349#comment-16609349
 ] 

ASF GitHub Bot commented on NIFI-5510:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216355406
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/pom.xml ---
@@ -64,5 +64,20 @@
 nifi-ssl-context-service
 test
 
+
--- End diff --

No new dependencies from outside of our project, so it looks good.


> Allow records to be put directly into Cassandra
> ---
>
> Key: NIFI-5510
> URL: https://issues.apache.org/jira/browse/NIFI-5510
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> Currently the standard way of getting data into Cassandra through NiFi is to 
> use PutCassandraQL, which often means raw data needs to be converted to CQL 
> statements, usually done (with modifications) via ConvertJSONToSQL.
> It would be better to have something closer to PutDatabaseRecord, a processor 
> called PutCassandraRecord perhaps, that would take the raw data and input it 
> into Cassandra "directly", without the need for the user to convert the data 
> into CQL statements.
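
The record-to-statement mapping can be sketched in plain Java (illustrative 
only; the real processor would bind values through the DataStax QueryBuilder 
rather than concatenating statement text):

```java
import java.util.Map;
import java.util.stream.Collectors;

// Map a record's field names to a parameterized CQL insert for the
// target table, one placeholder per field.
class RecordToInsert {
    static String toInsertCql(String table, Map<String, Object> fields) {
        String cols = String.join(", ", fields.keySet());
        String marks = fields.keySet().stream()
                .map(k -> "?")
                .collect(Collectors.joining(", "));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + marks + ")";
    }
}
```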





[jira] [Commented] (NIFI-5510) Allow records to be put directly into Cassandra

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609350#comment-16609350
 ] 

ASF GitHub Bot commented on NIFI-5510:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216354635
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.cassandra;
+
+import com.datastax.driver.core.BatchStatement;
+import com.datastax.driver.core.ConsistencyLevel;
+import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.AuthenticationException;
+import com.datastax.driver.core.exceptions.NoHostAvailableException;
+import com.datastax.driver.core.querybuilder.Insert;
+import com.datastax.driver.core.querybuilder.QueryBuilder;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StopWatch;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+@Tags({"cassandra", "cql", "put", "insert", "update", "set", "record"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Writes the content of the incoming FlowFile as individual records to Apache Cassandra using native protocol version 3 or higher.")
+public class PutCassandraRecord extends AbstractCassandraProcessor {
+
+    static final PropertyDescriptor RECORD_READER_FACTORY = new PropertyDescriptor.Builder()
+            .name("put-cassandra-record-reader")
+            .displayName("Record Reader")
+            .description("Specifies the type of Record Reader controller service to use for parsing the incoming data " +
+                    "and determining the schema")
+            .identifiesControllerService(RecordReaderFactory.class)
+            .required(true)
+            .build();
+
+    static final PropertyDescriptor TABLE = new PropertyDescriptor.Builder()
+            .name("put-cassandra-record-table")
+            .displayName("Table name")
+            .description("The name of the Cassandra table to which the records have to be written.")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)

[jira] [Commented] (NIFI-5510) Allow records to be put directly into Cassandra

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609351#comment-16609351
 ] 

ASF GitHub Bot commented on NIFI-5510:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216353050
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.cassandra;
+
+import com.datastax.driver.core.BatchStatement;
+import com.datastax.driver.core.ConsistencyLevel;
+import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.AuthenticationException;
+import com.datastax.driver.core.exceptions.NoHostAvailableException;
+import com.datastax.driver.core.querybuilder.Insert;
+import com.datastax.driver.core.querybuilder.QueryBuilder;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StopWatch;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+@Tags({"cassandra", "cql", "put", "insert", "update", "set", "record"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Writes the content of the incoming FlowFile as individual records to Apache Cassandra using native protocol version 3 or higher.")
+public class PutCassandraRecord extends AbstractCassandraProcessor {
+
+    static final PropertyDescriptor RECORD_READER_FACTORY = new PropertyDescriptor.Builder()
+            .name("put-cassandra-record-reader")
+            .displayName("Record Reader")
+            .description("Specifies the type of Record Reader controller service to use for parsing the incoming data " +
+                    "and determining the schema")
+            .identifiesControllerService(RecordReaderFactory.class)
+            .required(true)
+            .build();
+
+    static final PropertyDescriptor TABLE = new PropertyDescriptor.Builder()
+            .name("put-cassandra-record-table")
+            .displayName("Table name")
+            .description("The name of the Cassandra table to which the records have to be written.")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)

[jira] [Commented] (NIFI-5510) Allow records to be put directly into Cassandra

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609345#comment-16609345
 ] 

ASF GitHub Bot commented on NIFI-5510:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216070922
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@

[jira] [Commented] (NIFI-5510) Allow records to be put directly into Cassandra

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609344#comment-16609344
 ] 

ASF GitHub Bot commented on NIFI-5510:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216069857
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@
+@Tags({"cassandra", "cql", "put", "insert", "update", "set", "record"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Writes the content of the incoming FlowFile as individual records to Apache Cassandra using native protocol version 3 or higher.")
--- End diff --

It would be helpful to have more content here, particularly explicitly 
calling out that it is a record-aware processor.
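A richer annotation along the lines the reviewer suggests might read something like the following (the wording here is purely illustrative, not the text that was ultimately committed to the PR):

```java
@CapabilityDescription("This is a record-aware processor that reads the content of the incoming "
        + "FlowFile as records via the configured Record Reader, and writes each record to "
        + "Apache Cassandra using CQL over native protocol version 3 or higher.")
```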


> Allow records to be put directly into Cassandra
> ---
>
> Key: NIFI-5510
> URL: https://issues.apache.org/jira/browse/NIFI-5510
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> Currently the standard way of getting data into Cassandra through NiFi is to 
> use PutCassandraQL, which often means raw data needs to be converted to CQL 
> statements, usually done (with modifications) via ConvertJSONToSQL.
> It would be better to have something closer to PutDatabaseRecord, a processor 
> called PutCassandraRecord perhaps, that would take the raw data and input it 
> into Cassandra "directly", without the 
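The record-to-CQL mapping the issue describes can be sketched roughly as below. This is a hypothetical, driver-free illustration of the core idea only, not the PR's code: the class `RecordToCqlSketch` and method `toInsertCql` are invented for the sketch, and a real processor would build statements with the DataStax `QueryBuilder` and bound values rather than formatting literals into the statement text.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: turn one "record" (field name -> value) into a CQL INSERT statement,
// which is the essence of putting records into Cassandra "directly" instead of
// converting JSON to CQL up front.
public class RecordToCqlSketch {

    static String toInsertCql(String table, Map<String, Object> record) {
        String columns = String.join(", ", record.keySet());
        String values = record.values().stream()
                .map(RecordToCqlSketch::literal)
                .collect(Collectors.joining(", "));
        return "INSERT INTO " + table + " (" + columns + ") VALUES (" + values + ")";
    }

    // Minimal literal formatting: single-quote strings (doubling embedded
    // quotes), pass everything else through via String.valueOf.
    static String literal(Object value) {
        if (value instanceof String) {
            return "'" + ((String) value).replace("'", "''") + "'";
        }
        return String.valueOf(value);
    }

    public static void main(String[] args) {
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("id", 1);
        record.put("name", "alice");
        // prints: INSERT INTO users (id, name) VALUES (1, 'alice')
        System.out.println(toInsertCql("users", record));
    }
}
```

In the actual processor the Record Reader supplies the field names and values from the FlowFile's schema, so the statement shape falls out of the record itself rather than from a separate JSON-to-SQL conversion step.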

[GitHub] nifi pull request #2992: NIFI-5510: Introducing PutCassandraRecord processor

2018-09-10 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216070172
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@
+            .addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .build();
+
+    static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()

[GitHub] nifi pull request #2992: NIFI-5510: Introducing PutCassandraRecord processor

2018-09-10 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216353686
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@

[GitHub] nifi pull request #2992: NIFI-5510: Introducing PutCassandraRecord processor

2018-09-10 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216353050
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@

[GitHub] nifi pull request #2992: NIFI-5510: Introducing PutCassandraRecord processor

2018-09-10 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216354635
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@

[GitHub] nifi pull request #2992: NIFI-5510: Introducing PutCassandraRecord processor

2018-09-10 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216353534
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.cassandra;
+
+import com.datastax.driver.core.BatchStatement;
+import com.datastax.driver.core.ConsistencyLevel;
+import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.AuthenticationException;
+import com.datastax.driver.core.exceptions.NoHostAvailableException;
+import com.datastax.driver.core.querybuilder.Insert;
+import com.datastax.driver.core.querybuilder.QueryBuilder;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StopWatch;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+@Tags({"cassandra", "cql", "put", "insert", "update", "set", "record"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Writes the content of the incoming FlowFile as 
individual records to Apache Cassandra using native protocol version 3 or 
higher.")
+public class PutCassandraRecord extends AbstractCassandraProcessor {
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new 
PropertyDescriptor.Builder()
+.name("put-cassandra-record-reader")
+.displayName("Record Reader")
+.description("Specifies the type of Record Reader controller 
service to use for parsing the incoming data " +
+"and determining the schema")
+.identifiesControllerService(RecordReaderFactory.class)
+.required(true)
+.build();
+
+static final PropertyDescriptor TABLE = new 
PropertyDescriptor.Builder()
+.name("put-cassandra-record-table")
+.displayName("Table name")
+.description("The name of the Cassandra table to which the 
records have to be written.")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.build();
+
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+

[GitHub] nifi pull request #2992: NIFI-5510: Introducing PutCassandraRecord processor

2018-09-10 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216355406
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/pom.xml ---
@@ -64,5 +64,20 @@
 nifi-ssl-context-service
 test
 
+
--- End diff --

No new dependencies from outside of our project, so it looks good.


---


[GitHub] nifi pull request #2992: NIFI-5510: Introducing PutCassandraRecord processor

2018-09-10 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216070922
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.cassandra;
+
+import com.datastax.driver.core.BatchStatement;
+import com.datastax.driver.core.ConsistencyLevel;
+import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.AuthenticationException;
+import com.datastax.driver.core.exceptions.NoHostAvailableException;
+import com.datastax.driver.core.querybuilder.Insert;
+import com.datastax.driver.core.querybuilder.QueryBuilder;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StopWatch;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+@Tags({"cassandra", "cql", "put", "insert", "update", "set", "record"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Writes the content of the incoming FlowFile as 
individual records to Apache Cassandra using native protocol version 3 or 
higher.")
+public class PutCassandraRecord extends AbstractCassandraProcessor {
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new 
PropertyDescriptor.Builder()
+.name("put-cassandra-record-reader")
+.displayName("Record Reader")
+.description("Specifies the type of Record Reader controller 
service to use for parsing the incoming data " +
+"and determining the schema")
+.identifiesControllerService(RecordReaderFactory.class)
+.required(true)
+.build();
+
+static final PropertyDescriptor TABLE = new 
PropertyDescriptor.Builder()
+.name("put-cassandra-record-table")
+.displayName("Table name")
+.description("The name of the Cassandra table to which the 
records have to be written.")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.build();
+
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+

[GitHub] nifi pull request #2992: NIFI-5510: Introducing PutCassandraRecord processor

2018-09-10 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2992#discussion_r216069857
  
--- Diff: 
nifi-nar-bundles/nifi-cassandra-bundle/nifi-cassandra-processors/src/main/java/org/apache/nifi/processors/cassandra/PutCassandraRecord.java
 ---
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.cassandra;
+
+import com.datastax.driver.core.BatchStatement;
+import com.datastax.driver.core.ConsistencyLevel;
+import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.AuthenticationException;
+import com.datastax.driver.core.exceptions.NoHostAvailableException;
+import com.datastax.driver.core.querybuilder.Insert;
+import com.datastax.driver.core.querybuilder.QueryBuilder;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.annotation.lifecycle.OnUnscheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StopWatch;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+@Tags({"cassandra", "cql", "put", "insert", "update", "set", "record"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Writes the content of the incoming FlowFile as 
individual records to Apache Cassandra using native protocol version 3 or 
higher.")
--- End diff --

It would be helpful to have more content here, particularly explicitly 
calling out that it is a record-aware processor.


---


[jira] [Resolved] (NIFI-5581) Seeing timeouts when trying to replicate requests across the cluster

2018-09-10 Thread Matt Gilman (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-5581.
---
Resolution: Fixed

> Seeing timeouts when trying to replicate requests across the cluster
> 
>
> Key: NIFI-5581
> URL: https://issues.apache.org/jira/browse/NIFI-5581
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Blocker
> Fix For: 1.8.0
>
>
> When trying to replicate requests across the cluster on the current master 
> branch, I see everything go smoothly for GET requests, but all mutable 
> requests time out.
> This issue appears to have been introduced by the upgrade to a new version of 
> Jetty.





[jira] [Created] (NIFI-5584) Reorder if statement in DataTypeUtils.toTimestamp so that Timestamp comes before Date

2018-09-10 Thread Gideon Korir (JIRA)
Gideon Korir created NIFI-5584:
--

 Summary: Reorder if statement in DataTypeUtils.toTimestamp so that 
Timestamp comes before Date
 Key: NIFI-5584
 URL: https://issues.apache.org/jira/browse/NIFI-5584
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.7.1
 Environment: RHEL, JDK 8
Reporter: Gideon Korir


The method DataTypeUtils.toTimestamp in package nifi-record has the if 
statement structured as follows:
{code:java}
public static Timestamp toTimestamp(final Object value, final 
Supplier<DateFormat> format, final String fieldName) {
if (value == null) {
return null;
}

if (value instanceof java.util.Date) {
return new Timestamp(((java.util.Date)value).getTime());
}

if (value instanceof Timestamp) {
return (Timestamp) value;
}
{code}
Since Timestamp extends java.util.Date, a value of type Timestamp always matches 
the first if statement and allocates a new Timestamp object. The check for 
Timestamp should therefore come before the java.util.Date check.
 





[jira] [Commented] (NIFI-5581) Seeing timeouts when trying to replicate requests across the cluster

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609216#comment-16609216
 ] 

ASF GitHub Bot commented on NIFI-5581:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2996


> Seeing timeouts when trying to replicate requests across the cluster
> 
>
> Key: NIFI-5581
> URL: https://issues.apache.org/jira/browse/NIFI-5581
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Blocker
> Fix For: 1.8.0
>
>
> When trying to replicate requests across the cluster on the current master 
> branch, I see everything go smoothly for GET requests, but all mutable 
> requests time out.
> This issue appears to have been introduced by the upgrade to a new version of 
> Jetty.





[jira] [Commented] (NIFI-5581) Seeing timeouts when trying to replicate requests across the cluster

2018-09-10 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609212#comment-16609212
 ] 

ASF subversion and git services commented on NIFI-5581:
---

Commit 87cf474e542ef16601a86cc66c624fb8902c9fc2 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=87cf474 ]

NIFI-5581: Disable connection pooling for OkHttpReplicationClient. We should 
revisit this in the future, but for now, it appears that Jetty is having 
problems with the connections if they are reused. By disabling the Connection 
Pooling, we address the concern, but for secure connections this means that 
every request results in a TLS handshake - and for a mutable request, both the 
verification and the 'performance' stages require the TLS handshake. But it's 
better than timing out, which is the currently observed behavior

This closes #2996


> Seeing timeouts when trying to replicate requests across the cluster
> 
>
> Key: NIFI-5581
> URL: https://issues.apache.org/jira/browse/NIFI-5581
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Blocker
> Fix For: 1.8.0
>
>
> When trying to replicate requests across the cluster on the current master 
> branch, I see everything go smoothly for GET requests, but all mutable 
> requests time out.
> This issue appears to have been introduced by the upgrade to a new version of 
> Jetty.





[GitHub] nifi pull request #2996: NIFI-5581: Disable connection pooling for OkHttpRep...

2018-09-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2996


---


[jira] [Commented] (NIFI-5166) Create deep learning classification and regression processor

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609194#comment-16609194
 ] 

ASF GitHub Bot commented on NIFI-5166:
--

Github user mans2singh commented on the issue:

https://github.com/apache/nifi/pull/2686
  
@markap14 @ijokarumawak @jzonthemtn and Nifi Team: Please let me know if 
you have any additional comments for this processor.  Thanks.


> Create deep learning classification and regression processor
> 
>
> Key: NIFI-5166
> URL: https://issues.apache.org/jira/browse/NIFI-5166
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: Learning, classification, deep, regression
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> We need a deep learning classification and regression processor.





[GitHub] nifi issue #2686: NIFI-5166 - Deep learning classification and regression pr...

2018-09-10 Thread mans2singh
Github user mans2singh commented on the issue:

https://github.com/apache/nifi/pull/2686
  
@markap14 @ijokarumawak @jzonthemtn and Nifi Team: Please let me know if 
you have any additional comments for this processor.  Thanks.


---


[jira] [Commented] (NIFI-5581) Seeing timeouts when trying to replicate requests across the cluster

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609165#comment-16609165
 ] 

ASF GitHub Bot commented on NIFI-5581:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2996
  
Will review...


> Seeing timeouts when trying to replicate requests across the cluster
> 
>
> Key: NIFI-5581
> URL: https://issues.apache.org/jira/browse/NIFI-5581
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Blocker
> Fix For: 1.8.0
>
>
> When trying to replicate requests across the cluster on the current master 
> branch, I see everything go smoothly for GET requests, but all mutable 
> requests time out.
> This issue appears to have been introduced by the upgrade to a new version of 
> Jetty.





[GitHub] nifi issue #2996: NIFI-5581: Disable connection pooling for OkHttpReplicatio...

2018-09-10 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2996
  
Will review...


---


[jira] [Created] (MINIFICPP-603) Fill gaps in C2 responses for Windows

2018-09-10 Thread Mr TheSegfault (JIRA)
Mr TheSegfault created MINIFICPP-603:


 Summary: Fill gaps in C2 responses for Windows
 Key: MINIFICPP-603
 URL: https://issues.apache.org/jira/browse/MINIFICPP-603
 Project: NiFi MiNiFi C++
  Issue Type: Sub-task
Reporter: Mr TheSegfault
Assignee: Mr TheSegfault


C2 responses aren't functionally complete on Windows. This ticket is meant to 
fill those gaps.





[jira] [Updated] (MINIFICPP-32) MINIFI-CPP should support reading windows event logs

2018-09-10 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-32?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault updated MINIFICPP-32:

Status: Patch Available  (was: Open)

Created basic WEL reader

> MINIFI-CPP should support reading windows event logs
> 
>
> Key: MINIFICPP-32
> URL: https://issues.apache.org/jira/browse/MINIFICPP-32
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: Andre F de Miranda
>Assignee: Mr TheSegfault
>Priority: Major
>
> MINIFI-CPP could be a great alternative to tools like nxlog if it was able to:
> - Run as a service (NIFI-89)
> - Read windows event files (NIFI-90)





[jira] [Assigned] (MINIFICPP-32) MINIFI-CPP should support reading windows event logs

2018-09-10 Thread Mr TheSegfault (JIRA)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-32?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mr TheSegfault reassigned MINIFICPP-32:
---

Assignee: Mr TheSegfault

> MINIFI-CPP should support reading windows event logs
> 
>
> Key: MINIFICPP-32
> URL: https://issues.apache.org/jira/browse/MINIFICPP-32
> Project: NiFi MiNiFi C++
>  Issue Type: Sub-task
>Reporter: Andre F de Miranda
>Assignee: Mr TheSegfault
>Priority: Major
>
> MINIFI-CPP could be a great alternative to tools like nxlog if it was able to:
> - Run as a service (NIFI-89)
> - Read windows event files (NIFI-90)





[jira] [Updated] (NIFI-5583) Make Change Data Capture (CDC) processor for MySQL refer to GTID

2018-09-10 Thread Yoshiaki Takahashi (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yoshiaki Takahashi updated NIFI-5583:
-
Summary: Make Change Data Capture (CDC) processor for MySQL refer to GTID  
(was: Make Change Data Capture (CDC) processor for MySQL referring to GTID)

> Make Change Data Capture (CDC) processor for MySQL refer to GTID
> 
>
> Key: NIFI-5583
> URL: https://issues.apache.org/jira/browse/NIFI-5583
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.7.1
>Reporter: Yoshiaki Takahashi
>Priority: Major
>
> When connecting to a MySQL cluster consisting of multiple hosts, the existing CDC 
> processor cannot automatically switch to another host when the connected host 
> goes down.
> The reason is that the file names and positions of the binary log may differ 
> for each host in the same cluster.
> In such a case it is impossible to continue reading from the same 
> position.
> To continue reading in such cases, a processor that refers to the GTID 
> (Global Transaction ID) is necessary.
> A GTID is an ID that uniquely identifies a transaction, and it is recorded in 
> the binary log of MySQL.





[jira] [Created] (NIFI-5583) Make Change Data Capture (CDC) processor for MySQL referring to GTID

2018-09-10 Thread Yoshiaki Takahashi (JIRA)
Yoshiaki Takahashi created NIFI-5583:


 Summary: Make Change Data Capture (CDC) processor for MySQL 
referring to GTID
 Key: NIFI-5583
 URL: https://issues.apache.org/jira/browse/NIFI-5583
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Affects Versions: 1.7.1
Reporter: Yoshiaki Takahashi


When connecting to a MySQL cluster consisting of multiple hosts, the existing CDC 
processor cannot automatically switch to another host when the connected host 
goes down.
The reason is that the file names and positions of the binary log may differ for 
each host in the same cluster.
In such a case it is impossible to continue reading from the same position.

To continue reading in such cases, a processor that refers to the GTID 
(Global Transaction ID) is necessary.
A GTID is an ID that uniquely identifies a transaction, and it is recorded in the 
binary log of MySQL.
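For reference, such a processor can only track positions by GTID if every host in the cluster runs with GTIDs enabled. A minimal my.cnf sketch (the server_id and log base name are illustrative):

```ini
[mysqld]
gtid_mode = ON
enforce_gtid_consistency = ON
log_bin = mysql-bin
server_id = 1
```

Each replica in the cluster needs the same GTID settings with a distinct server_id.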





[jira] [Commented] (NIFI-5573) Allow overriding of nifi-env.sh

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608897#comment-16608897
 ] 

ASF GitHub Bot commented on NIFI-5573:
--

Github user pepov commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2985#discussion_r216244697
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/bin/nifi-env.sh
 ---
@@ -16,16 +16,38 @@
 #limitations under the License.
 #
 
+# By default this file will unconditionally override whatever environment 
variables you have set
+# and set them to defaults defined here.
+# If you want to define your own versions outside of this script please 
set the environment variable
+# NIFI_OVERRIDE_NIFIENV to "true". That will then use whatever variables 
you used outside of
+# this script.
+
 # The java implementation to use.
 #export JAVA_HOME=/usr/java/jdk1.8.0/
 
-export NIFI_HOME=$(cd "${SCRIPT_DIR}" && cd .. && pwd)
+setOrDefault() {
+  declare envvar="$1" default="$2"
+
+  local res="$envvar"
+  if [ -z "$envvar" ] || [ "$NIFI_OVERRIDE_NIFIENV" != "true" ]
--- End diff --

Didn't mean to block this. I recommend showing a warning message (probably 
to stderr) when setting a default value for a variable.


> Allow overriding of nifi-env.sh
> ---
>
> Key: NIFI-5573
> URL: https://issues.apache.org/jira/browse/NIFI-5573
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Lars Francke
>Assignee: Lars Francke
>Priority: Minor
>
> (as discussed in 
> https://lists.apache.org/thread.html/ddfbff7f371d47c6da013ff14e28bce3b353716653a01649a408d0ce@%3Cdev.nifi.apache.org%3E)
> Currently nifi-env.sh unconditionally sets NIFI_HOME, NIFI_PID_DIR, 
> NIFI_LOG_DIR and NIFI_ALLOW_EXPLICIT_KEYTAB so they can only be overridden by 
> changing nifi-env.sh.
> Other *-env.sh files I looked at (e.g. from Hadoop or HBase) have most/all 
> their settings commented out or only override variables if they have not 
> already been set outside of the *-env.sh script.
> Peter and [~joewitt] from the mailing list are in favor of keeping the 
> current behavior of the file unchanged, for fear that it might break 
> something for some people out there.
> There are a few different options I can think of on how to work around this:
>  # Have another environment variable NIFI_DISABLE_NIFIENV that basically 
> exits the nifi-env.sh script if it's set
>  # NIFI_OVERRIDE_NIFIENV which - if set to true - allows externally set 
> environment variables to override the ones in nifi-env.sh
> I'm sure there are more but those are the ones I can think of now.
> I'm in favor of option 2 as that allows me to selectively use the defaults 
> from nifi-env.sh
>  
> I can provide a patch once we've agreed on a way to go forward.
>  
> This would help me tremendously in an environment where I cannot easily alter 
> the nifi-env.sh file. This is also useful in the Docker image, which currently 
> wipes out the nifi-env.sh script so that its own environment variables take effect.





[GitHub] nifi pull request #2985: NIFI-5573 Allow overriding of nifi-env.sh

2018-09-10 Thread pepov
Github user pepov commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2985#discussion_r216244697
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/bin/nifi-env.sh
 ---
@@ -16,16 +16,38 @@
 #limitations under the License.
 #
 
+# By default this file will unconditionally override whatever environment 
variables you have set
+# and set them to defaults defined here.
+# If you want to define your own versions outside of this script please 
set the environment variable
+# NIFI_OVERRIDE_NIFIENV to "true". That will then use whatever variables 
you used outside of
+# this script.
+
 # The java implementation to use.
 #export JAVA_HOME=/usr/java/jdk1.8.0/
 
-export NIFI_HOME=$(cd "${SCRIPT_DIR}" && cd .. && pwd)
+setOrDefault() {
+  declare envvar="$1" default="$2"
+
+  local res="$envvar"
+  if [ -z "$envvar" ] || [ "$NIFI_OVERRIDE_NIFIENV" != "true" ]
--- End diff --

Didn't mean to block this. I recommend showing a warning message (probably 
to stderr) when setting a default value for a variable.


---


[jira] [Commented] (NIFI-375) New user role: Operator who can start and stop components

2018-09-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608804#comment-16608804
 ] 

ASF GitHub Bot commented on NIFI-375:
-

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2990
  
@mcgilman I hope this PR is now fully ready for review. Thanks.


> New user role: Operator who can start and stop components
> -
>
> Key: NIFI-375
> URL: https://issues.apache.org/jira/browse/NIFI-375
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Daniel Ueberfluss
>Assignee: Koji Kawamura
>Priority: Major
>
> Would like to have a user role that allows a user to stop/start processors 
> but perform no other changes to the dataflow.
> This would allow users to address simple problems without providing full 
> access to modifying a data flow. 





[GitHub] nifi issue #2990: NIFI-375: Added operation policy

2018-09-10 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2990
  
@mcgilman I hope this PR is now fully ready for review. Thanks.


---