[jira] [Commented] (NIFI-3440) Test failures on Win10 for recent test changes

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950367#comment-15950367
 ] 

ASF GitHub Bot commented on NIFI-3440:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/1638
  
@joewitt Thanks for the change. I've taken a look at the code and all the changes look good to me. +1


> Test failures on Win10 for recent test changes
> --
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
>Assignee: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch, NIFI-3340_ignore_on_windows.patch
>
>
> Running org.apache.nifi.controller.repository.io.TestLimitedInputStream
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec - in org.apache.nifi.controller.repository.io.TestLimitedInputStream
> Running org.apache.nifi.controller.repository.TestFileSystemRepository
> Tests run: 31, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 28.698 sec <<< FAILURE! - in org.apache.nifi.controller.repository.TestFileSystemRepository
> testReadWithContentArchived(org.apache.nifi.controller.repository.TestFileSystemRepository)  Time elapsed: 0.596 sec  <<< ERROR!
> java.nio.file.FileSystemException: C:\Users\nifi\nifi.git\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-framework-core\target\content_repository\1\1486264479123-1: The process cannot access the file because it is being used by another process.
> at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
> at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
> at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
> at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
> at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
> at java.nio.file.Files.deleteIfExists(Files.java:1165)
> at org.apache.nifi.controller.repository.TestFileSystemRepository.testReadWithContentArchived(TestFileSystemRepository.java:408)
> testReadWithNoContentArchived(org.apache.nifi.controller.repository.TestFileSystemRepository)  Time elapsed: 0.621 sec  <<< ERROR!
> java.lang.Exception: Unexpected exception, expected but was
> at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
> at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
> at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
> at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
> at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
> at java.nio.file.Files.deleteIfExists(Files.java:1165)
> at org.apache.nifi.controller.repository.TestFileSystemRepository.testReadWithNoContentArchived(TestFileSystemRepository.java:429)
> Running org.apache.nifi.controller.repository.TestRingBufferEventRepository
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec - in org.apache.nifi.controller.repository.TestRingBufferEventRepository
> Running org.apache.nifi.controller.repository.TestStandardProcessSession
> [~markap14] isn't this something you've addressed before, perhaps in another test?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3440) Test failures on Win10 for recent test changes

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950356#comment-15950356
 ] 

ASF GitHub Bot commented on NIFI-3440:
--

GitHub user joewitt opened a pull request:

https://github.com/apache/nifi/pull/1638

NIFI-3440 fixing tests not written for windows to not run on windows

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joewitt/incubator-nifi NIFI-3440

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1638.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1638


commit 6ec13af8dc5fa2f333c3f80e922684e0130ac550
Author: joewitt 
Date:   2017-03-31T05:38:37Z

NIFI-3440 fixing tests not written for windows to not run on windows
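
For context on the "not run on windows" approach: the patch itself is not inlined in this thread, but skipping JUnit 4 tests on a particular OS is usually done with an assumption check along the lines of the sketch below. This is only an illustration of that technique; the class and test names are made up and are not taken from the PR.

    import static org.junit.Assume.assumeFalse;

    import org.junit.BeforeClass;
    import org.junit.Test;

    // Illustrative sketch only: skip every test in the class when running on Windows.
    public class WindowsSensitiveRepositoryTest {

        @BeforeClass
        public static void skipOnWindows() {
            // A failed assumption marks the tests as skipped rather than failed.
            final String os = System.getProperty("os.name").toLowerCase();
            assumeFalse("Test relies on deleting files that may still be open", os.startsWith("windows"));
        }

        @Test
        public void testDeleteWhileStreamIsOpen() {
            // test body omitted; it would exercise the repository delete path
        }
    }

A plain @Ignore would also silence the failing tests, but on every platform; an OS check like the one above limits the skip to Windows.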




> Test failures on Win10 for recent test changes
> --
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch, NIFI-3340_ignore_on_windows.patch
>
>

[jira] [Updated] (NIFI-3440) Test failures on Win10 for recent test changes

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3440:
--
Attachment: NIFI-3340_ignore_on_windows.patch

> Test failures on Win10 for recent test changes
> --
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch, NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3440) Test failures on Win10 for recent test changes

2017-03-30 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950263#comment-15950263
 ] 

Joseph Witt commented on NIFI-3440:
---

After getting past the content repo tests, I got failures on the provenance repo, the Kafka 0.9 test, the site-to-site HTTP client, the schema registry transformers, UpdateAttribute with state, etc.

> Test failures on Win10 for recent test changes
> --
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3440) Test failures on Win10 for recent test changes

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3440:
--
Summary: Test failures on Win10 for recent test changes  (was: Test failure 
on Win10 for FileSystemRepository/content repo and provenance)

> Test failures on Win10 for recent test changes
> --
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3260) Official Docker Image

2017-03-30 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950252#comment-15950252
 ] 

Joseph Witt commented on NIFI-3260:
---

I see ... I didn't realize we were waiting on the infra piece there. We can let 
it sit until the release is ready, then discuss next best steps if still blocked.

> Official Docker Image
> -
>
> Key: NIFI-3260
> URL: https://issues.apache.org/jira/browse/NIFI-3260
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Jeremy Dyer
>Assignee: Aldrin Piri
>  Labels: docker
> Fix For: 1.2.0
>
>
> This JIRA is for setting up a Docker folder structure within the NiFi source 
> code as discussed in the dev mailing list at
> https://lists.apache.org/thread.html/e905a559cb01b30f1a7032cec5c9605685f27a65bdf7fee41b735089@%3Cdev.nifi.apache.org%3E



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3440) Test failure on Win10 for FileSystemRepository/content repo and provenance

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3440:
--
Attachment: NIFI-3340_ignore_on_windows.patch

> Test failure on Win10 for FileSystemRepository/content repo and provenance
> --
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3440) Test failure on Win10 for FileSystemRepository/content repo and provenance

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3440:
--
Attachment: (was: NIFI-3340_ignore_on_windows.patch)

> Test failure on Win10 for FileSystemRepository/content repo and provenance
> --
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3440) Test failure on Win10 for FileSystemRepository/content repo and provenance

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3440:
--
Attachment: (was: NIFI-3340_ignore_on_windows.patch)

> Test failure on Win10 for FileSystemRepository/content repo and provenance
> --
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3440) Test failure on Win10 for FileSystemRepository/content repo and provenance

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3440:
--
Summary: Test failure on Win10 for FileSystemRepository/content repo and 
provenance  (was: Test failure on Win10 for FileSystemRepository/content repo)

> Test failure on Win10 for FileSystemRepository/content repo and provenance
> --
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch, NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3440) Test failure on Win10 for FileSystemRepository/content repo and provenance

2017-03-30 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950198#comment-15950198
 ] 

Joseph Witt commented on NIFI-3440:
---

Past the content repo tests. Now a provenance repo test fails:

Tests run: 35, Failures: 1, Errors: 0, Skipped: 2, Time elapsed: 101.748 sec <<< FAILURE! - in org.apache.nifi.provenance.TestPersistentProvenanceRepository
testMergeJournalsEmptyJournal(org.apache.nifi.provenance.TestPersistentProvenanceRepository)  Time elapsed: 0.126 sec  <<< FAILURE!
java.lang.AssertionError: mergeJournals() should not error on empty journal expected:<0> but was:<16>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.apache.nifi.provenance.TestPersistentProvenanceRepository.testMergeJournalsEmptyJournal(TestPersistentProvenanceRepository.java:1998)


> Test failure on Win10 for FileSystemRepository/content repo and provenance
> --
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch, NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3440) Test failure on Win10 for FileSystemRepository/content repo

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3440:
--
Attachment: NIFI-3340_ignore_on_windows.patch

> Test failure on Win10 for FileSystemRepository/content repo
> ---
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch, NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3440) Test failure on Win10 for FileSystemRepository/content repo

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3440:
--
Attachment: NIFI-3340_ignore_on_windows.patch

> Test failure on Win10 for FileSystemRepository/content repo
> ---
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3440) Test failure on Win10 for FileSystemRepository/content repo

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3440:
--
Attachment: (was: NIFI-3340_ignore_on_windows.patch)

> Test failure on Win10 for FileSystemRepository/content repo
> ---
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3440) Test failure on Win10 for FileSystemRepository/content repo

2017-03-30 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950177#comment-15950177
 ] 

Joseph Witt commented on NIFI-3440:
---

Just added a patch which ignores the two problematic tests. If the full build 
now passes, I will push that patch to master but leave this JIRA open. It is 
not clear whether these tests really should be skipped/adjusted on Windows or 
whether they are indeed exposing a defect on Windows.
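
The other two attachments on this issue (0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch and 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch) point at a different tack: making the delete itself more tolerant of Windows file locking. Those patches are not inlined in this thread; purely as a sketch of the retry idea, with class and method names of my own choosing rather than anything taken from the patches, such a helper might look like:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Illustrative sketch only: retry a delete to ride out transient
    // "file is being used by another process" errors on Windows.
    final class DeleteRetry {

        private DeleteRetry() {
        }

        static void deleteWithRetries(final Path path, final int maxAttempts) throws IOException {
            IOException lastFailure = new IOException("No delete attempt made for " + path);
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    Files.deleteIfExists(path);
                    return;
                } catch (final IOException e) {
                    lastFailure = e;
                    try {
                        // brief pause before retrying, in case another handle is still closing
                        Thread.sleep(100L);
                    } catch (final InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw new IOException("Interrupted while retrying delete of " + path, ie);
                    }
                }
            }
            throw lastFailure;
        }
    }

The second patch name suggests falling back to the older java.io.File#delete call, which returns false on failure instead of throwing the FileSystemException seen in the stack traces above.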

> Test failure on Win10 for FileSystemRepository/content repo
> ---
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3440) Test failure on Win10 for FileSystemRepository/content repo

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3440:
--
Attachment: NIFI-3340_ignore_on_windows.patch

> Test failure on Win10 for FileSystemRepository/content repo
> ---
>
> Key: NIFI-3440
> URL: https://issues.apache.org/jira/browse/NIFI-3440
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: Windows10
>Reporter: Joseph Witt
> Fix For: 1.2.0
>
> Attachments: 
> 0001-NIFI-3440-Added-retry-logic-when-trying-to-delete-ne.patch, 
> 0001-NIFI-3440-Use-File-I-O-API-instead-of-NIO-API-for-de.patch, 
> NIFI-3340_ignore_on_windows.patch
>
>
> Running org.apache.nifi.controller.repository.io.TestLimitedInputStream
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec - in 
> org.apache.nifi.controller.repository.io.Test
> LimitedInputStream
> Running org.apache.nifi.controller.repository.TestFileSystemRepository
> Tests run: 31, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 28.698 sec 
> <<< FAILURE! - in org.apache.nifi.controller
> .repository.TestFileSystemRepository
> testReadWithContentArchived(org.apache.nifi.controller.repository.TestFileSystemRepository)
>   Time elapsed: 0.596 sec  <<
> < ERROR!
> java.nio.file.FileSystemException: 
> C:\Users\nifi\nifi.git\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-fra
> mework-core\target\content_repository\1\1486264479123-1: The process cannot 
> access the file because it is being used by
> another process.
> at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
> at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
> at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
> at 
> sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
> at 
> sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
> at java.nio.file.Files.deleteIfExists(Files.java:1165)
> at 
> org.apache.nifi.controller.repository.TestFileSystemRepository.testReadWithContentArchived(TestFileSystemRepo
> sitory.java:408)
> testReadWithNoContentArchived(org.apache.nifi.controller.repository.TestFileSystemRepository)  Time elapsed: 0.621 sec <<< ERROR!
> java.lang.Exception: Unexpected exception, expected but was
> at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
> at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
> at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
> at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
> at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
> at java.nio.file.Files.deleteIfExists(Files.java:1165)
> at org.apache.nifi.controller.repository.TestFileSystemRepository.testReadWithNoContentArchived(TestFileSystemRepository.java:429)
> Running org.apache.nifi.controller.repository.TestRingBufferEventRepository
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec - in org.apache.nifi.controller.repository.TestRingBufferEventRepository
> Running org.apache.nifi.controller.repository.TestStandardProcessSession
> [~markap14] isn't this something you've addressed before, perhaps in another test?
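
The attached patches point at two possible fixes: retrying the delete and falling back to the java.io.File API, which reports an in-use file by returning false rather than throwing. A minimal sketch of that retry idea, assuming hypothetical names and retry counts (this is not the actual test code):

{code:java}
import java.io.File;
import java.io.IOException;

class WindowsSafeDelete {
    // Retry deletion because Windows keeps reporting "in use" until every handle is closed.
    static void deleteWithRetries(final File file) throws IOException, InterruptedException {
        for (int i = 0; i < 10; i++) {
            // File#delete() returns false instead of throwing, unlike Files.deleteIfExists(...)
            if (!file.exists() || file.delete()) {
                return;
            }
            Thread.sleep(100L); // give the competing reader a chance to release its handle
        }
        throw new IOException("Unable to delete " + file + " after 10 attempts");
    }
}
{code}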





[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950175#comment-15950175
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r109074642
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
+import com.github.shyiko.mysql.binlog.event.Event;
+import com.github.shyiko.mysql.binlog.event.EventHeaderV4;
+import com.github.shyiko.mysql.binlog.event.EventType;
+import com.github.shyiko.mysql.binlog.event.QueryEventData;
+import com.github.shyiko.mysql.binlog.event.RotateEventData;
+import com.github.shyiko.mysql.binlog.event.TableMapEventData;
+import org.apache.nifi.annotation.behavior.DynamicProperties;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.CDCException;
+import org.apache.nifi.processors.standard.db.event.ColumnDefinition;
+import org.apache.nifi.processors.standard.db.event.RowEventException;
+import org.apache.nifi.processors.standard.db.event.TableInfo;
+import org.apache.nifi.processors.standard.db.event.TableInfoCacheKey;
+import org.apache.nifi.processors.standard.db.event.io.EventWriter;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.BeginTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.RawBinlogEvent;
+import org.apache.nifi.processors.standard.db.impl.mysql.BinlogEventListener;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.BinlogEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.CommitTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.DeleteRowsEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.SchemaChangeEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.UpdateRowsEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.InsertRowsEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.io.BeginTransactionEventWriter;
+import 
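
For context, the imports above come from the shyiko mysql-binlog-connector-java client that the processor builds on. A minimal sketch of the event-listener pattern that library exposes, independent of the processor code under review (host, credentials, and the handling shown are illustrative only):

{code:java}
import java.io.IOException;

import com.github.shyiko.mysql.binlog.BinaryLogClient;
import com.github.shyiko.mysql.binlog.event.Event;
import com.github.shyiko.mysql.binlog.event.EventType;

class BinlogTail {
    public static void main(String[] args) throws IOException {
        // Connect to a MySQL server's binlog stream and print row-change events as they arrive.
        BinaryLogClient client = new BinaryLogClient("localhost", 3306, "repl_user", "repl_password");
        client.registerEventListener((Event event) -> {
            EventType type = event.getHeader().getEventType();
            if (EventType.isWrite(type) || EventType.isUpdate(type) || EventType.isDelete(type)) {
                System.out.println("Row change event: " + event);
            }
        });
        client.connect(); // blocks while streaming events
    }
}
{code}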

[GitHub] nifi pull request #1618: NIFI-3413: Add CaptureChangeMySQL processor

2017-03-30 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r109074642
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---

[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950170#comment-15950170
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r109073989
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---

[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950169#comment-15950169
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r109074347
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---

[GitHub] nifi pull request #1618: NIFI-3413: Add CaptureChangeMySQL processor

2017-03-30 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r109073989
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---

[GitHub] nifi pull request #1618: NIFI-3413: Add CaptureChangeMySQL processor

2017-03-30 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r109074347
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---

[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950157#comment-15950157
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r109072990
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---

[GitHub] nifi pull request #1618: NIFI-3413: Add CaptureChangeMySQL processor

2017-03-30 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r109072990
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---

[jira] [Created] (NIFI-3665) Documentation doesn't cover handling of local state directory in upgrade scenarios

2017-03-30 Thread Aldrin Piri (JIRA)
Aldrin Piri created NIFI-3665:
-

 Summary: Documentation doesn't cover handling of local state 
directory in upgrade scenarios
 Key: NIFI-3665
 URL: https://issues.apache.org/jira/browse/NIFI-3665
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Reporter: Aldrin Piri
Priority: Minor


We had a user who was running a standalone instance of NiFi and was unsure of 
what to do with the state directory.

{quote}
I just wanted to confirm something, but I couldn't find anything online.

A lot of docs that talk about setting NiFi up from a System Admin's 
point-of-view mention changing config to move several folders out of the NiFi 
root dir. These typically include all 4 repositories, as well as files like the 
flow.xml.gz.

I see most guides leave the 'work' dir in place, so I assume when a new version 
comes out, nothing is lost if 'work' is deleted.

However, I'm not sure about the 'state' dir. Should this exist outside of the 
root dir, if the idea is to replace NiFi's root dir with a new version, or does 
it also get re-built like 'work'? Does it make a difference when running single 
node vs (master-less) cluster?

Thanks for any advice in advance,
--Wes
{quote}

We don't seem to capture the idea of externalizing this to make upgrades easier 
in our Administrator's guide 
(https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#state_providers)
 nor in the Upgrade guide in our Wiki 
(https://cwiki.apache.org/confluence/display/NIFI/Upgrading+NiFi)
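
One way to capture this in the docs would be an example showing the local state directory moved outside the install root, so an upgrade can swap NiFi's root dir without losing component state. A sketch of the relevant excerpt from conf/state-management.xml, with an illustrative external path:

{code:xml}
<stateManagement>
    <local-provider>
        <id>local-provider</id>
        <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
        <!-- Keep local state outside the NiFi install dir so it survives an in-place upgrade -->
        <property name="Directory">/var/lib/nifi/state/local</property>
    </local-provider>
</stateManagement>
{code}

The cluster (ZooKeeper) provider already stores its state outside the install, so for standalone instances it is the local provider's Directory that matters.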





[jira] [Commented] (NIFIREG-2) Design logo for Registry

2017-03-30 Thread Rob Moran (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950045#comment-15950045
 ] 

Rob Moran commented on NIFIREG-2:
-

Thanks all. [~andrewmlim] yes, I'll work on preparing a version for use on the 
site.

> Design logo for Registry
> 
>
> Key: NIFIREG-2
> URL: https://issues.apache.org/jira/browse/NIFIREG-2
> Project: NiFi Registry
>  Issue Type: Task
>Reporter: Rob Moran
>Assignee: Rob Moran
>Priority: Minor
> Attachments: registry-logo-concept_2017-03-27.png, 
> registry-logo-concept_2017-03-30.png
>
>
> The attached image contains the proposed logo design for Registry. The points 
> below describe some of the thinking behind it:
> * Relationship to NiFi and MiNiFi through the use of the same color palette, 
> typeface, and block elements representing bits of data
> * For Registry these blocks also represent the storage/organization aspect 
> through their even distribution and arrangement
> * The 3 gradated blocks across the top – forming the terminal part of a 
> lowercase *r* – represent movement (e.g., a versioned flow being saved to 
> NiFi or imported to NiFi from the registry)
> * Relating back to the original water/flow concept of NiFi, the curved line 
> integrated into the gradated blocks represent the continuous motion of 
> flowing water
> * The light gray block helps with the idea of storage as previously mentioned, 
> but also alludes to unused storage/free space
> * The gray block also helps establish the strong diagonal slicing through it 
> and the lowest green block. Again this helps with the idea of movement, but 
> more so speaks to how Registry operates in the background, tucked away, 
> largely unseen by NiFi operators as it facilitates deployment tasks





[jira] [Updated] (NIFI-3653) Create PolicyBasedAuthorizer interface to allow authorization chain

2017-03-30 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3653:

Description: 
Rather than using AbstractPolicyBasedAuthorizer to trigger policy management, 
refactor to use a new interface.  New implementations of this interface can 
then create an authorization chain with existing AbstractPolicyBasedAuthorizer 
sub-classes.


While investigating alternate implementations of the Authorizer interface, I 
see the AbstractPolicyBasedAuthorizer is meant to be extended.  Its 
authorize() method is final, however, and does not have an abstract 
doAuthorize() method that sub-classes can extend.

In particular, the existing AbstractPolicyBasedAuthorizer authorize() method 
does not take into account the AuthorizationRequest "resourceContext" in its 
authorization decision.  This is especially important when authorizing access 
to events in Provenance, which places attributes in the resourceContext of its 
AuthorizationRequest when obtaining an authorization decision.  I would like to 
use attributes to authorize access to the Provenance download & view content 
feature.

If I had my own sub-class of AbstractPolicyBasedAuthorizer, with the 
availability of a doAuthorize() method, then I could maintain my own user 
policies for allowing access to flowfile content via Provenance.

  was:
While investigating alternate implementations of the Authorizer interface, I 
see the AbstractPolicyBasedAuthorizer is meant to be extended.  Its 
authorize() method is final, however, and does not have an abstract 
doAuthorize() method that sub-classes can extend.

In particular, the existing AbstractPolicyBasedAuthorizer authorize() method 
does not take into account the AuthorizationRequest "resourceContext" in its 
authorization decision.  This is especially important when authorizing access 
to events in Provenance, which places attributes in the resourceContext of its 
AuthorizationRequest when obtaining an authorization decision.  I would like to 
use attributes to authorize access to the Provenance download & view content 
feature.

If I had my own sub-class of AbstractPolicyBasedAuthorizer, with the 
availability of a doAuthorize() method, then I could maintain my own user 
policies for allowing access to flowfile content via Provenance.


> Create PolicyBasedAuthorizer interface to allow authorization chain
> ---
>
> Key: NIFI-3653
> URL: https://issues.apache.org/jira/browse/NIFI-3653
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Michael Moser
>Assignee: Matt Gilman
>
> Rather than using AbstractPolicyBasedAuthorizer to trigger policy management, 
> refactor to use a new interface.  New implementations of this interface can 
> then create an authorization chain with existing 
> AbstractPolicyBasedAuthorizer sub-classes.
> 
> While investigating alternate implementations of the Authorizer interface, I 
> see the AbstractPolicyBasedAuthorizer is meant to be extended.  Its 
> authorize() method is final, however, and does not have an abstract 
> doAuthorize() method that sub-classes can extend.
> In particular, the existing AbstractPolicyBasedAuthorizer authorize() method 
> does not take into account the AuthorizationRequest "resourceContext" in its 
> authorization decision.  This is especially important when authorizing access 
> to events in Provenance, which places attributes in the resourceContext of its 
> AuthorizationRequest when obtaining an authorization decision.  I would like 
> to use attributes to authorize access to the Provenance download & view 
> content feature.
> If I had my own sub-class of AbstractPolicyBasedAuthorizer, with the 
> availability of a doAuthorize() method, then I could maintain my own user 
> policies for allowing access to flowfile content via Provenance.
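
To illustrate the kind of hook being requested, here is a rough sketch; the interface and method shapes below are hypothetical (the real NiFi Authorizer/AuthorizationRequest APIs differ), and the resource identifier and attribute names are only examples:

{code:java}
import java.util.Map;

// Hypothetical interface shape, not the actual NiFi API.
interface PolicyBasedAuthorizer {
    boolean doAuthorize(String userIdentity, String resourceId, Map<String, String> resourceContext);
}

class AttributeAwareAuthorizer implements PolicyBasedAuthorizer {
    @Override
    public boolean doAuthorize(String userIdentity, String resourceId, Map<String, String> resourceContext) {
        // Example policy: only allow provenance content download/view when the event's
        // attributes (surfaced through the resource context) mark it as unclassified.
        if (resourceId.startsWith("/provenance-data")) {
            return "unclassified".equals(resourceContext.getOrDefault("classification", ""));
        }
        return true; // defer to the regular policy checks for everything else
    }
}
{code}

A doAuthorize() extension point along these lines would let a sub-class fold the resourceContext attributes into the decision while the base class keeps its existing policy handling.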





[jira] [Updated] (NIFI-3630) FlowFileRepository / WriteAhead Log should use a BufferedOutputStream when checkpointing

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3630:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> FlowFileRepository / WriteAhead Log should use a BufferedOutputStream when 
> checkpointing
> 
>
> Key: NIFI-3630
> URL: https://issues.apache.org/jira/browse/NIFI-3630
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Joseph Witt
> Fix For: 1.2.0
>
>
> Currently, calls to MinimalLockingWriteAheadLog's checkpoint() method create 
> a FileOutputStream, then wrap that in a DataOutputStream and start writing to 
> the DataOutputStream, passing that along to the SerDe as well. The 
> FileOutputStream that it writes to should be wrapped in a 
> BufferedOutputStream, as each call to checkpoint() is very taxing on the file 
> system currently.
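
The change itself is small; a sketch of the stream wrapping before and after (the method and variable names are illustrative, not the actual write-ahead-log code):

{code:java}
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

class CheckpointStreams {
    static DataOutputStream openCheckpointStream(final File partialFile) throws IOException {
        // Before: new DataOutputStream(new FileOutputStream(partialFile))
        // After: the BufferedOutputStream batches the SerDe's many small writes into
        // larger file-system writes, which is where the checkpoint I/O savings come from.
        return new DataOutputStream(new BufferedOutputStream(new FileOutputStream(partialFile)));
    }
}
{code}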





[jira] [Commented] (NIFI-3630) FlowFileRepository / WriteAhead Log should use a BufferedOutputStream when checkpointing

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949794#comment-15949794
 ] 

ASF GitHub Bot commented on NIFI-3630:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1632


> FlowFileRepository / WriteAhead Log should use a BufferedOutputStream when 
> checkpointing
> 
>
> Key: NIFI-3630
> URL: https://issues.apache.org/jira/browse/NIFI-3630
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Joseph Witt
> Fix For: 1.2.0
>
>
> Currently, calls to MinimalLockingWriteAheadLog's checkpoint() method create 
> a FileOutputStream, then wrap that in a DataOutputStream and start writing to 
> the DataOutputStream, passing that along to the SerDe as well. The 
> FileOutputStream that it writes to should be wrapped in a 
> BufferedOutputStream, as each call to checkpoint() is very taxing on the file 
> system currently.





[jira] [Commented] (NIFI-3630) FlowFileRepository / WriteAhead Log should use a BufferedOutputStream when checkpointing

2017-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949787#comment-15949787
 ] 

ASF subversion and git services commented on NIFI-3630:
---

Commit 091359b450a7d0fb6bb04e2238c9171728cd2720 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=091359b ]

NIFI-3630 This closes #1632. Use a BufferedOutputStream when checkpointing 
FlowFile Repository

Signed-off-by: joewitt 


> FlowFileRepository / WriteAhead Log should use a BufferedOutputStream when 
> checkpointing
> 
>
> Key: NIFI-3630
> URL: https://issues.apache.org/jira/browse/NIFI-3630
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Joseph Witt
> Fix For: 1.2.0
>
>
> Currently, calls to MinimalLockingWriteAheadLog's checkpoint() method create 
> a FileOutputStream, then wrap that in a DataOutputStream and start writing to 
> the DataOutputStream, passing that along to the SerDe as well. The 
> FileOutputStream that it writes to should be wrapped in a 
> BufferedOutputStream, as each call to checkpoint() is very taxing on the file 
> system currently.





[GitHub] nifi pull request #1632: NIFI-3630: Use a BufferedOutputStream when checkpoi...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1632




[jira] [Commented] (NIFI-3630) FlowFileRepository / WriteAhead Log should use a BufferedOutputStream when checkpointing

2017-03-30 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949783#comment-15949783
 ] 

Joseph Witt commented on NIFI-3630:
---

+1 will merge.

Ran some pretty legit before-and-after testing on this.  For a fairly 
pathological/taxing flow I am seeing consistently 50% better I/O utilization 
while doing the same amount of work, slightly improved CPU idle, and 
about 20-30% faster checkpointing of the FlowFile WAL.  These are all consistent with 
what such a change would do.  My testing is on an SSD and on a flow not doing 
much else other than annoying the FlowFile WAL.  So on HDDs, etc., we could expect 
even better gains.



> FlowFileRepository / WriteAhead Log should use a BufferedOutputStream when 
> checkpointing
> 
>
> Key: NIFI-3630
> URL: https://issues.apache.org/jira/browse/NIFI-3630
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Joseph Witt
> Fix For: 1.2.0
>
>
> Currently, calls to MinimalLockingWriteAheadLog's checkpoint() method create 
> a FileOutputStream, then wrap that in a DataOutputStream and start writing to 
> the DataOutputStream, passing that along to the SerDe as well. The 
> FileOutputStream that it writes to should be wrapped in a 
> BufferedOutputStream, as each call to checkpoint() is very taxing on the file 
> system currently.





[jira] [Updated] (NIFI-3189) ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is engaged

2017-03-30 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-3189:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is 
> engaged
> ---
>
> Key: NIFI-3189
> URL: https://issues.apache.org/jira/browse/NIFI-3189
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Joseph Witt
>Assignee: Koji Kawamura
> Fix For: 1.2.0
>
>
> ConsumeKafka processors can alert to rebalance issues when backpressure is 
> engaged on the output connection and is then freed up.  This is because we're 
> not doing anything with those consumers for a period of time and the Kafka 
> client detects this and initiates a rebalance.  We should ensure that even 
> when we cannot send more data due to back pressure, we at least have some 
> sort of keep-alive behavior with the Kafka client.  Or, if that isn't an 
> option, we should at least document the situation.





[jira] [Updated] (NIFI-3653) Create PolicyBasedAuthorizer interface to allow authorization chain

2017-03-30 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3653:

Summary: Create PolicyBasedAuthorizer interface to allow authorization 
chain  (was: Allow extension of authorize method in 
AbstractPolicyBasedAuthorizer)

> Create PolicyBasedAuthorizer interface to allow authorization chain
> ---
>
> Key: NIFI-3653
> URL: https://issues.apache.org/jira/browse/NIFI-3653
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Michael Moser
>Assignee: Matt Gilman
>
> While investigating alternate implementations of the Authorizer interface, I 
> see the AbstractPolicyBasedAuthorizer is meant to be extended.  Its 
> authorize() method is final, however, and does not have an abstract 
> doAuthorize() method that sub-classes can extend.
> In particular, the existing AbstractPolicyBasedAuthorizer authorize() method 
> does not take into account the AuthorizationRequest "resourceContext" in its 
> authorization decision.  This is especially important when authorizing access 
> to events in Provenance, which places attributes in the resourceContext of its 
> AuthorizationRequest when obtaining an authorization decision.  I would like 
> to use attributes to authorize access to the Provenance download & view 
> content feature.
> If I had my own sub-class of AbstractPolicyBasedAuthorizer, with the 
> availability of a doAuthorize() method, then I could maintain my own user 
> policies for allowing access to flowfile content via Provenance.





[jira] [Commented] (NIFI-3189) ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is engaged

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949747#comment-15949747
 ] 

ASF GitHub Bot commented on NIFI-3189:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1527


> ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is 
> engaged
> ---
>
> Key: NIFI-3189
> URL: https://issues.apache.org/jira/browse/NIFI-3189
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Joseph Witt
>Assignee: Koji Kawamura
> Fix For: 1.2.0
>
>
> ConsumeKafka processors can alert to rebalance issues when backpressure is 
> engaged on the output connection and is then freed up.  This is because we're 
> not doing anything with those consumers for a period of time and the Kafka 
> client detects this and initiates a rebalance.  We should ensure that even 
> when we cannot send more data due to back pressure, we at least have some 
> sort of keep-alive behavior with the Kafka client.  Or, if that isn't an 
> option, we should at least document the situation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3189) ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is engaged

2017-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949745#comment-15949745
 ] 

ASF subversion and git services commented on NIFI-3189:
---

Commit fd92999dafc040940011c87bb2ee2c8edf5f96a2 in nifi's branch 
refs/heads/master from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=fd92999 ]

NIFI-3189: ConsumeKafka 0.9 and 0.10 with downstream backpressure

Currently, the NiFi Kafka consumer processors have the following issue.

While downstream connections are full, ConsumeKafka is not scheduled to run 
onTrigger, so it stops executing poll and no longer tells the Kafka server 
that the client is alive.
After a while in that situation, the Kafka server drops the client from the 
consumer group and triggers a rebalance.
When the downstream connections go back to normal, ConsumeKafka is scheduled 
again, but the client is no longer part of the consumer group.

If this happens, the Kafka client still polls messages successfully when the 
ConsumeKafka processor resumes, but it fails to commit offsets.
The received messages are already committed into the NiFi flow, but since the 
consumer offset is not updated, they will be consumed again and duplicated.

In order to address the above issue:

- For ConsumeKafka_0_10, use the latest client library.

The above issue has been addressed by KIP-62.
The latest Kafka consumer poll checks whether the client instance is still 
valid, and rejoins the group if not, before consuming messages.

- For ConsumeKafka (0.9), add manual retention logic using pause/resume.

The Kafka 0.9 client does not have a background heartbeat thread, so a similar 
mechanism is added manually.
The Kafka pause/resume consumer API is used to tell the Kafka server that the 
client has stopped consuming messages but is still alive.
An internal thread performs a paused poll periodically, based on the time 
elapsed since the last onTrigger (poll) was executed.

This closes #1527.

Signed-off-by: Bryan Bende 
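
A minimal, illustrative sketch of the pause/resume keep-alive idea described 
above for a 0.9-style client follows. All names (PausedPollKeepAlive, 
KEEP_ALIVE_NANOS, the 5-second check interval) are made up and this is not 
NiFi's actual implementation; KafkaConsumer is not thread-safe, so access is 
funneled through synchronized methods here, and the varargs pause/resume 
signatures shown are those of the 0.9 client.

    import java.util.Set;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.common.TopicPartition;

    // Hypothetical sketch, not NiFi code: keep a paused 0.9 consumer alive
    // while downstream backpressure prevents onTrigger from being scheduled.
    class PausedPollKeepAlive<K, V> {

        private static final long KEEP_ALIVE_NANOS = TimeUnit.SECONDS.toNanos(30);

        private final Consumer<K, V> consumer;
        private final ScheduledExecutorService keepAlive =
                Executors.newSingleThreadScheduledExecutor();
        private volatile long lastTriggerNanos = System.nanoTime();

        PausedPollKeepAlive(final Consumer<K, V> consumer) {
            this.consumer = consumer;
            // Periodically check whether the processor has been idle too long.
            keepAlive.scheduleAtFixedRate(this::keepAliveIfIdle, 5, 5, TimeUnit.SECONDS);
        }

        // Called from the processor's normal onTrigger path.
        synchronized void onTrigger() {
            lastTriggerNanos = System.nanoTime();
            // normal poll() and transfer of records into the flow happens here
        }

        private synchronized void keepAliveIfIdle() {
            if (System.nanoTime() - lastTriggerNanos < KEEP_ALIVE_NANOS) {
                return;
            }
            // Pause all assigned partitions, poll once so the broker sees the
            // client as alive without fetching records, then resume.
            final Set<TopicPartition> assignment = consumer.assignment();
            final TopicPartition[] partitions = assignment.toArray(new TopicPartition[0]);
            consumer.pause(partitions);   // 0.9 client: varargs signature
            consumer.poll(0);
            consumer.resume(partitions);
        }
    }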


> ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is 
> engaged
> ---
>
> Key: NIFI-3189
> URL: https://issues.apache.org/jira/browse/NIFI-3189
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Joseph Witt
>Assignee: Koji Kawamura
> Fix For: 1.2.0
>
>
> ConsumeKafka processors can run into rebalance issues when backpressure is 
> engaged on the output connection and is then freed up.  This is because we're 
> not doing anything with those consumers for a period of time and the Kafka 
> client detects this and initiates a rebalance.  We should ensure that even 
> when we cannot send more data due to backpressure we at least have some 
> sort of keep-alive behavior with the Kafka client.  Or, if that isn't an 
> option, we should at least document the situation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi pull request #1527: NIFI-3189: ConsumeKafka & Back-pressure. ConsumeKaf...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1527


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3189) ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is engaged

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949740#comment-15949740
 ] 

ASF GitHub Bot commented on NIFI-3189:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1527
  
+1 I recreated the scenario with 0.9 and 0.10 by setting back-pressure to 1 
on the queues after ConsumeKafka and ConsumeKafka_0_10, and verified that after 
this patch they have no problem resuming after sitting with back-pressure for 
long periods of time. Will merge to master.


> ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is 
> engaged
> ---
>
> Key: NIFI-3189
> URL: https://issues.apache.org/jira/browse/NIFI-3189
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Joseph Witt
>Assignee: Koji Kawamura
> Fix For: 1.2.0
>
>
> ConsumeKafka processors can run into rebalance issues when backpressure is 
> engaged on the output connection and is then freed up.  This is because we're 
> not doing anything with those consumers for a period of time and the Kafka 
> client detects this and initiates a rebalance.  We should ensure that even 
> when we cannot send more data due to backpressure we at least have some 
> sort of keep-alive behavior with the Kafka client.  Or, if that isn't an 
> option, we should at least document the situation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi issue #1527: NIFI-3189: ConsumeKafka & Back-pressure. ConsumeKafka_0_10

2017-03-30 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1527
  
+1 I recreated the scenario with 0.9 and 0.10 by setting back-pressure to 1 
on the queues after ConsumeKafka and ConsumeKafka_0_10, and verified that after 
this patch they have no problem resuming after sitting with back-pressure for 
long periods of time. Will merge to master.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFIREG-2) Design logo for Registry

2017-03-30 Thread Andrew Lim (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949728#comment-15949728
 ] 

Andrew Lim commented on NIFIREG-2:
--

I like the latest change.  Styling the curved line that way gave it more weight 
and added more "movement", which is the effect you get from the NiFi and MiNiFi 
logos too.

Quick note:  can you create a version of the logo that will work on a dark 
background?  Just looking ahead to what is needed when we add these to the 
Registry web page (https://nifi.apache.org/registry.html), specifically the top 
menu bar.


> Design logo for Registry
> 
>
> Key: NIFIREG-2
> URL: https://issues.apache.org/jira/browse/NIFIREG-2
> Project: NiFi Registry
>  Issue Type: Task
>Reporter: Rob Moran
>Assignee: Rob Moran
>Priority: Minor
> Attachments: registry-logo-concept_2017-03-27.png, 
> registry-logo-concept_2017-03-30.png
>
>
> The attached image contains the proposed logo design for Registry. The points 
> below describe some of the thinking behind it:
> * Relationship to NiFi and MiNiFi through the use of the same color palette, 
> typeface, and block elements representing bits of data
> * For Registry these blocks also represent the storage/organization aspect 
> through their even distribution and arrangement
> * The 3 gradated blocks across the top – forming the terminal part of a 
> lowercase *r* – represent movement (e.g., a versioned flow being saved to 
> NiFi or imported to NiFi from the registry)
> * Relating back to the original water/flow concept of NiFi, the curved line 
> integrated into the gradated blocks represents the continuous motion of 
> flowing water
> * The light gray block helps with the idea of storage as previously mentioned, 
> but also alludes to unused storage/free space
> * The gray block also helps establish the strong diagonal slicing through it 
> and the lowest green block. Again this helps with the idea of movement, but 
> more so speaks to how Registry operates in the background, tucked away, 
> largely unseen by NiFi operators as it facilitates deployment tasks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (NIFI-3653) Allow extension of authorize method in AbstractPolicyBasedAuthorizer

2017-03-30 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reassigned NIFI-3653:
-

Assignee: Matt Gilman

> Allow extension of authorize method in AbstractPolicyBasedAuthorizer
> 
>
> Key: NIFI-3653
> URL: https://issues.apache.org/jira/browse/NIFI-3653
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Michael Moser
>Assignee: Matt Gilman
>
> While investigating alternate implementations of the Authorizer interface, I 
> see that the AbstractPolicyBasedAuthorizer is meant to be extended.  Its 
> authorize() method is final, however, and there is no abstract 
> doAuthorize() method that sub-classes can implement.
> In particular, the existing AbstractPolicyBasedAuthorizer authorize() method 
> does not take into account the AuthorizationRequest "resourceContext" in its 
> authorization decision.  This is especially important when authorizing access 
> to events in Provenance, which places attributes in the resourceContext of its 
> AuthorizationRequest when obtaining an authorization decision.  I would like 
> to use attributes to authorize access to the Provenance download & view 
> content feature.
> If I had my own sub-class of AbstractPolicyBasedAuthorizer, with the 
> availability of a doAuthorize() method, then I could maintain my own user 
> policies for allowing access to flowfile content via Provenance.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3653) Allow extension of authorize method in AbstractPolicyBasedAuthorizer

2017-03-30 Thread Matt Gilman (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949716#comment-15949716
 ] 

Matt Gilman commented on NIFI-3653:
---

Sounds like a plan. Please update the title/description. I will assign to 
myself as I have some cycles to dig in a little more.

Thanks!

Sorry for the confusion on the mentions. 

> Allow extension of authorize method in AbstractPolicyBasedAuthorizer
> 
>
> Key: NIFI-3653
> URL: https://issues.apache.org/jira/browse/NIFI-3653
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Michael Moser
>
> While investigating alternate implementations of the Authorizer interface, I 
> see that the AbstractPolicyBasedAuthorizer is meant to be extended.  Its 
> authorize() method is final, however, and there is no abstract 
> doAuthorize() method that sub-classes can implement.
> In particular, the existing AbstractPolicyBasedAuthorizer authorize() method 
> does not take into account the AuthorizationRequest "resourceContext" in its 
> authorization decision.  This is especially important when authorizing access 
> to events in Provenance, which places attributes in the resourceContext of its 
> AuthorizationRequest when obtaining an authorization decision.  I would like 
> to use attributes to authorize access to the Provenance download & view 
> content feature.
> If I had my own sub-class of AbstractPolicyBasedAuthorizer, with the 
> availability of a doAuthorize() method, then I could maintain my own user 
> policies for allowing access to flowfile content via Provenance.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3653) Allow extension of authorize method in AbstractPolicyBasedAuthorizer

2017-03-30 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949706#comment-15949706
 ] 

Michael Moser commented on NIFI-3653:
-

OK [~mcgilman] this sounds like a plan.  Shall I change the title and 
description of this ticket?

Oh by the way, I am user [~mosermw] instead of [~boardm26].  I can't imagine 
the damage to space/time if the two of us were in the same room together, 
though.

> Allow extension of authorize method in AbstractPolicyBasedAuthorizer
> 
>
> Key: NIFI-3653
> URL: https://issues.apache.org/jira/browse/NIFI-3653
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Michael Moser
>
> While investigating alternate implementations of the Authorizer interface, I 
> see that the AbstractPolicyBasedAuthorizer is meant to be extended.  Its 
> authorize() method is final, however, and there is no abstract 
> doAuthorize() method that sub-classes can implement.
> In particular, the existing AbstractPolicyBasedAuthorizer authorize() method 
> does not take into account the AuthorizationRequest "resourceContext" in its 
> authorization decision.  This is especially important when authorizing access 
> to events in Provenance, which places attributes in the resourceContext of its 
> AuthorizationRequest when obtaining an authorization decision.  I would like 
> to use attributes to authorize access to the Provenance download & view 
> content feature.
> If I had my own sub-class of AbstractPolicyBasedAuthorizer, with the 
> availability of a doAuthorize() method, then I could maintain my own user 
> policies for allowing access to flowfile content via Provenance.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (NIFI-3664) UI - Timestamp Issue

2017-03-30 Thread Matt Gilman (JIRA)
Matt Gilman created NIFI-3664:
-

 Summary: UI - Timestamp Issue
 Key: NIFI-3664
 URL: https://issues.apache.org/jira/browse/NIFI-3664
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework, Core UI
Reporter: Matt Gilman
Assignee: Matt Gilman
Priority: Minor


The timestamps shown throughout the UI that do not contain a date may be 
incorrectly converted when the replicated request is interpreted by the cluster 
coordinator. Due to the lack of a date, the date portion of the parsed timestamp 
is 1/1/70. Reformatting the adjusted date may result in a time different from 
the one originally parsed if timezone rules have changed.
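
The following snippet is illustration only (not NiFi's request-replication 
code) of why a date-less timestamp is fragile: the parse lands on 1970-01-01, 
and applying a zone's present-day offset to that instant can shift the 
wall-clock time. Asia/Singapore is used purely as an example of a zone whose 
rules changed after 1970 (+07:30 then, +08:00 now).

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    public class TimeOnlyParseDemo {
        public static void main(String[] args) throws Exception {
            final TimeZone zone = TimeZone.getTimeZone("Asia/Singapore");

            final SimpleDateFormat timeOnly = new SimpleDateFormat("HH:mm:ss");
            timeOnly.setTimeZone(zone);
            final Date parsed = timeOnly.parse("14:30:00"); // instant on 1970-01-01

            // Round-tripping through the same formatter still prints 14:30:00 ...
            System.out.println(timeOnly.format(parsed));

            // ... but re-deriving the wall clock with the zone's *current* raw
            // offset (+08:00) shifts the result to 15:00:00, because the rules
            // in effect on 1970-01-01 (+07:30) were different.
            final SimpleDateFormat utc = new SimpleDateFormat("HH:mm:ss");
            utc.setTimeZone(TimeZone.getTimeZone("UTC"));
            System.out.println(utc.format(new Date(parsed.getTime() + zone.getRawOffset())));
        }
    }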



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3648) Address Excessive Garbage Collection

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949677#comment-15949677
 ] 

ASF GitHub Bot commented on NIFI-3648:
--

Github user mosermw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1637#discussion_r109018746
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster-protocol/src/main/java/org/apache/nifi/cluster/protocol/impl/SocketProtocolListener.java
 ---
@@ -134,15 +132,21 @@ public void dispatchRequest(final Socket socket) {
 
 // unmarshall message
 final ProtocolMessageUnmarshaller 
unmarshaller = protocolContext.createUnmarshaller();
-final InputStream inStream = socket.getInputStream();
-final CopyingInputStream copyingInputStream = new 
CopyingInputStream(inStream, maxMsgBuffer); // don't copy more than 1 MB
+final ByteCountingInputStream countingIn = new 
ByteCountingInputStream(socket.getInputStream());
+InputStream wrappedInStream = countingIn;
+if (logger.isDebugEnabled()) {
+final int maxMsgBuffer = 1024 * 1024;   // don't buffer 
more than 1 MB of the message
+final CopyingInputStream copyingInputStream = new 
CopyingInputStream(wrappedInStream, maxMsgBuffer);
+wrappedInStream = copyingInputStream;
+}
 
 final ProtocolMessage request;
 try {
-request = unmarshaller.unmarshal(copyingInputStream);
+request = unmarshaller.unmarshal(wrappedInStream);
 } finally {
-receivedMessage = copyingInputStream.getBytesRead();
 if (logger.isDebugEnabled()) {
--- End diff --

Excellent!  I thought of using instanceof CopyingInputStream but didn't 
think it would help.  The potential race condition makes it useful and 
necessary.  I pushed a fix, will squash if needed.


> Address Excessive Garbage Collection
> 
>
> Key: NIFI-3648
> URL: https://issues.apache.org/jira/browse/NIFI-3648
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> We have a lot of places in the codebase where we generate lots of unnecessary 
> garbage - especially byte arrays. We need to clean this up in order to 
> relieve stress on the garbage collector.
> Specific points that I've found create unnecessary garbage:
> Provenance CompressableRecordWriter creates a new BufferedOutputStream for 
> each 'compression block' that it creates. Each one has a 64 KB byte[]. This 
> is very wasteful. We should instead subclass BufferedOutputStream so that we 
> are able to provide a byte[] to use instead of an int that indicates the 
> size. This way, we can just keep re-using the same byte[] that we create for 
> each writer. This saves about 32,000 of these 64 KB byte[] for each writer 
> that we create. And we create more than 1 of these per minute.
> EvaluateJsonPath uses a BufferedInputStream but it is not necessary, because 
> the underlying library will also buffer data. So we are unnecessarily 
> creating a lot of byte[]'s
> CompressContent uses Buffered Input AND Output. And uses 64 KB byte[]. And 
> doesn't need them at all, because it reads and writes with its own byte[] 
> buffer via StreamUtils.copy
> Site-to-site uses CompressionInputStream. This stream creates a new byte[] in 
> the readChunkHeader() method continually. We should instead only create a new 
> byte[] if we need a bigger buffer and otherwise just use an offset & length 
> variable.
> Right now, SplitText uses TextLineDemarcator. The fill() method increases the 
> size of the internal byte[] by 8 KB each time. When dealing with a large 
> chunk of data, this is VERY expensive on GC because we continually create a 
> byte[] and then discard it to create a new one. Take for example an 800 KB 
> chunk. We would do this 100,000 times. If we instead double the size we would 
> only have to create 8 of these.
> Other Processors that use Buffered streams unnecessarily:
> ConvertJSONToSQL
> ExecuteProcess
> ExecuteStreamCommand
> AttributesToJSON
> EvaluateJsonPath (when writing to content)
> ExtractGrok
> JmsConsumer
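
As a rough sketch of the doubling-growth suggestion above (names are 
illustrative; this is not the actual TextLineDemarcator code), geometric growth 
keeps the allocation count logarithmic in the chunk size:

    final class GrowableBuffer {
        private byte[] data = new byte[8 * 1024];
        private int length = 0;

        void append(final byte[] chunk, final int offset, final int len) {
            ensureCapacity(length + len);
            System.arraycopy(chunk, offset, data, length, len);
            length += len;
        }

        private void ensureCapacity(final int required) {
            if (required <= data.length) {
                return;
            }
            int newSize = data.length;
            while (newSize < required) {
                newSize *= 2;   // double instead of adding a fixed 8 KB
            }
            data = java.util.Arrays.copyOf(data, newSize);
        }
    }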



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi pull request #1637: NIFI-3648 removed cluster message copying when not ...

2017-03-30 Thread mosermw
Github user mosermw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1637#discussion_r109018746
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster-protocol/src/main/java/org/apache/nifi/cluster/protocol/impl/SocketProtocolListener.java
 ---
@@ -134,15 +132,21 @@ public void dispatchRequest(final Socket socket) {
 
 // unmarshall message
 final ProtocolMessageUnmarshaller 
unmarshaller = protocolContext.createUnmarshaller();
-final InputStream inStream = socket.getInputStream();
-final CopyingInputStream copyingInputStream = new 
CopyingInputStream(inStream, maxMsgBuffer); // don't copy more than 1 MB
+final ByteCountingInputStream countingIn = new 
ByteCountingInputStream(socket.getInputStream());
+InputStream wrappedInStream = countingIn;
+if (logger.isDebugEnabled()) {
+final int maxMsgBuffer = 1024 * 1024;   // don't buffer 
more than 1 MB of the message
+final CopyingInputStream copyingInputStream = new 
CopyingInputStream(wrappedInStream, maxMsgBuffer);
+wrappedInStream = copyingInputStream;
+}
 
 final ProtocolMessage request;
 try {
-request = unmarshaller.unmarshal(copyingInputStream);
+request = unmarshaller.unmarshal(wrappedInStream);
 } finally {
-receivedMessage = copyingInputStream.getBytesRead();
 if (logger.isDebugEnabled()) {
--- End diff --

Excellent!  I thought of using instanceof CopyingInputStream but didn't 
think it would help.  The potential race condition makes it useful and 
necessary.  I pushed a fix, will squash if needed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (NIFI-3528) Include dynamic JAAS configuration for Kafka processors 0.10+

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt reassigned NIFI-3528:
-

Assignee: Bryan Bende  (was: Joseph Witt)

> Include dynamic JAAS configuration for Kafka processors 0.10+
> -
>
> Key: NIFI-3528
> URL: https://issues.apache.org/jira/browse/NIFI-3528
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Bryan Bende
> Fix For: 1.2.0
>
>
> Kafka 0.10.2.0 was released a few days ago and introduced KAFKA-4259.
> It should now be possible to dynamically specify the client JAAS configuration 
> when using the Kafka client library. Consequently, in a multi-tenant context, 
> it will no longer be necessary to write as a single user (defined in the JAAS 
> configuration file loaded by the JVM) across all running Kafka processors.
> More details here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-85%3A+Dynamic+JAAS+configuration+for+Kafka+clients
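
A minimal sketch of what KIP-85/KAFKA-4259 enables on the client side follows; 
the broker address, mechanism, and credentials are placeholders, and this is 
not NiFi processor code.

    import java.util.Properties;

    public class DynamicJaasExample {
        public static Properties consumerProps() {
            final Properties props = new Properties();
            props.put("bootstrap.servers", "broker.example.com:9093");
            props.put("security.protocol", "SASL_SSL");
            props.put("sasl.mechanism", "PLAIN");
            // Per-client JAAS configuration instead of a JVM-wide JAAS file
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"tenant-a\" password=\"tenant-a-secret\";");
            return props;
        }
    }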



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3528) Include dynamic JAAS configuration for Kafka processors 0.10+

2017-03-30 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949674#comment-15949674
 ] 

Joseph Witt commented on NIFI-3528:
---

[~bbende] noted on another jira that he can review this with the other kafka 
one.

> Include dynamic JAAS configuration for Kafka processors 0.10+
> -
>
> Key: NIFI-3528
> URL: https://issues.apache.org/jira/browse/NIFI-3528
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Bryan Bende
> Fix For: 1.2.0
>
>
> Kafka 0.10.2.0 was released a few days ago and introduced KAFKA-4259.
> It should now be possible to dynamically specify the client JAAS configuration 
> when using the Kafka client library. Consequently, in a multi-tenant context, 
> it will no longer be necessary to write as a single user (defined in the JAAS 
> configuration file loaded by the JVM) across all running Kafka processors.
> More details here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-85%3A+Dynamic+JAAS+configuration+for+Kafka+clients



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3189) ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is engaged

2017-03-30 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949670#comment-15949670
 ] 

Joseph Witt commented on NIFI-3189:
---

rgr that [~bende] sounds good

> ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is 
> engaged
> ---
>
> Key: NIFI-3189
> URL: https://issues.apache.org/jira/browse/NIFI-3189
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Joseph Witt
>Assignee: Koji Kawamura
> Fix For: 1.2.0
>
>
> ConsumeKafka processors can run into rebalance issues when backpressure is 
> engaged on the output connection and is then freed up.  This is because we're 
> not doing anything with those consumers for a period of time and the Kafka 
> client detects this and initiates a rebalance.  We should ensure that even 
> when we cannot send more data due to backpressure we at least have some 
> sort of keep-alive behavior with the Kafka client.  Or, if that isn't an 
> option, we should at least document the situation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3421) Start up failure - java.nio.channels.OverlappingFileLockException: null

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3421:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Start up failure - java.nio.channels.OverlappingFileLockException: null
> ---
>
> Key: NIFI-3421
> URL: https://issues.apache.org/jira/browse/NIFI-3421
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Joseph Witt
>Priority: Critical
> Fix For: 1.2.0
>
>
> If NiFi fails to start up successfully, the act of attempting to shut down 
> appears to result in a duplicate FlowController instantiation. A symptom of 
> this is initializing the flow file repo twice which fails with an overlapping 
> file lock exception.
> The actual reason that start up failed is logged further back in the app log.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3421) Start up failure - java.nio.channels.OverlappingFileLockException: null

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949659#comment-15949659
 ] 

ASF GitHub Bot commented on NIFI-3421:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1620


> Start up failure - java.nio.channels.OverlappingFileLockException: null
> ---
>
> Key: NIFI-3421
> URL: https://issues.apache.org/jira/browse/NIFI-3421
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Joseph Witt
>Priority: Critical
> Fix For: 1.2.0
>
>
> If NiFi fails to start up successfully, the act of attempting to shut down 
> appears to result in a duplicate FlowController instantiation. A symptom of 
> this is initializing the flow file repo twice which fails with an overlapping 
> file lock exception.
> The actual reason that start up failed is logged further back in the app log.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3421) Start up failure - java.nio.channels.OverlappingFileLockException: null

2017-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949658#comment-15949658
 ] 

ASF subversion and git services commented on NIFI-3421:
---

Commit 282e1a7b1a7123abe21de8ac022b7f5636da79da in nifi's branch 
refs/heads/master from [~mcgilman]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=282e1a7 ]

NIFI-3421: This closes #1620.
- On contextDestroyed, referencing beans created during contextInitialized to 
prevent successive attempts to create a bean if that bean failed to be created 
initially.

Signed-off-by: joewitt 


> Start up failure - java.nio.channels.OverlappingFileLockException: null
> ---
>
> Key: NIFI-3421
> URL: https://issues.apache.org/jira/browse/NIFI-3421
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Joseph Witt
>Priority: Critical
> Fix For: 1.2.0
>
>
> If NiFi fails to start up successfully, the act of attempting to shut down 
> appears to result in a duplicate FlowController instantiation. A symptom of 
> this is initializing the flow file repo twice which fails with an overlapping 
> file lock exception.
> The actual reason that start up failed is logged further back in the app log.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi pull request #1620: NIFI-3421: Preventing successive attempts to create...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1620


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3189) ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is engaged

2017-03-30 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949652#comment-15949652
 ] 

Bryan Bende commented on NIFI-3189:
---

I can review this... [~joewitt] I believe this PR and NIFI-3528 both bump the 
0.10 kafka processors to the 0.10.2.0 client which looks like the latest.

> ConsumeKafka 0.9 and 0.10 can cause consumer rebalance when backpressure is 
> engaged
> ---
>
> Key: NIFI-3189
> URL: https://issues.apache.org/jira/browse/NIFI-3189
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Joseph Witt
>Assignee: Koji Kawamura
> Fix For: 1.2.0
>
>
> ConsumeKafka processors can run into rebalance issues when backpressure is 
> engaged on the output connection and is then freed up.  This is because we're 
> not doing anything with those consumers for a period of time and the Kafka 
> client detects this and initiates a rebalance.  We should ensure that even 
> when we cannot send more data due to backpressure we at least have some 
> sort of keep-alive behavior with the Kafka client.  Or, if that isn't an 
> option, we should at least document the situation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3421) Start up failure - java.nio.channels.OverlappingFileLockException: null

2017-03-30 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949638#comment-15949638
 ] 

Joseph Witt commented on NIFI-3421:
---

[~bbende] actually I am about 5 mins from pushing.  Waiting on a final full 
clean build/contrib check.  Code is good to go.

> Start up failure - java.nio.channels.OverlappingFileLockException: null
> ---
>
> Key: NIFI-3421
> URL: https://issues.apache.org/jira/browse/NIFI-3421
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Joseph Witt
>Priority: Critical
> Fix For: 1.2.0
>
>
> If NiFi fails to start up successfully, the act of attempting to shut down 
> appears to result in a duplicate FlowController instantiation. A symptom of 
> this is initializing the flow file repo twice which fails with an overlapping 
> file lock exception.
> The actual reason that start up failed is logged further back in the app log.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3421) Start up failure - java.nio.channels.OverlappingFileLockException: null

2017-03-30 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949636#comment-15949636
 ] 

Bryan Bende commented on NIFI-3421:
---

[~joewitt] Are you still in the middle of reviewing this? If not, I can take 
this one.

> Start up failure - java.nio.channels.OverlappingFileLockException: null
> ---
>
> Key: NIFI-3421
> URL: https://issues.apache.org/jira/browse/NIFI-3421
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Joseph Witt
>Priority: Critical
> Fix For: 1.2.0
>
>
> If NiFi fails to start up successfully, the act of attempting to shut down 
> appears to result in a duplicate FlowController instantiation. A symptom of 
> this is initializing the flow file repo twice which fails with an overlapping 
> file lock exception.
> The actual reason that start up failed is logged further back in the app log.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-2907) Schema for flow.xml outdated

2017-03-30 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-2907:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Marking as resolved since the PR appears to have been merged and closed.

> Schema for flow.xml outdated
> 
>
> Key: NIFI-2907
> URL: https://issues.apache.org/jira/browse/NIFI-2907
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: James Wing
>Priority: Minor
> Fix For: 1.2.0
>
>
> The schema that the framework uses to validate the flow.xml file is outdated. 
> It does not account for the 'template' element in the flow. As a result, 
> it logs the following error on startup, within the bootstrap.log file:
> 2016-10-17 09:45:12,784 ERROR [NiFi logging handler] org.apache.nifi.StdErr 
> [Error] :10:38: cvc-complex-type.2.4.a: Invalid content was found starting 
> with element 'template'. One of '{processor, inputPort, outputPort, label, 
> funnel, processGroup, remoteProcessGroup, connection, controllerService}' is 
> expected.
> We should add the 'template' element to the schema and ensure that all other 
> elements are accounted for.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (NIFI-3272) UI - GC stats in Cluster view

2017-03-30 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt resolved NIFI-3272.
---
Resolution: Fixed

+1 merged to master.  Looks great

> UI - GC stats in Cluster view
> -
>
> Key: NIFI-3272
> URL: https://issues.apache.org/jira/browse/NIFI-3272
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Joseph Witt
> Fix For: 1.2.0
>
>
> The GC stats in the Cluster view assume the garbage collector is set to G1 as 
> the column headers are hard coded. This needs to be updated to utilize the 
> name of the garbage collector in the response. Since a JVM can return 1 or 
> more garbage collection beans, it is not possible to guarantee the same 
> column model for each row in the table (node in the cluster). We likely need 
> to break this out into a single GC column which shows all the garbage 
> collections probably in a dialog (since there can be many).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3272) UI - GC stats in Cluster view

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949603#comment-15949603
 ] 

ASF GitHub Bot commented on NIFI-3272:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1616


> UI - GC stats in Cluster view
> -
>
> Key: NIFI-3272
> URL: https://issues.apache.org/jira/browse/NIFI-3272
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Joseph Witt
> Fix For: 1.2.0
>
>
> The GC stats in the Cluster view assume the garbage collector is set to G1 as 
> the column headers are hard coded. This needs to be updated to utilize the 
> name of the garbage collector in the response. Since a JVM can return 1 or 
> more garbage collection beans, it is not possible to guarantee the same 
> column model for each row in the table (node in the cluster). We likely need 
> to break this out into a single GC column which shows all the garbage 
> collections probably in a dialog (since there can be many).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3272) UI - GC stats in Cluster view

2017-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949601#comment-15949601
 ] 

ASF subversion and git services commented on NIFI-3272:
---

Commit e20353516339b78d2a1eae2828c620c4d4876805 in nifi's branch 
refs/heads/master from [~mcgilman]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=e203535 ]

NIFI-3272 This closes #1616.
- Updating the garbage collection column to use an icon with a tooltip to relay 
all garbage collections.


> UI - GC stats in Cluster view
> -
>
> Key: NIFI-3272
> URL: https://issues.apache.org/jira/browse/NIFI-3272
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Joseph Witt
> Fix For: 1.2.0
>
>
> The GC stats in the Cluster view assume the garbage collector is set to G1 as 
> the column headers are hard coded. This needs to be updated to utilize the 
> name of the garbage collector in the response. Since a JVM can return 1 or 
> more garbage collection beans, it is not possible to guarantee the same 
> column model for each row in the table (node in the cluster). We likely need 
> to break this out into a single GC column which shows all the garbage 
> collections probably in a dialog (since there can be many).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi pull request #1616: NIFI-3272: GC stats in Cluster view

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1616


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3648) Address Excessive Garbage Collection

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949503#comment-15949503
 ] 

ASF GitHub Bot commented on NIFI-3648:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1637#discussion_r108993347
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster-protocol/src/main/java/org/apache/nifi/cluster/protocol/impl/SocketProtocolListener.java
 ---
@@ -134,15 +132,21 @@ public void dispatchRequest(final Socket socket) {
 
 // unmarshall message
 final ProtocolMessageUnmarshaller 
unmarshaller = protocolContext.createUnmarshaller();
-final InputStream inStream = socket.getInputStream();
-final CopyingInputStream copyingInputStream = new 
CopyingInputStream(inStream, maxMsgBuffer); // don't copy more than 1 MB
+final ByteCountingInputStream countingIn = new 
ByteCountingInputStream(socket.getInputStream());
+InputStream wrappedInStream = countingIn;
+if (logger.isDebugEnabled()) {
+final int maxMsgBuffer = 1024 * 1024;   // don't buffer 
more than 1 MB of the message
+final CopyingInputStream copyingInputStream = new 
CopyingInputStream(wrappedInStream, maxMsgBuffer);
+wrappedInStream = copyingInputStream;
+}
 
 final ProtocolMessage request;
 try {
-request = unmarshaller.unmarshal(copyingInputStream);
+request = unmarshaller.unmarshal(wrappedInStream);
 } finally {
-receivedMessage = copyingInputStream.getBytesRead();
 if (logger.isDebugEnabled()) {
--- End diff --

We should probably update this to be:
`if (logger.isDebugEnabled() && wrappedInStream instanceof 
CopyingInputStream) {`

As is, if a user changes logback.xml to set the logger to debug while this 
code is executing, the first call to logger.isDebugEnabled() could return 
false, in which case wrappedInStream would be a ByteCountingInputStream. 
Then, at this point, the second logger.isDebugEnabled() check would return 
true, and we would try to cast the ByteCountingInputStream to a 
CopyingInputStream, which would throw a ClassCastException.


> Address Excessive Garbage Collection
> 
>
> Key: NIFI-3648
> URL: https://issues.apache.org/jira/browse/NIFI-3648
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> We have a lot of places in the codebase where we generate lots of unnecessary 
> garbage - especially byte arrays. We need to clean this up in order to 
> relieve stress on the garbage collector.
> Specific points that I've found create unnecessary garbage:
> Provenance CompressableRecordWriter creates a new BufferedOutputStream for 
> each 'compression block' that it creates. Each one has a 64 KB byte[]. This 
> is very wasteful. We should instead subclass BufferedOutputStream so that we 
> are able to provide a byte[] to use instead of an int that indicates the 
> size. This way, we can just keep re-using the same byte[] that we create for 
> each writer. This saves about 32,000 of these 64 KB byte[] for each writer 
> that we create. And we create more than 1 of these per minute.
> EvaluateJsonPath uses a BufferedInputStream but it is not necessary, because 
> the underlying library will also buffer data. So we are unnecessarily 
> creating a lot of byte[]'s
> CompressContent uses Buffered Input AND Output. And uses 64 KB byte[]. And 
> doesn't need them at all, because it reads and writes with its own byte[] 
> buffer via StreamUtils.copy
> Site-to-site uses CompressionInputStream. This stream creates a new byte[] in 
> the readChunkHeader() method continually. We should instead only create a new 
> byte[] if we need a bigger buffer and otherwise just use an offset & length 
> variable.
> Right now, SplitText uses TextLineDemarcator. The fill() method increases the 
> size of the internal byte[] by 8 KB each time. When dealing with a large 
> chunk of data, this is VERY expensive on GC because we continually create a 
> byte[] and then discard it to create a new one. Take for example an 800 KB 
> chunk. We would do this 100,000 times. If we instead double the size we would 
> only have to create 8 of these.
> Other Processors that use Buffered streams unnecessarily:
> ConvertJSONToSQL
> ExecuteProcess
> ExecuteStreamCommand
> AttributesToJSON
> EvaluateJsonPath (when writing to content)
> ExtractGrok
> JmsConsumer



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi pull request #1637: NIFI-3648 removed cluster message copying when not ...

2017-03-30 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1637#discussion_r108993347
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster-protocol/src/main/java/org/apache/nifi/cluster/protocol/impl/SocketProtocolListener.java
 ---
@@ -134,15 +132,21 @@ public void dispatchRequest(final Socket socket) {
 
 // unmarshall message
 final ProtocolMessageUnmarshaller 
unmarshaller = protocolContext.createUnmarshaller();
-final InputStream inStream = socket.getInputStream();
-final CopyingInputStream copyingInputStream = new 
CopyingInputStream(inStream, maxMsgBuffer); // don't copy more than 1 MB
+final ByteCountingInputStream countingIn = new 
ByteCountingInputStream(socket.getInputStream());
+InputStream wrappedInStream = countingIn;
+if (logger.isDebugEnabled()) {
+final int maxMsgBuffer = 1024 * 1024;   // don't buffer 
more than 1 MB of the message
+final CopyingInputStream copyingInputStream = new 
CopyingInputStream(wrappedInStream, maxMsgBuffer);
+wrappedInStream = copyingInputStream;
+}
 
 final ProtocolMessage request;
 try {
-request = unmarshaller.unmarshal(copyingInputStream);
+request = unmarshaller.unmarshal(wrappedInStream);
 } finally {
-receivedMessage = copyingInputStream.getBytesRead();
 if (logger.isDebugEnabled()) {
--- End diff --

We should probably update this to be:
`if (logger.isDebugEnabled() && wrappedInStream instanceof 
CopyingInputStream) {`

As is, if a user changes logback.xml to set the logger to debug while this 
code is executing, the first call to logger.isDebugEnabled() could return 
false, in which case wrappedInStream would be a ByteCountingInputStream. 
Then, at this point, the second logger.isDebugEnabled() check would return 
true, and we would try to cast the ByteCountingInputStream to a 
CopyingInputStream, which would throw a ClassCastException.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3648) Address Excessive Garbage Collection

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949499#comment-15949499
 ] 

ASF GitHub Bot commented on NIFI-3648:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1637
  
@mosermw good catch. This should definitely be cleaned up as well!


> Address Excessive Garbage Collection
> 
>
> Key: NIFI-3648
> URL: https://issues.apache.org/jira/browse/NIFI-3648
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> We have a lot of places in the codebase where we generate lots of unnecessary 
> garbage - especially byte arrays. We need to clean this up in order to 
> relieve stress on the garbage collector.
> Specific points that I've found create unnecessary garbage:
> Provenance CompressableRecordWriter creates a new BufferedOutputStream for 
> each 'compression block' that it creates. Each one has a 64 KB byte[]. This 
> is very wasteful. We should instead subclass BufferedOutputStream so that we 
> are able to provide a byte[] to use instead of an int that indicates the 
> size. This way, we can just keep re-using the same byte[] that we create for 
> each writer. This saves about 32,000 of these 64 KB byte[] for each writer 
> that we create. And we create more than 1 of these per minute.
> EvaluateJsonPath uses a BufferedInputStream but it is not necessary, because 
> the underlying library will also buffer data. So we are unnecessarily 
> creating a lot of byte[]'s
> CompressContent uses Buffered Input AND Output. And uses 64 KB byte[]. And 
> doesn't need them at all, because it reads and writes with its own byte[] 
> buffer via StreamUtils.copy
> Site-to-site uses CompressionInputStream. This stream creates a new byte[] in 
> the readChunkHeader() method continually. We should instead only create a new 
> byte[] if we need a bigger buffer and otherwise just use an offset & length 
> variable.
> Right now, SplitText uses TextLineDemarcator. The fill() method increases the 
> size of the internal byte[] by 8 KB each time. When dealing with a large 
> chunk of data, this is VERY expensive on GC because we continually create a 
> byte[] and then discard it to create a new one. Take for example an 800 KB 
> chunk. We would do this 100,000 times. If we instead double the size we would 
> only have to create 8 of these.
> Other Processors that use Buffered streams unnecessarily:
> ConvertJSONToSQL
> ExecuteProcess
> ExecuteStreamCommand
> AttributesToJSON
> EvaluateJsonPath (when writing to content)
> ExtractGrok
> JmsConsumer



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi issue #1637: NIFI-3648 removed cluster message copying when not in debu...

2017-03-30 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1637
  
@mosermw good catch. This should definitely be cleaned up as well!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #72: MINIFI-246: Adding tests that were used to...

2017-03-30 Thread phrocker
GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/72

MINIFI-246: Adding tests that were used to find bug. Solved by MINIFI-193

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFI-246

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/72.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #72


commit 1a915801ef075e6431a2068e5eaa739ed698aa58
Author: Marc Parisi 
Date:   2017-03-30T16:16:03Z

MINIFI-246: Adding tests that were used to find bug. Solved by MINIFI-193




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #71: MINIFI-236: Make GetFile, PutFile, TailFil...

2017-03-30 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/71#discussion_r108949062
  
--- Diff: libminifi/src/core/FlowConfiguration.cpp ---
@@ -40,6 +40,10 @@ std::shared_ptr 
FlowConfiguration::createProcessor(
 processor = std::make_shared<
 org::apache::nifi::minifi::processors::LogAttribute>(name, uuid);
   } else if (name
+  == org::apache::nifi::minifi::processors::ListenHTTP::ProcessorName) 
{
--- End diff --

minor: can we revert this shuffling?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #71: MINIFI-236: Make GetFile, PutFile, TailFil...

2017-03-30 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/71#discussion_r108910943
  
--- Diff: libminifi/include/core/FlowConfiguration.h ---
@@ -26,9 +26,9 @@
 #include "processors/PutFile.h"
 #include "processors/TailFile.h"
 #include "processors/ListenSyslog.h"
+#include "processors/ListenHTTP.h"
--- End diff --

minor:  can we revert this shuffling?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #71: MINIFI-236: Make GetFile, PutFile, TailFil...

2017-03-30 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/71#discussion_r108911015
  
--- Diff: libminifi/include/core/ConfigurableComponent.h ---
@@ -32,7 +32,7 @@ namespace core {
  * Represents a configurable component
  * Purpose: Extracts configuration items for all components and localized 
them
  */
-class ConfigurableComponent {
+class ConfigurableComponent  {
--- End diff --

minor:  extraneous whitespace


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi-minifi-cpp pull request #71: MINIFI-236: Make GetFile, PutFile, TailFil...

2017-03-30 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/71#discussion_r108971663
  
--- Diff: libminifi/include/processors/GetFile.h ---
@@ -101,23 +113,17 @@ class GetFile : public core::Processor {
   // Put full path file name into directory listing
   void putListing(std::string fileName);
   // Poll directory listing for files
-  void pollListing(std::queue , int maxSize);
+  void pollListing(std::queue ,const GetFileRequest 
);
   // Check whether file can be added to the directory listing
-  bool acceptFile(std::string fullName, std::string name);
+  bool acceptFile(std::string fullName, std::string name, const 
GetFileRequest );
+  // Get file request object.
+  GetFileRequest request_;
   // Mutex for protection of the directory listing
+
   std::mutex mutex_;
-  std::string _directory;
-  bool _recursive;
-  bool _keepSourceFile;
-  int64_t _minAge;
-  int64_t _maxAge;
-  int64_t _minSize;
-  int64_t _maxSize;
-  bool _ignoreHiddenFile;
-  int64_t _pollInterval;
-  int64_t _batchSize;
-  uint64_t _lastDirectoryListingTime;
-  std::string _fileFilter;
+
+  std::map last_listing_times_;
--- End diff --

Not sure I follow the usage of the map.  Presumably we only have one 
directory per instance as there is no input.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3147) Build processor to parse CCDA into attributes

2017-03-30 Thread Kedar Chitale (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949244#comment-15949244
 ] 

Kedar Chitale commented on NIFI-3147:
-

Thank you very much [~jfrazee] for technical reviews and [~joewitt] for 
direction and support. This contribution was just an idea a few months back, and 
now I can vouch that contributing to NiFi has been a great experience. I hope 
this motivates folks to contribute more!

> Build processor to parse CCDA into attributes
> -
>
> Key: NIFI-3147
> URL: https://issues.apache.org/jira/browse/NIFI-3147
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Kedar Chitale
>Assignee: Joey Frazee
>  Labels: attributes, ccda, healthcare, parser
> Fix For: 1.2.0
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Accept a CCDA document and parse it to create individual attributes, 
> for example code.codeSystemName=LOINC



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3648) Address Excessive Garbage Collection

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949243#comment-15949243
 ] 

ASF GitHub Bot commented on NIFI-3648:
--

GitHub user mosermw opened a pull request:

https://github.com/apache/nifi/pull/1637

NIFI-3648 removed cluster message copying when not in debug mode

I expect NIFI-3648 could have several PRs, and that they will not be 
included in the 1.2.0 release, so review of this can be delayed until the rest of 
that ticket is PRed and reviewed. I just wanted to get this in so I don't 
forget.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mosermw/nifi nifi-3648-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1637.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1637


commit 55d93ccbec8fd7eaede0759e7abb550c75d36a63
Author: Mike Moser 
Date:   2017-03-30T14:34:26Z

NIFI-3648 removed message copying when not in debug mode




> Address Excessive Garbage Collection
> 
>
> Key: NIFI-3648
> URL: https://issues.apache.org/jira/browse/NIFI-3648
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> We have a lot of places in the codebase where we generate lots of unnecessary 
> garbage - especially byte arrays. We need to clean this up in order to 
> relieve stress on the garbage collector.
> Specific points that I've found create unnecessary garbage:
> Provenance CompressableRecordWriter creates a new BufferedOutputStream for 
> each 'compression block' that it creates. Each one has a 64 KB byte[]. This 
> is very wasteful. We should instead subclass BufferedOutputStream so that we 
> are able to provide a byte[] to use instead of an int that indicates the 
> size. This way, we can just keep re-using the same byte[] that we create for 
> each writer. This saves about 32,000 of these 64 KB byte[] for each writer 
> that we create. And we create more than 1 of these per minute.
> EvaluateJsonPath uses a BufferedInputStream but it is not necessary, because 
> the underlying library will also buffer data. So we are unnecessarily 
> creating a lot of byte[]'s
> CompressContent uses Buffered Input AND Output. And uses 64 KB byte[]. And 
> doesn't need them at all, because it reads and writes with its own byte[] 
> buffer via StreamUtils.copy
> Site-to-site uses CompressionInputStream. This stream creates a new byte[] in 
> the readChunkHeader() method continually. We should instead only create a new 
> byte[] if we need a bigger buffer and otherwise just use an offset & length 
> variable.
> Right now, SplitText uses TextLineDemarcator. The fill() method increases the 
> size of the internal byte[] by 8 KB each time. When dealing with a large 
> chunk of data, this is VERY 
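
To make the buffer-reuse idea above concrete, here is a minimal, hedged sketch 
(not the actual NiFi patch; the class and field names are illustrative) of a 
buffered output stream that writes through a caller-supplied byte[], so one 
64 KB array can be shared across many short-lived writers instead of being 
reallocated for each one:

import java.io.IOException;
import java.io.OutputStream;

/**
 * Buffered output stream backed by a caller-owned byte[], so the same buffer
 * can be reused across many short-lived writers instead of allocating a fresh
 * 64 KB array each time a writer is created.
 */
public class ReusableBufferedOutputStream extends OutputStream {

    private final OutputStream out;
    private final byte[] buffer;   // owned and reused by the caller
    private int count;             // number of valid bytes currently buffered

    public ReusableBufferedOutputStream(final OutputStream out, final byte[] reusableBuffer) {
        this.out = out;
        this.buffer = reusableBuffer;
    }

    @Override
    public void write(final int b) throws IOException {
        if (count == buffer.length) {
            flushBuffer();
        }
        buffer[count++] = (byte) b;
    }

    @Override
    public void write(final byte[] b, final int off, final int len) throws IOException {
        if (len >= buffer.length) {
            // large writes bypass the buffer entirely
            flushBuffer();
            out.write(b, off, len);
            return;
        }
        if (len > buffer.length - count) {
            flushBuffer();
        }
        System.arraycopy(b, off, buffer, count, len);
        count += len;
    }

    private void flushBuffer() throws IOException {
        if (count > 0) {
            out.write(buffer, 0, count);
            count = 0;
        }
    }

    @Override
    public void flush() throws IOException {
        flushBuffer();
        out.flush();
    }

    @Override
    public void close() throws IOException {
        flush();
        out.close();
    }
}

A writer-style component could then be handed the same long-lived byte[] on each 
construction rather than building a new java.io.BufferedOutputStream (and its 
internal array) every time.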

[GitHub] nifi pull request #1637: NIFI-3648 removed cluster message copying when not ...

2017-03-30 Thread mosermw
GitHub user mosermw opened a pull request:

https://github.com/apache/nifi/pull/1637

NIFI-3648 removed cluster message copying when not in debug mode

I expect NIFI-3648 could have several PRs, and that they will not be 
included in the 1.2.0 release, so review of this can be delayed until the rest of 
that ticket is PRed and reviewed. I just wanted to get this in so I don't 
forget.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mosermw/nifi nifi-3648-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1637.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1637


commit 55d93ccbec8fd7eaede0759e7abb550c75d36a63
Author: Mike Moser 
Date:   2017-03-30T14:34:26Z

NIFI-3648 removed message copying when not in debug mode




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (NIFI-3639) Add HBase Get to HBase_1_1_2_ClientService

2017-03-30 Thread Bjorn Olsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bjorn Olsen resolved NIFI-3639.
---
Resolution: Not A Problem

Not required - the HBase Get API is equivalent to a single-row Scan
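
For illustration, a hedged sketch against the HBase 1.x client API showing why a 
dedicated Get is not strictly needed: a single-row Get can be expressed as a Scan 
bounded to that one row. The table and row key names below are made up, and 
setStartRow/setStopRow are the 1.x-era methods.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetVersusScan {
    public static void main(String[] args) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection();
             Table table = connection.getTable(TableName.valueOf("example_table"))) {

            byte[] row = Bytes.toBytes("row-1");

            // Single-row fetch via Get
            Result viaGet = table.get(new Get(row));

            // The same row via Scan: start at the row, stop just past it
            Scan scan = new Scan();
            scan.setStartRow(row);
            scan.setStopRow(Bytes.add(row, new byte[] {0})); // exclusive stop key = row + 0x00
            try (ResultScanner scanner = table.getScanner(scan)) {
                Result viaScan = scanner.next(); // null if the row does not exist
                System.out.println("Same row returned: "
                        + (viaScan != null && Bytes.equals(viaGet.getRow(), viaScan.getRow())));
            }
        }
    }
}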

> Add HBase Get to HBase_1_1_2_ClientService
> --
>
> Key: NIFI-3639
> URL: https://issues.apache.org/jira/browse/NIFI-3639
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Bjorn Olsen
>Priority: Trivial
>
> Enhance HBase_1_1_2_ClientService and API to provide HBase Get functionality. 
> Currently only Put and Scan are supported.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFIREG-2) Design logo for Registry

2017-03-30 Thread Aldrin Piri (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949228#comment-15949228
 ] 

Aldrin Piri commented on NIFIREG-2:
---

Awesome, I like the design and the work you've done!

> Design logo for Registry
> 
>
> Key: NIFIREG-2
> URL: https://issues.apache.org/jira/browse/NIFIREG-2
> Project: NiFi Registry
>  Issue Type: Task
>Reporter: Rob Moran
>Assignee: Rob Moran
>Priority: Minor
> Attachments: registry-logo-concept_2017-03-27.png, 
> registry-logo-concept_2017-03-30.png
>
>
> The attached image contains the proposed logo design for Registry. The points 
> below describe some of the thinking behind it:
> * Relationship to NiFi and MiNiFi through the use of the same color palette, 
> typeface, and block elements representing bits of data
> * For Registry these blocks also represent the storage/organization aspect 
> through their even distribution and arrangement
> * The 3 gradated blocks across the top – forming the terminal part of a 
> lowercase *r* – represent movement (e.g., a versioned flow being saved to 
> NiFi or imported to NiFi from the registry)
> * Relating back to the original water/flow concept of NiFi, the curved line 
> integrated into the gradated blocks represents the continuous motion of 
> flowing water
> * The light gray block helps with the idea of storage as previously mentioned, 
> but also alludes to unused storage/free space
> * The gray block also helps establish the strong diagonal slicing through it 
> and the lowest green block. Again this helps with the idea of movement, but 
> more so speaks to how Registry operates in the background, tucked away, 
> largely unseen by NiFi operators as it facilitates deployment tasks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFIREG-2) Design logo for Registry

2017-03-30 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949219#comment-15949219
 ] 

Bryan Bende commented on NIFIREG-2:
---

+1 looks good!

> Design logo for Registry
> 
>
> Key: NIFIREG-2
> URL: https://issues.apache.org/jira/browse/NIFIREG-2
> Project: NiFi Registry
>  Issue Type: Task
>Reporter: Rob Moran
>Assignee: Rob Moran
>Priority: Minor
> Attachments: registry-logo-concept_2017-03-27.png, 
> registry-logo-concept_2017-03-30.png
>
>
> The attached image contains the proposed logo design for Registry. The points 
> below describe some of the thinking behind it:
> * Relationship to NiFi and MiNiFi through the use of the same color palette, 
> typeface, and block elements representing bits of data
> * For Registry these blocks also represent the storage/organization aspect 
> through their even distribution and arrangement
> * The 3 gradated blocks across the top – forming the terminal part of a 
> lowercase *r* – represent movement (e.g., a versioned flow being saved to 
> NiFi or imported to NiFi from the registry)
> * Relating back to the original water/flow concept of NiFi, the curved line 
> integrated into the gradated blocks represents the continuous motion of 
> flowing water
> * The light gray block helps with the idea of storage as previously mentioned, 
> but also alludes to unused storage/free space
> * The gray block also helps establish the strong diagonal slicing through it 
> and the lowest green block. Again this helps with the idea of movement, but 
> more so speaks to how Registry operates in the background, tucked away, 
> largely unseen by NiFi operators as it facilitates deployment tasks



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949216#comment-15949216
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108946077
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/db/event/BaseEventInfo.java
 ---
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard.db.event;
+
+
+/**
+ * An abstract base class for all MySQL binlog events
+ */
+public class BaseEventInfo implements EventInfo {
--- End diff --

Doh! I refactored everything at some point, and forgot to update comments. 
Good catch, will fix


> Implement a CaptureChangeMySQL processor
> 
>
> Key: NIFI-3413
> URL: https://issues.apache.org/jira/browse/NIFI-3413
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.2.0
>
>
> Database systems such as MySQL, Oracle, and SQL Server allow access to their 
> transactional logs and such, in order for external clients to have a "change 
> data capture" (CDC) capability. As an initial effort, I propose a 
> CaptureChangeMySQL processor to enable this in NiFi. This would incorporate 
> any APIs necessary for follow-on Jira cases to implement CDC processors for 
> databases such as Oracle, SQL Server, PostgreSQL, etc.
> The processor would include properties needed for database connectivity 
> (unless using a DBCPConnectionPool would suffice), as well as any to 
> configure third-party clients (mysql-binlog-connector, e.g.). It would also 
> need to keep a "sequence ID" such that an EnforceOrder processor (NIFI-3414) 
> for example could guarantee the order of CDC events for use cases such as 
> replication. It will likely need State Management for that, and may need 
> other facilities such as a DistributedMapCache in order to keep information 
> (column names and types, e.g.) that enrich the raw CDC events.
> The processor would accept no incoming connections (it is a "get" or source 
> processor), would be intended to run on the primary node only as a 
> single-threaded processor, and would generate a flow file for each operation 
> (INSERT, UPDATE, DELETE, e.g.) in one or more formats (JSON, e.g.).
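
As a rough sketch of the "sequence ID" bookkeeping described above, using NiFi's 
state-management API (StateManager, StateMap, Scope) that the quoted source 
already imports; the state key name here is made up for illustration, and this is 
not the actual processor implementation:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.nifi.components.state.Scope;
import org.apache.nifi.components.state.StateManager;
import org.apache.nifi.components.state.StateMap;

// Sketch: restore and persist a monotonically increasing CDC sequence ID
// so that downstream ordering (e.g., EnforceOrder) can rely on it.
public class SequenceIdState {

    private static final String SEQ_KEY = "cdc.sequence.id"; // illustrative key name

    /** Load the last stored sequence ID, defaulting to 0 on first run. */
    public static long restore(final StateManager stateManager) throws IOException {
        final StateMap state = stateManager.getState(Scope.CLUSTER);
        final String stored = state.get(SEQ_KEY);
        return stored == null ? 0L : Long.parseLong(stored);
    }

    /** Persist the next sequence ID after a CDC event flow file is emitted. */
    public static void store(final StateManager stateManager, final long nextSequenceId) throws IOException {
        final Map<String, String> newState = new HashMap<>(stateManager.getState(Scope.CLUSTER).toMap());
        newState.put(SEQ_KEY, String.valueOf(nextSequenceId));
        stateManager.setState(newState, Scope.CLUSTER);
    }
}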



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949215#comment-15949215
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108945950
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
+import com.github.shyiko.mysql.binlog.event.Event;
+import com.github.shyiko.mysql.binlog.event.EventHeaderV4;
+import com.github.shyiko.mysql.binlog.event.EventType;
+import com.github.shyiko.mysql.binlog.event.QueryEventData;
+import com.github.shyiko.mysql.binlog.event.RotateEventData;
+import com.github.shyiko.mysql.binlog.event.TableMapEventData;
+import org.apache.nifi.annotation.behavior.DynamicProperties;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.CDCException;
+import org.apache.nifi.processors.standard.db.event.ColumnDefinition;
+import org.apache.nifi.processors.standard.db.event.RowEventException;
+import org.apache.nifi.processors.standard.db.event.TableInfo;
+import org.apache.nifi.processors.standard.db.event.TableInfoCacheKey;
+import org.apache.nifi.processors.standard.db.event.io.EventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BeginTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.RawBinlogEvent;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.BinlogEventListener;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BinlogEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.CommitTransactionEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.DeleteRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.SchemaChangeEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.UpdateRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.InsertRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.BeginTransactionEventWriter;
+import 

[GitHub] nifi pull request #1618: NIFI-3413: Add CaptureChangeMySQL processor

2017-03-30 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108946077
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/db/event/BaseEventInfo.java
 ---
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard.db.event;
+
+
+/**
+ * An abstract base class for all MySQL binlog events
+ */
+public class BaseEventInfo implements EventInfo {
--- End diff --

Doh! I refactored everything at some point, and forgot to update comments. 
Good catch, will fix


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1618: NIFI-3413: Add CaptureChangeMySQL processor

2017-03-30 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108945950
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
+import com.github.shyiko.mysql.binlog.event.Event;
+import com.github.shyiko.mysql.binlog.event.EventHeaderV4;
+import com.github.shyiko.mysql.binlog.event.EventType;
+import com.github.shyiko.mysql.binlog.event.QueryEventData;
+import com.github.shyiko.mysql.binlog.event.RotateEventData;
+import com.github.shyiko.mysql.binlog.event.TableMapEventData;
+import org.apache.nifi.annotation.behavior.DynamicProperties;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.CDCException;
+import org.apache.nifi.processors.standard.db.event.ColumnDefinition;
+import org.apache.nifi.processors.standard.db.event.RowEventException;
+import org.apache.nifi.processors.standard.db.event.TableInfo;
+import org.apache.nifi.processors.standard.db.event.TableInfoCacheKey;
+import org.apache.nifi.processors.standard.db.event.io.EventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BeginTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.RawBinlogEvent;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.BinlogEventListener;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BinlogEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.CommitTransactionEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.DeleteRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.SchemaChangeEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.UpdateRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.InsertRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.BeginTransactionEventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.CommitTransactionEventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.DeleteRowsWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.InsertRowsWriter;
+import 

[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949207#comment-15949207
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108945053
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
+import com.github.shyiko.mysql.binlog.event.Event;
+import com.github.shyiko.mysql.binlog.event.EventHeaderV4;
+import com.github.shyiko.mysql.binlog.event.EventType;
+import com.github.shyiko.mysql.binlog.event.QueryEventData;
+import com.github.shyiko.mysql.binlog.event.RotateEventData;
+import com.github.shyiko.mysql.binlog.event.TableMapEventData;
+import org.apache.nifi.annotation.behavior.DynamicProperties;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.CDCException;
+import org.apache.nifi.processors.standard.db.event.ColumnDefinition;
+import org.apache.nifi.processors.standard.db.event.RowEventException;
+import org.apache.nifi.processors.standard.db.event.TableInfo;
+import org.apache.nifi.processors.standard.db.event.TableInfoCacheKey;
+import org.apache.nifi.processors.standard.db.event.io.EventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BeginTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.RawBinlogEvent;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.BinlogEventListener;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BinlogEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.CommitTransactionEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.DeleteRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.SchemaChangeEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.UpdateRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.InsertRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.BeginTransactionEventWriter;
+import 

[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949202#comment-15949202
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108944976
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
+import com.github.shyiko.mysql.binlog.event.Event;
+import com.github.shyiko.mysql.binlog.event.EventHeaderV4;
+import com.github.shyiko.mysql.binlog.event.EventType;
+import com.github.shyiko.mysql.binlog.event.QueryEventData;
+import com.github.shyiko.mysql.binlog.event.RotateEventData;
+import com.github.shyiko.mysql.binlog.event.TableMapEventData;
+import org.apache.nifi.annotation.behavior.DynamicProperties;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.CDCException;
+import org.apache.nifi.processors.standard.db.event.ColumnDefinition;
+import org.apache.nifi.processors.standard.db.event.RowEventException;
+import org.apache.nifi.processors.standard.db.event.TableInfo;
+import org.apache.nifi.processors.standard.db.event.TableInfoCacheKey;
+import org.apache.nifi.processors.standard.db.event.io.EventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BeginTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.RawBinlogEvent;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.BinlogEventListener;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BinlogEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.CommitTransactionEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.DeleteRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.SchemaChangeEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.UpdateRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.InsertRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.BeginTransactionEventWriter;
+import 

[GitHub] nifi pull request #1618: NIFI-3413: Add CaptureChangeMySQL processor

2017-03-30 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108945053
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
+import com.github.shyiko.mysql.binlog.event.Event;
+import com.github.shyiko.mysql.binlog.event.EventHeaderV4;
+import com.github.shyiko.mysql.binlog.event.EventType;
+import com.github.shyiko.mysql.binlog.event.QueryEventData;
+import com.github.shyiko.mysql.binlog.event.RotateEventData;
+import com.github.shyiko.mysql.binlog.event.TableMapEventData;
+import org.apache.nifi.annotation.behavior.DynamicProperties;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.CDCException;
+import org.apache.nifi.processors.standard.db.event.ColumnDefinition;
+import org.apache.nifi.processors.standard.db.event.RowEventException;
+import org.apache.nifi.processors.standard.db.event.TableInfo;
+import org.apache.nifi.processors.standard.db.event.TableInfoCacheKey;
+import org.apache.nifi.processors.standard.db.event.io.EventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BeginTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.RawBinlogEvent;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.BinlogEventListener;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BinlogEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.CommitTransactionEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.DeleteRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.SchemaChangeEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.UpdateRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.InsertRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.BeginTransactionEventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.CommitTransactionEventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.DeleteRowsWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.InsertRowsWriter;
+import 

[GitHub] nifi pull request #1618: NIFI-3413: Add CaptureChangeMySQL processor

2017-03-30 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108944976
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
+import com.github.shyiko.mysql.binlog.event.Event;
+import com.github.shyiko.mysql.binlog.event.EventHeaderV4;
+import com.github.shyiko.mysql.binlog.event.EventType;
+import com.github.shyiko.mysql.binlog.event.QueryEventData;
+import com.github.shyiko.mysql.binlog.event.RotateEventData;
+import com.github.shyiko.mysql.binlog.event.TableMapEventData;
+import org.apache.nifi.annotation.behavior.DynamicProperties;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.CDCException;
+import org.apache.nifi.processors.standard.db.event.ColumnDefinition;
+import org.apache.nifi.processors.standard.db.event.RowEventException;
+import org.apache.nifi.processors.standard.db.event.TableInfo;
+import org.apache.nifi.processors.standard.db.event.TableInfoCacheKey;
+import org.apache.nifi.processors.standard.db.event.io.EventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BeginTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.RawBinlogEvent;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.BinlogEventListener;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BinlogEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.CommitTransactionEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.DeleteRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.SchemaChangeEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.UpdateRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.InsertRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.BeginTransactionEventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.CommitTransactionEventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.DeleteRowsWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.InsertRowsWriter;
+import 

[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949199#comment-15949199
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108944680
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
+import com.github.shyiko.mysql.binlog.event.Event;
+import com.github.shyiko.mysql.binlog.event.EventHeaderV4;
+import com.github.shyiko.mysql.binlog.event.EventType;
+import com.github.shyiko.mysql.binlog.event.QueryEventData;
+import com.github.shyiko.mysql.binlog.event.RotateEventData;
+import com.github.shyiko.mysql.binlog.event.TableMapEventData;
+import org.apache.nifi.annotation.behavior.DynamicProperties;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.CDCException;
+import org.apache.nifi.processors.standard.db.event.ColumnDefinition;
+import org.apache.nifi.processors.standard.db.event.RowEventException;
+import org.apache.nifi.processors.standard.db.event.TableInfo;
+import org.apache.nifi.processors.standard.db.event.TableInfoCacheKey;
+import org.apache.nifi.processors.standard.db.event.io.EventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BeginTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.RawBinlogEvent;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.BinlogEventListener;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BinlogEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.CommitTransactionEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.DeleteRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.SchemaChangeEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.UpdateRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.InsertRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.BeginTransactionEventWriter;
+import 

[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949196#comment-15949196
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108944482
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
+import com.github.shyiko.mysql.binlog.event.Event;
+import com.github.shyiko.mysql.binlog.event.EventHeaderV4;
+import com.github.shyiko.mysql.binlog.event.EventType;
+import com.github.shyiko.mysql.binlog.event.QueryEventData;
+import com.github.shyiko.mysql.binlog.event.RotateEventData;
+import com.github.shyiko.mysql.binlog.event.TableMapEventData;
+import org.apache.nifi.annotation.behavior.DynamicProperties;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.CDCException;
+import org.apache.nifi.processors.standard.db.event.ColumnDefinition;
+import org.apache.nifi.processors.standard.db.event.RowEventException;
+import org.apache.nifi.processors.standard.db.event.TableInfo;
+import org.apache.nifi.processors.standard.db.event.TableInfoCacheKey;
+import org.apache.nifi.processors.standard.db.event.io.EventWriter;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BeginTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.RawBinlogEvent;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.BinlogEventListener;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.BinlogEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.CommitTransactionEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.DeleteRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.SchemaChangeEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.UpdateRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.InsertRowsEventInfo;
+import 
org.apache.nifi.processors.standard.db.impl.mysql.event.io.BeginTransactionEventWriter;
+import 

[GitHub] nifi pull request #1618: NIFI-3413: Add CaptureChangeMySQL processor

2017-03-30 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108944680
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
+import com.github.shyiko.mysql.binlog.event.Event;
+import com.github.shyiko.mysql.binlog.event.EventHeaderV4;
+import com.github.shyiko.mysql.binlog.event.EventType;
+import com.github.shyiko.mysql.binlog.event.QueryEventData;
+import com.github.shyiko.mysql.binlog.event.RotateEventData;
+import com.github.shyiko.mysql.binlog.event.TableMapEventData;
+import org.apache.nifi.annotation.behavior.DynamicProperties;
+import org.apache.nifi.annotation.behavior.DynamicProperty;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.TriggerSerially;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.CDCException;
+import org.apache.nifi.processors.standard.db.event.ColumnDefinition;
+import org.apache.nifi.processors.standard.db.event.RowEventException;
+import org.apache.nifi.processors.standard.db.event.TableInfo;
+import org.apache.nifi.processors.standard.db.event.TableInfoCacheKey;
+import org.apache.nifi.processors.standard.db.event.io.EventWriter;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.BeginTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.RawBinlogEvent;
+import org.apache.nifi.processors.standard.db.impl.mysql.BinlogEventListener;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.BinlogEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.CommitTransactionEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.DeleteRowsEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.SchemaChangeEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.UpdateRowsEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.InsertRowsEventInfo;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.io.BeginTransactionEventWriter;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.io.CommitTransactionEventWriter;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.io.DeleteRowsWriter;
+import org.apache.nifi.processors.standard.db.impl.mysql.event.io.InsertRowsWriter;
+import 

[GitHub] nifi pull request #1618: NIFI-3413: Add CaptureChangeMySQL processor

2017-03-30 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108944286
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
--- End diff --

I didn't know there was a newer version; when I started the work, 0.8.1 was 
the latest on Maven Central, I think. I will upgrade to the latest, thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3413) Implement a CaptureChangeMySQL processor

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949191#comment-15949191
 ] 

ASF GitHub Bot commented on NIFI-3413:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1618#discussion_r108944286
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/CaptureChangeMySQL.java
 ---
@@ -0,0 +1,928 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import com.github.shyiko.mysql.binlog.BinaryLogClient;
--- End diff --

I didn't know there was a newer version; when I started the work, 0.8.1 was 
the latest on Maven Central, I think. I will upgrade to the latest, thanks!


> Implement a CaptureChangeMySQL processor
> 
>
> Key: NIFI-3413
> URL: https://issues.apache.org/jira/browse/NIFI-3413
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.2.0
>
>
> Database systems such as MySQL, Oracle, and SQL Server allow access to their 
> transactional logs and such, in order for external clients to have a "change 
> data capture" (CDC) capability. As an initial effort, I propose a 
> CaptureChangeMySQL processor to enable this in NiFi. This would incorporate 
> any APIs necessary for follow-on Jira cases to implement CDC processors for 
> databases such as Oracle, SQL Server, PostgreSQL, etc.
> The processor would include properties needed for database connectivity 
> (unless using a DBCPConnectionPool would suffice), as well as any to 
> configure third-party clients (mysql-binlog-connector, e.g.). It would also 
> need to keep a "sequence ID" such that an EnforceOrder processor (NIFI-3414) 
> for example could guarantee the order of CDC events for use cases such as 
> replication. It will likely need State Management for that, and may need 
> other facilities such as a DistributedMapCache in order to keep information 
> (column names and types, e.g.) that enrich the raw CDC events.
> The processor would accept no incoming connections (it is a "get" or source 
> processor), would be intended to run on the primary node only as a single 
> threaded processor, and would generate a flow file for each operation 
> (INSERT, UPDATE, DELETE, e.g.) in one or more formats (JSON, e.g.).
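
As a rough sketch of this approach only (not the processor's actual
implementation), the mysql-binlog-connector-java client can be wired up
along the following lines; the sequence-ID handling and the printout standing
in for FlowFile/JSON generation are simplified placeholders, and the host and
credentials are assumptions:

    import com.github.shyiko.mysql.binlog.BinaryLogClient;
    import com.github.shyiko.mysql.binlog.event.Event;
    import com.github.shyiko.mysql.binlog.event.EventType;

    import java.io.IOException;
    import java.util.concurrent.atomic.AtomicLong;

    // Minimal sketch: tail the MySQL binlog and stamp each event with a
    // monotonically increasing sequence ID, the value a downstream
    // EnforceOrder processor (NIFI-3414) would rely on. State persistence,
    // error handling, and the actual FlowFile/JSON generation are omitted.
    public class BinlogTailSketch {

        public static void main(String[] args) throws IOException {
            final AtomicLong sequenceId = new AtomicLong(0);

            BinaryLogClient client =
                    new BinaryLogClient("localhost", 3306, "replication_user", "password");
            client.registerEventListener(new BinaryLogClient.EventListener() {
                @Override
                public void onEvent(Event event) {
                    EventType type = event.getHeader().getEventType();
                    // A real processor would turn write/update/delete events into
                    // one flow file per row operation (JSON, e.g.) and store the
                    // sequence ID plus binlog file/position in NiFi state.
                    System.out.println(sequenceId.getAndIncrement() + " " + type);
                }
            });

            client.connect(); // blocks and streams events until disconnect() is called
        }
    }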



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3257) Cluster stability issues during high throughput

2017-03-30 Thread Matt Burgess (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949181#comment-15949181
 ] 

Matt Burgess commented on NIFI-3257:


The following commit added logging to help indicate the conditions specified in 
this Jira case:
https://github.com/apache/nifi/commit/3aa1db6ee5d952aa67601e460331940f5e20a166


> Cluster stability issues during high throughput
> ---
>
> Key: NIFI-3257
> URL: https://issues.apache.org/jira/browse/NIFI-3257
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0, 1.1.1, 1.0.1
>Reporter: Jeff Storck
>
> During high throughput of data in a cluster (135MB/s), nodes experience 
> frequent disconnects (every few minutes) and role switching (Primary and 
> Cluster Coordinator).  This makes API requests difficult since the requests 
> can not be replicated to all nodes while reconnecting.  The cluster can 
> recover for a time (as mentioned above, for a few minutes) before going 
> through another round of disconnects and role switching.
> The cluster is able to continue to process data during these connection and 
> role-switching issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3663) Utility to automate upgrade/downgrade of NiFi

2017-03-30 Thread Yolanda M. Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949179#comment-15949179
 ] 

Yolanda M. Davis commented on NIFI-3663:


Just wanted to provide a definition for rolling upgrade in NiFi:

NiFi already supports backwards compatibility of nodes (from version 1.0.0 
forward), so the challenge is ensuring users are alerted before a node is 
disconnected, upgraded, and then rejoins the cluster. Users of the UI should be 
alerted of the pending upgrade of the cluster and alerted when a specific node 
in the cluster is about to be removed for upgrade. Also, once a node is 
disconnected from a cluster, NiFi already prevents users from making any 
changes to the flow (so that the flow template won't change when the upgraded 
node reconnects). However, if the flow is running, it should be allowed to 
continue executing on the nodes remaining active in the cluster. 

Here is a summary of what can happen during the upgrade of a node (a sketch of 
this sequence follows the two lists below):

1) Send a bulletin to NiFi alerting users of the pending upgrade of the node
2) Disconnect the node from the cluster and shut it down
3) Back up the lib, config, docs, and bin directories plus the license and 
notice files (not the repos/state dir)
4) Decompress the upgrade file and install the new library, docs, and bin 
directories
5) Perform a configuration delta for the conf, properties, and xml files, 
writing the updated files to the designated configuration location
6) Start up the node and reconnect it to the cluster

For the downgrade process:
1) Send a bulletin to NiFi alerting users of the pending downgrade of the node
2) If the node is running and attached to the cluster, disconnect the node and 
shut it down
3) Retrieve the backup of the lib, config, docs, and bin directories and 
restore it to the install location
4) Start up the node and reconnect to the cluster
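
A hypothetical Java sketch of how a utility might sequence those steps follows;
the class and helper names (NodeUpgradeSketch, sendBulletin,
disconnectAndShutdown, unpackArchive, applyConfigurationDelta,
startAndReconnect) are placeholders rather than existing NiFi APIs, and the
"conf" directory name is assumed for the configuration backup:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Stream;

    // Hypothetical orchestration sketch for the steps above. The cluster/REST
    // interactions are represented by stubs so that the overall flow is visible.
    public class NodeUpgradeSketch {

        private static final List<String> BACKUP_DIRS = Arrays.asList("lib", "conf", "docs", "bin");

        public static void upgradeNode(Path installDir, Path backupDir, Path upgradeArchive) throws IOException {
            sendBulletin("Upgrade pending for node at " + installDir);   // step 1
            disconnectAndShutdown(installDir);                           // step 2
            for (String dir : BACKUP_DIRS) {                             // step 3 (repos/state left in place)
                copyTree(installDir.resolve(dir), backupDir.resolve(dir));
            }
            unpackArchive(upgradeArchive, installDir);                   // step 4
            applyConfigurationDelta(backupDir.resolve("conf"),           // step 5
                    installDir.resolve("conf"));
            startAndReconnect(installDir);                               // step 6
        }

        public static void downgradeNode(Path installDir, Path backupDir) throws IOException {
            sendBulletin("Downgrade pending for node at " + installDir);
            disconnectAndShutdown(installDir);
            for (String dir : BACKUP_DIRS) {
                copyTree(backupDir.resolve(dir), installDir.resolve(dir));
            }
            startAndReconnect(installDir);
        }

        // Copy a directory tree; a real utility would also preserve permissions.
        private static void copyTree(Path source, Path target) throws IOException {
            if (!Files.exists(source)) {
                return;
            }
            try (Stream<Path> paths = Files.walk(source)) {
                for (Path p : (Iterable<Path>) paths::iterator) {
                    Path dest = target.resolve(source.relativize(p).toString());
                    if (Files.isDirectory(p)) {
                        Files.createDirectories(dest);
                    } else {
                        Files.createDirectories(dest.getParent());
                        Files.copy(p, dest, StandardCopyOption.REPLACE_EXISTING);
                    }
                }
            }
        }

        // Stubs standing in for bulletin/REST/packaging logic.
        private static void sendBulletin(String message) { System.out.println(message); }
        private static void disconnectAndShutdown(Path installDir) { /* cluster REST call + stop script */ }
        private static void unpackArchive(Path archive, Path installDir) { /* tar/zip extraction */ }
        private static void applyConfigurationDelta(Path oldConf, Path newConf) { /* merge properties/XML */ }
        private static void startAndReconnect(Path installDir) { /* start script + wait for CONNECTED */ }
    }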


> Utility to automate upgrade/downgrade of NiFi 
> --
>
> Key: NIFI-3663
> URL: https://issues.apache.org/jira/browse/NIFI-3663
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
>
> Upgrading NiFi either in standalone mode or within a cluster currently 
> requires several manual steps to ensure that an existing NiFi node is 
> properly migrated to a new version.  Also, there is no support for "rolling 
> upgrade," which would allow an upgrade of a NiFi cluster to occur without a 
> full outage of the cluster. This limits a cluster's ability to provide a highly 
> available environment for flow execution and also requires more coordination 
> to plan and schedule for full outages.
> Having a utility (or a set of utilities) that can support a more seamless 
> move to a new version of NiFi (either in rolling or non-rolling fashion) 
> would help to further support automation of configuration and management for 
> NiFi.  Such a utility could also be leveraged by more enterprise level 
> configuration management frameworks (e.g. Ansible, Puppet, Chef, Salt) to 
> coordinate upgrades across multiple nodes or clusters within an environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3649) Parallelize Consumption of Cluster Replication Responses

2017-03-30 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-3649:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Parallelize Consumption of Cluster Replication Responses
> 
>
> Key: NIFI-3649
> URL: https://issues.apache.org/jira/browse/NIFI-3649
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> Currently, when we replicate a REST API call to all nodes in the cluster, we 
> wait until we get back a Response object for each node. We then go to the 
> Response Mapper and merge those responses into 1 NodeResponse. This merging 
> is done serially, and this is where we actually do the reading of the 
> response. We need to instead do this merging in parallel or buffer responses 
> in parallel and then merge them when done.
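
A rough sketch of the buffering idea (illustrative only, not the NiFi
implementation; BufferedNodeResponse and the Callable tasks stand in for the
real NodeResponse class and the per-node HTTP reads):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Sketch only: the response bodies are read and buffered in parallel, so
    // only the final merge remains serial and works on in-memory data.
    public class ParallelResponseMergeSketch {

        static class BufferedNodeResponse {
            final String nodeId;
            final byte[] body;

            BufferedNodeResponse(String nodeId, byte[] body) {
                this.nodeId = nodeId;
                this.body = body;
            }
        }

        public static BufferedNodeResponse replicateAndMerge(List<Callable<BufferedNodeResponse>> perNodeReads)
                throws InterruptedException, ExecutionException {
            ExecutorService pool =
                    Executors.newFixedThreadPool(Math.max(1, Math.min(perNodeReads.size(), 8)));
            try {
                // Kick off all response reads; each task buffers one node's response body.
                List<Future<BufferedNodeResponse>> futures = new ArrayList<>();
                for (Callable<BufferedNodeResponse> read : perNodeReads) {
                    futures.add(pool.submit(read));
                }

                // Wait for the buffered responses.
                List<BufferedNodeResponse> buffered = new ArrayList<>();
                for (Future<BufferedNodeResponse> future : futures) {
                    buffered.add(future.get());
                }

                // The serial merge now operates on fully buffered data and is cheap.
                return merge(buffered);
            } finally {
                pool.shutdown();
            }
        }

        // Trivial placeholder: in NiFi the Response Mapper combines the node
        // payloads into one logical response.
        private static BufferedNodeResponse merge(List<BufferedNodeResponse> responses) {
            return responses.isEmpty() ? null : responses.get(0);
        }
    }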



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3636) Session should not copy FlowFile Attribute Map when creating new FlowFile object unless attributes are changing

2017-03-30 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-3636:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Session should not copy FlowFile Attribute Map when creating new FlowFile 
> object unless attributes are changing
> ---
>
> Key: NIFI-3636
> URL: https://issues.apache.org/jira/browse/NIFI-3636
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> Currently, each time we create a new FlowFile object, we do so using 
> StandardFlowFileRecord.Builder and then call fromFlowFile(FlowFile). This 
> copies the attributes map of the given FlowFile. We should instead just set 
> the member variable of the builder to point at the same Map object as the 
> given FlowFile and keep a flag indicating whether or not this was done. If 
> this is done, we must lazily copy the hash map before modifying it. 
> Otherwise, we can point to the same Map object.
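
A minimal copy-on-write sketch of that idea (illustrative only; the real
StandardFlowFileRecord.Builder has a different shape):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the lazy-copy idea: the source FlowFile's attribute map is
    // shared until a modification actually forces a copy.
    public class LazyAttributeBuilderSketch {

        private Map<String, String> attributes = new HashMap<>();
        private boolean attributesCopied = true; // a fresh builder owns its (empty) map

        // Point at the source map instead of copying it eagerly.
        public LazyAttributeBuilderSketch fromAttributes(Map<String, String> sourceAttributes) {
            this.attributes = sourceAttributes;
            this.attributesCopied = false;
            return this;
        }

        // Copy-on-write: clone the map only the first time it is modified.
        public LazyAttributeBuilderSketch addAttribute(String key, String value) {
            if (!attributesCopied) {
                this.attributes = new HashMap<>(this.attributes);
                this.attributesCopied = true;
            }
            this.attributes.put(key, value);
            return this;
        }

        public Map<String, String> build() {
            // Either we own a private copy or the shared map is never mutated here.
            return Collections.unmodifiableMap(attributes);
        }
    }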



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3636) Session should not copy FlowFile Attribute Map when creating new FlowFile object unless attributes are changing

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949175#comment-15949175
 ] 

ASF GitHub Bot commented on NIFI-3636:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1612


> Session should not copy FlowFile Attribute Map when creating new FlowFile 
> object unless attributes are changing
> ---
>
> Key: NIFI-3636
> URL: https://issues.apache.org/jira/browse/NIFI-3636
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> Currently, each time we create a new FlowFile object, we do so using 
> StandardFlowFileRecord.Builder and then call fromFlowFile(FlowFile). This 
> copies the attributes map of the given FlowFile. We should instead just set 
> the member variable of the builder to point at the same Map object as the 
> given FlowFile and keep a flag indicating whether or not this was done. If 
> this is done, we must lazily copy the hash map before modifying it. 
> Otherwise, we can point to the same Map object.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3636) Session should not copy FlowFile Attribute Map when creating new FlowFile object unless attributes are changing

2017-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949173#comment-15949173
 ] 

ASF subversion and git services commented on NIFI-3636:
---

Commit 3aa1db6ee5d952aa67601e460331940f5e20a166 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3aa1db6 ]

NIFI-3636: Lazily copy FlowFile Attributes Hash Map instead of doing so eagerly.

Signed-off-by: Matt Burgess 

NIFI-3257: Added additional logging regarding timing information when 
replicating requests across cluster in order to glean insight as to what is 
taking so long when replicating some requests

Signed-off-by: Matt Burgess 

NIFI-3649: Buffer node responses when replicating HTTP Requests up to a maximum 
buffer size

Signed-off-by: Matt Burgess 

NIFI-3636: Added unit test to ensure that flowfile attribute maps are copied 
when appropriate

Signed-off-by: Matt Burgess 

NIFI-3636: Removed patch file that should not have been in commit

Signed-off-by: Matt Burgess 

This closes #1612


> Session should not copy FlowFile Attribute Map when creating new FlowFile 
> object unless attributes are changing
> ---
>
> Key: NIFI-3636
> URL: https://issues.apache.org/jira/browse/NIFI-3636
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> Currently, each time we create a new FlowFile object, we do so using 
> StandardFlowFileRecord.Builder and then call fromFlowFile(FlowFile). This 
> copies the attributes map of the given FlowFile. We should instead just set 
> the member variable of the builder to point at the same Map object as the 
> given FlowFile and keep a flag indicating whether or not this was done. If 
> this is done, we must lazily copy the hash map before modifying it. 
> Otherwise, we can point to the same Map object.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi pull request #1612: NIFI-3636: Lazily copy FlowFile Attributes Hash Map...

2017-03-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1612


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3636) Session should not copy FlowFile Attribute Map when creating new FlowFile object unless attributes are changing

2017-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949171#comment-15949171
 ] 

ASF subversion and git services commented on NIFI-3636:
---

Commit 3aa1db6ee5d952aa67601e460331940f5e20a166 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3aa1db6 ]

NIFI-3636: Lazily copy FlowFile Attributes Hash Map instead of doing so eagerly.

Signed-off-by: Matt Burgess 

NIFI-3257: Added additional logging regarding timing information when 
replicating requests across cluster in order to glean insight as to what is 
taking so long when replicating some requests

Signed-off-by: Matt Burgess 

NIFI-3649: Buffer node responses when replicating HTTP Requests up to a maximum 
buffer size

Signed-off-by: Matt Burgess 

NIFI-3636: Added unit test to ensure that flowfile attribute maps are copied 
when appropriate

Signed-off-by: Matt Burgess 

NIFI-3636: Removed patch file that should not have been in commit

Signed-off-by: Matt Burgess 

This closes #1612


> Session should not copy FlowFile Attribute Map when creating new FlowFile 
> object unless attributes are changing
> ---
>
> Key: NIFI-3636
> URL: https://issues.apache.org/jira/browse/NIFI-3636
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> Currently, each time we create a new FlowFile object, we do so using 
> StandardFlowFileRecord.Builder and then call fromFlowFile(FlowFile). This 
> copies the attributes map of the given FlowFile. We should instead just set 
> the member variable of the builder to point at the same Map object as the 
> given FlowFile and keep a flag indicating whether or not this was done. If 
> this is done, we must lazily copy the hash map before modifying it. 
> Otherwise, we can point to the same Map object.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3649) Parallelize Consumption of Cluster Replication Responses

2017-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949170#comment-15949170
 ] 

ASF subversion and git services commented on NIFI-3649:
---

Commit 3aa1db6ee5d952aa67601e460331940f5e20a166 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3aa1db6 ]

NIFI-3636: Lazily copy FlowFile Attributes Hash Map instead of doing so eagerly.

Signed-off-by: Matt Burgess 

NIFI-3257: Added additional logging regarding timing information when 
replicating requests across cluster in order to glean insight as to what is 
taking so long when replicating some requests

Signed-off-by: Matt Burgess 

NIFI-3649: Buffer node responses when replicating HTTP Requests up to a maximum 
buffer size

Signed-off-by: Matt Burgess 

NIFI-3636: Added unit test to ensure that flowfile attribute maps are copied 
when appropriate

Signed-off-by: Matt Burgess 

NIFI-3636: Removed patch file that should not have been in commit

Signed-off-by: Matt Burgess 

This closes #1612


> Parallelize Consumption of Cluster Replication Responses
> 
>
> Key: NIFI-3649
> URL: https://issues.apache.org/jira/browse/NIFI-3649
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> Currently, when we replicate a REST API call to all nodes in the cluster, we 
> wait until we get back a Response object for each node. We then go to the 
> Response Mapper and merge those responses into 1 NodeResponse. This merging 
> is done serially, and this is where we actually do the reading of the 
> response. We need to instead do this merging in parallel or buffer responses 
> in parallel and then merge them when done.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3636) Session should not copy FlowFile Attribute Map when creating new FlowFile object unless attributes are changing

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949172#comment-15949172
 ] 

ASF GitHub Bot commented on NIFI-3636:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1612
  
+1 LGTM, built and ran unit tests and contrib-check. Tested with 3-node 
secure NiFi cluster using sample flows (including site-to-site), all looks 
good. Thanks! Merging to master.


> Session should not copy FlowFile Attribute Map when creating new FlowFile 
> object unless attributes are changing
> ---
>
> Key: NIFI-3636
> URL: https://issues.apache.org/jira/browse/NIFI-3636
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.2.0
>
>
> Currently, each time we create a new FlowFile object, we do so using 
> StandardFlowFileRecord.Builder and then call fromFlowFile(FlowFile). This 
> copies the attributes map of the given FlowFile. We should instead just set 
> the member variable of the builder to point at the same Map object as the 
> given FlowFile and keep a flag indicating whether or not this was done. If 
> this is done, we must lazily copy the hash map before modifying it. 
> Otherwise, we can point to the same Map object.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] nifi issue #1612: NIFI-3636: Lazily copy FlowFile Attributes Hash Map instea...

2017-03-30 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1612
  
+1 LGTM, built and ran unit tests and contrib-check. Tested with 3-node 
secure NiFi cluster using sample flows (including site-to-site), all looks 
good. Thanks! Merging to master.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

