[jira] [Updated] (NIFI-2757) Site-to-Site Auth Breaks when using DN Identity Mapping Patterns

2016-09-12 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-2757:

Fix Version/s: 1.1.0

> Site-to-Site Auth Breaks when using DN Identity Mapping Patterns
> 
>
> Key: NIFI-2757
> URL: https://issues.apache.org/jira/browse/NIFI-2757
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Peter Wicks
>Assignee: Koji Kawamura
> Fix For: 1.1.0
>
>
> If you set up a nifi.security.identity.mapping pattern for DNs, Site-to-Site 
> won't be able to authenticate against a server that uses identity mappings 
> unless you create two user accounts: one for the identity-mapped name and 
> another with the full DN from the certificate.
> Maybe look at StandardRootGroupPort.java, 
> final CommunicationsSession commsSession = peer.getCommunicationsSession();
> final String sourceDn = commsSession.getUserDn();
> ..
> final PortAuthorizationResult authorizationResult = 
> checkUserAuthorization(sourceDn);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2757) Site-to-Site Auth Breaks when using DN Identity Mapping Patterns

2016-09-12 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-2757:

Status: Patch Available  (was: Open)

> Site-to-Site Auth Breaks when using DN Identity Mapping Patterns
> 
>
> Key: NIFI-2757
> URL: https://issues.apache.org/jira/browse/NIFI-2757
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Peter Wicks
>Assignee: Koji Kawamura
>
> If you set up a nifi.security.identity.mapping pattern for DNs, Site-to-Site 
> won't be able to authenticate against a server that uses identity mappings 
> unless you create two user accounts: one for the identity-mapped name and 
> another with the full DN from the certificate.
> Maybe look at StandardRootGroupPort.java, 
> final CommunicationsSession commsSession = peer.getCommunicationsSession();
> final String sourceDn = commsSession.getUserDn();
> ..
> final PortAuthorizationResult authorizationResult = 
> checkUserAuthorization(sourceDn);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-2757) Site-to-Site Auth Breaks when using DN Identity Mapping Patterns

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486232#comment-15486232
 ] 

ASF GitHub Bot commented on NIFI-2757:
--

GitHub user ijokarumawak opened a pull request:

https://github.com/apache/nifi/pull/1010

NIFI-2757: Site-to-Site with DN mapping

Added DN identity mapping pattern support to Site-to-Site client
authorization.

HTTP Site-to-Site has been working without this fix since it uses the same 
mechanism as other REST endpoints for authenticating user identity. This PR 
fixes the RAW transport protocol by adding mapping code at 
`StandardRootGroupPort.checkUserAuthorization(final String dn)`.

Confirmed it works using two running NiFi instances; the contrib check passed 
locally.
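
For reference, a minimal standalone sketch of the mapping idea (the pattern, value template, and class name are illustrative, not the actual patch; `checkUserAuthorization` is the method quoted in the issue):

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical illustration: apply the configured
// nifi.security.identity.mapping pattern/value pair to the certificate DN
// before handing it to the port authorization check.
public class DnMappingSketch {

    static String mapDn(final String sourceDn, final Pattern dnPattern, final String valueTemplate) {
        final Matcher m = dnPattern.matcher(sourceDn);
        // Fall back to the raw DN when no mapping applies, mirroring how the
        // REST endpoints resolve user identities.
        return m.matches() ? m.replaceAll(valueTemplate) : sourceDn;
    }

    public static void main(String[] args) {
        final Pattern pattern = Pattern.compile("^CN=(.*?), OU=(.*?)$");
        // Prints "nifi-node1", the identity the authorizer should be asked about.
        System.out.println(mapDn("CN=nifi-node1, OU=Hosts", pattern, "$1"));
    }
}
{code}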

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijokarumawak/nifi nifi-2757

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1010.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1010


commit fff756728b93c3c962b2ce40327cf02700eca3ff
Author: Koji Kawamura 
Date:   2016-09-13T04:24:59Z

NIFI-2757: Site-to-Site with DN mapping

Added DN identity mapping pattern support to Site-to-Site client
authorization.




> Site-to-Site Auth Breaks when using DN Identity Mapping Patterns
> 
>
> Key: NIFI-2757
> URL: https://issues.apache.org/jira/browse/NIFI-2757
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Peter Wicks
>Assignee: Koji Kawamura
>
> If you set up a nifi.security.identity.mapping pattern for DNs, Site-to-Site 
> won't be able to authenticate against a server that uses identity mappings 
> unless you create two user accounts: one for the identity-mapped name and 
> another with the full DN from the certificate.
> Maybe look at StandardRootGroupPort.java, 
> final CommunicationsSession commsSession = peer.getCommunicationsSession();
> final String sourceDn = commsSession.getUserDn();
> ..
> final PortAuthorizationResult authorizationResult = 
> checkUserAuthorization(sourceDn);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1010: NIFI-2757: Site-to-Site with DN mapping

2016-09-12 Thread ijokarumawak
GitHub user ijokarumawak opened a pull request:

https://github.com/apache/nifi/pull/1010

NIFI-2757: Site-to-Site with DN mapping

Added DN identity mapping pattern support to Site-to-Site client
authorization.

HTTP Site-to-Site has been working without this fix since it uses the same 
mechanism as other REST endpoints for authenticating user identity. This PR 
fixes the RAW transport protocol by adding mapping code at 
`StandardRootGroupPort.checkUserAuthorization(final String dn)`.

Confirmed it works using two running NiFi instances; the contrib check passed 
locally.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijokarumawak/nifi nifi-2757

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1010.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1010


commit fff756728b93c3c962b2ce40327cf02700eca3ff
Author: Koji Kawamura 
Date:   2016-09-13T04:24:59Z

NIFI-2757: Site-to-Site with DN mapping

Added DN identity mapping pattern support to Site-to-Site client
authorization.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2764) JdbcCommon Avro Can't Process Java Short Types

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486157#comment-15486157
 ] 

ASF GitHub Bot commented on NIFI-2764:
--

GitHub user patricker opened a pull request:

https://github.com/apache/nifi/pull/1009

NIFI-2764 - MS SQL TINYINT Avro Issue

Small fix for TINYINT columns causing Avro serialization to die.

I did not commit the unit test; I did write one and attached it as a 
comment to the JIRA ticket. It felt pretty dirty how it was written, and I 
wasn't sure if I should include it.
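
For context, this is the general shape of such a guard in JdbcCommon's record-building path (a hedged sketch; the helper and class names are made up and the committed patch may differ):

{code}
public class AvroShortCoercionSketch {

    // Sketch: Avro's generic datum writer has no mapping for java.lang.Short,
    // so widen Short values to Integer before they reach the Avro record.
    static Object coerceForAvro(final Object value) {
        if (value instanceof Short) {
            return ((Short) value).intValue();
        }
        return value;
    }

    public static void main(String[] args) {
        final Object tinyint = (short) 5;  // what the MS SQL driver returns
        System.out.println(coerceForAvro(tinyint).getClass()); // class java.lang.Integer
    }
}
{code}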

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/patricker/nifi NIFI-2764

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1009.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1009


commit 3755ecb96acca90344d25e2717e58ce9940f69b2
Author: patricker 
Date:   2016-09-13T03:52:58Z

NIFI-2764




> JdbcCommon Avro Can't Process Java Short Types
> --
>
> Key: NIFI-2764
> URL: https://issues.apache.org/jira/browse/NIFI-2764
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Peter Wicks
>
> Microsoft SQL Server returns TINYINT values as Java Shorts.  Avro is unable 
> to write datums of this type and throws an exception when trying to.
> This currently breaks QueryDatabaseTable, at the very least, when querying MS 
> SQL Server with TINYINTs in the ResultSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1009: NIFI-2764 - MS SQL TINYINT Avro Issue

2016-09-12 Thread patricker
GitHub user patricker opened a pull request:

https://github.com/apache/nifi/pull/1009

NIFI-2764 - MS SQL TINYINT Avro Issue

Small fix for TINYINT columns causing Avro serialization to die.

I did not commit the unit test; I did write one and attached it as a 
comment to the JIRA ticket. It felt pretty dirty how it was written, and I 
wasn't sure if I should include it.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/patricker/nifi NIFI-2764

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1009.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1009


commit 3755ecb96acca90344d25e2717e58ce9940f69b2
Author: patricker 
Date:   2016-09-13T03:52:58Z

NIFI-2764




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (NIFI-2764) JdbcCommon Avro Can't Process Java Short Types

2016-09-12 Thread Peter Wicks (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486147#comment-15486147
 ] 

Peter Wicks edited comment on NIFI-2764 at 9/13/16 3:50 AM:


The unit test I wrote, which I can't bring myself to check in:

{code}
@Test
@Ignore("Intended only for local testing, not automated testing")
public void testMSSQLIntColumns() throws SQLException, 
InitializationException {
QueryDatabaseTable qdbProcessor = new MockQueryDatabaseTable();
TestRunner runner = TestRunners.newTestRunner(qdbProcessor);

final DBCPConnectionPool service = new DBCPConnectionPool();
runner.addControllerService("dbcp", service);

// set the MS SQL Server database connection url
runner.setProperty(service, DBCPConnectionPool.DATABASE_URL, 
"jdbc:sqlserver://localhost;DatabaseName=nifiTest;portNumber=1433;instanceName=./SQLEXPRESS");
runner.setProperty(service, DBCPConnectionPool.DB_USER, "nifi");
runner.setProperty(service, DBCPConnectionPool.DB_PASSWORD, "nifi");
runner.setProperty(service, DBCPConnectionPool.DB_DRIVERNAME, 
"com.microsoft.sqlserver.jdbc.SQLServerDriver");
runner.setProperty(service, DBCPConnectionPool.DB_DRIVER_LOCATION, 
"C:/Users/pwicks/Downloads/sqljdbc_6.0/enu/sqljdbc41.jar");

runner.enableControllerService(service);

runner.setIncomingConnection(false);
runner.setProperty(QueryDatabaseTable.TABLE_NAME, "testtable");
runner.run();

runner.assertAllFlowFilesTransferred(QueryDatabaseTable.REL_SUCCESS, 1);

runner.getFlowFilesForRelationship(QueryDatabaseTable.REL_SUCCESS).get(0).assertAttributeEquals(QueryDatabaseTable.RESULT_ROW_COUNT,
 "3");
}
{code}


was (Author: patricker):
The unit test I wrote, which I can't bring myself to check in:

`@Test
@Ignore("Intended only for local testing, not automated testing")
public void testMSSQLIntColumns() throws SQLException, 
InitializationException {
QueryDatabaseTable qdbProcessor = new MockQueryDatabaseTable();
TestRunner runner = TestRunners.newTestRunner(qdbProcessor);

final DBCPConnectionPool service = new DBCPConnectionPool();
runner.addControllerService("dbcp", service);

// set the MS SQL Server database connection url
runner.setProperty(service, DBCPConnectionPool.DATABASE_URL, 
"jdbc:sqlserver://localhost;DatabaseName=nifiTest;portNumber=1433;instanceName=./SQLEXPRESS");
runner.setProperty(service, DBCPConnectionPool.DB_USER, "nifi");
runner.setProperty(service, DBCPConnectionPool.DB_PASSWORD, "nifi");
runner.setProperty(service, DBCPConnectionPool.DB_DRIVERNAME, 
"com.microsoft.sqlserver.jdbc.SQLServerDriver");
runner.setProperty(service, DBCPConnectionPool.DB_DRIVER_LOCATION, 
"C:/Users/pwicks/Downloads/sqljdbc_6.0/enu/sqljdbc41.jar");

runner.enableControllerService(service);

runner.setIncomingConnection(false);
runner.setProperty(QueryDatabaseTable.TABLE_NAME, "testtable");
runner.run();

runner.assertAllFlowFilesTransferred(QueryDatabaseTable.REL_SUCCESS, 1);

runner.getFlowFilesForRelationship(QueryDatabaseTable.REL_SUCCESS).get(0).assertAttributeEquals(QueryDatabaseTable.RESULT_ROW_COUNT,
 "3");
}`

> JdbcCommon Avro Can't Process Java Short Types
> --
>
> Key: NIFI-2764
> URL: https://issues.apache.org/jira/browse/NIFI-2764
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Peter Wicks
>
> Microsoft SQL Server returns TINYINT values as Java Shorts.  Avro is unable 
> to write datums of this type and throws an exception when trying to.
> This currently breaks QueryDatabaseTable, at the very least, when querying MS 
> SQL Server with TINYINTs in the ResultSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (NIFI-2764) JdbcCommon Avro Can't Process Java Short Types

2016-09-12 Thread Peter Wicks (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486147#comment-15486147
 ] 

Peter Wicks edited comment on NIFI-2764 at 9/13/16 3:49 AM:


The unit test I wrote, which I can't bring myself to check in:

`@Test
@Ignore("Intended only for local testing, not automated testing")
public void testMSSQLIntColumns() throws SQLException, 
InitializationException {
QueryDatabaseTable qdbProcessor = new MockQueryDatabaseTable();
TestRunner runner = TestRunners.newTestRunner(qdbProcessor);

final DBCPConnectionPool service = new DBCPConnectionPool();
runner.addControllerService("dbcp", service);

// set the MS SQL Server database connection url
runner.setProperty(service, DBCPConnectionPool.DATABASE_URL, 
"jdbc:sqlserver://localhost;DatabaseName=nifiTest;portNumber=1433;instanceName=./SQLEXPRESS");
runner.setProperty(service, DBCPConnectionPool.DB_USER, "nifi");
runner.setProperty(service, DBCPConnectionPool.DB_PASSWORD, "nifi");
runner.setProperty(service, DBCPConnectionPool.DB_DRIVERNAME, 
"com.microsoft.sqlserver.jdbc.SQLServerDriver");
runner.setProperty(service, DBCPConnectionPool.DB_DRIVER_LOCATION, 
"C:/Users/pwicks/Downloads/sqljdbc_6.0/enu/sqljdbc41.jar");

runner.enableControllerService(service);

runner.setIncomingConnection(false);
runner.setProperty(QueryDatabaseTable.TABLE_NAME, "testtable");
runner.run();

runner.assertAllFlowFilesTransferred(QueryDatabaseTable.REL_SUCCESS, 1);

runner.getFlowFilesForRelationship(QueryDatabaseTable.REL_SUCCESS).get(0).assertAttributeEquals(QueryDatabaseTable.RESULT_ROW_COUNT,
 "3");
}`


was (Author: patricker):
The unit test I wrote, which I can't bring myself to check in:

@Test
@Ignore("Intended only for local testing, not automated testing")
public void testMSSQLIntColumns() throws SQLException, 
InitializationException {
QueryDatabaseTable qdbProcessor = new MockQueryDatabaseTable();
TestRunner runner = TestRunners.newTestRunner(qdbProcessor);

final DBCPConnectionPool service = new DBCPConnectionPool();
runner.addControllerService("dbcp", service);

// set the MS SQL Server database connection url
runner.setProperty(service, DBCPConnectionPool.DATABASE_URL, 
"jdbc:sqlserver://localhost;DatabaseName=nifiTest;portNumber=1433;instanceName=./SQLEXPRESS");
runner.setProperty(service, DBCPConnectionPool.DB_USER, "nifi");
runner.setProperty(service, DBCPConnectionPool.DB_PASSWORD, "nifi");
runner.setProperty(service, DBCPConnectionPool.DB_DRIVERNAME, 
"com.microsoft.sqlserver.jdbc.SQLServerDriver");
runner.setProperty(service, DBCPConnectionPool.DB_DRIVER_LOCATION, 
"C:/Users/pwicks/Downloads/sqljdbc_6.0/enu/sqljdbc41.jar");

runner.enableControllerService(service);

runner.setIncomingConnection(false);
runner.setProperty(QueryDatabaseTable.TABLE_NAME, "testtable");
runner.run();

runner.assertAllFlowFilesTransferred(QueryDatabaseTable.REL_SUCCESS, 1);

runner.getFlowFilesForRelationship(QueryDatabaseTable.REL_SUCCESS).get(0).assertAttributeEquals(QueryDatabaseTable.RESULT_ROW_COUNT,
 "3");
}

> JdbcCommon Avro Can't Process Java Short Types
> --
>
> Key: NIFI-2764
> URL: https://issues.apache.org/jira/browse/NIFI-2764
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Peter Wicks
>
> Microsoft SQL Server returns TINYINT values as Java Shorts.  Avro is unable 
> to write datums of this type and throws an exception when trying to.
> This currently breaks QueryDatabaseTable, at the very least, when querying MS 
> SQL Server with TINYINTs in the ResultSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-2764) JdbcCommon Avro Can't Process Java Short Types

2016-09-12 Thread Peter Wicks (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486147#comment-15486147
 ] 

Peter Wicks commented on NIFI-2764:
---

The unit test I wrote, which I can't bring myself to check in:

@Test
@Ignore("Intended only for local testing, not automated testing")
public void testMSSQLIntColumns() throws SQLException, 
InitializationException {
QueryDatabaseTable qdbProcessor = new MockQueryDatabaseTable();
TestRunner runner = TestRunners.newTestRunner(qdbProcessor);

final DBCPConnectionPool service = new DBCPConnectionPool();
runner.addControllerService("dbcp", service);

// set the MS SQL Server database connection url
runner.setProperty(service, DBCPConnectionPool.DATABASE_URL, 
"jdbc:sqlserver://localhost;DatabaseName=nifiTest;portNumber=1433;instanceName=./SQLEXPRESS");
runner.setProperty(service, DBCPConnectionPool.DB_USER, "nifi");
runner.setProperty(service, DBCPConnectionPool.DB_PASSWORD, "nifi");
runner.setProperty(service, DBCPConnectionPool.DB_DRIVERNAME, 
"com.microsoft.sqlserver.jdbc.SQLServerDriver");
runner.setProperty(service, DBCPConnectionPool.DB_DRIVER_LOCATION, 
"C:/Users/pwicks/Downloads/sqljdbc_6.0/enu/sqljdbc41.jar");

runner.enableControllerService(service);

runner.setIncomingConnection(false);
runner.setProperty(QueryDatabaseTable.TABLE_NAME, "testtable");
runner.run();

runner.assertAllFlowFilesTransferred(QueryDatabaseTable.REL_SUCCESS, 1);

runner.getFlowFilesForRelationship(QueryDatabaseTable.REL_SUCCESS).get(0).assertAttributeEquals(QueryDatabaseTable.RESULT_ROW_COUNT,
 "3");
}

> JdbcCommon Avro Can't Process Java Short Types
> --
>
> Key: NIFI-2764
> URL: https://issues.apache.org/jira/browse/NIFI-2764
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Peter Wicks
>
> Microsoft SQL Server returns TINYINT values as Java Shorts.  Avro is unable 
> to write datums of this type and throws an exception when trying to.
> This currently breaks QueryDatabaseTable, at the very least, when querying MS 
> SQL Server with TINYINTs in the ResultSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-2764) JdbcCommon Avro Can't Process Java Short Types

2016-09-12 Thread Peter Wicks (JIRA)
Peter Wicks created NIFI-2764:
-

 Summary: JdbcCommon Avro Can't Process Java Short Types
 Key: NIFI-2764
 URL: https://issues.apache.org/jira/browse/NIFI-2764
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.0.0
Reporter: Peter Wicks


Microsoft SQL Server returns TINYINT values as Java Shorts.  Avro is unable to 
write datums of this type and throws an exception when trying to.

This currently breaks QueryDatabaseTable, at the very least, when querying MS SQL 
Server with TINYINTs in the ResultSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1008: NIFI-2763 S3 processors do not work with older S3-c...

2016-09-12 Thread baank
GitHub user baank opened a pull request:

https://github.com/apache/nifi/pull/1008

NIFI-2763 S3 processors do not work with older S3-compatible object s…

…tores

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/baank/nifi master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1008.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1008


commit d62123a3285bdb5d7566799e737a95e7c1b29995
Author: d810146 
Date:   2016-09-13T03:30:12Z

NIFI-2763 S3 processors do not work with older S3-compatible object stores




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2763) S3 processors do not work with older S3-compatible object stores

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486114#comment-15486114
 ] 

ASF GitHub Bot commented on NIFI-2763:
--

GitHub user baank opened a pull request:

https://github.com/apache/nifi/pull/1008

NIFI-2763 S3 processors do not work with older S3-compatible object s…

…tores

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/baank/nifi master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1008.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1008


commit d62123a3285bdb5d7566799e737a95e7c1b29995
Author: d810146 
Date:   2016-09-13T03:30:12Z

NIFI-2763 S3 processors do not work with older S3-compatible object stores




> S3 processors do not work with older S3-compatible object stores
> 
>
> Key: NIFI-2763
> URL: https://issues.apache.org/jira/browse/NIFI-2763
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Franco
>  Labels: easyfix
> Fix For: 1.1.0
>
>
> The 1.0.0 release of NiFi uses the AWS library for connecting to S3 and 
> S3-compatible object stores.
> By default, this library expects V4 signer support, which is not available on 
> older object stores, so NiFi is unusable with them.
> The fix is simple:
> Allow the user to specify (if they wish) the signer type the AWS library 
> should use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2763) S3 processors do not work with older S3-compatible object stores

2016-09-12 Thread Franco (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Franco updated NIFI-2763:
-
Fix Version/s: 1.1.0

> S3 processors do not work with older S3-compatible object stores
> 
>
> Key: NIFI-2763
> URL: https://issues.apache.org/jira/browse/NIFI-2763
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Franco
>  Labels: easyfix
> Fix For: 1.1.0
>
>
> The 1.0.0 release of NiFi uses the AWS library for connecting to S3 and 
> S3-compatible object stores.
> By default, this library expects V4 signer support, which is not available on 
> older object stores, so NiFi is unusable with them.
> The fix is simple:
> Allow the user to specify (if they wish) the signer type the AWS library 
> should use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-2762) S3 processors do not work with older S3-compatible object stores

2016-09-12 Thread Franco (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Franco resolved NIFI-2762.
--
Resolution: Fixed

> S3 processors do not work with older S3-compatible object stores
> 
>
> Key: NIFI-2762
> URL: https://issues.apache.org/jira/browse/NIFI-2762
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0
>Reporter: Franco
>
> The 1.0.0 release of NiFi uses the AWS library for connecting to S3 and 
> S3-compatible object stores.
> By default, this library expects V4 signer support, which is not available on 
> older object stores, so NiFi is unusable with them.
> The fix is simple:
> Allow the user to specify (if they wish) the signer type the AWS library 
> should use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-2763) S3 processors do not work with older S3-compatible object stores

2016-09-12 Thread Naden Franciscus (JIRA)
Naden Franciscus created NIFI-2763:
--

 Summary: S3 processors do not work with older S3-compatible object 
stores
 Key: NIFI-2763
 URL: https://issues.apache.org/jira/browse/NIFI-2763
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.0.0
Reporter: Naden Franciscus


The 1.0.0 release of NiFi uses the AWS library for connecting to S3 and 
S3-compatible object stores.

By default, this library expects V4 signer support, which is not available on 
older object stores, so NiFi is unusable with them.

The fix is simple:
Allow the user to specify (if they wish) the signer type the AWS library 
should use.
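
For illustration, a minimal sketch of what such an override looks like with the AWS SDK for Java v1 (the property plumbing into the NiFi processors is assumed; "S3SignerType" selects the legacy pre-V4 S3 signer):

{code}
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3Client;

public class SignerOverrideSketch {
    public static void main(String[] args) {
        // Sketch: let the user choose the signer type instead of relying on
        // the SDK's V4 default, which older object stores do not support.
        final ClientConfiguration config = new ClientConfiguration();
        config.setSignerOverride("S3SignerType");
        final AmazonS3Client s3 = new AmazonS3Client(config);
        System.out.println("Signer override: " + config.getSignerOverride());
    }
}
{code}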



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-2762) S3 processors do not work with older S3-compatible object stores

2016-09-12 Thread Naden Franciscus (JIRA)
Naden Franciscus created NIFI-2762:
--

 Summary: S3 processors do not work with older S3-compatible object 
stores
 Key: NIFI-2762
 URL: https://issues.apache.org/jira/browse/NIFI-2762
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.0.0
Reporter: Naden Franciscus


The 1.0.0 release of NiFi uses the AWS library for connecting to S3 and 
S3-compatible object stores.

By default, this library expects V4 signer support, which is not available on 
older object stores, so NiFi is unusable with them.

The fix is simple:
Allow the user to specify (if they wish) the signer type the AWS library 
should use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1971) Create a batch capable pseudo-whois ("netcat") enrichment Processor

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485721#comment-15485721
 ] 

ASF GitHub Bot commented on NIFI-1971:
--

Github user trixpan commented on the issue:

https://github.com/apache/nifi/pull/858
  
@pvillard31 can you confirm if this was generated by Split or Regex capture?

Note that the regex is a multiline match, so the user can remove the 
newline via the regular expression.


> Create a batch capable pseudo-whois ("netcat") enrichment Processor
> ---
>
> Key: NIFI-1971
> URL: https://issues.apache.org/jira/browse/NIFI-1971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andre
>Assignee: Andre
> Fix For: 1.1.0
>
>
> While the previously created QueryDNS processor can be used for low to medium 
> volume enrichment and for licensed DNS based lookups (e.g. commercial use of 
> SpamHaus), many enrichment providers prefer the use of bulk queries via a 
> pseudo-whois API (a.k.a. netcat interface).
> As documented 
> [here|https://www.shadowserver.org/wiki/pmwiki.php/Services/IP-BGP#toc6], the 
> bulk interfaces work by connecting to port 43/TCP and sending a payload like:
> {code}
> begin origin
> 4.5.4.3
> 17.112.152.32
> 208.77.188.166
> end
> {code}
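
For illustration, a minimal sketch of driving such a netcat-style interface from Java (the host name is illustrative; the payload is the one quoted above):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class BulkWhoisSketch {
    public static void main(String[] args) throws Exception {
        // Sketch: open the pseudo-whois (netcat) interface on 43/TCP, send a
        // "begin origin ... end" batch, then read the response line by line.
        try (Socket socket = new Socket("asn.shadowserver.org", 43);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("begin origin");
            out.println("4.5.4.3");
            out.println("17.112.152.32");
            out.println("208.77.188.166");
            out.println("end");
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
{code}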



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #858: NIFI-1971 - Introduce QueryWhois processor

2016-09-12 Thread trixpan
Github user trixpan commented on the issue:

https://github.com/apache/nifi/pull/858
  
@pvillard31 can you confirm if this was generated by Split or Regex capture?

Note that the regex is a multiline match, so the user can remove the 
newline via the regular expression.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1971) Create a batch capable pseudo-whois ("netcat") enrichment Processor

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485691#comment-15485691
 ] 

ASF GitHub Bot commented on NIFI-1971:
--

Github user trixpan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78476147
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -65,7 +70,16 @@
 .description("Choice between a splitter and regex matcher used 
to parse the results of the query into attribute groups")
 .expressionLanguageSupported(false)
 .required(false)
-.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor KEY_GROUP = new 
PropertyDescriptor.Builder()
+.name("KEY_GROUP")
+.displayName("Key lookup group (multiline / batch)")
+.description("When performing a batched lookup, the following 
RegEx named capture group or Column number will be used to match" +
+"the whois server response with the lookup field")
--- End diff --

good catch. addressed


> Create a batch capable pseudo-whois ("netcat") enrichment Processor
> ---
>
> Key: NIFI-1971
> URL: https://issues.apache.org/jira/browse/NIFI-1971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andre
>Assignee: Andre
> Fix For: 1.1.0
>
>
> While the previously created QueryDNS processor can be used for low to medium 
> volume enrichment and for licensed DNS based lookups (e.g. commercial use of 
> SpamHaus), many enrichment providers prefer the use of bulk queries via a 
> pseudo-whois API (a.k.a. netcat interface).
> As documented 
> [here|https://www.shadowserver.org/wiki/pmwiki.php/Services/IP-BGP#toc6], the 
> bulk interfaces work by connecting to port 43/TCP and sending a payload like:
> {code}
> begin origin
> 4.5.4.3
> 17.112.152.32
> 208.77.188.166
> end
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #858: NIFI-1971 - Introduce QueryWhois processor

2016-09-12 Thread trixpan
Github user trixpan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78476147
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -65,7 +70,16 @@
 .description("Choice between a splitter and regex matcher used 
to parse the results of the query into attribute groups")
 .expressionLanguageSupported(false)
 .required(false)
-.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor KEY_GROUP = new 
PropertyDescriptor.Builder()
+.name("KEY_GROUP")
+.displayName("Key lookup group (multiline / batch)")
+.description("When performing a batched lookup, the following 
RegEx named capture group or Column number will be used to match" +
+"the whois server response with the lookup field")
--- End diff --

good catch. addressed


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1971) Create a batch capable pseudo-whois ("netcat") enrichment Processor

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485685#comment-15485685
 ] 

ASF GitHub Bot commented on NIFI-1971:
--

Github user trixpan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78476100
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -152,10 +166,68 @@
 // Fails to NONE
 default:
 // NONE was chosen, just appending the record result as 
group0 without further splitting
-results.put("enrich." + schema + ".record" + 
String.valueOf(recordPosition) + ".group0", rawResult);
+results.put("enrich." + schema + ".record" + 
recordPosition + ".group0", rawResult);
 break;
 }
 return results;
 }
 
+/**
+ * This method returns the parsed record string in the form of
+ * a map of two strings, consisting of iteration-aware attribute
+ * names and their values
+ *
+
+ * @param  rawResult the raw query results to be parsed
+ * @param queryParser The parsing mechanism being used to parse the 
data into groups
+ * @param queryRegex The regex to be used to split the query results 
into groups. The regex MUST implement at least one named capture group "KEY" to 
be used to populate the table rows
+ * @param lookupKey The regular expression named capture group or 
number of the column of a split to be used for matching
+ * @return  Table with attribute names and values where each Table row 
uses the value of the KEY named capture group specified in @param queryRegex
+ */
+protected Table<String, String, String> parseBatchResponse(String 
rawResult, String queryParser, String queryRegex, String lookupKey, String 
schema) {
+// Note the hardcoded record0.
+//  Since iteration is done within the parser and Multimap is 
used, the record number here will always be 0.
+// Consequentially, 0 is hardcoded so that batched and non batched 
attributes follow the same naming
+// conventions
+final String recordPosition = ".record0";
+
+final Table<String, String, String> results = 
HashBasedTable.create();
+
+switch (queryParser) {
+case "Split":
+Scanner scanner = new Scanner(rawResult);
+while (scanner.hasNextLine()) {
+String line = scanner.nextLine();
+// Time to Split the results...
+String[] splitResult = line.split(queryRegex);
+
+for (int r = 0; r < splitResult.length; r++) {
+results.put(splitResult[ 
Integer.valueOf(lookupKey) - 1 ], "enrich." + schema + recordPosition + 
".group" + String.valueOf(r), splitResult[r]);
+
+}
+}
+break;
+case "RegEx":
+// prepare the regex
+Pattern p;
+// Regex is multiline. Each line should include a KEY for 
lookup
+p = Pattern.compile(queryRegex, Pattern.MULTILINE);
+
+Matcher matcher = p.matcher(rawResult);
+while (matcher.find()) {
+// Note that RegEx matches capture group 0 is usually 
broad but starting with it anyway
+// for the sake of purity
+for (int r = 0; r <= matcher.groupCount(); r++) {
+if (!StringUtils.isEmpty(matcher.group("KEY"))) {
+results.put(matcher.group(lookupKey), "enrich." + 
schema + recordPosition + ".group" + String.valueOf(r), matcher.group(r));
+} else {
+getLogger().warn("Could not find group {} while 
processing result. Ignoring row", new Object[] {lookupKey});
--- End diff --

great idea. Addressed
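
To make the KEY convention concrete, a small standalone example (the response format is illustrative) of a multiline regex whose named "KEY" capture group ties each response row back to the lookup value:

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class KeyGroupSketch {
    public static void main(String[] args) {
        final String rawResult = "701 | 4.5.4.3 | UUNET\n3561 | 17.112.152.32 | APPLE";
        // Multiline match: every line exposes a named "KEY" group identifying
        // which lookup value the row belongs to.
        final Pattern p = Pattern.compile(
                "^(?<asn>\\d+) \\| (?<KEY>\\S+) \\| (?<name>.+)$", Pattern.MULTILINE);
        final Matcher matcher = p.matcher(rawResult);
        while (matcher.find()) {
            for (int r = 0; r <= matcher.groupCount(); r++) {
                System.out.println(matcher.group("KEY") + " -> group" + r
                        + " = " + matcher.group(r));
            }
        }
    }
}
{code}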


> Create a batch capable pseudo-whois ("netcat") enrichment Processor
> ---
>
> Key: NIFI-1971
> URL: https://issues.apache.org/jira/browse/NIFI-1971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andre
>Assignee: Andre
> Fix For: 1.1.0
>
>
> While the previously created QueryDNS processor can be used for low to medium 
> volume enrichment and for licensed DNS based lookups (e.g. commercial use of 
> SpamHaus), many enrichment providers prefer the use of bulk queries via a 
> pseudo-whois API (a.k.a. netcat interface).
> As documented 
> [here|https://www.shadowserver.org/wiki/pmwiki.php/Services/IP-BGP#toc6], the 
> bulk interfaces work by connecting to port 43/TCP and sending a payload like:
> {code}
> begin origin
> 4.5.4.3
> 17.112.152.32
> 208.77.188.166
> end
> {code}

[GitHub] nifi pull request #858: NIFI-1971 - Introduce QueryWhois processor

2016-09-12 Thread trixpan
Github user trixpan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78476100
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -152,10 +166,68 @@
 // Fails to NONE
 default:
 // NONE was chosen, just appending the record result as 
group0 without further splitting
-results.put("enrich." + schema + ".record" + 
String.valueOf(recordPosition) + ".group0", rawResult);
+results.put("enrich." + schema + ".record" + 
recordPosition + ".group0", rawResult);
 break;
 }
 return results;
 }
 
+/**
+ * This method returns the parsed record string in the form of
+ * a map of two strings, consisting of iteration-aware attribute
+ * names and their values
+ *
+
+ * @param  rawResult the raw query results to be parsed
+ * @param queryParser The parsing mechanism being used to parse the 
data into groups
+ * @param queryRegex The regex to be used to split the query results 
into groups. The regex MUST implement at least one named capture group "KEY" to 
be used to populate the table rows
+ * @param lookupKey The regular expression named capture group or 
number of the column of a split to be used for matching
+ * @return  Table with attribute names and values where each Table row 
uses the value of the KEY named capture group specified in @param queryRegex
+ */
+protected Table<String, String, String> parseBatchResponse(String 
rawResult, String queryParser, String queryRegex, String lookupKey, String 
schema) {
+// Note the hardcoded record0.
+//  Since iteration is done within the parser and Multimap is 
used, the record number here will always be 0.
+// Consequentially, 0 is hardcoded so that batched and non batched 
attributes follow the same naming
+// conventions
+final String recordPosition = ".record0";
+
+final Table<String, String, String> results = 
HashBasedTable.create();
+
+switch (queryParser) {
+case "Split":
+Scanner scanner = new Scanner(rawResult);
+while (scanner.hasNextLine()) {
+String line = scanner.nextLine();
+// Time to Split the results...
+String[] splitResult = line.split(queryRegex);
+
+for (int r = 0; r < splitResult.length; r++) {
+results.put(splitResult[ 
Integer.valueOf(lookupKey) - 1 ], "enrich." + schema + recordPosition + 
".group" + String.valueOf(r), splitResult[r]);
+
+}
+}
+break;
+case "RegEx":
+// prepare the regex
+Pattern p;
+// Regex is multiline. Each line should include a KEY for 
lookup
+p = Pattern.compile(queryRegex, Pattern.MULTILINE);
+
+Matcher matcher = p.matcher(rawResult);
+while (matcher.find()) {
+// Note that RegEx matches capture group 0 is usually 
broad but starting with it anyway
+// for the sake of purity
+for (int r = 0; r <= matcher.groupCount(); r++) {
+if (!StringUtils.isEmpty(matcher.group("KEY"))) {
+results.put(matcher.group(lookupKey), "enrich." + 
schema + recordPosition + ".group" + String.valueOf(r), matcher.group(r));
+} else {
+getLogger().warn("Could not find group {} while 
processing result. Ignoring row", new Object[] {lookupKey});
--- End diff --

great idea. Addressed


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1971) Create a batch capable pseudo-whois ("netcat") enrichment Processor

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485671#comment-15485671
 ] 

ASF GitHub Bot commented on NIFI-1971:
--

Github user trixpan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78475823
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -152,10 +166,68 @@
 // Fails to NONE
 default:
 // NONE was chosen, just appending the record result as 
group0 without further splitting
-results.put("enrich." + schema + ".record" + 
String.valueOf(recordPosition) + ".group0", rawResult);
+results.put("enrich." + schema + ".record" + 
recordPosition + ".group0", rawResult);
 break;
 }
 return results;
 }
 
+/**
+ * This method returns the parsed record string in the form of
+ * a map of two strings, consisting of iteration-aware attribute
+ * names and their values
+ *
+
+ * @param  rawResult the raw query results to be parsed
+ * @param queryParser The parsing mechanism being used to parse the 
data into groups
+ * @param queryRegex The regex to be used to split the query results 
into groups. The regex MUST implement at least one named capture group "KEY" to 
be used to populate the table rows
+ * @param lookupKey The regular expression named capture group or 
number of the column of a split to be used for matching
+ * @return  Table with attribute names and values where each Table row 
uses the value of the KEY named capture group specified in @param queryRegex
+ */
+protected Table<String, String, String> parseBatchResponse(String 
rawResult, String queryParser, String queryRegex, String lookupKey, String 
schema) {
+// Note the hardcoded record0.
+//  Since iteration is done within the parser and Multimap is 
used, the record number here will always be 0.
+// Consequentially, 0 is hardcoded so that batched and non batched 
attributes follow the same naming
+// conventions
+final String recordPosition = ".record0";
+
+final Table<String, String, String> results = 
HashBasedTable.create();
+
+switch (queryParser) {
+case "Split":
+Scanner scanner = new Scanner(rawResult);
+while (scanner.hasNextLine()) {
+String line = scanner.nextLine();
+// Time to Split the results...
+String[] splitResult = line.split(queryRegex);
+
+for (int r = 0; r < splitResult.length; r++) {
+results.put(splitResult[ 
Integer.valueOf(lookupKey) - 1 ], "enrich." + schema + recordPosition + 
".group" + String.valueOf(r), splitResult[r]);
+
+}
+}
+break;
+case "RegEx":
+// prepare the regex
+Pattern p;
+// Regex is multiline. Each line should include a KEY for 
lookup
+p = Pattern.compile(queryRegex, Pattern.MULTILINE);
+
+Matcher matcher = p.matcher(rawResult);
+while (matcher.find()) {
+// Note that RegEx matches capture group 0 is usually 
broad but starting with it anyway
+// for the sake of purity
+for (int r = 0; r <= matcher.groupCount(); r++) {
+if (!StringUtils.isEmpty(matcher.group("KEY"))) {
--- End diff --

Pierre, 

This KEY reference should have been removed so I guess I stuffed up 
something on my git... I rewrote the code to unify on regex capture group 
numbers.

I also added code to catch IndexOutOfBoundsException (the equivalent of the 
IllegalArgumentException you mentioned above).


> Create a batch capable pseudo-whois ("netcat") enrichment Processor
> ---
>
> Key: NIFI-1971
> URL: https://issues.apache.org/jira/browse/NIFI-1971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andre
>Assignee: Andre
> Fix For: 1.1.0
>
>
> While the previously created QueryDNS processor can be used for low to medium 
> volume enrichment and for licensed DNS based lookups (e.g. commercial use of 
> SpamHaus), many enrichment providers prefer the use of bulk queries via a 
> pseudo-whois API (a.k.a. netcat interface).
> As documented 
> [here|https://www.shadowserver.org/wiki/pmwiki.php/Services/IP-BGP#toc6], the 
> bulk interfaces work by connecting to port 43/TCP and sending a payload like:
> {code}
> begin origin
> 4.5.4.3
> 17.112.152.32
> 208.77.188.166
> end
> {code}

[GitHub] nifi pull request #858: NIFI-1971 - Introduce QueryWhois processor

2016-09-12 Thread trixpan
Github user trixpan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78475823
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -152,10 +166,68 @@
 // Fails to NONE
 default:
 // NONE was chosen, just appending the record result as 
group0 without further splitting
-results.put("enrich." + schema + ".record" + 
String.valueOf(recordPosition) + ".group0", rawResult);
+results.put("enrich." + schema + ".record" + 
recordPosition + ".group0", rawResult);
 break;
 }
 return results;
 }
 
+/**
+ * This method returns the parsed record string in the form of
+ * a map of two strings, consisting of iteration-aware attribute
+ * names and their values
+ *
+
+ * @param  rawResult the raw query results to be parsed
+ * @param queryParser The parsing mechanism being used to parse the 
data into groups
+ * @param queryRegex The regex to be used to split the query results 
into groups. The regex MUST implement at least one named capture group "KEY" to 
be used to populate the table rows
+ * @param lookupKey The regular expression named capture group or 
number of the column of a split to be used for matching
+ * @return  Table with attribute names and values where each Table row 
uses the value of the KEY named capture group specified in @param queryRegex
+ */
+protected Table<String, String, String> parseBatchResponse(String 
rawResult, String queryParser, String queryRegex, String lookupKey, String 
schema) {
+// Note the hardcoded record0.
+//  Since iteration is done within the parser and Multimap is 
used, the record number here will always be 0.
+// Consequentially, 0 is hardcoded so that batched and non batched 
attributes follow the same naming
+// conventions
+final String recordPosition = ".record0";
+
+final Table<String, String, String> results = 
HashBasedTable.create();
+
+switch (queryParser) {
+case "Split":
+Scanner scanner = new Scanner(rawResult);
+while (scanner.hasNextLine()) {
+String line = scanner.nextLine();
+// Time to Split the results...
+String[] splitResult = line.split(queryRegex);
+
+for (int r = 0; r < splitResult.length; r++) {
+results.put(splitResult[ 
Integer.valueOf(lookupKey) - 1 ], "enrich." + schema + recordPosition + 
".group" + String.valueOf(r), splitResult[r]);
+
+}
+}
+break;
+case "RegEx":
+// prepare the regex
+Pattern p;
+// Regex is multiline. Each line should include a KEY for 
lookup
+p = Pattern.compile(queryRegex, Pattern.MULTILINE);
+
+Matcher matcher = p.matcher(rawResult);
+while (matcher.find()) {
+// Note that RegEx matches capture group 0 is usually 
broad but starting with it anyway
+// for the sake of purity
+for (int r = 0; r <= matcher.groupCount(); r++) {
+if (!StringUtils.isEmpty(matcher.group("KEY"))) {
--- End diff --

Pierre, 

This KEY reference should have been removed so I guess I stuffed up 
something on my git... I rewrote the code to unify on regex capture group 
numbers.

I also added code to catch IndexOutOfBoundsException (the equivalent of the 
IllegalArgumentException you mentioned above).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2266) GetHTTP and PutHTTP use hard-coded TLS protocol version

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485276#comment-15485276
 ] 

ASF GitHub Bot commented on NIFI-2266:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/999
  
Hey @alopresto,

I have this unit test failing:

Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.999 sec 
<<< FAILURE! - in org.apache.nifi.processors.standard.TestGetHTTPGroovy

testGetHTTPShouldConnectToServerWithTLSv1(org.apache.nifi.processors.standard.TestGetHTTPGroovy)
  Time elapsed: 0.094 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertTransferCount(StandardProcessorTestRunner.java:318)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertAllFlowFilesTransferred(StandardProcessorTestRunner.java:313)
at 
org.apache.nifi.util.TestRunner$assertAllFlowFilesTransferred$5.call(Unknown 
Source)
at 
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at 
org.apache.nifi.processors.standard.TestGetHTTPGroovy$_testGetHTTPShouldConnectToServerWithTLSv1_closure7.doCall(TestGetHTTPGroovy.groovy:331)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at 
org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019)
at groovy.lang.Closure.call(Closure.java:426)
at groovy.lang.Closure.call(Closure.java:442)
at 
org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2030)
at 
org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2015)
at 
org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2056)
at org.codehaus.groovy.runtime.dgm$162.invoke(Unknown Source)
at 
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:274)
at 
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56)
at 
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at 
org.apache.nifi.processors.standard.TestGetHTTPGroovy.testGetHTTPShouldConnectToServerWithTLSv1(TestGetHTTPGroovy.groovy:324)


And the logs I have when running only this test in Eclipse:

[main] INFO org.eclipse.jetty.util.log - Logging initialized @1147ms
[main] INFO org.apache.nifi.processors.standard.TestGetHTTPGroovy - Created 
server with supported protocols: [TLSv1, TLSv1.1, TLSv1.2]
[main] INFO org.apache.nifi.processors.standard.TestGetHTTPGroovy - JCE 
unlimited strength installed: false
[main] INFO org.apache.nifi.processors.standard.TestGetHTTPGroovy - 
Supported client cipher suites: [...]
[main] INFO org.apache.nifi.processors.standard.TestGetHTTPGroovy - Created 
server with supported protocols: [TLSv1]
[main] INFO org.eclipse.jetty.server.Server - jetty-9.3.9.v20160517
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started 
o.e.j.s.ServletContextHandler@1a914089{/,file:///.../nifi/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/TestGetHTTP/,AVAILABLE}
[main] INFO org.eclipse.jetty.util.ssl.SslContextFactory - 
x509=X509@2b999ee8(localhost,h=[],w=[]) for 
SslContextFactory@31ab1e67(file:///.../nifi/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/localhost-ks.jks,null)
[main] INFO org.eclipse.jetty.util.ssl.SslContextFactory - 
x509=X509@29bbc391(mykey,h=[],w=[]) for 
SslContextFactory@31ab1e67(file:///.../nifi/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/localhost-ks.jks,null)
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started 
ServerConnector@5bb8e6fc{SSL,[ssl, http/1.1]}{localhost:8456}
[main] INFO org.eclipse.jetty.server.Server - Started @2219ms

[GitHub] nifi issue #999: NIFI-2266 Enabled TLSv1.1 and TLSv1.2 protocols for GetHTTP...

2016-09-12 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/999
  
Hey @alopresto,

I have this unit test failing:

Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.999 sec 
<<< FAILURE! - in org.apache.nifi.processors.standard.TestGetHTTPGroovy

testGetHTTPShouldConnectToServerWithTLSv1(org.apache.nifi.processors.standard.TestGetHTTPGroovy)
  Time elapsed: 0.094 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertTransferCount(StandardProcessorTestRunner.java:318)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertAllFlowFilesTransferred(StandardProcessorTestRunner.java:313)
at 
org.apache.nifi.util.TestRunner$assertAllFlowFilesTransferred$5.call(Unknown 
Source)
at 
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at 
org.apache.nifi.processors.standard.TestGetHTTPGroovy$_testGetHTTPShouldConnectToServerWithTLSv1_closure7.doCall(TestGetHTTPGroovy.groovy:331)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at 
org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019)
at groovy.lang.Closure.call(Closure.java:426)
at groovy.lang.Closure.call(Closure.java:442)
at 
org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2030)
at 
org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2015)
at 
org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2056)
at org.codehaus.groovy.runtime.dgm$162.invoke(Unknown Source)
at 
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:274)
at 
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56)
at 
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at 
org.apache.nifi.processors.standard.TestGetHTTPGroovy.testGetHTTPShouldConnectToServerWithTLSv1(TestGetHTTPGroovy.groovy:324)


And the logs I have when running only this test in Eclipse:

[main] INFO org.eclipse.jetty.util.log - Logging initialized @1147ms
[main] INFO org.apache.nifi.processors.standard.TestGetHTTPGroovy - Created 
server with supported protocols: [TLSv1, TLSv1.1, TLSv1.2]
[main] INFO org.apache.nifi.processors.standard.TestGetHTTPGroovy - JCE 
unlimited strength installed: false
[main] INFO org.apache.nifi.processors.standard.TestGetHTTPGroovy - 
Supported client cipher suites: [...]
[main] INFO org.apache.nifi.processors.standard.TestGetHTTPGroovy - Created 
server with supported protocols: [TLSv1]
[main] INFO org.eclipse.jetty.server.Server - jetty-9.3.9.v20160517
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started 
o.e.j.s.ServletContextHandler@1a914089{/,file:///.../nifi/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/TestGetHTTP/,AVAILABLE}
[main] INFO org.eclipse.jetty.util.ssl.SslContextFactory - 
x509=X509@2b999ee8(localhost,h=[],w=[]) for 
SslContextFactory@31ab1e67(file:///.../nifi/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/localhost-ks.jks,null)
[main] INFO org.eclipse.jetty.util.ssl.SslContextFactory - 
x509=X509@29bbc391(mykey,h=[],w=[]) for 
SslContextFactory@31ab1e67(file:///.../nifi/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/localhost-ks.jks,null)
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started 
ServerConnector@5bb8e6fc{SSL,[ssl, http/1.1]}{localhost:8456}
[main] INFO org.eclipse.jetty.server.Server - Started @2219ms
 

[jira] [Updated] (NIFI-2752) Correct ReplaceText default pattern and unit tests

2016-09-12 Thread Joe Skora (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Skora updated NIFI-2752:

Status: Patch Available  (was: In Progress)

https://github.com/apache/nifi/pull/1007

> Correct ReplaceText default pattern and unit tests
> --
>
> Key: NIFI-2752
> URL: https://issues.apache.org/jira/browse/NIFI-2752
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0, 0.8.0, 0.7.1
>Reporter: Joe Skora
>Assignee: Joe Skora
>
> [{{ReplaceText.DEFAULT_REGEX}}|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java#L87]
>  is defined as "(?s:\^.\*$)", which is valid PCRE but must be expressed as 
> "(?s)(\^.\*$)" in Java.
> The Java [Pattern class|https://docs.oracle.com/javase/8/docs/api/index.html] 
> specifies that patterns like "(?idmsux-idmsux:X)" are _non-capturing_, so 
> anything but the default pattern and replacement value result in empty 
> output.  This isn't caught by unit tests because the code short circuits if 
> the default pattern and replacement are found in 
> [ReplaceText.onTrigger()|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java#L217].
>   This hides the capture group problem from the unit tests and the default 
> processor configuration, but causes the processor to produce empty output if 
> using non-trivial patterns and replacements.
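> For readers unfamiliar with the distinction, here is a minimal standalone Java 
> sketch (illustrative only, not taken from the NiFi codebase) of why the two 
> forms behave differently:
> {code}
> import java.util.regex.Matcher;
> import java.util.regex.Pattern;
>
> public class NonCapturingDemo {
>     public static void main(String[] args) {
>         // "(?s:X)" scopes the DOTALL flag to X but captures nothing.
>         Matcher nonCapturing = Pattern.compile("(?s:^.*$)").matcher("some content");
>         nonCapturing.matches();
>         System.out.println(nonCapturing.groupCount()); // 0 -> a "$1" replacement has nothing to reference
>
>         // "(?s)(X)" turns on DOTALL and captures X as group 1.
>         Matcher capturing = Pattern.compile("(?s)(^.*$)").matcher("some content");
>         capturing.matches();
>         System.out.println(capturing.groupCount());    // 1
>         System.out.println(capturing.group(1));        // "some content"
>     }
> }
> {code}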



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-2761) BootstrapCodec can throw exceptions with unintended message

2016-09-12 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-2761:
--
Summary: BootstrapCodec can throw exceptions with unintended message  (was: 
BootstrapCodec can throw cryptic exceptions)

> BootstrapCodec can throw exceptions with unintended message
> ---
>
> Key: NIFI-2761
> URL: https://issues.apache.org/jira/browse/NIFI-2761
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Jeff Storck
>Priority: Minor
>
> In BootstrapCodec.java, when the following ternary expression gets 
> evaluated, the null check applies to the concatenated string rather than to 
> getMessage(), so it will never be null.  This causes the IOException message 
> to always be "Details: " followed by the InvalidCommandException's toString result.
> {code}
> try {
> processRequest(cmd, args);
> } catch (final InvalidCommandException ice) {
> throw new IOException("Received invalid command from NiFi: " + line + " : 
> " + ice.getMessage() == null ? "" : "Details: " + ice.toString());
> }
> {code}
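> A corrected form would parenthesize the null check so it applies to the message 
> itself rather than to the concatenated string (a sketch only; the final fix may 
> differ):
> {code}
> try {
>     processRequest(cmd, args);
> } catch (final InvalidCommandException ice) {
>     // Parenthesized: '==' now tests ice.getMessage(), not the concatenation.
>     final String details = (ice.getMessage() == null) ? "" : "Details: " + ice.toString();
>     throw new IOException("Received invalid command from NiFi: " + line + " : " + details);
> }
> {code}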



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-2603) Bringing Some UI Color Back

2016-09-12 Thread Peter Wicks (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485145#comment-15485145
 ] 

Peter Wicks commented on NIFI-2603:
---

I like your train of thought.
Any color suggestions?

> Bringing Some UI Color Back
> ---
>
> Key: NIFI-2603
> URL: https://issues.apache.org/jira/browse/NIFI-2603
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Peter Wicks
>Priority: Minor
>
> In the new 1.0 UI all of the colors associated with status (except the orange 
> triangle) are gone; replaced with a dark gray color.
> I propose bringing back color.  The screenshots are in the format of before 
> on the top and after on the bottom, except where labeled in the picture itself:
>  - Top Status Menu: https://goo.gl/photos/se1JnvhRwU7N4Fap7
>  - Process Group: https://goo.gl/photos/dqjG4KvC6xqxQfgT7
>  - Processes (Running/Stopped/Invalid): 
> https://goo.gl/photos/dSS8vgE2RkrXtc77A
>  - Operate Play/Stop buttons (only on mouse hover): 
> https://goo.gl/photos/Am5SUEEn7G9RjmMe6
>  - Processor/Processor Group Context Menu: 
> https://goo.gl/photos/Jq3qFg4ezaN91qms5
> This is not a "100% done, I've covered everything" before/after list.  I know 
> I need to do the NiFi summary page as well, at a minimum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1170) TailFile "File to Tail" property should support Wildcards

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485049#comment-15485049
 ] 

ASF GitHub Bot commented on NIFI-1170:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/980#discussion_r78438361
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java
 ---
@@ -117,31 +173,78 @@
 .allowableValues(LOCATION_LOCAL, LOCATION_REMOTE)
 .defaultValue(LOCATION_LOCAL.getValue())
 .build();
+
 static final PropertyDescriptor START_POSITION = new 
PropertyDescriptor.Builder()
 .name("Initial Start Position")
-.description("When the Processor first begins to tail data, 
this property specifies where the Processor should begin reading data. Once 
data has been ingested from the file, "
+.description("When the Processor first begins to tail data, 
this property specifies where the Processor should begin reading data. Once 
data has been ingested from a file, "
 + "the Processor will continue from the last point 
from which it has received data.")
 .allowableValues(START_BEGINNING_OF_TIME, START_CURRENT_FILE, 
START_CURRENT_TIME)
 .defaultValue(START_CURRENT_FILE.getValue())
 .required(true)
 .build();
 
+static final PropertyDescriptor RECURSIVE = new 
PropertyDescriptor.Builder()
+.name("tailfile-recursive-lookup")
+.displayName("Recursive lookup")
+.description("When using Multiple files mode, this property 
defines if files must be listed recursively or not"
++ " in the base directory.")
+.allowableValues("true", "false")
+.defaultValue("true")
+.required(true)
+.build();
+
+static final PropertyDescriptor ROLLING_STRATEGY = new 
PropertyDescriptor.Builder()
+.name("tailfile-rolling-strategy")
+.displayName("Rolling Strategy")
+.description("Specifies if the files to tail have a fixed name 
or not.")
+.required(true)
+.allowableValues(FIXED_NAME, CHANGING_NAME)
+.defaultValue(FIXED_NAME.getValue())
+.build();
+
+static final PropertyDescriptor LOOKUP_FREQUENCY = new 
PropertyDescriptor.Builder()
+.name("tailfile-lookup-frequency")
+.displayName("Lookup frequency")
+.description("Only used in Multiple files mode and Changing 
name rolling strategy, it specifies the minimum "
++ "duration the processor will wait before listing 
again the files to tail.")
+.required(false)
+.addValidator(StandardValidators.TIME_PERIOD_VALIDATOR)
+.defaultValue("10 minutes")
--- End diff --

You are definitely right, it really depends on the use case and I am in no 
position to assume that those values are a good fit in most situations. I 
removed the default values and I'll assume that additional details in the 
documentation will give a good sense of the role of those values.


> TailFile "File to Tail" property should support Wildcards
> -
>
> Key: NIFI-1170
> URL: https://issues.apache.org/jira/browse/NIFI-1170
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.4.0
>Reporter: Andre
>
> Because of challenges around log rotation for high-volume syslog and app 
> producers, it is customary for logging platform developers to promote 
> variable-based file names such as DynaFiles (rsyslog) or Macros (syslog-ng) as 
> alternatives to sending SIGHUPs to the syslog daemon upon every 
> file rotation.
> (To a certain extent, even NiFi has similar patterns, for example 
> when one uses Expression Language to set the PutHDFS destination file.)
> The current TailFile strategy suggests rotation patterns like:
> {code}
> log_folder/app.log
> log_folder/app.log.1
> log_folder/app.log.2
> log_folder/app.log.3
> {code}
> It is possible to fool the system into accepting wildcards by simply using a 
> strategy like:
> {code}
> log_folder/test1
> log_folder/server1
> log_folder/server2
> log_folder/server3
> {code}
> And configure *Rolling Filename Pattern* to *, but it feels like a hack 
> rather than catering for an increasingly prevalent use case 
> (DynaFile/macros/etc.).
> It would be great if, instead, TailFile had the ability to use wildcards in the 
> File to Tail property.



--
This message 

[GitHub] nifi pull request #980: NIFI-1170 - Improved TailFile processor to support m...

2016-09-12 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/980#discussion_r78438361
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java
 ---
@@ -117,31 +173,78 @@
 .allowableValues(LOCATION_LOCAL, LOCATION_REMOTE)
 .defaultValue(LOCATION_LOCAL.getValue())
 .build();
+
 static final PropertyDescriptor START_POSITION = new 
PropertyDescriptor.Builder()
 .name("Initial Start Position")
-.description("When the Processor first begins to tail data, 
this property specifies where the Processor should begin reading data. Once 
data has been ingested from the file, "
+.description("When the Processor first begins to tail data, 
this property specifies where the Processor should begin reading data. Once 
data has been ingested from a file, "
 + "the Processor will continue from the last point 
from which it has received data.")
 .allowableValues(START_BEGINNING_OF_TIME, START_CURRENT_FILE, 
START_CURRENT_TIME)
 .defaultValue(START_CURRENT_FILE.getValue())
 .required(true)
 .build();
 
+static final PropertyDescriptor RECURSIVE = new 
PropertyDescriptor.Builder()
+.name("tailfile-recursive-lookup")
+.displayName("Recursive lookup")
+.description("When using Multiple files mode, this property 
defines if files must be listed recursively or not"
++ " in the base directory.")
+.allowableValues("true", "false")
+.defaultValue("true")
+.required(true)
+.build();
+
+static final PropertyDescriptor ROLLING_STRATEGY = new 
PropertyDescriptor.Builder()
+.name("tailfile-rolling-strategy")
+.displayName("Rolling Strategy")
+.description("Specifies if the files to tail have a fixed name 
or not.")
+.required(true)
+.allowableValues(FIXED_NAME, CHANGING_NAME)
+.defaultValue(FIXED_NAME.getValue())
+.build();
+
+static final PropertyDescriptor LOOKUP_FREQUENCY = new 
PropertyDescriptor.Builder()
+.name("tailfile-lookup-frequency")
+.displayName("Lookup frequency")
+.description("Only used in Multiple files mode and Changing 
name rolling strategy, it specifies the minimum "
++ "duration the processor will wait before listing 
again the files to tail.")
+.required(false)
+.addValidator(StandardValidators.TIME_PERIOD_VALIDATOR)
+.defaultValue("10 minutes")
--- End diff --

You are definitely right, it really depends on the use case and I am in no 
position to assume that those values are a good fit in most situations. I 
removed the default values and I'll assume that additional details in the 
documentation will give a good sense of the role of those values.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2752) Correct ReplaceText default pattern and unit tests

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485043#comment-15485043
 ] 

ASF GitHub Bot commented on NIFI-2752:
--

GitHub user jskora opened a pull request:

https://github.com/apache/nifi/pull/1007

NIFI-2752 Correct ReplaceText default pattern and unit tests

 * Corrected the DEFAULT_REGEX pattern.
 * Added tests to isolate regex capture group problem and verify corrected 
functionality.
 * Removed short circuit logic that masked configuration errors and created 
inconsistent processor behavior.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jskora/nifi NIFI-2752

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1007.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1007






> Correct ReplaceText default pattern and unit tests
> --
>
> Key: NIFI-2752
> URL: https://issues.apache.org/jira/browse/NIFI-2752
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0, 0.8.0, 0.7.1
>Reporter: Joe Skora
>Assignee: Joe Skora
>
> [{{ReplaceText.DEFAULT_REGEX}}|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java#L87]
>  is defined as "(?s:\^.\*$)", which is valid PCRE but must be expressed as 
> "(?s)(\^.\*$)" in Java.
> The Java [Pattern class|https://docs.oracle.com/javase/8/docs/api/index.html] 
> specifies that patterns like "(?idmsux-idmsux:X)" are _non-capturing_, so 
> anything but the default pattern and replacement value result in empty 
> output.  This isn't caught by unit tests because the code short circuits if 
> the default pattern and replacement are found in 
> [ReplaceText.onTrigger()|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ReplaceText.java#L217].
>   This hides the capture group problem from the unit tests and the default 
> processor configuration, but causes the processor to produce empty output if 
> using non-trivial patterns and replacements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1007: NIFI-2752 Correct ReplaceText default pattern and u...

2016-09-12 Thread jskora
GitHub user jskora opened a pull request:

https://github.com/apache/nifi/pull/1007

NIFI-2752 Correct ReplaceText default pattern and unit tests

 * Corrected the DEFAULT_REGEX pattern.
 * Added tests to isolate regex capture group problem and verify corrected 
functionality.
 * Removed short circuit logic that masked configuration errors and created 
inconsistent processor behavior.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jskora/nifi NIFI-2752

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1007.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1007






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1170) TailFile "File to Tail" property should support Wildcards

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485040#comment-15485040
 ] 

ASF GitHub Bot commented on NIFI-1170:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/980#discussion_r78437746
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java
 ---
@@ -245,58 +490,82 @@ private void recoverState(final ProcessContext 
context, final Map 
context.getProperty(LOOKUP_FREQUENCY).asTimePeriod(TimeUnit.MILLISECONDS)) {
+try {
+recoverState(context);
+} catch (IOException e) {
+getLogger().warn("Exception raised while looking up 
for new files", e);
--- End diff --

Good catch.


> TailFile "File to Tail" property should support Wildcards
> -
>
> Key: NIFI-1170
> URL: https://issues.apache.org/jira/browse/NIFI-1170
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.4.0
>Reporter: Andre
>
> Because of challenges around log rotation for high-volume syslog and app 
> producers, it is customary for logging platform developers to promote 
> variable-based file names such as DynaFiles (rsyslog) or Macros (syslog-ng) as 
> alternatives to sending SIGHUPs to the syslog daemon upon every 
> file rotation.
> (To a certain extent, even NiFi has similar patterns, for example 
> when one uses Expression Language to set the PutHDFS destination file.)
> The current TailFile strategy suggests rotation patterns like:
> {code}
> 

[GitHub] nifi pull request #980: NIFI-1170 - Improved TailFile processor to support m...

2016-09-12 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/980#discussion_r78437746
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java
 ---
@@ -245,58 +490,82 @@ private void recoverState(final ProcessContext 
context, final Map 
context.getProperty(LOOKUP_FREQUENCY).asTimePeriod(TimeUnit.MILLISECONDS)) {
+try {
+recoverState(context);
+} catch (IOException e) {
+getLogger().warn("Exception raised while looking up 
for new files", e);
--- End diff --

Good catch.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1971) Create a batch capable pseudo-whois ("netcat") enrichment Processor

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485000#comment-15485000
 ] 

ASF GitHub Bot commented on NIFI-1971:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/858
  
That's a detail, but regarding your last fix, I now have the following 
result:

--
Standard FlowFile Attributes
Key: 'entryDate'
Value: 'Mon Sep 12 21:05:00 CEST 2016'
Key: 'lineageStartDate'
Value: 'Mon Sep 12 21:05:00 CEST 2016'
Key: 'fileSize'
Value: '0'
FlowFile Attribute Map Content
Key: 'enrich.whois.record0.group0'
Value: '9394 | 123.69.0.0/16 | CTTNET | CN | chinatietong.com | China 
Tietong Telecommunications Corporation
'
Key: 'filename'
Value: '2645792035303317'
Key: 'path'
Value: './'
Key: 'src.ip'
Value: '123.69.123.40'
Key: 'uuid'
Value: '25d3744c-b5d4-4147-9cc5-47f4f47ff046'
--


I'd try to remove the unnecessary carriage return at the end of the value.


> Create a batch capable pseudo-whois ("netcat") enrichment Processor
> ---
>
> Key: NIFI-1971
> URL: https://issues.apache.org/jira/browse/NIFI-1971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andre
>Assignee: Andre
> Fix For: 1.1.0
>
>
> While the QueryDNS processor can be used for low to medium volume enrichment and 
> for licensed DNS-based lookups (e.g. commercial use of SpamHaus), many 
> enrichment providers prefer the use of bulk queries using a pseudo-whois API 
> (a.k.a. netcat interface).
> As documented 
> [here|https://www.shadowserver.org/wiki/pmwiki.php/Services/IP-BGP#toc6], the 
> bulk interfaces work by connecting to port 43/TCP and sending a payload like:
> {code}
> begin origin
> 4.5.4.3
> 17.112.152.32
> 208.77.188.166
> end
> {code}
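> To make the protocol concrete, here is a minimal Java sketch of such a bulk 
> query over a raw TCP socket ("whois.example.org" is a placeholder host, not 
> one named in this issue):
> {code}
> import java.io.BufferedReader;
> import java.io.InputStreamReader;
> import java.io.PrintWriter;
> import java.net.Socket;
> import java.nio.charset.StandardCharsets;
>
> public class BulkWhoisSketch {
>     public static void main(String[] args) throws Exception {
>         try (Socket socket = new Socket("whois.example.org", 43);
>              PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
>              BufferedReader in = new BufferedReader(
>                      new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {
>             // Send the whole batch in one payload, then read the bulk response.
>             out.println("begin origin");
>             out.println("4.5.4.3");
>             out.println("17.112.152.32");
>             out.println("208.77.188.166");
>             out.println("end");
>             String line;
>             while ((line = in.readLine()) != null) {
>                 System.out.println(line); // one pipe-delimited record per queried IP
>             }
>         }
>     }
> }
> {code}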



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #858: NIFI-1971 - Introduce QueryWhois processor

2016-09-12 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/858
  
That's a detail, but regarding your last fix, I now have the following 
result:

--
Standard FlowFile Attributes
Key: 'entryDate'
Value: 'Mon Sep 12 21:05:00 CEST 2016'
Key: 'lineageStartDate'
Value: 'Mon Sep 12 21:05:00 CEST 2016'
Key: 'fileSize'
Value: '0'
FlowFile Attribute Map Content
Key: 'enrich.whois.record0.group0'
Value: '9394 | 123.69.0.0/16 | CTTNET | CN | chinatietong.com | China 
Tietong Telecommunications Corporation
'
Key: 'filename'
Value: '2645792035303317'
Key: 'path'
Value: './'
Key: 'src.ip'
Value: '123.69.123.40'
Key: 'uuid'
Value: '25d3744c-b5d4-4147-9cc5-47f4f47ff046'
--


I'd try to remove the unnecessary carriage return at the end of the value.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1971) Create a batch capable pseudo-whois ("netcat") enrichment Processor

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484990#comment-15484990
 ] 

ASF GitHub Bot commented on NIFI-1971:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78434063
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -65,7 +70,16 @@
 .description("Choice between a splitter and regex matcher used 
to parse the results of the query into attribute groups")
 .expressionLanguageSupported(false)
 .required(false)
-.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor KEY_GROUP = new 
PropertyDescriptor.Builder()
+.name("KEY_GROUP")
+.displayName("Key lookup group (multiline / batch)")
+.description("When performing a batched lookup, the following 
RegEx named capture group or Column number will be used to match" +
+"the whois server response with the lookup field")
--- End diff --

white space missing


> Create a batch capable pseudo-whois ("netcat") enrichment Processor
> ---
>
> Key: NIFI-1971
> URL: https://issues.apache.org/jira/browse/NIFI-1971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andre
>Assignee: Andre
> Fix For: 1.1.0
>
>
> While the QueryDNS processor can be used for low to medium volume enrichment and 
> for licensed DNS-based lookups (e.g. commercial use of SpamHaus), many 
> enrichment providers prefer the use of bulk queries using a pseudo-whois API 
> (a.k.a. netcat interface).
> As documented 
> [here|https://www.shadowserver.org/wiki/pmwiki.php/Services/IP-BGP#toc6], the 
> bulk interfaces work by connecting to port 43/TCP and sending a payload like:
> {code}
> begin origin
> 4.5.4.3
> 17.112.152.32
> 208.77.188.166
> end
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #858: NIFI-1971 - Introduce QueryWhois processor

2016-09-12 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78434063
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -65,7 +70,16 @@
 .description("Choice between a splitter and regex matcher used 
to parse the results of the query into attribute groups")
 .expressionLanguageSupported(false)
 .required(false)
-.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor KEY_GROUP = new 
PropertyDescriptor.Builder()
+.name("KEY_GROUP")
+.displayName("Key lookup group (multiline / batch)")
+.description("When performing a batched lookup, the following 
RegEx named capture group or Column number will be used to match" +
+"the whois server response with the lookup field")
--- End diff --

white space missing


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1971) Create a batch capable pseudo-whois ("netcat") enrichment Processor

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484988#comment-15484988
 ] 

ASF GitHub Bot commented on NIFI-1971:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78433995
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -152,10 +166,68 @@
 // Fails to NONE
 default:
 // NONE was chosen, just appending the record result as 
group0 without further splitting
-results.put("enrich." + schema + ".record" + 
String.valueOf(recordPosition) + ".group0", rawResult);
+results.put("enrich." + schema + ".record" + 
recordPosition + ".group0", rawResult);
 break;
 }
 return results;
 }
 
+/**
+ * This method returns the parsed record string in the form of
+ * a map of two strings, consisting of a iteration aware attribute
+ * names and its values
+ *
+
+ * @param  rawResult the raw query results to be parsed
+ * @param queryParser The parsing mechanism being used to parse the 
data into groups
+ * @param queryRegex The regex to be used to split the query results 
into groups. The regex MUST implement at least on named capture group "KEY" to 
be used to populate the table rows
+ * @param lookupKey The regular expression named capture group or 
number of the column of a split to be used for matching
+ * @return  Table with attribute names and values where each Table row 
uses the value of the KEY named capture group specified in @param queryRegex
+ */
+protected Table parseBatchResponse(String 
rawResult, String queryParser, String queryRegex, String lookupKey, String 
schema) {
+// Note the hardcoded record0.
+//  Since iteration is done within the parser and Multimap is 
used, the record number here will always be 0.
+// Consequentially, 0 is hardcoded so that batched and non batched 
attributes follow the same naming
+// conventions
+final String recordPosition = ".record0";
+
+final Table results = 
HashBasedTable.create();
+
+switch (queryParser) {
+case "Split":
+Scanner scanner = new Scanner(rawResult);
+while (scanner.hasNextLine()) {
+String line = scanner.nextLine();
+// Time to Split the results...
+String[] splitResult = line.split(queryRegex);
+
+for (int r = 0; r < splitResult.length; r++) {
+results.put(splitResult[ 
Integer.valueOf(lookupKey) - 1 ], "enrich." + schema + recordPosition + 
".group" + String.valueOf(r), splitResult[r]);
+
+}
+}
+break;
+case "RegEx":
+// prepare the regex
+Pattern p;
+// Regex is multiline. Each line should include a KEY for 
lookup
+p = Pattern.compile(queryRegex, Pattern.MULTILINE);
+
+Matcher matcher = p.matcher(rawResult);
+while (matcher.find()) {
+// Note that RegEx matches capture group 0 is usually 
broad but starting with it anyway
+// for the sake of purity
+for (int r = 0; r <= matcher.groupCount(); r++) {
+if (!StringUtils.isEmpty(matcher.group("KEY"))) {
+results.put(matcher.group(lookupKey), "enrich." + 
schema + recordPosition + ".group" + String.valueOf(r), matcher.group(r));
+} else {
+getLogger().warn("Could not find group {} while 
processing result. Ignoring row", new Object[] {lookupKey});
--- End diff --

I would add ``rawResult`` to the log message to help users debug the processor 
configuration. Thoughts?


> Create a batch capable pseudo-whois ("netcat") enrichment Processor
> ---
>
> Key: NIFI-1971
> URL: https://issues.apache.org/jira/browse/NIFI-1971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andre
>Assignee: Andre
> Fix For: 1.1.0
>
>
> While the QueryDNS processor can be used for low to medium volume enrichment and 
> for licensed DNS-based lookups (e.g. commercial use of SpamHaus), many 
> enrichment providers prefer the use of 

[GitHub] nifi pull request #858: NIFI-1971 - Introduce QueryWhois processor

2016-09-12 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78433995
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -152,10 +166,68 @@
 // Fails to NONE
 default:
 // NONE was chosen, just appending the record result as 
group0 without further splitting
-results.put("enrich." + schema + ".record" + 
String.valueOf(recordPosition) + ".group0", rawResult);
+results.put("enrich." + schema + ".record" + 
recordPosition + ".group0", rawResult);
 break;
 }
 return results;
 }
 
+/**
+ * This method returns the parsed record string in the form of
+ * a map of two strings, consisting of a iteration aware attribute
+ * names and its values
+ *
+
+ * @param  rawResult the raw query results to be parsed
+ * @param queryParser The parsing mechanism being used to parse the 
data into groups
+ * @param queryRegex The regex to be used to split the query results 
into groups. The regex MUST implement at least on named capture group "KEY" to 
be used to populate the table rows
+ * @param lookupKey The regular expression named capture group or 
number of the column of a split to be used for matching
+ * @return  Table with attribute names and values where each Table row 
uses the value of the KEY named capture group specified in @param queryRegex
+ */
+protected Table parseBatchResponse(String 
rawResult, String queryParser, String queryRegex, String lookupKey, String 
schema) {
+// Note the hardcoded record0.
+//  Since iteration is done within the parser and Multimap is 
used, the record number here will always be 0.
+// Consequentially, 0 is hardcoded so that batched and non batched 
attributes follow the same naming
+// conventions
+final String recordPosition = ".record0";
+
+final Table results = 
HashBasedTable.create();
+
+switch (queryParser) {
+case "Split":
+Scanner scanner = new Scanner(rawResult);
+while (scanner.hasNextLine()) {
+String line = scanner.nextLine();
+// Time to Split the results...
+String[] splitResult = line.split(queryRegex);
+
+for (int r = 0; r < splitResult.length; r++) {
+results.put(splitResult[ 
Integer.valueOf(lookupKey) - 1 ], "enrich." + schema + recordPosition + 
".group" + String.valueOf(r), splitResult[r]);
+
+}
+}
+break;
+case "RegEx":
+// prepare the regex
+Pattern p;
+// Regex is multiline. Each line should include a KEY for 
lookup
+p = Pattern.compile(queryRegex, Pattern.MULTILINE);
+
+Matcher matcher = p.matcher(rawResult);
+while (matcher.find()) {
+// Note that RegEx matches capture group 0 is usually 
broad but starting with it anyway
+// for the sake of purity
+for (int r = 0; r <= matcher.groupCount(); r++) {
+if (!StringUtils.isEmpty(matcher.group("KEY"))) {
+results.put(matcher.group(lookupKey), "enrich." + 
schema + recordPosition + ".group" + String.valueOf(r), matcher.group(r));
+} else {
+getLogger().warn("Could not find group {} while 
processing result. Ignoring row", new Object[] {lookupKey});
--- End diff --

I would add ``rawResult`` to the log message to help users debug the processor 
configuration. Thoughts?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #858: NIFI-1971 - Introduce QueryWhois processor

2016-09-12 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78433868
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -152,10 +166,68 @@
 // Fails to NONE
 default:
 // NONE was chosen, just appending the record result as 
group0 without further splitting
-results.put("enrich." + schema + ".record" + 
String.valueOf(recordPosition) + ".group0", rawResult);
+results.put("enrich." + schema + ".record" + 
recordPosition + ".group0", rawResult);
 break;
 }
 return results;
 }
 
+/**
+ * This method returns the parsed record string in the form of
+ * a map of two strings, consisting of a iteration aware attribute
+ * names and its values
+ *
+
+ * @param  rawResult the raw query results to be parsed
+ * @param queryParser The parsing mechanism being used to parse the 
data into groups
+ * @param queryRegex The regex to be used to split the query results 
into groups. The regex MUST implement at least on named capture group "KEY" to 
be used to populate the table rows
+ * @param lookupKey The regular expression named capture group or 
number of the column of a split to be used for matching
+ * @return  Table with attribute names and values where each Table row 
uses the value of the KEY named capture group specified in @param queryRegex
+ */
+protected Table parseBatchResponse(String 
rawResult, String queryParser, String queryRegex, String lookupKey, String 
schema) {
+// Note the hardcoded record0.
+//  Since iteration is done within the parser and Multimap is 
used, the record number here will always be 0.
+// Consequentially, 0 is hardcoded so that batched and non batched 
attributes follow the same naming
+// conventions
+final String recordPosition = ".record0";
+
+final Table results = 
HashBasedTable.create();
+
+switch (queryParser) {
+case "Split":
+Scanner scanner = new Scanner(rawResult);
+while (scanner.hasNextLine()) {
+String line = scanner.nextLine();
+// Time to Split the results...
+String[] splitResult = line.split(queryRegex);
+
+for (int r = 0; r < splitResult.length; r++) {
+results.put(splitResult[ 
Integer.valueOf(lookupKey) - 1 ], "enrich." + schema + recordPosition + 
".group" + String.valueOf(r), splitResult[r]);
+
+}
+}
+break;
+case "RegEx":
+// prepare the regex
+Pattern p;
+// Regex is multiline. Each line should include a KEY for 
lookup
+p = Pattern.compile(queryRegex, Pattern.MULTILINE);
+
+Matcher matcher = p.matcher(rawResult);
+while (matcher.find()) {
+// Note that RegEx matches capture group 0 is usually 
broad but starting with it anyway
+// for the sake of purity
+for (int r = 0; r <= matcher.groupCount(); r++) {
+if (!StringUtils.isEmpty(matcher.group("KEY"))) {
--- End diff --

With this configuration:
https://cloud.githubusercontent.com/assets/11541012/18448541/dc06e528-792b-11e6-9c87-a3b7b81bd921.png


2016-09-12 20:55:35,606 WARN [Timer-Driven Process Thread-6] 
o.a.nifi.processors.enrich.QueryWhois 
QueryWhois[id=9108305e-0156-1000-ebb0-99ae7aeaff21] Processor Administratively 
Yielded for 1 sec due to processing failure
2016-09-12 20:55:35,606 WARN [Timer-Driven Process Thread-6] 
o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding 
QueryWhois[id=9108305e-0156-1000-ebb0-99ae7aeaff21] due to uncaught Exception: 
java.lang.IllegalArgumentException: No group with name 
2016-09-12 20:55:35,607 WARN [Timer-Driven Process Thread-6] 
o.a.n.c.t.ContinuallyRunProcessorTask 
java.lang.IllegalArgumentException: No group with name 
at java.util.regex.Matcher.getMatchedGroupIndex(Matcher.java:1316) 
~[na:1.8.0_77]
at java.util.regex.Matcher.group(Matcher.java:572) ~[na:1.8.0_77]
at 
org.apache.nifi.processors.enrich.AbstractEnrichProcessor.parseBatchResponse(AbstractEnrichProcessor.java:221)
 ~[na:na]
at 
org.apache.nifi.processors.enrich.QueryWhois.onTrigger(QueryWhois.java:276) 
~[na:na]
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
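One defensive option (a sketch only, not necessarily the fix adopted in the PR) 
is to treat a missing named group as a warning instead of letting the 
IllegalArgumentException escape onTrigger:

{code}
Matcher matcher = p.matcher(rawResult);
while (matcher.find()) {
    final String key;
    try {
        // Matcher.group(String) throws IllegalArgumentException when the
        // compiled pattern defines no group with that name.
        key = matcher.group("KEY");
    } catch (IllegalArgumentException e) {
        getLogger().warn("Query regex defines no 'KEY' named capture group; ignoring results");
        break;
    }
    // ... populate the results table using 'key' as the row key ...
}
{code}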
 

[jira] [Commented] (NIFI-1971) Create a batch capable pseudo-whois ("netcat") enrichment Processor

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484985#comment-15484985
 ] 

ASF GitHub Bot commented on NIFI-1971:
--

Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/858#discussion_r78433868
  
--- Diff: 
nifi-nar-bundles/nifi-enrich-bundle/nifi-enrich-processors/src/main/java/org/apache/nifi/processors/enrich/AbstractEnrichProcessor.java
 ---
@@ -152,10 +166,68 @@
 // Fails to NONE
 default:
 // NONE was chosen, just appending the record result as 
group0 without further splitting
-results.put("enrich." + schema + ".record" + 
String.valueOf(recordPosition) + ".group0", rawResult);
+results.put("enrich." + schema + ".record" + 
recordPosition + ".group0", rawResult);
 break;
 }
 return results;
 }
 
+/**
+ * This method returns the parsed record string in the form of
+ * a map of two strings, consisting of a iteration aware attribute
+ * names and its values
+ *
+
+ * @param  rawResult the raw query results to be parsed
+ * @param queryParser The parsing mechanism being used to parse the 
data into groups
+ * @param queryRegex The regex to be used to split the query results 
into groups. The regex MUST implement at least on named capture group "KEY" to 
be used to populate the table rows
+ * @param lookupKey The regular expression named capture group or 
number of the column of a split to be used for matching
+ * @return  Table with attribute names and values where each Table row 
uses the value of the KEY named capture group specified in @param queryRegex
+ */
+protected Table parseBatchResponse(String 
rawResult, String queryParser, String queryRegex, String lookupKey, String 
schema) {
+// Note the hardcoded record0.
+//  Since iteration is done within the parser and Multimap is 
used, the record number here will always be 0.
+// Consequentially, 0 is hardcoded so that batched and non batched 
attributes follow the same naming
+// conventions
+final String recordPosition = ".record0";
+
+final Table results = 
HashBasedTable.create();
+
+switch (queryParser) {
+case "Split":
+Scanner scanner = new Scanner(rawResult);
+while (scanner.hasNextLine()) {
+String line = scanner.nextLine();
+// Time to Split the results...
+String[] splitResult = line.split(queryRegex);
+
+for (int r = 0; r < splitResult.length; r++) {
+results.put(splitResult[ 
Integer.valueOf(lookupKey) - 1 ], "enrich." + schema + recordPosition + 
".group" + String.valueOf(r), splitResult[r]);
+
+}
+}
+break;
+case "RegEx":
+// prepare the regex
+Pattern p;
+// Regex is multiline. Each line should include a KEY for 
lookup
+p = Pattern.compile(queryRegex, Pattern.MULTILINE);
+
+Matcher matcher = p.matcher(rawResult);
+while (matcher.find()) {
+// Note that RegEx matches capture group 0 is usually 
broad but starting with it anyway
+// for the sake of purity
+for (int r = 0; r <= matcher.groupCount(); r++) {
+if (!StringUtils.isEmpty(matcher.group("KEY"))) {
--- End diff --

With this configuration:
https://cloud.githubusercontent.com/assets/11541012/18448541/dc06e528-792b-11e6-9c87-a3b7b81bd921.png


2016-09-12 20:55:35,606 WARN [Timer-Driven Process Thread-6] 
o.a.nifi.processors.enrich.QueryWhois 
QueryWhois[id=9108305e-0156-1000-ebb0-99ae7aeaff21] Processor Administratively 
Yielded for 1 sec due to processing failure
2016-09-12 20:55:35,606 WARN [Timer-Driven Process Thread-6] 
o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding 
QueryWhois[id=9108305e-0156-1000-ebb0-99ae7aeaff21] due to uncaught Exception: 
java.lang.IllegalArgumentException: No group with name 
2016-09-12 20:55:35,607 WARN [Timer-Driven Process Thread-6] 
o.a.n.c.t.ContinuallyRunProcessorTask 
java.lang.IllegalArgumentException: No group with name 
at java.util.regex.Matcher.getMatchedGroupIndex(Matcher.java:1316) 
~[na:1.8.0_77]
at java.util.regex.Matcher.group(Matcher.java:572) ~[na:1.8.0_77]
at 

[jira] [Created] (NIFI-2761) BootstrapCodec can throw cryptic exceptions

2016-09-12 Thread Jeff Storck (JIRA)
Jeff Storck created NIFI-2761:
-

 Summary: BootstrapCodec can throw cryptic exceptions
 Key: NIFI-2761
 URL: https://issues.apache.org/jira/browse/NIFI-2761
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.0.0
Reporter: Jeff Storck
Priority: Minor


In BootstrapCodec.java, when the following ternary expression gets evaluated, 
the null check applies to the concatenated string rather than to getMessage(), 
so it will never be null.  This causes the IOException message to always be 
"Details: " followed by the InvalidCommandException's toString result.
{code}
try {
processRequest(cmd, args);
} catch (final InvalidCommandException ice) {
throw new IOException("Received invalid command from NiFi: " + line + " : " 
+ ice.getMessage() == null ? "" : "Details: " + ice.toString());
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1768) Add SSL Support for Solr Processors

2016-09-12 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-1768:
--
Status: Patch Available  (was: In Progress)

> Add SSL Support for Solr Processors
> ---
>
> Key: NIFI-1768
> URL: https://issues.apache.org/jira/browse/NIFI-1768
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.1.0
>
>
> Currently the Solr processors do not support communicating with a Solr 
> instance that is secured with SSL. 
> We should be able to add the SSLContextService to the processor and pass an 
> SSLContext to the underlying HttpClient used by the SolrClient in the 
> SolrProcessor base class.
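> A rough sketch of the proposed wiring (assuming HttpClient 4.5's 
> setSSLContext and SolrJ's HttpSolrClient(String, HttpClient) constructor; in 
> the processor the SSLContext would come from the configured SSLContextService, 
> with the JVM default standing in for it here):
> {code}
> import javax.net.ssl.SSLContext;
> import org.apache.http.client.HttpClient;
> import org.apache.http.impl.client.HttpClients;
> import org.apache.solr.client.solrj.SolrClient;
> import org.apache.solr.client.solrj.impl.HttpSolrClient;
>
> public class SecureSolrClientSketch {
>     public static SolrClient create(String solrUrl) throws Exception {
>         // Stand-in for the SSLContext built by NiFi's SSLContextService.
>         SSLContext sslContext = SSLContext.getDefault();
>         HttpClient httpClient = HttpClients.custom()
>                 .setSSLContext(sslContext)
>                 .build();
>         // Hand the TLS-aware HttpClient to the SolrClient.
>         return new HttpSolrClient(solrUrl, httpClient);
>     }
> }
> {code}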



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-1980) Add 'commit within' value to PutSolrContentStream

2016-09-12 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-1980:
--
Status: Patch Available  (was: In Progress)

https://github.com/apache/nifi/pull/1005

> Add 'commit within' value to PutSolrContentStream
> -
>
> Key: NIFI-1980
> URL: https://issues.apache.org/jira/browse/NIFI-1980
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.1
> Environment: Solr 6.0.1 in **standalone** mode (in a docker 
> container).
>Reporter: Andrew Grande
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.1.0
>
>
> Here's a default docker image for Solr with some instructions to create a 
> core: https://github.com/docker-solr/docker-solr
> Have NiFi send to that Solr instance. Everything seems OK, but the number of 
> documents in the core never increases; no commit happens. *Commit Within* 
> must be configured (a number of milliseconds) in the case of a standalone Solr.
> Often a Solr server is configured with auto-commit, but apparently not this 
> default docker image.
> Proposal: update the processor to have a default value for Commit Within 
> (e.g. match Solr's default of 15 seconds or less). Update its description to 
> hint that the user should remove the value if they configure auto-commit in Solr.
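> At the SolrJ level, "commit within" is just a per-update parameter; a minimal 
> sketch (assuming SolrJ's add(doc, commitWithinMs) overload; the URL and core 
> name are illustrative):
> {code}
> import org.apache.solr.client.solrj.SolrClient;
> import org.apache.solr.client.solrj.impl.HttpSolrClient;
> import org.apache.solr.common.SolrInputDocument;
>
> public class CommitWithinSketch {
>     public static void main(String[] args) throws Exception {
>         SolrClient solr = new HttpSolrClient("http://localhost:8983/solr/myCore");
>         SolrInputDocument doc = new SolrInputDocument();
>         doc.addField("id", "doc-1");
>         // Ask Solr to commit this update within 15 seconds even if the
>         // server has no auto-commit configured.
>         solr.add(doc, 15_000);
>         solr.close();
>     }
> }
> {code}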



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-2760) Apache Ranger Authorizer using wrong version of jersey-bundle

2016-09-12 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende resolved NIFI-2760.
---
Resolution: Fixed

Merged to master.

> Apache Ranger Authorizer using wrong version of jersey-bundle
> -
>
> Key: NIFI-2760
> URL: https://issues.apache.org/jira/browse/NIFI-2760
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.1.0
>
>
> The Apache Ranger authorizer has a dependency on ranger-plugins-common which 
> ends up bringing in the following Jersey JARs:
> jersey-bundle-1.17.1.jar
> jersey-core-1.19.jar
> jersey-json-1.19.jar
> This can cause classpath issues depending on the order the classes are loaded:
> {code}
> Caused by: java.lang.IncompatibleClassChangeError: 
> com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider and 
> com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider$Wadl 
> disagree on InnerClasses attribute
> at java.lang.Class.getDeclaringClass0(Native Method) ~[na:1.8.0_77]
> at java.lang.Class.getDeclaringClass(Class.java:1235) ~[na:1.8.0_77]
> at java.lang.Class.getEnclosingClass(Class.java:1277) ~[na:1.8.0_77]
> at 
> com.sun.jersey.core.spi.component.ComponentConstructor.getInstance(ComponentConstructor.java:170)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.component.ProviderFactory.__getComponentProvider(ProviderFactory.java:166)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.component.ProviderFactory.getComponentProvider(ProviderFactory.java:137)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.component.ProviderServices.getComponent(ProviderServices.java:283)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.component.ProviderServices.getServices(ProviderServices.java:163)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:176)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.factory.MessageBodyFactory.init(MessageBodyFactory.java:162)
>  ~[jersey-core-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.init(Client.java:342) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.access$000(Client.java:118) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client$1.f(Client.java:191) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client$1.f(Client.java:187) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193) 
> ~[jersey-core-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.(Client.java:187) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.(Client.java:170) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.create(Client.java:679) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> org.apache.ranger.plugin.util.RangerRESTClient.buildClient(RangerRESTClient.java:212)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.util.RangerRESTClient.getClient(RangerRESTClient.java:177)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.util.RangerRESTClient.getResource(RangerRESTClient.java:157)
>  ~[na:na]
> at 
> org.apache.ranger.admin.client.RangerAdminRESTClient.createWebResource(RangerAdminRESTClient.java:242)
>  ~[na:na]
> at 
> org.apache.ranger.admin.client.RangerAdminRESTClient.access$200(RangerAdminRESTClient.java:41)
>  ~[na:na]
> at 
> org.apache.ranger.admin.client.RangerAdminRESTClient$3.run(RangerAdminRESTClient.java:101)
>  ~[na:na]
> at 
> org.apache.ranger.admin.client.RangerAdminRESTClient$3.run(RangerAdminRESTClient.java:99)
>  ~[na:na]
> at java.security.AccessController.doPrivileged(Native Method) 
> ~[na:1.8.0_77]
> at javax.security.auth.Subject.doAs(Subject.java:360) ~[na:1.8.0_77]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
>  ~[na:na]
> at 
> org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:107)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:217)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:185)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.util.PolicyRefresher.startRefresher(PolicyRefresher.java:136)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.service.RangerBasePlugin.init(RangerBasePlugin.java:128)
>  ~[na:na]
> at 
> 

[jira] [Commented] (NIFI-2760) Apache Ranger Authorizer using wrong version of jersey-bundle

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484869#comment-15484869
 ] 

ASF GitHub Bot commented on NIFI-2760:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1006


> Apache Ranger Authorizer using wrong version of jersey-bundle
> -
>
> Key: NIFI-2760
> URL: https://issues.apache.org/jira/browse/NIFI-2760
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.1.0
>
>
> The Apache Ranger authorizer has a dependency on ranger-plugins-common which 
> ends up bringing in the following Jersey JARs:
> jersey-bundle-1.17.1.jar
> jersey-core-1.19.jar
> jersey-json-1.19.jar
> This can cause classpath issues depending on the order the classes are loaded:
> {code}
> Caused by: java.lang.IncompatibleClassChangeError: 
> com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider and 
> com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider$Wadl 
> disagree on InnerClasses attribute
> at java.lang.Class.getDeclaringClass0(Native Method) ~[na:1.8.0_77]
> at java.lang.Class.getDeclaringClass(Class.java:1235) ~[na:1.8.0_77]
> at java.lang.Class.getEnclosingClass(Class.java:1277) ~[na:1.8.0_77]
> at 
> com.sun.jersey.core.spi.component.ComponentConstructor.getInstance(ComponentConstructor.java:170)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.component.ProviderFactory.__getComponentProvider(ProviderFactory.java:166)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.component.ProviderFactory.getComponentProvider(ProviderFactory.java:137)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.component.ProviderServices.getComponent(ProviderServices.java:283)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.component.ProviderServices.getServices(ProviderServices.java:163)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:176)
>  ~[jersey-core-1.19.jar:1.19]
> at 
> com.sun.jersey.core.spi.factory.MessageBodyFactory.init(MessageBodyFactory.java:162)
>  ~[jersey-core-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.init(Client.java:342) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.access$000(Client.java:118) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client$1.f(Client.java:191) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client$1.f(Client.java:187) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193) 
> ~[jersey-core-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.(Client.java:187) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.(Client.java:170) 
> ~[jersey-client-1.19.jar:1.19]
> at com.sun.jersey.api.client.Client.create(Client.java:679) 
> ~[jersey-client-1.19.jar:1.19]
> at 
> org.apache.ranger.plugin.util.RangerRESTClient.buildClient(RangerRESTClient.java:212)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.util.RangerRESTClient.getClient(RangerRESTClient.java:177)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.util.RangerRESTClient.getResource(RangerRESTClient.java:157)
>  ~[na:na]
> at 
> org.apache.ranger.admin.client.RangerAdminRESTClient.createWebResource(RangerAdminRESTClient.java:242)
>  ~[na:na]
> at 
> org.apache.ranger.admin.client.RangerAdminRESTClient.access$200(RangerAdminRESTClient.java:41)
>  ~[na:na]
> at 
> org.apache.ranger.admin.client.RangerAdminRESTClient$3.run(RangerAdminRESTClient.java:101)
>  ~[na:na]
> at 
> org.apache.ranger.admin.client.RangerAdminRESTClient$3.run(RangerAdminRESTClient.java:99)
>  ~[na:na]
> at java.security.AccessController.doPrivileged(Native Method) 
> ~[na:1.8.0_77]
> at javax.security.auth.Subject.doAs(Subject.java:360) ~[na:1.8.0_77]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
>  ~[na:na]
> at 
> org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:107)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:217)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:185)
>  ~[na:na]
> at 
> org.apache.ranger.plugin.util.PolicyRefresher.startRefresher(PolicyRefresher.java:136)
>  ~[na:na]
> at 
> 

[GitHub] nifi pull request #1006: NIFI-2760 Specifying jersey-bundle 1.19 for Ranger ...

2016-09-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1006


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2760) Apache Ranger Authorizer using wrong version of jersey-bundle

2016-09-12 Thread Oleg Zhurakousky (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484829#comment-15484829
 ] 

Oleg Zhurakousky commented on NIFI-2760:


+1

> Apache Ranger Authorizer using wrong version of jersey-bundle
> -
>
> Key: NIFI-2760
> URL: https://issues.apache.org/jira/browse/NIFI-2760
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.1.0
>
>
> The Apache Ranger authorizer has a dependency on ranger-plugins-common which 
> ends up bringing in the following Jersey JARs:
> jersey-bundle-1.17.1.jar
> jersey-core-1.19.jar
> jersey-json-1.19.jar
> This can cause classpath issues depending on the order the classes are loaded:

[jira] [Commented] (NIFI-2760) Apache Ranger Authorizer using wrong version of jersey-bundle

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484819#comment-15484819
 ] 

ASF GitHub Bot commented on NIFI-2760:
--

GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1006

NIFI-2760 Specifying jersey-bundle 1.19 for Ranger plugin



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi ranger-jersey-bundle

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1006.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1006


commit 01ba208f53e4ce2690c1c9d758822b4cf27dfdbf
Author: Bryan Bende 
Date:   2016-09-09T15:03:04Z

NIFI-2760 Specifying jersey-bundle 1.19 for Ranger plugin




> Apache Ranger Authorizer using wrong version of jersey-bundle
> -
>
> Key: NIFI-2760
> URL: https://issues.apache.org/jira/browse/NIFI-2760
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.1.0
>
>
> The Apache Ranger authorizer has a dependency on ranger-plugins-common which 
> ends up bringing in the following Jersey JARs:
> jersey-bundle-1.17.1.jar
> jersey-core-1.19.jar
> jersey-json-1.19.jar
> This can cause classpath issues depending on the order in which the classes 
> are loaded; the full stack trace is reproduced in the NIFI-2760 issue 
> description below.

[GitHub] nifi pull request #1006: NIFI-2760 Specifying jersey-bundle 1.19 for Ranger ...

2016-09-12 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1006

NIFI-2760 Specifying jersey-bundle 1.19 for Ranger plugin



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi ranger-jersey-bundle

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1006.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1006


commit 01ba208f53e4ce2690c1c9d758822b4cf27dfdbf
Author: Bryan Bende 
Date:   2016-09-09T15:03:04Z

NIFI-2760 Specifying jersey-bundle 1.19 for Ranger plugin






[jira] [Created] (NIFI-2760) Apache Ranger Authorizer using wrong version of jersey-bundle

2016-09-12 Thread Bryan Bende (JIRA)
Bryan Bende created NIFI-2760:
-

 Summary: Apache Ranger Authorizer using wrong version of 
jersey-bundle
 Key: NIFI-2760
 URL: https://issues.apache.org/jira/browse/NIFI-2760
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Bryan Bende
Assignee: Bryan Bende
Priority: Minor
 Fix For: 1.1.0


The Apache Ranger authorizer has a dependency on ranger-plugins-common which 
ends up bringing in the following Jersey JARs:

jersey-bundle-1.17.1.jar
jersey-core-1.19.jar
jersey-json-1.19.jar

This can cause classpath issues depending on the order in which the classes are loaded:

{code}
Caused by: java.lang.IncompatibleClassChangeError: 
com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider and 
com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider$Wadl disagree 
on InnerClasses attribute
at java.lang.Class.getDeclaringClass0(Native Method) ~[na:1.8.0_77]
at java.lang.Class.getDeclaringClass(Class.java:1235) ~[na:1.8.0_77]
at java.lang.Class.getEnclosingClass(Class.java:1277) ~[na:1.8.0_77]
at 
com.sun.jersey.core.spi.component.ComponentConstructor.getInstance(ComponentConstructor.java:170)
 ~[jersey-core-1.19.jar:1.19]
at 
com.sun.jersey.core.spi.component.ProviderFactory.__getComponentProvider(ProviderFactory.java:166)
 ~[jersey-core-1.19.jar:1.19]
at 
com.sun.jersey.core.spi.component.ProviderFactory.getComponentProvider(ProviderFactory.java:137)
 ~[jersey-core-1.19.jar:1.19]
at 
com.sun.jersey.core.spi.component.ProviderServices.getComponent(ProviderServices.java:283)
 ~[jersey-core-1.19.jar:1.19]
at 
com.sun.jersey.core.spi.component.ProviderServices.getServices(ProviderServices.java:163)
 ~[jersey-core-1.19.jar:1.19]
at 
com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:176)
 ~[jersey-core-1.19.jar:1.19]
at 
com.sun.jersey.core.spi.factory.MessageBodyFactory.init(MessageBodyFactory.java:162)
 ~[jersey-core-1.19.jar:1.19]
at com.sun.jersey.api.client.Client.init(Client.java:342) 
~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.Client.access$000(Client.java:118) 
~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.Client$1.f(Client.java:191) 
~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.Client$1.f(Client.java:187) 
~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193) 
~[jersey-core-1.19.jar:1.19]
at com.sun.jersey.api.client.Client.<init>(Client.java:187) 
~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.Client.<init>(Client.java:170) 
~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.Client.create(Client.java:679) 
~[jersey-client-1.19.jar:1.19]
at 
org.apache.ranger.plugin.util.RangerRESTClient.buildClient(RangerRESTClient.java:212)
 ~[na:na]
at 
org.apache.ranger.plugin.util.RangerRESTClient.getClient(RangerRESTClient.java:177)
 ~[na:na]
at 
org.apache.ranger.plugin.util.RangerRESTClient.getResource(RangerRESTClient.java:157)
 ~[na:na]
at 
org.apache.ranger.admin.client.RangerAdminRESTClient.createWebResource(RangerAdminRESTClient.java:242)
 ~[na:na]
at 
org.apache.ranger.admin.client.RangerAdminRESTClient.access$200(RangerAdminRESTClient.java:41)
 ~[na:na]
at 
org.apache.ranger.admin.client.RangerAdminRESTClient$3.run(RangerAdminRESTClient.java:101)
 ~[na:na]
at 
org.apache.ranger.admin.client.RangerAdminRESTClient$3.run(RangerAdminRESTClient.java:99)
 ~[na:na]
at java.security.AccessController.doPrivileged(Native Method) 
~[na:1.8.0_77]
at javax.security.auth.Subject.doAs(Subject.java:360) ~[na:1.8.0_77]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
 ~[na:na]
at 
org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:107)
 ~[na:na]
at 
org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:217)
 ~[na:na]
at 
org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:185)
 ~[na:na]
at 
org.apache.ranger.plugin.util.PolicyRefresher.startRefresher(PolicyRefresher.java:136)
 ~[na:na]
at 
org.apache.ranger.plugin.service.RangerBasePlugin.init(RangerBasePlugin.java:128)
 ~[na:na]
at 
org.apache.nifi.ranger.authorization.RangerNiFiAuthorizer.onConfigured(RangerNiFiAuthorizer.java:118)
 ~[na:na]
{code}

The jersey-bundle JAR should be version 1.19.
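
One quick way to verify which of the conflicting JARs wins on a given classpath 
is to ask the JVM where it actually loaded the class named in the trace. A 
minimal diagnostic sketch (only the class name comes from the stack trace 
above; the wrapper class is illustrative):

{code}
// Prints the location (JAR) the conflicting Jersey provider class was loaded
// from, showing whether jersey-bundle-1.17.1.jar or jersey-core-1.19.jar won.
public class WhichJar {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> c = Class.forName(
                "com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider");
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}
{code}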





[GitHub] nifi pull request #1005: NIFI-1768 Adding TLS/SSL support to Solr processors

2016-09-12 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1005

NIFI-1768 Adding TLS/SSL support to Solr processors

NIFI-1980 Added a default value for PutSolrContentStream commitWithIn
NIFI-2568 Added Kerberos support to Solr processors

Upgrading SolrJ to 6.2.

Relevant links for setting up Solr with SSL or Kerberos:

https://cwiki.apache.org/confluence/display/solr/Authentication+and+Authorization+Plugins
https://cwiki.apache.org/confluence/display/solr/Enabling+SSL

https://cwiki.apache.org/confluence/display/solr/Kerberos+Authentication+Plugin

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi solr-security

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1005.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1005


commit a11c65c7a3d4bbd8f04a0f0c35ee035c24799368
Author: Bryan Bende 
Date:   2016-09-08T02:11:10Z

NIFI-1768 Adding TLS/SSL support to Solr processors
NIFI-1980 Added a default value for PutSolrContentStream commitWithIn
NIFI-2568 Added Kerberos support to Solr processors

Upgrading SolrJ to 6.2.






[jira] [Created] (NIFI-2759) TestListDatabaseTables.testListTablesMultipleRefresh consistently failing

2016-09-12 Thread Joe Skora (JIRA)
Joe Skora created NIFI-2759:
---

 Summary: TestListDatabaseTables.testListTablesMultipleRefresh 
consistently failing
 Key: NIFI-2759
 URL: https://issues.apache.org/jira/browse/NIFI-2759
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.0.0
Reporter: Joe Skora
Priority: Critical


Running this test repeatedly produces errors.

{code}
Running org.apache.nifi.processors.standard.TestListDatabaseTables
Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 13.114 sec <<< 
FAILURE! - in org.apache.nifi.processors.standard.TestListDatabaseTables
testListTablesMultipleRefresh(org.apache.nifi.processors.standard.TestListDatabaseTables)
  Time elapsed: 0.794 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertTransferCount(StandardProcessorTestRunner.java:318)
at 
org.apache.nifi.processors.standard.TestListDatabaseTables.testListTablesMultipleRefresh(TestListDatabaseTables.java:216)
{code}

followed by 

{code}
Failed tests:
  TestListDatabaseTables.testListTablesMultipleRefresh:216 expected:<1> but 
was:<2>
{code}

I tried removing the 
[runner.clearTransferState();|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestListDatabaseTables.java#L210]
 thinking the reset was causing both tables to show, but that did not make a 
difference.





[jira] [Closed] (NIFI-2440) Add last modified time & timestamp attributes to flow files generated by ListSFTP processor

2016-09-12 Thread Kirk Tarou (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk Tarou closed NIFI-2440.


This closes #913

> Add last modified time & timestamp attributes to flow files generated by 
> ListSFTP processor
> ---
>
> Key: NIFI-2440
> URL: https://issues.apache.org/jira/browse/NIFI-2440
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Kirk Tarou
>Assignee: Joe Skora
>Priority: Trivial
> Fix For: 1.1.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The timestamp & last modified time attributes are not exposed in ListSFTP so 
> there's no way to preserve the timestamp of the remotely collected files when 
> writing them out to a file.





[jira] [Assigned] (NIFI-1980) Add 'commit within' value to PutSolrContentStream

2016-09-12 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende reassigned NIFI-1980:
-

Assignee: Bryan Bende

> Add 'commit within' value to PutSolrContentStream
> -
>
> Key: NIFI-1980
> URL: https://issues.apache.org/jira/browse/NIFI-1980
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.1
> Environment: Solr 6.0.1 in **standalone** mode (in a docker 
> container).
>Reporter: Andrew Grande
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.1.0
>
>
> Here's a default docker image for Solr with some instructions to create a 
> core: https://github.com/docker-solr/docker-solr
> Have NiFi send to that Solr instance. Everything seems OK, but the number of 
> documents in the core never increases because no commit ever happens. *Commit 
> Within* must be configured (a value in milliseconds) when running against a 
> standalone Solr.
> Often the Solr server is configured with auto-commit, but apparently not in 
> this default docker image.
> Proposal: update the processor to have a default value for Commit Within 
> (e.g. match Solr's default of 15 seconds or less), and update its description 
> to hint that the user should remove the value if they configure auto-commit 
> in Solr.
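
For reference, a minimal SolrJ sketch of what Commit Within does on the client 
side (the URL and core name are placeholders; SolrClient.add takes the 
commit-within value in milliseconds):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class CommitWithinExample {
    public static void main(String[] args) throws Exception {
        try (SolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycore").build()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            // The document becomes searchable within 15 seconds even if the
            // server has no auto-commit configured (the proposed default).
            solr.add(doc, 15000);
        }
    }
}
{code}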





[jira] [Updated] (NIFI-1768) Add SSL Support for Solr Processors

2016-09-12 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-1768:
--
Fix Version/s: 1.1.0

> Add SSL Support for Solr Processors
> ---
>
> Key: NIFI-1768
> URL: https://issues.apache.org/jira/browse/NIFI-1768
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.1.0
>
>
> Currently the Solr processors do not support communicating with a Solr 
> instance that is secured with SSL. 
> We should be able to add the SSLContextService to the processor and pass an 
> SSLContext to the underlying HttpClient used by the SolrClient in the 
> SolrProcessor base class.
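
A minimal sketch of that wiring, assuming SolrJ 6.x and Apache HttpClient; the 
SSLContext below is the JVM default, standing in for the one an 
SSLContextService would provide:

{code}
import javax.net.ssl.SSLContext;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class SslSolrClientExample {
    public static void main(String[] args) throws Exception {
        SSLContext sslContext = SSLContext.getDefault(); // stand-in context
        CloseableHttpClient httpClient = HttpClients.custom()
                .setSSLSocketFactory(new SSLConnectionSocketFactory(sslContext))
                .build();
        // Pass the TLS-aware HttpClient down to the SolrClient.
        try (SolrClient solr = new HttpSolrClient.Builder(
                "https://solr-host:8983/solr/mycore")
                .withHttpClient(httpClient)
                .build()) {
            System.out.println(solr.ping().getStatus());
        }
    }
}
{code}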





[jira] [Updated] (NIFI-1980) Add 'commit within' value to PutSolrContentStream

2016-09-12 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-1980:
--
Fix Version/s: 1.1.0

> Add 'commit within' value to PutSolrContentStream
> -
>
> Key: NIFI-1980
> URL: https://issues.apache.org/jira/browse/NIFI-1980
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.1
> Environment: Solr 6.0.1 in **standalone** mode (in a docker 
> container).
>Reporter: Andrew Grande
>Priority: Minor
> Fix For: 1.1.0
>
>
> Here's a default docker image for Solr with some instructions to create a 
> core: https://github.com/docker-solr/docker-solr
> Have NiFi send to that Solr instance. Everything seems OK, but the number of 
> documents in the core never increases because no commit ever happens. *Commit 
> Within* must be configured (a value in milliseconds) when running against a 
> standalone Solr.
> Often the Solr server is configured with auto-commit, but apparently not in 
> this default docker image.
> Proposal: update the processor to have a default value for Commit Within 
> (e.g. match Solr's default of 15 seconds or less), and update its description 
> to hint that the user should remove the value if they configure auto-commit 
> in Solr.





[jira] [Assigned] (NIFI-2568) Add Kerberos Support to Solr Processors

2016-09-12 Thread Bryan Bende (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende reassigned NIFI-2568:
-

Assignee: Bryan Bende

> Add Kerberos Support to Solr Processors
> ---
>
> Key: NIFI-2568
> URL: https://issues.apache.org/jira/browse/NIFI-2568
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.1.0
>
>
> We should add support to the Solr processors for interacting with a 
> kerberized Solr cloud instance. The following page describes how Kerberos 
> works in Solr:
> https://cwiki.apache.org/confluence/display/solr/Kerberos+Authentication+Plugin
> From a client perspective it says you need the following:
> {code}
> System.setProperty("java.security.auth.login.config", 
> "/home/foo/jaas-client.conf");
> HttpClientUtil.setConfigurer(new Krb5HttpClientConfigurer());
> {code}
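
Expanded into a self-contained sketch (SolrJ 6.2-era API; the JAAS path and 
Solr URL are placeholders):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpClientUtil;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.impl.Krb5HttpClientConfigurer;

public class KerberizedSolrExample {
    public static void main(String[] args) throws Exception {
        // Both calls must happen before any SolrClient (and therefore any
        // underlying HttpClient) is created.
        System.setProperty("java.security.auth.login.config",
                "/home/foo/jaas-client.conf");
        HttpClientUtil.setConfigurer(new Krb5HttpClientConfigurer());
        try (SolrClient solr = new HttpSolrClient.Builder(
                "http://solr-host:8983/solr/mycore").build()) {
            System.out.println(solr.ping().getStatus());
        }
    }
}
{code}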





[jira] [Updated] (NIFI-2722) UI - Stats on canvas no longer updating

2016-09-12 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-2722:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> UI - Stats on canvas no longer updating
> ---
>
> Key: NIFI-2722
> URL: https://issues.apache.org/jira/browse/NIFI-2722
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.1.0
>
>
> Recently, the UI was updated to only update components when the revision has 
> changed. This had the unintended side effect of preventing the stats from 
> updating.





[jira] [Assigned] (NIFI-2719) UI - Request race condition

2016-09-12 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reassigned NIFI-2719:
-

Assignee: Matt Gilman

> UI - Request race condition
> ---
>
> Key: NIFI-2719
> URL: https://issues.apache.org/jira/browse/NIFI-2719
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.1.0
>
>
> There exists a race condition where, during a request to get the components in 
> the current group, another request to create or delete a component may execute.
> This results in the component being incorrectly added/removed from the canvas 
> temporarily.





[GitHub] nifi issue #858: NIFI-1971 - Introduce QueryWhois processor

2016-09-12 Thread trixpan
Github user trixpan commented on the issue:

https://github.com/apache/nifi/pull/858
  
@pvillard31 feedback addressed. 





[jira] [Commented] (NIFI-2754) FlowFiles Queue into Swap Only

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484132#comment-15484132
 ] 

ASF GitHub Bot commented on NIFI-2754:
--

Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/1000
  
@markap14 I changed the code per your suggestion and the test I wrote still 
worked as expected.
PR updated.


> FlowFiles Queue into Swap Only
> --
>
> Key: NIFI-2754
> URL: https://issues.apache.org/jira/browse/NIFI-2754
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Peter Wicks
>
> If the Active queue is empty and the number of FlowFiles added to the queue 
> is perfectly divisible by the current Swap size (10 Flow Files / 2 
> files per swap file = 5 with no remainder), then no FlowFiles will move to 
> Active and all will remain in Swap.





[GitHub] nifi issue #1000: NIFI-2754

2016-09-12 Thread patricker
Github user patricker commented on the issue:

https://github.com/apache/nifi/pull/1000
  
@markap14 I changed the code per your suggestion and the test I wrote still 
worked as expected.
PR updated.




[jira] [Created] (NIFI-2758) Prevent moving Processor whose Controller Service reference would move out of scope

2016-09-12 Thread Matt Gilman (JIRA)
Matt Gilman created NIFI-2758:
-

 Summary: Prevent moving Processor whose Controller Service 
reference would move out of scope
 Key: NIFI-2758
 URL: https://issues.apache.org/jira/browse/NIFI-2758
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
 Fix For: 1.1.0


In Apache NiFi 1.0.0, Controller Services became scoped by Process Group. We 
need to add a check to ensure that a Processor isn't moved out of a Process 
Group in a way that leaves a referenced Controller Service out of scope.





[jira] [Resolved] (NIFI-2440) Add last modified time & timestamp attributes to flow files generated by ListSFTP processor

2016-09-12 Thread Joe Skora (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Skora resolved NIFI-2440.
-
   Resolution: Implemented
Fix Version/s: 1.1.0

Completed GitHub [PR913|https://github.com/apache/nifi/pull/913] on ASF commit 
[e258856|https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=e25885650a169ae2639355f2ba69a0405071188b].

> Add last modified time & timestamp attributes to flow files generated by 
> ListSFTP processor
> ---
>
> Key: NIFI-2440
> URL: https://issues.apache.org/jira/browse/NIFI-2440
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Kirk Tarou
>Assignee: Joe Skora
>Priority: Trivial
> Fix For: 1.1.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The timestamp & last modified time attributes are not exposed in ListSFTP so 
> there's no way to preserve the timestamp of the remotely collected files when 
> writing them out to a file.





[jira] [Commented] (NIFI-2754) FlowFiles Queue into Swap Only

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484018#comment-15484018
 ] 

ASF GitHub Bot commented on NIFI-2754:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1000
  
@patricker @mcgilman looking at the PR, I do think that this will resolve 
the specific use case that you're running into, Peter. I think, though, that it 
is addressing a very specific corner case and could actually be solved a little 
more generally. Instead of changing the number of swap files based on 
the specific condition of `activeQueue.size() == 0 && numSwapFiles > 0 && 
swapQueue.size() % SWAP_RECORD_POLL_SIZE == 0`, what do you think of just 
calling `migrateSwapToActive()` at the beginning of 
`writeSwapFilesIfNecessary()`? This should also ensure that we keep the 
same ordering guarantees that we have provided throughout the rest of the class.
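
A toy model of that suggestion (the method and field names mirror the comment 
above; everything else is an assumption, not NiFi's actual implementation):

```java
import java.util.ArrayDeque;
import java.util.Queue;

class SwapSketch {
    static final int SWAP_RECORD_POLL_SIZE = 2; // tiny value for illustration
    final Queue<String> activeQueue = new ArrayDeque<>();
    final Queue<String> swapQueue = new ArrayDeque<>();

    void writeSwapFilesIfNecessary() {
        // Run first, so an empty active queue is refilled even when
        // swapQueue.size() is an exact multiple of SWAP_RECORD_POLL_SIZE.
        migrateSwapToActive();
        int numSwapFiles = swapQueue.size() / SWAP_RECORD_POLL_SIZE;
        // ... write numSwapFiles files of SWAP_RECORD_POLL_SIZE records each ...
    }

    void migrateSwapToActive() {
        if (!activeQueue.isEmpty()) {
            return; // preserve ordering: only refill once active is drained
        }
        for (int i = 0; i < SWAP_RECORD_POLL_SIZE && !swapQueue.isEmpty(); i++) {
            activeQueue.add(swapQueue.poll());
        }
    }
}
```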


> FlowFiles Queue into Swap Only
> --
>
> Key: NIFI-2754
> URL: https://issues.apache.org/jira/browse/NIFI-2754
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Peter Wicks
>
> If the Active queue is empty and the number of FlowFiles added to the queue 
> is perfectly divisible by the current Swap size (10 Flow Files / 2 
> files per swap file = 5 with no remainder), then no FlowFiles will move to 
> Active and all will remain in Swap.





[GitHub] nifi issue #1000: NIFI-2754

2016-09-12 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1000
  
@patricker @mcgilman looking at the PR, I do think that this will resolve 
the specific use case that you're running into, Peter. I think, though, that it 
is addressing a very specific corner case and could actually be solved a little 
more generally. Instead of changing the number of swap files based on 
the specific condition of `activeQueue.size() == 0 && numSwapFiles > 0 && 
swapQueue.size() % SWAP_RECORD_POLL_SIZE == 0`, what do you think of just 
calling `migrateSwapToActive()` at the beginning of 
`writeSwapFilesIfNecessary()`? This should also ensure that we keep the 
same ordering guarantees that we have provided throughout the rest of the class.




[jira] [Updated] (NIFI-1756) RuntimeDelegate.class conflict.

2016-09-12 Thread Brandon Zachary (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Zachary updated NIFI-1756:
--
Affects Version/s: 0.6.1

> RuntimeDelegate.class conflict.
> --
>
> Key: NIFI-1756
> URL: https://issues.apache.org/jira/browse/NIFI-1756
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.6.1
>Reporter: Brandon Zachary
>Priority: Minor
>
> Hello,
> I'm creating a controller service that uses a Client object from 
> javax.ws.rs-api-2.0.1.jar. Whenever I go to enable my controller service I 
> get a ClassCastException: there is a jsr311-api-1.1.1.jar in the 
> nifi-framework-nar whose javax.ws.rs.ext.RuntimeDelegate.class is being cast 
> against the RuntimeDelegate in my javax JAR. Clearly the two don't mesh, as 
> the jsr311 version is out of date. I would prefer to have the client 
> instantiated in the controller service, but for now I'm creating it in the 
> processor; apparently the processor doesn't go through the same 
> nifi-framework calls that controller services do. Can that jar be updated to 
> a more recent implementation of javax.ws.rs?





[jira] [Updated] (NIFI-1756) RuntimeDelegate.class conflict.

2016-09-12 Thread Brandon Zachary (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Zachary updated NIFI-1756:
--
Priority: Minor  (was: Major)

> RuntimeDelegate.class conflict.
> --
>
> Key: NIFI-1756
> URL: https://issues.apache.org/jira/browse/NIFI-1756
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 0.6.1
>Reporter: Brandon Zachary
>Priority: Minor
>
> Hello,
> I'm creating a controller service that uses a Client object from 
> javax.ws.rs-api-2.0.1.jar. Whenever I go to enable my controller service I 
> get a ClassCastException: there is a jsr311-api-1.1.1.jar in the 
> nifi-framework-nar whose javax.ws.rs.ext.RuntimeDelegate.class is being cast 
> against the RuntimeDelegate in my javax JAR. Clearly the two don't mesh, as 
> the jsr311 version is out of date. I would prefer to have the client 
> instantiated in the controller service, but for now I'm creating it in the 
> processor; apparently the processor doesn't go through the same 
> nifi-framework calls that controller services do. Can that jar be updated to 
> a more recent implementation of javax.ws.rs?





[jira] [Commented] (NIFI-1971) Create a batch capable pseudo-whois ("netcat") enrichment Processor

2016-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483843#comment-15483843
 ] 

ASF GitHub Bot commented on NIFI-1971:
--

Github user trixpan commented on the issue:

https://github.com/apache/nifi/pull/858
  
@pvillard31 it is a bug caused by an extra newline. I missed it due to some 
simplifications made within the jUnit test. 

It is caused by the new line present in here:


https://github.com/apache/nifi/pull/858/files#diff-5c34d310642cc536b1c8b2f6a87c7043R223

This newline should only be added if the processor is operating in batch 
mode.

In summary the whois input should be:

`origin 123.36.123.1`

instead of 

```
origin
123.36.123.1
```

You can reproduce by doing a telnet to port 43 and sending the payloads 
above.

Will fix and re-upload soon


> Create a batch capable pseudo-whois ("netcat") enrichment Processor
> ---
>
> Key: NIFI-1971
> URL: https://issues.apache.org/jira/browse/NIFI-1971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andre
>Assignee: Andre
> Fix For: 1.1.0
>
>
> While the QueryDNS processor can be used for low- to medium-volume enrichment 
> and for licensed DNS-based lookups (e.g. commercial use of SpamHaus), many 
> enrichment providers prefer bulk queries through a pseudo-whois API 
> (a.k.a. netcat interface).
> As documented 
> [here|https://www.shadowserver.org/wiki/pmwiki.php/Services/IP-BGP#toc6], the 
> bulk interfaces work by connecting to port 43/TCP and sending a payload like:
> {code}
> begin origin
> 4.5.4.3
> 17.112.152.32
> 208.77.188.166
> end
> {code}
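
For illustration, a minimal sketch of that exchange in plain Java (the host 
name comes from the shadowserver service documented above; treat it and the 
sample IPs as assumptions):

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class BulkWhoisExample {
    public static void main(String[] args) throws IOException {
        try (Socket sock = new Socket("asn.shadowserver.org", 43);
             PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(sock.getInputStream()))) {
            // Send the bulk "netcat-style" payload shown above.
            out.println("begin origin");
            out.println("4.5.4.3");
            out.println("17.112.152.32");
            out.println("208.77.188.166");
            out.println("end");
            // The service answers with one enrichment record per queried IP.
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
{code}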





[jira] [Commented] (NIFI-2381) Connection Pooling Service - Drop invalid connections and create new ones

2016-09-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/NIFI-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483763#comment-15483763
 ] 

Carlos Manuel António Fernandes commented on NIFI-2381:
---

Toivo, I Will test and send feedback soon.

Carlos


> Connection Pooling Service - Drop invalid connections and create new ones 
> -
>
> Key: NIFI-2381
> URL: https://issues.apache.org/jira/browse/NIFI-2381
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.0
> Environment: all
>Reporter: Carlos Manuel António Fernandes
>Assignee: Toivo Adams
>
> The connections in the Connection Pooling Service can become invalid for 
> several reasons: session timeout, firewalls blocking idle connections, 
> outages of the backend server, etc.
> In the current NiFi releases these connections remain in the pool as if they 
> were healthy, but when the user uses one of them an error is raised by the 
> backend database. 
> Ex: org.netezza.error.NzSQLException: FATAL 1:  Connection Terminated - 
> session timeout exceeded
> With this improvement we intend to periodically test all the connections, 
> drop the invalid ones, create new ones, and keep the whole pool healthy.
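
For context, the validation being asked for maps onto settings that Apache 
Commons DBCP (the library behind NiFi's DBCPConnectionPool) already exposes; 
a minimal sketch using commons-dbcp2 (the URL and validation query are 
illustrative):

{code}
import org.apache.commons.dbcp2.BasicDataSource;

public class ValidatedPoolExample {
    public static void main(String[] args) {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:netezza://host:5480/db");
        ds.setValidationQuery("SELECT 1");
        ds.setTestOnBorrow(true);   // re-validate a connection before lending it
        ds.setTestWhileIdle(true);  // probe idle connections in the background
        ds.setTimeBetweenEvictionRunsMillis(60000); // evictor drops dead ones
    }
}
{code}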





[GitHub] nifi issue #858: NIFI-1971 - Introduce QueryWhois processor

2016-09-12 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/858
  
Hey @trixpan 

A few remarks. Regarding ``customValidate``, I'd suggest the following 
example:
```java
results.add(new ValidationResult.Builder()
        .input(validationContext.getProperty(BATCH_SIZE).getValue())
        .subject(QUERY_PARSER.getDisplayName())
        .explanation("NONE parser does not support batching. Configure Batch Size to 1 or use another parser.")
        .valid(false)
        .build());
```

This way instead of:
'' validated against 'QUERY_PARSER' is invalid because NONE parser...

I have:
'500' validated against 'Results Parser' is invalid because NONE parser...

Thoughts?

Also, when I set the following configuration:
https://cloud.githubusercontent.com/assets/11541012/18432639/5e696a8e-78e3-11e6-932c-a128eea3cc55.png

I have the following result:

--
Standard FlowFile Attributes
Key: 'entryDate'
Value: 'Mon Sep 12 12:14:31 CEST 2016'
Key: 'lineageStartDate'
Value: 'Mon Sep 12 12:14:31 CEST 2016'
Key: 'fileSize'
Value: '0'
FlowFile Attribute Map Content
Key: 'enrich.whois.record0.group0'
Value: 'Request must be in the form of 'origin a.b.c.d' or 'peer 
a.b.c.d' where a.b.c.d is a valid IPv4 address.
'
Key: 'filename'
Value: '2616257872984957'
Key: 'path'
Value: './'
Key: 'src.ip'
Value: '123.36.123.1'
Key: 'uuid'
Value: '0bc829ab-3423-4f8d-bed3-dd3c8ee7db23'
--


Is that expected? I'd assume (with the processor configuration) that the 
result is not parsed in the same way but that the request is still performed 
correctly, no?

Will continue reviewing later today.

