[jira] [Commented] (NIFI-4848) Update HttpComponents version

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355079#comment-16355079
 ] 

ASF GitHub Bot commented on NIFI-4848:
--

GitHub user ijokarumawak opened a pull request:

https://github.com/apache/nifi/pull/2453

NIFI-4848: Update HttpComponents version

Tested with unsecured/secured NiFi cluster, site-to-site, GetHTTP and 
DebugFlow. No regression was found.

- httpclient 4.5.3 -> 4.5.5
- httpcore 4.4.4 -> 4.4.9
  - The ThreadSafe annotation was removed in 4.4.5 (HTTPCLIENT-1743), so the
    annotation was removed from the DebugFlow processor.
- httpasyncclient 4.1.2 -> 4.1.3
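
Version bumps like those above are normally made in a Maven `<properties>` block so every module picks them up consistently. A minimal sketch (the property names here are illustrative assumptions, not necessarily the ones used in the NiFi root pom):

```xml
<!-- Sketch only: property names are assumed, not taken from the NiFi root pom -->
<properties>
    <httpclient.version>4.5.5</httpclient.version>
    <httpcore.version>4.4.9</httpcore.version>
    <httpasyncclient.version>4.1.3</httpasyncclient.version>
</properties>
```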

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijokarumawak/nifi nifi-4848

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2453.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2453


commit 36a8bd4bbeca4c640a8caf5ce091d0ba39796f0c
Author: Koji Kawamura 
Date:   2018-02-07T06:42:54Z

NIFI-4848: Update HttpComponents version

- httpclient 4.5.3 -> 4.5.5
- httpcore 4.4.4 -> 4.4.9
  - ThreadSafe annotation is removed since 4.4.5, HTTPCLIENT-1743.
Removed the annotation from DebugFlow processor.
- httpasyncclient 4.1.2 -> 4.1.3




> Update HttpComponents version
> -
>
> Key: NIFI-4848
> URL: https://issues.apache.org/jira/browse/NIFI-4848
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
>
> The following dependencies should be updated to the latest GA releases:
> httpclient 4.5.3 -> 4.5.5
> httpcore 4.4.4 -> 4.4.9
> httpasyncclient 4.1.2 -> 4.1.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (NIFI-4848) Update HttpComponents version

2018-02-06 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-4848:

Summary: Update HttpComponents version  (was: Bump HttpComponents version)

> Update HttpComponents version
> -
>
> Key: NIFI-4848
> URL: https://issues.apache.org/jira/browse/NIFI-4848
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
>
> The following dependencies should be updated to the latest GA releases:
> httpclient 4.5.3 -> 4.5.5
> httpcore 4.4.4 -> 4.4.9
> httpasyncclient 4.1.2 -> 4.1.3





[jira] [Created] (NIFI-4848) Bump HttpComponents version

2018-02-06 Thread Koji Kawamura (JIRA)
Koji Kawamura created NIFI-4848:
---

 Summary: Bump HttpComponents version
 Key: NIFI-4848
 URL: https://issues.apache.org/jira/browse/NIFI-4848
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Koji Kawamura
Assignee: Koji Kawamura


The following dependencies should be updated to the latest GA releases:

httpclient 4.5.3 -> 4.5.5
httpcore 4.4.4 -> 4.4.9
httpasyncclient 4.1.2 -> 4.1.3





[jira] [Assigned] (NIFI-4573) Improve error messaging when users do not enter password for flow encryption migration

2018-02-06 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto reassigned NIFI-4573:
---

Assignee: Andy LoPresto

> Improve error messaging when users do not enter password for flow encryption 
> migration
> --
>
> Key: NIFI-4573
> URL: https://issues.apache.org/jira/browse/NIFI-4573
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.2.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: ambari, configuration, encryption, error, security, ux
>
> Multiple users have reported performing an upgrade via Apache Ambari and 
> getting a stacktrace with "pad block corrupted" during the encrypted 
> configuration tool operation. This underlying exception indicates the key 
> used to perform decryption of some cipher text is not correct. We should 
> improve the error messaging to direct users to the probable cause (in this 
> case, not entering the correct decryption key in the Ambari configuration 
> page). The code technically "works as expected," but the user experience 
> can be improved. 
> {code}
> The error says "pad block corrupted"
> 2017/10/07 12:30:39 ERROR main org.apache.nifi.properties.ConfigEncryptionTool: Encountered an error 
> javax.crypto.BadPaddingException: pad block corrupted 
> at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher$BufferedGenericBlockCipher.doFinal(Unknown Source) 
> at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineDoFinal(Unknown Source) 
> at javax.crypto.Cipher.doFinal(Cipher.java:2165) 
> at javax.crypto.Cipher$doFinal$2.call(Unknown Source) 
> at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) 
> at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) 
> at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) 
> at org.apache.nifi.properties.ConfigEncryptionTool.decryptFlowElement(ConfigEncryptionTool.groovy:541) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
> at java.lang.reflect.Method.invoke(Method.java:498) 
> at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) 
> at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) 
> at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:384) 
> at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019) 
> at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.callCurrent(PogoMetaClassSite.java:69) 
> at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:52) 
> at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:154) 
> at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:190) 
> at org.apache.nifi.properties.ConfigEncryptionTool$_migrateFlowXmlContent_closure4.doCall(ConfigEncryptionTool.groovy:636) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
> at java.lang.reflect.Method.invoke(Method.java:498) 
> at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) 
> at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) 
> at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294) 
> at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019) 
> at groovy.lang.Closure.call(Closure.java:426) 
> at groovy.lang.Closure.call(Closure.java:442) 
> at org.codehaus.groovy.runtime.StringGroovyMethods.getReplacement(StringGroovyMethods.java:1543) 
> at org.codehaus.groovy.runtime.StringGroovyMethods.replaceAll(StringGroovyMethods.java:2580) 
> at org.codehaus.groovy.runtime.StringGroovyMethods.replaceAll(StringGroovyMethods.java:2506) 
> at org.codehaus.groovy.runtime.dgm$1127.invoke(Unknown Source) 
> at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:274) 
> at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56) 
> at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) 
> at 

[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354881#comment-16354881
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166504892
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---
@@ -249,34 +260,56 @@ public void setup(final ProcessContext context, boolean shouldCleanCache, FlowFi
             return;
         }
 
-        // Try to fill the columnTypeMap with the types of the desired max-value columns
-        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
-        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        // Try to fill the columnTypeMap with the types of the desired max-value columns
+        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
+        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        final String sqlQuery = context.getProperty(SQL_QUERY).evaluateAttributeExpressions().getValue();
 
         final DatabaseAdapter dbAdapter = dbAdapters.get(context.getProperty(DB_TYPE).getValue());
         try (final Connection con = dbcpService.getConnection();
              final Statement st = con.createStatement()) {
 
-            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
-            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
-            // approach as in Apache Drill
-            String query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
-            ResultSet resultSet = st.executeQuery(query);
-            ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
-            int numCols = resultSetMetaData.getColumnCount();
-            if (numCols > 0) {
-                if (shouldCleanCache) {
-                    columnTypeMap.clear();
-                }
-                for (int i = 1; i <= numCols; i++) {
-                    String colName = resultSetMetaData.getColumnName(i).toLowerCase();
-                    String colKey = getStateKey(tableName, colName);
-                    int colType = resultSetMetaData.getColumnType(i);
-                    columnTypeMap.putIfAbsent(colKey, colType);
+            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
+            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
+            // approach as in Apache Drill
+            String query;
+
+            if (StringUtils.isEmpty(sqlQuery)) {
+                query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
+            } else {
+                StringBuilder sbQuery = getWrappedQuery(sqlQuery, tableName);
--- End diff --

I agree with you on avoiding any additional SQL-building logic. How about 
adding one more condition at [AbstractDatabaseFetchProcessor.setup, where it 
populates 
columnTypeMap](https://github.com/apache/nifi/blob/90d7926907b87a832407573ce20bd7ac5ba56bf9/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java#L295
), to filter out columns that are not specified by 'Max Value Columns'? That 
way, we don't have to modify the SQL statement, but we can still minimize the 
number of columns stored in state. What do you think?

Specifically, the following lines of code:
```
// This part adds all columns into columnTypeMap for the custom query. We want
// to capture maxValueColumns only. The maxValueColumnNameList below can be
// used to do so.
for (int i = 1; i <= numCols; i++) {
    String colName = resultSetMetaData.getColumnName(i).toLowerCase();
    String colKey = getStateKey(tableName, colName);
    int colType = resultSetMetaData.getColumnType(i);
    columnTypeMap.putIfAbsent(colKey, colType);
}

List<String> maxValueColumnNameList = Arrays.asList(maxValueColumnNames.split(","));

for (String maxValueColumn : maxValueColumnNameList) {
    String colKey = getStateKey(tableName, maxValueColumn.trim().toLowerCase());
```
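
The filtering being proposed — caching types only for the 'Max Value Columns' rather than every result column — can be sketched in isolation as follows. This is a plain-Java mock, not the actual NiFi code: `getStateKey` and the column/type arrays are stand-ins for `AbstractDatabaseFetchProcessor` internals and `ResultSetMetaData`.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MaxValueColumnFilter {

    // Stand-in for AbstractDatabaseFetchProcessor.getStateKey(table, col)
    static String getStateKey(String table, String col) {
        return table + "@!@" + col;
    }

    // Populate a column-type cache, but only for columns listed in maxValueColumnNames
    static Map<String, Integer> filterColumnTypes(String tableName, String[] resultColumns,
                                                  int[] resultTypes, String maxValueColumnNames) {
        Map<String, Integer> columnTypeMap = new ConcurrentHashMap<>();
        List<String> maxValueColumnNameList = Arrays.asList(maxValueColumnNames.split(","));
        for (int i = 0; i < resultColumns.length; i++) {
            String colName = resultColumns[i].toLowerCase();
            boolean isMaxValueColumn = maxValueColumnNameList.stream()
                    .anyMatch(c -> c.trim().toLowerCase().equals(colName));
            if (isMaxValueColumn) {
                // Cache the JDBC type only for the max-value columns
                columnTypeMap.putIfAbsent(getStateKey(tableName, colName), resultTypes[i]);
            }
        }
        return columnTypeMap;
    }

    public static void main(String[] args) {
        Map<String, Integer> types = filterColumnTypes(
                "invoices",
                new String[] {"id", "customer", "updated_at"}, // as ResultSetMetaData would report
                new int[] {4, 12, 93},                         // INTEGER, VARCHAR, TIMESTAMP
                "id, updated_at");                             // 'Max Value Columns' property value
        System.out.println(types.size()); // prints 2: only id and updated_at are cached
    }
}
```

The point of the sketch is that state size stays proportional to the number of max-value columns, not the width of the custom query's result set.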



[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354859#comment-16354859
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166500502
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---
@@ -249,34 +260,56 @@ public void setup(final ProcessContext context, boolean shouldCleanCache, FlowFi
             return;
         }
 
-        // Try to fill the columnTypeMap with the types of the desired max-value columns
-        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
-        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        // Try to fill the columnTypeMap with the types of the desired max-value columns
+        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
+        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        final String sqlQuery = context.getProperty(SQL_QUERY).evaluateAttributeExpressions().getValue();
 
         final DatabaseAdapter dbAdapter = dbAdapters.get(context.getProperty(DB_TYPE).getValue());
         try (final Connection con = dbcpService.getConnection();
             final Statement st = con.createStatement()) {
 
-            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
-            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
-            // approach as in Apache Drill
-            String query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
-            ResultSet resultSet = st.executeQuery(query);
-            ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
-            int numCols = resultSetMetaData.getColumnCount();
-            if (numCols > 0) {
-                if (shouldCleanCache) {
-                    columnTypeMap.clear();
-                }
-                for (int i = 1; i <= numCols; i++) {
-                    String colName = resultSetMetaData.getColumnName(i).toLowerCase();
-                    String colKey = getStateKey(tableName, colName);
-                    int colType = resultSetMetaData.getColumnType(i);
-                    columnTypeMap.putIfAbsent(colKey, colType);
+            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
+            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
+            // approach as in Apache Drill
+            String query;
+
+            if (StringUtils.isEmpty(sqlQuery)) {
+                query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
+            } else {
+                StringBuilder sbQuery = getWrappedQuery(sqlQuery, tableName);
--- End diff --

I've checked in fixes for everything except this change. I don't want to put 
in any more SQL-building logic than I already have hard-coded into QDB. What 
if I added a new method to `DatabaseAdapter` for wrapping a `SELECT` statement 
as a sub-query? The input parameters would be similar to those of the existing 
method for building a SELECT statement: column list, where clause, order by 
clause.

> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>Priority: Major
>  Labels: features
>
> The QueryDatabaseTable is able to observe a configured database table for new 
> rows and yield these into the FlowFile. The model of an RDBMS, however, is 
> often (if not always) normalized, so you would need to join various tables in 
> order to "flatten" the data into useful events for a processing pipeline, as 
> can be built with NiFi or various tools within the Hadoop ecosystem.
> The request is to extend the processor to accept an arbitrary SQL query 
> instead of specifying the table name + columns.



[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354854#comment-16354854
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user patricker commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166499522
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---
@@ -150,8 +151,13 @@ public QueryDatabaseTable() {
         final List<PropertyDescriptor> pds = new ArrayList<>();
         pds.add(DBCP_SERVICE);
         pds.add(DB_TYPE);
-        pds.add(TABLE_NAME);
+        pds.add(new PropertyDescriptor.Builder()
+                .fromPropertyDescriptor(TABLE_NAME)
+                .description("The name of the database table to be queried. When a custom query is used, this property is used to alias the query and appears as an attribute on the FlowFile.")
+                .build());
--- End diff --

Good catch.


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>Priority: Major
>  Labels: features
>
> The QueryDatabaseTable is able to observe a configured database table for new 
> rows and yield these into the FlowFile. The model of an RDBMS, however, is 
> often (if not always) normalized, so you would need to join various tables in 
> order to "flatten" the data into useful events for a processing pipeline, as 
> can be built with NiFi or various tools within the Hadoop ecosystem.
> The request is to extend the processor to accept an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards, but mainly because huge databases may not be able to return 
> so many records within a reasonable time.







[GitHub] nifi pull request #2425: Emit failures array

2018-02-06 Thread martin-mucha
Github user martin-mucha commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2425#discussion_r166457681
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateRecord.java
 ---
@@ -242,11 +279,12 @@ public void onTrigger(final ProcessContext context, 
final ProcessSession session
 final boolean allowExtraFields = 
context.getProperty(ALLOW_EXTRA_FIELDS).asBoolean();
 final boolean strictTypeChecking = 
context.getProperty(STRICT_TYPE_CHECKING).asBoolean();
 
-RecordSetWriter validWriter = null;
-RecordSetWriter invalidWriter = null;
 FlowFile validFlowFile = null;
 FlowFile invalidFlowFile = null;
 
+final List<Record> validRecords = new LinkedList<>();
--- End diff --

Understood, but one question. I did all this refactoring to get rid of the 
'surprising' complexity of the code. Now, if I do "writer.write(record);", won't the 
given record be held on the heap until completeFlowFile is called? Where is the 
FlowFile stored until it is 'completed'? If it's held outside of the heap, then all this 
refactoring is invalid, indeed. If it's also on the heap ...


---


[jira] [Commented] (NIFI-4816) Changes to ReportingTask name are not available to the ReportingTask

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354555#comment-16354555
 ] 

ASF GitHub Bot commented on NIFI-4816:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2452
  
The idea for storing off the name for all reporting tasks is to support 
#2431 , in fact that's where this behavior was discovered.

I like the @OnScheduled setName() idea, I may call it something more 
specific (rather than a bean method) since it will take a context. Will make 
the change, thanks!


> Changes to ReportingTask name are not available to the ReportingTask
> 
>
> Key: NIFI-4816
> URL: https://issues.apache.org/jira/browse/NIFI-4816
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> The Reporting Task name is only set on the ReportingTask itself during 
> initialize(), which is only called the first time the ReportingTask is 
> instantiated. This means if you change the name of the ReportingTask and 
> restart it, the ReportingTask has its original name and the current name is 
> inaccessible via the ConfigurationContext it is passed later. If you restart 
> NiFi, the new name is set and stays that way.
> Rather than calling initialize() more than once, it is proposed to make the 
> current name (and any other appropriate properties) available perhaps via 
> ConfigurationContext which is passed to methods annotated with OnScheduled.





[GitHub] nifi issue #2452: NIFI-4816: Allow name to be updated for ReportingTasks

2018-02-06 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2452
  
The idea for storing off the name for all reporting tasks is to support 
#2431 , in fact that's where this behavior was discovered.

I like the @OnScheduled setName() idea, I may call it something more 
specific (rather than a bean method) since it will take a context. Will make 
the change, thanks!


---


[jira] [Commented] (NIFI-528) Add support to specify a timeout on ExecuteStreamCommand, ExecuteScript, and ExecuteProcess

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354511#comment-16354511
 ] 

ASF GitHub Bot commented on NIFI-528:
-

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1755
  
I wonder if this was done on Windows with the Git options not being set to 
"checkout w/ Windows line encoding, check in with Unix."


> Add support to specify a timeout on ExecuteStreamCommand, ExecuteScript, and 
> ExecuteProcess 
> 
>
> Key: NIFI-528
> URL: https://issues.apache.org/jira/browse/NIFI-528
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Michael Wagner
>Priority: Major
>  Labels: beginner, processor
>
> Should be a way to specify a timeout on ExecuteStreamCommand, ExecuteScript, 
> and ExecuteProcess. If the script hung for some reason, kill the process and 
> move on, send the timeout flowfile to a failure route. 
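The requested behavior maps naturally onto the JDK's process API: wait with a bound, then kill on overrun. A hedged sketch using plain java.lang.Process (not tied to the processors named above; the `sleep` command is Unix-specific and used only as an illustrative long-running process):

```java
import java.util.concurrent.TimeUnit;

// Minimal sketch of the requested timeout behavior: run an external command,
// wait up to a limit, and forcibly kill the process if it overruns.
public class ProcessTimeoutDemo {
    public static boolean runWithTimeout(ProcessBuilder builder, long timeout, TimeUnit unit)
            throws Exception {
        Process process = builder.start();
        if (!process.waitFor(timeout, unit)) {   // did not finish within the timeout
            process.destroyForcibly();           // kill the process and move on
            process.waitFor();                   // reap the killed process
            return false;                        // caller can route to a failure relationship
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        // A 5-second sleep against a 1-second timeout overruns and gets killed.
        boolean finished = runWithTimeout(new ProcessBuilder("sleep", "5"), 1, TimeUnit.SECONDS);
        System.out.println(finished ? "completed" : "timed out");  // prints "timed out"
    }
}
```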





[GitHub] nifi issue #1755: NIFI-528 add support to specify timeout in ExecuteProcess

2018-02-06 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/1755
  
I wonder if this was done on Windows with the Git options not being set to 
"checkout w/ Windows line encoding, check in with Unix."


---


[jira] [Updated] (NIFI-4834) ConsumeJMS does not scale when given more than 1 thread

2018-02-06 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4834:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ConsumeJMS does not scale when given more than 1 thread
> ---
>
> Key: NIFI-4834
> URL: https://issues.apache.org/jira/browse/NIFI-4834
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.6.0
>
>
> When I run ConsumeJMS against a local broker, the performance is great. 
> However, if I run against a broker that is running remotely with a 75 ms 
> round trip time (i.e., somewhat high latency), then the performance is pretty 
> poor, allowing me to receive only about 30-40 msgs/sec (1-2 MB/sec).
> Increasing the number of threads should result in multiple connections to the 
> JMS Broker, which would provide better throughput. However, when I increase 
> the number of Concurrent Tasks to 10, I see 10 consumers but only a single 
> connection being created, so the throughput is no better (in fact it's a bit 
> slower due to added lock contention).





[jira] [Commented] (NIFI-4841) NPE when reverting local modifications to a versioned process group

2018-02-06 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354372#comment-16354372
 ] 

Bryan Bende commented on NIFI-4841:
---

Was able to reproduce the NPE only if the RPG has transmission enabled, testing 
a fix now and will hopefully have a PR up soon.

The other issue I mentioned above is separate; I will create a different Jira for 
it, and it will not be part of this fix.

> NPE when reverting local modifications to a versioned process group
> ---
>
> Key: NIFI-4841
> URL: https://issues.apache.org/jira/browse/NIFI-4841
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Charlie Meyer
>Priority: Major
> Attachments: NIFI-4841.xml
>
>
> I created a process group via importing from the registry. I then made a few 
> modifications including settings properties and connecting some components. I 
> then attempted to revert my local changes so I could update the flow to a 
> newer version. When reverting the local changes, NiFi threw a NPE with the 
> following stack trace:
> {noformat}
> 2018-02-05 17:18:52,356 INFO [Version Control Update Thread-1] 
> org.apache.nifi.web.api.VersionsResource Stopping 1 Processors
> 2018-02-05 17:18:52,477 ERROR [Version Control Update Thread-1] 
> org.apache.nifi.web.api.VersionsResource Failed to update flow to new version
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.web.dao.impl.StandardProcessGroupDAO.scheduleComponents(StandardProcessGroupDAO.java:179)
>   at 
> org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$FastClassBySpringCGLIB$$10a99b47.invoke()
>   at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
>   at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
>   at 
> org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$EnhancerBySpringCGLIB$$bc287b8b.scheduleComponents()
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade$3.update(StandardNiFiServiceFacade.java:981)
>   at 
> org.apache.nifi.web.revision.NaiveRevisionManager.updateRevision(NaiveRevisionManager.java:120)
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade.scheduleComponents(StandardNiFiServiceFacade.java:976)
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
>   at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>   at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>   at 
> org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithWriteLock(NiFiServiceFacadeLock.java:173)
>   at 
> org.apache.nifi.web.NiFiServiceFacadeLock.scheduleLock(NiFiServiceFacadeLock.java:102)
>   at sun.reflect.GeneratedMethodAccessor557.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:629)
>   at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:618)
>   at 
> org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
>   at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
>   at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
>   at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
>   at 
> org.apache.nifi.web.StandardNiFiServiceFacade$$EnhancerBySpringCGLIB$$8a758fa4.scheduleComponents()
>   at 
> org.apache.nifi.web.util.LocalComponentLifecycle.stopComponents(LocalComponentLifecycle.java:125)
>   at 
> 

[jira] [Commented] (NIFIREG-140) Nifi Registry not able to start - NoClassDefFoundError org/apache/nifi/registry/util/FileUtils

2018-02-06 Thread Gaurang Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354348#comment-16354348
 ] 

Gaurang Shah commented on NIFIREG-140:
--

[~bende] it's totally empty. 

> Nifi Registry not able to start - NoClassDefFoundError 
> org/apache/nifi/registry/util/FileUtils
> --
>
> Key: NIFIREG-140
> URL: https://issues.apache.org/jira/browse/NIFIREG-140
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Gaurang Shah
>Priority: Major
>
> While trying to start the NiFi Registry I am getting the following error.
> nifi registry version: 0.1.0
>  
> {code:java}
> 2018-02-06 00:11:52,665 INFO [main] org.apache.nifi.registry.NiFiRegistry 
> Launching NiFi Registry...
> 2018-02-06 00:11:52,676 INFO [main] org.apache.nifi.registry.NiFiRegistry 
> Read property protection key from conf/bootstrap.conf
> 2018-02-06 00:11:52,799 INFO [main] o.a.n.r.security.crypto.CryptoKeyLoader 
> No encryption key present in the bootstrap.conf file at 
> C:\nifi-registry-0.1.0-bin\nifi-registry-0.1.0\conf\bootstrap.conf
> 2018-02-06 00:11:52,807 INFO [main] o.a.n.r.p.NiFiRegistryPropertiesLoader 
> Loaded 26 properties from 
> C:\nifi-registry-0.1.0-bin\nifi-registry-0.1.0\conf\nifi-registry.properties
> 2018-02-06 00:11:52,811 INFO [main] org.apache.nifi.registry.NiFiRegistry 
> Loaded 26 properties
> 2018-02-06 00:11:52,813 INFO [main] org.apache.nifi.registry.NiFiRegistry 
> NiFi Registry started without Bootstrap Port information provided; will not 
> listen for requests from Bootstrap
> 2018-02-06 00:11:52,820 ERROR [main] org.apache.nifi.registry.NiFiRegistry 
> Failure to launch NiFi Registry due to java.lang.NoClassDefFoundError: 
> org/apache/nifi/registry/util/FileUtils
> java.lang.NoClassDefFoundError: org/apache/nifi/registry/util/FileUtils
> at org.apache.nifi.registry.NiFiRegistry.(NiFiRegistry.java:97) 
> ~[nifi-registry-runtime-0.1.0.jar:0.1.0]
> at org.apache.nifi.registry.NiFiRegistry.main(NiFiRegistry.java:158) 
> ~[nifi-registry-runtime-0.1.0.jar:0.1.0]
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.nifi.registry.util.FileUtils
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_161]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_161]
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) 
> ~[na:1.8.0_161]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_161]
> ... 2 common frames omitted
> 2018-02-06 00:11:52,824 INFO [Thread-1] org.apache.nifi.registry.NiFiRegistry 
> Initiating shutdown of Jetty web server...
> 2018-02-06 00:11:52,824 INFO [Thread-1] org.apache.nifi.registry.NiFiRegistry 
> Jetty web server shutdown completed (nicely or otherwise).
> {code}





[jira] [Commented] (MINIFICPP-382) Add SUSE support to bootstrap process.

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354340#comment-16354340
 ] 

ASF GitHub Bot commented on MINIFICPP-382:
--

GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/260

MINIFICPP-382: Implement SUSE release support for SUSE and SLES12

Signed-off-by: Marc Parisi 

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-382

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/260.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #260


commit 8e7b2b7c477c9da28a71b0663c6834cadaf2e38a
Author: Marc Parisi 
Date:   2018-01-24T14:05:19Z

MINIFICPP-382: Implement SUSE release support for SUSE and SLES12

Signed-off-by: Marc Parisi 




> Add SUSE support to bootstrap process. 
> ---
>
> Key: MINIFICPP-382
> URL: https://issues.apache.org/jira/browse/MINIFICPP-382
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>
> Add support to bootstrap process. 
>  
> Currently have tested on OpenSUSE and SLES12. 
>  
> SLES12/OpenSUSE – built and tested, verifying SiteToSite
> SLES11 – TBD





[GitHub] nifi-minifi-cpp pull request #260: MINIFICPP-382: Implement SUSE release sup...

2018-02-06 Thread phrocker
GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/260

MINIFICPP-382: Implement SUSE release support for SUSE and SLES12

Signed-off-by: Marc Parisi 

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-382

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/260.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #260


commit 8e7b2b7c477c9da28a71b0663c6834cadaf2e38a
Author: Marc Parisi 
Date:   2018-01-24T14:05:19Z

MINIFICPP-382: Implement SUSE release support for SUSE and SLES12

Signed-off-by: Marc Parisi 




---


[jira] [Commented] (NIFI-4816) Changes to ReportingTask name are not available to the ReportingTask

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354314#comment-16354314
 ] 

ASF GitHub Bot commented on NIFI-4816:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2452
  
@mattyb149 thanks for the update. This is definitely something that I think 
we should have added at the start. I'm curious about why the update to all of 
the reporting tasks, though. None of them make use of the name, as far as I can 
tell, so why bother storing it off? Moreover, if we do want to store it off, then 
why not just have the setName() method have an @OnScheduled annotation and take 
in the ConfigurationContext. Then the abstract impl will automatically have the 
name set. It would not necessarily be available if being called from other 
@OnScheduled methods, but I think that's okay as long as it is documented - in 
such a case, you should call the method on the ReportingContext itself.


> Changes to ReportingTask name are not available to the ReportingTask
> 
>
> Key: NIFI-4816
> URL: https://issues.apache.org/jira/browse/NIFI-4816
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> The Reporting Task name is only set on the ReportingTask itself during 
> initialize(), which is only called the first time the ReportingTask is 
> instantiated. This means if you change the name of the ReportingTask and 
> restart it, the ReportingTask has its original name and the current name is 
> inaccessible via the ConfigurationContext it is passed later. If you restart 
> NiFi, the new name is set and stays that way.
> Rather than calling initialize() more than once, it is proposed to make the 
> current name (and any other appropriate properties) available perhaps via 
> ConfigurationContext which is passed to methods annotated with OnScheduled.





[GitHub] nifi issue #2452: NIFI-4816: Allow name to be updated for ReportingTasks

2018-02-06 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2452
  
@mattyb149 thanks for the update. This is definitely something that I think 
we should have added at the start. I'm curious about why the update to all of 
the reporting tasks, though. None of them make use of the name, as far as I can 
tell, so why bother storing it off? Moreover, if we do want to store it off, then 
why not just have the setName() method have an @OnScheduled annotation and take 
in the ConfigurationContext. Then the abstract impl will automatically have the 
name set. It would not necessarily be available if being called from other 
@OnScheduled methods, but I think that's okay as long as it is documented - in 
such a case, you should call the method on the ReportingContext itself.


---
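The @OnScheduled setName() idea discussed above can be sketched end to end: the framework invokes every @OnScheduled-annotated method with the context before the task runs, so an abstract base class picks up the current name on each (re)start rather than only during initialize(). Everything below (the annotation, ConfigurationContext, and scheduler loop) is a simplified stand-in for illustration, not the real NiFi framework.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Stand-in for NiFi's lifecycle annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface OnScheduled {}

// Stand-in context; getName() is an assumed accessor for the current display name.
interface ConfigurationContext {
    String getName();
}

abstract class AbstractReportingTask {
    private volatile String name;

    @OnScheduled
    public void setName(ConfigurationContext context) {
        this.name = context.getName(); // refreshed on every schedule, not just initialize()
    }

    public String getName() {
        return name;
    }
}

class DemoTask extends AbstractReportingTask {}

public class OnScheduledDemo {
    // Mimics the framework: invoke every @OnScheduled method with the context.
    public static void schedule(Object task, ConfigurationContext context) throws Exception {
        for (Method m : task.getClass().getMethods()) {
            if (m.isAnnotationPresent(OnScheduled.class)) {
                m.invoke(task, context);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        DemoTask task = new DemoTask();
        // The user renamed the task, then restarted it; scheduling refreshes the name.
        schedule(task, () -> "Renamed Task");
        System.out.println(task.getName()); // prints "Renamed Task"
    }
}
```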


[jira] [Commented] (MINIFICPP-394) Implement MQTT C2 protocol

2018-02-06 Thread marco polo (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354273#comment-16354273
 ] 

marco polo commented on MINIFICPP-394:
--

[https://cwiki.apache.org/confluence/display/MINIFI/C2+Design+Proposal] should 
be updated with details. 

> Implement MQTT C2 protocol 
> ---
>
> Key: MINIFICPP-394
> URL: https://issues.apache.org/jira/browse/MINIFICPP-394
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>Priority: Major
>
> Implement MQTT C2 protocol as we likely won't want to use the REST protocol 
> long term





[jira] [Commented] (MINIFICPP-393) add security support for MQTT

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354238#comment-16354238
 ] 

ASF GitHub Bot commented on MINIFICPP-393:
--

Github user minifirocks commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/259#discussion_r16631
  
--- Diff: extensions/mqtt/ConsumeMQTT.cpp ---
@@ -35,7 +35,7 @@ namespace nifi {
 namespace minifi {
 namespace processors {
 
-core::Property ConsumeMQTT::MaxQueueSize("Max Flow Segment Size", "Maximum 
flow content payload segment size for the MQTT record", "");
+core::Property ConsumeMQTT::MaxQueueSize("Max Queue Size", "Maximum 
receive queue size for the MQTT record", "");
--- End diff --

Fixed in latest commit


> add security support for MQTT
> -
>
> Key: MINIFICPP-393
> URL: https://issues.apache.org/jira/browse/MINIFICPP-393
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: bqiu
>Assignee: bqiu
>Priority: Minor
> Fix For: 1.0.0
>
>
> add security support for MQTT





[GitHub] nifi-minifi-cpp pull request #259: MINIFICPP-393: Add security support for M...

2018-02-06 Thread minifirocks
Github user minifirocks commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/259#discussion_r16631
  
--- Diff: extensions/mqtt/ConsumeMQTT.cpp ---
@@ -35,7 +35,7 @@ namespace nifi {
 namespace minifi {
 namespace processors {
 
-core::Property ConsumeMQTT::MaxQueueSize("Max Flow Segment Size", "Maximum 
flow content payload segment size for the MQTT record", "");
+core::Property ConsumeMQTT::MaxQueueSize("Max Queue Size", "Maximum 
receive queue size for the MQTT record", "");
--- End diff --

Fixed in latest commit


---


[jira] [Updated] (NIFI-4816) Changes to ReportingTask name are not available to the ReportingTask

2018-02-06 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4816:
---
Status: Patch Available  (was: In Progress)

> Changes to ReportingTask name are not available to the ReportingTask
> 
>
> Key: NIFI-4816
> URL: https://issues.apache.org/jira/browse/NIFI-4816
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> The Reporting Task name is only set on the ReportingTask itself during 
> initialize(), which is only called the first time the ReportingTask is 
> instantiated. This means if you change the name of the ReportingTask and 
> restart it, the ReportingTask has its original name and the current name is 
> inaccessible via the ConfigurationContext it is passed later. If you restart 
> NiFi, the new name is set and stays that way.
> Rather than calling initialize() more than once, it is proposed to make the 
> current name (and any other appropriate properties) available perhaps via 
> ConfigurationContext which is passed to methods annotated with OnScheduled.





[jira] [Commented] (NIFI-4816) Changes to ReportingTask name are not available to the ReportingTask

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354204#comment-16354204
 ] 

ASF GitHub Bot commented on NIFI-4816:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2452

NIFI-4816: Allow name to be updated for ReportingTasks

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-4816

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2452.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2452


commit 32d8069a053e2468bd8d6bf7232954572732e391
Author: Matthew Burgess 
Date:   2018-02-06T17:11:45Z

NIFI-4816: Allow name to be updated for ReportingTasks




> Changes to ReportingTask name are not available to the ReportingTask
> 
>
> Key: NIFI-4816
> URL: https://issues.apache.org/jira/browse/NIFI-4816
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> The Reporting Task name is only set on the ReportingTask itself during 
> initialize(), which is only called the first time the ReportingTask is 
> instantiated. This means if you change the name of the ReportingTask and 
> restart it, the ReportingTask has its original name and the current name is 
> inaccessible via the ConfigurationContext it is passed later. If you restart 
> NiFi, the new name is set and stays that way.
> Rather than calling initialize() more than once, it is proposed to make the 
> current name (and any other appropriate properties) available perhaps via 
> ConfigurationContext which is passed to methods annotated with OnScheduled.





[GitHub] nifi pull request #2452: NIFI-4816: Allow name to be updated for ReportingTa...

2018-02-06 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2452

NIFI-4816: Allow name to be updated for ReportingTasks

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-4816

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2452.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2452


commit 32d8069a053e2468bd8d6bf7232954572732e391
Author: Matthew Burgess 
Date:   2018-02-06T17:11:45Z

NIFI-4816: Allow name to be updated for ReportingTasks




---


[jira] [Assigned] (NIFI-4816) Changes to ReportingTask name are not available to the ReportingTask

2018-02-06 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-4816:
--

Assignee: Matt Burgess

> Changes to ReportingTask name are not available to the ReportingTask
> 
>
> Key: NIFI-4816
> URL: https://issues.apache.org/jira/browse/NIFI-4816
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> The ReportingTask name is only set on the ReportingTask itself during 
> initialize(), which is only called the first time the ReportingTask is 
> instantiated. This means that if you change the name of the ReportingTask and 
> restart it, the ReportingTask keeps its original name, and the current name is 
> inaccessible via the ConfigurationContext it is passed later. If you restart 
> NiFi, the new name is set and stays that way.
> Rather than calling initialize() more than once, it is proposed to make the 
> current name (and any other appropriate properties) available, perhaps via the 
> ConfigurationContext that is passed to methods annotated with @OnScheduled.
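The proposal above can be modeled in a short Python sketch. The class and method names here are hypothetical stand-ins, not the actual NiFi API: initialize() captures the name once, while the scheduled hook refreshes it from the context on every (re)start:

```python
class ConfigurationContext:
    """Hypothetical stand-in for NiFi's ConfigurationContext."""
    def __init__(self, name):
        self.name = name  # proposed: expose the *current* component name

class ReportingTask:
    def __init__(self):
        self.name = None

    def initialize(self, name):
        # Called only once, when the task is first instantiated.
        self.name = name

    def on_scheduled(self, context):
        # Proposed fix: refresh the name from the context on every (re)start
        # instead of relying on the value captured by initialize().
        self.name = context.name

task = ReportingTask()
task.initialize("Original Name")
# User renames the task and restarts it; initialize() is NOT called again,
# but the scheduled hook sees the new name via the context.
task.on_scheduled(ConfigurationContext("Renamed Task"))
```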



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-394) Implement MQTT C2 protocol

2018-02-06 Thread marco polo (JIRA)
marco polo created MINIFICPP-394:


 Summary: Implement MQTT C2 protocol 
 Key: MINIFICPP-394
 URL: https://issues.apache.org/jira/browse/MINIFICPP-394
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: marco polo


Implement MQTT C2 protocol as we likely won't want to use the REST protocol 
long term





[jira] [Commented] (NIFI-4080) ValidateCSV - Add support for Expression Language

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354111#comment-16354111
 ] 

ASF GitHub Bot commented on NIFI-4080:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2226
  
We don't need to get rid of those tests; rather, I think that if no EL is 
present, we should try parsing the schema for validity. It will be 
reparsed in onTrigger() anyway (whether it has EL or not).
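The suggested validation logic can be sketched in Python. This is a model only; NiFi's property validation hooks and Expression Language detection differ, and the schema grammar here is invented for illustration: validate eagerly when the value contains no EL, defer when it does:

```python
import re

def contains_el(value):
    # Rough stand-in for NiFi's Expression Language detection: ${...}
    return re.search(r"\$\{[^}]*\}", value) is not None

def parse_schema(schema):
    # Hypothetical schema parser: each comma-separated entry must be non-empty.
    parts = [p.strip() for p in schema.split(",")]
    if any(not p for p in parts):
        raise ValueError("invalid schema: " + schema)
    return parts

def custom_validate(schema):
    """Return True if the property is valid at configuration time."""
    if contains_el(schema):
        return True  # can't check until attributes are available in onTrigger()
    try:
        parse_schema(schema)  # no EL: parse now so typos fail fast
        return True
    except ValueError:
        return False
```

Either way, the schema is parsed again in onTrigger(), so this only affects how early a bad static schema is reported.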


> ValidateCSV - Add support for Expression Language 
> --
>
> Key: NIFI-4080
> URL: https://issues.apache.org/jira/browse/NIFI-4080
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> The ValidateCSV processor could benefit if the following fields supported 
> Expression Language evaluation:
> - Schema
> - Quote character
> - Delimiter character
> - End of line symbols





[GitHub] nifi issue #2226: NIFI-4080: Added EL support to fields in ValidateCSV

2018-02-06 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2226
  
We don't need to get rid of those tests; rather I think if no EL is 
present, then we should try parsing the schema for validity. It will be 
reparsed in onTrigger() anyway (whether it has EL or not).


---


[jira] [Commented] (NIFIREG-140) Nifi Registry not able to start - NoClassDefFoundError org/apache/nifi/registry/util/FileUtils

2018-02-06 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353993#comment-16353993
 ] 

Bryan Bende commented on NIFIREG-140:
-

[~gaurangnshah] NiFi Registry is not yet officially supported on Windows, so 
I'm not sure whether that is the issue 
(https://nifi.apache.org/docs/nifi-registry-docs/html/administration-guide.html#system-requirements)

Can you look in nifi-registry-bootstrap.log and copy & paste the part that 
shows something like this?
{code:java}
2018-02-06 10:13:47,307 INFO [main] o.apache.nifi.registry.bootstrap.Command 
Command:{code}
Including the entire list of libraries.

> Nifi Registry not able to start - NoClassDefFoundError 
> org/apache/nifi/registry/util/FileUtils
> --
>
> Key: NIFIREG-140
> URL: https://issues.apache.org/jira/browse/NIFIREG-140
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Gaurang Shah
>Priority: Major
>
> While trying to start the NiFi Registry, I am getting the following error.
> nifi registry version: 0.1.0
>  
> {code:java}
> 2018-02-06 00:11:52,665 INFO [main] org.apache.nifi.registry.NiFiRegistry 
> Launching NiFi Registry...
> 2018-02-06 00:11:52,676 INFO [main] org.apache.nifi.registry.NiFiRegistry 
> Read property protection key from conf/bootstrap.conf
> 2018-02-06 00:11:52,799 INFO [main] o.a.n.r.security.crypto.CryptoKeyLoader 
> No encryption key present in the bootstrap.conf file at 
> C:\nifi-registry-0.1.0-bin\nifi-registry-0.1.0\conf\bootstrap.conf
> 2018-02-06 00:11:52,807 INFO [main] o.a.n.r.p.NiFiRegistryPropertiesLoader 
> Loaded 26 properties from 
> C:\nifi-registry-0.1.0-bin\nifi-registry-0.1.0\conf\nifi-registry.properties
> 2018-02-06 00:11:52,811 INFO [main] org.apache.nifi.registry.NiFiRegistry 
> Loaded 26 properties
> 2018-02-06 00:11:52,813 INFO [main] org.apache.nifi.registry.NiFiRegistry 
> NiFi Registry started without Bootstrap Port information provided; will not 
> listen for requests from Bootstrap
> 2018-02-06 00:11:52,820 ERROR [main] org.apache.nifi.registry.NiFiRegistry 
> Failure to launch NiFi Registry due to java.lang.NoClassDefFoundError: 
> org/apache/nifi/registry/util/FileUtils
> java.lang.NoClassDefFoundError: org/apache/nifi/registry/util/FileUtils
> at org.apache.nifi.registry.NiFiRegistry.(NiFiRegistry.java:97) 
> ~[nifi-registry-runtime-0.1.0.jar:0.1.0]
> at org.apache.nifi.registry.NiFiRegistry.main(NiFiRegistry.java:158) 
> ~[nifi-registry-runtime-0.1.0.jar:0.1.0]
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.nifi.registry.util.FileUtils
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_161]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_161]
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) 
> ~[na:1.8.0_161]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_161]
> ... 2 common frames omitted
> 2018-02-06 00:11:52,824 INFO [Thread-1] org.apache.nifi.registry.NiFiRegistry 
> Initiating shutdown of Jetty web server...
> 2018-02-06 00:11:52,824 INFO [Thread-1] org.apache.nifi.registry.NiFiRegistry 
> Jetty web server shutdown completed (nicely or otherwise).
> {code}





[jira] [Updated] (NIFI-4847) Ldap authorization problem in secure cluster

2018-02-06 Thread Georgy (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgy updated NIFI-4847:
-
Description: 
Hi guys,

I have a problem when using LDAP authentication with LDAP authorization in NiFi 
secure cluster mode.

My DN in AD looks like this:
 CN=Lastname Firstname Middlename, OU=..., ... 
 where the CN consists of Cyrillic characters (Russian alphabet).

After a successful LDAP auth and applying the specified mappings, NiFi passes 
only the CN (first, last, and middle name) to the LDAP authorizer. In single-node 
mode I have no problems; my CN successfully passes authorization. But in cluster 
mode I get this error:
 Unknown user with identity 'ÐезÑÑÐºÐ¸Ñ 
ÐеоÑгийÐеннадÑевиÑ'. 
Returning Forbidden response.
 See the attached screenshot with the error message in the UI.

It seems these are ISO-8859-1 characters, but NiFi tries to interpret them as a 
UTF-8 sequence. I can't understand the reason for this transformation in cluster 
mode.
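The symptom matches a classic mis-decoding: UTF-8 bytes for a Cyrillic string read back as ISO-8859-1. A small Python illustration, using an arbitrary Cyrillic sample rather than the reporter's actual identity:

```python
name = "Георгий"                            # arbitrary Cyrillic sample
utf8_bytes = name.encode("utf-8")           # what one node sends
mojibake = utf8_bytes.decode("iso-8859-1")  # what another node wrongly reads

# Every Cyrillic code point becomes two Latin-1 characters, typically
# starting with 'Ð' or 'Ñ' -- the same pattern as in the error message.
print(mojibake)

# The damage is reversible as long as no bytes were dropped along the way:
restored = mojibake.encode("iso-8859-1").decode("utf-8")
```

This suggests the cluster replication path decodes the identity with the wrong charset somewhere between nodes.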

I've tried LDAP auth with "Identity Strategy = USE_USERNAME" without any 
mappings and specified my sAMAccountName in the file-user-group-provider as the 
Initial User Identity. This workaround works, but I have to create the other LDAP 
users manually, so I would prefer LDAP authorization.

Can you help me find a solution?

You can find conf & logs in attachment.

 

Env:
 2 node cluster
 NiFi 1.5.0
 RHEL 7.3
 Windows AD

 

  was:
Hi guys,

Have a problem when using LDAP Auth with LDAP Authorization in NiFi secure 
cluster mode.

My DN in AD looks so:
 CN=Lastname Firstname Middlename, OU=..., ... 
 where CN consists of cyrillic chars (russian alphabet)

After successful ldap auth and applying specified mappings NiFi passes CN only 
(only 1st, last, middle name) to ldap authorizer. In single mode I have no 
problems, my CN successfully passes authorization. But in cluster mode I have 
such error:
 Unknown user with identity 'ÐезÑÑÐºÐ¸Ñ 
ÐеоÑгийÐеннадÑевиÑ'. 
Returning Forbidden response.
 See attached screenshot with error message in UI.

It seems these are ISO-8859-1 characters, but NiFi tries to interpret them as a 
UTF-8 sequence. I can't understand the reason for this transformation in cluster 
mode.

I've tried LDAP auth with "Identity Strategy = USE_USERNAME" without any 
mappings and specified my sAMAccountName in the file-user-group-provider as the 
Initial User Identity. This workaround works, but I have to create the other LDAP 
users manually, so I would prefer LDAP authorization.

Can you help me find a solution?

You can find conf & logs in attachment.

 

Env:
 2 node cluster
 NiFi 1.5.0
 RHEL 7.3
 Windows AD

 


> Ldap authorization problem in secure cluster
> 
>
> Key: NIFI-4847
> URL: https://issues.apache.org/jira/browse/NIFI-4847
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: 2 node cluster
> RHEL 7.3
> NiFi 1.5.0
> Windows AD
>Reporter: Georgy
>Priority: Major
> Attachments: nifi.zip, nifi_error.PNG
>
>
> Hi guys,
> Have a problem when using LDAP Auth with LDAP Authorization in NiFi secure 
> cluster mode.
> My DN in AD looks so:
>  CN=Lastname Firstname Middlename, OU=..., ... 
>  where CN consists of cyrillic chars (russian alphabet)
> After successful ldap auth and applying specified mappings NiFi passes CN 
> only (only 1st, last, middle name) to ldap authorizer. In single mode I have 
> no problems, my CN successfully passes authorization. But in cluster mode I 
> have such error:
>  Unknown user with identity 'ÐезÑÑÐºÐ¸Ñ 
> ÐеоÑгийÐеннадÑевиÑ'. 
> Returning Forbidden response.
>  See attached screenshot with error message in UI.
> It seems these are ISO-8859-1 characters, but NiFi tries to interpret them as 
> a UTF-8 sequence. I can't understand the reason for this transformation in 
> cluster mode.
> I've tried LDAP auth with "Identity Strategy = USE_USERNAME" without any 
> mappings and specified my sAMAccountName in the file-user-group-provider as 
> the Initial User Identity. This workaround works, but I have to create the 
> other LDAP users manually, so I would prefer LDAP authorization.
> Can you help me find a solution?
> You can find conf & logs in attachment.
>  
> Env:
>  2 node cluster
>  NiFi 1.5.0
>  RHEL 7.3
>  Windows AD
>  





[jira] [Updated] (NIFI-4847) Ldap authorization problem in secure cluster

2018-02-06 Thread Georgy (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgy updated NIFI-4847:
-
Description: 
Hi guys,

Have a problem when using LDAP Auth with LDAP Authorization in NiFi secure 
cluster mode.

My DN in AD looks so:
 CN=Lastname Firstname Middlename, OU=..., ... 
 where CN consists of cyrillic chars (russian alphabet)

After successful ldap auth and applying specified mappings NiFi passes CN only 
(only 1st, last, middle name) to ldap authorizer. In single mode I have no 
problems, my CN successfully passes authorization. But in cluster mode I have 
such error:
 Unknown user with identity 'ÐезÑÑÐºÐ¸Ñ 
ÐеоÑгийÐеннадÑевиÑ'. 
Returning Forbidden response.
 See attached screenshot with error message in UI.

It seems these are ISO-8859-1 characters, but NiFi tries to interpret them as a 
UTF-8 sequence. I can't understand the reason for this transformation in cluster 
mode.

I've tried LDAP auth with "Identity Strategy = USE_USERNAME" without any 
mappings and specified my sAMAccountName in the file-user-group-provider as the 
Initial User Identity. This workaround works, but I have to create the other LDAP 
users manually, so I would prefer LDAP authorization.

Can you help me find a solution?

You can find conf & logs in attachment.

 

Env:
 2 node cluster
 NiFi 1.5.0
 RHEL 7.3
 Windows AD

 

  was:
Hi guys,

Have a problem when using LDAP Auth with LDAP Authorization in NiFi secure 
cluster mode.

My DN in AD looks so:
 CN=Lastname Firstname Middlename, OU=..., ... 
 where CN consists of cyrillic chars (russian alphabet)

After successful ldap auth and applying specified mappings NiFi passes CN only 
(only 1st, last, middle name) to ldap authorizer. In single mode I have no 
problems, my CN successfully passes authorization. But in cluster mode I have 
such error:
 Unknown user with identity 'ÐезÑÑÐºÐ¸Ñ 
ÐеоÑгийÐеннадÑевиÑ'. 
Returning Forbidden response.
 See attached screenshot with error message in UI.

It seems these are ISO-8859-1 characters, but NiFi tries to interpret them as a 
UTF-8 sequence. I can't understand the reason for this transformation in cluster 
mode.

I've tried LDAP auth with "Identity Strategy = USE_USERNAME" without any 
mappings and specified my sAMAccountName in the file-user-group-provider as the 
Initial User Identity. This workaround works, but I have to create the other LDAP 
users manually, so I would prefer LDAP authorization.

Can you help me find a solution?

You can find conf & logs in attachment.

 

Env:
 2 node cluster
 NiFi 1.5.0
 RHEL 7.3
 Windows AD

 


> Ldap authorization problem in secure cluster
> 
>
> Key: NIFI-4847
> URL: https://issues.apache.org/jira/browse/NIFI-4847
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: 2 node cluster
> RHEL 7.3
> NiFi 1.5.0
> Windows AD
>Reporter: Georgy
>Priority: Major
> Attachments: nifi.zip, nifi_error.PNG
>
>
> Hi guys,
> Have a problem when using LDAP Auth with LDAP Authorization in NiFi secure 
> cluster mode.
> My DN in AD looks so:
>  CN=Lastname Firstname Middlename, OU=..., ... 
>  where CN consists of cyrillic chars (russian alphabet)
> After successful ldap auth and applying specified mappings NiFi passes CN 
> only (only 1st, last, middle name) to ldap authorizer. In single mode I have 
> no problems, my CN successfully passes authorization. But in cluster mode I 
> have such error:
>  Unknown user with identity 'ÐезÑÑÐºÐ¸Ñ 
> ÐеоÑгийÐеннадÑевиÑ'. 
> Returning Forbidden response.
>  See attached screenshot with error message in UI.
> It seems these are ISO-8859-1 characters, but NiFi tries to interpret them as 
> a UTF-8 sequence. I can't understand the reason for this transformation in 
> cluster mode.
> I've tried LDAP auth with "Identity Strategy = USE_USERNAME" without any 
> mappings and specified my sAMAccountName in the file-user-group-provider as 
> the Initial User Identity. This workaround works, but I have to create the 
> other LDAP users manually, so I would prefer LDAP authorization.
> Can you help me find a solution?
> You can find conf & logs in attachment.
>  
> Env:
>  2 node cluster
>  NiFi 1.5.0
>  RHEL 7.3
>  Windows AD
>  





[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353936#comment-16353936
 ] 

ASF GitHub Bot commented on NIFI-4838:
--

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r166320282
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -221,22 +258,33 @@ private void configureMapper(String setting) {
 }
 }
 
-private ObjectWriter getObjectWriter(ObjectMapper mapper, String 
ppSetting) {
-return ppSetting.equals(YES_PP.getValue()) ? 
mapper.writerWithDefaultPrettyPrinter()
+private ObjectWriter getObjectWriter(ObjectMapper mapper, boolean 
ppSetting) {
+return ppSetting ? mapper.writerWithDefaultPrettyPrinter()
 : mapper.writer();
 }
 
-private void writeBatch(String payload, ProcessContext context, 
ProcessSession session) {
+private void writeBatch(String payload, ProcessContext context, 
ProcessSession session, boolean doCommit, Long count, long index, int 
batchSize) {
 FlowFile flowFile = session.create();
 flowFile = session.write(flowFile, new OutputStreamCallback() {
 @Override
 public void process(OutputStream out) throws IOException {
 out.write(payload.getBytes("UTF-8"));
 }
 });
-flowFile = session.putAttribute(flowFile, 
CoreAttributes.MIME_TYPE.key(), "application/json");
+Map attrs = new HashMap<>();
+attrs.put(CoreAttributes.MIME_TYPE.key(), "application/json");
+if (count != null) {
+attrs.put(PROGRESS_START, String.valueOf(index - batchSize));
+attrs.put(PROGRESS_END, String.valueOf(index));
+attrs.put(PROGRESS_ESTIMATE, String.valueOf(count));
+}
+flowFile = session.putAllAttributes(flowFile, attrs);
 session.getProvenanceReporter().receive(flowFile, getURI(context));
+
 session.transfer(flowFile, REL_SUCCESS);
+if (doCommit) {
+session.commit();
+}
--- End diff --

Yeah, that's what I wanted to understand. Along with a flowfile per batch, 
it writes the progress (if enabled)? LGTM. +1


> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call, because the effect is 
> that GetMongo appears to have hung to a user who is pulling a very large 
> data set.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231
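The requested behavior can be modeled in Python. The attribute names come from the example above; the session/FlowFile API is a simplification, not NiFi's actual interface:

```python
def emit_batches(documents, batch_size, total_estimate=None):
    """Yield one (payload, attributes) pair per batch, with progress attrs."""
    flowfiles = []
    for index in range(0, len(documents), batch_size):
        batch = documents[index:index + batch_size]
        attrs = {"mime.type": "application/json"}
        if total_estimate is not None:
            attrs["query.progress.point.start"] = str(index)
            attrs["query.progress.point.end"] = str(index + len(batch))
            attrs["query.count.estimate"] = str(total_estimate)
        flowfiles.append((batch, attrs))
        # In the processor, session.commit() would run here per batch so
        # downstream sees results while the query is still draining.
    return flowfiles

batches = emit_batches(list(range(10)), batch_size=4, total_estimate=10)
```

The per-batch commit is what removes the "hung processor" impression; the progress attributes then let a user see how far along the result set each FlowFile is.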





[GitHub] nifi pull request #2448: NIFI-4838 Added configurable progressive commits to...

2018-02-06 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2448#discussion_r166320282
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java
 ---
@@ -221,22 +258,33 @@ private void configureMapper(String setting) {
 }
 }
 
-private ObjectWriter getObjectWriter(ObjectMapper mapper, String 
ppSetting) {
-return ppSetting.equals(YES_PP.getValue()) ? 
mapper.writerWithDefaultPrettyPrinter()
+private ObjectWriter getObjectWriter(ObjectMapper mapper, boolean 
ppSetting) {
+return ppSetting ? mapper.writerWithDefaultPrettyPrinter()
 : mapper.writer();
 }
 
-private void writeBatch(String payload, ProcessContext context, 
ProcessSession session) {
+private void writeBatch(String payload, ProcessContext context, 
ProcessSession session, boolean doCommit, Long count, long index, int 
batchSize) {
 FlowFile flowFile = session.create();
 flowFile = session.write(flowFile, new OutputStreamCallback() {
 @Override
 public void process(OutputStream out) throws IOException {
 out.write(payload.getBytes("UTF-8"));
 }
 });
-flowFile = session.putAttribute(flowFile, 
CoreAttributes.MIME_TYPE.key(), "application/json");
+Map attrs = new HashMap<>();
+attrs.put(CoreAttributes.MIME_TYPE.key(), "application/json");
+if (count != null) {
+attrs.put(PROGRESS_START, String.valueOf(index - batchSize));
+attrs.put(PROGRESS_END, String.valueOf(index));
+attrs.put(PROGRESS_ESTIMATE, String.valueOf(count));
+}
+flowFile = session.putAllAttributes(flowFile, attrs);
 session.getProvenanceReporter().receive(flowFile, getURI(context));
+
 session.transfer(flowFile, REL_SUCCESS);
+if (doCommit) {
+session.commit();
+}
--- End diff --

Yeah, that's what I wanted to understand. Along with a flowfile per batch, 
it writes the progress (if enabled)? LGTM. +1


---


[jira] [Created] (NIFIREG-140) Nifi Registry not able to start - NoClassDefFoundError org/apache/nifi/registry/util/FileUtils

2018-02-06 Thread Gaurang Shah (JIRA)
Gaurang Shah created NIFIREG-140:


 Summary: Nifi Registry not able to start - NoClassDefFoundError 
org/apache/nifi/registry/util/FileUtils
 Key: NIFIREG-140
 URL: https://issues.apache.org/jira/browse/NIFIREG-140
 Project: NiFi Registry
  Issue Type: Bug
Affects Versions: 0.1.0
Reporter: Gaurang Shah


While trying to start the NiFi Registry, I am getting the following error.

nifi registry version: 0.1.0

 
{code:java}
2018-02-06 00:11:52,665 INFO [main] org.apache.nifi.registry.NiFiRegistry 
Launching NiFi Registry...
2018-02-06 00:11:52,676 INFO [main] org.apache.nifi.registry.NiFiRegistry Read 
property protection key from conf/bootstrap.conf
2018-02-06 00:11:52,799 INFO [main] o.a.n.r.security.crypto.CryptoKeyLoader No 
encryption key present in the bootstrap.conf file at 
C:\nifi-registry-0.1.0-bin\nifi-registry-0.1.0\conf\bootstrap.conf
2018-02-06 00:11:52,807 INFO [main] o.a.n.r.p.NiFiRegistryPropertiesLoader 
Loaded 26 properties from 
C:\nifi-registry-0.1.0-bin\nifi-registry-0.1.0\conf\nifi-registry.properties
2018-02-06 00:11:52,811 INFO [main] org.apache.nifi.registry.NiFiRegistry 
Loaded 26 properties
2018-02-06 00:11:52,813 INFO [main] org.apache.nifi.registry.NiFiRegistry NiFi 
Registry started without Bootstrap Port information provided; will not listen 
for requests from Bootstrap
2018-02-06 00:11:52,820 ERROR [main] org.apache.nifi.registry.NiFiRegistry 
Failure to launch NiFi Registry due to java.lang.NoClassDefFoundError: 
org/apache/nifi/registry/util/FileUtils
java.lang.NoClassDefFoundError: org/apache/nifi/registry/util/FileUtils
at org.apache.nifi.registry.NiFiRegistry.(NiFiRegistry.java:97) 
~[nifi-registry-runtime-0.1.0.jar:0.1.0]
at org.apache.nifi.registry.NiFiRegistry.main(NiFiRegistry.java:158) 
~[nifi-registry-runtime-0.1.0.jar:0.1.0]
Caused by: java.lang.ClassNotFoundException: 
org.apache.nifi.registry.util.FileUtils
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_161]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_161]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) ~[na:1.8.0_161]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_161]
... 2 common frames omitted
2018-02-06 00:11:52,824 INFO [Thread-1] org.apache.nifi.registry.NiFiRegistry 
Initiating shutdown of Jetty web server...
2018-02-06 00:11:52,824 INFO [Thread-1] org.apache.nifi.registry.NiFiRegistry 
Jetty web server shutdown completed (nicely or otherwise).
{code}





[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353875#comment-16353875
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166296034
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
 ---
@@ -150,8 +151,13 @@ public QueryDatabaseTable() {
 final List pds = new ArrayList<>();
 pds.add(DBCP_SERVICE);
 pds.add(DB_TYPE);
-pds.add(TABLE_NAME);
+pds.add(new PropertyDescriptor.Builder()
+.fromPropertyDescriptor(TABLE_NAME)
+.description("The name of the database table to be 
queried. When a custom query is used, this property is used to alias the query 
and appears as an attribute on the FlowFile.")
+.build());
--- End diff --

Please update `AbstractDatabaseFetchProcessor.onPropertyModified` so that 
it clears the `setupComplete` flag when the TABLE_NAME property is updated, too. 
Without that, when a custom query is used, `columnTypeMap` will not be 
repopulated if the processor is reconfigured with a different table-name alias 
and restarted, which prevents the max-value column from being captured 
correctly and produces duplicate query results over and over.
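The reviewer's point can be sketched in Python as a model of the caching behavior (the property names and setup body are illustrative, not the actual AbstractDatabaseFetchProcessor code):

```python
class FetchProcessor:
    # Properties whose change must invalidate the cached column metadata.
    TRACKED = {"Table Name", "Max Value Columns", "SQL Query"}

    def __init__(self):
        self.setup_complete = False
        self.column_type_map = {}

    def on_property_modified(self, descriptor):
        # Clearing the flag forces setup() to repopulate columnTypeMap the
        # next time the processor runs with the new configuration.
        if descriptor in self.TRACKED:
            self.setup_complete = False
            self.column_type_map.clear()

    def setup(self, table_name):
        if self.setup_complete:
            return  # stale cache would be reused here without the fix above
        self.column_type_map[table_name + ".id"] = "INTEGER"
        self.setup_complete = True

p = FetchProcessor()
p.setup("orders")
p.on_property_modified("Table Name")  # user changes the table-name alias
p.setup("orders_v2")                  # cache is rebuilt for the new alias
```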


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>Priority: Major
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database 
> table for new rows and yield them into FlowFiles. However, the model of an 
> RDBMS is often (if not always) normalized, so you would need to join various 
> tables in order to "flatten" the data into useful events for a processing 
> pipeline such as can be built with NiFi or various tools within the Hadoop 
> ecosystem.
> The request is to extend the processor to accept an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?), it is desirable to limit the number 
> of rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards, but mainly because huge databases may not be able to return 
> so many records within a reasonable time.





[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353874#comment-16353874
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166280838
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -249,34 +260,56 @@ public void setup(final ProcessContext context, 
boolean shouldCleanCache, FlowFi
 return;
 }
 
-// Try to fill the columnTypeMap with the types of the desired 
max-value columns
-final DBCPService dbcpService = 
context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
-final String tableName = 
context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+// Try to fill the columnTypeMap with the types of the desired 
max-value columns
+final DBCPService dbcpService = 
context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
+final String tableName = 
context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+final String sqlQuery = 
context.getProperty(SQL_QUERY).evaluateAttributeExpressions().getValue();
 
 final DatabaseAdapter dbAdapter = 
dbAdapters.get(context.getProperty(DB_TYPE).getValue());
 try (final Connection con = dbcpService.getConnection();
  final Statement st = con.createStatement()) {
 
-// Try a query that returns no rows, for the purposes of 
getting metadata about the columns. It is possible
-// to use DatabaseMetaData.getColumns(), but not all 
drivers support this, notably the schema-on-read
-// approach as in Apache Drill
-String query = dbAdapter.getSelectStatement(tableName, 
maxValueColumnNames, "1 = 0", null, null, null);
-ResultSet resultSet = st.executeQuery(query);
-ResultSetMetaData resultSetMetaData = 
resultSet.getMetaData();
-int numCols = resultSetMetaData.getColumnCount();
-if (numCols > 0) {
-if (shouldCleanCache) {
-columnTypeMap.clear();
-}
-for (int i = 1; i <= numCols; i++) {
-String colName = 
resultSetMetaData.getColumnName(i).toLowerCase();
-String colKey = getStateKey(tableName, colName);
-int colType = resultSetMetaData.getColumnType(i);
-columnTypeMap.putIfAbsent(colKey, colType);
+// Try a query that returns no rows, for the purposes of 
getting metadata about the columns. It is possible
+// to use DatabaseMetaData.getColumns(), but not all drivers 
support this, notably the schema-on-read
+// approach as in Apache Drill
+String query;
+
+if(StringUtils.isEmpty(sqlQuery)) {
+query = dbAdapter.getSelectStatement(tableName, 
maxValueColumnNames, "1 = 0", null, null, null);
+} else {
+StringBuilder sbQuery = getWrappedQuery(sqlQuery, 
tableName);
+sbQuery.append(" WHERE 1=0");
+
+query = sbQuery.toString();
+}
+
+ResultSet resultSet = st.executeQuery(query);
+ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
+int numCols = resultSetMetaData.getColumnCount();
+if (numCols > 0) {
+if (shouldCleanCache){
+columnTypeMap.clear();
+}
+for (int i = 1; i <= numCols; i++) {
+String colName = 
resultSetMetaData.getColumnName(i).toLowerCase();
+String colKey = getStateKey(tableName, colName);
+int colType = resultSetMetaData.getColumnType(i);
+columnTypeMap.putIfAbsent(colKey, colType);
+}
+
+List maxValueColumnNameList = 
org.apache.commons.lang3.StringUtils.isEmpty(maxValueColumnNames)
+? null
+: 
Arrays.asList(maxValueColumnNames.split("\\s*,\\s*"));
--- End diff --

I guess the aim of the regex `\s*,\s*` is to split on commas and trim the 
surrounding whitespace, but it leaves whitespace at the head and tail of the 
string. I'd suggest simply splitting on `,` and trimming each entry in the for 
loop below, before `toLowerCase()`.
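The edge case the reviewer describes is easy to demonstrate. This Python sketch mirrors the behavior; Java's `String.split` with the same regex behaves analogously for the head/tail case:

```python
import re

raw = "  created_at , id , updated_at  "

# Splitting on \s*,\s* trims whitespace *between* entries only; the leading
# and trailing runs survive on the first and last entry.
regex_split = re.split(r"\s*,\s*", raw)

# Suggested alternative: split on the bare comma and trim each entry.
plain_split = [part.strip().lower() for part in raw.split(",")]
```

The untrimmed first/last entries matter because they feed `getStateKey`, where `" created_at"` and `"created_at"` would be treated as different columns.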


> Extend QueryDatabaseTable to 

[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353878#comment-16353878
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166294484
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---
@@ -249,34 +260,56 @@ public void setup(final ProcessContext context, boolean shouldCleanCache, FlowFi
             return;
         }
 
-        // Try to fill the columnTypeMap with the types of the desired max-value columns
-        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
-        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        // Try to fill the columnTypeMap with the types of the desired max-value columns
+        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
+        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        final String sqlQuery = context.getProperty(SQL_QUERY).evaluateAttributeExpressions().getValue();
 
         final DatabaseAdapter dbAdapter = dbAdapters.get(context.getProperty(DB_TYPE).getValue());
         try (final Connection con = dbcpService.getConnection();
             final Statement st = con.createStatement()) {
 
-            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
-            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
-            // approach as in Apache Drill
-            String query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
-            ResultSet resultSet = st.executeQuery(query);
-            ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
-            int numCols = resultSetMetaData.getColumnCount();
-            if (numCols > 0) {
-                if (shouldCleanCache) {
-                    columnTypeMap.clear();
-                }
-                for (int i = 1; i <= numCols; i++) {
-                    String colName = resultSetMetaData.getColumnName(i).toLowerCase();
-                    String colKey = getStateKey(tableName, colName);
-                    int colType = resultSetMetaData.getColumnType(i);
-                    columnTypeMap.putIfAbsent(colKey, colType);
+            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
+            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
+            // approach as in Apache Drill
+            String query;
+
+            if(StringUtils.isEmpty(sqlQuery)) {
+                query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
+            } else {
+                StringBuilder sbQuery = getWrappedQuery(sqlQuery, tableName);
--- End diff --

Since `getWrappedQuery` fetches all available columns in the sub-query via `*`, the 
subsequent loop stores all column values into the managed state. We should avoid storing 
unnecessary values in the state, either by specifying the maxValueColumns in this query 
or by doing some filtering in the loop below.
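For illustration, a minimal sketch of the first option: probing column types for only the max-value columns of the wrapped query. `buildProbeQuery` is a hypothetical helper, not part of the NiFi codebase; the alias and `WHERE 1 = 0` shape follow the quoted diff.

```java
public class ProbeQuerySketch {
    // Hypothetical helper: wrap a custom query and select only the
    // max-value columns, so the metadata loop does not pick up every
    // column of the custom query. An empty column list falls back to '*'.
    static String buildProbeQuery(String sqlQuery, String tableName, String maxValueColumnNames) {
        final String columns = (maxValueColumnNames == null || maxValueColumnNames.isEmpty())
                ? "*" : maxValueColumnNames;
        return "SELECT " + columns + " FROM (" + sqlQuery + ") AS " + tableName + " WHERE 1 = 0";
    }
}
```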


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>Priority: Major
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database table for new 
> rows and yield these into a flowfile. The model of an RDBMS, however, is 
> often (if not always) normalized, so you would need to join various tables in 
> order to "flatten" the data into useful events for a processing pipeline, as 
> can be built with NiFi or various tools within the Hadoop ecosystem.
> The request is to extend the processor to accept an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards but mainly because huge databases may not be able to return 
> so many records within a reasonable time.

[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353876#comment-16353876
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166278036
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---
@@ -249,34 +260,56 @@ public void setup(final ProcessContext context, boolean shouldCleanCache, FlowFi
             return;
         }
 
-        // Try to fill the columnTypeMap with the types of the desired max-value columns
-        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
-        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        // Try to fill the columnTypeMap with the types of the desired max-value columns
+        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
+        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        final String sqlQuery = context.getProperty(SQL_QUERY).evaluateAttributeExpressions().getValue();
 
         final DatabaseAdapter dbAdapter = dbAdapters.get(context.getProperty(DB_TYPE).getValue());
         try (final Connection con = dbcpService.getConnection();
             final Statement st = con.createStatement()) {
 
-            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
-            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
-            // approach as in Apache Drill
-            String query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
-            ResultSet resultSet = st.executeQuery(query);
-            ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
-            int numCols = resultSetMetaData.getColumnCount();
-            if (numCols > 0) {
-                if (shouldCleanCache) {
-                    columnTypeMap.clear();
-                }
-                for (int i = 1; i <= numCols; i++) {
-                    String colName = resultSetMetaData.getColumnName(i).toLowerCase();
-                    String colKey = getStateKey(tableName, colName);
-                    int colType = resultSetMetaData.getColumnType(i);
-                    columnTypeMap.putIfAbsent(colKey, colType);
+            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
+            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
+            // approach as in Apache Drill
+            String query;
+
+            if(StringUtils.isEmpty(sqlQuery)) {
+                query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
+            } else {
+                StringBuilder sbQuery = getWrappedQuery(sqlQuery, tableName);
+                sbQuery.append(" WHERE 1=0");
+
+                query = sbQuery.toString();
+            }
+
+            ResultSet resultSet = st.executeQuery(query);
+            ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
+            int numCols = resultSetMetaData.getColumnCount();
+            if (numCols > 0) {
+                if (shouldCleanCache){
+                    columnTypeMap.clear();
+                }
+                for (int i = 1; i <= numCols; i++) {
+                    String colName = resultSetMetaData.getColumnName(i).toLowerCase();
+                    String colKey = getStateKey(tableName, colName);
+                    int colType = resultSetMetaData.getColumnType(i);
+                    columnTypeMap.putIfAbsent(colKey, colType);
+                }
+
+                List<String> maxValueColumnNameList = org.apache.commons.lang3.StringUtils.isEmpty(maxValueColumnNames)
--- End diff --

I think we can use `org.apache.nifi.util.StringUtils` here instead, which 
is already imported. Moreover, we can remove this emptiness check because it's 
already checked at the beginning of this method.


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
>

[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353873#comment-16353873
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166277383
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---
@@ -249,34 +260,56 @@ public void setup(final ProcessContext context, boolean shouldCleanCache, FlowFi
             return;
         }
 
-        // Try to fill the columnTypeMap with the types of the desired max-value columns
-        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
-        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        // Try to fill the columnTypeMap with the types of the desired max-value columns
+        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
+        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        final String sqlQuery = context.getProperty(SQL_QUERY).evaluateAttributeExpressions().getValue();
--- End diff --

Wrong indent.


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>Priority: Major
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database table for new 
> rows and yield these into a flowfile. The model of an RDBMS, however, is 
> often (if not always) normalized, so you would need to join various tables in 
> order to "flatten" the data into useful events for a processing pipeline, as 
> can be built with NiFi or various tools within the Hadoop ecosystem.
> The request is to extend the processor to accept an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards but mainly because huge databases may not be able to return 
> so many records within a reasonable time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-1706) Extend QueryDatabaseTable to support arbitrary queries

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353877#comment-16353877
 ] 

ASF GitHub Bot commented on NIFI-1706:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166277436
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---
@@ -249,34 +260,56 @@ public void setup(final ProcessContext context, boolean shouldCleanCache, FlowFi
             return;
         }
 
-        // Try to fill the columnTypeMap with the types of the desired max-value columns
-        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
-        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        // Try to fill the columnTypeMap with the types of the desired max-value columns
+        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
+        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        final String sqlQuery = context.getProperty(SQL_QUERY).evaluateAttributeExpressions().getValue();
 
         final DatabaseAdapter dbAdapter = dbAdapters.get(context.getProperty(DB_TYPE).getValue());
         try (final Connection con = dbcpService.getConnection();
             final Statement st = con.createStatement()) {
 
-            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
-            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
-            // approach as in Apache Drill
-            String query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
-            ResultSet resultSet = st.executeQuery(query);
-            ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
-            int numCols = resultSetMetaData.getColumnCount();
-            if (numCols > 0) {
-                if (shouldCleanCache) {
-                    columnTypeMap.clear();
-                }
-                for (int i = 1; i <= numCols; i++) {
-                    String colName = resultSetMetaData.getColumnName(i).toLowerCase();
-                    String colKey = getStateKey(tableName, colName);
-                    int colType = resultSetMetaData.getColumnType(i);
-                    columnTypeMap.putIfAbsent(colKey, colType);
+            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
--- End diff --

Wrong indent.


> Extend QueryDatabaseTable to support arbitrary queries
> --
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Paul Bormans
>Assignee: Peter Wicks
>Priority: Major
>  Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database table for new 
> rows and yield these into a flowfile. The model of an RDBMS, however, is 
> often (if not always) normalized, so you would need to join various tables in 
> order to "flatten" the data into useful events for a processing pipeline, as 
> can be built with NiFi or various tools within the Hadoop ecosystem.
> The request is to extend the processor to accept an arbitrary SQL query 
> instead of specifying the table name + columns.
> In addition (this may be another issue?) it is desired to limit the number of 
> rows returned per run, not just because of bandwidth issues from the NiFi 
> pipeline onwards but mainly because huge databases may not be able to return 
> so many records within a reasonable time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...

2018-02-06 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166280838
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java ---
@@ -249,34 +260,56 @@ public void setup(final ProcessContext context, boolean shouldCleanCache, FlowFi
             return;
         }
 
-        // Try to fill the columnTypeMap with the types of the desired max-value columns
-        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
-        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        // Try to fill the columnTypeMap with the types of the desired max-value columns
+        final DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
+        final String tableName = context.getProperty(TABLE_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        final String sqlQuery = context.getProperty(SQL_QUERY).evaluateAttributeExpressions().getValue();
 
         final DatabaseAdapter dbAdapter = dbAdapters.get(context.getProperty(DB_TYPE).getValue());
         try (final Connection con = dbcpService.getConnection();
             final Statement st = con.createStatement()) {
 
-            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
-            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
-            // approach as in Apache Drill
-            String query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
-            ResultSet resultSet = st.executeQuery(query);
-            ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
-            int numCols = resultSetMetaData.getColumnCount();
-            if (numCols > 0) {
-                if (shouldCleanCache) {
-                    columnTypeMap.clear();
-                }
-                for (int i = 1; i <= numCols; i++) {
-                    String colName = resultSetMetaData.getColumnName(i).toLowerCase();
-                    String colKey = getStateKey(tableName, colName);
-                    int colType = resultSetMetaData.getColumnType(i);
-                    columnTypeMap.putIfAbsent(colKey, colType);
+            // Try a query that returns no rows, for the purposes of getting metadata about the columns. It is possible
+            // to use DatabaseMetaData.getColumns(), but not all drivers support this, notably the schema-on-read
+            // approach as in Apache Drill
+            String query;
+
+            if(StringUtils.isEmpty(sqlQuery)) {
+                query = dbAdapter.getSelectStatement(tableName, maxValueColumnNames, "1 = 0", null, null, null);
+            } else {
+                StringBuilder sbQuery = getWrappedQuery(sqlQuery, tableName);
+                sbQuery.append(" WHERE 1=0");
+
+                query = sbQuery.toString();
+            }
+
+            ResultSet resultSet = st.executeQuery(query);
+            ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
+            int numCols = resultSetMetaData.getColumnCount();
+            if (numCols > 0) {
+                if (shouldCleanCache){
+                    columnTypeMap.clear();
+                }
+                for (int i = 1; i <= numCols; i++) {
+                    String colName = resultSetMetaData.getColumnName(i).toLowerCase();
+                    String colKey = getStateKey(tableName, colName);
+                    int colType = resultSetMetaData.getColumnType(i);
+                    columnTypeMap.putIfAbsent(colKey, colType);
+                }
+
+                List<String> maxValueColumnNameList = org.apache.commons.lang3.StringUtils.isEmpty(maxValueColumnNames)
+                        ? null
+                        : Arrays.asList(maxValueColumnNames.split("\\s*,\\s*"));
--- End diff --

I guess the aim of the regex `\s*,\s*` is to split and trim whitespace, but it 
leaves whitespace at the head and tail of the whole string. I'd suggest simply 
splitting on `,` and trimming each element in the for loop below, before 
`toLowerCase()`.
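A small standalone demonstration of the difference (sketch, not NiFi code): `split("\\s*,\\s*")` trims whitespace around the commas but not at the ends of the whole string, while split-then-trim handles both.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SplitTrimDemo {
    // Split on ',' and trim each element, as suggested above.
    static List<String> splitAndTrim(String columns) {
        return Arrays.stream(columns.split(","))
                .map(String::trim)
                .collect(Collectors.toList());
    }
}
```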


---


[GitHub] nifi pull request #2162: NIFI-1706 Extend QueryDatabaseTable to support arbi...

2018-02-06 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2162#discussion_r166296034
  
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---
@@ -150,8 +151,13 @@ public QueryDatabaseTable() {
         final List<PropertyDescriptor> pds = new ArrayList<>();
         pds.add(DBCP_SERVICE);
         pds.add(DB_TYPE);
-        pds.add(TABLE_NAME);
+        pds.add(new PropertyDescriptor.Builder()
+                .fromPropertyDescriptor(TABLE_NAME)
+                .description("The name of the database table to be queried. When a custom query is used, this property is used to alias the query and appears as an attribute on the FlowFile.")
+                .build());
--- End diff --

Please update `AbstractDatabaseFetchProcessor.onPropertyModified` so that it clears the 
`setupComplete` flag when this TABLE_NAME property is updated, too. Without that, when a 
custom query is used, `columnTypeMap` will not be repopulated if the processor is 
reconfigured with a different table name alias and restarted, which prevents the 
maxValueColumn from being captured correctly and produces duplicated query results over 
and over.
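A rough sketch of the suggested change, with simplified stand-in types; the property display names and the `AtomicBoolean` flag are assumptions modeled on `AbstractDatabaseFetchProcessor`, not code from the PR.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class FetchProcessorSketch {
    // Stand-in for AbstractDatabaseFetchProcessor's setup flag.
    final AtomicBoolean setupComplete = new AtomicBoolean(true);

    // Clearing the flag forces setup() to rebuild columnTypeMap the next
    // time the processor runs after a schema-affecting property change.
    void onPropertyModified(String propertyDisplayName) {
        if ("Table Name".equals(propertyDisplayName)
                || "Maximum-value Columns".equals(propertyDisplayName)
                || "Custom Query".equals(propertyDisplayName)) {
            setupComplete.set(false);
        }
    }
}
```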


---



[jira] [Commented] (NIFI-4745) Emit validation failure description in attribute from ValidateRecord processor

2018-02-06 Thread Koji Kawamura (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353776#comment-16353776
 ] 

Koji Kawamura commented on NIFI-4745:
-

Hi [~alfonz], I've added you as a contributor to the NiFi JIRA project and assigned 
this one to you. Mark has posted several comments on your 
[PR|https://github.com/apache/nifi/pull/2425]; have you seen them already? If 
you have any questions, please let us know. Thanks!

> Emit validation failure description in attribute from ValidateRecord processor
> --
>
> Key: NIFI-4745
> URL: https://issues.apache.org/jira/browse/NIFI-4745
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Martin Mucha
>Assignee: Martin Mucha
>Priority: Minor
>
> We need to pass the description of the validation failure further along the
> processing chain, and eventually pass it back to the calling system.
> Therefore, having the failure description logged and issued as a provenance
> route event is not sufficient for us.
> It should be easy to emit the same data that is sent in the provenance route
> event from ValidateRecord as a new attribute.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4847) Ldap authorization problem in secure cluster

2018-02-06 Thread Georgy (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgy updated NIFI-4847:
-
Description: 
Hi guys,

We have a problem when using LDAP authentication with LDAP authorization in NiFi secure 
cluster mode.

My DN in AD looks like this:
 CN=Lastname Firstname Middlename, OU=..., ... 
 where the CN consists of Cyrillic characters (Russian alphabet)

After successful ldap auth and applying specified mappings NiFi passes CN only 
(only 1st, last, middle name) to ldap authorizer. In single mode I have no 
problems, my CN successfully passes authorization. But in cluster mode I have 
such error:
 Unknown user with identity 'ÐезÑÑÐºÐ¸Ñ 
ÐеоÑгийÐеннадÑевиÑ'. 
Returning Forbidden response.
 See attached screenshot with error message in UI.

It seems these are ISO-8859-1 characters, but NiFi treats them as a UTF-8 
sequence. I can't understand the reason for this transformation in cluster 
mode.
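The pattern in the error message is consistent with UTF-8 bytes being decoded as ISO-8859-1 somewhere on the cluster path. A standalone sketch (not the actual NiFi code path) reproducing and reversing the suspected transformation:

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    // Decode UTF-8 bytes with the wrong charset: each Cyrillic character
    // becomes two Latin-1 characters (the 'Ð...' pattern in the error).
    static String garble(String utf8Text) {
        return new String(utf8Text.getBytes(StandardCharsets.UTF_8), StandardCharsets.ISO_8859_1);
    }

    // Reversing the step recovers the original identity string.
    static String ungarble(String garbled) {
        return new String(garbled.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8);
    }
}
```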

I've tried LDAP auth with "Identity Strategy = USE_USERNAME" without any 
mappings and specified my sAMAccountName in the file-user-group-provider as the 
Initial User Identity. This workaround works, but I have to create the other LDAP 
users manually, so I would prefer LDAP authorization.

Can you help me to find out a solution?

You can find conf & logs in attachment.

 

Env:
 2 node cluster
 NiFi 1.5.0
 RHEL 7.3
 Windows AD

 

  was:
Hi guys,

Have a problem when using LDAP Auth with LDAP Authorization in NiFi secure 
cluster mode.

My DN in AD looks so:
CN=Lastname Firstname Middlename, OU=..., ... 
where CN consists of cyrillic chars (russian alphabet)

After successful ldap auth and applying specified mappings NiFi passes CN only 
(only 1st, last, middle name) to ldap authorizer. In single mode I have no 
problems, my CN successfully passes authorization. But in cluster mode I have 
such error:
Unknown user with identity 'ÐезÑÑÐºÐ¸Ñ 
ÐеоÑгийÐеннадÑевиÑ'. 
Returning Forbidden response.
See attached screenshot with error message in UI.

It seems that there is ISO-8859-1 chars but NiFi tries to implement it as UTF-8 
sequence. Can't understand what is the reason of this transformation in cluster 
mode.

I've tried LDAP auth with "Identity Strategy = USE_DN" without any mappings 
and specified my sAMAccountName in the file-user-group-provider as the Initial User 
Identity. This workaround works, but I have to create the other LDAP users manually, 
so I would prefer LDAP authorization.

Can you help me to find out a solution?

You can find conf & logs in attachment.

 

Env:
2 node cluster
NiFi 1.5.0
RHEL 7.3
Windows AD

 


> Ldap authorization problem in secure cluster
> 
>
> Key: NIFI-4847
> URL: https://issues.apache.org/jira/browse/NIFI-4847
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: 2 node cluster
> RHEL 7.3
> NiFi 1.5.0
> Windows AD
>Reporter: Georgy
>Priority: Major
> Attachments: nifi.zip, nifi_error.PNG
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4745) Emit validation failure description in attribute from ValidateRecord processor

2018-02-06 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-4745:

Status: Patch Available  (was: Open)

> Emit validation failure description in attribute from ValidateRecord processor
> --
>
> Key: NIFI-4745
> URL: https://issues.apache.org/jira/browse/NIFI-4745
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Martin Mucha
>Assignee: Martin Mucha
>Priority: Minor
>
> We need to pass the description of a validation failure further along the
> processing chain, and eventually pass it back to the calling system.
> Therefore, having the failure description logged and issued as a provenance
> route event is not sufficient for us. 
> It should be easy to emit the same data that is sent in the provenance route 
> event from ValidateRecord as a new attribute.





[jira] [Assigned] (NIFI-4745) Emit validation failure description in attribute from ValidateRecord processor

2018-02-06 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura reassigned NIFI-4745:
---

Assignee: Martin Mucha

> Emit validation failure description in attribute from ValidateRecord processor
> --
>
> Key: NIFI-4745
> URL: https://issues.apache.org/jira/browse/NIFI-4745
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Martin Mucha
>Assignee: Martin Mucha
>Priority: Minor
>





[jira] [Updated] (NIFI-4847) Ldap authorization problem in secure cluster

2018-02-06 Thread Georgy (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgy updated NIFI-4847:
-
Attachment: (was: nifi_error.PNG)

> Ldap authorization problem in secure cluster
> 
>
> Key: NIFI-4847
> URL: https://issues.apache.org/jira/browse/NIFI-4847
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: 2 node cluster
> RHEL 7.3
> NiFi 1.5.0
> Windows AD
>Reporter: Georgy
>Priority: Major
> Attachments: nifi.zip, nifi_error.PNG
>
>





[jira] [Updated] (NIFI-4847) Ldap authorization problem in secure cluster

2018-02-06 Thread Georgy (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgy updated NIFI-4847:
-
Attachment: nifi_error.PNG

> Ldap authorization problem in secure cluster
> 
>
> Key: NIFI-4847
> URL: https://issues.apache.org/jira/browse/NIFI-4847
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: 2 node cluster
> RHEL 7.3
> NiFi 1.5.0
> Windows AD
>Reporter: Georgy
>Priority: Major
> Attachments: nifi.zip, nifi_error.PNG
>
>





[jira] [Updated] (NIFI-4847) Ldap authorization problem in secure cluster

2018-02-06 Thread Georgy (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgy updated NIFI-4847:
-
Attachment: nifi_error.PNG

> Ldap authorization problem in secure cluster
> 
>
> Key: NIFI-4847
> URL: https://issues.apache.org/jira/browse/NIFI-4847
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: 2 node cluster
> RHEL 7.3
> NiFi 1.5.0
> Windows AD
>Reporter: Georgy
>Priority: Major
> Attachments: nifi.zip, nifi_error.PNG
>
>





[jira] [Updated] (NIFI-4847) Ldap authorization problem in secure cluster

2018-02-06 Thread Georgy (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgy updated NIFI-4847:
-
Attachment: (was: nifi_error.PNG)

> Ldap authorization problem in secure cluster
> 
>
> Key: NIFI-4847
> URL: https://issues.apache.org/jira/browse/NIFI-4847
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: 2 node cluster
> RHEL 7.3
> NiFi 1.5.0
> Windows AD
>Reporter: Georgy
>Priority: Major
> Attachments: nifi.zip, nifi_error.PNG
>
>





[jira] [Created] (NIFI-4847) Ldap authorization problem in secure cluster

2018-02-06 Thread Georgy (JIRA)
Georgy created NIFI-4847:


 Summary: Ldap authorization problem in secure cluster
 Key: NIFI-4847
 URL: https://issues.apache.org/jira/browse/NIFI-4847
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.5.0
 Environment: 2 node cluster
RHEL 7.3
NiFi 1.5.0
Windows AD
Reporter: Georgy
 Attachments: nifi.zip, nifi_error.PNG



