[GitHub] nifi pull request #3156: NIFI-5780 Add pre and post statements to ExecuteSQL...
Github user patricker commented on a diff in the pull request: https://github.com/apache/nifi/pull/3156#discussion_r231759360

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractExecuteSQL.java ---
```
@@ -82,6 +84,16 @@
         .identifiesControllerService(DBCPService.class)
         .build();

+    public static final PropertyDescriptor SQL_PRE_QUERY = new PropertyDescriptor.Builder()
+            .name("sql-pre-query")
+            .displayName("SQL pre-query")
```
--- End diff --

They are capitalized in the HIVE processors, so this would match them.

---
[GitHub] nifi pull request #3156: NIFI-5780 Add pre and post statements to ExecuteSQL...
Github user yjhyjhyjh0 commented on a diff in the pull request: https://github.com/apache/nifi/pull/3156#discussion_r231754034

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestExecuteSQLRecord.java ---
```
@@ -350,6 +352,141 @@ public void invokeOnTriggerRecords(final Integer queryTimeout, final String quer
         assertEquals(durationTime, fetchTime + executionTime);
     }

+    @Test
+    public void testPreQuery() throws Exception {
+        // remove previous test database, if any
+        final File dbLocation = new File(DB_LOCATION);
+        dbLocation.delete();
+
+        // load test data to database
+        final Connection con = ((DBCPService) runner.getControllerService("dbcp")).getConnection();
+        Statement stmt = con.createStatement();
+
+        try {
+            stmt.execute("drop table TEST_NULL_INT");
+        } catch (final SQLException sqle) {
+            // ignored: the table may not exist yet
+        }
+
+        stmt.execute("create table TEST_NULL_INT (id integer not null, val1 integer, val2 integer, constraint my_pk primary key (id))");
+
+        runner.setIncomingConnection(true);
+        runner.setProperty(ExecuteSQL.SQL_PRE_QUERY, "insert into TEST_NULL_INT values(1,2,3);insert into TEST_NULL_INT values(4,5,6)");
```
--- End diff --

That's indeed a great use case; I'll update the test cases. Thanks for sharing the information.

---
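As a side note on the semicolon-delimited pre-query value used in the test above, here is a minimal sketch of how such a value can be split into individual statements. The `PreQuerySplitter` helper is hypothetical (not the actual AbstractExecuteSQL code) and assumes semicolons never appear inside string literals:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: split a semicolon-delimited pre/post-query property
// value into individual SQL statements. Assumes no semicolons inside literals.
public class PreQuerySplitter {
    public static List<String> split(String preQuery) {
        List<String> statements = new ArrayList<>();
        for (String part : preQuery.split(";")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                statements.add(trimmed); // skip empty fragments, e.g. from a trailing ";"
            }
        }
        return statements;
    }
}
```

Each returned statement would then be executed on the connection before (or after) the main select query.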
[GitHub] nifi pull request #3158: NIFI-5802: Add QueryRecord nullable field support
GitHub user ijokarumawak opened a pull request: https://github.com/apache/nifi/pull/3158

NIFI-5802: Add QueryRecord nullable field support

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ijokarumawak/nifi nifi-5802

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi/pull/3158.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #3158

commit 8bf29d4c56fb1f507d987c8aea774e5345b286f7
Author: Koji Kawamura
Date: 2018-11-08T03:10:36Z

    NIFI-5802: Add QueryRecord nullable field support

---
[GitHub] nifi pull request #3156: NIFI-5780 Add pre and post statements to ExecuteSQL...
Github user yjhyjhyjh0 commented on a diff in the pull request: https://github.com/apache/nifi/pull/3156#discussion_r231753838

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractExecuteSQL.java ---
```
+    public static final PropertyDescriptor SQL_PRE_QUERY = new PropertyDescriptor.Builder()
+            .name("sql-pre-query")
+            .displayName("SQL pre-query")
```
--- End diff --

Hi Peter, thanks for the feedback. I intentionally typed it as 'SQL pre-query' to align with the SQL_SELECT_QUERY property displayName 'SQL select query'. For example, currently:

- SQL_PRE_QUERY displays '**SQL pre-query**'
- SQL_SELECT_QUERY displays '**SQL select query**'
- SQL_POST_QUERY displays '**SQL post-query**'

If modified as you mention (I actually tried this version in my first attempt):

- SQL_PRE_QUERY displays '**SQL Pre-Query**'
- SQL_SELECT_QUERY displays '**SQL select query**'
- SQL_POST_QUERY displays '**SQL Post-Query**'

Please let me know your thoughts on this alignment, thanks.

---
[GitHub] nifi issue #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPConnectio...
Github user colindean commented on the issue: https://github.com/apache/nifi/pull/3133 > an Idle connection was one that had been returned to the pool? That's what I would think, but I couldn't seem to actually trigger it. Reading through the API docs some more, I didn't think to try checking the idle connection count _after_ closing a connection. I'll try that tomorrow. ---
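The distinction being discussed (a connection counts as idle once it has been returned to the pool) can be illustrated with a toy pool. This is a deliberately simplified model, not commons-dbcp's BasicDataSource:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy pool model (NOT commons-dbcp): an object is "active" while borrowed
// and becomes "idle" only when it is released back to the pool.
public class ToyPool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private int active = 0;

    public T borrow(T fresh) {
        active++;
        // reuse an idle object if one exists, otherwise take the fresh one
        return idle.isEmpty() ? fresh : idle.pop();
    }

    public void release(T obj) {
        active--;
        idle.push(obj); // returning the object is what makes it idle
    }

    public int getNumActive() { return active; }
    public int getNumIdle()   { return idle.size(); }
}
```

Under this model, getNumIdle() only rises after release(), which matches the suggestion above to check the idle count after closing a connection.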
[GitHub] nifi pull request #3113: NIFI-5724 making the database connection autocommit...
Github user viswaug commented on a diff in the pull request: https://github.com/apache/nifi/pull/3113#discussion_r231747175

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutSQL.java ---
```
@@ -134,6 +134,14 @@
         .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
         .build();

+    static final PropertyDescriptor AUTO_COMMIT = new PropertyDescriptor.Builder()
+            .name("database-session-autocommit")
+            .displayName("Database session autocommit value")
+            .description("The autocommit mode to set on the database connection being used.")
+            .allowableValues("true", "false")
+            .defaultValue("false")
+            .build();
```
--- End diff --

@ijokarumawak that git(hub) part was much easier than I expected it to be... I checked in my changes... let me know if you need anything tweaked or changed

---
[GitHub] nifi pull request #3113: NIFI-5724 making the database connection autocommit...
Github user ijokarumawak commented on a diff in the pull request: https://github.com/apache/nifi/pull/3113#discussion_r231743426

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutSQL.java ---
```
@@ -134,6 +134,14 @@
         .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
         .build();

+    static final PropertyDescriptor AUTO_COMMIT = new PropertyDescriptor.Builder()
+            .name("database-session-autocommit")
+            .displayName("Database session autocommit value")
+            .description("The autocommit mode to set on the database connection being used.")
+            .allowableValues("true", "false")
+            .defaultValue("false")
+            .build();
```
--- End diff --

@viswaug Thanks for the Snowflake doc link and explanation. Looking forward to seeing new commits for further review. In order to update your PR, you just need to:

1. Make code changes
2. Add the changed files for a commit: `git add <files>` or `git add -A` (all changed files)
3. Commit with some comments: `git commit`; this makes a commit to your local branch
4. Push the commit to your remote branch: `git push origin configurable_autocommit_putsql`. This command adds the new commit to this PR.

---
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678937#comment-16678937 ]

ASF GitHub Bot commented on NIFI-5790:
--
Github user colindean commented on the issue: https://github.com/apache/nifi/pull/3133

Ah, I did that through checking the minIdle and maxIdle properties.

> DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
>
> Key: NIFI-5790
> URL: https://issues.apache.org/jira/browse/NIFI-5790
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Affects Versions: 1.8.0
> Reporter: Colin Dean
> Priority: Major
> Labels: DBCP, database
>
> While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html] that NiFi appears _not_ to have controller service configuration options associated with [Apache Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html] {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I think should be both set to 0 in my particular use case.
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something even in the minutes range and satisfy my use case (a connection need not be released _immediately_ but within a reasonable period of time), but this option is also not available.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5801) Evaluating Expression Language can in many cases be made much more efficient
Mark Payne created NIFI-5801:

Summary: Evaluating Expression Language can in many cases be made much more efficient
Key: NIFI-5801
URL: https://issues.apache.org/jira/browse/NIFI-5801
Project: Apache NiFi
Issue Type: Bug
Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne

When a StandardPropertyValue is obtained and evaluateAttributeExpressions is called, it builds the entire Evaluator Tree each time. This was done to ensure that Evaluator.evaluate() is called only once. However, the requirement to call this only once was introduced as a way to have anyMatchingAttribute, anyAttribute, allMatchingAttributes, allAttributes, etc. methods work, and these are rarely used. I.e., we introduced semantics that significantly slow performance in order to provide functionality that is used maybe 1% of the time. Instead, we should optimize for the 99% use case and incur a penalty, if necessary, in the 1% use case instead.

Profiling the ConsumeKafkaRecord processor shows that 80% of the time in that method is evaluating Expression Language for `${schema.name}` to determine which schema should be used. We can likely make this evaluation just as quick as attributeMap.get("schema.name") by pre-building the Evaluators and re-using them.
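The proposed optimization (pre-building Evaluators once and re-using them) can be sketched roughly as follows. The types and the trivial compile step here are hypothetical stand-ins, not NiFi's actual Expression Language classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of caching compiled expressions keyed by their text, so the
// expensive parse/build happens at most once per distinct expression.
// (Hypothetical types, NOT NiFi's EL implementation.)
public class EvaluatorCache {
    // A "compiled" expression: just a function from attributes to a value.
    interface Compiled extends Function<Map<String, String>, String> {}

    private final Map<String, Compiled> cache = new ConcurrentHashMap<>();

    // Stand-in for the expensive evaluator-tree construction.
    private Compiled compile(String expression) {
        if (expression.startsWith("${") && expression.endsWith("}")) {
            String attr = expression.substring(2, expression.length() - 1);
            return attrs -> attrs.get(attr); // simple attribute lookup
        }
        return attrs -> expression; // literal text
    }

    public String evaluate(String expression, Map<String, String> attributes) {
        // computeIfAbsent compiles only on the first call for this expression
        return cache.computeIfAbsent(expression, this::compile).apply(attributes);
    }
}
```

With the compiled form cached, evaluating `${schema.name}` repeatedly approaches the cost of a plain map lookup, which is the behavior the issue describes as the goal.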
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678932#comment-16678932 ]

ASF GitHub Bot commented on NIFI-5790:
--
Github user patricker commented on the issue: https://github.com/apache/nifi/pull/3133

My thought was you could look for the idle count and see if it was 0, 8, etc... based on the config, and not worry about testing the timeouts for now.
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678885#comment-16678885 ]

ASF GitHub Bot commented on NIFI-5790:
--
Github user patricker commented on the issue: https://github.com/apache/nifi/pull/3133

@colindean I know Matt responded right after I did before, what are your thoughts on working on enabling unit tests by exposing the idle/active connection counts?
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678921#comment-16678921 ]

ASF GitHub Bot commented on NIFI-5790:
--
Github user colindean commented on the issue: https://github.com/apache/nifi/pull/3133

> What if you exposed the number of active and idle connections in the connection pool as properties on the DBCPConnectionPool? These are available by calling getNumActive() and getNumIdle(). Or you could call listAllObjects() and get back the full pool on the dataSource object.

I started down this track but I couldn't actually seem to trigger idle within a reasonable test setup. I can't for the life of me tell what actually makes a connection go idle, only really what removes it _when_ it's idle. Thoughts on how to proceed?
[jira] [Commented] (NIFI-4621) Allow inputs to ListSFTP
[ https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678904#comment-16678904 ]

Kislay Kumar commented on NIFI-4621:

[~markap14], [~patricker]: Thanks for the quick help. I am hoping to come back quickly for review.

> Allow inputs to ListSFTP
>
> Key: NIFI-4621
> URL: https://issues.apache.org/jira/browse/NIFI-4621
> Project: Apache NiFi
> Issue Type: Improvement
> Affects Versions: 1.4.0
> Reporter: Soumya Shanta Ghosh
> Assignee: Kislay Kumar
> Priority: Critical
>
> ListSFTP supports listing of the supplied directory (Remote Path) out-of-the-box on the supplied "Hostname" using the "Username" and "Password" / "Private Key Passphrase".
> The password can change at a regular interval (depending on organization policy), or the Hostname or the Remote Path can change based on some other requirement.
> This is a case to allow ListSFTP to leverage the use of NiFi Expression Language so that the values of Hostname, Password and/or Remote Path can be set based on the attributes of an incoming flow file.
[jira] [Resolved] (NIFI-1161) Create RouteCSV Processor
[ https://issues.apache.org/jira/browse/NIFI-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Payne resolved NIFI-1161.
Resolution: Won't Do

This should now be handled by PartitionRecord, QueryRecord, etc.

> Create RouteCSV Processor
>
> Key: NIFI-1161
> URL: https://issues.apache.org/jira/browse/NIFI-1161
> Project: Apache NiFi
> Issue Type: Task
> Components: Extensions
> Reporter: Mark Payne
> Priority: Major
>
> We have a RouteText processor built for 0.4.0. This is very powerful, but a very common use case will be routing and grouping CSV data. For this use case, we can make the configuration far easier by creating a RouteCSV Processor instead of requiring the user to enter complicated Regular Expressions.
> For instance, rather than a "Grouping Regular Expression", we should provide a "Grouping Fields" property where the user can simply enter a comma-separated list of CSV fields (numeric or perhaps column header names?)
> Also, rather than comparing user-defined rules against a line of text, the rules could be compared against a specific CSV field.
[GitHub] nifi issue #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPConnectio...
Github user patricker commented on the issue: https://github.com/apache/nifi/pull/3133 @colindean I know Matt responded right after I did before, what are your thoughts on working on enabling unit tests by exposing the idle/active connection counts? ---
[jira] [Commented] (NIFI-5800) If RecordSchema has an inner field that references a schema recursively by name, hashCode() throws StackOverflowError
[ https://issues.apache.org/jira/browse/NIFI-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678845#comment-16678845 ]

ASF GitHub Bot commented on NIFI-5800:
--
GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/3157

NIFI-5800: Do not recursively call hashCode on child schema for Record Field Types
You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/markap14/nifi NIFI-5800

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi/pull/3157.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #3157

commit 111daf241d867ef7624a7f7304d286fce43bbacb
Author: Mark Payne
Date: 2018-11-07T22:00:51Z

    NIFI-5800: Do not recursively call hashCode on child schema for Record Field Types

> If RecordSchema has an inner field that references a schema recursively by name, hashCode() throws StackOverflowError
>
> Key: NIFI-5800
> URL: https://issues.apache.org/jira/browse/NIFI-5800
> Project: Apache NiFi
> Issue Type: Bug
> Reporter: Mark Payne
> Priority: Major
>
> If we have a schema where a field is of type RECORD and references an outer-level schema, we get StackOverflowError. For example:
>
> ```
> {
>   "name": "person",
>   "namespace": "nifi",
>   "type": "record",
>   "fields": [
>     { "name": "name", "type": "string" },
>     { "name": "mother", "type": "person" }
>   ]
> }
> ```
>
> In this case, if we attempt to add this schema to a HashMap, we get the following error:
>
> ```
> 2018-11-07 19:09:33,021 ERROR [Timer-Driven Process Thread-38] o.a.n.p.k.pubsub.ConsumeKafkaRecord_2_0 ConsumeKafkaRecord_2_0[id=ef6bd50b-0166-1000--f55a7995] Exception while processing data from kafka so will close the lease org.apache.nifi.processors.kafka.pubsub.ConsumerPool$SimpleConsumerLease@26706081 due to java.lang.StackOverflowError: java.lang.StackOverflowError
> java.lang.StackOverflowError: null
>     at java.util.AbstractList.hashCode(AbstractList.java:540)
>     at java.util.Collections$UnmodifiableList.hashCode(Collections.java:1307)
>     at org.apache.nifi.serialization.SimpleRecordSchema.hashCode(SimpleRecordSchema.java:172)
>     at org.apache.nifi.serialization.record.type.RecordDataType.hashCode(RecordDataType.java:45)
>     at org.apache.nifi.serialization.record.type.ArrayDataType.hashCode(ArrayDataType.java:44)
>     at org.apache.nifi.serialization.record.RecordField.hashCode(RecordField.java:108)
>     at java.util.AbstractList.hashCode(AbstractList.java:541)
>     at java.util.Collections$UnmodifiableList.hashCode(Collections.java:1307)
>     at org.apache.nifi.serialization.SimpleRecordSchema.hashCode(SimpleRecordSchema.java:172)
>     at
> ```
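The general shape of a fix for this class of bug, hashing only stable identifying data such as names and never the child schema object itself, can be sketched like this. The `Schema` class is a simplified stand-in, not NiFi's SimpleRecordSchema:

```java
import java.util.List;
import java.util.Objects;

// Simplified illustration (NOT NiFi's SimpleRecordSchema): for a schema that
// may reference itself through its children, hashCode() must not recurse into
// the child schema objects, or it will overflow the stack.
public class Schema {
    final String name;
    List<Schema> children; // may contain 'this' for recursive references

    Schema(String name) { this.name = name; }

    @Override
    public int hashCode() {
        // Hash the child *names*, not the child objects, to break the recursion.
        int childNames = 0;
        if (children != null) {
            for (Schema child : children) {
                childNames = 31 * childNames + child.name.hashCode();
            }
        }
        return Objects.hash(name, childNames);
    }
}
```

With this approach, a "person" schema whose "mother" field points back at the "person" schema itself hashes in constant depth instead of throwing StackOverflowError.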
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678821#comment-16678821 ]

ASF GitHub Bot commented on NIFI-5790:
--
Github user colindean commented on the issue: https://github.com/apache/nifi/pull/3133

Test failure now seems to be a timeout in another module:

```
[ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.287 s <<< FAILURE! - in org.apache.nifi.processors.standard.TestHandleHttpRequest
[ERROR] testMultipartFormDataRequest(org.apache.nifi.processors.standard.TestHandleHttpRequest) Time elapsed: 30.012 s <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 3 milliseconds
    at org.apache.nifi.processors.standard.TestHandleHttpRequest.testMultipartFormDataRequest(TestHandleHttpRequest.java:271)
```
[jira] [Resolved] (NIFI-4065) toString() method of StandardConnection provides wrong Source ID
[ https://issues.apache.org/jira/browse/NIFI-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Payne resolved NIFI-4065.
Resolution: Fixed

Was fixed in NIFI-5516

> toString() method of StandardConnection provides wrong Source ID
>
> Key: NIFI-4065
> URL: https://issues.apache.org/jira/browse/NIFI-4065
> Project: Apache NiFi
> Issue Type: Bug
> Reporter: Mark Payne
> Assignee: Mark Payne
> Priority: Minor
[jira] [Resolved] (NIFI-4427) Default for FlowFile's filename should be the FlowFile's UUID
[ https://issues.apache.org/jira/browse/NIFI-4427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne resolved NIFI-4427. -- Resolution: Fixed Resolved by NIFI-5533 > Default for FlowFile's filename should be the FlowFile's UUID > - > > Key: NIFI-4427 > URL: https://issues.apache.org/jira/browse/NIFI-4427 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Priority: Major > Attachments: Multiple Duplicate Filenames.png > > > Currently, when a new FlowFile is created without any parents, the filename > is set to System.nanoTime(). This is likely to result in filename collisions > when operating at a high rate and the extra System call could be avoided. > Since we are already generating a unique UUID we should just use that as the > filename. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4427) Default for FlowFile's filename should be the FlowFile's UUID
[ https://issues.apache.org/jira/browse/NIFI-4427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-4427: - Fix Version/s: 1.9.0 > Default for FlowFile's filename should be the FlowFile's UUID > - > > Key: NIFI-4427 > URL: https://issues.apache.org/jira/browse/NIFI-4427 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Priority: Major > Fix For: 1.9.0 > > Attachments: Multiple Duplicate Filenames.png > > > Currently, when a new FlowFile is created without any parents, the filename > is set to System.nanoTime(). This is likely to result in filename collisions > when operating at a high rate and the extra System call could be avoided. > Since we are already generating a unique UUID we should just use that as the > filename. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-5687) When using load balancing and saving to the flow registry, the "compression" strategy is not saved
[ https://issues.apache.org/jira/browse/NIFI-5687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne resolved NIFI-5687. -- Resolution: Fixed This was resolved as part of another commit before the feature was ever released. > When using load balancing and saving to the flow registry, the "compression" > strategy is not saved > -- > > Key: NIFI-5687 > URL: https://issues.apache.org/jira/browse/NIFI-5687 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > > To replicate: Create a flow and configure load balancing with "Compress only > attributes." Save the flow to the flow registry. If you then change the value > of the compression, it doesn't show as a local change. If you import the flow > again, it has "Do Not Compress" chosen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5800) If RecordSchema has an inner field that references a schema recursively by name, hashCode() throws StackOverflowError
Mark Payne created NIFI-5800:
--
Summary: If RecordSchema has an inner field that references a schema recursively by name, hashCode() throws StackOverflowError
Key: NIFI-5800
URL: https://issues.apache.org/jira/browse/NIFI-5800
Project: Apache NiFi
Issue Type: Bug
Reporter: Mark Payne

If we have a schema where a field is of type RECORD and references an outer-level schema, we get a StackOverflowError. For example:

{
  "name": "person",
  "namespace": "nifi",
  "type": "record",
  "fields": [
    { "name": "name", "type": "string" },
    { "name": "mother", "type": "person" }
  ]
}

In this case, if we attempt to add this schema to a HashMap, we get the following error:

2018-11-07 19:09:33,021 ERROR [Timer-Driven Process Thread-38] o.a.n.p.k.pubsub.ConsumeKafkaRecord_2_0 ConsumeKafkaRecord_2_0[id=ef6bd50b-0166-1000--f55a7995] Exception while processing data from kafka so will close the lease org.apache.nifi.processors.kafka.pubsub.ConsumerPool$SimpleConsumerLease@26706081 due to java.lang.StackOverflowError: java.lang.StackOverflowError
java.lang.StackOverflowError: null
    at java.util.AbstractList.hashCode(AbstractList.java:540)
    at java.util.Collections$UnmodifiableList.hashCode(Collections.java:1307)
    at org.apache.nifi.serialization.SimpleRecordSchema.hashCode(SimpleRecordSchema.java:172)
    at org.apache.nifi.serialization.record.type.RecordDataType.hashCode(RecordDataType.java:45)
    at org.apache.nifi.serialization.record.type.ArrayDataType.hashCode(ArrayDataType.java:44)
    at org.apache.nifi.serialization.record.RecordField.hashCode(RecordField.java:108)
    at java.util.AbstractList.hashCode(AbstractList.java:541)
    at java.util.Collections$UnmodifiableList.hashCode(Collections.java:1307)
    at org.apache.nifi.serialization.SimpleRecordSchema.hashCode(SimpleRecordSchema.java:172)
    at org.apache.nifi.serialization.record.type.RecordDataType.hashCode(RecordDataType.java:45)
    at org.apache.nifi.serialization.record.type.ArrayDataType.hashCode(ArrayDataType.java:44)
    at org.apache.nifi.serialization.record.RecordField.hashCode(RecordField.java:108)
    ... (the same six-frame cycle repeats until the stack overflows)

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
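The cycle in the trace above (RecordField → child schema's fields → RecordField → ...) can be reproduced, and broken, with plain-Java stand-ins. `Schema` and `Field` below are hypothetical simplifications of NiFi's `SimpleRecordSchema` and `RecordField` (not the project's actual classes), and hashing only the schema's identifying name is a sketch of one way to keep `hashCode()` from recursing through a self-referencing child schema:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RecursiveSchemaHash {

    // Hypothetical stand-in for SimpleRecordSchema.
    static class Schema {
        final String name;
        final List<Field> fields = new ArrayList<>();
        Schema(String name) { this.name = name; }

        // Cycle-safe: hash the schema's identity, not its (possibly
        // self-referencing) fields. A naive implementation that hashed
        // `fields` would recurse Schema -> Field -> Schema -> ... and
        // throw StackOverflowError for the "person" schema below.
        @Override
        public int hashCode() { return name.hashCode(); }
    }

    // Hypothetical stand-in for RecordField.
    static class Field {
        final String name;
        final Schema childSchema;  // may point back at an enclosing schema
        Field(String name, Schema childSchema) {
            this.name = name;
            this.childSchema = childSchema;
        }

        @Override
        public int hashCode() {
            int result = name.hashCode();
            // Safe only because Schema.hashCode() does not recurse back into fields.
            result = 31 * result + (childSchema == null ? 0 : childSchema.hashCode());
            return result;
        }
    }

    public static void main(String[] args) {
        Schema person = new Schema("person");
        person.fields.add(new Field("name", null));
        person.fields.add(new Field("mother", person));  // self-reference, as in the Avro example

        // Adding to a HashMap exercises hashCode(); with the naive recursive
        // version this is exactly where the StackOverflowError was thrown.
        Map<Schema, String> schemas = new HashMap<>();
        schemas.put(person, "ok");
        System.out.println(schemas.get(person));  // prints "ok"
    }
}
```

The actual NIFI-5800 patch may differ in detail; the point of the sketch is only that a cyclic structure needs a `hashCode()` that terminates without visiting every reachable child.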
[GitHub] nifi pull request #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPCo...
Github user colindean commented on a diff in the pull request: https://github.com/apache/nifi/pull/3133#discussion_r231660071 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java --- @@ -164,6 +164,71 @@ public ValidationResult validate(final String subject, final String input, final .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) .build(); +public static final PropertyDescriptor MIN_IDLE = new PropertyDescriptor.Builder() +.displayName("Minimum Idle Connections") +.name("dbcp-mim-idle-conns") +.description("The minimum number of connections that can remain idle in the pool, without extra ones being " + +"created, or zero to create none.") +.defaultValue("0") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.build(); + +public static final PropertyDescriptor MAX_IDLE = new PropertyDescriptor.Builder() +.displayName("Max Idle Connections") +.name("dbcp-max-idle-conns") +.description("The maximum number of connections that can remain idle in the pool, without extra ones being " + +"released, or negative for no limit.") +.defaultValue("8") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.build(); + +public static final PropertyDescriptor MAX_CONN_LIFETIME = new PropertyDescriptor.Builder() +.displayName("Max Connection Lifetime") +.name("dbcp-max-conn-lifetime") +.description("The maximum lifetime in milliseconds of a connection. After this time is exceeded the " + +"connection will fail the next activation, passivation or validation test. A value of zero or less " + +"means the connection has an infinite lifetime.") +.defaultValue("-1") --- End diff -- Looking into it, `NONNEGATIVE_INTEGER_VALIDATOR` lacks the custom time period validation. 
The new time period options should continue to use the `CUSTOM_TIME_PERIOD_VALIDATOR` defined in this class but I'll change `MIN_IDLE` to use `NONNEGATIVE_INTEGER_VALIDATOR`. `MAX_IDLE` needs to stay `INTEGER_VALIDATOR` because 0 is a valid value for it. ---
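The distinction drawn here — `MIN_IDLE` should reject negatives, while `MAX_IDLE` must keep accepting them because a negative value means "no limit" — can be sketched with stand-in validator methods. These are illustrative only; NiFi's real `StandardValidators` return `ValidationResult` objects rather than booleans:

```java
public class ValidatorSketch {

    // Stand-in for the spirit of StandardValidators.INTEGER_VALIDATOR:
    // any parseable integer is accepted, including negatives.
    static boolean isValidInteger(String input) {
        try {
            Integer.parseInt(input);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    // Stand-in for the spirit of NONNEGATIVE_INTEGER_VALIDATOR:
    // parseable AND >= 0.
    static boolean isValidNonNegativeInteger(String input) {
        try {
            return Integer.parseInt(input) >= 0;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // MIN_IDLE: a negative minimum is meaningless, so the
        // nonnegative validator is the better fit.
        System.out.println(isValidNonNegativeInteger("-1"));  // false
        // MAX_IDLE: negative means "no limit" in commons-dbcp,
        // so the unrestricted integer validator must stay.
        System.out.println(isValidInteger("-1"));             // true
    }
}
```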
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678725#comment-16678725 ] ASF GitHub Bot commented on NIFI-5790: -- Github user colindean commented on a diff in the pull request: https://github.com/apache/nifi/pull/3133#discussion_r231660071 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java --- @@ -164,6 +164,71 @@ public ValidationResult validate(final String subject, final String input, final .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) .build(); +public static final PropertyDescriptor MIN_IDLE = new PropertyDescriptor.Builder() +.displayName("Minimum Idle Connections") +.name("dbcp-mim-idle-conns") +.description("The minimum number of connections that can remain idle in the pool, without extra ones being " + +"created, or zero to create none.") +.defaultValue("0") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.build(); + +public static final PropertyDescriptor MAX_IDLE = new PropertyDescriptor.Builder() +.displayName("Max Idle Connections") +.name("dbcp-max-idle-conns") +.description("The maximum number of connections that can remain idle in the pool, without extra ones being " + +"released, or negative for no limit.") +.defaultValue("8") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.build(); + +public static final PropertyDescriptor MAX_CONN_LIFETIME = new PropertyDescriptor.Builder() +.displayName("Max Connection Lifetime") +.name("dbcp-max-conn-lifetime") +.description("The maximum lifetime in milliseconds of a connection. After this time is exceeded the " + +"connection will fail the next activation, passivation or validation test. 
A value of zero or less " + +"means the connection has an infinite lifetime.") +.defaultValue("-1") --- End diff -- Looking into it, `NONNEGATIVE_INTEGER_VALIDATOR` lacks the custom time period validation. The new time period options should continue to use the `CUSTOM_TIME_PERIOD_VALIDATOR` defined in this class but I'll change `MIN_IDLE` to use `NONNEGATIVE_INTEGER_VALIDATOR`. `MAX_IDLE` needs to stay `INTEGER_VALIDATOR` because 0 is a valid value for it. > DBCPConnectionPool configuration should expose underlying connection idle and > eviction configuration > > > Key: NIFI-5790 > URL: https://issues.apache.org/jira/browse/NIFI-5790 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Colin Dean >Priority: Major > Labels: DBCP, database > > While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool > documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html] > that NiFi appears _not_ to have controller service configuration options > associated with [Apache > Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html] > {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I > think should be both set to 0 in my particular use case. > Alternatively, I think I could set {{maxConnLifetimeMillis}} to something > even in the minutes range and satisfy my use case (a connection need not be > released _immediately_ but within a reasonable period of time), but this > option is also not available. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4621) Allow inputs to ListSFTP
[ https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678720#comment-16678720 ] Mark Payne commented on NIFI-4621: -- [~kislayom] I have assigned the Jira to you. You should now be able to assign Jiras to yourself, as well. [~patricker] FYI - no need for Joe to grant permissions - anyone on the PMC, I believe, should have the ability to do so. > Allow inputs to ListSFTP > > > Key: NIFI-4621 > URL: https://issues.apache.org/jira/browse/NIFI-4621 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.4.0 >Reporter: Soumya Shanta Ghosh >Assignee: Kislay Kumar >Priority: Critical > > ListSFTP supports listing of the supplied directory (Remote Path) > out-of-the-box on the supplied "Hostname" using the 'Username" and 'Password" > / "Private Key Passphrase". > The password can change at a regular interval (depending on organization > policy) or the Hostname or the Remote Path can change based on some other > requirement. > This is a case to allow ListSFTP to leverage the use of Nifi Expression > language so that the values of Hostname, Password and/or Remote Path can be > set based on the attributes of an incoming flow file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-4621) Allow inputs to ListSFTP
[ https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne reassigned NIFI-4621: Assignee: Kislay Kumar (was: Puspendu Banerjee) > Allow inputs to ListSFTP > > > Key: NIFI-4621 > URL: https://issues.apache.org/jira/browse/NIFI-4621 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.4.0 >Reporter: Soumya Shanta Ghosh >Assignee: Kislay Kumar >Priority: Critical > > ListSFTP supports listing of the supplied directory (Remote Path) > out-of-the-box on the supplied "Hostname" using the 'Username" and 'Password" > / "Private Key Passphrase". > The password can change at a regular interval (depending on organization > policy) or the Hostname or the Remote Path can change based on some other > requirement. > This is a case to allow ListSFTP to leverage the use of Nifi Expression > language so that the values of Hostname, Password and/or Remote Path can be > set based on the attributes of an incoming flow file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPCo...
Github user colindean commented on a diff in the pull request: https://github.com/apache/nifi/pull/3133#discussion_r231654261 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java --- @@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, final String input, final .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) .build(); +public static final PropertyDescriptor MIN_IDLE = new PropertyDescriptor.Builder() +.name("Minimum Idle Connections") +.description("The minimum number of connections that can remain idle in the pool, without extra ones being " + +"created, or zero to create none.") +.defaultValue("0") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.sensitive(false) +.build(); + +public static final PropertyDescriptor MAX_IDLE = new PropertyDescriptor.Builder() +.name("Max Idle Connections") +.description("The maximum number of connections that can remain idle in the pool, without extra ones being " + +"released, or negative for no limit.") +.defaultValue("8") --- End diff -- @mattyb149 Yes, it's 8. Should I reach into the impl to refer to the default constants? ---
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678702#comment-16678702 ] ASF GitHub Bot commented on NIFI-5790: -- Github user colindean commented on a diff in the pull request: https://github.com/apache/nifi/pull/3133#discussion_r231654261 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java --- @@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, final String input, final .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) .build(); +public static final PropertyDescriptor MIN_IDLE = new PropertyDescriptor.Builder() +.name("Minimum Idle Connections") +.description("The minimum number of connections that can remain idle in the pool, without extra ones being " + +"created, or zero to create none.") +.defaultValue("0") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.sensitive(false) +.build(); + +public static final PropertyDescriptor MAX_IDLE = new PropertyDescriptor.Builder() +.name("Max Idle Connections") +.description("The maximum number of connections that can remain idle in the pool, without extra ones being " + +"released, or negative for no limit.") +.defaultValue("8") --- End diff -- @mattyb149 Yes, it's 8. Should I reach into the impl to refer to the default constants? 
> DBCPConnectionPool configuration should expose underlying connection idle and > eviction configuration > > > Key: NIFI-5790 > URL: https://issues.apache.org/jira/browse/NIFI-5790 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Colin Dean >Priority: Major > Labels: DBCP, database > > While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool > documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html] > that NiFi appears _not_ to have controller service configuration options > associated with [Apache > Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html] > {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I > think should be both set to 0 in my particular use case. > Alternatively, I think I could set {{maxConnLifetimeMillis}} to something > even in the minutes range and satisfy my use case (a connection need not be > released _immediately_ but within a reasonable period of time), but this > option is also not available. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #3113: NIFI-5724 making the database connection autocommit...
Github user viswaug commented on a diff in the pull request: https://github.com/apache/nifi/pull/3113#discussion_r231638762

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutSQL.java ---

@@ -134,6 +134,14 @@
     .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
     .build();

+static final PropertyDescriptor AUTO_COMMIT = new PropertyDescriptor.Builder()
+    .name("database-session-autocommit")
+    .displayName("Database session autocommit value")
+    .description("The autocommit mode to set on the database connection being used.")
+    .allowableValues("true", "false")
+    .defaultValue("false")
+    .build();

--- End diff --

@ijokarumawak Here is the Snowflake documentation that refers to the locks being held forever by abruptly disconnected sessions: https://docs.snowflake.net/manuals/sql-reference/transactions.html#aborting-transactions

I have also confirmed this with the Snowflake support ticket, and the suggested resolution was to set the autocommit value to true:

> I did reviewed the query id thanks for providing it. The query is associated to session id 2392474452590 and from the session id we found that there was an alter session set autocommit=false was executed (query id for your reference 10d44ff5-69a4-4d8c-91c3-a206c4a126b8) and after that there was no commit executed explicitly. This lead to the open transactions in the session and hence there was a lock created.
> By default autocommit is true and, once all the transaction is completed, it gets automatically committed. The commit query will not be visible in the history as this is a background task. For best practices, we recommend to manually commit the transactions if the session has set to autocommit=false.
> Yes, there is a difference between a session being "closed" versus "terminated abruptly". Session being "closed" implies to those sessions which is closed manually after all the transactions is completed. Session being "terminated abruptly" implies to those sessions which terminates due to various reasons like network issues, system outage, etc.

I am done making the changes you had requested. I will have a PR out soon. I just need to hone my git skills to combine these commits and send a PR ... still figuring that part out ...

---
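The behavior being made configurable above follows a save/set/restore pattern around the connection's autocommit flag: remember the original setting, apply the configured one, commit explicitly only when autocommit is off, and restore the original value before the connection goes back to the pool. The sketch below shows that pattern against a minimal hypothetical `TxSession` interface (a stand-in for the relevant slice of `java.sql.Connection`, so it runs without a database); it is not PutSQL's actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class AutoCommitSketch {

    // Hypothetical stand-in for the part of java.sql.Connection we need.
    interface TxSession {
        boolean getAutoCommit();
        void setAutoCommit(boolean value);
        void execute(String sql);
        void commit();
    }

    // Save/set/restore: honor the configurable autocommit property, commit
    // explicitly only when autocommit is off (so no locks are left dangling),
    // and always restore the original setting for the next pool user.
    static void runWithAutoCommit(TxSession session, boolean configuredAutoCommit, String sql) {
        boolean original = session.getAutoCommit();
        try {
            session.setAutoCommit(configuredAutoCommit);
            session.execute(sql);
            if (!configuredAutoCommit) {
                session.commit();  // explicit commit releases held locks
            }
        } finally {
            session.setAutoCommit(original);
        }
    }

    public static void main(String[] args) {
        final List<String> log = new ArrayList<>();
        TxSession fake = new TxSession() {
            boolean autoCommit = true;
            public boolean getAutoCommit() { return autoCommit; }
            public void setAutoCommit(boolean v) { autoCommit = v; log.add("autocommit=" + v); }
            public void execute(String sql) { log.add("exec:" + sql); }
            public void commit() { log.add("commit"); }
        };
        runWithAutoCommit(fake, false, "INSERT INTO T VALUES (1)");
        // prints [autocommit=false, exec:INSERT INTO T VALUES (1), commit, autocommit=true]
        System.out.println(log);
    }
}
```

With `configuredAutoCommit = true` (the Snowflake-friendly setting discussed above), the explicit `commit()` is skipped and the driver commits each statement itself.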
[jira] [Commented] (NIFI-5724) Make the autocommit value in the PutSQL processor configurable
[ https://issues.apache.org/jira/browse/NIFI-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678656#comment-16678656 ] ASF GitHub Bot commented on NIFI-5724: -- Github user viswaug commented on a diff in the pull request: https://github.com/apache/nifi/pull/3113#discussion_r231638762 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutSQL.java --- @@ -134,6 +134,14 @@ .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) .build(); +static final PropertyDescriptor AUTO_COMMIT = new PropertyDescriptor.Builder() +.name("database-session-autocommit") +.displayName("Database session autocommit value") +.description("The autocommit mode to set on the database connection being used.") +.allowableValues("true", "false") +.defaultValue("false") +.build(); --- End diff -- @ijokarumawak Here is the snowflake documentation that refers to the locks being held for ever during abruptly disconnected sessions. https://docs.snowflake.net/manuals/sql-reference/transactions.html#aborting-transactions I have also confirmed this with the snowflake support ticket and the resolution suggested was to set the autocommit value to true. > I did reviewed the query id thanks for providing it. The query is associated to session id 2392474452590 and from the session id we found that there was an alter session set autocommit=false was executed (query id for your reference 10d44ff5-69a4-4d8c-91c3-a206c4a126b8) and after that there was no commit executed explicitly. This lead to the open transactions in the session and hence there was a lock created. > By default autocommit is true and, once all the transaction is completed, it gets automatically committed. The commit query will not be visible in the history as this is a background task. For best practices, we recommend to manually commit the transactions if the session has set to autocommit=false . 
> Yes, there is a difference between a session being "closed" versus "terminated abruptly". Session being "closed" implies to those sessions which is closed manually after all the transactions is completed . Session being "terminated abruptly" implies to those sessions which terminates due to various reason like network issues, system outage ..etc. I am done making the changes you had requested. I will have a PR out soon. I just need to hone my git skills to combine these commits and send a PR ... still figuring that part out ... > Make the autocommit value in the PutSQL processor configurable > -- > > Key: NIFI-5724 > URL: https://issues.apache.org/jira/browse/NIFI-5724 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: vish uma >Priority: Minor > > The PutSQL processor currently always sets the autocommit value on the > database session to false before the SQL statement is run and resets it back > to the original value after. > i am not sure if the autocommit value is hardcoded to false for a reason, if > it is, please let me know. > This is causing an issue with the snowflake DB where abruptly disconnected > sessions do not release the locks they have taken. > i would like to make this autocommit value configurable. I can submit a patch > for this if there is no objections. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4621) Allow inputs to ListSFTP
[ https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678545#comment-16678545 ] Peter Wicks commented on NIFI-4621: --- [~kislayom], I can't do this with your current permissions. [~joewitt] will need to grant you permissions to become an assignee. Hopefully he'll see this and grant you permission. Either way, you can work on it and submit a PR. > Allow inputs to ListSFTP > > > Key: NIFI-4621 > URL: https://issues.apache.org/jira/browse/NIFI-4621 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.4.0 >Reporter: Soumya Shanta Ghosh >Assignee: Puspendu Banerjee >Priority: Critical > > ListSFTP supports listing of the supplied directory (Remote Path) > out-of-the-box on the supplied "Hostname" using the 'Username" and 'Password" > / "Private Key Passphrase". > The password can change at a regular interval (depending on organization > policy) or the Hostname or the Remote Path can change based on some other > requirement. > This is a case to allow ListSFTP to leverage the use of Nifi Expression > language so that the values of Hostname, Password and/or Remote Path can be > set based on the attributes of an incoming flow file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5780) Add pre and post statements to ExecuteSQL and ExecuteSQLRecord
[ https://issues.apache.org/jira/browse/NIFI-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678537#comment-16678537 ] ASF GitHub Bot commented on NIFI-5780: -- Github user patricker commented on a diff in the pull request: https://github.com/apache/nifi/pull/3156#discussion_r231596320 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractExecuteSQL.java --- @@ -82,6 +84,16 @@ .identifiesControllerService(DBCPService.class) .build(); +public static final PropertyDescriptor SQL_PRE_QUERY = new PropertyDescriptor.Builder() +.name("sql-pre-query") +.displayName("SQL pre-query") --- End diff -- Can you make this `SQL Pre-Query`? Same for `SQL Post-Query`. > Add pre and post statements to ExecuteSQL and ExecuteSQLRecord > -- > > Key: NIFI-5780 > URL: https://issues.apache.org/jira/browse/NIFI-5780 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.8.0 >Reporter: Deon Huang >Assignee: Deon Huang >Priority: Minor > > Sometimes we might need to set up session relate configuration before or > after query. > For example: > Pre query can be used for session relate setting like our use case Teradata > Query Banding. > Same feature (pre query and post query) is added to SelectHiveQL in > https://issues.apache.org/jira/browse/NIFI-5044 > Planning to add this feature to ExecuteSQL and ExecuteSQLRecord processors. > If pre or post statement fail, will not produce resultset flowfile. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5780) Add pre and post statements to ExecuteSQL and ExecuteSQLRecord
[ https://issues.apache.org/jira/browse/NIFI-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678538#comment-16678538 ]

ASF GitHub Bot commented on NIFI-5780:
--
Github user patricker commented on a diff in the pull request: https://github.com/apache/nifi/pull/3156#discussion_r231603266

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestExecuteSQLRecord.java ---

@@ -350,6 +352,141 @@ public void invokeOnTriggerRecords(final Integer queryTimeout, final String quer
     assertEquals(durationTime, fetchTime + executionTime);
 }

+@Test
+public void testPreQuery() throws Exception {
+    // remove previous test database, if any
+    final File dbLocation = new File(DB_LOCATION);
+    dbLocation.delete();
+
+    // load test data to database
+    final Connection con = ((DBCPService) runner.getControllerService("dbcp")).getConnection();
+    Statement stmt = con.createStatement();
+
+    try {
+        stmt.execute("drop table TEST_NULL_INT");
+    } catch (final SQLException sqle) {
+    }
+
+    stmt.execute("create table TEST_NULL_INT (id integer not null, val1 integer, val2 integer, constraint my_pk primary key (id))");
+
+    runner.setIncomingConnection(true);
+    runner.setProperty(ExecuteSQL.SQL_PRE_QUERY, "insert into TEST_NULL_INT values(1,2,3);insert into TEST_NULL_INT values(4,5,6)");

--- End diff --

I know these tests were easy to write with insert/delete, but they don't really show why this feature is needed. The idea is that we need to configure the DBCP connection in some way, but putting two ExecuteSQL processors together might cause us to use different connections from the connection pool. Could you try changing them to set Derby session properties? Maybe something from here? https://db.apache.org/derby/docs/10.1/ref/rrefsetdbpropproc.html

This one looked good: it tells Derby to capture runtime statistics for the current connection, and then turns them back off after, so a legitimate use case.

Pre: `CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1)`
Post: `CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(0)`

> Add pre and post statements to ExecuteSQL and ExecuteSQLRecord
> --
>
> Key: NIFI-5780
> URL: https://issues.apache.org/jira/browse/NIFI-5780
> Project: Apache NiFi
> Issue Type: Improvement
> Affects Versions: 1.8.0
> Reporter: Deon Huang
> Assignee: Deon Huang
> Priority: Minor
>
> Sometimes we might need to set up session-related configuration before or after a query.
> For example: a pre query can be used for session-related settings, as in our use case of Teradata Query Banding.
> The same feature (pre query and post query) was added to SelectHiveQL in https://issues.apache.org/jira/browse/NIFI-5044
> Planning to add this feature to the ExecuteSQL and ExecuteSQLRecord processors.
> If a pre or post statement fails, the processor will not produce a resultset flowfile.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
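The pre/post properties under review hold a semicolon-delimited list of statements, run before and after the main query on the same pooled connection. The sketch below illustrates that splitting and ordering with the Derby runtime-statistics calls suggested above; the parsing shown is illustrative only (NiFi's actual implementation must also cope with semicolons inside quoted strings):

```java
import java.util.ArrayList;
import java.util.List;

public class PrePostQuerySketch {

    // Illustrative splitting of a semicolon-delimited statement list, in the
    // spirit of the sql-pre-query / sql-post-query properties.
    static List<String> splitStatements(String property) {
        List<String> statements = new ArrayList<>();
        for (String s : property.split(";")) {
            if (!s.trim().isEmpty()) {
                statements.add(s.trim());
            }
        }
        return statements;
    }

    public static void main(String[] args) {
        String pre  = "CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1)";
        String main = "SELECT * FROM TEST_NULL_INT";
        String post = "CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(0)";

        // Execution order: every pre statement, then the main query, then
        // every post statement -- all on the SAME connection. That is the
        // point of the feature: two separate ExecuteSQL processors may be
        // handed different connections by the pool, so session settings
        // applied by one would not be seen by the other.
        List<String> order = new ArrayList<>();
        order.addAll(splitStatements(pre));
        order.add(main);
        order.addAll(splitStatements(post));
        System.out.println(order.size());  // 3
    }
}
```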
[GitHub] nifi pull request #3156: NIFI-5780 Add pre and post statements to ExecuteSQL...
Github user patricker commented on a diff in the pull request: https://github.com/apache/nifi/pull/3156#discussion_r231596320 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractExecuteSQL.java --- @@ -82,6 +84,16 @@ .identifiesControllerService(DBCPService.class) .build(); +public static final PropertyDescriptor SQL_PRE_QUERY = new PropertyDescriptor.Builder() +.name("sql-pre-query") +.displayName("SQL pre-query") --- End diff -- Can you make this `SQL Pre-Query`? Same for `SQL Post-Query`. ---
[GitHub] nifi pull request #3156: NIFI-5780 Add pre and post statements to ExecuteSQL...
Github user patricker commented on a diff in the pull request: https://github.com/apache/nifi/pull/3156#discussion_r231603266

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestExecuteSQLRecord.java ---

@@ -350,6 +352,141 @@ public void invokeOnTriggerRecords(final Integer queryTimeout, final String quer
     assertEquals(durationTime, fetchTime + executionTime);
 }

+@Test
+public void testPreQuery() throws Exception {
+    // remove previous test database, if any
+    final File dbLocation = new File(DB_LOCATION);
+    dbLocation.delete();
+
+    // load test data to database
+    final Connection con = ((DBCPService) runner.getControllerService("dbcp")).getConnection();
+    Statement stmt = con.createStatement();
+
+    try {
+        stmt.execute("drop table TEST_NULL_INT");
+    } catch (final SQLException sqle) {
+    }
+
+    stmt.execute("create table TEST_NULL_INT (id integer not null, val1 integer, val2 integer, constraint my_pk primary key (id))");
+
+    runner.setIncomingConnection(true);
+    runner.setProperty(ExecuteSQL.SQL_PRE_QUERY, "insert into TEST_NULL_INT values(1,2,3);insert into TEST_NULL_INT values(4,5,6)");

--- End diff --

I know these tests were easy to write with insert/delete, but they don't really show why this feature is needed. The idea is that we need to configure the DBCP connection in some way, but putting two ExecuteSQL processors together might cause us to use different connections from the connection pool. Could you try changing them to set Derby session properties? Maybe something from here? https://db.apache.org/derby/docs/10.1/ref/rrefsetdbpropproc.html

This one looked good: it tells Derby to capture runtime statistics for the current connection, and then turns them back off after, so a legitimate use case.

Pre: `CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1)`
Post: `CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(0)`

---
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678489#comment-16678489 ] ASF GitHub Bot commented on NIFI-5790: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/3133#discussion_r231582876 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java --- @@ -164,6 +164,71 @@ public ValidationResult validate(final String subject, final String input, final .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) .build(); +public static final PropertyDescriptor MIN_IDLE = new PropertyDescriptor.Builder() +.displayName("Minimum Idle Connections") +.name("dbcp-mim-idle-conns") +.description("The minimum number of connections that can remain idle in the pool, without extra ones being " + +"created, or zero to create none.") +.defaultValue("0") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.build(); + +public static final PropertyDescriptor MAX_IDLE = new PropertyDescriptor.Builder() +.displayName("Max Idle Connections") +.name("dbcp-max-idle-conns") +.description("The maximum number of connections that can remain idle in the pool, without extra ones being " + +"released, or negative for no limit.") +.defaultValue("8") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.build(); + +public static final PropertyDescriptor MAX_CONN_LIFETIME = new PropertyDescriptor.Builder() +.displayName("Max Connection Lifetime") +.name("dbcp-max-conn-lifetime") +.description("The maximum lifetime in milliseconds of a connection. After this time is exceeded the " + +"connection will fail the next activation, passivation or validation test. 
A value of zero or less " + +"means the connection has an infinite lifetime.") +.defaultValue("-1") +.required(true) +.addValidator(CUSTOM_TIME_PERIOD_VALIDATOR) +.build(); + +public static final PropertyDescriptor EVICTION_RUN_PERIOD = new PropertyDescriptor.Builder() +.displayName("Time Between Eviction Runs") +.name("dbcp-time-between-eviction-runs") +.description("The number of milliseconds to sleep between runs of the idle connection evictor thread. When " + +"non-positive, no idle connection evictor thread will be run.") +.defaultValue("-1") +.required(true) +.addValidator(CUSTOM_TIME_PERIOD_VALIDATOR) +.build(); + +public static final PropertyDescriptor MIN_EVICTABLE_IDLE_TIME = new PropertyDescriptor.Builder() +.displayName("Minimum Evictable Idle Time") +.name("dbcp-min-evictable-idle-time") +.description("The minimum amount of time a connection may sit idle in the pool before it is eligible for eviction.") +.defaultValue("1800 secs") +.required(true) +.addValidator(CUSTOM_TIME_PERIOD_VALIDATOR) --- End diff -- The validator supports expression language, so I think these properties should support expression language at the Variable Registry level > DBCPConnectionPool configuration should expose underlying connection idle and > eviction configuration > > > Key: NIFI-5790 > URL: https://issues.apache.org/jira/browse/NIFI-5790 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Colin Dean >Priority: Major > Labels: DBCP, database > > While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool > documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html] > that NiFi appears _not_ to have controller service configuration options > associated with [Apache > Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html] > {{BasicDataSource}} parameters like {{minIdle}} and 
{{maxIdle}}, which I > think should be both set to 0 in my particular use case. > Alternatively, I think I could set {{maxConnLifetimeMillis}} to something > even in the minutes range and satisfy my use case (a connection need not be > released _immediately_ but within a reasonable period of time), but this > option is also not available. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
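The eviction-related properties under review take NiFi time-period strings such as "1800 secs", which must be converted to milliseconds before being handed to commons-dbcp setters. A self-contained sketch of that kind of conversion, covering only a small subset of the syntax (hypothetical helper, not NiFi's real FormatUtils):

```java
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TimePeriodParser {
    // Tiny subset of NiFi's "<number> <unit>" time-period syntax, enough to
    // turn "1800 secs" into the milliseconds value a DBCP setter expects.
    private static final Pattern PERIOD =
        Pattern.compile("(\\d+)\\s*(ms|millis|secs?|mins?|hrs?)");

    static long toMillis(String period) {
        Matcher m = PERIOD.matcher(period.trim().toLowerCase());
        if (!m.matches()) {
            throw new IllegalArgumentException("not a time period: " + period);
        }
        long amount = Long.parseLong(m.group(1));
        switch (m.group(2)) {
            case "ms": case "millis": return amount;
            case "sec": case "secs": return TimeUnit.SECONDS.toMillis(amount);
            case "min": case "mins": return TimeUnit.MINUTES.toMillis(amount);
            default:                 return TimeUnit.HOURS.toMillis(amount);
        }
    }

    public static void main(String[] args) {
        System.out.println(toMillis("1800 secs")); // prints 1800000
    }
}
```

In NiFi itself, enabling expression language on such a property (as the reviewer suggests) means the string is evaluated against the variable registry first, then validated and converted.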
[GitHub] nifi pull request #3133: NIFI-5790: Exposes 6 commons-dbcp options in DBCPCo...
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/3133#discussion_r231582349 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java --- @@ -164,6 +164,71 @@ public ValidationResult validate(final String subject, final String input, final .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) .build(); +public static final PropertyDescriptor MIN_IDLE = new PropertyDescriptor.Builder() +.displayName("Minimum Idle Connections") +.name("dbcp-mim-idle-conns") +.description("The minimum number of connections that can remain idle in the pool, without extra ones being " + +"created, or zero to create none.") +.defaultValue("0") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.build(); + +public static final PropertyDescriptor MAX_IDLE = new PropertyDescriptor.Builder() +.displayName("Max Idle Connections") +.name("dbcp-max-idle-conns") +.description("The maximum number of connections that can remain idle in the pool, without extra ones being " + +"released, or negative for no limit.") +.defaultValue("8") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.build(); + +public static final PropertyDescriptor MAX_CONN_LIFETIME = new PropertyDescriptor.Builder() +.displayName("Max Connection Lifetime") +.name("dbcp-max-conn-lifetime") +.description("The maximum lifetime in milliseconds of a connection. After this time is exceeded the " + +"connection will fail the next activation, passivation or validation test. 
A value of zero or less " + +"means the connection has an infinite lifetime.") +.defaultValue("-1") +.required(true) +.addValidator(CUSTOM_TIME_PERIOD_VALIDATOR) +.build(); + +public static final PropertyDescriptor EVICTION_RUN_PERIOD = new PropertyDescriptor.Builder() +.displayName("Time Between Eviction Runs") +.name("dbcp-time-between-eviction-runs") +.description("The number of milliseconds to sleep between runs of the idle connection evictor thread. When " + +"non-positive, no idle connection evictor thread will be run.") +.defaultValue("-1") --- End diff -- If the default values are available as constants from DBCP, we should probably use those. If we want to limit "zero or less" to just zero, we can just do an additional Math.max() ---
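The review above suggests sourcing defaults from the constants commons-dbcp/commons-pool already expose (e.g. commons-pool2's BaseObjectPoolConfig.DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS, which is -1L) and clamping "zero or less" to zero with an extra Math.max(). A self-contained sketch of both ideas, using a local stand-in for the library constant:

```java
public class EvictionDefaults {
    // Stand-in for commons-pool2's
    // BaseObjectPoolConfig.DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS (-1L),
    // copied locally so this sketch compiles without the dependency.
    static final long DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS = -1L;

    // "If we want to limit 'zero or less' to just zero, we can just do an
    // additional Math.max()" -- clamp any negative configured value to 0.
    static long clampToNonNegative(long configured) {
        return Math.max(0L, configured);
    }

    public static void main(String[] args) {
        System.out.println(clampToNonNegative(-5));    // prints 0
        System.out.println(clampToNonNegative(30000)); // prints 30000
    }
}
```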
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678482#comment-16678482 ] ASF GitHub Bot commented on NIFI-5790: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/3133#discussion_r231580669 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java --- @@ -164,6 +164,71 @@ public ValidationResult validate(final String subject, final String input, final .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) .build(); +public static final PropertyDescriptor MIN_IDLE = new PropertyDescriptor.Builder() +.displayName("Minimum Idle Connections") +.name("dbcp-mim-idle-conns") +.description("The minimum number of connections that can remain idle in the pool, without extra ones being " + +"created, or zero to create none.") +.defaultValue("0") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.build(); + +public static final PropertyDescriptor MAX_IDLE = new PropertyDescriptor.Builder() +.displayName("Max Idle Connections") +.name("dbcp-max-idle-conns") +.description("The maximum number of connections that can remain idle in the pool, without extra ones being " + +"released, or negative for no limit.") +.defaultValue("8") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.build(); + +public static final PropertyDescriptor MAX_CONN_LIFETIME = new PropertyDescriptor.Builder() +.displayName("Max Connection Lifetime") +.name("dbcp-max-conn-lifetime") +.description("The maximum lifetime in milliseconds of a connection. After this time is exceeded the " + +"connection will fail the next activation, passivation or validation test. 
A value of zero or less " + +"means the connection has an infinite lifetime.") +.defaultValue("-1") --- End diff -- Even if the API allows zero or less, we could just say zero means infinite and use a NONNEGATIVE_INTEGER_VALIDATOR ---
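The suggestion above is to validate user input with a NONNEGATIVE_INTEGER_VALIDATOR and document zero as "infinite", which works because commons-dbcp's setMaxConnLifetimeMillis treats any non-positive value as no limit. A hedged sketch of that mapping (helper name hypothetical):

```java
public class MaxLifetimeMapper {
    // User-facing convention proposed in the review: negatives are rejected
    // up front (what a NONNEGATIVE_INTEGER_VALIDATOR would enforce) and 0
    // means "infinite lifetime". Since DBCP already interprets <= 0 as
    // infinite, 0 can pass straight through to setMaxConnLifetimeMillis.
    static long toDbcpMaxLifetime(long userValue) {
        if (userValue < 0) {
            throw new IllegalArgumentException(
                "max connection lifetime must be >= 0 (0 = infinite)");
        }
        return userValue;
    }

    public static void main(String[] args) {
        System.out.println(toDbcpMaxLifetime(0));     // prints 0 (infinite)
        System.out.println(toDbcpMaxLifetime(60000)); // prints 60000
    }
}
```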
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678480#comment-16678480 ] ASF GitHub Bot commented on NIFI-5790: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/3133#discussion_r231580368 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java --- @@ -164,6 +161,71 @@ public ValidationResult validate(final String subject, final String input, final .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) .build(); +public static final PropertyDescriptor MIN_IDLE = new PropertyDescriptor.Builder() +.name("Minimum Idle Connections") +.description("The minimum number of connections that can remain idle in the pool, without extra ones being " + +"created, or zero to create none.") +.defaultValue("0") +.required(true) +.addValidator(StandardValidators.INTEGER_VALIDATOR) +.sensitive(false) +.build(); + +public static final PropertyDescriptor MAX_IDLE = new PropertyDescriptor.Builder() +.name("Max Idle Connections") +.description("The maximum number of connections that can remain idle in the pool, without extra ones being " + +"released, or negative for no limit.") +.defaultValue("8") --- End diff -- IMO we should keep the default value as the one used when you don't explicitly set it, I guess that's 8? 
[jira] [Commented] (NIFI-5780) Add pre and post statements to ExecuteSQL and ExecuteSQLRecord
[ https://issues.apache.org/jira/browse/NIFI-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678474#comment-16678474 ] ASF GitHub Bot commented on NIFI-5780: -- GitHub user yjhyjhyjh0 opened a pull request: https://github.com/apache/nifi/pull/3156 NIFI-5780 Add pre and post statements to ExecuteSQL and ExecuteSQLRecord Add pre, post query property to AbstractExecuteSQL. Most of implementation comes from SelectHiveQL. Add unit test to pre, post query. Finish local nifi integration test. ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [x] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [x] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/yjhyjhyjh0/nifi NIFI-5780 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3156.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3156 commit e5ec707036b04607217c276c616c145aad54df4c Author: yjhyjhyjh0 Date: 2018-11-07T16:25:50Z NIFI-5780 Add pre and post statements to ExecuteSQL and ExecuteSQLRecord > Add pre and post statements to ExecuteSQL and ExecuteSQLRecord > -- > > Key: NIFI-5780 > URL: https://issues.apache.org/jira/browse/NIFI-5780 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.8.0 >Reporter: Deon Huang >Assignee: Deon Huang >Priority: Minor > > Sometimes we might need to set up session relate configuration before or > after query. > For example: > Pre query can be used for session relate setting like our use case Teradata > Query Banding. > Same feature (pre query and post query) is added to SelectHiveQL in > https://issues.apache.org/jira/browse/NIFI-5044 > Planning to add this feature to ExecuteSQL and ExecuteSQLRecord processors. > If pre or post statement fail, will not produce resultset flowfile. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5799) Unable to extend docker image with custom configuration
Noel Winstanley created NIFI-5799: - Summary: Unable to extend docker image with custom configuration Key: NIFI-5799 URL: https://issues.apache.org/jira/browse/NIFI-5799 Project: Apache NiFi Issue Type: Bug Components: Docker Affects Versions: 1.8.0 Reporter: Noel Winstanley I build a custom docker image for my product, based on the stock docker nifi image. My Dockerfile installs a flow.xml.gz, and then uses sed to adjust some properties in conf/bootstrap.conf and conf/nifi.properties. This worked well with previous versions of Nifi https://issues.apache.org/jira/browse/NIFI-5438 (introduced in Nifi 1.8.0) added a VOLUME command to the docker file that covers ${NIFI_HOME}/conf . I am still able to copy a new flow.xml.gz into the conf directory, BUT am unable to edit files already in the conf dir (using sed or otherwise). This surprising behaviour is backed up by the dockerfile docs https://docs.docker.com/engine/reference/builder/#volume - " *Changing the volume from within the Dockerfile*: If any build steps change the data within the volume after it has been declared, those changes will be discarded. " The properties I wish to override (e.g. nifi.bored.yield.duration) aren't supported as environment variables, so I can't pass them in from a compose file or similar. Having a VOLUME command over a configuration directory really hampers the extensibility of the docker image - could it be removed? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5784) With the addition of the Toolkit Guide, edit other docs that contain duplicate content
[ https://issues.apache.org/jira/browse/NIFI-5784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678459#comment-16678459 ] ASF GitHub Bot commented on NIFI-5784: -- GitHub user andrewmlim opened a pull request: https://github.com/apache/nifi/pull/3155 NIFI-5784 Edit Admin Guide to remove duplicate content that is in new… … Toolkit Guide Edited Toolkit Guide as needed for links. You can merge this pull request into a Git repository by running: $ git pull https://github.com/andrewmlim/nifi NIFI-5784 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3155.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3155 commit 77ef84a01eb198482fb01bd4c6a4937bddd10136 Author: Andrew Lim Date: 2018-11-07T16:20:59Z NIFI-5784 Edit Admin Guide to remove duplicate content that is in new Toolkit Guide > With the addition of the Toolkit Guide, edit other docs that contain > duplicate content > -- > > Key: NIFI-5784 > URL: https://issues.apache.org/jira/browse/NIFI-5784 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation & Website >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Minor > > The Admin Guide has duplicate content for the following: > * Configuration encryption - > [https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#encrypt-config_tool] > * File manager - > [https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#file-manager] > * Node manager - > [https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#node-manager] > * TLS Toolkit - > [https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#tls_generation_toolkit] > * ZooKeeper migrator - > [https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#zookeeper_migrator] > Will remove and add links to Toolkit Guide as needed. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5790) DBCPConnectionPool configuration should expose underlying connection idle and eviction configuration
[ https://issues.apache.org/jira/browse/NIFI-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678395#comment-16678395 ] ASF GitHub Bot commented on NIFI-5790: -- Github user patricker commented on the issue: https://github.com/apache/nifi/pull/3133 @colindean I don't have a good answer yet; I'm hoping to get some input from other developers. But I was thinking about unit tests, and what you could do to help make this code change unit testable. What if you exposed the number of active and idle connections in the connection pool as properties on the DBCPConnectionPool? These are available by calling `getNumActive()` and `getNumIdle()`. Or you could call `listAllObjects()` and get back the full pool on the `dataSource` object. With these numbers it would be possible to test at least the min/max connection settings, and maybe more. > DBCPConnectionPool configuration should expose underlying connection idle and > eviction configuration > > > Key: NIFI-5790 > URL: https://issues.apache.org/jira/browse/NIFI-5790 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.8.0 >Reporter: Colin Dean >Priority: Major > Labels: DBCP, database > > While investigating a fix for NIFI-5789, I noticed in the [DBCPConnectionPool > documentation|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.8.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html] > that NiFi appears _not_ to have controller service configuration options > associated with [Apache > Commons-DBCP|https://commons.apache.org/proper/commons-dbcp/configuration.html] > {{BasicDataSource}} parameters like {{minIdle}} and {{maxIdle}}, which I > think should be both set to 0 in my particular use case. 
> Alternatively, I think I could set {{maxConnLifetimeMillis}} to something > even in the minutes range and satisfy my use case (a connection need not be > released _immediately_ but within a reasonable period of time), but this > option is also not available. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
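The testability suggestion above — expose the pool's active/idle counts so min/max settings become assertable — can be sketched with a toy pool. This is an illustration only, not the NiFi code: `CountingPool` is a hypothetical class mirroring what commons-dbcp's `getNumActive()`/`getNumIdle()` would report on the real `BasicDataSource`.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy pool (not DBCPConnectionPool): tracks active and idle counts so that a
// unit test can assert eviction behaviour such as a maxIdle limit directly.
class CountingPool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private int active = 0;
    private final int maxIdle;

    CountingPool(int maxIdle) {
        this.maxIdle = maxIdle;
    }

    // Hand out an idle object if one exists, otherwise create a new one.
    T borrow(java.util.function.Supplier<T> factory) {
        active++;
        return idle.isEmpty() ? factory.get() : idle.pop();
    }

    // Return an object; keep it only while the idle set is under maxIdle.
    void release(T obj) {
        active--;
        if (idle.size() < maxIdle) {
            idle.push(obj);   // keep for reuse
        }                     // else: evict (drop) the returned object
    }

    int getNumActive() { return active; }
    int getNumIdle()   { return idle.size(); }
}
```

With counts exposed like this, a test can borrow two objects, release both, and assert that only `maxIdle` of them were retained — exactly the kind of check the comment proposes for the min/max connection settings.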
[GitHub] nifi pull request #3100: NIFI-5718: Implemented LineDemarcator and removed N...
Github user patricker commented on a diff in the pull request: https://github.com/apache/nifi/pull/3100#discussion_r231534519 --- Diff: nifi-commons/nifi-utils/src/main/java/org/apache/nifi/stream/io/RepeatingInputStream.java --- @@ -0,0 +1,103 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.stream.io; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.util.Objects; + +public class RepeatingInputStream extends InputStream { --- End diff -- @markap14 Thoughts? ---
[jira] [Commented] (NIFI-5718) Performance degraded in ReplaceText processor
[ https://issues.apache.org/jira/browse/NIFI-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678325#comment-16678325 ] ASF GitHub Bot commented on NIFI-5718: -- Github user patricker commented on a diff in the pull request: https://github.com/apache/nifi/pull/3100#discussion_r231534519 --- Diff: nifi-commons/nifi-utils/src/main/java/org/apache/nifi/stream/io/RepeatingInputStream.java --- @@ -0,0 +1,103 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.stream.io; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.util.Objects; + +public class RepeatingInputStream extends InputStream { --- End diff -- @markap14 Thoughts? > Performance degraded in ReplaceText processor > - > > Key: NIFI-5718 > URL: https://issues.apache.org/jira/browse/NIFI-5718 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.8.0 >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Attachments: Screen Shot 2018-10-17 at 10.55.53 AM.png > > > NIFI-5711 addresses some licensing concerns in the NLKBufferedReader class. > In doing so, however, it results in lower performance. 
The ReplaceText > processor is affected if the Evaluation Mode is set to Line-by-Line, and the > RouteText processor will also be affected. We should be able to match the > performance of the previous version. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5798) FlattenJson improperly escapes special characters
[ https://issues.apache.org/jira/browse/NIFI-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678264#comment-16678264 ] ASF GitHub Bot commented on NIFI-5798: -- GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/3138 NIFI-5798:Fixed bug in FlattenJson that was escaping text as Java ins… …tead of escaping as JSON Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-5798 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3138.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3138 commit e3707f2226ce8a763bb0417e53918d3b4c1b7252 Author: Mark Payne Date: 2018-11-07T14:08:54Z NIFI-5798:Fixed bug in FlattenJson that was escaping text as Java instead of escaping as JSON > FlattenJson improperly escapes special characters > - > > Key: NIFI-5798 > URL: https://issues.apache.org/jira/browse/NIFI-5798 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.9.0 > > > FlattenJSON uses a "String Escape Policy" of ESCAPE_JAVA instead of > ESCAPE_JSON. This results in valid JSON characters getting escaped as hex > characters. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5798) FlattenJson improperly escapes special characters
[ https://issues.apache.org/jira/browse/NIFI-5798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-5798: - Fix Version/s: 1.9.0 Status: Patch Available (was: Open) > FlattenJson improperly escapes special characters > - > > Key: NIFI-5798 > URL: https://issues.apache.org/jira/browse/NIFI-5798 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.9.0 > > > FlattenJSON uses a "String Escape Policy" of ESCAPE_JAVA instead of > ESCAPE_JSON. This results in valid JSON characters getting escaped as hex > characters. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5798) FlattenJson improperly escapes special characters
Mark Payne created NIFI-5798: Summary: FlattenJson improperly escapes special characters Key: NIFI-5798 URL: https://issues.apache.org/jira/browse/NIFI-5798 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Mark Payne Assignee: Mark Payne FlattenJSON uses a "String Escape Policy" of ESCAPE_JAVA instead of ESCAPE_JSON. This results in valid JSON characters getting escaped as hex characters. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
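The escaping bug described above comes down to the difference between Java-style and JSON-style string escaping. The sketch below uses two toy escapers (hypothetical `EscapeDemo` class, not the library code used by FlattenJson) to show why a Java-style policy corrupts JSON: Java escaping hex-escapes every non-ASCII character, while JSON only requires escaping quotes, backslashes, and control characters.

```java
// Toy escapers illustrating ESCAPE_JAVA vs ESCAPE_JSON behaviour.
class EscapeDemo {
    // Escapes required by BOTH policies.
    static String escapeCommon(char c) {
        switch (c) {
            case '"':  return "\\\"";
            case '\\': return "\\\\";
            case '\n': return "\\n";
            default:   return null;
        }
    }

    // Java-style: additionally hex-escapes every character outside ASCII.
    static String escapeJavaStyle(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            String esc = escapeCommon(c);
            if (esc != null) {
                out.append(esc);
            } else if (c < 0x20 || c > 0x7f) {
                out.append(String.format("\\u%04X", (int) c));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    // JSON-style: only control characters need hex escapes; non-ASCII
    // characters are valid JSON and pass through untouched.
    static String escapeJsonStyle(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            String esc = escapeCommon(c);
            if (esc != null) {
                out.append(esc);
            } else if (c < 0x20) {
                out.append(String.format("\\u%04X", (int) c));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}
```

For input "héllo", the Java-style policy emits "h\u00E9llo" while the JSON-style policy leaves it as "héllo" — the hex-escaping of valid JSON characters is precisely the symptom reported in NIFI-5798.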
[jira] [Commented] (NIFI-5791) Add Apache Daffodil parse/unparse processor
[ https://issues.apache.org/jira/browse/NIFI-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678164#comment-16678164 ] ASF GitHub Bot commented on NIFI-5791: -- Github user stevedlawrence commented on the issue: https://github.com/apache/nifi/pull/3130 Ah yes, you are correct. > Add Apache Daffodil parse/unparse processor > --- > > Key: NIFI-5791 > URL: https://issues.apache.org/jira/browse/NIFI-5791 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Steve Lawrence >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5791) Add Apache Daffodil parse/unparse processor
[ https://issues.apache.org/jira/browse/NIFI-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678162#comment-16678162 ] ASF GitHub Bot commented on NIFI-5791: -- Github user ottobackwards commented on the issue: https://github.com/apache/nifi/pull/3130 What I mean is that from a high level, it is a transformational capability, and may be used as an alternative. > Add Apache Daffodil parse/unparse processor > --- > > Key: NIFI-5791 > URL: https://issues.apache.org/jira/browse/NIFI-5791 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Steve Lawrence >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5791) Add Apache Daffodil parse/unparse processor
[ https://issues.apache.org/jira/browse/NIFI-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678146#comment-16678146 ] ASF GitHub Bot commented on NIFI-5791: -- Github user stevedlawrence commented on the issue: https://github.com/apache/nifi/pull/3130 From my understanding of Jolt, this capability is pretty different. Jolt reads JSON data, performs various transformations as described by a JOLT specification, and then writes out back to JSON. Daffodil/DFDL (Data Format Description Language) on the other hand, reads a wide array of data formats (text, binary, scientific, military, financial, etc.) and parses the data format as described by a DFDL schema (see @DFDLSchemas for some publicly available schemas), and outputs the data to an infoset, which can be projected into XML or JSON for consumption. It is also capable of the reverse, by reading in the XML/JSON infoset and "unparsing" it back to the original data format. So while JOLT is a transformation from JSON to JSON, Daffodil/DFDL could be considered a transformation from arbitrary data to XML or JSON, as described by a DFDL schema. > Add Apache Daffodil parse/unparse processor > --- > > Key: NIFI-5791 > URL: https://issues.apache.org/jira/browse/NIFI-5791 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Steve Lawrence >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5775) DataTypeUtils "toString" incorrectly treats value as a "byte" when passing an array leading to ClassCastException
[ https://issues.apache.org/jira/browse/NIFI-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678002#comment-16678002 ] Sivaprasanna Sethuraman commented on NIFI-5775: --- When do you see this error? Any example flow would help. > DataTypeUtils "toString" incorrectly treats value as a "byte" when passing an > array leading to ClassCastException > - > > Key: NIFI-5775 > URL: https://issues.apache.org/jira/browse/NIFI-5775 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.8.0 >Reporter: Joseph Percivall >Priority: Major > > To reproduce, change this line[1] to either put "String" as the first choice > of record type or just set the key to use string. > The resulting error: > {noformat} > java.lang.ClassCastException: java.lang.String cannot be cast to > java.lang.Byte > at > org.apache.nifi.serialization.record.util.DataTypeUtils.toString(DataTypeUtils.java:530) > at > org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:147) > at > org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:115) > at > org.apache.nifi.json.WriteJsonResult.writeValue(WriteJsonResult.java:284) > at > org.apache.nifi.json.WriteJsonResult.writeRecord(WriteJsonResult.java:187) > at > org.apache.nifi.json.WriteJsonResult.writeRecord(WriteJsonResult.java:136) > at > org.apache.nifi.json.TestWriteJsonResult.testChoiceArray(TestWriteJsonResult.java:494) > {noformat} > [1] > https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/test/java/org/apache/nifi/json/TestWriteJsonResult.java#L479 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
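The failure mode in the stack trace above — a `String` blindly cast to `Byte` when a CHOICE field resolves to a string — can be reduced to a small illustration. `ToStringDemo` below is a hypothetical class, not the NiFi `DataTypeUtils` source; it contrasts an unchecked cast with type dispatch via `instanceof`, which is the general shape of a fix.

```java
// Illustration of the ClassCastException pattern in NIFI-5775.
class ToStringDemo {
    // Assumes the value is a Byte; throws ClassCastException on a String.
    static String unsafeToString(Object value) {
        return Byte.toString((Byte) value);
    }

    // Checks the runtime type before casting, so a CHOICE field that
    // resolved to String is handled instead of crashing.
    static String safeToString(Object value) {
        if (value instanceof String) {
            return (String) value;
        }
        if (value instanceof Byte) {
            return Byte.toString((Byte) value);
        }
        return String.valueOf(value);   // fallback for other types
    }
}
```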
[jira] [Commented] (NIFI-5752) Load balancing fails with wildcard certs
[ https://issues.apache.org/jira/browse/NIFI-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677884#comment-16677884 ] ASF GitHub Bot commented on NIFI-5752: -- Github user kotarot commented on the issue: https://github.com/apache/nifi/pull/3110 @ijokarumawak Thanks for reviewing and merging my PR! > Load balancing fails with wildcard certs > > > Key: NIFI-5752 > URL: https://issues.apache.org/jira/browse/NIFI-5752 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.8.0 >Reporter: Kotaro Terada >Assignee: Kotaro Terada >Priority: Major > Fix For: 1.9.0 > > > Load balancing fails when we construct a secure cluster with wildcard certs. > For example, assume that we have a valid wildcard cert for {{*.example.com}} > and a cluster consists of {{nf1.example.com}}, {{nf2.example.com}}, and > {{nf3.example.com}} . We cannot transfer a FlowFile between nodes for load > balancing because of the following authorization error: > {noformat} > 2018-10-25 19:05:13,520 WARN [Load Balance Server Thread-2] > o.a.n.c.q.c.s.ClusterLoadBalanceAuthorizer Authorization failed for Client > ID's [*.example.com] to Load Balance data because none of the ID's are known > Cluster Node Identifiers > 2018-10-25 19:05:13,521 ERROR [Load Balance Server Thread-2] > o.a.n.c.q.c.s.ConnectionLoadBalanceServer Failed to communicate with Peer > /xxx.xxx.xxx.xxx:x > org.apache.nifi.controller.queue.clustered.server.NotAuthorizedException: > Client ID's [*.example.com] are not authorized to Load Balance data > at > org.apache.nifi.controller.queue.clustered.server.ClusterLoadBalanceAuthorizer.authorize(ClusterLoadBalanceAuthorizer.java:65) > at > org.apache.nifi.controller.queue.clustered.server.StandardLoadBalanceProtocol.receiveFlowFiles(StandardLoadBalanceProtocol.java:142) > at > org.apache.nifi.controller.queue.clustered.server.ConnectionLoadBalanceServer$CommunicateAction.run(ConnectionLoadBalanceServer.java:176) > at > 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} > This problem occurs because in {{authorize}} method in > {{ClusterLoadBalanceAuthorizer}} class, authorization is tried by just > matching strings. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5752) Load balancing fails with wildcard certs
[ https://issues.apache.org/jira/browse/NIFI-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-5752: Resolution: Fixed Fix Version/s: 1.9.0 Status: Resolved (was: Patch Available) > Load balancing fails with wildcard certs > > > Key: NIFI-5752 > URL: https://issues.apache.org/jira/browse/NIFI-5752 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.8.0 >Reporter: Kotaro Terada >Assignee: Kotaro Terada >Priority: Major > Fix For: 1.9.0 > > > Load balancing fails when we construct a secure cluster with wildcard certs. > For example, assume that we have a valid wildcard cert for {{*.example.com}} > and a cluster consists of {{nf1.example.com}}, {{nf2.example.com}}, and > {{nf3.example.com}} . We cannot transfer a FlowFile between nodes for load > balancing because of the following authorization error: > {noformat} > 2018-10-25 19:05:13,520 WARN [Load Balance Server Thread-2] > o.a.n.c.q.c.s.ClusterLoadBalanceAuthorizer Authorization failed for Client > ID's [*.example.com] to Load Balance data because none of the ID's are known > Cluster Node Identifiers > 2018-10-25 19:05:13,521 ERROR [Load Balance Server Thread-2] > o.a.n.c.q.c.s.ConnectionLoadBalanceServer Failed to communicate with Peer > /xxx.xxx.xxx.xxx:x > org.apache.nifi.controller.queue.clustered.server.NotAuthorizedException: > Client ID's [*.example.com] are not authorized to Load Balance data > at > org.apache.nifi.controller.queue.clustered.server.ClusterLoadBalanceAuthorizer.authorize(ClusterLoadBalanceAuthorizer.java:65) > at > org.apache.nifi.controller.queue.clustered.server.StandardLoadBalanceProtocol.receiveFlowFiles(StandardLoadBalanceProtocol.java:142) > at > org.apache.nifi.controller.queue.clustered.server.ConnectionLoadBalanceServer$CommunicateAction.run(ConnectionLoadBalanceServer.java:176) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} > This problem occurs because in {{authorize}} method in > {{ClusterLoadBalanceAuthorizer}} class, authorization is tried by just > matching strings. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5752) Load balancing fails with wildcard certs
[ https://issues.apache.org/jira/browse/NIFI-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677817#comment-16677817 ] ASF GitHub Bot commented on NIFI-5752: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/3110 > Load balancing fails with wildcard certs > > > Key: NIFI-5752 > URL: https://issues.apache.org/jira/browse/NIFI-5752 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.8.0 >Reporter: Kotaro Terada >Assignee: Kotaro Terada >Priority: Major > Fix For: 1.9.0 > > > Load balancing fails when we construct a secure cluster with wildcard certs. > For example, assume that we have a valid wildcard cert for {{*.example.com}} > and a cluster consists of {{nf1.example.com}}, {{nf2.example.com}}, and > {{nf3.example.com}} . We cannot transfer a FlowFile between nodes for load > balancing because of the following authorization error: > {noformat} > 2018-10-25 19:05:13,520 WARN [Load Balance Server Thread-2] > o.a.n.c.q.c.s.ClusterLoadBalanceAuthorizer Authorization failed for Client > ID's [*.example.com] to Load Balance data because none of the ID's are known > Cluster Node Identifiers > 2018-10-25 19:05:13,521 ERROR [Load Balance Server Thread-2] > o.a.n.c.q.c.s.ConnectionLoadBalanceServer Failed to communicate with Peer > /xxx.xxx.xxx.xxx:x > org.apache.nifi.controller.queue.clustered.server.NotAuthorizedException: > Client ID's [*.example.com] are not authorized to Load Balance data > at > org.apache.nifi.controller.queue.clustered.server.ClusterLoadBalanceAuthorizer.authorize(ClusterLoadBalanceAuthorizer.java:65) > at > org.apache.nifi.controller.queue.clustered.server.StandardLoadBalanceProtocol.receiveFlowFiles(StandardLoadBalanceProtocol.java:142) > at > org.apache.nifi.controller.queue.clustered.server.ConnectionLoadBalanceServer$CommunicateAction.run(ConnectionLoadBalanceServer.java:176) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} > This problem occurs because in {{authorize}} method in > {{ClusterLoadBalanceAuthorizer}} class, authorization is tried by just > matching strings. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5752) Load balancing fails with wildcard certs
[ https://issues.apache.org/jira/browse/NIFI-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677815#comment-16677815 ] ASF subversion and git services commented on NIFI-5752: --- Commit 13232c74136e8452b3cbd708e535af7a1fc0d1cb in nifi's branch refs/heads/master from [~kotarot] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=13232c7 ] NIFI-5752: Load balancing fails with wildcard certs NIFI-5752: Remove an unnecessary String.format NIFI-5752: Remove an unnecessary block This closes #3110. Signed-off-by: Koji Kawamura > Load balancing fails with wildcard certs > > > Key: NIFI-5752 > URL: https://issues.apache.org/jira/browse/NIFI-5752 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.8.0 >Reporter: Kotaro Terada >Assignee: Kotaro Terada >Priority: Major > > Load balancing fails when we construct a secure cluster with wildcard certs. > For example, assume that we have a valid wildcard cert for {{*.example.com}} > and a cluster consists of {{nf1.example.com}}, {{nf2.example.com}}, and > {{nf3.example.com}} . 
We cannot transfer a FlowFile between nodes for load > balancing because of the following authorization error: > {noformat} > 2018-10-25 19:05:13,520 WARN [Load Balance Server Thread-2] > o.a.n.c.q.c.s.ClusterLoadBalanceAuthorizer Authorization failed for Client > ID's [*.example.com] to Load Balance data because none of the ID's are known > Cluster Node Identifiers > 2018-10-25 19:05:13,521 ERROR [Load Balance Server Thread-2] > o.a.n.c.q.c.s.ConnectionLoadBalanceServer Failed to communicate with Peer > /xxx.xxx.xxx.xxx:x > org.apache.nifi.controller.queue.clustered.server.NotAuthorizedException: > Client ID's [*.example.com] are not authorized to Load Balance data > at > org.apache.nifi.controller.queue.clustered.server.ClusterLoadBalanceAuthorizer.authorize(ClusterLoadBalanceAuthorizer.java:65) > at > org.apache.nifi.controller.queue.clustered.server.StandardLoadBalanceProtocol.receiveFlowFiles(StandardLoadBalanceProtocol.java:142) > at > org.apache.nifi.controller.queue.clustered.server.ConnectionLoadBalanceServer$CommunicateAction.run(ConnectionLoadBalanceServer.java:176) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} > This problem occurs because in {{authorize}} method in > {{ClusterLoadBalanceAuthorizer}} class, authorization is tried by just > matching strings. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5752) Load balancing fails with wildcard certs
[ https://issues.apache.org/jira/browse/NIFI-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677810#comment-16677810 ]

ASF GitHub Bot commented on NIFI-5752:
--------------------------------------

Github user ijokarumawak commented on the issue:

    https://github.com/apache/nifi/pull/3110

    It looks good. +1. Merging. Thanks, @kotarot!

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (MINIFICPP-648) add processor and add processor with linkage nomenclature is confusing
[ https://issues.apache.org/jira/browse/MINIFICPP-648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677809#comment-16677809 ]

ASF GitHub Bot commented on MINIFICPP-648:
------------------------------------------

Github user arpadboda commented on the issue:

    https://github.com/apache/nifi-minifi-cpp/pull/432

    > > > @arpadboda is this good? I'm good with this otherwise.

    Now it is.

> add processor and add processor with linkage nomenclature is confusing
> ----------------------------------------------------------------------
>
>                 Key: MINIFICPP-648
>                 URL: https://issues.apache.org/jira/browse/MINIFICPP-648
>             Project: NiFi MiNiFi C++
>          Issue Type: Improvement
>            Reporter: Mr TheSegfault
>            Assignee: Arpad Boda
>            Priority: Blocker
>              Labels: CAPI
>             Fix For: 0.6.0
>
> add_processor should be changed to always add a processor with linkage, absent compelling documentation as to why the current distinction exists. As a result, we will need to add a create_processor function that creates a processor without adding it to the flow (for certain use cases, such as invokehttp or listenhttp, where a flow isn't needed). This can be moved to 0.7.0 if we tag before recent commits.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
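The create/add split proposed in that issue is a general API pattern: one call constructs a processor without attaching it to any flow, the other constructs and links it in one step. The sketch below is a generic Java illustration of that pattern only; it is not the MiNiFi C API, and all names and types here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class FlowApiSketch {

    static class Processor {
        final String name;
        Processor(String name) { this.name = name; }
    }

    static class Flow {
        final List<Processor> processors = new ArrayList<>();

        // Create only: the processor belongs to no flow, for standalone use
        // cases analogous to invokehttp/listenhttp in the issue.
        static Processor createProcessor(String name) {
            return new Processor(name);
        }

        // Create and link into this flow in a single step.
        Processor addProcessor(String name) {
            Processor p = createProcessor(name);
            processors.add(p);
            return p;
        }
    }

    public static void main(String[] args) {
        Flow flow = new Flow();
        Processor standalone = Flow.createProcessor("InvokeHTTP"); // not in any flow
        flow.addProcessor("GetFile");                              // linked into the flow
        System.out.println(flow.processors.size());
    }
}
```

Keeping the two operations distinct avoids the naming confusion the issue describes: "add" always implies linkage, and the no-linkage case gets its own, clearly named entry point.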
[GitHub] nifi-minifi-cpp issue #432: MINIFICPP-648 - add processor and add processor ...
Github user arpadboda commented on the issue:

    https://github.com/apache/nifi-minifi-cpp/pull/432

    > > > @arpadboda is this good? I'm good with this otherwise.

    Now it is.

---
[GitHub] nifi issue #3110: NIFI-5752: Load balancing fails with wildcard certs
Github user ijokarumawak commented on the issue:

    https://github.com/apache/nifi/pull/3110

    It looks good. +1. Merging. Thanks, @kotarot!

---