[jira] [Commented] (NIFI-8254) Timestamp in Avro schema not working in PutDatabaseRecord on Oracle
[ https://issues.apache.org/jira/browse/NIFI-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17616609#comment-17616609 ] Matt Burgess commented on NIFI-8254:

Changed the title of this case in order to split up the two issues. I believe I have a fix for the enum side, which will be done in NIFI-10635.

> Timestamp in Avro schema not working in PutDatabaseRecord on Oracle
> -------------------------------------------------------------------
>
> Key: NIFI-8254
> URL: https://issues.apache.org/jira/browse/NIFI-8254
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.13.0
> Reporter: Jonathan Keller
> Priority: Major
>
> There appears to be a regression in the PutDatabaseRecord processor with 1.13.0 (works in 1.12.1).
> I have an Avro schema for a record which contained enum fields and timestamps. After upgrading, a processor which had been working was failing with errors on both of these field types. One stated it could not find the record type; the other failed with the exception and stack trace below. The fields in question were defined as below.
> The files which were erroring out were the exact same files, and no changes had been made to the schema or the CSV reader controller service between the tests. I was also able to successfully move the flow.xml.gz file back to the older version of NiFi, and the PutDatabaseRecord processor was able to insert the database records again.
>
> {noformat}
> {
>   "name": "PER_ORG",
>   "type": {
>     "type": "enum",
>     "name": "PerOrgFlag",
>     "symbols": [ "EMP", "CWR" ]
>   }
> },{
>   "name": "UPD_BT_DTM",
>   "type": {
>     "type": "long",
>     "logicalType": "timestamp-millis"
>   }
> },
> {noformat}
>
> {noformat}
> 2021-02-23 11:44:26,496 ERROR [Timer-Driven Process Thread-3] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=93b478c8-0177-1000-1ca3-59b4189b1ecb] Failed to put Records to database for StandardFlowFileRecord[uuid=fd4e3be1-8f7b-4a7a-bf45-2e98627f732a,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1614107687354-5, container=default, section=5], offset=682549, length=6820],offset=0,name=DDODS_DVCMP_PS_JOB_2021-02-09_021816_2648.dat,size=6820]. Routing to failure.: java.sql.BatchUpdateException: ORA-00932: inconsistent datatypes: expected NUMBER got TIMESTAMP
> java.sql.BatchUpdateException: ORA-00932: inconsistent datatypes: expected NUMBER got TIMESTAMP
> 	at oracle.jdbc.driver.OraclePreparedStatement.executeLargeBatch(OraclePreparedStatement.java:9711)
> 	at oracle.jdbc.driver.T4CPreparedStatement.executeLargeBatch(T4CPreparedStatement.java:1447)
> 	at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:9487)
> 	at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:237)
> 	at org.apache.commons.dbcp2.DelegatingStatement.executeBatch(DelegatingStatement.java:242)
> 	at org.apache.commons.dbcp2.DelegatingStatement.executeBatch(DelegatingStatement.java:242)
> 	at sun.reflect.GeneratedMethodAccessor583.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)
> 	at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.access$100(StandardControllerServiceInvocationHandler.java:38)
> 	at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler$ProxiedReturnObjectInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:240)
> 	at com.sun.proxy.$Proxy360.executeBatch(Unknown Source)
> 	at org.apache.nifi.processors.standard.PutDatabaseRecord.executeDML(PutDatabaseRecord.java:751)
> 	at org.apache.nifi.processors.standard.PutDatabaseRecord.putToDatabase(PutDatabaseRecord.java:838)
> 	at org.apache.nifi.processors.standard.PutDatabaseRecord.onTrigger(PutDatabaseRecord.java:487)
> 	at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> 	at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)
> 	at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
> {noformat}
[jira] [Created] (NIFI-10635) Enum in Avro schema not working in PutDatabaseRecord
Matt Burgess created NIFI-10635:

Summary: Enum in Avro schema not working in PutDatabaseRecord
Key: NIFI-10635
URL: https://issues.apache.org/jira/browse/NIFI-10635
Project: Apache NiFi
Issue Type: Bug
Components: Core Framework
Affects Versions: 1.13.0
Reporter: Jonathan Keller

There appears to be a regression in the PutDatabaseRecord processor with 1.13.0 (works in 1.12.1). I have an Avro schema for a record which contained enum fields and timestamps. After upgrading, a processor which had been working was failing with errors on both of these field types. One stated it could not find the record type; the other failed with the exception below. The fields in question were defined as below.

The files which were erroring out were the exact same files, and no changes had been made to the schema or the CSV reader controller service between the tests. I was also able to successfully move the flow.xml.gz file back to the older version of NiFi, and the PutDatabaseRecord processor was able to insert the database records again.

{noformat}
{
  "name": "PER_ORG",
  "type": {
    "type": "enum",
    "name": "PerOrgFlag",
    "symbols": [ "EMP", "CWR" ]
  }
},{
  "name": "UPD_BT_DTM",
  "type": {
    "type": "long",
    "logicalType": "timestamp-millis"
  }
},
{noformat}

{noformat}
2021-02-23 11:44:26,496 ERROR [Timer-Driven Process Thread-3] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=93b478c8-0177-1000-1ca3-59b4189b1ecb] Failed to put Records to database for StandardFlowFileRecord[uuid=fd4e3be1-8f7b-4a7a-bf45-2e98627f732a,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1614107687354-5, container=default, section=5], offset=682549, length=6820],offset=0,name=DDODS_DVCMP_PS_JOB_2021-02-09_021816_2648.dat,size=6820]. Routing to failure.: java.sql.BatchUpdateException: ORA-00932: inconsistent datatypes: expected NUMBER got TIMESTAMP
java.sql.BatchUpdateException: ORA-00932: inconsistent datatypes: expected NUMBER got TIMESTAMP
{noformat}
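The ORA-00932 "expected NUMBER got TIMESTAMP" failure suggests the long value of the `timestamp-millis` logical type is being bound as a plain number rather than as a timestamp. A minimal sketch of the expected conversion, using a hypothetical helper rather than actual NiFi code:

```java
import java.sql.Timestamp;

// Hypothetical sketch: an Avro field declared as {"type": "long",
// "logicalType": "timestamp-millis"} carries epoch milliseconds and
// should be bound to the database as a TIMESTAMP, not a NUMBER.
public class TimestampMillisSketch {

    // Convert epoch milliseconds to a JDBC Timestamp.
    static Timestamp fromAvroMillis(long epochMillis) {
        return new Timestamp(epochMillis);
    }

    public static void main(String[] args) {
        Timestamp ts = fromAvroMillis(1614105866496L);
        // The Timestamp holds the same instant, without loss.
        System.out.println(ts.getTime() == 1614105866496L); // prints true
    }
}
```

Binding such a `Timestamp` via `PreparedStatement.setTimestamp` (rather than `setLong`/`setObject` with the raw long) is what lets Oracle accept the value for a TIMESTAMP column.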
[jira] [Updated] (NIFI-8254) Timestamp in Avro schema not working in PutDatabaseRecord on Oracle
[ https://issues.apache.org/jira/browse/NIFI-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-8254:

Summary: Timestamp in Avro schema not working in PutDatabaseRecord on Oracle (was: Timestamp and enums in Avro schema not working in PutDatabaseRecord on Oracle)

> Timestamp in Avro schema not working in PutDatabaseRecord on Oracle
> -------------------------------------------------------------------
>
> Key: NIFI-8254
> URL: https://issues.apache.org/jira/browse/NIFI-8254
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.13.0
> Reporter: Jonathan Keller
> Priority: Major
[GitHub] [nifi] adenes opened a new pull request, #6519: NIFI-10594 Labels ignore empty lines
adenes opened a new pull request, #6519:
URL: https://github.com/apache/nifi/pull/6519

# Summary

[NIFI-10594](https://issues.apache.org/jira/browse/NIFI-10594)

# Tracking

Please complete the following tracking steps prior to pull request creation.

### Issue Tracking

- [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking

- [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting

- [x] Pull Request based on current revision of the `main` branch
- [x] Pull Request refers to a feature branch with one commit containing changes

# Verification

Manually tested; see screenshot:
https://user-images.githubusercontent.com/1275646/195429863-99e16044-1e6a-4117-94d5-e227aaafb929.png

### Build

- [ ] Build completed using `mvn clean install -P contrib-check`
  - [x] JDK 8
  - [x] JDK 11
  - [ ] JDK 17

### Licensing

- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation

- [ ] Documentation formatting appears as expected in rendered files

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-10633) ParameterProvider and/or ControllerService only support HashiCorp Vault KV v1
Chris Scheib created NIFI-10633:

Summary: ParameterProvider and/or ControllerService only support HashiCorp Vault KV v1
Key: NIFI-10633
URL: https://issues.apache.org/jira/browse/NIFI-10633
Project: Apache NiFi
Issue Type: Improvement
Components: Documentation Website
Affects Versions: 1.18.0
Reporter: Chris Scheib

The documentation for the HashiCorp Vault ParameterProvider (HashiCorpVaultParameterProvider) and Controller Service (StandardHashiCorpVaultClientService) doesn't note that they currently only support secret stores of type KV v1; integration fails with KV v2. KV v2 supports secrets versioning and has a slightly different API. Please update the documentation to reflect support.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (NIFI-10633) ParameterProvider and/or ControllerService only support HashiCorp Vault KV v1
[ https://issues.apache.org/jira/browse/NIFI-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Scheib updated NIFI-10633:

Description: The documentation for the HashiCorp Vault ParameterProvider (HashiCorpVaultParameterProvider) and Controller Service (StandardHashiCorpVaultClientService) doesn't note that they currently only support secret stores of type KV v1; integration fails with KV v2. KV v2 supports secrets versioning and has a slightly different API. Please update the documentation to reflect supported secret stores.

was: The documentation for the HashiCorp Vault ParameterProvider (HashiCorpVaultParameterProvider) and Controller Service (StandardHashiCorpVaultClientService) doesn't note that they currently only support secret stores of type KV v1; integration fails with KV v2. KV v2 supports secrets versioning and has a slightly different API. Please update the documentation to reflect support.

> ParameterProvider and/or ControllerService only support HashiCorp Vault KV v1
> -----------------------------------------------------------------------------
>
> Key: NIFI-10633
> URL: https://issues.apache.org/jira/browse/NIFI-10633
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Documentation Website
> Affects Versions: 1.18.0
> Reporter: Chris Scheib
> Priority: Major
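For context on why a KV v1 client fails against KV v2: the v2 secrets engine changes the HTTP API, inserting a `data` segment into read paths and nesting the secret payload under a `data` key in the response. A sketch of the path difference, with illustrative helpers that are not part of the NiFi client API:

```java
// Illustrative only: HashiCorp Vault KV v1 and KV v2 expose the same secret
// at different API paths, so a v1-only client gets a 404 against a v2 mount.
public class VaultKvPathSketch {

    // KV v1 read: GET /v1/<mount>/<path>
    static String kv1ReadPath(String mount, String path) {
        return "/v1/" + mount + "/" + path;
    }

    // KV v2 read: GET /v1/<mount>/data/<path>
    // (and the response nests the key/value pairs under "data.data")
    static String kv2ReadPath(String mount, String path) {
        return "/v1/" + mount + "/data/" + path;
    }

    public static void main(String[] args) {
        System.out.println(kv1ReadPath("secret", "myapp")); // prints /v1/secret/myapp
        System.out.println(kv2ReadPath("secret", "myapp")); // prints /v1/secret/data/myapp
    }
}
```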
[GitHub] [nifi] priyanka-28 opened a new pull request, #6515: NIFI-10629 Fixed Flaky Test failing with NonDex
priyanka-28 opened a new pull request, #6515:
URL: https://github.com/apache/nifi/pull/6515

# Summary

[NIFI-10629](https://issues.apache.org/jira/browse/NIFI-10629)

The tests in testBulletinEntity() under TestJsonEntitySerializer exhibit non-deterministic behavior when comparing two JSON strings. The fix adds readTree() from the ObjectMapper class, which parses the JSON into a JsonNode so the comparison is structural and therefore deterministic.

# Tracking

Please complete the following tracking steps prior to pull request creation.

### Issue Tracking

- [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking

- [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting

- [x] Pull Request based on current revision of the `main` branch
- [x] Pull Request refers to a feature branch with one commit containing changes

# Verification

Please indicate the verification steps performed prior to pull request creation.

### Build

- [ ] Build completed using `mvn clean install -P contrib-check`
  - [x] JDK 8
  - [ ] JDK 11
  - [ ] JDK 17

### Licensing

- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation

- [ ] Documentation formatting appears as expected in rendered files
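The root cause generalizes beyond this test: comparing serialized JSON strings is sensitive to key order, while comparing parsed structures is not. A dependency-free sketch of the same idea using plain maps (the actual fix uses Jackson's `readTree`, which compares `JsonNode` trees the same way):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Two maps with identical content but different insertion order: structural
// equality holds even though the serialized, order-dependent forms may differ.
public class StructuralEqualitySketch {

    public static void main(String[] args) {
        Map<String, Integer> first = new HashMap<>();
        first.put("id", 1);
        first.put("level", 2);

        Map<String, Integer> second = new LinkedHashMap<>();
        second.put("level", 2);
        second.put("id", 1);

        // Map.equals compares entries, not order, so the result is deterministic,
        // just like comparing two parsed JsonNode trees instead of two strings.
        System.out.println(first.equals(second)); // prints true
    }
}
```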
[GitHub] [nifi] exceptionfactory opened a new pull request, #6514: NIFI-10625 Add support for HTTP/2 in Registry
exceptionfactory opened a new pull request, #6514:
URL: https://github.com/apache/nifi/pull/6514

# Summary

[NIFI-10625](https://issues.apache.org/jira/browse/NIFI-10625)

Adds optional support for HTTP/2 in NiFi Registry. The implementation builds on the shared `nifi-jetty-configuration` capabilities, which already support HTTP/2. Additional changes include the introduction of a `nifi-security-ssl` module for building an `SSLContext` without the dependencies currently associated with `nifi-security-utils`. The configuration includes a new property named `nifi.registry.web.https.application.protocols`, which defaults to `http/1.1` and supports enabling HTTP/2 by specifying `h2 http/1.1` for protocol negotiation.

# Tracking

Please complete the following tracking steps prior to pull request creation.

### Issue Tracking

- [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking

- [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting

- [x] Pull Request based on current revision of the `main` branch
- [x] Pull Request refers to a feature branch with one commit containing changes

# Verification

Please indicate the verification steps performed prior to pull request creation.

### Build

- [x] Build completed using `mvn clean install -P contrib-check`
  - [x] JDK 8
  - [x] JDK 11
  - [x] JDK 17

### Licensing

- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation

- [ ] Documentation formatting appears as expected in rendered files
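Based on the property described above, enabling HTTP/2 negotiation in Registry would presumably look like the following in `nifi-registry.properties` (a sketch of the new configuration, not an excerpt from the patch):

```properties
# Default: HTTP/1.1 only
nifi.registry.web.https.application.protocols=http/1.1

# To enable HTTP/2 with ALPN fallback to HTTP/1.1:
# nifi.registry.web.https.application.protocols=h2 http/1.1
```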
[jira] [Updated] (NIFI-10625) Add HTTP/2 Support to Registry Server
[ https://issues.apache.org/jira/browse/NIFI-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-10625:

Status: Patch Available (was: Open)

> Add HTTP/2 Support to Registry Server
> -------------------------------------
>
> Key: NIFI-10625
> URL: https://issues.apache.org/jira/browse/NIFI-10625
> Project: Apache NiFi
> Issue Type: Improvement
> Components: NiFi Registry
> Reporter: David Handermann
> Assignee: David Handermann
> Priority: Minor
> Time Spent: 10m
> Remaining Estimate: 0h
>
> The {{nifi-jetty-configuration}} module includes shared configuration capabilities that support HTTP/2 in the NiFi Application Server, HandleHttpRequest, and ListenHTTP. This shared configuration should be extended to NiFi Registry in order to support enabling HTTP/2.
[GitHub] [nifi] Kerr0220 opened a new pull request, #6516: NIFI_16031 Fix Flaky Test TestHBase_2_ClientService.testScan
Kerr0220 opened a new pull request, #6516:
URL: https://github.com/apache/nifi/pull/6516

# Summary

[NIFI-10631](https://issues.apache.org/jira/browse/NIFI-10631)

This problem was caused by HashMap, whose iteration order is non-deterministic. After changing HashMap to LinkedHashMap, the problem is solved.

# Tracking

Please complete the following tracking steps prior to pull request creation.

### Issue Tracking

- [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking

- [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting

- [x] Pull Request based on current revision of the `main` branch
- [x] Pull Request refers to a feature branch with one commit containing changes

# Verification

Please indicate the verification steps performed prior to pull request creation.

### Build

- [ ] Build completed using `mvn clean install -P contrib-check`
  - [x] JDK 8
  - [ ] JDK 11
  - [ ] JDK 17

### Licensing

- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation

- [ ] Documentation formatting appears as expected in rendered files
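The fix relies on a documented distinction: `HashMap` makes no guarantee about iteration order, while `LinkedHashMap` iterates in insertion order. A minimal sketch with hypothetical keys, not the actual HBase test data:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LinkedHashMap iterates in insertion order on every run, so any assertion
// that depends on ordering becomes deterministic.
public class IterationOrderSketch {

    public static void main(String[] args) {
        Map<String, String> columns = new LinkedHashMap<>();
        columns.put("family", "nifi");
        columns.put("qualifier", "cq1");
        columns.put("value", "v1");

        // Insertion order is preserved: family, qualifier, value.
        System.out.println(String.join(",", columns.keySet())); // prints family,qualifier,value
    }
}
```

With a plain `HashMap`, the joined key string could come out in any order, which is exactly the kind of hidden assumption tools like NonDex shake out.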
[jira] [Comment Edited] (NIFI-10576) ParquetRecordSetWriter doesn't write avro schema
[ https://issues.apache.org/jira/browse/NIFI-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17616617#comment-17616617 ] Nathan Gough edited comment on NIFI-10576 at 10/12/22 5:50 PM:

This also appears to be an issue in NiFi 1.18, so I have submitted a PR which should now add the avro schema as an attribute, like XMLRecordSetWriter does: https://github.com/apache/nifi/pull/6517

was (Author: thenatog): This also appears to be an issue in NiFi 1.18, so I have submitted a PR which should now add the avro schema as an attribute, like XMLRecordSetWriter does.

> ParquetRecordSetWriter doesn't write avro schema
> ------------------------------------------------
>
> Key: NIFI-10576
> URL: https://issues.apache.org/jira/browse/NIFI-10576
> Project: Apache NiFi
> Issue Type: Bug
> Affects Versions: 1.15.2
> Reporter: DEOM Damien
> Assignee: Nathan Gough
> Priority: Critical
>
> ParquetRecordSetWriter ignores the Set 'avro.schema' Attribute option
[jira] [Commented] (NIFI-10576) ParquetRecordSetWriter doesn't write avro schema
[ https://issues.apache.org/jira/browse/NIFI-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17616617#comment-17616617 ] Nathan Gough commented on NIFI-10576:

This also appears to be an issue in NiFi 1.18, so I have submitted a PR which should now add the avro schema as an attribute, like XMLRecordSetWriter does.

> ParquetRecordSetWriter doesn't write avro schema
> Key: NIFI-10576
> URL: https://issues.apache.org/jira/browse/NIFI-10576
[jira] [Updated] (NIFI-10635) Enum in Avro schema not working in PutDatabaseRecord
[ https://issues.apache.org/jira/browse/NIFI-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-10635:

Affects Version/s: (was: 1.13.0)
Status: Patch Available (was: In Progress)

> Enum in Avro schema not working in PutDatabaseRecord
> ----------------------------------------------------
>
> Key: NIFI-10635
> URL: https://issues.apache.org/jira/browse/NIFI-10635
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Reporter: Jonathan Keller
> Assignee: Matt Burgess
> Priority: Major
> Time Spent: 10m
> Remaining Estimate: 0h
>
> There appears to be a regression in the PutDatabaseRecord processor with 1.13.0 (works in 1.12.1).
> I have an Avro schema for a record which contained enum fields. After upgrading, a processor which had been working was failing with errors, stating it could not find the record type. The field in question is defined as below.
> The files which were erroring out were the exact same files, and no changes had been made to the schema or the CSV reader controller service between the tests. I was also able to successfully move the flow.xml.gz file back to the older version of NiFi, and the PutDatabaseRecord processor was able to insert the database records again.
>
> {noformat}
> {
>   "name": "PER_ORG",
>   "type": {
>     "type": "enum",
>     "name": "PerOrgFlag",
>     "symbols": [ "EMP", "CWR" ]
>   }
> }
> {noformat}
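A plausible shape for the enum-side fix, hedged as a guess rather than a description of the actual patch: when a record field holds an Avro enum symbol, bind its string form to the database instead of trying to resolve the enum as a record type.

```java
// Illustrative only: an Avro enum value such as PerOrgFlag.EMP reaches the
// database layer as an object; binding its symbol name as a String is a
// reasonable coercion for a VARCHAR column. The enum type here is hypothetical.
public class EnumBindingSketch {

    enum PerOrgFlag { EMP, CWR }

    // Coerce an enum (or any other object) to its string form.
    static String toSqlString(Object value) {
        return value instanceof Enum ? ((Enum<?>) value).name() : String.valueOf(value);
    }

    public static void main(String[] args) {
        System.out.println(toSqlString(PerOrgFlag.EMP)); // prints EMP
        System.out.println(toSqlString("CWR"));          // prints CWR
    }
}
```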
[GitHub] [nifi] mattyb149 opened a new pull request, #6518: NIFI-10635: Fix handling of enums in PutDatabaseRecord
mattyb149 opened a new pull request, #6518:
URL: https://github.com/apache/nifi/pull/6518

# Summary

[NIFI-10635](https://issues.apache.org/jira/browse/NIFI-10635)

# Tracking

Please complete the following tracking steps prior to pull request creation.

### Issue Tracking

- [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking

- [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting

- [x] Pull Request based on current revision of the `main` branch
- [x] Pull Request refers to a feature branch with one commit containing changes

# Verification

Please indicate the verification steps performed prior to pull request creation.

### Build

- [ ] Build completed using `mvn clean install -P contrib-check`
  - [ ] JDK 8
  - [x] JDK 11
  - [ ] JDK 17

### Licensing

- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation

- [ ] Documentation formatting appears as expected in rendered files
[jira] [Assigned] (NIFI-10594) NiFi - UI - Labels ignore empty lines
[ https://issues.apache.org/jira/browse/NIFI-10594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denes Arvay reassigned NIFI-10594:

Assignee: Denes Arvay

> NiFi - UI - Labels ignore empty lines
> -------------------------------------
>
> Key: NIFI-10594
> URL: https://issues.apache.org/jira/browse/NIFI-10594
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core UI
> Affects Versions: 1.16.3
> Reporter: Benji Benning
> Assignee: Denes Arvay
> Priority: Minor
> Attachments: Label with multiple newlines.png
> Time Spent: 10m
> Remaining Estimate: 0h
>
> NiFi - UI - The Labels seem to ignore multiple newlines in the rendered version. The editor shows them correctly.
> Example:
> {code:java}
> text1
>
> text2
>
> text3{code}
> will be displayed as:
> {code:java}
> text1
> text2
> text3{code}
> NiFi 1.16.3
> See attached screenshot
[jira] [Updated] (NIFI-10594) NiFi - UI - Labels ignore empty lines
[ https://issues.apache.org/jira/browse/NIFI-10594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denes Arvay updated NIFI-10594:

Status: Patch Available (was: Open)

> NiFi - UI - Labels ignore empty lines
> Key: NIFI-10594
> URL: https://issues.apache.org/jira/browse/NIFI-10594
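A guess at the failure mode, purely hypothetical since the real rendering code lives in the NiFi UI JavaScript: a renderer that filters out empty lines while wrapping label text would collapse paragraphs, which matches the reported symptom exactly.

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// Hypothetical reproduction of the symptom: dropping blank lines while
// re-joining label text collapses "text1\n\ntext2\n\ntext3" into three
// adjacent lines, as in the attached screenshot.
public class LabelBlankLineSketch {

    static String dropBlankLines(String label) {
        return Arrays.stream(label.split("\n"))
                .filter(line -> !line.isEmpty())
                .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        String rendered = dropBlankLines("text1\n\ntext2\n\ntext3");
        System.out.println(rendered.equals("text1\ntext2\ntext3")); // prints true
    }
}
```

The fix would then amount to keeping empty lines (or rendering them with nonzero height) instead of filtering them out.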
[jira] [Updated] (NIFI-10633) ParameterProvider and/or ControllerService only support HashiCorp Vault KV v1
[ https://issues.apache.org/jira/browse/NIFI-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Gresock updated NIFI-10633: --- Status: Patch Available (was: In Progress) > ParameterProvider and/or ControllerService only support HashiCorp Vault KV v1 > - > > Key: NIFI-10633 > URL: https://issues.apache.org/jira/browse/NIFI-10633 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Affects Versions: 1.18.0 >Reporter: Chris Scheib >Assignee: Joe Gresock >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > The documentation for the HashiCorp Vault ParameterProvider > (HashiCorpVaultParameterProvider) and Controller Service > (StandardHashiCorpVaultClientService) don't denote that they currently only > support secret stores of type KV v1. Integration fails with type of KV v2. > KV v2 supports secrets versioning and has a slightly different API. > Please update the documentation to reflect supported secret stores. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-10624) Remove Sensitive Properties Key Warning from Component Documentation
[ https://issues.apache.org/jira/browse/NIFI-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-10624: Status: Patch Available (was: Open) > Remove Sensitive Properties Key Warning from Component Documentation > > > Key: NIFI-10624 > URL: https://issues.apache.org/jira/browse/NIFI-10624 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: David Handermann >Assignee: David Handermann >Priority: Trivial > Time Spent: 10m > Remaining Estimate: 0h > > The generated component documentation includes the following warning for > classes that include sensitive properties: > {quote} > Before entering a value in a sensitive property, ensure that the > *nifi.properties* file has an entry for the property > {*}nifi.sensitive.props.key{*}. > {quote} > NiFi 1.14.0 and following require configuration of the > {{nifi.sensitive.props.key}}, so this warning should be removed. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (NIFI-10634) ConsumeKinesisStream.java exception while connect/consume from AWS kinesis
Eric Wong created NIFI-10634: Summary: ConsumeKinesisStream.java exception while connect/consume from AWS kinesis Key: NIFI-10634 URL: https://issues.apache.org/jira/browse/NIFI-10634 Project: Apache NiFi Issue Type: Bug Components: Extensions Environment: https://hub.docker.com/layers/apache/nifi/1.17.0/images Reporter: Eric Wong Dear NiFi community, this is a request for help with the Apache NiFi AWS Kinesis consumer. The following concerns the official NiFi 1.17.0 release from two months ago, packaged as a container: docker pull apache/nifi:1.17.0 [https://hub.docker.com/layers/apache/nifi/1.17.0/images] Although a similar Kinesis producer can always push messages to AWS Kinesis in the same setup with correct configuration, when the Kinesis consumer connects and pulls from Kinesis, it always hits an exception in NiFi at {noformat}ConsumeKinesisStream.java{noformat} I am unable to identify the cause of this exception, with the following in the log:
{code:java}
2022-09-23 22:07:50,498 ERROR [Timer-Driven Process Thread-7] o.a.n.p.a.k.stream.ConsumeKinesisStream ConsumeKinesisStream[id=5ca4f272-0183-1000--65d7aa17] Processing failed
org.apache.nifi.processor.exception.ProcessException: Worker has shutdown unexpectedly, possibly due to a configuration issue; check logs for details
    at org.apache.nifi.processors.aws.kinesis.stream.ConsumeKinesisStream.onTrigger(ConsumeKinesisStream.java:502)
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1356)
    at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:246)
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
    at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
{code}
There is no other additional info in the NiFi logs inside the container. Could you please help? Or it may be best to work with support directly so I can reproduce it all. Thanks a lot! Eric -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-10635) Enum in Avro schema not working in PutDatabaseRecord
[ https://issues.apache.org/jira/browse/NIFI-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616610#comment-17616610 ] Matt Burgess commented on NIFI-10635: - Verified this is an issue in Postgres as well, reported in StackOverflow: https://stackoverflow.com/questions/73888542/apache-nifi-fail-inserting-using-putdatabaserecord-in-a-table-with-a-enum-colu > Enum in Avro schema not working in PutDatabaseRecord > > > Key: NIFI-10635 > URL: https://issues.apache.org/jira/browse/NIFI-10635 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.13.0 >Reporter: Jonathan Keller >Priority: Major > > There appears to be a regression in the PutDatabaseRecord processor with > 1.13.0 (works in 1.12.1) > I have an Avro schema for a record which contained enum fields. After > upgrading, a processor which had been working was failing on errors, stating > it could not find the record type. The field in question is defined as below. > The files which were erroring out were the exact same files, and no changes > had been made to the schema or the CSV reader controller between the tests. > I was also able to successfully move the flow.xml.gz file back to the older > version of NiFi and the PutDatabaseRecord processor was able to work again to > insert the database records. > {noformat} > { > "name": "PER_ORG", > "type": { > "type": "enum", > "name": "PerOrgFlag", > "symbols": [ > "EMP", > "CWR" > ] > } > } > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
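As context for the enum issue above: an Avro enum value is just a string constrained to the schema's `symbols` list, so a record writer can validate it against the symbols and bind it as a plain string (e.g. VARCHAR). The sketch below is illustrative only — the `coerce_enum` helper is hypothetical and is not NiFi's implementation; the schema literal is the one from the ticket:

```python
# The PER_ORG field definition quoted in the ticket, as a Python dict.
PER_ORG_SCHEMA = {
    "name": "PER_ORG",
    "type": {"type": "enum", "name": "PerOrgFlag", "symbols": ["EMP", "CWR"]},
}

def coerce_enum(value, field_schema):
    """Validate an enum value against its schema and return it as a string."""
    symbols = field_schema["type"]["symbols"]
    if value not in symbols:
        raise ValueError(f"{value!r} is not one of {symbols}")
    return str(value)  # bind as a plain string rather than a distinct enum type

print(coerce_enum("EMP", PER_ORG_SCHEMA))  # EMP
```

The point is that no database-specific enum type lookup is required: validated symbols degrade cleanly to strings.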
[jira] [Assigned] (NIFI-10633) ParameterProvider and/or ControllerService only support HashiCorp Vault KV v1
[ https://issues.apache.org/jira/browse/NIFI-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Gresock reassigned NIFI-10633: -- Assignee: Joe Gresock > ParameterProvider and/or ControllerService only support HashiCorp Vault KV v1 > - > > Key: NIFI-10633 > URL: https://issues.apache.org/jira/browse/NIFI-10633 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Affects Versions: 1.18.0 >Reporter: Chris Scheib >Assignee: Joe Gresock >Priority: Major > > The documentation for the HashiCorp Vault ParameterProvider > (HashiCorpVaultParameterProvider) and Controller Service > (StandardHashiCorpVaultClientService) don't denote that they currently only > support secret stores of type KV v1. Integration fails with type of KV v2. > KV v2 supports secrets versioning and has a slightly different API. > Please update the documentation to reflect supported secret stores. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (NIFI-10635) Enum in Avro schema not working in PutDatabaseRecord
[ https://issues.apache.org/jira/browse/NIFI-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess reassigned NIFI-10635: --- Assignee: Matt Burgess > Enum in Avro schema not working in PutDatabaseRecord > > > Key: NIFI-10635 > URL: https://issues.apache.org/jira/browse/NIFI-10635 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.13.0 >Reporter: Jonathan Keller >Assignee: Matt Burgess >Priority: Major > > There appears to be a regression in the PutDatabaseRecord processor with > 1.13.0 (works in 1.12.1) > I have an Avro schema for a record which contained enum fields. After > upgrading, a processor which had been working was failing on errors, stating > it could not find the record type. The field in question is defined as below. > The files which were erroring out were the exact same files, and no changes > had been made to the schema or the CSV reader controller between the tests. > I was also able to successfully move the flow.xml.gz file back to the older > version of NiFi and the PutDatabaseRecord processor was able to work again to > insert the database records. > {noformat} > { > "name": "PER_ORG", > "type": { > "type": "enum", > "name": "PerOrgFlag", > "symbols": [ > "EMP", > "CWR" > ] > } > } > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] mattyb149 commented on pull request #6517: NiFi-10576 - ParquetRecordSetWriter doesn't write avro schema to attribute
mattyb149 commented on PR #6517: URL: https://github.com/apache/nifi/pull/6517#issuecomment-1276595864 Looks good. Is there any way to add a unit test for this? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] thenatog opened a new pull request, #6517: NiFi-10576 - ParquetRecordSetWriter doesn't write avro schema to attribute
thenatog opened a new pull request, #6517: URL: https://github.com/apache/nifi/pull/6517 # Summary [NiFi-10576](https://issues.apache.org/jira/browse/NiFi-10576) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NiFi-10576` - [x] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NiFi-10576` ### Pull Request Formatting - [x] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 8 - [x] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] exceptionfactory opened a new pull request, #6513: NIFI-10624 Remove sensitive properties key warning
exceptionfactory opened a new pull request, #6513: URL: https://github.com/apache/nifi/pull/6513 # Summary [NIFI-10624](https://issues.apache.org/jira/browse/NIFI-10624) Removes the warning for setting a sensitive properties key from the generated component documentation. NiFi 1.14.0 and following require a configured value for `nifi.sensitive.props.key`, so the warning is no longer necessary. Additional changes include correcting logging statement construction and removing unnecessary test classes. # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [X] Build completed using `mvn clean install -P contrib-check` - [X] JDK 8 - [X] JDK 11 - [X] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [X] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-10635) Enum in Avro schema not working in PutDatabaseRecord
[ https://issues.apache.org/jira/browse/NIFI-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-10635: Description: There appears to be a regression in the PutDatabaseRecord processor with 1.13.0 (works in 1.12.1) I have an Avro schema for a record which contained enum fields. After upgrading, a processor which had been working was failing on errors, stating it could not find the record type. The field in question is defined as below. The files which were erroring out were the exact same files, and no changes had been made to the schema or the CSV reader controller between the tests. I was also able to successfully move the flow.xml.gz file back to the older version of NiFi and the PutDatabaseRecord processor was able to work again to insert the database records. {noformat} { "name": "PER_ORG", "type": { "type": "enum", "name": "PerOrgFlag", "symbols": [ "EMP", "CWR" ] } } {noformat} was: There appears to be a regression in the PutDatabaseRecord processor with 1.13.0 (works in 1.12.1) I have an Avro schema for a record which contained enum fields and timestamps. After upgrading, a processor which had been working was failing on errors on both of these field types. One stated it could not find the record type, the other with the exception and stack trace below. The fields in question were defined as below. The files which were erroring out were the exact same files, and no changes had been made to the schema or the CSV reader controller between the tests. I was also able to successfully move the flow.xml.gz file back to the older version of NiFi and the PutDatabaseRecord processor was able to work again to insert the database records. 
{noformat}
{
  "name": "PER_ORG",
  "type": {
    "type": "enum",
    "name": "PerOrgFlag",
    "symbols": [ "EMP", "CWR" ]
  }
},{
  "name": "UPD_BT_DTM",
  "type": {
    "type": "long",
    "logicalType": "timestamp-millis"
  }
},
{noformat}
{noformat}
2021-02-23 11:44:26,496 ERROR [Timer-Driven Process Thread-3] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=93b478c8-0177-1000-1ca3-59b4189b1ecb] Failed to put Records to database for StandardFlowFileRecord[uuid=fd4e3be1-8f7b-4a7a-bf45-2e98627f732a,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1614107687354-5, container=default, section=5], offset=682549, length=6820],offset=0,name=DDODS_DVCMP_PS_JOB_2021-02-09_021816_2648.dat,size=6820]. Routing to failure.: java.sql.BatchUpdateException: ORA-00932: inconsistent datatypes: expected NUMBER got TIMESTAMP
java.sql.BatchUpdateException: ORA-00932: inconsistent datatypes: expected NUMBER got TIMESTAMP
    at oracle.jdbc.driver.OraclePreparedStatement.executeLargeBatch(OraclePreparedStatement.java:9711)
    at oracle.jdbc.driver.T4CPreparedStatement.executeLargeBatch(T4CPreparedStatement.java:1447)
    at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:9487)
    at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:237)
    at org.apache.commons.dbcp2.DelegatingStatement.executeBatch(DelegatingStatement.java:242)
    at org.apache.commons.dbcp2.DelegatingStatement.executeBatch(DelegatingStatement.java:242)
    at sun.reflect.GeneratedMethodAccessor583.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)
    at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.access$100(StandardControllerServiceInvocationHandler.java:38)
    at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler$ProxiedReturnObjectInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:240)
    at com.sun.proxy.$Proxy360.executeBatch(Unknown Source)
    at org.apache.nifi.processors.standard.PutDatabaseRecord.executeDML(PutDatabaseRecord.java:751)
    at org.apache.nifi.processors.standard.PutDatabaseRecord.putToDatabase(PutDatabaseRecord.java:838)
    at org.apache.nifi.processors.standard.PutDatabaseRecord.onTrigger(PutDatabaseRecord.java:487)
    at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
    at
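The ORA-00932 error in the description above arises when the `timestamp-millis` long is bound as a NUMBER instead of a timestamp. A minimal sketch of the expected conversion, assuming epoch-millisecond semantics as defined for the `timestamp-millis` logical type in the Avro specification (the helper name is illustrative, not NiFi code):

```python
from datetime import datetime, timezone

def millis_to_timestamp(epoch_millis):
    # An Avro long with logicalType timestamp-millis counts milliseconds
    # since the Unix epoch; it should be bound as a timestamp, not a NUMBER.
    seconds, millis = divmod(epoch_millis, 1000)
    return datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=millis * 1000)

# The millisecond value corresponding to 2021-02-23 11:44:26.496 UTC,
# matching the timestamp format of the log line in the report.
print(millis_to_timestamp(1614080666496).isoformat())
```

Once the value is converted to a timestamp object, the JDBC layer can bind it with `setTimestamp` rather than as a numeric parameter, which is what the Oracle error is complaining about.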
[GitHub] [nifi] gresockj opened a new pull request, #6520: NIFI-10633: Adding references to Key/Value Version 1 secrets engine i…
gresockj opened a new pull request, #6520: URL: https://github.com/apache/nifi/pull/6520 …n HashiCorp Vault documentation # Summary [NIFI-10633](https://issues.apache.org/jira/browse/NIFI-10633) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1400: MINIFICPP-1848 Create a generic solution for processor metrics
adamdebreceni commented on code in PR #1400: URL: https://github.com/apache/nifi-minifi-cpp/pull/1400#discussion_r993315426 ## libminifi/include/utils/Averager.h: ## @@ -0,0 +1,91 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include <mutex>
+#include <vector>
+
+namespace org::apache::nifi::minifi::utils {
+
+template<typename T>
+concept Summable = requires(T x) { x + x; };  // NOLINT(readability/braces)
+
+template<typename T>
+concept DividableByInteger = requires(T x, uint32_t divisor) { x / divisor; };  // NOLINT(readability/braces)
+
+template<typename ValueType>
+requires Summable<ValueType> && DividableByInteger<ValueType>
+class Averager {
+ public:
+  explicit Averager(uint32_t sample_size) : SAMPLE_SIZE_(sample_size) {
+    values_.reserve(SAMPLE_SIZE_);
+  }
+
+  ValueType getAverage() const;
+  ValueType getLastValue() const;
+  void addValue(ValueType runtime);
+
+ private:
+  const uint32_t SAMPLE_SIZE_;
+  mutable std::mutex average_value_mutex_;
+  uint32_t next_average_index_ = 0;
+  std::vector<ValueType> values_;
+};
+
+template<typename ValueType>
+requires Summable<ValueType> && DividableByInteger<ValueType>
+ValueType Averager<ValueType>::getAverage() const {
+  if (values_.empty()) {
+    return {};
+  }
+  ValueType sum{};
+  std::lock_guard lock(average_value_mutex_);
+  for (const auto& value : values_) {
+    sum += value;
+  }
+  return sum / values_.size();
+}
+
+template<typename ValueType>
+requires Summable<ValueType> && DividableByInteger<ValueType>
+void Averager<ValueType>::addValue(ValueType runtime) {
+  std::lock_guard lock(average_value_mutex_);
+  if (values_.size() < SAMPLE_SIZE_) {
+    values_.push_back(runtime);
+  } else {
+    if (next_average_index_ >= values_.size()) {
+      next_average_index_ = 0;
+    }
+    values_[next_average_index_] = runtime;
+    ++next_average_index_;
+  }
+}
+
+template<typename ValueType>
+requires Summable<ValueType> && DividableByInteger<ValueType>
+ValueType Averager<ValueType>::getLastValue() const {
+  std::lock_guard lock(average_value_mutex_);
+  if (values_.empty()) {
+    return {};
+  } else if (values_.size() < SAMPLE_SIZE_) {
+    return values_[values_.size() - 1];
+  } else {
+    return values_[next_average_index_ - 1];
Review Comment: initializing `next_average_index_` to `SAMPLE_SIZE_` could solve this -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-10428) Allow for loading Avro record schema from files
[ https://issues.apache.org/jira/browse/NIFI-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616475#comment-17616475 ] Joe Witt commented on NIFI-10428: - If we're pursuing a file-based option, how will this work in a clustered environment of multiple NiFi nodes? Would all schema files need to be on all nodes or is it sharing state somehow? > Allow for loading Avro record schema from files > --- > > Key: NIFI-10428 > URL: https://issues.apache.org/jira/browse/NIFI-10428 > Project: Apache NiFi > Issue Type: New Feature > Components: Core Framework >Affects Versions: 1.18.0 >Reporter: Daniel Stieglitz >Assignee: Daniel Stieglitz >Priority: Major > Labels: Record, avro, registry, schema > Time Spent: 10m > Remaining Estimate: 0h > > The various record reader and writer controllers XML, CSV, JSON and AVRO, etc > can load schemas from a registry service. The AvroSchemaRegistry can only > load its schema from a text property. It would be useful to have a file > based Avro schema registry, with a configurable refresh interval to allow for > adding, removing or updates to schema files. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1400: MINIFICPP-1848 Create a generic solution for processor metrics
adamdebreceni commented on code in PR #1400: URL: https://github.com/apache/nifi-minifi-cpp/pull/1400#discussion_r993313911 ## libminifi/include/utils/Averager.h: ## @@ -0,0 +1,91 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include <mutex>
+#include <vector>
+
+namespace org::apache::nifi::minifi::utils {
+
+template<typename T>
+concept Summable = requires(T x) { x + x; };  // NOLINT(readability/braces)
+
+template<typename T>
+concept DividableByInteger = requires(T x, uint32_t divisor) { x / divisor; };  // NOLINT(readability/braces)
+
+template<typename ValueType>
+requires Summable<ValueType> && DividableByInteger<ValueType>
+class Averager {
+ public:
+  explicit Averager(uint32_t sample_size) : SAMPLE_SIZE_(sample_size) {
+    values_.reserve(SAMPLE_SIZE_);
+  }
+
+  ValueType getAverage() const;
+  ValueType getLastValue() const;
+  void addValue(ValueType runtime);
+
+ private:
+  const uint32_t SAMPLE_SIZE_;
+  mutable std::mutex average_value_mutex_;
+  uint32_t next_average_index_ = 0;
+  std::vector<ValueType> values_;
+};
+
+template<typename ValueType>
+requires Summable<ValueType> && DividableByInteger<ValueType>
+ValueType Averager<ValueType>::getAverage() const {
+  if (values_.empty()) {
+    return {};
+  }
+  ValueType sum{};
+  std::lock_guard lock(average_value_mutex_);
+  for (const auto& value : values_) {
+    sum += value;
+  }
+  return sum / values_.size();
+}
+
+template<typename ValueType>
+requires Summable<ValueType> && DividableByInteger<ValueType>
+void Averager<ValueType>::addValue(ValueType runtime) {
+  std::lock_guard lock(average_value_mutex_);
+  if (values_.size() < SAMPLE_SIZE_) {
+    values_.push_back(runtime);
+  } else {
+    if (next_average_index_ >= values_.size()) {
+      next_average_index_ = 0;
+    }
+    values_[next_average_index_] = runtime;
+    ++next_average_index_;
+  }
+}
+
+template<typename ValueType>
+requires Summable<ValueType> && DividableByInteger<ValueType>
+ValueType Averager<ValueType>::getLastValue() const {
+  std::lock_guard lock(average_value_mutex_);
+  if (values_.empty()) {
+    return {};
+  } else if (values_.size() < SAMPLE_SIZE_) {
+    return values_[values_.size() - 1];
+  } else {
+    return values_[next_average_index_ - 1];
Review Comment: wouldn't this cause an underflow, if we query `getLastValue` right after we have pushed the last value at the end of the `values_` vector? (`values_.size() == SAMPLE_SIZE_ && next_average_index_ == 0`) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
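The off-by-one discussed in this review thread can be reproduced in a few lines. The sketch below mirrors the ring-buffer logic in Python (illustrative only, not the MiNiFi C++ code) and uses modular arithmetic in `get_last_value` so the index cannot underflow when the write index has wrapped to 0 — an alternative to initializing the index to the sample size as the reviewer suggests:

```python
class Averager:
    """Ring-buffer averager mirroring the logic under review (sketch only)."""

    def __init__(self, sample_size):
        self.sample_size = sample_size
        self.values = []
        self.next_index = 0  # next slot to overwrite once the buffer is full

    def add_value(self, value):
        if len(self.values) < self.sample_size:
            self.values.append(value)  # next_index stays 0 while filling
        else:
            self.values[self.next_index] = value
            self.next_index = (self.next_index + 1) % self.sample_size

    def get_average(self):
        if not self.values:
            return 0
        return sum(self.values) / len(self.values)

    def get_last_value(self):
        if not self.values:
            return 0
        if len(self.values) < self.sample_size:
            return self.values[-1]
        # Modular arithmetic handles next_index == 0: (0 - 1) % n == n - 1,
        # which is exactly the slot the last value landed in.
        return self.values[(self.next_index - 1) % self.sample_size]
```

The problematic case from the comment — querying the last value right after the buffer first fills, when `next_index` is still 0 — resolves to index `sample_size - 1` here instead of underflowing.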
[GitHub] [nifi-minifi-cpp] szaszm closed pull request #1422: MINIFICPP-1938 Enable parallel onTrigger calls for Azure and AWS processors
szaszm closed pull request #1422: MINIFICPP-1938 Enable parallel onTrigger calls for Azure and AWS processors URL: https://github.com/apache/nifi-minifi-cpp/pull/1422 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm closed pull request #1428: MINIFICPP-1648 Input/OutputStreamCallback should use Input/OutputStream instead of BaseStream
szaszm closed pull request #1428: MINIFICPP-1648 Input/OutputStreamCallback should use Input/OutputStream instead of BaseStream URL: https://github.com/apache/nifi-minifi-cpp/pull/1428 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] UcanInfosec commented on pull request #6510: NIFI-10626 Update jruby-complete to 9.3.8.0
UcanInfosec commented on PR #6510: URL: https://github.com/apache/nifi/pull/6510#issuecomment-1276034674 I had to run a second workflow on my own branch to get the canceled/unsuccessful check to run. The same might be needed here. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (MINIFICPP-1936) Python link error in script extension
[ https://issues.apache.org/jira/browse/MINIFICPP-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Szasz resolved MINIFICPP-1936. - Resolution: Fixed > Python link error in script extension > - > > Key: MINIFICPP-1936 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1936 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Adam Debreceni >Assignee: Adam Debreceni >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > libpython.so is expected to be opened into the global scope, moreover there > was a collision on systems using a different openssl library as the bundled > libressl's symbols polluted the global scope -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (MINIFICPP-1936) Python link error in script extension
[ https://issues.apache.org/jira/browse/MINIFICPP-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Szasz updated MINIFICPP-1936: Fix Version/s: 0.13.0 > Python link error in script extension > - > > Key: MINIFICPP-1936 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1936 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Adam Debreceni >Assignee: Adam Debreceni >Priority: Major > Fix For: 0.13.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > libpython.so is expected to be opened into the global scope, moreover there > was a collision on systems using a different openssl library as the bundled > libressl's symbols polluted the global scope -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1400: MINIFICPP-1848 Create a generic solution for processor metrics
adamdebreceni commented on code in PR #1400: URL: https://github.com/apache/nifi-minifi-cpp/pull/1400#discussion_r993320857 ## libminifi/src/core/ProcessorMetrics.cpp: ## @@ -0,0 +1,104 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#include "core/ProcessorMetrics.h" + +#include "core/Processor.h" +#include "utils/gsl.h" + +using namespace std::literals::chrono_literals; + +namespace org::apache::nifi::minifi::core { + +ProcessorMetrics::ProcessorMetrics(const Processor& source_processor) +: source_processor_(source_processor), + on_trigger_runtime_averager_(STORED_ON_TRIGGER_RUNTIME_COUNT) { +} + +std::string ProcessorMetrics::getName() const { + return source_processor_.getProcessorType() + "Metrics"; +} + +std::unordered_map ProcessorMetrics::getCommonLabels() const { + return {{"metric_class", getName()}, {"processor_name", source_processor_.getName()}, {"processor_uuid", source_processor_.getUUIDStr()}}; +} + +std::vector ProcessorMetrics::serialize() { + std::vector resp; + + state::response::SerializedResponseNode root_node { +.name = source_processor_.getUUIDStr(), +.children = { + {.name = "OnTriggerInvocations", .value = static_cast(iterations.load())}, + {.name = "AverageOnTriggerRunTime", .value = 
static_cast(getAverageOnTriggerRuntime().count())}, + {.name = "LastOnTriggerRunTime", .value = static_cast(getLastOnTriggerRuntime().count())}, + {.name = "TransferredFlowFiles", .value = static_cast(transferred_flow_files.load())}, + {.name = "TransferredBytes", .value = transferred_bytes.load()} +} + }; + + for (const auto& [relationship, count] : transferred_relationships_) { Review Comment: is it safe to access `transferred_relationships_` here without holding the `transferred_relationships_mutex_`? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
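One way to address the locking concern raised above is to take `transferred_relationships_mutex_` for the duration of the iteration, or to copy the map under the lock first. A minimal sketch of the snapshot-under-lock pattern (the class and helper below are illustrative models of the discussion, not the PR's actual code):

```cpp
#include <cstddef>
#include <map>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

// Minimal model of the pattern in question: a counts map written by worker
// threads and read during metrics serialization. Names mirror the review
// discussion, but this class is illustrative, not the PR's implementation.
class RelationshipCounts {
 public:
  void increment(const std::string& relationship, std::size_t count) {
    std::lock_guard<std::mutex> lock(mutex_);
    counts_[relationship] += count;
  }

  // Copy the map under the same mutex, so serialization iterates over a
  // consistent snapshot and never races with increment().
  std::vector<std::pair<std::string, std::size_t>> snapshot() const {
    std::lock_guard<std::mutex> lock(mutex_);
    return {counts_.begin(), counts_.end()};
  }

 private:
  mutable std::mutex mutex_;
  std::map<std::string, std::size_t> counts_;
};

// Helper for testing: total transferred count across all relationships.
std::size_t totalCount() {
  RelationshipCounts counts;
  counts.increment("success", 2);
  counts.increment("failure", 1);
  std::size_t total = 0;
  for (const auto& [name, count] : counts.snapshot()) total += count;
  return total;
}
```

The snapshot copy costs an allocation per serialization, but it keeps the lock hold time short and avoids iterating a map that another thread may be mutating.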
[jira] [Created] (MINIFICPP-1959) Fix VolatileFlowFileRepository leaving dangling ResourceClaims
Adam Debreceni created MINIFICPP-1959: - Summary: Fix VolatileFlowFileRepository leaving dangling ResourceClaims Key: MINIFICPP-1959 URL: https://issues.apache.org/jira/browse/MINIFICPP-1959 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Reporter: Adam Debreceni Assignee: Adam Debreceni In `ProcessSession::commit`, waiting for `MultiPut` to finish before calling `increaseFlowFileRecordOwnedCount` could evict previously stored flowfiles from the VolatileRepo, causing their resource's ref_counter to drop to 0 (as we have not yet increased it on behalf of the stored reference) and leaving dangling ResourceClaims. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-10428) Allow for loading Avro record schema from files
[ https://issues.apache.org/jira/browse/NIFI-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616516#comment-17616516 ] Joe Witt commented on NIFI-10428: - They do all have a design which considers operating in a clustered environment. Ones where this is inherently problematic have more awkward solutions like something that is a CS meant to run on all nodes to hold state but is actually only interacted with by clients that talk to a single node. These solutions are inelegant for sure. Our DistributedMapCache(Server/Client) examples apply here. This is why we've not done this file based approach already as it is just wonky. You want a real Schema Registry service and then nifi acts as a client to it... > Allow for loading Avro record schema from files > --- > > Key: NIFI-10428 > URL: https://issues.apache.org/jira/browse/NIFI-10428 > Project: Apache NiFi > Issue Type: New Feature > Components: Core Framework >Affects Versions: 1.18.0 >Reporter: Daniel Stieglitz >Assignee: Daniel Stieglitz >Priority: Major > Labels: Record, avro, registry, schema > Time Spent: 10m > Remaining Estimate: 0h > > The various record reader and writer controllers XML, CSV, JSON and AVRO, etc > can load schemas from a registry service. The AvroSchemaRegistry can only > load its schema from a text property. It would be useful to have a file > based Avro schema registry, with a configurable refresh interval to allow for > adding, removing or updates to schema files. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6507: NIFI-10622 Fix Flaky Test
exceptionfactory commented on code in PR #6507: URL: https://github.com/apache/nifi/pull/6507#discussion_r993352966 ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-site-to-site/src/main/java/org/apache/nifi/remote/protocol/AbstractFlowFileServerProtocol.java: ## @@ -257,7 +259,13 @@ public int transferFlowFiles(final Peer peer, final ProcessContext context, fina session.read(flowFile, new InputStreamCallback() { @Override public void process(final InputStream in) throws IOException { -final DataPacket dataPacket = new StandardDataPacket(toSend.getAttributes(), in, toSend.getSize()); +LinkedHashMap attributes = new LinkedHashMap<>(); +String[] keySet = toSend.getAttributes().keySet().toArray(new String[0]); +Arrays.sort(keySet); +for(String key: keySet){ +attributes.put(key, toSend.getAttributes().get(key)); +} Review Comment: This approach effectively makes an additional copy of the attributes prior to transmission, which introduces some inefficiency when sending packets. It would be better to evaluate the initial attribute storage in the FlowFile implementation, as opposed to making this change. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (MINIFICPP-1938) Enable parallel onTrigger calls for Azure and AWS processors
[ https://issues.apache.org/jira/browse/MINIFICPP-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi resolved MINIFICPP-1938. -- Fix Version/s: 0.13.0 Resolution: Fixed > Enable parallel onTrigger calls for Azure and AWS processors > > > Key: MINIFICPP-1938 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1938 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Gábor Gyimesi >Assignee: Gábor Gyimesi >Priority: Major > Fix For: 0.13.0 > > Time Spent: 20m > Remaining Estimate: 0h > > AWS and Azure processors currently do not support running in parallel, but > that would highly improve the throughput of these processors. We should solve > the problems occurring with parallel threads and enable multi-threading with > these processors. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (MINIFICPP-1648) InputStreamCallback OutputStreamCallback should use Input/OutputStream instead of BaseStream
[ https://issues.apache.org/jira/browse/MINIFICPP-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi resolved MINIFICPP-1648. -- Fix Version/s: 0.13.0 Resolution: Fixed > InputStreamCallback OutputStreamCallback should use Input/OutputStream > instead of BaseStream > > > Key: MINIFICPP-1648 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1648 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Martin Zink >Assignee: Gábor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 0.13.0 > > Time Spent: 1h > Remaining Estimate: 0h > > The current Input/Output callback interfaces use the BaseStream instead of > the respective InputStream, OutputStream. > {code:java} > virtual int64_t InputStreamCallback::process(const > std::shared_ptr<BaseStream>& stream) = 0; > virtual int64_t OutputStreamCallback::process(const > std::shared_ptr<BaseStream>& stream) = 0; > {code} > Ideally it should look like this > {code:java} > virtual int64_t InputStreamCallback::process(const > std::shared_ptr<InputStream>& stream) = 0; > virtual int64_t OutputStreamCallback::process(const > std::shared_ptr<OutputStream>& stream) = 0; > {code} > Without this it is impossible to create and use ReadOnly/WriteOnly streams > for FlowFile IO (the BaseStream requires implementing both the Input and > Output Stream interfaces) > However there may be some feature dependent on this solution. (e.g. calling > write from InputStreamCallback) > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1400: MINIFICPP-1848 Create a generic solution for processor metrics
adamdebreceni commented on code in PR #1400: URL: https://github.com/apache/nifi-minifi-cpp/pull/1400#discussion_r993301197 ## libminifi/include/utils/Averager.h: ## @@ -0,0 +1,91 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#pragma once + +#include +#include + +namespace org::apache::nifi::minifi::utils { + +template +concept Summable = requires(T x) { x + x; }; // NOLINT(readability/braces) + +template +concept DividableByInteger = requires(T x, uint32_t divisor) { x / divisor; }; // NOLINT(readability/braces) + +template +requires Summable && DividableByInteger +class Averager { + public: + explicit Averager(uint32_t sample_size) : SAMPLE_SIZE_(sample_size) { +values_.reserve(SAMPLE_SIZE_); + } + + ValueType getAverage() const; + ValueType getLastValue() const; + void addValue(ValueType runtime); + + private: + const uint32_t SAMPLE_SIZE_; + mutable std::mutex average_value_mutex_; + uint32_t next_average_index_ = 0; + std::vector values_; +}; + +template +requires Summable && DividableByInteger +ValueType Averager::getAverage() const { + if (values_.empty()) { +return {}; + } + ValueType sum{}; + std::lock_guard lock(average_value_mutex_); + for (const auto& value : values_) { Review Comment: could 
`std::accumulate` work here? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
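The `std::accumulate` suggestion would look roughly like the following — a standalone sketch with `ValueType` fixed to `std::chrono::milliseconds`, and with the locking omitted for brevity (the real method would still hold `average_value_mutex_`):

```cpp
#include <cassert>
#include <chrono>
#include <numeric>
#include <vector>

// Standalone sketch of Averager::getAverage() rewritten with std::accumulate,
// as suggested in the review. The initial value must be a ValueType so that
// operator+ resolves to duration addition rather than integer addition.
std::chrono::milliseconds averageOf(const std::vector<std::chrono::milliseconds>& values) {
  if (values.empty()) {
    return {};
  }
  const auto sum = std::accumulate(values.begin(), values.end(), std::chrono::milliseconds{0});
  return sum / values.size();
}
```

This replaces the explicit loop with a single expression while keeping the same empty-input behavior as the original method.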
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1400: MINIFICPP-1848 Create a generic solution for processor metrics
adamdebreceni commented on code in PR #1400: URL: https://github.com/apache/nifi-minifi-cpp/pull/1400#discussion_r993313911 ## libminifi/include/utils/Averager.h: ## @@ -0,0 +1,91 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#pragma once + +#include +#include + +namespace org::apache::nifi::minifi::utils { + +template +concept Summable = requires(T x) { x + x; }; // NOLINT(readability/braces) + +template +concept DividableByInteger = requires(T x, uint32_t divisor) { x / divisor; }; // NOLINT(readability/braces) + +template +requires Summable && DividableByInteger +class Averager { + public: + explicit Averager(uint32_t sample_size) : SAMPLE_SIZE_(sample_size) { +values_.reserve(SAMPLE_SIZE_); + } + + ValueType getAverage() const; + ValueType getLastValue() const; + void addValue(ValueType runtime); + + private: + const uint32_t SAMPLE_SIZE_; + mutable std::mutex average_value_mutex_; + uint32_t next_average_index_ = 0; + std::vector values_; +}; + +template +requires Summable && DividableByInteger +ValueType Averager::getAverage() const { + if (values_.empty()) { +return {}; + } + ValueType sum{}; + std::lock_guard lock(average_value_mutex_); + for (const auto& value : values_) { +sum += value; + } + return 
sum / values_.size(); +} + +template +requires Summable && DividableByInteger +void Averager::addValue(ValueType runtime) { + std::lock_guard lock(average_value_mutex_); + if (values_.size() < SAMPLE_SIZE_) { +values_.push_back(runtime); + } else { +if (next_average_index_ >= values_.size()) { + next_average_index_ = 0; +} +values_[next_average_index_] = runtime; +++next_average_index_; + } +} + +template +requires Summable && DividableByInteger +ValueType Averager::getLastValue() const { + std::lock_guard lock(average_value_mutex_); + if (values_.empty()) { +return {}; + } else if (values_.size() < SAMPLE_SIZE_) { +return values_[values_.size() - 1]; + } else { +return values_[next_average_index_ - 1]; Review Comment: wouldn't this cause an ~underflow~ out-of-bounds indexing, if we query `getLastValue` right after we have pushed the last value at the end of the `values_` vector? (`values_.size() == SAMPLE_SIZE_ && next_average_index_ == 0`) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
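The wrap-around case flagged above (`values_.size() == SAMPLE_SIZE_ && next_average_index_ == 0`) can be avoided by computing the last index modulo the buffer size. A minimal single-threaded sketch of the same storage scheme, using an `int` ring buffer with illustrative names rather than the PR's final fix:

```cpp
#include <cassert>
#include <cstdint>
#include <initializer_list>
#include <vector>

// Ring buffer mirroring Averager's storage scheme: fill via push_back, then
// overwrite cyclically. getLast() adds size() before taking the modulus, so
// it is safe even right after the buffer first fills (next_index_ still 0),
// where the original values_[next_average_index_ - 1] would index out of bounds.
class RingBuffer {
 public:
  explicit RingBuffer(uint32_t sample_size) : sample_size_(sample_size) {}

  void add(int value) {
    if (values_.size() < sample_size_) {
      values_.push_back(value);  // still filling; next_index_ stays 0
    } else {
      if (next_index_ >= values_.size()) {
        next_index_ = 0;
      }
      values_[next_index_++] = value;
    }
  }

  int getLast() const {
    if (values_.empty()) {
      return 0;
    }
    // (next_index_ + size - 1) % size never underflows, unlike next_index_ - 1.
    return values_[(next_index_ + values_.size() - 1) % values_.size()];
  }

 private:
  const uint32_t sample_size_;
  uint32_t next_index_ = 0;
  std::vector<int> values_;
};

// Helper for testing: push the given values into a buffer of the given size
// and return the last value seen.
int lastAfterAdding(std::initializer_list<int> values, uint32_t sample_size) {
  RingBuffer buffer(sample_size);
  for (int v : values) {
    buffer.add(v);
  }
  return buffer.getLast();
}
```

Because `next_index_` is unsigned, the original `next_index_ - 1` at zero wraps to a huge value rather than a negative one, which is why the modular form is the safer choice.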
[jira] [Commented] (NIFI-10428) Allow for loading Avro record schema from files
[ https://issues.apache.org/jira/browse/NIFI-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616506#comment-17616506 ] Daniel Stieglitz commented on NIFI-10428: - [~joewitt] That is a good point which I did not consider. Do other controller services share state in a clustered environment of multiple nifi nodes? > Allow for loading Avro record schema from files > --- > > Key: NIFI-10428 > URL: https://issues.apache.org/jira/browse/NIFI-10428 > Project: Apache NiFi > Issue Type: New Feature > Components: Core Framework >Affects Versions: 1.18.0 >Reporter: Daniel Stieglitz >Assignee: Daniel Stieglitz >Priority: Major > Labels: Record, avro, registry, schema > Time Spent: 10m > Remaining Estimate: 0h > > The various record reader and writer controllers XML, CSV, JSON and AVRO, etc > can load schemas from a registry service. The AvroSchemaRegistry can only > load its schema from a text property. It would be useful to have a file > based Avro schema registry, with a configurable refresh interval to allow for > adding, removing or updates to schema files. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] xuanronaldo commented on pull request #6416: NIFI-10234 implement PutIoTDB
xuanronaldo commented on PR #6416: URL: https://github.com/apache/nifi/pull/6416#issuecomment-1276959632 Sorry for replying so late; I have had a lot of work to do. @lizhizhou, an excellent contributor from our community, would like to finish my work. He has also committed a PR to NiFi before, so I think you will work well together. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] UcanInfosec opened a new pull request, #6521: NIFI-10636 Update Jython-standalone to 2.7.3
UcanInfosec opened a new pull request, #6521: URL: https://github.com/apache/nifi/pull/6521 # Summary [NIFI-10636](https://issues.apache.org/jira/browse/NIFI-10636) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI-10636) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] TheGreatRandall opened a new pull request, #6522: NIFI-10637 fixing flaky test in TestParseCEF
TheGreatRandall opened a new pull request, #6522: URL: https://github.com/apache/nifi/pull/6522 # Summary [NIFI-10637](https://issues.apache.org/jira/browse/NIFI-10637) Following the problem reported in NIFI-10637, I dug into the class "ValidateLocale", where a locale is determined to be valid or not. After several printouts and checks of the log files, I found that the list of available locales is obtained as follows: ``` final Locale[] availableLocales = Locale.getAvailableLocales(); ``` However, the if condition in this class looks like this: ``` if (availableLocales[0].equals(testLocale)) { // Locale matches the "null" locale so it is treated as invalid return new ValidationResult.Builder().subject(subject).input(input).valid(false) .explanation(input + " is not a valid locale format.").build(); } ``` The purpose of this if statement is to determine whether testLocale is the empty ("null") locale. However, the order of availableLocales is not guaranteed, so its first element is not necessarily the empty locale. I therefore changed the code to this: ``` if ("".equals(testLocale.toString())) { // Locale matches the "null" locale so it is treated as invalid return new ValidationResult.Builder().subject(subject).input(input).valid(false) .explanation(input + " is not a valid locale format.").build(); } ``` With this change, the if statement works correctly and the bug is fixed. The test is no longer flaky and passes the NonDex check. # Tracking Please complete the following tracking steps prior to pull request creation. 
### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [X] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] asfgit closed pull request #6510: NIFI-10626 Update jruby-complete to 9.3.8.0
asfgit closed pull request #6510: NIFI-10626 Update jruby-complete to 9.3.8.0 URL: https://github.com/apache/nifi/pull/6510 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-10626) Update jruby-complete to 9.3.8.0
[ https://issues.apache.org/jira/browse/NIFI-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616734#comment-17616734 ] ASF subversion and git services commented on NIFI-10626: Commit 14a2249c0e3e1946bbfc9b810dd2b56b51a26997 in nifi's branch refs/heads/main from UcanInfosec [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=14a2249c0e ] NIFI-10626 Update jruby-complete to 9.3.8.0 This closes #6510 Signed-off-by: Mike Thomsen > Update jruby-complete to 9.3.8.0 > > > Key: NIFI-10626 > URL: https://issues.apache.org/jira/browse/NIFI-10626 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.17.0, 1.18.0 >Reporter: Mike R >Assignee: Mike R >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > Update jruby-complete to 9.3.8.0 from 9.3.4.0 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (NIFI-10637) TestParseCEF.testCustomValidator is a flaky test
Haoran Jiang created NIFI-10637: --- Summary: TestParseCEF.testCustomValidator is a flaky test Key: NIFI-10637 URL: https://issues.apache.org/jira/browse/NIFI-10637 Project: Apache NiFi Issue Type: Bug Components: NiFi Registry Environment: Apache Maven 3.6.0; openjdk version "1.8.0_342"; OpenJDK Runtime Environment (build 1.8.0_342-8u342-b07-0ubuntu1~20.04-b07); OpenJDK 64-Bit Server VM (build 25.342-b07, mixed mode); Reporter: Haoran Jiang Attachments: org.apache.nifi.processors.standard.TestParseCEF.testCustomValidator-log.txt {code:java} org.apache.nifi.processors.standard.TestParseCEF.testCustomValidator {code} This test is flaky: it passes a regular Maven test run but fails under [NonDex|https://github.com/TestingResearchIllinois/NonDex]. This is the log of the failing result. {code:java} Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.46 s <<< FAILURE! - in org.apache.nifi.processors.standard.TestParseCEF org.apache.nifi.processors.standard.TestParseCEF.testCustomValidator Time elapsed: 0.391 s <<< FAILURE! org.opentest4j.AssertionFailedError: Processor appears to be valid but expected it to be invalid ==> expected: but was: {code} *Steps to reproduce the failure:* 1. Install [NonDex|https://github.com/TestingResearchIllinois/NonDex] 2. Run the following commands in nifi: {code:java} mvn install -pl nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors -am -DskipTests mvn -pl nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors edu.illinois:nondex-maven-plugin:1.1.2:nondex -Dtest=org.apache.nifi.processors.standard.TestParseCEF#testCustomValidator {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (NIFI-10638) It was too slow for Disconnected Node reconnecting
Hadi created NIFI-10638: --- Summary: It was too slow for Disconnected Node reconnecting Key: NIFI-10638 URL: https://issues.apache.org/jira/browse/NIFI-10638 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.16.3 Environment: java8 Reporter: Hadi Attachments: image-2022-10-13-10-04-19-411.png, image-2022-10-13-10-04-38-523.png, image-2022-10-13-10-11-40-280.png It is a new feature that processor configuration can be changed while nodes are disconnected; in my cluster, flow.json and flow.xml coexist in NiFi (in the archive directory). However, I have run into trouble: many times after changing processor configuration, a node disconnects and reconnects, and sometimes it stays stuck in the connecting state. From the log and the code, it appears to stop processors one by one (with no timeout). If I restart the node, it takes about 45 minutes. This is the code: !image-2022-10-13-10-04-38-523.png! It not only stops processors but also adds them: !image-2022-10-13-10-04-19-411.png! Code: org.apache.nifi.controller.serialization.AffectedComponentSet#waitForConnectablesStopped !image-2022-10-13-10-11-40-280.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] Kerr0220 commented on a diff in pull request #6507: NIFI-10622 Fix Flaky Test
Kerr0220 commented on code in PR #6507: URL: https://github.com/apache/nifi/pull/6507#discussion_r994009564 ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-site-to-site/src/main/java/org/apache/nifi/remote/protocol/AbstractFlowFileServerProtocol.java: ## @@ -257,7 +259,13 @@ public int transferFlowFiles(final Peer peer, final ProcessContext context, fina session.read(flowFile, new InputStreamCallback() { @Override public void process(final InputStream in) throws IOException { -final DataPacket dataPacket = new StandardDataPacket(toSend.getAttributes(), in, toSend.getSize()); +LinkedHashMap attributes = new LinkedHashMap<>(); +String[] keySet = toSend.getAttributes().keySet().toArray(new String[0]); +Arrays.sort(keySet); +for(String key: keySet){ +attributes.put(key, toSend.getAttributes().get(key)); +} Review Comment: Thank you for your suggestion! Your concern is reasonable, and I had not thought about it. I will try fixing this by changing the initial storage of attributes from a HashMap to a LinkedHashMap. You are also right that, since this is a low-level change that could affect the entire module, we need more evaluation of whether to change it or leave it as it is. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-10636) Update Jython Standalone to 2.7.3
Mike R created NIFI-10636: - Summary: Update Jython Standalone to 2.7.3 Key: NIFI-10636 URL: https://issues.apache.org/jira/browse/NIFI-10636 Project: Apache NiFi Issue Type: Improvement Affects Versions: 1.18.0, 1.17.0 Reporter: Mike R Update Jython Standalone to 2.7.3 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-10624) Remove Sensitive Properties Key Warning from Component Documentation
[ https://issues.apache.org/jira/browse/NIFI-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-10624: Fix Version/s: 1.19.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Remove Sensitive Properties Key Warning from Component Documentation > > > Key: NIFI-10624 > URL: https://issues.apache.org/jira/browse/NIFI-10624 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: David Handermann >Assignee: David Handermann >Priority: Trivial > Fix For: 1.19.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > The generated component documentation includes the following warning for > classes that include sensitive properties: > {quote} > Before entering a value in a sensitive property, ensure that the > *nifi.properties* file has an entry for the property > {*}nifi.sensitive.props.key{*}. > {quote} > NiFi 1.14.0 and following require configuration of the > {{nifi.sensitive.props.key}}, so this warning should be removed. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] Kerr0220 commented on pull request #6508: NIFI-10623 fix flaky tests in TestHttpClient
Kerr0220 commented on PR #6508: URL: https://github.com/apache/nifi/pull/6508#issuecomment-1276890806 Sorry about that; let me explain. This PR is related to #6507, which would ensure that this one can pass all checks, since they both make the checksum calculation deterministic (one on the sending side, the other on the receiving side). Unfortunately, #6507 may not be approved as-is, and so this one may cause some tests to fail. I'm currently working on fixing #6507 further with other approaches; if #6507 is accepted, this one should pass the tests. I will mention you once #6507 is accepted. Thank you for your time again! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-10624) Remove Sensitive Properties Key Warning from Component Documentation
[ https://issues.apache.org/jira/browse/NIFI-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616743#comment-17616743 ] ASF subversion and git services commented on NIFI-10624: Commit 11314e813242464709df152f33a5e66b00d84cf9 in nifi's branch refs/heads/main from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=11314e8132 ] NIFI-10624 Removed sensitive properties key warning - Corrected logging statements with placeholders instead of concatenation - Removed unused NiFiServerStub Signed-off-by: Matthew Burgess This closes #6513 > Remove Sensitive Properties Key Warning from Component Documentation > > > Key: NIFI-10624 > URL: https://issues.apache.org/jira/browse/NIFI-10624 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: David Handermann >Assignee: David Handermann >Priority: Trivial > Time Spent: 0.5h > Remaining Estimate: 0h > > The generated component documentation includes the following warning for > classes that include sensitive properties: > {quote} > Before entering a value in a sensitive property, ensure that the > *nifi.properties* file has an entry for the property > {*}nifi.sensitive.props.key{*}. > {quote} > NiFi 1.14.0 and following require configuration of the > {{nifi.sensitive.props.key}}, so this warning should be removed. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] mattyb149 commented on pull request #6513: NIFI-10624 Remove sensitive properties key warning
mattyb149 commented on PR #6513: URL: https://github.com/apache/nifi/pull/6513#issuecomment-1276926817 +1 LGTM, spot-checked some docs and verified everything looks good. Thanks for the improvement! Merging to main -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] mattyb149 closed pull request #6513: NIFI-10624 Remove sensitive properties key warning
mattyb149 closed pull request #6513: NIFI-10624 Remove sensitive properties key warning URL: https://github.com/apache/nifi/pull/6513 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (NIFI-10626) Update jruby-complete to 9.3.8.0
[ https://issues.apache.org/jira/browse/NIFI-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Thomsen resolved NIFI-10626. - Fix Version/s: 1.19.0 Resolution: Fixed > Update jruby-complete to 9.3.8.0 > > > Key: NIFI-10626 > URL: https://issues.apache.org/jira/browse/NIFI-10626 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.17.0, 1.18.0 >Reporter: Mike R >Assignee: Mike R >Priority: Major > Fix For: 1.19.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Update jruby-complete to 9.3.8.0 from 9.3.4.0 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (NIFI-10632) AWS S3 CopyObject
Malthe Borch created NIFI-10632: --- Summary: AWS S3 CopyObject Key: NIFI-10632 URL: https://issues.apache.org/jira/browse/NIFI-10632 Project: Apache NiFi Issue Type: Improvement Reporter: Malthe Borch AWS S3 provides a [CopyObject|https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html] API call which: {quote}Creates a copy of an object that is already stored in Amazon S3.{quote} In NiFi, the PutS3Object processor could be extended to support this API such that one can specify a source for the copy operation, either a different AWS Credentials Provider Service or simply a different bucket and/or object name. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-8834) javax.persistence.spi::No valid providers found in log
[ https://issues.apache.org/jira/browse/NIFI-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wiktor Kubicki updated NIFI-8834: - Affects Version/s: 1.17.0 1.16.0 > javax.persistence.spi::No valid providers found in log > -- > > Key: NIFI-8834 > URL: https://issues.apache.org/jira/browse/NIFI-8834 > Project: Apache NiFi > Issue Type: Bug > Components: NiFi Registry >Affects Versions: 1.16.0, 1.17.0 > Environment: CentOS 7, standalone nifi 1.12.1, standalone nifi > registry >Reporter: Wiktor Kubicki >Priority: Minor > > I have just look into my log file as maintanance procedure and saw WARN: > {code:java} > 2020-12-13 09:09:26,058 WARN [NiFi Registry Web Server-6409] > javax.persistence.spi javax.persistence.spi::No valid providers found. > {code} > It apperas about once per day. I dont see any other problems with stability > of that solution. -- This message was sent by Atlassian Jira (v8.20.10#820010)