[jira] [Assigned] (NIFI-11449) Investigate Iceberg insert on Object Storage
[ https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Steinebrey reassigned NIFI-11449:

Assignee: (was: Jim Steinebrey)

> Investigate Iceberg insert on Object Storage
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Extensions
> Affects Versions: 1.21.0
> Environment: Any NiFi deployment
> Reporter: Abdelrahim Ahmad
> Priority: Blocker
> Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When using the processor with the Trino JDBC driver or Dremio JDBC driver to write to an Iceberg catalog, it disables the autocommit feature. This leads to errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}". The processor needs a property that allows autocommit to be enabled or disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by allowing atomic writes in the underlying database. This would let the processor be used with a wider range of databases.
> _Improving this processor will allow NiFi to be the main tool for ingesting data into these technologies, so we don't have to deal with another tool to do so._
> +*_{color:#de350b}BUT:{color}_*+
> I have reviewed the {{PutDatabaseRecord}} processor in NiFi. It inserts records one by one into the database using a prepared statement, and commits the transaction at the end of the loop that processes each record. This approach can be inefficient and slow when inserting large volumes of data into tables that are optimized for bulk ingestion, such as Delta Lake, Iceberg, and Hudi tables.
> These tables use various techniques to optimize the performance of bulk ingestion, such as partitioning, clustering, and indexing. Inserting records one by one with a prepared statement can bypass these optimizations, leading to poor performance and potentially causing issues such as excessive disk usage, increased memory consumption, and decreased query performance.
> To avoid these issues, it is recommended to add a new processor, or a feature to the current one, that performs bulk inserts with an autocommit option when writing large volumes of data to Delta Lake, Iceberg, and Hudi tables.
>
> P.S.: PutSQL does not have an autocommit option either, and it has the same performance problem described above.
> Thanks and best regards :)
> Abdelrahim Ahmad

-- This message was sent by Atlassian Jira (v8.20.10#820010)
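The contrast the reporter describes, between per-record prepared-statement commits and a single batched write in autocommit mode, can be sketched roughly as follows (a minimal Python illustration, with sqlite3 standing in for a JDBC connection to Trino or Dremio; the `events` table and record shape are made up):

```python
import sqlite3

# isolation_level=None puts the sqlite3 connection in autocommit mode, loosely
# analogous to the JDBC autocommit setting that Trino/Dremio Iceberg catalogs
# require for writes ("Catalog only supports writes using autocommit").
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")

records = [(i, f"payload-{i}") for i in range(1000)]

# One batched insert instead of one execute-and-commit per record, which is
# the slow pattern the issue describes for tables optimized for bulk ingestion.
conn.executemany("INSERT INTO events (id, payload) VALUES (?, ?)", records)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)
```

This is only a sketch of the access pattern, not of NiFi's Java internals; the point is that a bulk-ingestion table sees one atomic batched write rather than a thousand small transactions.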
[jira] [Updated] (NIFI-11449) Investigate Iceberg insert on Object Storage
[ https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-11449: Summary: Investigate Iceberg insert on Object Storage (was: add autocommit property to PutDatabaseRecord processor)
Re: [PR] NIFI-11858 Configurable Column Name Normalization in PutDatabaseRecord and UpdateDatabaseTable [nifi]
ravinarayansingh commented on PR #7544: URL: https://github.com/apache/nifi/pull/7544#issuecomment-2038164793

> @ravinarayansingh Thanks for your work and patience on this pull request.
>
> Unfortunately I may not have been clear in previous comments regarding the interface-based approach for the `ColumnNameNormalizer`. I did not intend for the `ColumnNameNormalizer` to be implemented by each Processor; instead, the `ColumnNameNormalizer` should have separate implementations for each strategy.
>
> If it would be helpful, I could follow up with a commit that implements the approach I described.

Hi @exceptionfactory, could you please provide more details or a code snippet? I'm having trouble understanding the issue and would appreciate your clarification on the current implementation. Thanks

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
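The per-strategy design being discussed can be sketched as follows (the NiFi code is Java; this is a hypothetical Python sketch of the same strategy pattern, with made-up strategy names): one `ColumnNameNormalizer` implementation per strategy, selected by the configured strategy value rather than implemented inside each Processor.

```python
from abc import ABC, abstractmethod


class ColumnNameNormalizer(ABC):
    """One implementation per normalization strategy, rather than
    having each Processor implement the interface itself."""

    @abstractmethod
    def normalize(self, column_name: str) -> str:
        ...


class RemoveUnderscoreNormalizer(ColumnNameNormalizer):
    def normalize(self, column_name: str) -> str:
        return column_name.replace("_", "")


class UppercaseNormalizer(ColumnNameNormalizer):
    def normalize(self, column_name: str) -> str:
        return column_name.upper()


# A processor would look up the implementation for its configured strategy,
# so the strategy logic lives in the implementations, not the processors.
NORMALIZERS = {
    "REMOVE_UNDERSCORE": RemoveUnderscoreNormalizer(),
    "UPPERCASE": UppercaseNormalizer(),
}

normalizer = NORMALIZERS["REMOVE_UNDERSCORE"]
print(normalizer.normalize("user_id"))
```

Adding a new strategy then means adding one new implementation class and one registry entry, with no change to the processors that use it.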
[jira] [Commented] (NIFI-13002) Restore compression level mistakenly removed in NIFI-12996
[ https://issues.apache.org/jira/browse/NIFI-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834065#comment-17834065 ] ASF subversion and git services commented on NIFI-13002:

Commit bf45bebbc00b745c5d7ef81e95b6672a5dfd6900 in nifi's branch refs/heads/support/nifi-1.x from Joe Witt [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=bf45bebbc0 ]

NIFI-13002 Restored zstd Compression Level in CompressContent

This closes #8604

Signed-off-by: David Handermann
(cherry picked from commit 7f5680d1fed374666ee92fd9e43c9d660507c467)

> Restore compression level mistakenly removed in NIFI-12996
> Key: NIFI-13002
> URL: https://issues.apache.org/jira/browse/NIFI-13002
> Project: Apache NiFi
> Issue Type: Task
> Reporter: Joe Witt
> Assignee: Joe Witt
> Priority: Major
> Fix For: 2.0.0-M3, 1.26.0
> Time Spent: 20m
> Remaining Estimate: 0h
>
> I removed the compression level as I misread the underlying code. It needs to remain so this JIRA puts it back in.
[jira] [Resolved] (NIFI-13002) Restore compression level mistakenly removed in NIFI-12996
[ https://issues.apache.org/jira/browse/NIFI-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann resolved NIFI-13002. - Resolution: Fixed
[jira] [Updated] (NIFI-13002) Restore zstd compression level for CompressContent mistakenly removed in NIFI-12996
[ https://issues.apache.org/jira/browse/NIFI-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-13002: Summary: Restore zstd compression level for CompressContent mistakenly removed in NIFI-12996 (was: Restore compression level mistakenly removed in NIFI-12996)
Re: [PR] NIFI-13002 restoring compression level that should not have been removed [nifi]
exceptionfactory closed pull request #8604: NIFI-13002 restoring compression level that should not have been removed URL: https://github.com/apache/nifi/pull/8604
[jira] [Commented] (NIFI-13002) Restore compression level mistakenly removed in NIFI-12996
[ https://issues.apache.org/jira/browse/NIFI-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834064#comment-17834064 ] ASF subversion and git services commented on NIFI-13002:

Commit 7f5680d1fed374666ee92fd9e43c9d660507c467 in nifi's branch refs/heads/main from Joe Witt [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7f5680d1fe ]

NIFI-13002 Restored zstd Compression Level in CompressContent

This closes #8604

Signed-off-by: David Handermann
Re: [PR] NIFI-12889: Retry Kerberos login on auth failure in HDFS processors [nifi]
Lehel44 commented on code in PR #8495: URL: https://github.com/apache/nifi/pull/8495#discussion_r1552005815

## nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/DeleteHDFS.java:

@@ -177,16 +177,20 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 flowFile = session.putAttribute(flowFile, HADOOP_FILE_URL_ATTRIBUTE, qualifiedPath.toString());
 session.getProvenanceReporter().invokeRemoteProcess(flowFile, qualifiedPath.toString());
 } catch (IOException ioe) {
-// One possible scenario is that the IOException is permissions based, however it would be impractical to check every possible
-// external HDFS authorization tool (Ranger, Sentry, etc). Local ACLs could be checked but the operation would be expensive.
-getLogger().warn("Failed to delete file or directory", ioe);
-
-Map attributes = Maps.newHashMapWithExpectedSize(1);
-// The error message is helpful in understanding at a flowfile level what caused the IOException (which ACL is denying the operation, e.g.)
-attributes.put(getAttributePrefix() + ".error.message", ioe.getMessage());
-
-session.transfer(session.putAllAttributes(session.clone(flowFile), attributes), getFailureRelationship());
-failedPath++;
+if (handleAuthErrors(ioe, session, context)) {
+return null;
+} else {
+// One possible scenario is that the IOException is permissions based, however it would be impractical to check every possible
+// external HDFS authorization tool (Ranger, Sentry, etc). Local ACLs could be checked but the operation would be expensive.
+getLogger().warn("Failed to delete file or directory", ioe);
+
+Map attributes = Maps.newHashMapWithExpectedSize(1);
+// The error message is helpful in understanding at a flowfile level what caused the IOException (which ACL is denying the operation, e.g.)
+attributes.put(getAttributePrefix() + ".error.message", ioe.getMessage());
+
+session.transfer(session.putAllAttributes(session.clone(flowFile), attributes), getFailureRelationship());
+failedPath++;
+}
 }
 }
 }

Review Comment: Error handling of GSSException should be added to the line 213 outer catch as well, because fileSystem.exists is called outside of the inner catch block.

## nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/MoveHDFS.java:

@@ -294,7 +302,7 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 if (logEmptyListing.getAndDecrement() > 0) {
 getLogger().info(
 "Obtained file listing in {} milliseconds; listing had {} items, {} of which were new",
-new Object[]{millis, listedFiles.size(), newItems});
+millis, listedFiles.size(), newItems);

Review Comment: Could not comment on line 450: in processBatchOfFiles, line 450, the GSSException should be handled as well.

## nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/MoveHDFS.java:

@@ -254,8 +254,16 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 if (!directoryExists) {
 throw new IOException("Input Directory or File does not exist in HDFS");
 }
+} catch (final IOException e) {

Review Comment: The IOException does not need to be handled differently; it can be handled in the Exception catch branch:

```java
catch (Exception e) {
    if (!handleAuthErrors(e, session, context)) {
        getLogger().error("Failed to retrieve content from {} for {} due to {}; routing to failure", filenameValue, flowFile, e);
        flowFile = session.putAttribute(flowFile, "hdfs.failure.reason", e.getMessage());
        flowFile = session.penalize(flowFile);
        session.transfer(flowFile, REL_FAILURE);
    }
    return;
}
```

## nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java:

@@
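The pattern behind a helper like `handleAuthErrors`, detecting a Kerberos failure wrapped somewhere inside an `IOException`, amounts to walking the exception cause chain. A hypothetical Python sketch of that walk (`GSSError` stands in for `org.ietf.jgss.GSSException`; the real NiFi helper is Java and also triggers a re-login):

```python
class GSSError(Exception):
    """Stand-in for org.ietf.jgss.GSSException."""


def is_auth_error(exc: BaseException) -> bool:
    """Return True if an auth error appears anywhere in the cause chain.

    A GSSException usually arrives wrapped in an IOException, so every
    layer of the chain must be inspected, not just the outermost exception.
    """
    seen = set()
    current = exc
    while current is not None and id(current) not in seen:
        if isinstance(current, GSSError):
            return True
        seen.add(id(current))
        # Follow both explicit ("raise ... from ...") and implicit chaining.
        current = current.__cause__ or current.__context__
    return False


wrapped = IOError("delete failed")
wrapped.__cause__ = GSSError("No valid credentials provided")
print(is_auth_error(wrapped), is_auth_error(IOError("permission denied")))
```

The review comments above are about making sure this check runs in every catch block that can see a filesystem call, not only the innermost one.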
[PR] NIFI-13002 restoring compression level that should not have been removed [nifi]
joewitt opened a new pull request, #8604: URL: https://github.com/apache/nifi/pull/8604
[jira] [Created] (NIFI-13002) Restore compression level mistakenly removed in NIFI-12996
Joe Witt created NIFI-13002:

Summary: Restore compression level mistakenly removed in NIFI-12996
Key: NIFI-13002
URL: https://issues.apache.org/jira/browse/NIFI-13002
Project: Apache NiFi
Issue Type: Task
Reporter: Joe Witt
Assignee: Joe Witt
Fix For: 2.0.0-M3, 1.26.0

I removed the compression level as I misread the underlying code. It needs to remain so this JIRA puts it back in.
Re: [PR] NIFI-12999: Revert "Bump webpack-dev-middleware and karma-webpack (#8547)" [nifi]
scottyaslan commented on PR #8603: URL: https://github.com/apache/nifi/pull/8603#issuecomment-2037825890 @sardell Running `mvn clean install -DskipTests -PjsUnitTests` from /nifi/nifi-registry also builds successfully for me. Can you please provide more details on your environment and how you are running this to see this failure?
[jira] [Commented] (NIFI-12969) Under heavy load, nifi node unable to rejoin cluster, graph modified with temp funnel
[ https://issues.apache.org/jira/browse/NIFI-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834034#comment-17834034 ] Nissim Shiman commented on NIFI-12969:

Thank you very much [~markap14] [~joewitt] [~pgyori] for looking at this and the resolution. I did some testing and verified the issue no longer occurs.

> Under heavy load, nifi node unable to rejoin cluster, graph modified with temp funnel
> Key: NIFI-12969
> URL: https://issues.apache.org/jira/browse/NIFI-12969
> Project: Apache NiFi
> Issue Type: Bug
> Affects Versions: 1.24.0, 2.0.0-M2
> Reporter: Nissim Shiman
> Assignee: Mark Payne
> Priority: Critical
> Fix For: 2.0.0-M3, 1.26.0
> Attachments: nifi-app.log, simple_flow.png, simple_flow_with_temp-funnel.png
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Under heavy load, if a node leaves the cluster (due to heartbeat timeout), it is often unable to rejoin the cluster. The node's graph will have been modified with a temp-funnel as well. Appears to be some sort of [timing issue|https://github.com/apache/nifi/blob/rel/nifi-2.0.0-M2/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/connectable/StandardConnection.java#L298]
>
> # To reproduce, on a NiFi cluster of three nodes, set up:
> 2 GenerateFlowFile processors -> PG
> Inside PG: inputPort -> UpdateAttribute
> # Keep all defaults except for the following:
> For UpdateAttribute, terminate the success relationship.
> One of the GenerateFlowFile processors can be disabled; the other should have Run Schedule set to 0 min (this will allow for the heavy load).
> # In nifi.properties (on all 3 nodes), to allow nodes to fall out of the cluster, set:
> nifi.cluster.protocol.heartbeat.interval=2 sec (default is 5)
> nifi.cluster.protocol.heartbeat.missable.max=1 (default is 8)
>
> Restart nifi. Start flow. The nodes will quickly fall out and rejoin the cluster. After a few minutes one will likely not be able to rejoin. The graph for that node will have the disabled GenerateFlowFile now pointing to a funnel (a temp-funnel) instead of the PG.
> Stack trace in that node's nifi-app.log will look like this (this is from 2.0.0-M2):
> {code:java}
> 2024-03-28 13:55:19,395 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Node disconnected due to Failed to properly handle Reconnection request due to org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed to connect node to cluster because local flow controller partially updated. Administrator should disconnect node and review flow for corruption.
> 2024-03-28 13:55:19,395 ERROR [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Handling reconnection request failed due to: org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed to connect node to cluster because local flow controller partially updated. Administrator should disconnect node and review flow for corruption.
> org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed to connect node to cluster because local flow controller partially updated. Administrator should disconnect node and review flow for corruption.
> at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:985)
> at org.apache.nifi.controller.StandardFlowService.handleReconnectionRequest(StandardFlowService.java:655)
> at org.apache.nifi.controller.StandardFlowService$1.run(StandardFlowService.java:384)
> at java.base/java.lang.Thread.run(Thread.java:1583)
> Caused by: org.apache.nifi.controller.serialization.FlowSynchronizationException: java.lang.IllegalStateException: Cannot change destination of Connection because FlowFiles from this Connection are currently held by LocalPort[id=99213c00-78ca-4848-112f-5454cc20656b, type=INPUT_PORT, name=inputPort, group=innerPG]
> at org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.synchronizeFlow(VersionedFlowSynchronizer.java:472)
> at org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:223)
> at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1740)
> at org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:91)
> at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:805)
> at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:954)
> ... 3
Re: [PR] NIFI-12999: Revert "Bump webpack-dev-middleware and karma-webpack (#8547)" [nifi]
scottyaslan commented on PR #8603: URL: https://github.com/apache/nifi/pull/8603#issuecomment-2037744152

> Node 18, so it is unclear how this works in automated builds. It also works on recent local builds, so is the issue limited to running Registry unit tests? If it is possible to rescope the changes, as opposed to just reverting, that would be ideal, but I understand it may be more complicated.

@sardell Is it possible to update the version of node that this requires? Possibly in the pom.xml? Or maybe even update the registry's package.json such that we don't fail because of it? I know we don't want to support npm versions less than the minimum, but I think newer versions of npm are fine, right?
Re: [PR] NIFI-12918 Fix Stateless NullPointerException on versioned sub-process groups - main branch [nifi]
slambrose commented on PR #8536: URL: https://github.com/apache/nifi/pull/8536#issuecomment-2037713285 @exceptionfactory Okay, this should be good now.
[jira] [Updated] (NIFI-12993) PutDatabaseRecord: add auto commit property and fully implement Batch Size for sql statement type
[ https://issues.apache.org/jira/browse/NIFI-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-12993:

Affects Version/s: (was: 1.25.0) (was: 2.0.0-M2)
Status: Patch Available (was: Open)

> PutDatabaseRecord: add auto commit property and fully implement Batch Size for sql statement type
> Key: NIFI-12993
> URL: https://issues.apache.org/jira/browse/NIFI-12993
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Jim Steinebrey
> Assignee: Jim Steinebrey
> Priority: Major
> Time Spent: 40m
> Remaining Estimate: 0h
>
> Add an auto_commit property to PutDatabaseRecord.
> A Batch Size property exists in PutDatabaseRecord and is implemented for some statement types, but batch size is ignored for the SQL statement type. Implement batch size processing for SQL statement types so that all statement types in PutDatabaseRecord support it equally.
> PutSQL and other SQL processors have auto commit and batch size properties, so it will be beneficial for PutDatabaseRecord to implement them fully for consistency.
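The batch-size behavior requested here amounts to flushing inserts in fixed-size chunks rather than all at once or one at a time. A rough sketch of that chunking (Python/sqlite3 standing in for a JDBC batch; the table name, function name, and batch size are made up for illustration):

```python
import sqlite3


def insert_in_batches(conn, sql, records, batch_size):
    """Execute the records in fixed-size chunks, committing per chunk.

    This mirrors the idea of a Batch Size property: bounded memory per
    batch and a commit boundary after each one. Returns the batch count.
    """
    batches = 0
    for start in range(0, len(records), batch_size):
        conn.executemany(sql, records[start:start + batch_size])
        conn.commit()
        batches += 1
    return batches


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
records = [(i,) for i in range(25)]
batches = insert_in_batches(conn, "INSERT INTO t (id) VALUES (?)", records, batch_size=10)
print(batches)
```

With 25 records and a batch size of 10, the helper issues three batches; the ticket's goal is for the SQL statement type to honor this chunking the same way the other statement types already do.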
Re: [PR] NIFI-12999: Revert "Bump webpack-dev-middleware and karma-webpack (#8547)" [nifi]
sardell commented on PR #8603: URL: https://github.com/apache/nifi/pull/8603#issuecomment-2037662958 @exceptionfactory I've confirmed with @koccs that he's able to reproduce the issue building the latest from `main` locally.
Re: [PR] NIFI-12993 Add auto commit feature and add batch processing [nifi]
mattyb149 commented on PR #8597: URL: https://github.com/apache/nifi/pull/8597#issuecomment-2037655887 I recommend putting all the unit tests in one file and not using abbreviations for the test name.
Re: [PR] NIFI-12993 Add auto commit feature and add batch processing [nifi]
mattyb149 commented on PR #8597: URL: https://github.com/apache/nifi/pull/8597#issuecomment-2037649089 Reviewing...
Re: [PR] NIFI-12923 Added append avro mode to PutHDFS [nifi]
mattyb149 commented on PR #8544: URL: https://github.com/apache/nifi/pull/8544#issuecomment-2037620687 In order for the append to work I had to set `Writing Strategy` to `Simple write`; if I leave the default `Write and rename`, it actually deletes the file. Is this intended? If not, we should add another dependent property so that if AVRO is chosen, the Writing Strategy must be Simple write.
Re: [PR] NIFI-12918 Fix Stateless NullPointerException on versioned sub-process groups - 1.x support branch [nifi]
slambrose commented on PR #8572: URL: https://github.com/apache/nifi/pull/8572#issuecomment-2037604719 PR has been closed until the 2.0 fix is completed and merged (PR 8536)
Re: [PR] NIFI-12918 Fix Stateless NullPointerException on versioned sub-process groups - 1.x support branch [nifi]
slambrose closed pull request #8572: NIFI-12918 Fix Stateless NullPointerException on versioned sub-process groups - 1.x support branch URL: https://github.com/apache/nifi/pull/8572
Re: [PR] NIFI-12918 Fix Stateless NullPointerException on versioned sub-process groups - main branch [nifi]
slambrose commented on code in PR #8536: URL: https://github.com/apache/nifi/pull/8536#discussion_r1551952629 ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/flow/synchronization/StandardVersionedComponentSynchronizer.java: ## @@ -509,6 +509,8 @@ private String determineRegistryId(final VersionedFlowCoordinates coordinates) { } else { return explicitRegistryId; } +} else { +explicitRegistryId = "1"; Review Comment: Okay, so a better way to fix this going with your guidance on that storageLocation check is to add a check for the in memory registry client (which is used 100% of the time for stateless). Code is ready for re-review.
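The shape of the fix being discussed — preferring a registry client that claims the storage location, and otherwise falling back to the stateless in-memory client instead of hard-coding an id of `"1"` — can be sketched in plain Java. This is an illustration only, not the actual `StandardVersionedComponentSynchronizer`; the `isInMemory()` marker method is a hypothetical stand-in for detecting the `InMemoryFlowRegistry` client:

```java
import java.util.List;
import java.util.Optional;

public class RegistryIdFallback {

    // Minimal stand-in for a NiFi flow registry client node (illustrative only).
    interface RegistryClient {
        String getIdentifier();
        boolean isStorageLocationApplicable(String location);
        boolean isInMemory(); // hypothetical marker for the stateless in-memory registry
    }

    static Optional<String> determineRegistryId(String location, List<RegistryClient> clients) {
        // Prefer a client that claims the storage location...
        for (RegistryClient client : clients) {
            if (client.isStorageLocationApplicable(location)) {
                return Optional.of(client.getIdentifier());
            }
        }
        // ...otherwise fall back to the in-memory client used by stateless flows,
        // which always reports storage locations as not applicable.
        return clients.stream()
                .filter(RegistryClient::isInMemory)
                .map(RegistryClient::getIdentifier)
                .findFirst();
    }

    public static void main(String[] args) {
        RegistryClient inMemory = new RegistryClient() {
            public String getIdentifier() { return "in-memory"; }
            public boolean isStorageLocationApplicable(String location) { return false; }
            public boolean isInMemory() { return true; }
        };
        System.out.println(determineRegistryId("https://registry.example/flows/1", List.of(inMemory)));
    }
}
```

The advantage over the hard-coded `"1"` is that the fallback is tied to the kind of client rather than to an id that happens to be assigned in stateless deployments.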
[jira] [Updated] (NIFI-12997) Correct AWS STS Version 2 Location
[ https://issues.apache.org/jira/browse/NIFI-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-12997: --- Fix Version/s: 2.0.0-M3 Resolution: Fixed Status: Resolved (was: Patch Available) > Correct AWS STS Version 2 Location > -- > > Key: NIFI-12997 > URL: https://issues.apache.org/jira/browse/NIFI-12997 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > Fix For: 2.0.0-M3 > > Time Spent: 20m > Remaining Estimate: 0h > > The AWS STS library for Version 2 is included in {{nifi-aws-processors}}, but > not in {{nifi-aws-service-api}}. The STS library needs to be in the same > ClassLoader as the AWS Auth library for runtime credential loading. Version 1 > of the STS library is already included in {{nifi-aws-service-api}} and > corresponding NAR, which allows S3 Processors to work with STS. Other AWS > Processors using SDK Version 2 need the STS library moved in order to work > with this credentials strategy. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-12997) Correct AWS STS Version 2 Location
[ https://issues.apache.org/jira/browse/NIFI-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833980#comment-17833980 ] ASF subversion and git services commented on NIFI-12997: Commit 7c8b4b8688242f4008b55350bf0c45fede3e1c05 in nifi's branch refs/heads/main from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7c8b4b8688 ] NIFI-12997 Corrected AWS STS Version 2 location (#8599) - Moved STS Version 2 library from nifi-aws-processors to nifi-aws-service-api - Set nifi-aws-service-api as provided at the bundle level - Set STS and Auth dependencies as provided in nifi-aws-nar
Re: [PR] NIFI-12997 Correct AWS STS Version 2 location [nifi]
bbende merged PR #8599: URL: https://github.com/apache/nifi/pull/8599
[jira] [Updated] (NIFI-12963) Process Group Versioning
[ https://issues.apache.org/jira/browse/NIFI-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rob Fellows updated NIFI-12963: --- Fix Version/s: 2.0.0-M3 Resolution: Fixed Status: Resolved (was: Patch Available) > Process Group Versioning > > > Key: NIFI-12963 > URL: https://issues.apache.org/jira/browse/NIFI-12963 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core UI >Reporter: Rob Fellows >Assignee: Rob Fellows >Priority: Major > Fix For: 2.0.0-M3 > > Time Spent: 0.5h > Remaining Estimate: 0h > > * Start > * Commit > * Force Commit > * -Show changes- (NIFI-12995) > * -Revert changes- (NIFI-12995) > * -Change Flow version- (NIFI-12995) > * Stop
[jira] [Commented] (NIFI-12963) Process Group Versioning
[ https://issues.apache.org/jira/browse/NIFI-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833951#comment-17833951 ] ASF subversion and git services commented on NIFI-12963: Commit 307c4017d95350c91f9fcd9858bb7335b6add104 in nifi's branch refs/heads/main from Rob Fellows [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=307c4017d9 ] [NIFI-12963] Process Group Versioning (#8596) * [NIFI-12963] - Flow Versioning * Start version control * Stop version control * Commit local changes * Force commit local changes This closes #8596
Re: [PR] [NIFI-12963] Process Group Versioning [nifi]
scottyaslan merged PR #8596: URL: https://github.com/apache/nifi/pull/8596
[jira] [Updated] (NIFI-12400) Remaining items to migrate UI to currently supported/active framework
[ https://issues.apache.org/jira/browse/NIFI-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-12400: --- Description: The purpose of this Jira is to track all remaining items following the initial commit [1] for NIFI-11481. The description will be kept up to date with remaining features, tasks, and improvements. As each item is worked, a new sub-task Jira will be created and referenced in this description.
* Support Parameters in Properties with Allowable Values (NIFI-12401)
* Summary (NIFI-12437)
** Remaining work not addressed in initial Jira:
*** input ports (NIFI-12504)
*** output ports (NIFI-12504)
*** remote process groups (NIFI-12504)
*** process groups (NIFI-12504)
*** connections (NIFI-12504)
*** System Diagnostics (NIFI-12505)
*** support for cluster-specific ui elements (NIFI-12537)
*** Add pagination (NIFI-12552)
* Counters (NIFI-12415)
** Counter table has extra unnecessary can modify check (NIFI-12948)
* Bulletin Board (NIFI-12560)
* Provenance (NIFI-12445)
** Event Listing (NIFI-12445)
** Search (NIFI-12445)
** Event Dialog (NIFI-12445)
** Lineage (NIFI-12485)
** Replay from context menu (NIFI-12445)
** Clustering (NIFI-12807)
* Configure Reporting Task (NIFI-12563)
* Flow Analysis Rules (NIFI-12588)
* Registry Clients (NIFI-12486)
* Import from Registry (NIFI-12734)
* Parameter Providers (NIFI-12622)
** Fetch parameters from provider, map to parameter context (dialog) - (NIFI-12665)
* Cluster
** Status History - node specific values (NIFI-12848)
* Flow Configuration History (NIFI-12754)
** ActionEntity.action should be optional (NIFI-12948)
* Node Status History (NIFI-12553)
* Status history for components from canvas context menu (NIFI-12553)
* Users (NIFI-12543)
** Don't show users or groups in create/edit dialog if there are none (NIFI-12948)
* Policies (NIFI-12548)
** Overridden policy Empty or Copy (NIFI-12679)
** Select Empty by default (NIFI-12948)
* Help (NIFI-12795)
* About
* Show Upstream/Downstream
* Align
* List Queue (NIFI-12589)
** Clustering (NIFI-12807)
* Empty [all] Queue (NIFI-12604)
* View Content (NIFI-12589 and NIFI-12445)
* View State (NIFI-12611)
** Clustering
* Change Component Version
* Consider PG permissions in Toolbox (NIFI-12683)
* Handle linking to components that are not on the canvas
* PG Version (NIFI-12963 & NIFI-12995)
** Start (NIFI-12963)
** Commit (NIFI-12963)
** Force Commit (NIFI-12963)
** Show changes (NIFI-12995)
** Revert changes (NIFI-12995)
** Change Flow version (NIFI-12995)
** Stop (NIFI-12963)
* Configure PG (NIFI-12417)
* Process Group Services (NIFI-12425)
** Listing (NIFI-12425)
** Create (NIFI-12425)
** Configure (NIFI-12425)
** Delete (NIFI-12425)
** Enable (NIFI-12529)
** Disable (NIFI-12529)
** Improve layout and breadcrumbs
** Disable and Configure
* Configure Processor
** Service Link (NIFI-12425)
** Create inline Service (NIFI-12425)
** Parameter Link (NIFI-12502)
** Convert to Parameter (NIFI-12502)
** Fix issue with Property Editor width (NIFI-12547)
** Stop and Configure
** Open Custom UI (NIFI-12958)
** Property History
** Unable to re-add any removed Property (NIFI-12743)
** Shift-Enter new line when editing Property (NIFI-12743)
* Property Verification
* More Details (Processor, Controller Service, Reporting Task)
* Download Flow
* Create RPG (NIFI-12758)
* Configure RPG (NIFI-12774)
* RPG Remote Ports (NIFI-12778)
* RPG Go To (NIFI-12759)
* RPG Refresh (NIFI-12761)
* Color
* Move to Front
* Copy/Paste
* Add/Update Info Icons in dialogs throughout the application
* Set viewport earlier when loading a Process Group (NIFI-12737)
* Canvas global menu item should navigate user back to where they were on the canvas (NIFI-12737)
* Better theme support (NIFI-12655)
* Set up development/production environments files
* Run unit tests as part of standard build (NIFI-12941)
* Update all API calls to consider disconnect node confirmation (NIFI-13001)
* Update API calls to use uiOnly flag (NIFI-12950)
* Use polling interval from API
* Load FlowConfiguration in guard (NIFI-12948)
* Routing error handling
* General API response error handling
** Management CS (NIFI-12663)
** Canvas CS (NIFI-12684)
** Remainder of Settings (NIFI-12723)
** Counters (NIFI-12723)
** Bulletins (NIFI-12723)
** Flow Designer
** Parameter Contexts (NIFI-12937)
** Parameter
** Provenance (NIFI-12767)
** Queue Listing (NIFI-12742)
** Summary (NIFI-12742)
** Users (NIFI-12742)
** Policies
** Status History
* Introduce header in new pages to unify with canvas and offer better navigation. (NIFI-12597)
* Theme docs, view flow file, and custom ui's
* Prompt user to save Parameter Context when Edit form is dirty
* Upgrade to Angular 17 (NIFI-12790)
* Start/Stop processors, process groups, ... (NIFI-12568)
* Dialog vertical resizing on smaller screens
[jira] [Created] (NIFI-13001) Disconnected Node Acknowledgement
Matt Gilman created NIFI-13001: -- Summary: Disconnected Node Acknowledgement Key: NIFI-13001 URL: https://issues.apache.org/jira/browse/NIFI-13001 Project: Apache NiFi Issue Type: Sub-task Components: Core UI Reporter: Matt Gilman Assignee: Matt Gilman When the node serving the UI for a client gets disconnected from the cluster, the user must acknowledge the disconnection before being able to submit flow changes. Once they acknowledge it, this acknowledgement must be sent in all mutable requests to the API.
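The client-side shape described in the issue can be sketched as follows: once the user has confirmed they understand the node is disconnected, each mutable request carries an acknowledgement flag. The parameter name `disconnectedNodeAcknowledged` follows the existing NiFi REST convention, but treat this as an assumption-laden illustration rather than the final implementation:

```java
import java.net.URI;

public class AckRequest {

    // Append the acknowledgement flag to a mutable request URI when the user has
    // confirmed the disconnected-node warning; leave the URI untouched otherwise.
    static URI withAcknowledgement(URI base, boolean acknowledged) {
        if (!acknowledged) {
            return base;
        }
        final String separator = (base.getQuery() == null) ? "?" : "&";
        return URI.create(base + separator + "disconnectedNodeAcknowledged=true");
    }

    public static void main(String[] args) {
        URI request = URI.create("https://nifi.example/nifi-api/processors/abc123");
        System.out.println(withAcknowledgement(request, true));
    }
}
```

In the actual UI this would sit in a shared request layer, so individual feature code never has to remember to attach the flag.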
Re: [PR] NIFI-12918 Fix Stateless NullPointerException on versioned sub-process groups - main branch [nifi]
slambrose commented on code in PR #8536: URL: https://github.com/apache/nifi/pull/8536#discussion_r1551746576 ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/flow/synchronization/StandardVersionedComponentSynchronizer.java: ## @@ -509,6 +509,8 @@ private String determineRegistryId(final VersionedFlowCoordinates coordinates) { } else { return explicitRegistryId; } +} else { +explicitRegistryId = "1"; Review Comment: @exceptionfactory - Your most recent comment on my PR led me down an interesting path. I see what you mean about looking into the subsequent call using the storageLocation. What's interesting about that is that Stateless flows only ever use InMemoryFlowRegistry.class, and therefore that call:

```java
try {
    locationApplicable = flowRegistryClientNode.isStorageLocationApplicable(location);
} catch (final Exception e) {
    LOG.error("Unable to determine if {} is an applicable Flow Registry Client for storage location {}", flowRegistryClientNode, location, e);
    continue;
}
if (locationApplicable) {
    LOG.debug("Found Flow Registry Client {} that is applicable for storage location {}", flowRegistryClientNode, location);
    return flowRegistryClientNode.getIdentifier();
}
```

So for stateless, locationApplicable is always false in InMemoryFlowRegistry:

```java
@Override
public boolean isStorageLocationApplicable(final FlowRegistryClientConfigurationContext context, final String location) {
    return false;
}
```

So basically the only way to fix stateless is to either keep the "1" registryId fix, or make this isStorageLocationApplicable method return true for the in-memory registry, although I'm sure that will probably break other things. Do you have any opinion on that?
[jira] [Updated] (NIFI-13000) Prevent text selection
[ https://issues.apache.org/jira/browse/NIFI-13000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-13000: --- Description: The UI should prevent text selection where appropriate. One notable place is on the entire canvas and the extension creation component but we should consider other areas of the UI as well. (was: The UI should prevent text selection where appropriate. One notable place is on the entire canvas but we should consider other areas of the UI as well.) > Prevent text selection > -- > > Key: NIFI-13000 > URL: https://issues.apache.org/jira/browse/NIFI-13000 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core UI >Reporter: Matt Gilman >Priority: Major > > The UI should prevent text selection where appropriate. One notable place is > on the entire canvas and the extension creation component but we should > consider other areas of the UI as well.
[jira] [Created] (MINIFICPP-2324) Add an option to the Windows installer whether to start the service after installation
Ferenc Gerlits created MINIFICPP-2324: - Summary: Add an option to the Windows installer whether to start the service after installation Key: MINIFICPP-2324 URL: https://issues.apache.org/jira/browse/MINIFICPP-2324 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Ferenc Gerlits Assignee: Ferenc Gerlits
[jira] [Created] (MINIFICPP-2323) ListenTCP custom delimiter
Martin Zink created MINIFICPP-2323: -- Summary: ListenTCP custom delimiter Key: MINIFICPP-2323 URL: https://issues.apache.org/jira/browse/MINIFICPP-2323 Project: Apache NiFi MiNiFi C++ Issue Type: New Feature Reporter: Martin Zink Assignee: Martin Zink
[jira] [Created] (NIFI-13000) Prevent text selection
Matt Gilman created NIFI-13000: -- Summary: Prevent text selection Key: NIFI-13000 URL: https://issues.apache.org/jira/browse/NIFI-13000 Project: Apache NiFi Issue Type: Sub-task Components: Core UI Reporter: Matt Gilman The UI should prevent text selection where appropriate. One notable place is on the entire canvas but we should consider other areas of the UI as well.
Re: [PR] NIFI-12959: Support loading python processors from NARs [nifi]
markap14 commented on code in PR #8573: URL: https://github.com/apache/nifi/pull/8573#discussion_r1551643116 ## nifi-nar-bundles/nifi-py4j-bundle/nifi-py4j-bridge/src/main/java/org/apache/nifi/py4j/PythonProcess.java: ## @@ -234,8 +240,18 @@ private Process launchPythonProcess(final int listeningPort, final String authTo final List commands = new ArrayList<>(); commands.add(pythonCommand); +if (!isUseVirtualEnv()) { +// If not using venv, we will not launch a separate virtual environment, so we need to use the -S +// flag in order to prevent the Python process from using the installation's site-packages. This provides +// proper dependency isolation to the Python process. +commands.add("-S"); +} String pythonPath = pythonApiDirectory.getAbsolutePath(); +final String absolutePath = virtualEnvHome.getAbsolutePath(); +final File dependenciesDir = new File(new File(absolutePath), "NAR-INF/bundled-dependencies"); Review Comment: Good catch.
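The launch logic quoted in the diff above can be isolated as a small, self-contained sketch (plain Java, not the actual `PythonProcess` class): when no virtual environment is in use, Python's `-S` flag is added so the interpreter skips the installation's `site-packages` on startup.

```java
import java.util.ArrayList;
import java.util.List;

public class PythonCommand {

    // Build the interpreter invocation. Without a venv there is no isolated
    // environment, so -S suppresses site-packages to keep the Python process's
    // dependencies isolated from whatever is installed system-wide.
    static List<String> buildCommand(String pythonCommand, boolean useVirtualEnv) {
        final List<String> commands = new ArrayList<>();
        commands.add(pythonCommand);
        if (!useVirtualEnv) {
            commands.add("-S");
        }
        return commands;
    }

    public static void main(String[] args) {
        System.out.println(buildCommand("python3", false)); // [python3, -S]
        System.out.println(buildCommand("python3", true));  // [python3]
    }
}
```

`-S` is a standard CPython option ("don't imply 'import site' on initialization"), which is what makes this a lightweight isolation mechanism when no venv is launched.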
Re: [PR] [NIFI-12963] Process Group Versioning [nifi]
rfellows commented on code in PR #8596: URL: https://github.com/apache/nifi/pull/8596#discussion_r1551618697 ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-frontend/src/main/nifi/src/app/pages/flow-designer/service/canvas-context-menu.service.ts: ## @@ -139,14 +175,18 @@ export class CanvasContextMenu implements ContextMenuDefinitionProvider { isSeparator: true }, { -condition: (selection: any) => { -// TODO - supportsStopFlowVersioning -return false; +condition: (selection: d3.Selection) => { +return this.canvasUtils.supportsStopFlowVersioning(selection); }, clazz: 'fa', text: 'Stop version control', -action: () => { -// TODO - stopVersionControl +action: (selection: d3.Selection) => { +const selectionData = selection.datum(); +const request: StopVersionControlRequest = { +revision: selectionData.revision, +processGroupId: selectionData.id +}; +this.store.dispatch(stopVersionControlRequest({ request })); Review Comment: I realized after I started working on the next few actions for versioning that this doesn't support stopping process group if no selection is made (meaning it is the canvas pg). I have addressed this in my upcoming PR.
Re: [PR] NIFI-12999: Revert "Bump webpack-dev-middleware and karma-webpack (#8547)" [nifi]
sardell commented on PR #8603: URL: https://github.com/apache/nifi/pull/8603#issuecomment-2037093709 @exceptionfactory Thanks for taking a look. I created https://issues.apache.org/jira/browse/NIFI-12999 to track the issue. > To the substance of the issue, however, can you clarify how this is a problem for a local build as opposed to GitHub Actions build? That is a great question, and I'm not 100% sure. Maybe there is some global caching of those deps in GitHub Actions, or some configuration that causes them to fail silently in that environment; I would have to investigate further. > It also works on recent local builds, so is the issue limited to running Registry unit tests? If it is possible to rescope the changes, as opposed to just reverting, that would be ideal, but I understand it may be more complicated. The issue happens whenever I try to build the project with Maven (I used `./mvnw clean install` initially and `mvn clean install` at the root level, and saw the error both times). I also cannot cd into nifi-registry's UI and manually install npm deps. If others aren't running into this issue on `main`, I'll gladly close this and chalk it up to a local issue I need to investigate. I found this issue late last night and wanted to have a PR ready to discuss in the morning, given the potential impact of a failing build on `main`.
[jira] [Created] (NIFI-12999) UI - local build fails because of updated nifi-registry dependency
Shane Ardell created NIFI-12999: --- Summary: UI - local build fails because of updated nifi-registry dependency Key: NIFI-12999 URL: https://issues.apache.org/jira/browse/NIFI-12999 Project: Apache NiFi Issue Type: Bug Components: NiFi Registry Reporter: Shane Ardell Assignee: Shane Ardell Attachments: revert-1.png, revert-2.png When attempting to build a local version of NiFi on my machine, the build fails when attempting to install npm dependencies for nifi-registry. See attached screenshots for more details.