[jira] [Created] (NIFI-7973) Oracle JDBC - ExecuteSqlRecord - Unable to determine Avro Schema - Decimal types
Robin Giesen created NIFI-7973: -- Summary: Oracle JDBC - ExecuteSqlRecord - Unable to determine Avro Schema - Decimal types Key: NIFI-7973 URL: https://issues.apache.org/jira/browse/NIFI-7973 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.12.1 Environment: Centos 8.2 JDK 11 Oracle 11.2 Oracle JDBC 8 Reporter: Robin Giesen Hi there, during the upgrade from 1.11.4 to 1.12.1 we noticed that it is no longer possible to reuse the old canvas and settings because the ExecuteSQLRecord processor cannot handle Oracle's NUMBER conversion. We use an ExecuteSQLRecord processor with incoming SQL statements and an AvroRecordWriter where the schema is determined on the fly. We also tested the ExecuteSQL processor, which can use a default precision and scale; hence, there was no error there. In version 1.12.0, the error doesn't occur. We tested the following combinations: 1. SELECT 1 as test from dual => FAIL 2. SELECT CAST(1 as number(12,2)) as test from dual => SUCCESS 2020-10-30 18:08:24,677 ERROR [Timer-Driven Process Thread-9] o.a.n.p.standard.ExecuteSQLRecord ExecuteSQLRecord[id=7a50a193-0175-1000-f2e7-78054e6dcba0] Unable to execute SQL select query SELECT cast(1 as number(12,2)) TEST, 1 as test2 FROM DUAL due to org.apache.nifi.processor.exception.ProcessException: java.io.IOException: org.apache.nifi.processor.exception.ProcessException: Could not determine the Avro Schema to use for writing the content. 
No FlowFile to route to failure: org.apache.nifi.processor.exception.ProcessException: java.io.IOException: org.apache.nifi.processor.exception.ProcessException: Could not determine the Avro Schema to use for writing the content org.apache.nifi.processor.exception.ProcessException: java.io.IOException: org.apache.nifi.processor.exception.ProcessException: Could not determine the Avro Schema to use for writing the content at org.apache.nifi.processors.standard.AbstractExecuteSQL.lambda$onTrigger$1(AbstractExecuteSQL.java:302) at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2751) at org.apache.nifi.processors.standard.AbstractExecuteSQL.onTrigger(AbstractExecuteSQL.java:298) at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1174) at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117) at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: java.io.IOException: org.apache.nifi.processor.exception.ProcessException: Could not determine the Avro Schema to use for writing the content at org.apache.nifi.processors.standard.sql.RecordSqlWriter.writeResultSet(RecordSqlWriter.java:88) at 
org.apache.nifi.processors.standard.AbstractExecuteSQL.lambda$onTrigger$1(AbstractExecuteSQL.java:300) ... 13 common frames omitted Caused by: org.apache.nifi.processor.exception.ProcessException: Could not determine the Avro Schema to use for writing the content at org.apache.nifi.avro.AvroRecordSetWriter.createWriter(AvroRecordSetWriter.java:154) at jdk.internal.reflect.GeneratedMethodAccessor410.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254) at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:105) at com.sun.proxy.$Proxy167.createWriter(Unknown Source) at org.apache.nifi.processors.standard.sql.RecordSqlWriter.writeResultSet(RecordSqlWriter.java:81) ... 14 common frames omitted Caused by: org.apache.nifi.schema.access.SchemaNotFoundException: Failed to compile Avro Schema at org.apache.nifi.avro.AvroRecordSetWriter.createWriter(AvroRecordSetWriter.java:145) ... 21 common frames omitted Caused by:
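The failure pattern above is consistent with the Oracle driver reporting an unqualified NUMBER column (as produced by `SELECT 1 FROM dual`) with a precision of 0, which is not a legal Avro decimal, whereas an explicit CAST supplies valid precision and scale. The sketch below is illustrative only (the helper name and JSON layout are assumptions, not NiFi's actual schema-building code), but it shows why such column metadata would fail Avro schema compilation:

```java
// Hypothetical sketch: mapping JDBC decimal metadata to an Avro decimal
// logical type. The Avro spec requires precision >= 1, so a driver that
// reports precision 0 for an unqualified NUMBER column yields a schema
// that cannot compile, while CAST(... AS NUMBER(12,2)) provides valid
// precision/scale and succeeds.
public class AvroDecimalSketch {
    static String avroDecimal(int precision, int scale) {
        if (precision < 1) {
            // Mirrors the "Failed to compile Avro Schema" failure mode.
            throw new IllegalArgumentException("Invalid Avro decimal precision: " + precision);
        }
        return String.format(
            "{\"type\":\"bytes\",\"logicalType\":\"decimal\",\"precision\":%d,\"scale\":%d}",
            precision, Math.max(scale, 0));
    }
}
```

This also explains why the reporter's CAST workaround succeeds: it forces the driver to report usable metadata instead of leaving precision undefined.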
[jira] [Updated] (MINIFICPP-1402) Enable encryption of configuration file upon C2Update
[ https://issues.apache.org/jira/browse/MINIFICPP-1402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Debreceni updated MINIFICPP-1402: -- Description: When a C2Agent issues a configuration update command, and the master encryption key is set and the encryption is explicitly requested in the properties file, it should persist the new flow configuration encrypted using said encryption key. Additionally, the agent should be able to start without a configuration file, and fetch the configuration from the C2Agent (or based on the "nifi.c2.flow.url" value in the properties file) was: When a C2Agent issues a configuration update command, and the master encryption key is set and the encryption is explicitly requested in the properties file, it should persist the new flow configuration encrypted using said encryption key. Additionally, the agent should be able to start without a configuration file, and fetch the configuration based on the "nifi.c2.flow.url" value in the properties file. > Enable encryption of configuration file upon C2Update > - > > Key: MINIFICPP-1402 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1402 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Adam Debreceni >Assignee: Adam Debreceni >Priority: Major > > When a C2Agent issues a configuration update command, and the master > encryption key is set and the encryption is explicitly requested in the > properties file, it should persist the new flow configuration encrypted using > said encryption key. > Additionally, the agent should be able to start without a configuration file, and > fetch the configuration from the C2Agent (or based on the "nifi.c2.flow.url" > value in the properties file) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1402) Enable encryption of configuration file upon C2Update
Adam Debreceni created MINIFICPP-1402: - Summary: Enable encryption of configuration file upon C2Update Key: MINIFICPP-1402 URL: https://issues.apache.org/jira/browse/MINIFICPP-1402 Project: Apache NiFi MiNiFi C++ Issue Type: New Feature Reporter: Adam Debreceni Assignee: Adam Debreceni When a C2Agent issues a configuration update command, and the master encryption key is set and the encryption is explicitly requested in the properties file, it should persist the new flow configuration encrypted using said encryption key. Additionally, the agent should be able to start without a configuration file, and fetch the configuration based on the "nifi.c2.flow.url" value in the properties file. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (NIFI-7406) PutAzureCosmosRecord processor to provide the cosmos sql api support for Azure Cosmos DB
[ https://issues.apache.org/jira/browse/NIFI-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joey Frazee resolved NIFI-7406. --- Resolution: Fixed > PutAzureCosmosRecord processor to provide the cosmos sql api support for > Azure Cosmos DB > > > Key: NIFI-7406 > URL: https://issues.apache.org/jira/browse/NIFI-7406 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Seokwon Yang >Assignee: Seokwon Yang >Priority: Major > Labels: azure > Fix For: 1.13.0 > > Time Spent: 7h 50m > Remaining Estimate: 0h > > Functionally it is equivalent to PutMongoRecord Processor in > nifi-mongodb-bundle, but this processor will use the cosmos native sql API. > Cosmos SDK, part of azure > SDK([https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos]), > will be leveraged in this processor. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-7406) PutAzureCosmosRecord processor to provide the cosmos sql api support for Azure Cosmos DB
[ https://issues.apache.org/jira/browse/NIFI-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joey Frazee updated NIFI-7406: -- Fix Version/s: 1.13.0 > PutAzureCosmosRecord processor to provide the cosmos sql api support for > Azure Cosmos DB > > > Key: NIFI-7406 > URL: https://issues.apache.org/jira/browse/NIFI-7406 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Seokwon Yang >Assignee: Seokwon Yang >Priority: Major > Labels: azure > Fix For: 1.13.0 > > Time Spent: 7h 50m > Remaining Estimate: 0h > > Functionally it is equivalent to PutMongoRecord Processor in > nifi-mongodb-bundle, but this processor will use the cosmos native sql API. > Cosmos SDK, part of azure > SDK([https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos]), > will be leveraged in this processor. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-7914) Upgrade dependencies
[ https://issues.apache.org/jira/browse/NIFI-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-7914: - Fix Version/s: 1.13.0 > Upgrade dependencies > > > Key: NIFI-7914 > URL: https://issues.apache.org/jira/browse/NIFI-7914 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.12.1 >Reporter: M Tien >Assignee: M Tien >Priority: Major > Labels: dependencies > Fix For: 1.13.0 > > > Upgrade dependencies. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-7914) Upgrade dependencies
[ https://issues.apache.org/jira/browse/NIFI-7914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225292#comment-17225292 ] ASF subversion and git services commented on NIFI-7914: --- Commit 8b78277a4500ad35d76e87faf6eea3a0d502df4d in nifi's branch refs/heads/main from mtien [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=8b78277 ] NIFI-7914 Bumped H2 dependency to 1.4.199. Bumped icu4j dependency to 60.2. Replaced jackson-mapper-asl dependency with jackson-databind. Fixed an error comparing key identities in TestKeyService. Replaced jackson-mapper-asl ObjectMapper with jackson-databind ObjectMapper in LivySessionController. Signed-off-by: Pierre Villard This closes #4640. > Upgrade dependencies > > > Key: NIFI-7914 > URL: https://issues.apache.org/jira/browse/NIFI-7914 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.12.1 >Reporter: M Tien >Assignee: M Tien >Priority: Major > Labels: dependencies > > Upgrade dependencies. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-7406) PutAzureCosmosRecord processor to provide the cosmos sql api support for Azure Cosmos DB
[ https://issues.apache.org/jira/browse/NIFI-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225218#comment-17225218 ] ASF subversion and git services commented on NIFI-7406: --- Commit 80f49eb7bdcd761cdf42631d4d9378c4004fb4e7 in nifi's branch refs/heads/main from sjyang18 [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=80f49eb ] NIFI-7406 Added PutAzureCosmosRecord processor for Azure Cosmos DB This closes #4253 Signed-off-by: Joey Frazee > PutAzureCosmosRecord processor to provide the cosmos sql api support for > Azure Cosmos DB > > > Key: NIFI-7406 > URL: https://issues.apache.org/jira/browse/NIFI-7406 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Seokwon Yang >Assignee: Seokwon Yang >Priority: Major > Labels: azure > Time Spent: 7h 50m > Remaining Estimate: 0h > > Functionally it is equivalent to PutMongoRecord Processor in > nifi-mongodb-bundle, but this processor will use the cosmos native sql API. > Cosmos SDK, part of azure > SDK([https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos]), > will be leveraged in this processor. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-7975) MergeContent closes almost empty bins
[ https://issues.apache.org/jira/browse/NIFI-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225482#comment-17225482 ] Joe Witt commented on NIFI-7975: The language in that tooltip could be improved to make it clear that bins are made consistent with the rest of the properties of the processor. But you need to review the properties of MergeContent as shown in http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.12.1/org.apache.nifi.processors.standard.MergeContent/index.html Minimum number of entries AND minimum size of those entries must be reached, and if so we'll make a bin within this run. If not, then... Maximum entries must be reached --OR-- the maximum size those entries represent must be reached, and if so then we'll make a bin within this run. IF none of the above happens, then we will check whether the things we're putting together have been in the process of binning for 'max bin age', and if so then we'll merge/send forward regardless of the above terms. This is how it works. This processor is fairly complex but it is also extremely powerful and incredibly popular in use. I'm going to resolve this as information provided. If you believe the above is NOT true/not happening and you can verify that on 1.12.x then we can revisit. > MergeContent closes almost empty bins > - > > Key: NIFI-7975 > URL: https://issues.apache.org/jira/browse/NIFI-7975 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Jiri Meluzin >Priority: Major > Attachments: BinPackingAlgorithmDoc.png > > > In case of the Bin-Packing Algorithm Merge Strategy the MergeContent processor > marks bins as processed even if the bin contains only 1 flow-file. 
> Expected behavior would be that MergeContent waits for other flow-files so > the bin contains Maximum Number of Entries as it is documented at the > strategy - see BinPackingAlgorithmDoc.png > It is pretty easy to reproduce - just use one GenerateFlowFile processor with > Run Schedule set to 1 sec and connect it with a MergeContent processor with > default settings. Then run it and the MergeContent processor produces the same number > of merged flow-files as the GenerateFlowFile. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
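The bin-readiness rules Joe describes can be summarized as a small decision function. This is an illustrative sketch under stated assumptions (names and parameter layout are hypothetical, not NiFi's actual MergeContent implementation): a bin is merged when both minimums are met, when either maximum is hit, or when the bin exceeds the configured max bin age.

```java
// Sketch of the bin-readiness logic described in the comment above
// (illustrative only, not the real NiFi code): a bin is ready when the
// minimums are both satisfied, when either maximum is reached, or when
// the bin has aged past the configured max bin age.
public class BinReadiness {
    static boolean isBinReady(int entries, long bytes, long ageSeconds,
                              int minEntries, long minBytes,
                              int maxEntries, long maxBytes, long maxAgeSeconds) {
        if (entries >= minEntries && bytes >= minBytes) {
            return true; // both minimums satisfied: merge on this run
        }
        if (entries >= maxEntries || bytes >= maxBytes) {
            return true; // a maximum was reached: merge on this run
        }
        return maxAgeSeconds > 0 && ageSeconds >= maxAgeSeconds; // too old: merge anyway
    }
}
```

Note that with the default Minimum Number of Entries of 1 and Minimum Group Size of 0 B, a bin holding a single flow file already satisfies both minimums, which is consistent with the behavior the reporter observed.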
[jira] [Resolved] (NIFI-7975) MergeContent closes almost empty bins
[ https://issues.apache.org/jira/browse/NIFI-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt resolved NIFI-7975. Resolution: Information Provided > MergeContent closes almost empty bins > - > > Key: NIFI-7975 > URL: https://issues.apache.org/jira/browse/NIFI-7975 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Jiri Meluzin >Priority: Major > Attachments: BinPackingAlgorithmDoc.png > > > In case of the Bin-Packing Algorithm Merge Strategy the MergeContent processor > marks bins as processed even if the bin contains only 1 flow-file. > Expected behavior would be that MergeContent waits for other flow-files so > the bin contains Maximum Number of Entries as it is documented at the > strategy - see BinPackingAlgorithmDoc.png > It is pretty easy to reproduce - just use one GenerateFlowFile processor with > Run Schedule set to 1 sec and connect it with a MergeContent processor with > default settings. Then run it and the MergeContent processor produces the same number > of merged flow-files as the GenerateFlowFile. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] bbende commented on a change in pull request #4613: NiFi-7819 - Add Zookeeper client TLS (external zookeeper) for cluster state management
bbende commented on a change in pull request #4613: URL: https://github.com/apache/nifi/pull/4613#discussion_r516796958 ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/state/providers/zookeeper/StateProviderContext.java ## @@ -0,0 +1,35 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.controller.state.providers.zookeeper; + +import java.lang.annotation.Documented; +import java.lang.annotation.ElementType; +import java.lang.annotation.Inherited; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; + +/** + * + * + */ +@Documented +@Target({ElementType.FIELD, ElementType.METHOD}) +@Retention(RetentionPolicy.RUNTIME) +@Inherited +public @interface StateProviderContext { Review comment: This should be moved to `nifi-framework-api` in the same package as `StateProvider`, since it is a general part of the API that any other implementation may want to use besides just the ZK impl. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] bbende commented on pull request #4614: NIFI-7888 Add support for SAML authentication
bbende commented on pull request #4614: URL: https://github.com/apache/nifi/pull/4614#issuecomment-721184741 @mcgilman @thenatog added some additional commits to address some of the review feedback and improve a few things I ran into while testing; here is a summary of the changes... - Refactored some of the DB operations to have "replace" methods instead of calling "delete" and "create" in separate transactions where one could succeed and the second could fail - Added properties for configuring the values of `AuthnRequestsSigned` and `WantAssertionsSigned` for the service provider metadata that is generated for nifi at /nifi-api/access/saml/metadata ``` nifi.security.user.saml.request.signing.enabled=false nifi.security.user.saml.want.assertions.signed=true ``` - Removed the property for specifying the signing key alias; it now inspects the keystore, finds the private key entry, and gets the alias automatically. If more than one private key entry exists, an exception is thrown (nifi already assumes a single private key in the keystore) - Added a property for specifying an attribute to obtain the user identity from; if an attribute is not specified or if the attribute is not found in the response, then the Subject NameID is used by default `nifi.security.user.saml.identity.attribute.name=` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-7975) MergeContent closes almost empty bins
[ https://issues.apache.org/jira/browse/NIFI-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225483#comment-17225483 ] Joe Witt commented on NIFI-7975: I'll add that if your values are as follows: min entries: 1 min size: 0 MB max entries: 100 max size: 100 MB max age: 60 secs Then after the first run, even if there is just a single flowfile, it will produce a merge result. If there are many flowfiles in the queue ready to go it will grab as many as it can in that run. If you want instead for it to really focus on efficient bin packing (near max size) then make your minimums close to your maximums and set the max age. > MergeContent closes almost empty bins > - > > Key: NIFI-7975 > URL: https://issues.apache.org/jira/browse/NIFI-7975 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Jiri Meluzin >Priority: Major > Attachments: BinPackingAlgorithmDoc.png > > > In case of the Bin-Packing Algorithm Merge Strategy the MergeContent processor > marks bins as processed even if the bin contains only 1 flow-file. > Expected behavior would be that MergeContent waits for other flow-files so > the bin contains Maximum Number of Entries as it is documented at the > strategy - see BinPackingAlgorithmDoc.png > It is pretty easy to reproduce - just use one GenerateFlowFile processor with > Run Schedule set to 1 sec and connect it with a MergeContent processor with > default settings. Then run it and the MergeContent processor produces the same number > of merged flow-files as the GenerateFlowFile. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (NIFI-7976) JSON Schema Inference infers long type when all values fit in integer type
[ https://issues.apache.org/jira/browse/NIFI-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess reassigned NIFI-7976: -- Assignee: Matt Burgess > JSON Schema Inference infers long type when all values fit in integer type > -- > > Key: NIFI-7976 > URL: https://issues.apache.org/jira/browse/NIFI-7976 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > > For CSV and XML Schema Inference engines, integral values are checked to see > if they fit into an integer type, and if so, the engine infers "int" type, > otherwise it infers "long" type. However the JSON inference engine only > checks for "bigint" and everything else is "long". > It should also check to see if the values fit in an integer type and infer > "int" in that case. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-7976) JSON Schema Inference infers long type when all values fit in integer type
Matt Burgess created NIFI-7976: -- Summary: JSON Schema Inference infers long type when all values fit in integer type Key: NIFI-7976 URL: https://issues.apache.org/jira/browse/NIFI-7976 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Matt Burgess For CSV and XML Schema Inference engines, integral values are checked to see if they fit into an integer type, and if so, the engine infers "int" type, otherwise it infers "long" type. However the JSON inference engine only checks for "bigint" and everything else is "long". It should also check to see if the values fit in an integer type and infer "int" in that case. -- This message was sent by Atlassian Jira (v8.3.4#803005)
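The inference rule proposed in this ticket can be sketched in a few lines. This is an illustrative sketch only (the class and method names are hypothetical, not the actual NiFi schema-inference code): infer "int" when the integral value fits Java's 32-bit int range, otherwise "long".

```java
// Sketch of the int-vs-long inference check described above (names are
// illustrative, not NiFi's real inference engine): an integral value that
// fits in a 32-bit signed int is inferred as "int", otherwise "long".
public class IntegralInference {
    static String inferIntegralType(long value) {
        if (value >= Integer.MIN_VALUE && value <= Integer.MAX_VALUE) {
            return "int";
        }
        return "long";
    }
}
```

Under this rule, a JSON field whose values are all small counters would infer "int", matching what the CSV and XML inference engines already do.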
[jira] [Updated] (NIFI-7975) MergeContent closes almost empty bins
[ https://issues.apache.org/jira/browse/NIFI-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiri Meluzin updated NIFI-7975: --- Attachment: BinPackingAlgorithmDoc.png > MergeContent closes almost empty bins > - > > Key: NIFI-7975 > URL: https://issues.apache.org/jira/browse/NIFI-7975 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Jiri Meluzin >Priority: Major > Attachments: BinPackingAlgorithmDoc.png > > > In case of the Bin-Packing Algorithm Merge Strategy the MergeContent processor > marks bins as processed even if the bin contains only 1 flow-file. > Expected behavior would be that MergeContent waits for other flow-files so > the bin contains Maximum Number of Entries as it is documented at the > strategy - see BinPackingAlgorithmDoc.png > It is pretty easy to reproduce - just use one GenerateFlowFile processor with > Run Schedule set to 1 sec and connect it with a MergeContent processor with > default settings. Then run it and the MergeContent processor produces the same number > of merged flow-files as the GenerateFlowFile. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-7975) MergeContent closes almost empty bins
Jiri Meluzin created NIFI-7975: -- Summary: MergeContent closes almost empty bins Key: NIFI-7975 URL: https://issues.apache.org/jira/browse/NIFI-7975 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.9.2 Reporter: Jiri Meluzin Attachments: BinPackingAlgorithmDoc.png In case of the Bin-Packing Algorithm Merge Strategy the MergeContent processor marks bins as processed even if the bin contains only 1 flow-file. Expected behavior would be that MergeContent waits for other flow-files so the bin contains Maximum Number of Entries as it is documented at the strategy - see BinPackingAlgorithmDoc.png It is pretty easy to reproduce - just use one GenerateFlowFile processor with Run Schedule set to 1 sec and connect it with a MergeContent processor with default settings. Then run it and the MergeContent processor produces the same number of merged flow-files as the GenerateFlowFile. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-7976) JSON Schema Inference infers long type when all values fit in integer type
[ https://issues.apache.org/jira/browse/NIFI-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-7976: --- Status: Patch Available (was: In Progress) > JSON Schema Inference infers long type when all values fit in integer type > -- > > Key: NIFI-7976 > URL: https://issues.apache.org/jira/browse/NIFI-7976 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > For CSV and XML Schema Inference engines, integral values are checked to see > if they fit into an integer type, and if so, the engine infers "int" type, > otherwise it infers "long" type. However the JSON inference engine only > checks for "bigint" and everything else is "long". > It should also check to see if the values fit in an integer type and infer > "int" in that case. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] asfgit closed pull request #4640: NIFI-7914 Bumped H2 dependency to 1.4.199.
asfgit closed pull request #4640: URL: https://github.com/apache/nifi/pull/4640 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] MikeThomsen commented on pull request #1403: NIFI-3293 - Expose counters to reporting tasks, and send counters dat…
MikeThomsen commented on pull request #1403: URL: https://github.com/apache/nifi/pull/1403#issuecomment-720797504 @pvillard31 LGTM as well, but it's way out of sync with `main` now. How do you want to proceed? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] joewitt commented on pull request #4639: Update PutFile.java: fix path traversal vulnerability
joewitt commented on pull request #4639: URL: https://github.com/apache/nifi/pull/4639#issuecomment-720618061 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-7968) PutHDFS/PutParquet fail to write to Kerberized HDFS with KMS enabled
[ https://issues.apache.org/jira/browse/NIFI-7968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-7968: -- Status: Patch Available (was: Open) > PutHDFS/PutParquet fail to write to Kerberized HDFS with KMS enabled > > > Key: NIFI-7968 > URL: https://issues.apache.org/jira/browse/NIFI-7968 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.12.1, 1.12.0 >Reporter: Bryan Bende >Priority: Minor > > From apache slack... > {color:#1d1c1d}My PutHDFS and PutParquet processors are configured to use a > KeytabCredentialsService. I've confirmed that that service is configured > correctly. The server also has the latest core-site and hdfs-site XML > configuration files from the HDFS cluster. However, whenever either of those > processors run, we receive the attached error message.{color} > {code:java} > 2020-10-13 21:37:33,547 WARN [Timer-Driven Process Thread-100] > o.a.h.c.k.k.LoadBalancingKMSClientProvider KMS provider at [https:// SERVER>:9393/kms/v1/] threw an IOException:java.io.IOException: > org.apache.hadoop.security.authentication.client.AuthenticationException: > Error while authenticating with endpoint: https:// SERVER>:9393/kms/v1/keyversion/keyname/_eek?eek_op=decryptat > org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:525) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:826) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:351) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:347) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:172) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:347) > at > 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532) > at > org.apache.hadoop.hdfs.HdfsKMSUtil.decryptEncryptedDataEncryptionKey(HdfsKMSUtil.java:206) > at > org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:966) > at > org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:947) > at > org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533) > at > org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:527) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:541) > at > org.apache.nifi.processors.hadoop.PutHDFS$1$1.process(PutHDFS.java:337) > at > org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2324) > at > org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2292) > at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:320) > at java.security.AccessController.doPrivileged(Native Method)at > javax.security.auth.Subject.doAs(Subject.java:360)at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1710) > at > org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:250) > at > org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at > org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1174) > at > org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117) > at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748)Caused by: > org.apache.hadoop.security.authentication.client.AuthenticationException:
[GitHub] [nifi] MikeThomsen closed pull request #2635: NIFI-5071 Create a Processor to write the contents of a FlowFile to OpenTSDB
MikeThomsen closed pull request #2635: URL: https://github.com/apache/nifi/pull/2635 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] ottobackwards commented on pull request #4513: NIFI-7761 Allow HandleHttpRequest to add specified form data to FlowF…
ottobackwards commented on pull request #4513: URL: https://github.com/apache/nifi/pull/4513#issuecomment-721154421 Thanks @grego33, hopefully we can get some eyes on this soon
[GitHub] [nifi-minifi-cpp] aminadinari19 opened a new pull request #934: MINIFICPP-1330-add conversion from microseconds
aminadinari19 opened a new pull request #934: URL: https://github.com/apache/nifi-minifi-cpp/pull/934 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [Y] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [Y] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [Y] Has your PR been rebased against the latest commit within the target branch (typically main)? - [Y] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible.
[GitHub] [nifi] mattyb149 opened a new pull request #4645: NIFI-7976: Infer ints in JSON when all values fit in integer type
mattyb149 opened a new pull request #4645: URL: https://github.com/apache/nifi/pull/4645 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR Check isLong() before returning LONG type, if false return INT type In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [x] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [x] Have you written or updated unit tests to verify your changes? - [x] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? 
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
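The fix described above (check `isLong()` before returning LONG; if every value fits in a 32-bit integer, infer INT instead) can be sketched outside of NiFi as follows. `IntInference.inferType` is a hypothetical stand-in for the record reader's schema-inference logic, not the actual NiFi method:

```java
// Sketch of the range check in NIFI-7976: a JSON number is inferred
// as INT only when it fits in Java's 32-bit int range; otherwise LONG.
public class IntInference {
    // Returns "INT" when the value fits in Integer range, else "LONG".
    static String inferType(long value) {
        if (value >= Integer.MIN_VALUE && value <= Integer.MAX_VALUE) {
            return "INT";
        }
        return "LONG";
    }

    public static void main(String[] args) {
        System.out.println(inferType(42));             // INT
        System.out.println(inferType(3_000_000_000L)); // LONG: exceeds Integer.MAX_VALUE
    }
}
```

In the real processor, all values in a field would have to pass this check before the narrower type is chosen for the inferred schema.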
[GitHub] [nifi] simonbence opened a new pull request #4642: NIFI-7959 Handling node disconnection in MonitorActivity processor
simonbence opened a new pull request #4642: URL: https://github.com/apache/nifi/pull/4642 [NIFI-7959](https://issues.apache.org/jira/browse/NIFI-7959) By making connection data part of the MonitorActivity processor's state, we can avoid sending invalid inactivity notifications when the given node is not part of the cluster. Also, when the node is connected again, it can reconcile shared state based on possible activity during the disconnection.
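As a rough illustration of the idea in NIFI-7959 (this is a toy sketch, not NiFi code; `ActivityMonitor` and its methods are hypothetical names), an activity monitor can suppress inactivity alerts while the node is disconnected from the cluster:

```java
// Toy model: alert on inactivity only while connected, so a
// disconnected node does not raise false inactivity notifications.
public class ActivityMonitor {
    private long lastActivityMillis;
    private boolean connected = true;

    ActivityMonitor(long nowMillis) { this.lastActivityMillis = nowMillis; }

    void recordActivity(long nowMillis) { lastActivityMillis = nowMillis; }
    void setConnected(boolean connected) { this.connected = connected; }

    // Alert only when connected AND the inactivity window has elapsed.
    boolean shouldAlert(long nowMillis, long thresholdMillis) {
        return connected && (nowMillis - lastActivityMillis) > thresholdMillis;
    }

    public static void main(String[] args) {
        ActivityMonitor m = new ActivityMonitor(0);
        m.setConnected(false);
        System.out.println(m.shouldAlert(10_000, 1_000)); // false: disconnected
        m.setConnected(true);
        System.out.println(m.shouldAlert(10_000, 1_000)); // true: reconnected, threshold passed
    }
}
```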
[GitHub] [nifi] MikeThomsen closed pull request #2671: NiFi-5102 - Adding Processors for MarkLogic DB
MikeThomsen closed pull request #2671: URL: https://github.com/apache/nifi/pull/2671
[GitHub] [nifi-minifi-cpp] aminadinari19 opened a new pull request #933: MINIFICPP-1330-add conversion from microseconds
aminadinari19 opened a new pull request #933: URL: https://github.com/apache/nifi-minifi-cpp/pull/933
[GitHub] [nifi] MikeThomsen commented on pull request #4639: Update PutFile.java: fix path traversal vulnerability
MikeThomsen commented on pull request #4639: URL: https://github.com/apache/nifi/pull/4639#issuecomment-720795906 I agree with @joewitt. I wouldn't call this a vulnerability unless you're running NiFi as root or something (and let's be honest, this is the least of your worries if that's the case).
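For reference, the kind of check the PR proposes, rejecting filenames that would escape the target directory, can be sketched like this. `PathCheck.staysInside` is a hypothetical helper, not the code in PutFile:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of a path-traversal guard: resolve the incoming filename
// against the configured directory, normalize away "..", and verify
// the result is still under that directory.
public class PathCheck {
    static boolean staysInside(String baseDir, String filename) {
        Path base = Paths.get(baseDir).toAbsolutePath().normalize();
        Path target = base.resolve(filename).normalize();
        return target.startsWith(base);
    }

    public static void main(String[] args) {
        System.out.println(staysInside("/data/out", "report.txt"));       // true
        System.out.println(staysInside("/data/out", "../../etc/passwd")); // false
    }
}
```

Whether this belongs in the processor or should be handled by running NiFi with least privilege is the point debated in the comments above.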
[GitHub] [nifi] anaylor commented on pull request #4641: NIFI-6394
anaylor commented on pull request #4641: URL: https://github.com/apache/nifi/pull/4641#issuecomment-720714814
[jira] [Created] (NIFI-7974) Upgrading calcite, hbase, geo2ip deps
Pierre Villard created NIFI-7974: Summary: Upgrading calcite, hbase, geo2ip deps Key: NIFI-7974 URL: https://issues.apache.org/jira/browse/NIFI-7974 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Pierre Villard Assignee: Pierre Villard Fix For: 1.13.0 Upgrading calcite, hbase, geo2ip deps -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] MikeThomsen closed pull request #3317: NIFI-6047 Add DetectDuplicateRecord Processor
MikeThomsen closed pull request #3317: URL: https://github.com/apache/nifi/pull/3317
[GitHub] [nifi] pvillard31 opened a new pull request #4644: NIFI-7974 - Upgrading calcite, hbase, geo2ip deps
pvillard31 opened a new pull request #4644: URL: https://github.com/apache/nifi/pull/4644
[GitHub] [nifi] MikeThomsen commented on pull request #2671: NiFi-5102 - Adding Processors for MarkLogic DB
MikeThomsen commented on pull request #2671: URL: https://github.com/apache/nifi/pull/2671#issuecomment-720793824 For anyone who wants to use this functionality, it appears that they have set up their own bundle download [here](https://github.com/marklogic/nifi/releases). Closing.
[jira] [Assigned] (NIFI-7968) PutHDFS/PutParquet fail to write to Kerberized HDFS with KMS enabled
[ https://issues.apache.org/jira/browse/NIFI-7968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende reassigned NIFI-7968: - Assignee: Bryan Bende > PutHDFS/PutParquet fail to write to Kerberized HDFS with KMS enabled > > > Key: NIFI-7968 > URL: https://issues.apache.org/jira/browse/NIFI-7968 > Project: Apache NiFi > Issue Type: Bug > Affects Versions: 1.12.0, 1.12.1 > Reporter: Bryan Bende > Assignee: Bryan Bende > Priority: Minor
[GitHub] [nifi] thenatog commented on pull request #4613: NiFi-7819 - Add Zookeeper client TLS (external zookeeper) for cluster state management
thenatog commented on pull request #4613: URL: https://github.com/apache/nifi/pull/4613#issuecomment-720756920 I believe I have resolved all the requests made so far. Let me know if any further changes are required before we merge this one in.
[GitHub] [nifi] mtien-apache opened a new pull request #4640: NIFI-7914 Bumped H2 dependency to 1.4.199.
mtien-apache opened a new pull request #4640: URL: https://github.com/apache/nifi/pull/4640 Bumped icu4j dependency to 60.2. Replaced jackson-mapper-asl dependency with jackson-databind. Fixed an error comparing key identities in TestKeyService. Replaced jackson-mapper-asl ObjectMapper with jackson-databind ObjectMapper in LivySessionController. Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [x] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [x] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [x] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? 
- [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[GitHub] [nifi] bbende commented on a change in pull request #4629: NIFI-7954 Wrapping HBase_*_ClientService calls in getUgi().doAs()
bbende commented on a change in pull request #4629: URL: https://github.com/apache/nifi/pull/4629#discussion_r516688495 ## File path: nifi-nar-bundles/nifi-extension-utils/nifi-hadoop-utils/src/main/java/org/apache/nifi/hadoop/CallableThrowingException.java ## @@ -0,0 +1,22 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.hadoop; + +@FunctionalInterface +public interface CallableThrowingException { Review comment: This can be removed now that it is not used right? 
## File path: nifi-nar-bundles/nifi-extension-utils/nifi-hadoop-utils/src/main/java/org/apache/nifi/hadoop/SecurityUtil.java ## @@ -141,4 +143,37 @@ public static boolean isSecurityEnabled(final Configuration config) { Validate.notNull(config); return KERBEROS.equalsIgnoreCase(config.get(HADOOP_SECURITY_AUTHENTICATION)); } + public static <T> T callWithUgi(UserGroupInformation ugi, PrivilegedExceptionAction<T> action) throws IOException { + try { + return ugi.doAs(action); + } catch (InterruptedException e) { + throw new IOException(e); + } + } + public static void checkTGTAndRelogin(ComponentLog log, KerberosUser kerberosUser, UserGroupInformation ugi) throws IOException { Review comment: It looks like only the method below that takes log and kerberosUser is used; can we remove this one if it is unused? ## File path: nifi-nar-bundles/nifi-extension-utils/nifi-hadoop-utils/src/main/java/org/apache/nifi/hadoop/SecurityUtil.java ## @@ -141,4 +143,37 @@ public static boolean isSecurityEnabled(final Configuration config) { Validate.notNull(config); return KERBEROS.equalsIgnoreCase(config.get(HADOOP_SECURITY_AUTHENTICATION)); } + public static <T> T callWithUgi(UserGroupInformation ugi, PrivilegedExceptionAction<T> action) throws IOException { Review comment: It doesn't seem like we really need a utility method here just to call one line `return ugi.doAs(action);`, I think it would be simpler to just call `ugi.doAs` from all the places where this method is being called
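The helper under review only wraps `ugi.doAs(action)` and converts the checked `InterruptedException` into an `IOException`. A Hadoop-free sketch of that wrapping behavior (using `Callable` as a stand-in for `PrivilegedExceptionAction`, since `UserGroupInformation` needs the Hadoop libraries; `UgiStyleWrapper` is a hypothetical name) might look like:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class UgiStyleWrapper {
    // Runs the action, converting InterruptedException (and any other
    // checked failure) into an IOException, mirroring what the patch's
    // callWithUgi helper does around ugi.doAs(action).
    static <T> T callWrapped(Callable<T> action) throws IOException {
        try {
            return action.call();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
            throw new IOException(e);
        } catch (Exception e) {
            throw new IOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(callWrapped(() -> "ok")); // prints "ok"
    }
}
```

Since the wrapper adds only this one conversion, the reviewer's suggestion to call `ugi.doAs` directly at each call site is a reasonable simplification.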
[GitHub] [nifi] azgron opened a new pull request #4639: Update PutFile.java: fix path traversal vulnerability
azgron opened a new pull request #4639: URL: https://github.com/apache/nifi/pull/4639 Check if filename contains path traversal.
[GitHub] [nifi] MikeThomsen commented on pull request #2635: NIFI-5071 Create a Processor to write the contents of a FlowFile to OpenTSDB
MikeThomsen commented on pull request #2635: URL: https://github.com/apache/nifi/pull/2635#issuecomment-720795029 > However I'm a little puzzled about why you'd use this processor over InvokeHttp? I did a cursory review and have the same concern. Appreciate the contribution, but I think we're going to have to pass on this because a REST flow could accomplish the same thing with less functionality to maintain unless I'm missing something. Feel free to reopen if you're still tracking.
[GitHub] [nifi] anaylor opened a new pull request #4641: NIFI-6394
anaylor opened a new pull request #4641: URL: https://github.com/apache/nifi/pull/4641 Description of PR Development has stopped on this [PR](https://github.com/apache/nifi/pull/4221) so I picked up the changes and addressed the comments. This PR removes the hardcoded limit for viewing files in the flow file list queue. Adds a "View All" button to load all available flow files for a given connection, then has an option for exporting those files to CSV. The files could then be sorted/explored in another data analysis tool. ![image](https://user-images.githubusercontent.com/61026014/97916880-9ec88600-1d21-11eb-86fa-d812b31fcaea.png) _Enables Flowfile list queue CSV Export In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [x] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [x] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] jfrazee commented on a change in pull request #4253: NIFI-7406: PutAzureCosmosRecord Processor
jfrazee commented on a change in pull request #4253: URL: https://github.com/apache/nifi/pull/4253#discussion_r516238042 ## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/cosmos/document/PutAzureCosmosDBRecord.java ## @@ -0,0 +1,223 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.azure.cosmos.document; + +import com.azure.cosmos.CosmosContainer; +import com.azure.cosmos.CosmosException; +import com.azure.cosmos.implementation.ConflictException; + +import java.io.IOException; +import java.io.InputStream; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.UUID; + +import org.apache.commons.lang3.StringUtils; +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.InputRequirement.Requirement; +import org.apache.nifi.annotation.behavior.SystemResource; +import org.apache.nifi.annotation.behavior.SystemResourceConsideration; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.components.AllowableValue; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.logging.ComponentLog; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.schema.access.SchemaNotFoundException; +import org.apache.nifi.serialization.MalformedRecordException; +import org.apache.nifi.serialization.RecordReader; +import org.apache.nifi.serialization.RecordReaderFactory; +import org.apache.nifi.serialization.record.Record; +import org.apache.nifi.serialization.record.RecordFieldType; +import org.apache.nifi.serialization.record.RecordSchema; +import org.apache.nifi.serialization.record.util.DataTypeUtils; + +@EventDriven +@Tags({ "azure", "cosmos", "insert", "record", "put" }) 
+@InputRequirement(Requirement.INPUT_REQUIRED) +@CapabilityDescription("This processor is a record-aware processor for inserting data into Cosmos DB with Core SQL API. It uses a configured record reader and " + +"schema to read an incoming record set from the body of a Flowfile and then inserts those records into " + +"a configured Cosmos DB Container.") +@SystemResourceConsideration(resource = SystemResource.MEMORY) +public class PutAzureCosmosDBRecord extends AbstractAzureCosmosDBProcessor { + +private String conflictHandlingStrategy; +static final AllowableValue IGNORE_CONFLICT = new AllowableValue("ignore", "Ignore", "Conflicting records will not be inserted, and FlowFile will not be routed to failure"); +static final AllowableValue UPSERT_CONFLICT = new AllowableValue("upsert", "Upsert", "Conflicting records will be upserted, and FlowFile will not be routed to failure"); + +static final PropertyDescriptor RECORD_READER_FACTORY = new PropertyDescriptor.Builder() +.name("record-reader") +.displayName("Record Reader") +.description("Specifies the Controller Service to use for parsing incoming data and determining the data's schema") +.identifiesControllerService(RecordReaderFactory.class) +.required(true) +.build(); + +static final PropertyDescriptor INSERT_BATCH_SIZE = new PropertyDescriptor.Builder() +.name("insert-batch-size") +.displayName("Insert Batch Size") +.description("The number of records to group together for one single insert operation against Cosmos DB") +.defaultValue("20") +.required(false) +.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR) +.build(); + +static final PropertyDescriptor CONFLICT_HANDLE_STRATEGY = new PropertyDescriptor.Builder() +
[GitHub] [nifi] MikeThomsen commented on pull request #4641: NIFI-6394
MikeThomsen commented on pull request #4641: URL: https://github.com/apache/nifi/pull/4641#issuecomment-720837003 Have you tested this with a queue that has, say, 100k-1M flowfiles in it? That's not a best practice, but not uncommon.
[GitHub] [nifi] jfrazee closed pull request #4253: NIFI-7406: PutAzureCosmosRecord Processor
jfrazee closed pull request #4253: URL: https://github.com/apache/nifi/pull/4253
[GitHub] [nifi] MikeThomsen commented on pull request #3051: NIFI-5642: QueryCassandra processor : output FlowFiles as soon fetch_size is reached
MikeThomsen commented on pull request #3051: URL: https://github.com/apache/nifi/pull/3051#issuecomment-721094850 @aglotero what happened is that you did a merge of master instead of a rebase. It's a common mistake, but one that can cause a lot of problems for you in cases like this. The proper workflow is:
1. git checkout main
2. git pull apache main
3. git checkout
4. git rebase main
5. git push origin --force (this is necessary since you updated the base of the feature branch)
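The merge-vs-rebase flow above can be demonstrated end-to-end with throwaway local repositories. This is a sketch: the repo layout, branch name `my-feature`, and commit messages are illustrative, not NiFi's actual remotes.

```shell
# Simulate an upstream repo ("apache") and a contributor clone, then replay a
# feature branch on top of the updated default branch instead of merging.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/upstream.git"
git clone -q "$tmp/upstream.git" "$tmp/work"
cd "$tmp/work"
git config user.email dev@example.com
git config user.name Dev
git checkout -q -b main                # make the default branch name deterministic
echo base > base.txt && git add base.txt && git commit -q -m "initial"
git push -q origin main
git checkout -q -b my-feature
echo f > feature.txt && git add feature.txt && git commit -q -m "feature work"
# Meanwhile the upstream main moves ahead...
git checkout -q main
echo u > upstream.txt && git add upstream.txt && git commit -q -m "upstream change"
git push -q origin main
# Step 4: rebase replays the feature commits on top of the updated base
git checkout -q my-feature
git rebase -q main
# Step 5: force-push, since the branch history was rewritten
git push -q --force origin my-feature
git log --format=%s   # feature work / upstream change / initial
```

After the rebase, the feature branch contains a single linear history with no merge commit, which is why the subsequent push needs `--force`.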
[GitHub] [nifi] bbende opened a new pull request #4643: NIFI-7968 Ensure the status login user is set in UserGroupInformation…
bbende opened a new pull request #4643: URL: https://github.com/apache/nifi/pull/4643 … after creating a UGI from a Subject
[GitHub] [nifi] tpalfy commented on a change in pull request #4629: NIFI-7954 Wrapping HBase_*_ClientService calls in getUgi().doAs()
tpalfy commented on a change in pull request #4629: URL: https://github.com/apache/nifi/pull/4629#discussion_r515980827 ## File path: nifi-nar-bundles/nifi-extension-utils/nifi-hadoop-utils/src/main/java/org/apache/nifi/hadoop/SecurityUtil.java ## @@ -141,4 +143,37 @@ public static boolean isSecurityEnabled(final Configuration config) { Validate.notNull(config); return KERBEROS.equalsIgnoreCase(config.get(HADOOP_SECURITY_AUTHENTICATION)); } + +public static <T> T callWithUgi(CallableThrowingException ugiProvider, CallableThrowingException function) throws IOException { Review comment: Yeah, that would work. I overcomplicated it a little, it seems. Updated.
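The helper being reviewed takes a provider for the UGI and a function to run with it, so the obtain-then-call plumbing lives in one place. A minimal, Hadoop-free sketch of that shape (the interface and method names here are illustrative, not the final `SecurityUtil` signature):

```java
import java.io.IOException;

public class CallWithUgiSketch {
    // Mirrors the "callable that may throw" shape from the PR (hypothetical names)
    @FunctionalInterface
    interface ThrowingSupplier<T> {
        T get() throws IOException;
    }

    @FunctionalInterface
    interface ThrowingFunction<A, R> {
        R apply(A input) throws IOException;
    }

    // Obtain the security context once, then invoke the action with it;
    // an IOException from either step propagates to the caller unchanged.
    static <C, T> T callWith(ThrowingSupplier<C> contextProvider,
                             ThrowingFunction<C, T> action) throws IOException {
        final C context = contextProvider.get();
        return action.apply(context);
    }

    public static void main(String[] args) throws IOException {
        // In the real processor the provider would return a UserGroupInformation
        // and the action would run an HBase client call under ugi.doAs(...).
        String result = callWith(() -> "ugi:nifi", ctx -> ctx + " -> scan succeeded");
        System.out.println(result);
    }
}
```

Centralizing the pattern this way keeps each HBase_*_ClientService call site down to a single lambda instead of repeating the UGI lookup everywhere.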
[GitHub] [nifi] MikeThomsen commented on pull request #4253: NIFI-7406: PutAzureCosmosRecord Processor
MikeThomsen commented on pull request #4253: URL: https://github.com/apache/nifi/pull/4253#issuecomment-720798373 For Cosmos graph support, take a look at the nifi-graph-bundle folder under nifi-nar-bundles and play around with the gremlin support there. Would be interested to know how well it works (or doesn't) with the Cosmos gremlin API.
[GitHub] [nifi] grego33 commented on pull request #4513: NIFI-7761 Allow HandleHttpRequest to add specified form data to FlowF…
grego33 commented on pull request #4513: URL: https://github.com/apache/nifi/pull/4513#issuecomment-720715556 Checked this out and ran it locally today. Worked great! Thanks so much.
[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #932: MINIFICPP-1395 - Use Identifier instead of its stringified form wherever possible
arpadboda commented on a change in pull request #932: URL: https://github.com/apache/nifi-minifi-cpp/pull/932#discussion_r515886712 ## File path: libminifi/src/ThreadedSchedulingAgent.cpp ## @@ -110,16 +110,15 @@ void ThreadedSchedulingAgent::schedule(std::shared_ptr processo thread_pool_.execute(std::move(functor), future); } logger_->log_debug("Scheduled thread %d concurrent workers for for process %s", processor->getMaxConcurrentTasks(), processor->getName()); - processors_running_.insert(processor->getUUIDStr()); - return; + processors_running_.insert(processor->getUUID()); } void ThreadedSchedulingAgent::stop() { SchedulingAgent::stop(); std::lock_guard lock(mutex_); - for (const auto& p : processors_running_) { -logger_->log_error("SchedulingAgent is stopped before processor was unscheduled: %s", p); -thread_pool_.stopTasks(p); + for (const auto& processor_id : processors_running_) { +logger_->log_error("SchedulingAgent is stopped before processor was unscheduled: %s", processor_id.to_string()); +thread_pool_.stopTasks(processor_id.to_string()); Review comment: I think it's intentional. The IDs are only used to identify the tasks belonging to a given component, the threadpool otherwise doesn't care. So they are not required to be unique. I wouldn't call it a beautiful design, but works as expected, so I kept it when the code was refactored.
[jira] [Updated] (NIFI-7959) Implement MonitorActivity disconnection management
[ https://issues.apache.org/jira/browse/NIFI-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Bence updated NIFI-7959: -- Status: Patch Available (was: In Progress) https://github.com/apache/nifi/pull/4642 > Implement MonitorActivity disconnection management > -- > > Key: NIFI-7959 > URL: https://issues.apache.org/jira/browse/NIFI-7959 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, Extensions >Reporter: Simon Bence >Assignee: Simon Bence >Priority: Major > Fix For: 1.13.0 > > > MonitorActivity processor, when the node disconnects from the cluster, might > still emit "inactive" message. This can happen when the scope is set to > cluster level monitoring and the given node cannot reach shared state but has > information only based on the local activity. This might lead to false > inactivity alert, so it is desirable to prevent reporting until the node is > disconnected and reconcile state after reconnection. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] MikeThomsen commented on pull request #3317: NIFI-6047 Add DetectDuplicateRecord Processor
MikeThomsen commented on pull request #3317: URL: https://github.com/apache/nifi/pull/3317#issuecomment-720701577 @adamfisher I've spent a few hours today playing around with this and integrating our two use cases into a single processor. Barring unforeseen circumstances, I should have a PR up by about tomorrow lunch time that has your contribution with my additions layered on top.
[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #932: MINIFICPP-1395 - Use Identifier instead of its stringified form wherever possible
arpadboda closed pull request #932: URL: https://github.com/apache/nifi-minifi-cpp/pull/932
[GitHub] [nifi-minifi-cpp] aminadinari19 closed pull request #933: MINIFICPP-1330-add conversion from microseconds
aminadinari19 closed pull request #933: URL: https://github.com/apache/nifi-minifi-cpp/pull/933
[GitHub] [nifi] markap14 commented on pull request #4512: NIFI-1121: Support making properties dependent upon one another
markap14 commented on pull request #4512: URL: https://github.com/apache/nifi/pull/4512#issuecomment-721263080 @bbende I rebased the branch against `main` and included the patch from @mtien-apache . Can you check it again? Thanks!
[GitHub] [nifi] markap14 commented on pull request #4634: NIFI-7967: Added regex support for attribute header selection on HandleHTTPResponse
markap14 commented on pull request #4634: URL: https://github.com/apache/nifi/pull/4634#issuecomment-721268257 Thanks @r65535, the changes look good to me. +1 merging to main.
[jira] [Updated] (NIFI-7967) Regex support for response headers on HandleHTTPResponse
[ https://issues.apache.org/jira/browse/NIFI-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-7967: - Status: Patch Available (was: Open) > Regex support for response headers on HandleHTTPResponse > > > Key: NIFI-7967 > URL: https://issues.apache.org/jira/browse/NIFI-7967 > Project: Apache NiFi > Issue Type: Improvement >Reporter: r65535 >Assignee: r65535 >Priority: Trivial > Time Spent: 0.5h > Remaining Estimate: 0h > > HandleHTTPResponse supports adding attributes as HTTP headers using dynamic > properties, but it'd be nice to also support header selection by regex (like > PostHTTP).
[jira] [Updated] (NIFI-7967) Regex support for response headers on HandleHTTPResponse
[ https://issues.apache.org/jira/browse/NIFI-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-7967: - Fix Version/s: 1.13.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Regex support for response headers on HandleHTTPResponse > > > Key: NIFI-7967 > URL: https://issues.apache.org/jira/browse/NIFI-7967 > Project: Apache NiFi > Issue Type: Improvement >Reporter: r65535 >Assignee: r65535 >Priority: Trivial > Fix For: 1.13.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > HandleHTTPResponse supports adding attributes as HTTP headers using dynamic > properties, but it'd be nice to also support header selection by regex (like > PostHTTP).
[GitHub] [nifi] joewitt commented on pull request #4639: Update PutFile.java: fix path traversal vulnerability
joewitt commented on pull request #4639: URL: https://github.com/apache/nifi/pull/4639#issuecomment-721276193 Yep, I am fine with that. Perhaps 'prevent' is better than 'avoid': the display name would be 'Prevent Path Escape' and the name would be 'preventpathescape', or something consistent with how others do it. Default of false is good. The description should let the user know that the purpose of the property, if true, is to detect whether the resolved path (including following symlinks) still appears inline with the intended specified target directory. Where this could be confusing is when the base dir itself is a symlink and resolves to something else. But the more context you can give the user on the intent, the better, and so long as this is purely optional it is fine.
[GitHub] [nifi] MikeThomsen commented on pull request #4646: NIFI-6047 Add DeduplicateRecords (combines 6047 and 6014)
MikeThomsen commented on pull request #4646: URL: https://github.com/apache/nifi/pull/4646#issuecomment-721282868 @adamfisher FYSA
[jira] [Created] (NIFI-7977) MonitorActivity erroneously logs warnings about being in standalone mode when configured for cluster
Mark Payne created NIFI-7977: Summary: MonitorActivity erroneously logs warnings about being in standalone mode when configured for cluster Key: NIFI-7977 URL: https://issues.apache.org/jira/browse/NIFI-7977 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Mark Payne When MonitorActivity is configured with a scope of cluster, if a node gets disconnected from the cluster, it constantly spams the logs with inaccurate warnings: {code:java} 2020-11-03 11:50:09,507 WARN [Timer-Driven Process Thread-4] o.a.n.p.standard.MonitorActivity MonitorActivity[id=8f020525-0175-1000--eddd695c] NiFi is running as a Standalone mode, but 'cluster' scope is set. Fallback to 'node' scope. Fix configuration to stop this message. 2020-11-03 11:50:10,508 WARN [Timer-Driven Process Thread-4] o.a.n.p.standard.MonitorActivity MonitorActivity[id=8f020525-0175-1000--eddd695c] NiFi is running as a Standalone mode, but 'cluster' scope is set. Fallback to 'node' scope. Fix configuration to stop this message. 2020-11-03 11:50:11,510 WARN [Timer-Driven Process Thread-6] o.a.n.p.standard.MonitorActivity MonitorActivity[id=8f020525-0175-1000--eddd695c] NiFi is running as a Standalone mode, but 'cluster' scope is set. Fallback to 'node' scope. Fix configuration to stop this message. 2020-11-03 11:50:12,510 WARN [Timer-Driven Process Thread-7] o.a.n.p.standard.MonitorActivity MonitorActivity[id=8f020525-0175-1000--eddd695c] NiFi is running as a Standalone mode, but 'cluster' scope is set. Fallback to 'node' scope. Fix configuration to stop this message. 2020-11-03 11:50:13,511 WARN [Timer-Driven Process Thread-4] o.a.n.p.standard.MonitorActivity MonitorActivity[id=8f020525-0175-1000--eddd695c] NiFi is running as a Standalone mode, but 'cluster' scope is set. Fallback to 'node' scope. Fix configuration to stop this message. 
2020-11-03 11:50:14,511 WARN [Timer-Driven Process Thread-6] o.a.n.p.standard.MonitorActivity MonitorActivity[id=8f020525-0175-1000--eddd695c] NiFi is running as a Standalone mode, but 'cluster' scope is set. Fallback to 'node' scope. Fix configuration to stop this message.
[the same warning repeats roughly once per second across the timer-driven threads]
{code}
[GitHub] [nifi] azgron commented on pull request #4639: Update PutFile.java: fix path traversal vulnerability
azgron commented on pull request #4639: URL: https://github.com/apache/nifi/pull/4639#issuecomment-721291253 I have done the changes. What do you think?
[GitHub] [nifi] azgron commented on pull request #4639: Update PutFile.java: fix path traversal vulnerability
azgron commented on pull request #4639: URL: https://github.com/apache/nifi/pull/4639#issuecomment-721274476 Thanks for your comments. Do you agree to add a boolean property which would be called "AvoidPathEscapes"? The default value will be false, as @joewitt mentioned.
[GitHub] [nifi] joewitt commented on a change in pull request #4639: Update PutFile.java: fix path traversal vulnerability
joewitt commented on a change in pull request #4639: URL: https://github.com/apache/nifi/pull/4639#discussion_r516862513 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutFile.java ## @@ -232,8 +240,14 @@ public void onTrigger(final ProcessContext context, final ProcessSession session final Path rootDirPath = configuredRootDirPath.toAbsolutePath(); String filename = flowFile.getAttribute(CoreAttributes.FILENAME.key()); final Path tempCopyFile = rootDirPath.resolve("." + filename); -final Path copyFile = rootDirPath.resolve(filename); - +final Path copyFile = rootDirPath.resolve(filename); +if (context.getProperty(PREVENT_PATH_ESCAPE).asBoolean() && !copyFile.startsWith(rootDirPath)) { +flowFile = session.penalize(flowFile); +session.transfer(flowFile, REL_FAILURE); +logger.error("Resolved path escapes the root dir path"); Review comment: I would provide the two path values that were compared/tested so the user can understand what really happened.
[jira] [Commented] (NIFI-7856) Provenance failed to be compressed after nifi upgrade to 1.12
[ https://issues.apache.org/jira/browse/NIFI-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225609#comment-17225609 ] Mark Payne commented on NIFI-7856: -- [~leeyoda] Patch Available means that there's a fix/PR available but it hasn't been merged yet. > Provenance failed to be compressed after nifi upgrade to 1.12 > - > > Key: NIFI-7856 > URL: https://issues.apache.org/jira/browse/NIFI-7856 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.12.0 >Reporter: Mengze Li >Assignee: Mark Payne >Priority: Major > Fix For: 1.13.0 > > Attachments: 1683472.prov, NIFI-7856.xml, ls.png, screenshot-1.png, > screenshot-2.png, screenshot-3.png > > Time Spent: 20m > Remaining Estimate: 0h > > We upgraded our nifi cluster from 1.11.3 to 1.12.0. > The nodes come up and everything looks to be functional. I can see 1.12.0 is > running. > Later on, we discovered that the data provenance is missing. From checking > our logs, we see tons of errors compressing the logs. > {code} > 2020-09-28 03:38:35,205 ERROR [Compress Provenance Logs-1-thread-1] > o.a.n.p.s.EventFileCompressor Failed to compress > ./provenance_repository/2752821.prov on rollover > {code} > This didn't happen in 1.11.3. > Is this a known issue? We are considering reverting back if there is no > solution for this since we can't go prod with no/broken data provenance.
[GitHub] [nifi] azgron commented on a change in pull request #4639: Update PutFile.java: fix path traversal vulnerability
azgron commented on a change in pull request #4639: URL: https://github.com/apache/nifi/pull/4639#discussion_r516893887 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutFile.java ## @@ -232,8 +240,14 @@ public void onTrigger(final ProcessContext context, final ProcessSession session final Path rootDirPath = configuredRootDirPath.toAbsolutePath(); String filename = flowFile.getAttribute(CoreAttributes.FILENAME.key()); final Path tempCopyFile = rootDirPath.resolve("." + filename); -final Path copyFile = rootDirPath.resolve(filename); - +final Path copyFile = rootDirPath.resolve(filename); +if (context.getProperty(PREVENT_PATH_ESCAPE).asBoolean() && !copyFile.startsWith(rootDirPath)) { +flowFile = session.penalize(flowFile); +session.transfer(flowFile, REL_FAILURE); +logger.error("Resolved path escapes the root dir path"); Review comment: Do you mean: !(copyFile.startsWith(rootDirPath) || tempCopyFile.startsWith(rootDirPath)) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
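As a rough illustration of the check being discussed (not the final PutFile patch): resolving the filename against the root and comparing with `startsWith` only catches traversal reliably if both paths are normalized first, because `Path.startsWith` compares raw name elements and treats `..` as an ordinary element. The class and method names below are hypothetical; a production version might also use `toRealPath()` to follow symlinks, as the thread notes.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathEscapeCheck {
    // Returns true when the resolved file would land outside the configured root.
    // normalize() collapses "." and ".." segments before the prefix comparison.
    static boolean escapesRoot(Path rootDir, String filename) {
        final Path root = rootDir.toAbsolutePath().normalize();
        final Path resolved = root.resolve(filename).normalize();
        return !resolved.startsWith(root);
    }

    public static void main(String[] args) {
        System.out.println(escapesRoot(Paths.get("/data/out"), "report.csv"));       // false
        System.out.println(escapesRoot(Paths.get("/data/out"), "../../etc/passwd")); // true
        // Without normalize(), resolved.startsWith(root) would be true even for
        // the traversal case ("/data/out/../../etc/passwd" begins with the
        // elements "data", "out"), so the escape would go undetected.
    }
}
```

Logging both `resolved` and `root` in the failure message would give the user the two compared path values, per the review comment above.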
[GitHub] [nifi] markap14 closed pull request #3429: NIFI-6182: Updated dependency on Lucene to Lucene 8.0.0. Updated code…
markap14 closed pull request #3429: URL: https://github.com/apache/nifi/pull/3429 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] markap14 closed pull request #3357: NIFI-5922: NiFi-Fn pull request that builds upon work in PR 3241
markap14 closed pull request #3357: URL: https://github.com/apache/nifi/pull/3357 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] jfrazee closed pull request #4092: NIFI-7115 Add ZooKeeper TLS configuration to Administration Guide
jfrazee closed pull request #4092: URL: https://github.com/apache/nifi/pull/4092 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] markap14 merged pull request #4634: NIFI-7967: Added regex support for attribute header selection on HandleHTTPResponse
markap14 merged pull request #4634: URL: https://github.com/apache/nifi/pull/4634 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] MikeThomsen opened a new pull request #4646: NIFI-6047 Add DeduplicateRecords (combines 6047 and 6014)
MikeThomsen opened a new pull request #4646: URL: https://github.com/apache/nifi/pull/4646 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-** where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? 
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-4344) Improve bulletin messaging with exception details
[ https://issues.apache.org/jira/browse/NIFI-4344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225648#comment-17225648 ] Hans Deragon commented on NIFI-4344: Why not show all the exception messages one after the other (not the full stacktrace, but only what is returned by e.getMessage())? I believe that it would be short enough to not require any UI artefact to hide them. Below is an example. Notice that I would show the ultimate cause at the top (since that is what is most interesting and gives the most information): {{java.lang.NumberFormatException: For input string: "SomeField"}} {{ ┗━▶ Causes org.apache.nifi.serialization.MalformedRecordException: Error while getting next record. Root cause: java.lang.NumberFormatException: For input string: "SomeField"}} {{ ┗━▶ Causes org.apache.nifi.processor.exception.ProcessException: Could not parse incoming data}} BTW, this text should also show up on the top right corner of a processor when the error occurs. Below is the full stacktrace as found in one of NiFi's log files: {{2020-10-23 13:12:05,937 ERROR [Timer-Driven Process Thread-99] o.a.n.processors.standard.ConvertRecord ConvertRecord[id=62133e96-869c-1fe8-8544-23da01ef508e] Failed to process StandardFlowFileRecord[uuid=77efbd38-8f4e-4f56-9a04-0e43e38b3d36,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1603473042750-53511752, container=default, section=584], offset=411236, length=898749],offset=0,name=CDFRAF1CANVT21226_20200716115825.info,size=898749]; will route to failure: org.apache.nifi.processor.exception.ProcessException: Could not parse incoming data}} {{org.apache.nifi.processor.exception.ProcessException: Could not parse incoming data}} {{ at org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:171)}} {{ at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:3006)}} {{ at 
org.apache.nifi.processors.standard.AbstractRecordProcessor.onTrigger(AbstractRecordProcessor.java:122)}} {{ at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)}} {{ at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)}} {{ at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)}} {{ at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)}} {{ at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)}} {{ at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)}} {{ at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)}} {{ at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)}} {{ at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)}} {{ at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}} {{ at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}} {{ at java.lang.Thread.run(Thread.java:748)}} {{Caused by: org.apache.nifi.serialization.MalformedRecordException: Error while getting next record. Root cause: java.lang.NumberFormatException: For input string: "SomeField"}} {{ at org.apache.nifi.csv.CSVRecordReader.nextRecord(CSVRecordReader.java:119)}} {{ at org.apache.nifi.serialization.RecordReader.nextRecord(RecordReader.java:50)}} {{ at org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:131)}} {{ ... 
14 common frames omitted}} {{Caused by: java.lang.NumberFormatException: For input string: "SomeField"}} {{ at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)}} {{ at java.lang.Integer.parseInt(Integer.java:580)}} {{ at java.lang.Integer.parseInt(Integer.java:615)}} {{ at org.apache.nifi.serialization.record.util.DataTypeUtils.toInteger(DataTypeUtils.java:1429)}} {{ at org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:153)}} {{ at org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:127)}} {{ at org.apache.nifi.csv.AbstractCSVRecordReader.convert(AbstractCSVRecordReader.java:86)}} {{ at org.apache.nifi.csv.CSVRecordReader.nextRecord(CSVRecordReader.java:105)}} {{ ... 16 common frames omitted}} > Improve bulletin messaging with exception details > - > > Key: NIFI-4344 > URL: https://issues.apache.org/jira/browse/NIFI-4344 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, Core UI >Reporter: Pierre Villard >Priority: Major > > In some environments it is not possible/allowed to access the NiFi nodes (and > consequently the log
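The compact rendering proposed in the comment above can be sketched by walking the Throwable cause chain and emitting each link's getMessage(), root cause first. The class and method names here are illustrative, not NiFi API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CauseChainFormatter {
    // Collects "ClassName: message" for each link in the cause chain, then
    // reverses the list so the ultimate (root) cause appears at the top,
    // as proposed in the comment.
    static String format(Throwable t) {
        List<String> lines = new ArrayList<>();
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            lines.add(cur.getClass().getName() + ": " + cur.getMessage());
        }
        Collections.reverse(lines);
        StringBuilder sb = new StringBuilder(lines.get(0));
        for (int i = 1; i < lines.size(); i++) {
            sb.append("\n \u2517\u2501\u25B6 Causes ").append(lines.get(i));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Exception root = new NumberFormatException("For input string: \"SomeField\"");
        Exception mid = new IllegalStateException("Error while getting next record", root);
        Exception top = new RuntimeException("Could not parse incoming data", mid);
        System.out.println(format(top));
    }
}
```

A production version would also cap the chain length to guard against pathological or cyclic cause chains, but the sketch shows the gist of the proposed bulletin text.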
[GitHub] [nifi] jfrazee commented on a change in pull request #4613: NiFi-7819 - Add Zookeeper client TLS (external zookeeper) for cluster state management
jfrazee commented on a change in pull request #4613: URL: https://github.com/apache/nifi/pull/4613#discussion_r516834908

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/state/providers/zookeeper/ZooKeeperStateProvider.java ##

@@ -133,20 +146,54 @@ public ZooKeeperStateProvider() {
         return properties;
     }
-
     @Override
     public synchronized void init(final StateProviderInitializationContext context) {
         connectionString = context.getProperty(CONNECTION_STRING).getValue();
         rootNode = context.getProperty(ROOT_NODE).getValue();
         timeoutMillis = context.getProperty(SESSION_TIMEOUT).asTimePeriod(TimeUnit.MILLISECONDS).intValue();
+        final Properties stateProviderProperties = new Properties();
+        stateProviderProperties.setProperty(NiFiProperties.ZOOKEEPER_SESSION_TIMEOUT, String.valueOf(timeoutMillis));
+        stateProviderProperties.setProperty(NiFiProperties.ZOOKEEPER_CONNECT_TIMEOUT, String.valueOf(timeoutMillis));
+        stateProviderProperties.setProperty(NiFiProperties.ZOOKEEPER_ROOT_NODE, rootNode);
+        stateProviderProperties.setProperty(NiFiProperties.ZOOKEEPER_CONNECT_STRING, connectionString);
+
+        zooKeeperClientConfig = ZooKeeperClientConfig.createConfig(combineProperties(nifiProperties, stateProviderProperties));
         if (context.getProperty(ACCESS_CONTROL).getValue().equalsIgnoreCase(CREATOR_ONLY.getValue())) {
             acl = Ids.CREATOR_ALL_ACL;
         } else {
             acl = Ids.OPEN_ACL_UNSAFE;
         }
     }
+    /**
+     * Combine properties from NiFiProperties and additional properties, allowing these additional properties to override settings
+     * in the given NiFiProperties.
+     * @param nifiProps A NiFiProperties to be combined with some additional properties
+     * @param additionalProperties Additional properties that can be used to override properties in the given NiFiProperties
+     * @return NiFiProperties containing the combined properties
+     */
+    static NiFiProperties combineProperties(NiFiProperties nifiProps, Properties additionalProperties) {
+        return new NiFiProperties() {
+            @Override
+            public String getProperty(String key) {
+                String property = additionalProperties.getProperty(key);
+                if(nifiProps != null) {
+                    return property != null ? property : nifiProps.getProperty(key);
+                } else {
+                    return null;
+                }
+            }

Review comment: If I'm reading this right, when nifiProps is null then this returns null no matter whether there's an additional property set or not. Is this the desired behavior? As an override I would have expected the additional property to be returned and this could simplify to:

```
return additionalProperties.getProperty(key, nifiProps != null ? nifiProps.getProperty(key) : null);
```

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] thenatog commented on a change in pull request #4613: NiFi-7819 - Add Zookeeper client TLS (external zookeeper) for cluster state management
thenatog commented on a change in pull request #4613: URL: https://github.com/apache/nifi/pull/4613#discussion_r516986718

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/state/providers/zookeeper/ZooKeeperStateProvider.java ##

@@ -133,20 +146,54 @@ public ZooKeeperStateProvider() {
         return properties;
     }
-
     @Override
     public synchronized void init(final StateProviderInitializationContext context) {
         connectionString = context.getProperty(CONNECTION_STRING).getValue();
         rootNode = context.getProperty(ROOT_NODE).getValue();
         timeoutMillis = context.getProperty(SESSION_TIMEOUT).asTimePeriod(TimeUnit.MILLISECONDS).intValue();
+        final Properties stateProviderProperties = new Properties();
+        stateProviderProperties.setProperty(NiFiProperties.ZOOKEEPER_SESSION_TIMEOUT, String.valueOf(timeoutMillis));
+        stateProviderProperties.setProperty(NiFiProperties.ZOOKEEPER_CONNECT_TIMEOUT, String.valueOf(timeoutMillis));
+        stateProviderProperties.setProperty(NiFiProperties.ZOOKEEPER_ROOT_NODE, rootNode);
+        stateProviderProperties.setProperty(NiFiProperties.ZOOKEEPER_CONNECT_STRING, connectionString);
+
+        zooKeeperClientConfig = ZooKeeperClientConfig.createConfig(combineProperties(nifiProperties, stateProviderProperties));
         if (context.getProperty(ACCESS_CONTROL).getValue().equalsIgnoreCase(CREATOR_ONLY.getValue())) {
             acl = Ids.CREATOR_ALL_ACL;
         } else {
             acl = Ids.OPEN_ACL_UNSAFE;
         }
     }
+    /**
+     * Combine properties from NiFiProperties and additional properties, allowing these additional properties to override settings
+     * in the given NiFiProperties.
+     * @param nifiProps A NiFiProperties to be combined with some additional properties
+     * @param additionalProperties Additional properties that can be used to override properties in the given NiFiProperties
+     * @return NiFiProperties containing the combined properties
+     */
+    static NiFiProperties combineProperties(NiFiProperties nifiProps, Properties additionalProperties) {
+        return new NiFiProperties() {
+            @Override
+            public String getProperty(String key) {
+                String property = additionalProperties.getProperty(key);
+                if(nifiProps != null) {
+                    return property != null ? property : nifiProps.getProperty(key);
+                } else {
+                    return null;
+                }
+            }

Review comment: You're right, I think this is wrong. I wrote it like this to solve an issue I was having with tests but it wasn't the best way to handle it. Will fix. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
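The reviewer's suggested one-liner relies on java.util.Properties.getProperty(key, defaultValue): the override wins when present, and the base value (or null) is used only as the fallback. A small self-contained demonstration of those semantics, with made-up property names and a plain Properties standing in for NiFiProperties:

```java
import java.util.Properties;

public class PropertyOverrideDemo {
    // Mirrors the suggested fix: the override is returned when set, otherwise
    // fall back to the base properties; a null base simply means no fallback.
    static String resolve(Properties base, Properties overrides, String key) {
        return overrides.getProperty(key, base != null ? base.getProperty(key) : null);
    }

    public static void main(String[] args) {
        Properties base = new Properties();
        base.setProperty("nifi.zookeeper.root.node", "/nifi");
        base.setProperty("nifi.zookeeper.session.timeout", "3000");

        Properties overrides = new Properties();
        overrides.setProperty("nifi.zookeeper.session.timeout", "10000");

        // Override wins where set, base fills the gaps, and the override is
        // still honored even when the base properties are null.
        System.out.println(resolve(base, overrides, "nifi.zookeeper.session.timeout"));
        System.out.println(resolve(base, overrides, "nifi.zookeeper.root.node"));
        System.out.println(resolve(null, overrides, "nifi.zookeeper.session.timeout"));
    }
}
```

This is exactly the behavior the original anonymous class got wrong: with a null NiFiProperties it discarded the override entirely instead of returning it.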
[jira] [Commented] (NIFI-7976) JSON Schema Inference infers long type when all values fit in integer type
[ https://issues.apache.org/jira/browse/NIFI-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225672#comment-17225672 ] ASF subversion and git services commented on NIFI-7976: --- Commit 71a5735f63182e93418a36af1946645805938c44 in nifi's branch refs/heads/main from Matt Burgess [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=71a5735 ] NIFI-7976: Infer ints in JSON when all values fit in integer type NIFI-7976: Fixed unit test This closes #4645 Signed-off-by: Mike Thomsen > JSON Schema Inference infers long type when all values fit in integer type > -- > > Key: NIFI-7976 > URL: https://issues.apache.org/jira/browse/NIFI-7976 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > For CSV and XML Schema Inference engines, integral values are checked to see > if they fit into an integer type, and if so, the engine infers "int" type, > otherwise it infers "long" type. However the JSON inference engine only > checks for "bigint" and everything else is "long". > It should also check to see if the values fit in an integer type and infer > "int" in that case. -- This message was sent by Atlassian Jira (v8.3.4#803005)
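The fix described above comes down to a range check before choosing the Avro type: infer "int" only when every integral value fits in Java's int range. A minimal sketch of that check (class and method names are illustrative, not the actual NiFi schema-inference API):

```java
public class IntegralTypeInference {
    // Returns "int" when all sampled values fit in a 32-bit signed integer,
    // otherwise widens to "long", matching the CSV/XML inference behavior
    // that NIFI-7976 brings to the JSON engine.
    static String inferIntegralType(long[] values) {
        for (long v : values) {
            if (v < Integer.MIN_VALUE || v > Integer.MAX_VALUE) {
                return "long";
            }
        }
        return "int";
    }

    public static void main(String[] args) {
        System.out.println(inferIntegralType(new long[] {1, 42, -7}));         // fits: int
        System.out.println(inferIntegralType(new long[] {1, 3_000_000_000L})); // overflows: long
    }
}
```

A single out-of-range value is enough to force the wider type, which is why inference has to scan all sampled values rather than stop at the first.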
[jira] [Commented] (NIFI-7976) JSON Schema Inference infers long type when all values fit in integer type
[ https://issues.apache.org/jira/browse/NIFI-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225671#comment-17225671 ] ASF subversion and git services commented on NIFI-7976: --- Commit 71a5735f63182e93418a36af1946645805938c44 in nifi's branch refs/heads/main from Matt Burgess [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=71a5735 ] NIFI-7976: Infer ints in JSON when all values fit in integer type NIFI-7976: Fixed unit test This closes #4645 Signed-off-by: Mike Thomsen > JSON Schema Inference infers long type when all values fit in integer type > -- > > Key: NIFI-7976 > URL: https://issues.apache.org/jira/browse/NIFI-7976 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > For CSV and XML Schema Inference engines, integral values are checked to see > if they fit into an integer type, and if so, the engine infers "int" type, > otherwise it infers "long" type. However the JSON inference engine only > checks for "bigint" and everything else is "long". > It should also check to see if the values fit in an integer type and infer > "int" in that case. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] asfgit closed pull request #4645: NIFI-7976: Infer ints in JSON when all values fit in integer type
asfgit closed pull request #4645: URL: https://github.com/apache/nifi/pull/4645 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-7976) JSON Schema Inference infers long type when all values fit in integer type
[ https://issues.apache.org/jira/browse/NIFI-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Thomsen updated NIFI-7976: --- Resolution: Fixed Status: Resolved (was: Patch Available) > JSON Schema Inference infers long type when all values fit in integer type > -- > > Key: NIFI-7976 > URL: https://issues.apache.org/jira/browse/NIFI-7976 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > For CSV and XML Schema Inference engines, integral values are checked to see > if they fit into an integer type, and if so, the engine infers "int" type, > otherwise it infers "long" type. However the JSON inference engine only > checks for "bigint" and everything else is "long". > It should also check to see if the values fit in an integer type and infer > "int" in that case. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-7976) JSON Schema Inference infers long type when all values fit in integer type
[ https://issues.apache.org/jira/browse/NIFI-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Thomsen updated NIFI-7976: --- Fix Version/s: 1.13.0 > JSON Schema Inference infers long type when all values fit in integer type > -- > > Key: NIFI-7976 > URL: https://issues.apache.org/jira/browse/NIFI-7976 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Fix For: 1.13.0 > > Time Spent: 20m > Remaining Estimate: 0h > > For CSV and XML Schema Inference engines, integral values are checked to see > if they fit into an integer type, and if so, the engine infers "int" type, > otherwise it infers "long" type. However the JSON inference engine only > checks for "bigint" and everything else is "long". > It should also check to see if the values fit in an integer type and infer > "int" in that case. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [nifi] bbende commented on pull request #4512: NIFI-1121: Support making properties dependent upon one another
bbende commented on pull request #4512: URL: https://github.com/apache/nifi/pull/4512#issuecomment-721359149 Thanks, latest updates look good. I noticed one minor issue with the dependency setup for the "Schema Branch" and "Schema Version" properties, the PR has them set to depend on the two HWX strategies, but really they go with the "Schema Name" strategy. I'm going to include a small commit that corrects that, the diff looks like this...

```
+++ b/nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/schema/access/SchemaAccessUtils.java
@@ -88,7 +88,7 @@ public class SchemaAccessUtils {
                 "If the chosen Schema Registry does not support branching, this value will be ignored.")
             .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
             .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
-            .dependsOn(SCHEMA_ACCESS_STRATEGY, HWX_SCHEMA_REF_ATTRIBUTES, HWX_CONTENT_ENCODED_SCHEMA)
+            .dependsOn(SCHEMA_ACCESS_STRATEGY, SCHEMA_NAME_PROPERTY)
             .required(false)
             .build();
@@ -99,7 +99,7 @@ public class SchemaAccessUtils {
                 "If not specified then the latest version of the schema will be retrieved.")
             .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
             .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
-            .dependsOn(SCHEMA_ACCESS_STRATEGY, HWX_SCHEMA_REF_ATTRIBUTES, HWX_CONTENT_ENCODED_SCHEMA)
+            .dependsOn(SCHEMA_ACCESS_STRATEGY, SCHEMA_NAME_PROPERTY)
             .required(false)
             .build();
```

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-1121) Allow components' properties to depend on one another
[ https://issues.apache.org/jira/browse/NIFI-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225662#comment-17225662 ] ASF subversion and git services commented on NIFI-1121: --- Commit 4bd9d7b4139bb05db96a15c9cf770ccd3c33bb38 in nifi's branch refs/heads/main from mtien [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=4bd9d7b ] NIFI-1121 Show and hide properties that depend on another property. Co-authored-by: Scott Aslan Signed-off-by: Bryan Bende > Allow components' properties to depend on one another > - > > Key: NIFI-1121 > URL: https://issues.apache.org/jira/browse/NIFI-1121 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.11.4 >Reporter: Mark Payne >Assignee: M Tien >Priority: Major > Time Spent: 2h 20m > Remaining Estimate: 0h > > Concept: A Processor developer (or Controller Service or Reporting Task > developer) should be able to indicate when building a PropertyDescriptor that > the property is "dependent on" another Property. If Property A depends on > Property B, then the following should happen: > Property A should not be shown in the Configure dialog unless a value is > selected for Property B. Additionally, if Property A is dependent on > particular values of Property B, then Property A should be shown only if > Property B is set to one of those values. > For example, in Compress Content, the "Compression Level" property should be > dependent on the "Mode" property being set to "Compress." This means that if > the "Mode" property is set to Decompress, then the UI would not show the > Compression Level property. This will be far less confusing for users, as it > will allow the UI to hide properties that are irrelevant based on the > configuration. > Additionally, if Property A depends on Property B and Property A is required, > then a valid value must be set for Property A ONLY if Property B is set to a > value that Property A depends on. 
I.e., in the example above, the Compression > Level property can be required, but if the Mode is not set to Compress, then > it doesn't matter if the Compression Level property is set to a valid value - > the Processor will still be valid, because Compression Level is not a > relevant property in this case. > This allows developers to provide validation much more easily, as many > times the developer currently must implement the customValidate method to > ensure that if Property A is set that Property B must also be set. In this > case, it is taken care of by the framework simply by adding a dependency. > From an API perspective, it would manifest itself as having a new "dependsOn" > method added to the PropertyDescriptor.Builder class: > {code} > /** > * Indicates that this Property is relevant if and only if the parent property > has some (any) value set. > **/ > Builder dependsOn(PropertyDescriptor parent); > {code} > {code} > /** > * Indicates that this Property is relevant if and only if the parent > property is set to one of the values included in the 'relevantValues' > Collection. > **/ > Builder dependsOn(PropertyDescriptor parent, Collection > relevantValues); > {code} > In providing this capability, we will not only be able to hide properties > that are not valid based on the Processor's other configuration but will also > make the notion of "Strategy Properties" far more powerful/easy to use. This > is because we can now have a Property such as "My Capability Strategy" and > then have properties that are shown for each of the allowed strategies. > For example, in MergeContent, the Header, Footer, Demarcator could become > dependent on the "Bin-Packing Algorithm" Merge Strategy. These properties can > then be thought of logically as properties of that strategy itself. > This will require a few different parts of the application to be updated: > * nifi-api - must be updated to support the new methods. 
> * nifi-framework-core - must be updated to handle new validation logic for > components > * nifi-web - must be updated to show/hide properties based on other > properties' values > * nifi-mock - needs to handle the validation logic and ensure that developers > are using the API properly, throwing AssertionErrors if not > * nifi-docs - need to update the Developer Guide to explain how this works > * processors - many processors can be updated to take advantage of this new > capability -- This message was sent by Atlassian Jira (v8.3.4#803005)
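The relevance rule described in the issue can be sketched with a stripped-down stand-in for PropertyDescriptor. This is not the NiFi class, just an illustration of "Property A is shown only when Property B holds one of the relevant values":

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class DependsOnDemo {
    // Minimal stand-in for NiFi's PropertyDescriptor: a property may depend
    // on a parent property and, optionally, on specific parent values.
    static class PropertyDescriptor {
        final String name;
        final PropertyDescriptor parent;
        final List<String> relevantValues;

        PropertyDescriptor(String name, PropertyDescriptor parent, List<String> relevantValues) {
            this.name = name;
            this.parent = parent;
            this.relevantValues = relevantValues;
        }

        // Shown (and validated) only when there is no parent, or the parent's
        // current value is one of the relevant values. An empty relevantValues
        // list means "any parent value will do".
        boolean isRelevant(String parentValue) {
            return parent == null || relevantValues.isEmpty() || relevantValues.contains(parentValue);
        }
    }

    public static void main(String[] args) {
        PropertyDescriptor mode = new PropertyDescriptor("Mode", null, Collections.emptyList());
        PropertyDescriptor level = new PropertyDescriptor("Compression Level", mode, Arrays.asList("compress"));
        System.out.println(level.isRelevant("compress"));   // shown
        System.out.println(level.isRelevant("decompress")); // hidden
    }
}
```

This is also the hook for the validation change the issue describes: a required-but-irrelevant property is simply skipped during validation, so customValidate is no longer needed for "A requires B" cases.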
[jira] [Commented] (NIFI-1121) Allow components' properties to depend on one another
[ https://issues.apache.org/jira/browse/NIFI-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225666#comment-17225666 ]

ASF subversion and git services commented on NIFI-1121:

Commit 42c2cda9a21fc1d66040c6b95dcff1164515e2a6 in nifi's branch refs/heads/main from mtien
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=42c2cda ]
NIFI-1121 Fixed a dependent value check error.
Signed-off-by: Bryan Bende

> Allow components' properties to depend on one another
> -----------------------------------------------------
>
> Key: NIFI-1121
> URL: https://issues.apache.org/jira/browse/NIFI-1121
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.11.4
> Reporter: Mark Payne
> Assignee: M Tien
> Priority: Major
> Time Spent: 2h 20m
> Remaining Estimate: 0h
>
> Concept: A Processor developer (or Controller Service or Reporting Task developer) should be able to indicate, when building a PropertyDescriptor, that the property is "dependent on" another property. If Property A depends on Property B, then the following should happen:
>
> Property A should not be shown in the Configure dialog unless a value is selected for Property B. Additionally, if Property A is dependent on particular values of Property B, then Property A should be shown only if Property B is set to one of those values.
>
> For example, in Compress Content, the "Compression Level" property should be dependent on the "Mode" property being set to "Compress." This means that if the "Mode" property is set to Decompress, the UI would not show the Compression Level property. This will be far less confusing for users, as it allows the UI to hide properties that are irrelevant given the current configuration.
>
> Additionally, if Property A depends on Property B and Property A is required, then a valid value must be set for Property A ONLY if Property B is set to a value that Property A depends on. I.e., in the example above, the Compression Level property can be required, but if the Mode is not set to Compress, then it doesn't matter whether the Compression Level property is set to a valid value - the Processor will still be valid, because Compression Level is not a relevant property in this case.
>
> This enables developers to provide validation much more easily, as currently the developer must often implement the customValidate method to ensure that if Property A is set, then Property B must also be set. In this case, it is taken care of by the framework simply by adding a dependency.
>
> From an API perspective, it would manifest itself as a new "dependsOn" method added to the PropertyDescriptor.Builder class:
> {code}
> /**
>  * Indicates that this Property is relevant if and only if the parent property has some (any) value set.
>  **/
> Builder dependsOn(PropertyDescriptor parent);
> {code}
> {code}
> /**
>  * Indicates that this Property is relevant if and only if the parent property is set to one of the values included in the 'relevantValues' Collection.
>  **/
> Builder dependsOn(PropertyDescriptor parent, Collection relevantValues);
> {code}
>
> In providing this capability, we will not only be able to hide properties that are not valid based on the Processor's other configuration but will also make the notion of "Strategy Properties" far more powerful and easy to use. This is because we can now have a property such as "My Capability Strategy" and then have properties that are shown for each of the allowed strategies.
>
> For example, in MergeContent, the Header, Footer, and Demarcator properties could become dependent on the "Bin-Packing Algorithm" Merge Strategy. These properties can then be thought of logically as properties of that strategy itself.
>
> This will require a few different parts of the application to be updated:
> * nifi-api - must be updated to support the new methods
> * nifi-framework-core - must be updated to handle new validation logic for components
> * nifi-web - must be updated to show/hide properties based on other properties' values
> * nifi-mock - needs to handle the validation logic and ensure that developers are using the API properly, throwing AssertionErrors if not
> * nifi-docs - need to update the Developer Guide to explain how this works
> * processors - many processors can be updated to take advantage of this new capability

-- This message was sent by Atlassian Jira (v8.3.4#803005)
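The validation rule proposed in the ticket - a required dependent property only fails validation while its parent is set to a relevant value - can be sketched with a minimal, self-contained model. The class and method names below are illustrative, not the actual nifi-api types:

```java
import java.util.Map;
import java.util.Set;

// Minimal illustrative model of dependency-aware property validation.
// "SimpleProperty" is a hypothetical stand-in, not NiFi's PropertyDescriptor.
class SimpleProperty {
    final String name;
    final boolean required;
    final SimpleProperty parent;        // property this one depends on, or null
    final Set<String> relevantValues;   // parent values that make this one relevant

    SimpleProperty(String name, boolean required, SimpleProperty parent, Set<String> relevantValues) {
        this.name = name;
        this.required = required;
        this.parent = parent;
        this.relevantValues = relevantValues;
    }

    // Relevant if there is no parent, or the parent is set to one of the
    // relevant values (any value counts when no specific values were given).
    boolean isRelevant(Map<String, String> config) {
        if (parent == null) return true;
        String parentValue = config.get(parent.name);
        if (parentValue == null) return false;
        return relevantValues.isEmpty() || relevantValues.contains(parentValue);
    }

    // A required property fails validation only while it is relevant.
    boolean isValid(Map<String, String> config) {
        if (!isRelevant(config)) return true;
        return !required || config.get(name) != null;
    }
}
```

With the Compress Content example, a required "Compression Level" depending on Mode = "Compress" is valid when Mode is "Decompress" even if unset, and invalid only when Mode is "Compress" and no level is given.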
[jira] [Commented] (NIFI-1121) Allow components' properties to depend on one another
[ https://issues.apache.org/jira/browse/NIFI-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225664#comment-17225664 ]

ASF subversion and git services commented on NIFI-1121:

Commit 535cab3167a5c2cd88ef560494628186317bf0d3 in nifi's branch refs/heads/main from mtien
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=535cab3 ]
NIFI-1121: Added an additional check for hidden properties to account for transitive dependent properties.
- Added a 'dependent' attribute to determine whether or not to save dependent property values
Co-authored-by: Scott Aslan
Signed-off-by: Bryan Bende
[jira] [Commented] (NIFI-1121) Allow components' properties to depend on one another
[ https://issues.apache.org/jira/browse/NIFI-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225667#comment-17225667 ]

ASF subversion and git services commented on NIFI-1121:

Commit d773521ee01f3a34fdd21365a86c8de7b82077a6 in nifi's branch refs/heads/main from Bryan Bende
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=d773521 ]
NIFI-1121 Fix Schema Name and Schema Branch properties
This closes #4512.
Signed-off-by: Bryan Bende
[jira] [Commented] (NIFI-1121) Allow components' properties to depend on one another
[ https://issues.apache.org/jira/browse/NIFI-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225661#comment-17225661 ]

ASF subversion and git services commented on NIFI-1121:

Commit f7f336a4b0ad121e127a7c8c539bbb110ca53b40 in nifi's branch refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=f7f336a ]
NIFI-1121: Added API changes for having one Property depend on another
Signed-off-by: Bryan Bende
[jira] [Commented] (NIFI-1121) Allow components' properties to depend on one another
[ https://issues.apache.org/jira/browse/NIFI-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225663#comment-17225663 ]

ASF subversion and git services commented on NIFI-1121:

Commit e2e901b6b97c8099038230b430905bd5c85bf6ae in nifi's branch refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=e2e901b ]
NIFI-1121: Added property dependencies to MergeContent
Signed-off-by: Bryan Bende
[jira] [Updated] (NIFI-1121) Allow components' properties to depend on one another
[ https://issues.apache.org/jira/browse/NIFI-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bryan Bende updated NIFI-1121:
Fix Version/s: 1.13.0
Resolution: Fixed
Status: Resolved (was: Patch Available)
[jira] [Commented] (NIFI-1121) Allow components' properties to depend on one another
[ https://issues.apache.org/jira/browse/NIFI-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225665#comment-17225665 ]

ASF subversion and git services commented on NIFI-1121:

Commit 4b9014b9596a8f479a32d0b3e52bc6649b0f0d1b in nifi's branch refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=4b9014b ]
NIFI-1121: Updated backend to perform appropriate validation. Added tests. Updated documentation writer. Updated dev guide to explain how PropertyDescriptor.Builder#dependsOn affects validation. Updated JavaDocs for PropertyDescriptor.Builder#dependsOn
Signed-off-by: Bryan Bende
[GitHub] [nifi] asfgit closed pull request #4512: NIFI-1121: Support making properties dependent upon one another
asfgit closed pull request #4512: URL: https://github.com/apache/nifi/pull/4512 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] MikeThomsen closed pull request #2877: NIFI-5407: Add a MetricsReportingTask to send to ElasticSearch.
MikeThomsen closed pull request #2877:
URL: https://github.com/apache/nifi/pull/2877
[jira] [Resolved] (NIFI-385) Add Kerberos support in nifi-kite-nar
[ https://issues.apache.org/jira/browse/NIFI-385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Thomsen resolved NIFI-385.
Resolution: Won't Do

Based on these update dates, Kite appears to be abandoned. https://search.maven.org/search?q=g:org.kitesdk

> Add Kerberos support in nifi-kite-nar
> -------------------------------------
>
> Key: NIFI-385
> URL: https://issues.apache.org/jira/browse/NIFI-385
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Ryan Blue
> Priority: Major
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Kite should be able to connect to a Kerberized Hadoop cluster to store data. Kite's Flume connector has working code. The Kite dataset needs to be instantiated in a {{doPrivileged}} block and its internal {{FileSystem}} object will hold the credentials after that.
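The {{doPrivileged}} pattern mentioned in the ticket can be sketched as follows. This is a generic illustration only - `openDataset` is a hypothetical placeholder for the real Kite dataset factory call, which would capture the Kerberos-authenticated security context at creation time:

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

// Sketch of instantiating a resource inside a doPrivileged block so the
// privileged security context is in effect when the resource is created.
// "openDataset" is a placeholder, not a real Kite API.
public class PrivilegedOpen {
    static String openDataset(String uri) {
        return "dataset:" + uri; // stands in for the credentialed open
    }

    public static String openWithPrivileges(String uri) {
        return AccessController.doPrivileged(
                (PrivilegedAction<String>) () -> openDataset(uri));
    }
}
```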
[GitHub] [nifi] mtien-apache commented on pull request #4630: NIFI-7924: add fallback claims for identifying user
mtien-apache commented on pull request #4630:
URL: https://github.com/apache/nifi/pull/4630#issuecomment-721452676

@sjyang18 Thank you for submitting this. I've reviewed it, and the functionality LGTM. I verified that I can log in with OIDC enabled and that the tests use the listed fallback claim when "email" is not available. One suggestion is to update the NiFi docs with a description of the new fallback-claim property. Here's a link to the docs section I'm referring to: http://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#openid_connect I can give a +1 once this is updated. Thanks!
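The fallback behavior being reviewed - prefer the "email" claim, then try a configured, ordered list of fallback claims - can be sketched with a small self-contained helper. This is an illustration of the lookup logic only, not NiFi's actual OIDC implementation; the class and method names are hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;

// Illustrative sketch: resolve a user identity from an OIDC claims map,
// preferring "email" and falling back to an ordered list of claim names.
class ClaimResolver {
    static Optional<String> resolveIdentity(Map<String, String> claims, List<String> fallbackClaims) {
        String email = claims.get("email");
        if (email != null) {
            return Optional.of(email);
        }
        return fallbackClaims.stream()
                .map(claims::get)
                .filter(Objects::nonNull)
                .findFirst();
    }
}
```

For example, with claims lacking "email" but containing "upn", a fallback list of ["preferred_username", "upn"] would resolve to the "upn" value.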
[GitHub] [nifi] MikeThomsen closed pull request #2029: NIFI-385 Add Kerberos Support to Kite
MikeThomsen closed pull request #2029:
URL: https://github.com/apache/nifi/pull/2029
[GitHub] [nifi] MikeThomsen commented on pull request #1364: NIFI-1856 ExecuteStreamCommand Needs to Consume Standard Error
MikeThomsen commented on pull request #1364:
URL: https://github.com/apache/nifi/pull/1364#issuecomment-721468108

GitHub is reporting that the original repo link is broken, so I'm going to close. Feel free to resubmit.
[GitHub] [nifi] MikeThomsen closed pull request #1364: NIFI-1856 ExecuteStreamCommand Needs to Consume Standard Error
MikeThomsen closed pull request #1364:
URL: https://github.com/apache/nifi/pull/1364
[GitHub] [nifi] MikeThomsen commented on pull request #1696: NIFI-1655 - Add .gitattributes to specifically define
MikeThomsen commented on pull request #1696:
URL: https://github.com/apache/nifi/pull/1696#issuecomment-721469765

@trixpan if you still want to pursue this, feel free to clean it up, rebase it off main, and tag me.
[GitHub] [nifi] MikeThomsen closed pull request #1696: NIFI-1655 - Add .gitattributes to specifically define
MikeThomsen closed pull request #1696:
URL: https://github.com/apache/nifi/pull/1696
[GitHub] [nifi] MikeThomsen commented on pull request #1964: Nifi 4141 - Adding a processor for convert Json to Orc
MikeThomsen commented on pull request #1964:
URL: https://github.com/apache/nifi/pull/1964#issuecomment-721448117

@pvillard31 I agree, and I'm closing based on age and the fact that the record paradigm is so entrenched that adding another convert-from-A-to-B processor doesn't make sense at this point.