[jira] [Created] (NIFI-8048) PutElasticsearchRecord should retry with 503 response
Koji Kawamura created NIFI-8048: --- Summary: PutElasticsearchRecord should retry with 503 response Key: NIFI-8048 URL: https://issues.apache.org/jira/browse/NIFI-8048 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.12.1 Reporter: Koji Kawamura Assignee: Koji Kawamura During an Elasticsearch full cluster restart, PutElasticsearchRecord routed incoming FlowFiles to its 'failure' relationship. In contrast, PutElasticsearchHttp / PutElasticsearchHttpRecord routed incoming FlowFiles to 'retry'. While the PutElasticsearchHttp processors determine whether a request can be retried by checking if the HTTP status code is 5XX, PutElasticsearchRecord and the corresponding ElasticsearchError only check the name of the thrown exception. To make PutElasticsearchRecord more resilient against such situations, ElasticsearchError should treat ResponseException as recoverable when the HTTP status code is 503 Service Unavailable. -- This message was sent by Atlassian Jira (v8.3.4#803005)
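The retry decision described above can be sketched as a status-code check rather than an exception-name check. This is an illustrative sketch only; the class and method names are hypothetical, not the actual ElasticsearchError API.

```java
// Hypothetical sketch: classify retryability by HTTP status code instead of
// by the name of the thrown exception. 5xx responses, notably 503 Service
// Unavailable during a full cluster restart, are transient and retryable.
public class RetryClassifier {
    public static boolean isRecoverable(int statusCode) {
        return statusCode >= 500 && statusCode < 600;
    }

    public static void main(String[] args) {
        System.out.println(isRecoverable(503)); // transient -> route to 'retry'
        System.out.println(isRecoverable(400)); // client error -> route to 'failure'
    }
}
```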
[jira] [Updated] (NIFI-6404) PutElasticsearchHttp: Remove _type as being compulsory
[ https://issues.apache.org/jira/browse/NIFI-6404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6404: Fix Version/s: 1.13.0 Resolution: Fixed Status: Resolved (was: Patch Available) > PutElasticsearchHttp: Remove _type as being compulsory > -- > > Key: NIFI-6404 > URL: https://issues.apache.org/jira/browse/NIFI-6404 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.10.0, 1.9.2 > Environment: Elasticsearch 7.x >Reporter: David Vassallo >Assignee: Joseph Gresock >Priority: Major > Fix For: 1.13.0 > > > In ES 7.x and above, document "type" is no longer compulsory and in fact is > deprecated. When using the 1.9.2 version of PutElasticsearchHttp with ES > v7.2, it still works; however, you'll see the following HTTP warning header in the response: > > {{HTTP/1.1 200 OK}} > *{{Warning: 299 Elasticsearch-7.2.0-508c38a "[types removal] Specifying > types in bulk requests is deprecated."}}* > {{content-type: application/json; charset=UTF-8}} > > The fix is relatively straightforward: > * In *PutElasticsearchHttp.java*, remove the requirement of a compulsory > "Type" property: > {code:java} > public static final PropertyDescriptor TYPE = new PropertyDescriptor.Builder() > .name("put-es-type") > .displayName("Type") > .description("The type of this document (used by Elasticsearch < 7.0 for > indexing and searching). Leave empty for ES >= 7.0") // <- > .required(false) // <- CHANGE > .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) > .addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR) > .build(); > {code} > > * In *AbstractElasticsearchHttpProcessor.java*, check for the presence of > "docType". 
If not present, assume Elasticsearch 7.x or above and omit it from the > bulk API request: > > {code:java} > protected void buildBulkCommand(StringBuilder sb, String index, String > docType, String indexOp, String id, String jsonString) { > if (indexOp.equalsIgnoreCase("index")) { > sb.append("{\"index\": { \"_index\": \""); > sb.append(StringEscapeUtils.escapeJson(index)); > if (!StringUtils.isEmpty(docType)) { // <- CHANGE START (isEmpty already handles null) > sb.append("\", \"_type\": \""); > sb.append(StringEscapeUtils.escapeJson(docType)); > sb.append("\""); > }// <- CHANGE END > if (!StringUtils.isEmpty(id)) { > sb.append(", \"_id\": \""); > sb.append(StringEscapeUtils.escapeJson(id)); > sb.append("\""); > } > sb.append("}}\n"); > sb.append(jsonString); > sb.append("\n"); > } else if (indexOp.equalsIgnoreCase("upsert") || > indexOp.equalsIgnoreCase("update")) { > sb.append("{\"update\": { \"_index\": \""); > sb.append(StringEscapeUtils.escapeJson(index)); > sb.append("\", \"_type\": \""); > sb.append(StringEscapeUtils.escapeJson(docType)); > sb.append("\", \"_id\": \""); > sb.append(StringEscapeUtils.escapeJson(id)); > sb.append("\" }\n"); > sb.append("{\"doc\": "); > sb.append(jsonString); > sb.append(", \"doc_as_upsert\": "); > sb.append(indexOp.equalsIgnoreCase("upsert")); > sb.append(" }\n"); > } else if (indexOp.equalsIgnoreCase("delete")) { > sb.append("{\"delete\": { \"_index\": \""); > sb.append(StringEscapeUtils.escapeJson(index)); > sb.append("\", \"_type\": \""); > sb.append(StringEscapeUtils.escapeJson(docType)); > sb.append("\", \"_id\": \""); > sb.append(StringEscapeUtils.escapeJson(id)); > sb.append("\" }\n"); > } > } > {code} > > * The *TestPutElasticsearchHttp.java* test file needs to be updated to > reflect that a request without a type is now valid (it's currently marked as > invalid) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-6403) ElasticSearch field selection broken in Elastic 7.0+
[ https://issues.apache.org/jira/browse/NIFI-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6403: Fix Version/s: 1.13.0 Resolution: Fixed Status: Resolved (was: Patch Available) > ElasticSearch field selection broken in Elastic 7.0+ > > > Key: NIFI-6403 > URL: https://issues.apache.org/jira/browse/NIFI-6403 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.10.0, 1.9.2 >Reporter: Wietze B >Assignee: Chris Sampson >Priority: Major > Fix For: 1.13.0 > > Attachments: NIFI-6403.json, NIFI-6403.xml > > Original Estimate: 0.25h > Time Spent: 10.5h > Remaining Estimate: 0h > > Elastic has > [deprecated|https://www.elastic.co/guide/en/elasticsearch/reference/6.6/breaking-changes-6.6.html#_deprecate_literal__source_exclude_literal_and_literal__source_include_literal_url_parameters] > the {{source_include}} search parameter in favour of {{source_includes}} in > version 7.0 and higher. > This means that processors using the field selection will get an HTTP 400 > error upon execution. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (NIFI-6436) StandardPublicPort throws NullPointerException when it reports a bulletin event
[ https://issues.apache.org/jira/browse/NIFI-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6436. - Resolution: Fixed The PR has been merged. > StandardPublicPort throws NullPointerException when it reports a bulletin > event > --- > > Key: NIFI-6436 > URL: https://issues.apache.org/jira/browse/NIFI-6436 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Fix For: 1.10.0 > > Attachments: image-2019-07-12-18-45-11-524.png > > Time Spent: 50m > Remaining Estimate: 0h > > NIFI-2933 changed StandardPublicPort to use the 'processGroup' variable > passed as its constructor argument in the EventReporter.reportEvent() method, > but that variable is null when a public port is instantiated. > EventReporter.reportEvent() should get the current ProcessGroup by calling > the getProcessGroup() method each time. > If an error occurs while a public port is processing a request, it fails to > report a bulletin event due to this NullPointerException. Due to this issue, > public ports cannot report error details to users with bulletin messages like the > following screenshot. > !image-2019-07-12-18-45-11-524.png|width=680! -- This message was sent by Atlassian Jira (v8.3.4#803005)
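The bug pattern above (capturing a field's value at construction time versus resolving it lazily) can be sketched in isolation. All names here are illustrative, not the actual StandardPublicPort code.

```java
// Illustrative sketch: a value captured at construction stays null forever,
// while a getter-based lookup sees the value assigned later.
public class LazyLookup {
    private Object processGroup; // null at construction time, set later

    public Object getProcessGroup() { return processGroup; }
    public void setProcessGroup(Object pg) { this.processGroup = pg; }

    // Buggy: freezes the field value (null) when the reporter is created.
    public Runnable capturedReporter() {
        final Object captured = processGroup;
        return () -> System.out.println(captured.toString()); // NPE if still null
    }

    // Fixed: resolves the current value each time it reports.
    public Runnable lazyReporter() {
        return () -> System.out.println(getProcessGroup().toString());
    }

    public static void main(String[] args) {
        LazyLookup port = new LazyLookup();
        Runnable good = port.lazyReporter();
        port.setProcessGroup("my-process-group"); // assigned after construction
        good.run(); // prints the group without throwing
    }
}
```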
[jira] [Updated] (NIFI-6436) StandardPublicPort throws NullPointerException when it reports a bulletin event
[ https://issues.apache.org/jira/browse/NIFI-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6436: Affects Version/s: (was: 1.10.0) 1.9.2 > StandardPublicPort throws NullPointerException when it reports a bulletin > event > --- > > Key: NIFI-6436 > URL: https://issues.apache.org/jira/browse/NIFI-6436 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Attachments: image-2019-07-12-18-45-11-524.png > > Time Spent: 50m > Remaining Estimate: 0h > > NIFI-2933 changed StandardPublicPort to use the 'processGroup' variable > passed as its constructor argument at the EventReporter.reportEvent() method, > but that variable is null when a public port is instantiated. > EventReporter.reportEvent() should get the current ProcessGroup by calling > getProcessGroup() method each time. > If an error occurred while a public port is processing request, it fails to > report a bulletin event due to this NullPointerException. Due to this issue, > public ports can not report error detail to users with bulletin messages like > following screenshot. > !image-2019-07-12-18-45-11-524.png|width=680! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-6436) StandardPublicPort throws NullPointerException when it reports a bulletin event
[ https://issues.apache.org/jira/browse/NIFI-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6436: Fix Version/s: 1.10.0 > StandardPublicPort throws NullPointerException when it reports a bulletin > event > --- > > Key: NIFI-6436 > URL: https://issues.apache.org/jira/browse/NIFI-6436 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Fix For: 1.10.0 > > Attachments: image-2019-07-12-18-45-11-524.png > > Time Spent: 50m > Remaining Estimate: 0h > > NIFI-2933 changed StandardPublicPort to use the 'processGroup' variable > passed as its constructor argument at the EventReporter.reportEvent() method, > but that variable is null when a public port is instantiated. > EventReporter.reportEvent() should get the current ProcessGroup by calling > getProcessGroup() method each time. > If an error occurred while a public port is processing request, it fails to > report a bulletin event due to this NullPointerException. Due to this issue, > public ports can not report error detail to users with bulletin messages like > following screenshot. > !image-2019-07-12-18-45-11-524.png|width=680! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-6403) ElasticSearch field selection broken in Elastic 7.0+
[ https://issues.apache.org/jira/browse/NIFI-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17108312#comment-17108312 ] Koji Kawamura commented on NIFI-6403: - [~jgresock] I found NIFI-6404. Please update the Github PR to state that it also addresses NIFI-6404. Thanks. > ElasticSearch field selection broken in Elastic 7.0+ > > > Key: NIFI-6403 > URL: https://issues.apache.org/jira/browse/NIFI-6403 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.10.0, 1.9.2 >Reporter: Wietze B >Assignee: Joseph Gresock >Priority: Major > Original Estimate: 0.25h > Time Spent: 2h 10m > Remaining Estimate: 0h > > Elastic has > [deprecated|https://www.elastic.co/guide/en/elasticsearch/reference/6.6/breaking-changes-6.6.html#_deprecate_literal__source_exclude_literal_and_literal__source_include_literal_url_parameters] > the {{source_include}} search parameter in favour of {{source_includes}} in > version 7.0 and higher. > This means that processors using the field selection will get an HTTP 400 > error upon execution. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-6403) ElasticSearch field selection broken in Elastic 7.0+
[ https://issues.apache.org/jira/browse/NIFI-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17108282#comment-17108282 ] Koji Kawamura commented on NIFI-6403: - [~jgresock] Thanks for your contribution! Sorry it's taking so long for your PR to get reviewed. I took a brief look at the latest PR today and found it covers a broader set of processors to support Elasticsearch 7.x. Would you mind updating this JIRA description to reflect the updates? > ElasticSearch field selection broken in Elastic 7.0+ > > > Key: NIFI-6403 > URL: https://issues.apache.org/jira/browse/NIFI-6403 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.10.0, 1.9.2 >Reporter: Wietze B >Assignee: Joseph Gresock >Priority: Major > Original Estimate: 0.25h > Time Spent: 1h 50m > Remaining Estimate: 0h > > Elastic has > [deprecated|https://www.elastic.co/guide/en/elasticsearch/reference/6.6/breaking-changes-6.6.html#_deprecate_literal__source_exclude_literal_and_literal__source_include_literal_url_parameters] > the {{source_include}} search parameter in favour of {{source_includes}} in > version 7.0 and higher. > This means that processors using the field selection will get an HTTP 400 > error upon execution. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (NIFI-7348) FlowFiles re-entering a Wait-processor after they've expired expire immediately
[ https://issues.apache.org/jira/browse/NIFI-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-7348. - Fix Version/s: 1.12.0 Resolution: Fixed > FlowFiles re-entering a Wait-processor after they've expired expire > immediately > > > Key: NIFI-7348 > URL: https://issues.apache.org/jira/browse/NIFI-7348 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.11.4 > Environment: Windows 10 / Ubuntu >Reporter: endzeit >Assignee: endzeit >Priority: Major > Labels: easyfix > Fix For: 1.12.0 > > Attachments: Wait_processor_expiration_issue.xml > > Time Spent: 2h 10m > Remaining Estimate: 0h > > We recently noticed a behaviour of the Wait processor that we thought to > be a bug. > > As the attribute WAIT_START_TIMESTAMP is only removed once the FlowFile > leaves the processor via success or failure, it affects FlowFiles that > expire after the EXPIRATION_DURATION and re-enter the processor. > In case the FlowFile enters the same processor again - after expiring > beforehand - it is transported to the expired output immediately, without > waiting for the EXPIRATION_DURATION again. > Is this desired behaviour? > > I'll attach a very simple demonstration. Just let it run a minute or two and > look at the FlowFile attribute "counter" afterwards. > > There has been a pull request addressing a similar issue (NIFI-5892), which > resulted in the attribute being removed after success and failure. This case > just seems not to have been considered back then. Or was there a reason to > not clear the attribute after expiration? I couldn't find a mention regarding > expiration in the issue. > > As this should be a very easy fix I would love to contribute, once you > confirm this is not intentional. > > *Current workaround:* > simply remove the attribute WAIT_START_TIMESTAMP after the FlowFile leaves > the Wait processor, e.g. 
using an UpdateAttribute processor > > *Edit 2020-04-13:* > Also this seems to have the side effect of NOT documenting the repeated > processing. There is no provenance entry added when re-entering the processor > and expiring immediately, leading to the error being harder to trace. > Because of this I reset the priority to "Major", which seems to be the > default anyway. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
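The expiration bug and its fix can be sketched in a few lines. The attribute name comes from the report; the rest of the logic is illustrative, not the actual Wait processor code.

```java
// Minimal sketch: if the wait-start timestamp survives expiration, a FlowFile
// re-entering the processor appears to have been waiting since its first
// visit and expires immediately. Clearing it on the expired path fixes that.
public class WaitSketch {
    // Simulates the FlowFile attribute WAIT_START_TIMESTAMP (null = absent).
    private Long waitStartTimestamp;

    /** Returns true if the FlowFile should be routed to 'expired'. */
    public boolean onTrigger(long nowMillis, long expirationMillis) {
        if (waitStartTimestamp == null) {
            waitStartTimestamp = nowMillis; // first visit: start waiting
            return false;
        }
        boolean expired = nowMillis - waitStartTimestamp >= expirationMillis;
        if (expired) {
            waitStartTimestamp = null; // the fix: clear so re-entry waits again
        }
        return expired;
    }
}
```

Without the `waitStartTimestamp = null` line, every call after the first expiration would return true immediately, which is the looping behaviour described in the report.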
[jira] [Commented] (NIFI-7348) FlowFiles re-entering a Wait-processor after they've expired expire immediately
[ https://issues.apache.org/jira/browse/NIFI-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17084594#comment-17084594 ] Koji Kawamura commented on NIFI-7348: - Thanks [~EndzeitBegins] for the detailed explanation and fixing this! I've reviewed the PR and merged it to master. > FlowFiles re-entering a Wait-processor after they've expired expire > immediatelly > > > Key: NIFI-7348 > URL: https://issues.apache.org/jira/browse/NIFI-7348 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.11.4 > Environment: Windows 10 / Ubuntu >Reporter: endzeit >Assignee: endzeit >Priority: Major > Labels: easyfix > Attachments: Wait_processor_expiration_issue.xml > > Time Spent: 2h 10m > Remaining Estimate: 0h > > We recently noticed a behaviour of the Wait processor that we thought of to > be a bug. > > As the attribute WAIT_START_TIMESTAMP is only removed once the FlowFile > leaves the processor successfully or failing, it affects FlowFiles that > expire the EXPIRATION_DURATION and re-enter the processor. > In case the FlowFile enters the same processor again - after expiring > beforehand - it is transported to the expired output immediately, without > waiting for the EXPIRATION_DURATION again. > Is this desired behaviour? > > I'll attach a very simple demonstration. Just let it run a minute or two and > look at the FlowFile attribute "counter" afterwards. > > There has been a pull-request addressing a similar issue (NIFI-5892), which > resulted in the attribute being removed after success and failure. This case > just seems to haven't been thought about back then. Or was there a reason to > not clear the attribute after expiration? I couldn't find a mention regarding > expiration in the issue. > > As this should be a very easy fix I would love to contribute, once you > confirm this is not intentional. 
> > *Current workaround:* > simply remove the attribute WAIT_START_TIMESTAMP after the FlowFile leaves > the Wait processor, e.g. using an UpdateAttribute processor > > *Edit 2020-04-13:* > Also this seems to have the side effect of NOT documenting the repeated > processing. There is no provenance entry added when re-entering the processor > and expiring immediately, leading to the error being harder to trace. > Because of this I reset the priority to "Major", which seems to be the > default anyway. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-4012) Azure Event Hub UI typos and cleanup
[ https://issues.apache.org/jira/browse/NIFI-4012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-4012: Fix Version/s: 1.11.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Azure Event Hub UI typos and cleanup > > > Key: NIFI-4012 > URL: https://issues.apache.org/jira/browse/NIFI-4012 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Andrew Grande >Assignee: Shayne Burgess >Priority: Trivial > Fix For: 1.11.0 > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-4012) Azure Event Hub UI typos and cleanup
[ https://issues.apache.org/jira/browse/NIFI-4012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-4012: Status: Patch Available (was: Open) > Azure Event Hub UI typos and cleanup > > > Key: NIFI-4012 > URL: https://issues.apache.org/jira/browse/NIFI-4012 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Andrew Grande >Priority: Trivial > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (NIFI-4012) Azure Event Hub UI typos and cleanup
[ https://issues.apache.org/jira/browse/NIFI-4012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-4012: --- Assignee: Shayne Burgess > Azure Event Hub UI typos and cleanup > > > Key: NIFI-4012 > URL: https://issues.apache.org/jira/browse/NIFI-4012 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Andrew Grande >Assignee: Shayne Burgess >Priority: Trivial > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-5929) Support for IBM MQ multi-instance queue managers (brokers)
[ https://issues.apache.org/jira/browse/NIFI-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-5929: Fix Version/s: 1.11.0 Resolution: Fixed Status: Resolved (was: Patch Available) Thanks [~vkcelik] for your contribution! I've added the 'contributor' role to your JIRA account. Now you can assign yourself to any NiFi JIRA issue. > Support for IBM MQ multi-instance queue managers (brokers) > -- > > Key: NIFI-5929 > URL: https://issues.apache.org/jira/browse/NIFI-5929 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.2.0, 1.1.1, > 1.0.1, 1.3.0, 1.4.0, 0.7.4, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.7.1 >Reporter: Veli Kerim Celik >Assignee: Veli Kerim Celik >Priority: Major > Fix For: 1.11.0 > > Time Spent: 6h > Remaining Estimate: 0h > > Currently connections provided by the JMSConnectionFactoryProvider controller > service can connect to just a single IBM MQ queue manager. This is > problematic when the queue manager is part of a [multi-instance queue > manager|https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.con.doc/q018140_.html] > setup and goes from active to standby. > The goal of this issue is to support multiple queue managers, detect a > standby/broken instance and switch to the active instance. This behavior is > already implemented in the [official Java > library|https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.1.0/com.ibm.mq.javadoc.doc/WMQJMSClasses/com/ibm/mq/jms/MQConnectionFactory.html#setConnectionNameList_java.lang.String_] > and should be leveraged. > Syntax used to specify multiple queue managers: myhost01(1414),myhost02(1414) -- This message was sent by Atlassian Jira (v8.3.4#803005)
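The connection name list syntax shown above ("myhost01(1414),myhost02(1414)") is what MQConnectionFactory.setConnectionNameList(String) consumes; the failover itself is handled by the IBM MQ client library. A small standalone parser, written purely to illustrate the format (not part of NiFi or the IBM library):

```java
// Illustrative parser for the host(port) list syntax from the issue. Each
// comma-separated entry names one queue manager instance.
public class ConnectionNameList {
    /** Returns [host, port] pairs, e.g. [["myhost01","1414"],["myhost02","1414"]]. */
    public static String[][] parse(String list) {
        String[] entries = list.split(",");
        String[][] result = new String[entries.length][2];
        for (int i = 0; i < entries.length; i++) {
            String e = entries[i].trim();
            int open = e.indexOf('(');
            result[i][0] = e.substring(0, open);                  // host
            result[i][1] = e.substring(open + 1, e.length() - 1); // port
        }
        return result;
    }
}
```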
[jira] [Assigned] (NIFI-5929) Support for IBM MQ multi-instance queue managers (brokers)
[ https://issues.apache.org/jira/browse/NIFI-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-5929: --- Assignee: Veli Kerim Celik > Support for IBM MQ multi-instance queue managers (brokers) > -- > > Key: NIFI-5929 > URL: https://issues.apache.org/jira/browse/NIFI-5929 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.2.0, 1.1.1, > 1.0.1, 1.3.0, 1.4.0, 0.7.4, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.7.1 >Reporter: Veli Kerim Celik >Assignee: Veli Kerim Celik >Priority: Major > Time Spent: 6h > Remaining Estimate: 0h > > Currently connections provided by JMSConnectionFactoryProvider controller > service can connect to just a single IBM MQ queue manager. This is > problematic when the queue manager is part of a [multi-instance queue > manager|https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.con.doc/q018140_.html] > setup and goes from active to standby. > The goal of this issue is to support multiple queue managers, detect > standby/broken instance and switch to active instance. This behavior is > already implemented in [official Java > library|https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.1.0/com.ibm.mq.javadoc.doc/WMQJMSClasses/com/ibm/mq/jms/MQConnectionFactory.html#setConnectionNameList_java.lang.String_] > and should be leveraged. > Syntax used to specify multiple queue managers: myhost01(1414),myhost02(1414) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-6395) CountText processor is not thread safe - concurrency error
[ https://issues.apache.org/jira/browse/NIFI-6395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6395: Fix Version/s: 1.11.0 Resolution: Fixed Status: Resolved (was: Patch Available) > CountText processor is not thread safe - concurrency error > -- > > Key: NIFI-6395 > URL: https://issues.apache.org/jira/browse/NIFI-6395 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 > Environment: software platform >Reporter: Iván Ezequiel Rodriguez >Assignee: Iván Ezequiel Rodriguez >Priority: Major > Labels: concurrency, count, error, processor, text, thread-safe > Fix For: 1.11.0 > > Time Spent: 4h 20m > Remaining Estimate: 0h > > The processor counters fail when executed with multiple threads. The code is > not thread safe since the increments are not atomic operations: volatile > instance variables are accessed by multiple threads when onTrigger is called. The > solution is to declare those variables locally in onTrigger. I performed several > tests with millions of records, and the counters do not work correctly when > the processor is executed with more than one task. > The problem is in the declaration of these instance variables: > private *volatile int* lineCount; > private *volatile int* lineNonEmptyCount; > private *volatile int* wordCount; > private *volatile int* characterCount; > It is not safe to perform compound operations on these variables. As a result > the counters register fewer lines when executed with multiple > threads. > I propose the following solution: > [Fix bug pull request|https://github.com/apache/nifi/pull/3552] > problem graph: > [!https://1.bp.blogspot.com/-o-RrxVFT1BA/XW7BW9e-iyI/AV4/SjV_4QzA5Po47fM-Roz75mE9mYwKxkIrQCLcBGAs/s640/thread-safe2.png|width=640,height=353!|https://1.bp.blogspot.com/-o-RrxVFT1BA/XW7BW9e-iyI/AV4/SjV_4QzA5Po47fM-Roz75mE9mYwKxkIrQCLcBGAs/s1600/thread-safe2.png] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
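The unsafe pattern and the proposed fix can be contrasted in a small sketch. Class and method names are illustrative, not the actual CountText source.

```java
// Sketch of the race: '++' on a volatile int is a read-modify-write sequence,
// so two concurrent onTrigger calls can lose increments. Counting into a
// local variable keeps each invocation's count correct without shared state.
public class CounterSketch {
    // Unsafe pattern: shared mutable state across concurrent onTrigger calls.
    volatile int sharedLineCount;

    void unsafeCount(String[] lines) {
        for (String line : lines) {
            sharedLineCount++; // not atomic: concurrent updates can be lost
        }
    }

    // Safe pattern: a local variable, one result per invocation.
    int safeCount(String[] lines) {
        int lineCount = 0;
        for (String line : lines) {
            lineCount++;
        }
        return lineCount;
    }
}
```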
[jira] [Created] (NIFI-6793) CaptureChangeMySQL throws NumberFormatException when sequence id exceeds maximum Integer
Koji Kawamura created NIFI-6793: --- Summary: CaptureChangeMySQL throws NumberFormatException when sequence id exceeds maximum Integer Key: NIFI-6793 URL: https://issues.apache.org/jira/browse/NIFI-6793 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Koji Kawamura Assignee: Koji Kawamura NumberFormatException can be thrown because the internal sequence id is parsed as an integer where it should be a long. https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-cdc/nifi-cdc-mysql-bundle/nifi-cdc-mysql-processors/src/main/java/org/apache/nifi/cdc/mysql/processors/CaptureChangeMySQL.java#L492 {code} 2019-10-14 04:45:54,727 WARN [Timer-Driven Process Thread-15] o.a.n.controller.tasks.ConnectableTask Administratively Yielding CaptureChangeMySQL[id=719232e6-2e6e-1d73-82b8-4f59a42c5e52] due to uncaught Exception: java.lang.NumberFormatException: For input string: "2291535530" java.lang.NumberFormatException: For input string: "2291535530" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) at java.lang.Integer.parseInt(Integer.java:583) at java.lang.Integer.parseInt(Integer.java:615) at org.apache.nifi.cdc.mysql.processors.CaptureChangeMySQL.setup(CaptureChangeMySQL.java:492) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
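The failing input from the stack trace, "2291535530", exceeds Integer.MAX_VALUE (2147483647), so Integer.parseInt throws while Long.parseLong succeeds. A minimal sketch of the fix (the wrapper method name is hypothetical):

```java
// Sketch: sequence ids can grow past the int range, so they must be parsed
// (and stored) as long, not int.
public class SequenceIdParse {
    public static long parseSequenceId(String s) {
        return Long.parseLong(s); // was Integer.parseInt(s), which overflows here
    }
}
```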
[jira] [Resolved] (NIFI-6374) ConsumeEWS fails when email attachment has no content type
[ https://issues.apache.org/jira/browse/NIFI-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6374. - Fix Version/s: 1.10.0 Resolution: Fixed > ConsumeEWS fails when email attachment has no content type > -- > > Key: NIFI-6374 > URL: https://issues.apache.org/jira/browse/NIFI-6374 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.0 >Reporter: JF Beauvais >Assignee: JF Beauvais >Priority: Major > Fix For: 1.10.0 > > Time Spent: 50m > Remaining Estimate: 0h > > Exchange Version: Exchange2010_SP2 > At line: > [https://github.com/apache/nifi/blob/e913f5706f3b3a0a2e98cf77ac4c96f63ba4f104/nifi-nar-bundles/nifi-email-bundle/nifi-email-processors/src/main/java/org/apache/nifi/processors/email/ConsumeEWS.java#L457] > file.getContentType() may return null. In that case, we get an exception when > writing the message file: > > {code:java} > Caused by: javax.mail.internet.ParseException: In Content-Type string , > expected MIME type, got null > at javax.mail.internet.ContentType.(ContentType.java:97) > at javax.mail.internet.MimeBodyPart.updateHeaders(MimeBodyPart.java:1467) > at javax.mail.internet.MimeBodyPart.updateHeaders(MimeBodyPart.java:1131) > at javax.mail.internet.MimeMultipart.updateHeaders(MimeMultipart.java:515) > at javax.mail.internet.MimeBodyPart.updateHeaders(MimeBodyPart.java:1490) > at javax.mail.internet.MimeMessage.updateHeaders(MimeMessage.java:2198) > at javax.mail.internet.MimeMessage.saveChanges(MimeMessage.java:2159) > at > org.apache.nifi.processors.email.ConsumeEWS.parseMessage(ConsumeEWS.java:478) > at > org.apache.nifi.processors.email.ConsumeEWS.fillMessageQueueIfNecessary(ConsumeEWS.java:365) > {code} > Possible workaround: > > > {code:java} > String type = Optional.ofNullable(file.getContentType()).orElse("text/plain"); > ByteArrayDataSource bds = new ByteArrayDataSource(file.getContent(), type); > {code} > -- This message was sent by Atlassian Jira 
(v8.3.4#803005)
[jira] [Assigned] (NIFI-6374) ConsumeEWS fails when email attachment has no content type
[ https://issues.apache.org/jira/browse/NIFI-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6374: --- Assignee: JF Beauvais > ConsumeEWS fails when email attachment has no content type > -- > > Key: NIFI-6374 > URL: https://issues.apache.org/jira/browse/NIFI-6374 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.0 >Reporter: JF Beauvais >Assignee: JF Beauvais >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > Exchange Version: Exchange2010_SP2 > At line: > [https://github.com/apache/nifi/blob/e913f5706f3b3a0a2e98cf77ac4c96f63ba4f104/nifi-nar-bundles/nifi-email-bundle/nifi-email-processors/src/main/java/org/apache/nifi/processors/email/ConsumeEWS.java#L457] > file.getContentType() may return null. In that case, we get an exception when > writting message file: > > {code:java} > Caused by: javax.mail.internet.ParseException: In Content-Type string , > expected MIME type, got null > at javax.mail.internet.ContentType.(ContentType.java:97) > at javax.mail.internet.MimeBodyPart.updateHeaders(MimeBodyPart.java:1467) > at javax.mail.internet.MimeBodyPart.updateHeaders(MimeBodyPart.java:1131) > at javax.mail.internet.MimeMultipart.updateHeaders(MimeMultipart.java:515) > at javax.mail.internet.MimeBodyPart.updateHeaders(MimeBodyPart.java:1490) > at javax.mail.internet.MimeMessage.updateHeaders(MimeMessage.java:2198) > at javax.mail.internet.MimeMessage.saveChanges(MimeMessage.java:2159) > at > org.apache.nifi.processors.email.ConsumeEWS.parseMessage(ConsumeEWS.java:478) > at > org.apache.nifi.processors.email.ConsumeEWS.fillMessageQueueIfNecessary(ConsumeEWS.java:365) > {code} > Possible workaround: > > > {code:java} > String type = Optional.ofNullable(file.getContentType()).orElse("text/plain"); > ByteArrayDataSource bds = new ByteArrayDataSource(file.getContent(), type); > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (NIFI-6395) CountText processor is not thread safe - concurrency error
[ https://issues.apache.org/jira/browse/NIFI-6395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6395: --- Assignee: Iván Ezequiel Rodriguez > CountText processor is not thread safe - concurrency error > -- > > Key: NIFI-6395 > URL: https://issues.apache.org/jira/browse/NIFI-6395 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 > Environment: software platform >Reporter: Iván Ezequiel Rodriguez >Assignee: Iván Ezequiel Rodriguez >Priority: Major > Labels: concurrency, count, error, processor, text, thread-safe > Time Spent: 1h 40m > Remaining Estimate: 0h > > The processor counters fail to execute multiple threads. The programming is > not safe since they are not atomic operations. They are using a volatile > instance variable accessed by multiple threads when onTrigger is called. The > solution is to declare those local variables to onTrigger. Perform several > tests with millions of records and the counter does not work correctly when > it is executed with more than one task. > The problem is in the declaration of these instance variables: > private *volatile int* lineCount; > private *volatile int* lineNonEmptyCount; > private *volatile int* wordCount; > private *volatile int* characterCount; > This is not safe to perform atomic operations on these variables. As a result > the counters register less amount of lines when executed with multiple > threads. > I propose the following solution: > [Fix bug pull request|https://github.com/apache/nifi/pull/3552] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-6752) Create ASN.1 RecordReader
[ https://issues.apache.org/jira/browse/NIFI-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6752: Status: Patch Available (was: Open) > Create ASN.1 RecordReader > - > > Key: NIFI-6752 > URL: https://issues.apache.org/jira/browse/NIFI-6752 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Koji Kawamura >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > ASN.1 (Abstract Syntax Notation One) is a popular data structure definition > language in some domains such as telecommunications, computer networking and > cryptography. See > [WikiPedia|https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One] for > details. > If we add a RecordReader for ASN.1 encoded binary contents NiFi can be a > powerful tool to ingest ASN.1 encoded data into different destinations. Once > NiFi provides a RecordReader for ASN.1, all existing Record aware processors > can be used to convert format, enrich, filter or update records ... etc. > We could use an Apache 2.0 licensed ASN.1 Java library > [jASN1|https://www.beanit.com/asn1/]. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-6752) Create ASN.1 RecordReader
[ https://issues.apache.org/jira/browse/NIFI-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6752: Description: ASN.1 (Abstract Syntax Notation One) is a popular data structure definition language in some domains such as telecommunications, computer networking and cryptography. See [WikiPedia|https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One] for details. If we add a RecordReader for ASN.1 encoded binary contents NiFi can be a powerful tool to ingest ASN.1 encoded data into different destinations. Once NiFi provides a RecordReader for ASN.1, all existing Record aware processors can be used to convert format, enrich, filter or update records ... etc. We could use an Apache 2.0 licensed ASN.1 Java library [jASN1|https://www.beanit.com/asn1/]. was: ASN.1 (Abstract Syntax Notation One) is a popular data structure definition language in some domains such as telecommunications, computer networking and cryptography. See [WikiPedia|[https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One]] for details. If we add a RecordReader for ASN.1 encoded binary contents NiFi can be a powerful tool to ingest ASN.1 encoded data into different destinations. We could use an Apache 2.0 licensed ASN.1 Java library [jASN1|[https://www.beanit.com/asn1/]]. > Create ASN.1 RecordReader > - > > Key: NIFI-6752 > URL: https://issues.apache.org/jira/browse/NIFI-6752 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Koji Kawamura >Priority: Major > > ASN.1 (Abstract Syntax Notation One) is a popular data structure definition > language in some domains such as telecommunications, computer networking and > cryptography. See > [WikiPedia|https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One] for > details. > If we add a RecordReader for ASN.1 encoded binary contents NiFi can be a > powerful tool to ingest ASN.1 encoded data into different destinations. 
Once > NiFi provides a RecordReader for ASN.1, all existing Record aware processors > can be used to convert format, enrich, filter or update records ... etc. > We could use an Apache 2.0 licensed ASN.1 Java library > [jASN1|https://www.beanit.com/asn1/]. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-6752) Create ASN.1 RecordReader
Koji Kawamura created NIFI-6752: --- Summary: Create ASN.1 RecordReader Key: NIFI-6752 URL: https://issues.apache.org/jira/browse/NIFI-6752 Project: Apache NiFi Issue Type: New Feature Components: Extensions Reporter: Koji Kawamura ASN.1 (Abstract Syntax Notation One) is a popular data structure definition language in some domains such as telecommunications, computer networking and cryptography. See [WikiPedia|https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One] for details. If we add a RecordReader for ASN.1 encoded binary contents, NiFi can be a powerful tool to ingest ASN.1 encoded data into different destinations. We could use an Apache 2.0 licensed ASN.1 Java library [jASN1|https://www.beanit.com/asn1/]. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-6550) Create controller service for Azure Storage Credentials
[ https://issues.apache.org/jira/browse/NIFI-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6550: Fix Version/s: 1.10.0 Resolution: Fixed Status: Resolved (was: Patch Available) Thanks for your contribution, [~turcsanyip]! The improvement has been merged. > Create controller service for Azure Storage Credentials > --- > > Key: NIFI-6550 > URL: https://issues.apache.org/jira/browse/NIFI-6550 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Peter Turcsanyi >Assignee: Peter Turcsanyi >Priority: Major > Fix For: 1.10.0 > > Time Spent: 2.5h > Remaining Estimate: 0h > > Create a new controller service that can be used to obtain Azure Storage > Credentials (similar to AWS and GCP credential services). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-6613) When FlowFile Repository fails to update due to previous failure, it should log the root cause
[ https://issues.apache.org/jira/browse/NIFI-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6613: Resolution: Fixed Status: Resolved (was: Patch Available) > When FlowFile Repository fails to update due to previous failure, it should > log the root cause > -- > > Key: NIFI-6613 > URL: https://issues.apache.org/jira/browse/NIFI-6613 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.10.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > When the FlowFile Repository (more specifically, the LengthDelimitedJournal > of the write-ahead log) fails to update, it logs the reason. However, all > subsequent attempts to update the repo will first check if the repo is > 'poisoned' and if so throw an Exception. This gets logged as something like: > {code:java} > Failed to process session due to > org.apache.nifi.processor.exception.ProcessException: FlowFile Repository > failed to update: org.apache.nifi.processor.exception.ProcessException: > FlowFile Repository failed to > updateorg.apache.nifi.processor.exception.ProcessException: FlowFile > Repository failed to updateat > org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:405) > at > org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:336) > at > org.apache.nifi.processors.script.ExecuteScript.onTrigger(ExecuteScript.java:228) > at > org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165) > at > org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)at > java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)at > 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748)Caused by: java.io.IOException: > Cannot update journal file /flowfile_repository/journals/4614619461.journal > because this journal has already encountered a failure when attempting to > write to the file. If the repository is able to checkpoint, then this problem > will resolve itself. However, if the repository is unable to be checkpointed > (for example, due to being out of storage space or having too many open > files), then this issue may require manual intervention. {code} > Because there may be many Processors attempting to update the repository, > this causes a lot of errors in the logs and makes it difficult to understand > the underlying cause. When the journal becomes "poisoned" we should hold onto > the Throwable that caused it and log it in this error message so that each > update indicates the root cause. This will make it much easier to track what > happened. > > -- This message was sent by Atlassian Jira (v8.3.2#803003)
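The improvement described above can be sketched in isolation: remember the Throwable that poisoned the journal and attach it as the cause of every subsequent failure, so each "Failed to process session" log line carries the root cause. Names here are illustrative, not the actual LengthDelimitedJournal code.

```java
import java.io.IOException;

public class PoisonableJournal {
    // Once poisoned, every later update fails with the original root cause
    // chained in, instead of a bare "FlowFile Repository failed to update".
    private volatile Throwable poisonCause;

    void poison(Throwable cause) {
        this.poisonCause = cause;
    }

    void update(String record) throws IOException {
        final Throwable cause = poisonCause;
        if (cause != null) {
            throw new IOException("Cannot update journal because this journal has"
                    + " already encountered a failure when attempting to write", cause);
        }
        // ... normal write path would go here ...
    }

    public static void main(String[] args) {
        PoisonableJournal journal = new PoisonableJournal();
        journal.poison(new IOException("No space left on device"));
        try {
            journal.update("record");
        } catch (IOException e) {
            System.out.println("root cause: " + e.getCause().getMessage());
        }
    }
}
```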
[jira] [Updated] (NIFI-6596) Move AmazonS3EncryptionService to nifi-aws-service-api module
[ https://issues.apache.org/jira/browse/NIFI-6596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6596: Fix Version/s: 1.10.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Move AmazonS3EncryptionService to nifi-aws-service-api module > - > > Key: NIFI-6596 > URL: https://issues.apache.org/jira/browse/NIFI-6596 > Project: Apache NiFi > Issue Type: Bug >Reporter: Troy Melhase >Assignee: Troy Melhase >Priority: Trivial > Fix For: 1.10.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Seeing logs like this: > {{2019-08-27 16:14:35,075 WARN [main] > o.a.n.n.StandardExtensionDiscoveringManager Component > org.apache.nifi.processors.aws.s3.FetchS3Object is bundled with its > referenced Controller Service APIs > org.apache.nifi.processors.aws.s3.AmazonS3EncryptionService. The service APIs > should not be bundled with component implementations that reference it.}} > {{2019-08-27 16:14:35,081 WARN [main] > o.a.n.n.StandardExtensionDiscoveringManager Component > org.apache.nifi.processors.aws.s3.PutS3Object is bundled with its referenced > Controller Service APIs > org.apache.nifi.processors.aws.s3.AmazonS3EncryptionService. The service APIs > should not be bundled with component implementations that reference it.}} > {{2019-08-27 16:14:35,195 WARN [main] > o.a.n.n.StandardExtensionDiscoveringManager Controller Service > org.apache.nifi.processors.aws.s3.encryption.StandardS3EncryptionService is > bundled with its supporting APIs > org.apache.nifi.processors.aws.s3.AmazonS3EncryptionService. The service APIs > should not be bundled with the implementations.}} > -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Assigned] (NIFI-6530) HTTP SiteToSite server returns 201 in case no data is available
[ https://issues.apache.org/jira/browse/NIFI-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6530: --- Assignee: Arpad Boda > HTTP SiteToSite server returns 201 in case no data is available > --- > > Key: NIFI-6530 > URL: https://issues.apache.org/jira/browse/NIFI-6530 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Arpad Boda >Assignee: Arpad Boda >Priority: Major > Time Spent: 6h > Remaining Estimate: 0h > > When MiNiFi or another NiFi connects to an HTTP SiteToSite server, the server > always returns 201 on transaction creation. > This is inefficient, as transactions are created, tracked and later deleted > without anything really being transmitted. According to comments in the MiNiFi > code, 200 should be returned in such a case, although 204 would be a better > choice from an HTTP standard point of view. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Resolved] (NIFI-6530) HTTP SiteToSite server returns 201 in case no data is available
[ https://issues.apache.org/jira/browse/NIFI-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6530. - Fix Version/s: 1.10.0 Resolution: Fixed > HTTP SiteToSite server returns 201 in case no data is available > --- > > Key: NIFI-6530 > URL: https://issues.apache.org/jira/browse/NIFI-6530 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Arpad Boda >Assignee: Arpad Boda >Priority: Major > Fix For: 1.10.0 > > Time Spent: 6h > Remaining Estimate: 0h > > When MiNiFi or another NiFi connects to an HTTP SiteToSite server, the server > always returns 201 on transaction creation. > This is inefficient, as transactions are created, tracked and later deleted > without anything really being transmitted. According to comments in the MiNiFi > code, 200 should be returned in such a case, although 204 would be a better > choice from an HTTP standard point of view. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (NIFI-6569) Site-to-Site not timing out when reading from remote NiFi
[ https://issues.apache.org/jira/browse/NIFI-6569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6569: Resolution: Fixed Status: Resolved (was: Patch Available) > Site-to-Site not timing out when reading from remote NiFi > - > > Key: NIFI-6569 > URL: https://issues.apache.org/jira/browse/NIFI-6569 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.10.0 > > Time Spent: 40m > Remaining Estimate: 0h > > I created two simple flows: > Remote Input Port -> LogAttribute > GenerateFlowFile -> RemoteProcessGroup (pointing to self) > so that I can simply feed generated FlowFiles through the RPG to the LogAttribute. > When I ran this using Java 11, the listener failed to start (known issue, > NIFI-5952). However, what I noticed is that the RPG would not stop when I > clicked Disable Transmission. Instead, the web request to disable would hang. > Thread dumps show the call to try to find out which nodes are in the cluster > never times out. This seems to be related to NIFI-4461, though it's not > entirely clear at this point. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (NIFI-6570) Site-to-Site listener not starting if standalone and no Root Group Ports exist
[ https://issues.apache.org/jira/browse/NIFI-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6570: Resolution: Fixed Status: Resolved (was: Patch Available) > Site-to-Site listener not starting if standalone and no Root Group Ports exist > -- > > Key: NIFI-6570 > URL: https://issues.apache.org/jira/browse/NIFI-6570 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Minor > Fix For: 1.10.0 > > > I created a flow that has a Site-to-Site input port within a Process Group > but no ports at the root group. When I tried to send to the instance via > site-to-site, I could not connect to the instance. Thread dumps show that the > Site-to-Site listener had not yet begun to accept connections because no > Input Ports or Output Ports exist at the root level and the instance is not > clustered. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (NIFI-5952) RAW Site-to-Site fails with java.nio.channels.IllegalBlockingModeException
[ https://issues.apache.org/jira/browse/NIFI-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-5952: Priority: Blocker (was: Critical) > RAW Site-to-Site fails with java.nio.channels.IllegalBlockingModeException > -- > > Key: NIFI-5952 > URL: https://issues.apache.org/jira/browse/NIFI-5952 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core Framework > Environment: jdk-11.0.1 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Blocker > Labels: Java11 > Time Spent: 1h 20m > Remaining Estimate: 0h > > During the review cycle of NIFI-5820, I found that while HTTP S2S works > without issue, RAW S2S fails with the following Exception: > {code:java} > 2018-12-19 16:19:26,811 ERROR [Site-to-Site Listener] org.apache.nifi.NiFi > java.nio.channels.IllegalBlockingModeException: null > at > java.base/sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:121) > at > org.apache.nifi.remote.SocketRemoteSiteListener$1.run(SocketRemoteSiteListener.java:125) > at java.base/java.lang.Thread.run(Thread.java:834) > {code} > Despite the fact that RAW S2S has worked with older Java versions, the > current nio usage in RAW S2S is not correct, and JDK 11 starts > complaining about it. > Here are a few things I've discovered with current NiFi and nio SocketChannel: > - NiFi accepts RAW S2S client connections with SocketRemoteSiteListener, > which uses ServerSocketChannel in a non-blocking manner [1] > - But SocketRemoteSiteListener doesn't use the Selector API to accept incoming > connections and transfer data with the channel. This is the cause of the above > exception. > - SocketRemoteSiteListener spawns a new thread when it accepts a connection. > This is how connections are handled with standard, non-nio Socket > programming. If we want to use non-blocking NIO, we need to use channels with > a Selector > - But using non-blocking IO with the current NiFi S2S protocol would add little > or no benefit.
[2] > To make RAW S2S work with Java 11, we need either: > A. Stop using nio packages. > B. Implement correct nio usage, meaning use a Selector, and we would probably need > another thread pool. > I'm going to take approach A above, because B would require much more > refactoring. > [1] > [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-site-to-site/src/main/java/org/apache/nifi/remote/SocketRemoteSiteListener.java#L120] > [2] > [https://stackoverflow.com/questions/12338204/in-java-nio-is-a-selector-useful-for-a-client-socketchannel] -- This message was sent by Atlassian Jira (v8.3.2#803003)
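The exception above can be reproduced in a few lines: calling accept() on the adapted ServerSocket of a channel that was configured as non-blocking throws IllegalBlockingModeException, which is exactly what SocketRemoteSiteListener runs into when it skips the Selector loop.

```java
import java.net.InetSocketAddress;
import java.nio.channels.IllegalBlockingModeException;
import java.nio.channels.ServerSocketChannel;

public class BlockingModeDemo {
    // Returns true when the blocking-style accept() is rejected because the
    // underlying channel is in non-blocking mode.
    static boolean nonBlockingAcceptThrows() throws Exception {
        try (ServerSocketChannel channel = ServerSocketChannel.open()) {
            channel.bind(new InetSocketAddress("127.0.0.1", 0));
            channel.configureBlocking(false);
            try {
                // blocking-style accept on a non-blocking channel,
                // the pattern used by SocketRemoteSiteListener
                channel.socket().accept();
                return false;
            } catch (IllegalBlockingModeException e) {
                return true;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(nonBlockingAcceptThrows()); // true
    }
}
```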
[jira] [Updated] (NIFI-6599) MergeRecord fails if 'fragment.count' attribute equals the number of records within a FlowFile where it should wait for remaining FlowFiles
[ https://issues.apache.org/jira/browse/NIFI-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6599: Status: Patch Available (was: In Progress) > MergeRecord fails if 'fragment.count' attribute equals the number of records > within a FlowFile where it should wait for remaining FlowFiles > --- > > Key: NIFI-6599 > URL: https://issues.apache.org/jira/browse/NIFI-6599 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Attachments: NIFI-6599.xml > > Time Spent: 10m > Remaining Estimate: 0h > > RecordBinManager.createThresholds has the following code block: > {code:java} > if (MergeRecord.MERGE_STRATEGY_DEFRAGMENT.getValue().equals(mergeStrategy)) { > fragmentCountAttribute = MergeContent.FRAGMENT_COUNT_ATTRIBUTE; > if (!StringUtils.isEmpty(flowfile.getAttribute(fragmentCountAttribute))) { > minRecords = > Integer.parseInt(flowfile.getAttribute(fragmentCountAttribute)); > } > } else { > fragmentCountAttribute = null; > } > {code} > The code uses 'fragment.count' as the minRecords. This is wrong because > 'fragment.count' represents the number of fragments, i.e. the number of FlowFiles > holding partial record sets. > This causes a FlowFile to be sent to the 'failure' relationship where it should be > held in the incoming connection. For example, when a FlowFile is split into > two FlowFiles, and each has 2 records in it, 'fragment.count' will be 2. In > this case, MergeRecord thinks the minRecords is 2, where 4 is correct. Then, when > the first FlowFile is processed while the 2nd one hasn't arrived, > MergeRecord mistakenly concludes that the bin has reached the minimum number of > records. But since there's only one FlowFile, it sends the FlowFile to > 'failure'. > The issue can be reproduced by the attached template. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (NIFI-6599) MergeRecord fails if 'fragment.count' attribute equals the number of records within a FlowFile where it should wait for remaining FlowFiles
[ https://issues.apache.org/jira/browse/NIFI-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6599: Attachment: NIFI-6599.xml > MergeRecord fails if 'fragment.count' attribute equals the number of records > within a FlowFile where it should wait for remaining FlowFiles > --- > > Key: NIFI-6599 > URL: https://issues.apache.org/jira/browse/NIFI-6599 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Attachments: NIFI-6599.xml > > > RecordBinManager.createThresholds has the following code block: > {code:java} > if (MergeRecord.MERGE_STRATEGY_DEFRAGMENT.getValue().equals(mergeStrategy)) { > fragmentCountAttribute = MergeContent.FRAGMENT_COUNT_ATTRIBUTE; > if (!StringUtils.isEmpty(flowfile.getAttribute(fragmentCountAttribute))) { > minRecords = > Integer.parseInt(flowfile.getAttribute(fragmentCountAttribute)); > } > } else { > fragmentCountAttribute = null; > } > {code} > The code uses 'fragment.count' as the minRecords. This is wrong because > 'fragment.count' represents the number of fragments, i.e. the number of FlowFiles > holding partial record sets. > This causes a FlowFile to be sent to the 'failure' relationship where it should be > held in the incoming connection. For example, when a FlowFile is split into > two FlowFiles, and each has 2 records in it, 'fragment.count' will be 2. In > this case, MergeRecord thinks the minRecords is 2, where 4 is correct. Then, when > the first FlowFile is processed while the 2nd one hasn't arrived, > MergeRecord mistakenly concludes that the bin has reached the minimum number of > records. But since there's only one FlowFile, it sends the FlowFile to > 'failure'. > The issue can be reproduced by the attached template. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (NIFI-6599) MergeRecord fails if 'fragment.count' attribute equals the number of records within a FlowFile where it should wait for remaining FlowFiles
Koji Kawamura created NIFI-6599: --- Summary: MergeRecord fails if 'fragment.count' attribute equals the number of records within a FlowFile where it should wait for remaining FlowFiles Key: NIFI-6599 URL: https://issues.apache.org/jira/browse/NIFI-6599 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Koji Kawamura Assignee: Koji Kawamura RecordBinManager.createThresholds has the following code block: {code:java} if (MergeRecord.MERGE_STRATEGY_DEFRAGMENT.getValue().equals(mergeStrategy)) { fragmentCountAttribute = MergeContent.FRAGMENT_COUNT_ATTRIBUTE; if (!StringUtils.isEmpty(flowfile.getAttribute(fragmentCountAttribute))) { minRecords = Integer.parseInt(flowfile.getAttribute(fragmentCountAttribute)); } } else { fragmentCountAttribute = null; } {code} The code uses 'fragment.count' as the minRecords. This is wrong because 'fragment.count' represents the number of fragments, i.e. the number of FlowFiles holding partial record sets. This causes a FlowFile to be sent to the 'failure' relationship where it should be held in the incoming connection. For example, when a FlowFile is split into two FlowFiles, and each has 2 records in it, 'fragment.count' will be 2. In this case, MergeRecord thinks the minRecords is 2, where 4 is correct. Then, when the first FlowFile is processed while the 2nd one hasn't arrived, MergeRecord mistakenly concludes that the bin has reached the minimum number of records. But since there's only one FlowFile, it sends the FlowFile to 'failure'. The issue can be reproduced by the attached template. -- This message was sent by Atlassian Jira (v8.3.2#803003)
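The correct defragment semantics described above can be sketched standalone: 'fragment.count' is the number of FlowFiles (fragments), so a bin is complete only when all fragments have arrived, regardless of how many records each fragment holds. The class and method names below are illustrative, not NiFi's RecordBinManager.

```java
import java.util.HashSet;
import java.util.Set;

public class DefragmentBin {
    private final int fragmentCount;                  // from 'fragment.count'
    private final Set<Integer> seenFragmentIndexes = new HashSet<>();
    private int recordCount;

    DefragmentBin(int fragmentCount) {
        this.fragmentCount = fragmentCount;
    }

    // Accumulate one fragment (one FlowFile) holding some number of records.
    void offer(int fragmentIndex, int recordsInFragment) {
        seenFragmentIndexes.add(fragmentIndex);
        recordCount += recordsInFragment;
    }

    // Complete when every fragment has arrived, NOT when recordCount
    // happens to reach fragmentCount, which was the bug.
    boolean isComplete() {
        return seenFragmentIndexes.size() == fragmentCount;
    }

    public static void main(String[] args) {
        DefragmentBin bin = new DefragmentBin(2);
        bin.offer(0, 2);   // first fragment holds 2 records
        System.out.println(bin.isComplete());  // false: 2 records != 2 fragments
        bin.offer(1, 2);
        System.out.println(bin.isComplete());  // true: both fragments arrived
    }
}
```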
[jira] [Created] (NIFI-6598) RemoteProcessGroup should utilize ManagedState to persist available peers
Koji Kawamura created NIFI-6598: --- Summary: RemoteProcessGroup should utilize ManagedState to persist available peers Key: NIFI-6598 URL: https://issues.apache.org/jira/browse/NIFI-6598 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Koji Kawamura Assignee: Koji Kawamura Currently NiFi persists available remote S2S peers into a local file when a RemoteProcessGroup connects to a remote cluster, in order to recover peers from that file when the RPG restarts communication next time. The default file location is '$NIFI_HOME/conf/state'. Although the location is configurable, in some deployments '$NIFI_HOME/conf' is not writable and NiFi fails to write the file. Ideally, instead of writing a local file, such state should be written to managed state as other components (Processors, ControllerServices) do. However, we should keep the option of storing peers in a local file for external tools that use SiteToSiteClient but cannot use managed state. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (NIFI-6158) FetchParquet : Unable to read parquet data which has decimal data type
[ https://issues.apache.org/jira/browse/NIFI-6158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6158: Fix Version/s: 1.10.0 Resolution: Fixed Status: Resolved (was: Patch Available) > FetchParquet : Unable to read parquet data which has decimal data type > -- > > Key: NIFI-6158 > URL: https://issues.apache.org/jira/browse/NIFI-6158 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.1 >Reporter: Prasad Dasari >Assignee: Bryan Bende >Priority: Major > Fix For: 1.10.0 > > Attachments: 00_0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Unable to read parquet data that has a decimal > data type (decimal(38,10)) using the FetchParquet processor. > Here is the error message: > Cannot convert value [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] of type > class org.apache.avro.generic.GenericData$Fixed because no compatible types > exist in the UNION for field my_amount > The parquet schema has the following data type: > fixed_len_byte_array(16) my_amount (DECIMAL(38,10)) > > Here is the stack trace: > 2019-03-27 17:47:02,255 ERROR [Timer-Driven Process Thread-42] > o.a.nifi.processors.parquet.FetchParquet > FetchParquet[id=c0836062-0169-1000--360a45b6] Failed to retrieve > content from xx for > StandardFlowFileRecord[uuid=6b65220b-3f77-460e-a5c4-e5298300a5c2,claim=,offset=0,name=part-0,size=0] > due to > org.apache.nifi.serialization.record.util.IllegalTypeConversionException: > Cannot convert value [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] of type > class org.apache.avro.generic.GenericData$Fixed because no compatible types > exist in the UNION for field ; routing to failure: > org.apache.nifi.serialization.record.util.IllegalTypeConversionException: > Cannot convert value [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] of type > class org.apache.avro.generic.GenericData$Fixed because no compatible types > exist in the UNION for field > 
org.apache.nifi.serialization.record.util.IllegalTypeConversionException: > Cannot convert value [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] of type > class org.apache.avro.generic.GenericData$Fixed because no compatible types > exist in the UNION for field x > at > org.apache.nifi.avro.AvroTypeUtil.convertUnionFieldValue(AvroTypeUtil.java:793) > at org.apache.nifi.avro.AvroTypeUtil.normalizeValue(AvroTypeUtil.java:879) > at > org.apache.nifi.avro.AvroTypeUtil.convertAvroRecordToMap(AvroTypeUtil.java:742) > at > org.apache.nifi.avro.AvroTypeUtil.convertAvroRecordToMap(AvroTypeUtil.java:715) > at > org.apache.nifi.processors.parquet.record.AvroParquetHDFSRecordReader.nextRecord(AvroParquetHDFSRecordReader.java:62) > at > org.apache.nifi.processors.hadoop.AbstractFetchHDFSRecord.lambda$null$0(AbstractFetchHDFSRecord.java:198) > at > org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2648) > at > org.apache.nifi.processors.hadoop.AbstractFetchHDFSRecord.lambda$onTrigger$1(AbstractFetchHDFSRecord.java:194) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1942) > at > org.apache.nifi.processors.hadoop.AbstractFetchHDFSRecord.onTrigger(AbstractFetchHDFSRecord.java:178) > at > org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at > org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165) > at > org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > -- This message was sent by Atlassian Jira (v8.3.2#803003)
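The failing value is recoverable by hand: a fixed_len_byte_array DECIMAL stores a big-endian two's-complement unscaled integer, which the declared scale turns into a BigDecimal. A sketch of that decoding, independent of NiFi's AvroTypeUtil:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class ParquetDecimalDecode {
    // Interprets the raw bytes of a fixed_len_byte_array(16) DECIMAL(38,10)
    // value: bytes -> two's-complement unscaled integer -> scaled BigDecimal.
    static BigDecimal decodeDecimal(byte[] fixedBytes, int scale) {
        return new BigDecimal(new BigInteger(fixedBytes), scale);
    }

    public static void main(String[] args) {
        // The all-zero value from the error message above decodes to zero.
        byte[] zeros = new byte[16];
        System.out.println(decodeDecimal(zeros, 10).toPlainString()); // 0.0000000000

        byte[] small = BigInteger.valueOf(12345678901L).toByteArray();
        System.out.println(decodeDecimal(small, 10).toPlainString()); // 1.2345678901
    }
}
```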
[jira] [Resolved] (NIFI-6509) Date related issue in unit test VolatileComponentStatusRepositoryTest
[ https://issues.apache.org/jira/browse/NIFI-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6509. - Fix Version/s: 1.10.0 Resolution: Fixed > Date related issue in unit test VolatileComponentStatusRepositoryTest > - > > Key: NIFI-6509 > URL: https://issues.apache.org/jira/browse/NIFI-6509 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Tamas Palfy >Assignee: Tamas Palfy >Priority: Minor > Fix For: 1.10.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Unit test > {{VolatileComponentStatusRepositoryTest.testFilterDatesUsingPreferredDataPoints}} > may fail with the following: > {code:java} > java.lang.AssertionError: > Expected :Thu Jan 01 00:00:00 CET 1970 > Actual :Thu Jan 01 01:00:00 CET 1970 > {code} > The test creates a {{VolatileComponentStatusRepository}} instance and adds > {{java.util.Date}} objects to it starting from epoch (via {{new Date(0)}}). > This first date at epoch is the _Actual_ in the _AssertionError_. > Then it filters this list by looking for those that are earlier than or match a > _start_ parameter. This _start_ is created from a {{LocalDateTime}} at the > default system time zone. > This is the _Expected_ in the _AssertionError_. > In general the issue is the difference in how the list is created (dates that > are 00:00:00 GMT) and how the filter parameter date is created (00:00:00 at > system time zone). -- This message was sent by Atlassian Jira (v8.3.2#803003)
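The mismatch described above can be shown in a few lines: new Date(0) is midnight at epoch in UTC, while the same LocalDateTime interpreted at the system default zone only matches when that zone has zero offset from UTC, which is why the test passes or fails depending on where it runs.

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.util.Date;

public class EpochZoneDemo {
    // 1970-01-01T00:00 mapped back to an instant at UTC: always the epoch.
    static Date epochViaUtc() {
        return Date.from(LocalDateTime.of(1970, 1, 1, 0, 0).toInstant(ZoneOffset.UTC));
    }

    // The same wall-clock time interpreted at the system default zone:
    // shifted by the zone's offset, e.g. one hour in CET.
    static Date epochViaSystemZone() {
        return Date.from(LocalDateTime.of(1970, 1, 1, 0, 0)
                .atZone(ZoneId.systemDefault()).toInstant());
    }

    public static void main(String[] args) {
        Date epoch = new Date(0);
        System.out.println(epoch.equals(epochViaUtc()));        // always true
        System.out.println(epoch.equals(epochViaSystemZone())); // true only at offset 0
    }
}
```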
[jira] [Resolved] (NIFI-6508) Test failures caused by timezone differences
[ https://issues.apache.org/jira/browse/NIFI-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6508. - Fix Version/s: 1.10.0 Resolution: Fixed Thank you [~malthe] for fixing the test! The change looks good, +1. Merged to master. Also, I've added NiFi Jira 'Contributor' role to your account. You can assign yourself to any NiFi Jira now. > Test failures caused by timezone differences > > > Key: NIFI-6508 > URL: https://issues.apache.org/jira/browse/NIFI-6508 > Project: Apache NiFi > Issue Type: Test > Components: Core Framework >Reporter: Malthe Borch >Assignee: Malthe Borch >Priority: Minor > Fix For: 1.10.0 > > Attachments: > 0001-NIFI-6508-Fix-test-failure-caused-by-timezone-and-or.patch > > > Some tests in {{VolatileComponentStatusRepositoryTest}} fail due to incorrect > timezone conversion, for example {{testFilterDatesUsingStartFilter}}: > {{java.lang.AssertionError: expected: but > was: at > org.apache.nifi.controller.status.history.VolatileComponentStatusRepositoryTest.testFilterDatesUsingStartFilter(VolatileComponentStatusRepositoryTest.java:132)}} -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Assigned] (NIFI-6508) Test failures caused by timezone differences
[ https://issues.apache.org/jira/browse/NIFI-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6508: --- Assignee: Malthe Borch > Test failures caused by timezone differences > > > Key: NIFI-6508 > URL: https://issues.apache.org/jira/browse/NIFI-6508 > Project: Apache NiFi > Issue Type: Test > Components: Core Framework >Reporter: Malthe Borch >Assignee: Malthe Borch >Priority: Minor > Attachments: > 0001-NIFI-6508-Fix-test-failure-caused-by-timezone-and-or.patch > > > Some tests in {{VolatileComponentStatusRepositoryTest}} fail due to incorrect > timezone conversion, for example {{testFilterDatesUsingStartFilter}}: > {{java.lang.AssertionError: expected: but > was: at > org.apache.nifi.controller.status.history.VolatileComponentStatusRepositoryTest.testFilterDatesUsingStartFilter(VolatileComponentStatusRepositoryTest.java:132)}} -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (NIFI-6477) The 'operate the component' policy is not removed when component is removed
[ https://issues.apache.org/jira/browse/NIFI-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6477: Fix Version/s: 1.10.0 Resolution: Fixed Status: Resolved (was: Patch Available) > The 'operate the component' policy is not removed when component is removed > --- > > Key: NIFI-6477 > URL: https://issues.apache.org/jira/browse/NIFI-6477 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Mark Bean >Assignee: Koji Kawamura >Priority: Major > Fix For: 1.10.0 > > Time Spent: 40m > Remaining Estimate: 0h > > The 'operate the component' access policy is not properly cleaned up when its > corresponding component is removed. Specifically, the policy still remains in > authorizations.xml. > This makes it impossible to view the user's set of policies. In the UI, when > selecting the key icon for the user (in the full users list), the policy > window does not appear - presumably due to some exception that gets > swallowed. Nothing reported in the logs either. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (NIFI-6477) The 'operate the component' policy is not removed when component is removed
[ https://issues.apache.org/jira/browse/NIFI-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6477: Status: Patch Available (was: Open) Thank you for reporting the issue [~markbean]! I've submitted a PR to fix this. > The 'operate the component' policy is not removed when component is removed > --- > > Key: NIFI-6477 > URL: https://issues.apache.org/jira/browse/NIFI-6477 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Mark Bean >Assignee: Koji Kawamura >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > The 'operate the component' access policy is not properly cleaned up when its > corresponding component is removed. Specifically, the policy still remains in > authorizations.xml. > This makes it impossible to view the user's set of policies. In the UI, when > selecting the key icon for the user (in the full users list), the policy > window does not appear - presumably due to some exception that gets > swallowed. Nothing reported in the logs either. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (NIFI-6477) The 'operate the component' policy is not removed when component is removed
[ https://issues.apache.org/jira/browse/NIFI-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6477: --- Assignee: Koji Kawamura > The 'operate the component' policy is not removed when component is removed > --- > > Key: NIFI-6477 > URL: https://issues.apache.org/jira/browse/NIFI-6477 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Mark Bean >Assignee: Koji Kawamura >Priority: Major > > The 'operate the component' access policy is not properly cleaned up when its > corresponding component is removed. Specifically, the policy still remains in > authorizations.xml. > This makes it impossible to view the user's set of policies. In the UI, when > selecting the key icon for the user (in the full users list), the policy > window does not appear - presumably due to some exception that gets > swallowed. Nothing reported in the logs either. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFIREG-306) NiFi Registry should implement context path verification similar to NiFi
Koji Kawamura created NIFIREG-306: - Summary: NiFi Registry should implement context path verification similar to NiFi Key: NIFIREG-306 URL: https://issues.apache.org/jira/browse/NIFIREG-306 Project: NiFi Registry Issue Type: Bug Reporter: Koji Kawamura NIFIREG-295 made it possible to access NiFi Registry behind a reverse proxy. In such deployments, NiFi Registry can be accessed with a context path configured at the reverse proxy. The NiFi API verifies the context path in WebUtils.verifyContextPath() using whitelisted context paths configured at 'nifi.web.proxy.context.path' in nifi.properties. The NiFi Registry API should do the same context path verification. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
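The NiFi-side check referenced above (WebUtils.verifyContextPath) validates an incoming proxy context path against a configured whitelist. A minimal sketch of that kind of check, with hypothetical names (the actual NiFi Registry implementation and its property name may differ):

```java
import java.util.List;

public class ContextPathVerifier {
    // Sketch only: 'allowed' would be parsed from a property analogous to
    // 'nifi.web.proxy.context.path' in nifi.properties.
    public static void verifyContextPath(List<String> allowed, String contextPath) {
        if (contextPath == null || contextPath.trim().isEmpty()) {
            return; // no proxy context path supplied; nothing to verify
        }
        if (!allowed.contains(contextPath)) {
            throw new IllegalArgumentException(
                    "The provided context path [" + contextPath + "] was not whitelisted");
        }
    }
}
```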
[jira] [Assigned] (NIFIREG-303) TypeError: Cannot read property 'innerHTML' of undefined occurs with español browser
[ https://issues.apache.org/jira/browse/NIFIREG-303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFIREG-303: - Assignee: Koji Kawamura > TypeError: Cannot read property 'innerHTML' of undefined occurs with español > browser > > > Key: NIFIREG-303 > URL: https://issues.apache.org/jira/browse/NIFIREG-303 > Project: NiFi Registry > Issue Type: Bug >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > NiFi Registry UI throws a TypeError when it's accessed with a browser using > Spanish (español). > The UI tries to load a translation XML config file if the user's browser locale is > not 'en-US'. Currently, a translation is only available for español (es). With > other languages, the translation part will not be activated. > The UI expects messages.es.xlf to be returned as an XML DOM object, and accesses > its data with "documentElement.innerHTML". But the returned object is a > String. > [https://github.com/apache/nifi-registry/blob/master/nifi-registry-core/nifi-registry-web-ui/src/main/webapp/nf-registry-bootstrap.js#L64] -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFIREG-303) TypeError: Cannot read property 'innerHTML' of undefined occurs with español browser
Koji Kawamura created NIFIREG-303: - Summary: TypeError: Cannot read property 'innerHTML' of undefined occurs with español browser Key: NIFIREG-303 URL: https://issues.apache.org/jira/browse/NIFIREG-303 Project: NiFi Registry Issue Type: Bug Reporter: Koji Kawamura NiFi Registry UI throws a TypeError when it's accessed with a browser using Spanish (español). The UI tries to load a translation XML config file if the user's browser locale is not 'en-US'. Currently, a translation is only available for español (es). With other languages, the translation part will not be activated. The UI expects messages.es.xlf to be returned as an XML DOM object, and accesses its data with "documentElement.innerHTML". But the returned object is a String. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFIREG-303) TypeError: Cannot read property 'innerHTML' of undefined occurs with español browser
[ https://issues.apache.org/jira/browse/NIFIREG-303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFIREG-303: -- Description: NiFi Registry UI throws a TypeError when it's accessed with a browser using Spanish español. UI tries to load a translation XML config file if user's browser locale is not 'en-US'. Currently translation for español (es) is only available. With other languages, the translation part will not be activated. The UI expects messages.es.xlf is returned as a XML dom object, and access its data with "documentElement.innerHTML". But the returned object is a String. [https://github.com/apache/nifi-registry/blob/master/nifi-registry-core/nifi-registry-web-ui/src/main/webapp/nf-registry-bootstrap.js#L64] was: NiFi Registry UI throws a TypeError when it's accessed with a browser using Spanish español. UI tries to load a translation XML config file if user's browser locale is not 'en-US'. Currently translation for español (es) is only available. With other languages, the translation part will not be activated. The UI expects messages.es.xlf is returned as a XML dom object, and access its data with "documentElement.innerHTML". But the returned object is a String. > TypeError: Cannot read property 'innerHTML' of undefined occurs with español > browser > > > Key: NIFIREG-303 > URL: https://issues.apache.org/jira/browse/NIFIREG-303 > Project: NiFi Registry > Issue Type: Bug >Reporter: Koji Kawamura >Priority: Major > > NiFi Registry UI throws a TypeError when it's accessed with a browser using > Spanish español. > UI tries to load a translation XML config file if user's browser locale is > not 'en-US'. Currently translation for español (es) is only available. With > other languages, the translation part will not be activated. > The UI expects messages.es.xlf is returned as a XML dom object, and access > its data with "documentElement.innerHTML". But the returned object is a > String. 
> [https://github.com/apache/nifi-registry/blob/master/nifi-registry-core/nifi-registry-web-ui/src/main/webapp/nf-registry-bootstrap.js#L64] -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6479) Fix TestJdbcCommon timezone issues
[ https://issues.apache.org/jira/browse/NIFI-6479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6479: Resolution: Fixed Fix Version/s: 1.10.0 Status: Resolved (was: Patch Available) > Fix TestJdbcCommon timezone issues > -- > > Key: NIFI-6479 > URL: https://issues.apache.org/jira/browse/NIFI-6479 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Jeff Storck >Assignee: Jeff Storck >Priority: Major > Fix For: 1.10.0 > > Time Spent: 50m > Remaining Estimate: 0h > > TestJdbcCommon.testConvertToAvroStreamForDateTimeAsLogicalType() is failing > with timezone issues. > The workaround for this issue is to add > {code:java} > -Dmaven.surefire.arguments=-Duser.timezone=UTC > {code} > when running maven. If {{-Dmaven.surefire.arguments}} is already specified > as part of the build, add {{-Duser.timezone=UTC}} to the value supplied to > that parameter. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (NIFIREG-295) Add support for proxying via Apache Knox
[ https://issues.apache.org/jira/browse/NIFIREG-295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFIREG-295: - Assignee: Koji Kawamura > Add support for proxying via Apache Knox > > > Key: NIFIREG-295 > URL: https://issues.apache.org/jira/browse/NIFIREG-295 > Project: NiFi Registry > Issue Type: New Feature >Reporter: Kevin Risden >Assignee: Koji Kawamura >Priority: Major > > Apache NiFi supports proxying via Apache Knox, but NiFi Registry does not. > This would make single sign on between the different servers seamless. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6407) Support useAvroLogicalTypes in the PutBigQueryBatch Processor
[ https://issues.apache.org/jira/browse/NIFI-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6407: Resolution: Fixed Status: Resolved (was: Patch Available) > Support useAvroLogicalTypes in the PutBigQueryBatch Processor > - > > Key: NIFI-6407 > URL: https://issues.apache.org/jira/browse/NIFI-6407 > Project: Apache NiFi > Issue Type: Wish > Components: Extensions >Affects Versions: 1.9.2 >Reporter: John >Assignee: Pierre Villard >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > Would be great if PutBigQueryBatch supported the Google useAvroLogicalTypes > option, similar to https://issues.apache.org/jira/browse/AIRFLOW-3541 -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6407) Support useAvroLogicalTypes in the PutBigQueryBatch Processor
[ https://issues.apache.org/jira/browse/NIFI-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6407: Fix Version/s: 1.10.0 > Support useAvroLogicalTypes in the PutBigQueryBatch Processor > - > > Key: NIFI-6407 > URL: https://issues.apache.org/jira/browse/NIFI-6407 > Project: Apache NiFi > Issue Type: Wish > Components: Extensions >Affects Versions: 1.9.2 >Reporter: John >Assignee: Pierre Villard >Priority: Major > Fix For: 1.10.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Would be great if PutBigQueryBatch supported the Google useAvroLogicalTypes > option, similar to https://issues.apache.org/jira/browse/AIRFLOW-3541 -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (NIFI-6487) ListS3 processor doesn't add S3 User Metadata to the Flow File
[ https://issues.apache.org/jira/browse/NIFI-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6487. - Resolution: Fixed Fix Version/s: 1.10.0 [~jf.beauvais] I've added NiFi JIRA's 'Contributor' role to your account. You can assign yourself to NiFi JIRAs. Thanks for your contribution! > ListS3 processor doesn't add S3 User Metadata to the Flow File > -- > > Key: NIFI-6487 > URL: https://issues.apache.org/jira/browse/NIFI-6487 > Project: Apache NiFi > Issue Type: Wish > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: JF Beauvais >Assignee: JF Beauvais >Priority: Major > Fix For: 1.10.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Currently there is the possibility to add S3 tags to the flow files > attributes by setting "Write Object Tags" to true. > It could be also interesting to add S3 Object "User metadata" as well -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (NIFI-6487) ListS3 processor doesn't add S3 User Metadata to the Flow File
[ https://issues.apache.org/jira/browse/NIFI-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6487: --- Assignee: JF Beauvais > ListS3 processor doesn't add S3 User Metadata to the Flow File > -- > > Key: NIFI-6487 > URL: https://issues.apache.org/jira/browse/NIFI-6487 > Project: Apache NiFi > Issue Type: Wish > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: JF Beauvais >Assignee: JF Beauvais >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > Currently there is the possibility to add S3 tags to the flow files > attributes by setting "Write Object Tags" to true. > It could be also interesting to add S3 Object "User metadata" as well -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (NIFI-6490) MergeRecord properties MIN_RECORDS and MAX_RECORDS should accept variable registry expression language
[ https://issues.apache.org/jira/browse/NIFI-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6490. - Resolution: Fixed Fix Version/s: 1.10.0 [~AxelSync], I've added the NiFi Jira projects' 'Contributor' role to your account. Now you can assign yourself to NiFi JIRAs. Thanks for your contribution! > MergeRecord properties MIN_RECORDS and MAX_RECORDS should accept variable > registry expression language > -- > > Key: NIFI-6490 > URL: https://issues.apache.org/jira/browse/NIFI-6490 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Alessandro D'Armiento >Assignee: Alessandro D'Armiento >Priority: Minor > Fix For: 1.10.0 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > h2. Current Situation > MergeRecord provides two properties, MIN_RECORDS and MAX_RECORDS, to define how > many records a merged bin can contain. > These properties, however, do not support expression language and cannot be > inserted from variables. > h2. Improvement Proposal > Accept variable registry in these properties -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (NIFI-6490) MergeRecord properties MIN_RECORDS and MAX_RECORDS should accept variable registry expression language
[ https://issues.apache.org/jira/browse/NIFI-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6490: --- Assignee: Alessandro D'Armiento > MergeRecord properties MIN_RECORDS and MAX_RECORDS should accept variable > registry expression language > -- > > Key: NIFI-6490 > URL: https://issues.apache.org/jira/browse/NIFI-6490 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Alessandro D'Armiento >Assignee: Alessandro D'Armiento >Priority: Minor > Time Spent: 2h 20m > Remaining Estimate: 0h > > h2. Current Situation > MergeRecords allows two attributes MIN_RECORDS and MAX_RECORDS to define how > many records a merged bin can contain. > These properties, however, do not support expression language and cannot be > inserted from variables. > h2. Improvement Proposal > Accept variable registry in these properties -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFI-6507) ConsumeWindowsEventLog should renew failed subscription
Koji Kawamura created NIFI-6507: --- Summary: ConsumeWindowsEventLog should renew failed subscription Key: NIFI-6507 URL: https://issues.apache.org/jira/browse/NIFI-6507 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Koji Kawamura Assignee: Koji Kawamura The current implementation has some code for the specific 15011 error code. The processor uses the EvtSubscribeStrict flag, which produces an ERROR_EVT_QUERY_RESULT_STALE (15011) event when event records are missing. Currently, the processor only logs the error code but does not renew the subscription. [https://docs.microsoft.com/en-us/windows/desktop/api/winevt/nc-winevt-evt_subscribe_callback] When error 15011 happens, the processor stops reading further events. It looks as if the processor hangs. The processor doesn't renew the subscription because it thinks it already has a valid one. The current implementation determines if a subscription is valid by these lines of code: {code:java} private boolean isSubscribed() { return subscriptionHandle != null && subscriptionHandle.getPointer() != null; }{code} [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-windows-event-log-bundle/nifi-windows-event-log-processors/src/main/java/org/apache/nifi/processors/windows/event/log/ConsumeWindowsEventLog.java#L242-L244] If already subscribed, the processor polls received messages from the internal queue. But since the subscription has encountered an error, no further messages are available. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
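One way to address this (a hypothetical sketch, not necessarily the committed fix) is to record callback errors in the subscription state, so that isSubscribed() reports a stale subscription and the processor re-subscribes on the next trigger:

```java
// Hypothetical model of the subscription state; the real processor holds a
// Win32 event subscription handle rather than a plain Object.
public class SubscriptionState {
    private Object handle;
    private volatile boolean failed;

    public void onSubscribed(Object handle) {
        this.handle = handle;
        this.failed = false;
    }

    // Invoked from the subscription callback on errors such as
    // ERROR_EVT_QUERY_RESULT_STALE (15011).
    public void onError(int errorCode) {
        this.failed = true;
    }

    // A failed subscription no longer counts as subscribed, so the
    // processor would tear it down and create a new one instead of
    // polling an internal queue that will never receive messages.
    public boolean isSubscribed() {
        return handle != null && !failed;
    }
}
```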
[jira] [Resolved] (NIFI-6489) Remove references to HipChat from website
[ https://issues.apache.org/jira/browse/NIFI-6489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6489. - Resolution: Fixed Thanks [~andrewmlim]! I've merged the PR and deployed the change. > Remove references to HipChat from website > - > > Key: NIFI-6489 > URL: https://issues.apache.org/jira/browse/NIFI-6489 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Example page: https://nifi.apache.org/mailing_lists.html -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (NIFI-6460) Remove references to HipChat in docs
[ https://issues.apache.org/jira/browse/NIFI-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6460. - Resolution: Fixed Fix Version/s: 1.10.0 > Remove references to HipChat in docs > > > Key: NIFI-6460 > URL: https://issues.apache.org/jira/browse/NIFI-6460 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation Website >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Major > Fix For: 1.10.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Example instances are in the README -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6318) Support EL in CSV formatting properties
[ https://issues.apache.org/jira/browse/NIFI-6318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6318: Resolution: Fixed Status: Resolved (was: Patch Available) > Support EL in CSV formatting properties > --- > > Key: NIFI-6318 > URL: https://issues.apache.org/jira/browse/NIFI-6318 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Peter Turcsanyi >Assignee: Peter Turcsanyi >Priority: Major > Time Spent: 4.5h > Remaining Estimate: 0h > > Improve CSV components to support dynamic configuration of the CSV delimiter > and other formatting parameters via expression language / flowfile attributes. > Components: > - CSVReader > - CSVRecordSetWriter > - ConvertExcelToCSVProcessor > Properties: > - delimiter > - quote character > - escape character -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6318) Support EL in CSV formatting properties
[ https://issues.apache.org/jira/browse/NIFI-6318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6318: Fix Version/s: 1.10.0 > Support EL in CSV formatting properties > --- > > Key: NIFI-6318 > URL: https://issues.apache.org/jira/browse/NIFI-6318 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Peter Turcsanyi >Assignee: Peter Turcsanyi >Priority: Major > Fix For: 1.10.0 > > Time Spent: 4.5h > Remaining Estimate: 0h > > Improve CSV components to support dynamic configuration of the CSV delimiter > and other formatting parameters via expression language / flowfile attributes. > Components: > - CSVReader > - CSVRecordSetWriter > - ConvertExcelToCSVProcessor > Properties: > - delimiter > - quote character > - escape character -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (NIFI-6442) ExecuteSQL/ExecuteSQLRecord convert to Avro date type incorrectly when set 'Use Avro Logical Types' to true
[ https://issues.apache.org/jira/browse/NIFI-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6442. - Resolution: Fixed Fix Version/s: 1.10.0 Thank you [~archon] for reporting and fixing this issue! I've added the NiFi Jira project's 'Contributor' role to your Jira account so that you can assign yourself to NiFi JIRAs. > ExecuteSQL/ExecuteSQLRecord convert to Avro date type incorrectly when set > 'Use Avro Logical Types' to true > --- > > Key: NIFI-6442 > URL: https://issues.apache.org/jira/browse/NIFI-6442 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: archon gum >Assignee: archon gum >Priority: Major > Fix For: 1.10.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > > '2019-01-01' will be considered to be '2019-01-01 00:00:00', which is 154630080L > milliseconds in UTC. > But in another time zone such as '+08:00', running "select date('2019-01-01')" and > result.getObject() or result.getDate() will return a java.sql.Date object, but > java.sql.Date.getTime() returns 154627200L instead of 154630080L, which > is 8 hours earlier. > Currently, ExecuteSQL returns "(java.sql.Date.getTime() - 0) / 8640" as > epoch days -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6442) ExecuteSQL/ExecuteSQLRecord convert to Avro date type incorrectly when set 'Use Avro Logical Types' to true
[ https://issues.apache.org/jira/browse/NIFI-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6442: Priority: Major (was: Blocker) > ExecuteSQL/ExecuteSQLRecord convert to Avro date type incorrectly when set > 'Use Avro Logical Types' to true > --- > > Key: NIFI-6442 > URL: https://issues.apache.org/jira/browse/NIFI-6442 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: archon gum >Assignee: archon gum >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > > '2019-01-01' will consider to be '2019-01-01 00:00:00' which 154630080L > milliseconds in UTC. > But in other time zone such as '+08:00', do "select date('2019-01-01')" and > result.getObject() or result.getDate() will return java.sql.Date object but > java.sql.Date.getTime() return 154627200L instead of 154630080L which > it's 8 hours earlier. > Currently, ExecuteSQL return "(java.sql.Date.getTime() - 0) / 8640" as > epoch days -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (NIFI-6442) ExecuteSQL/ExecuteSQLRecord convert to Avro date type incorrectly when set 'Use Avro Logical Types' to true
[ https://issues.apache.org/jira/browse/NIFI-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6442: --- Assignee: archon gum > ExecuteSQL/ExecuteSQLRecord convert to Avro date type incorrectly when set > 'Use Avro Logical Types' to true > --- > > Key: NIFI-6442 > URL: https://issues.apache.org/jira/browse/NIFI-6442 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: archon gum >Assignee: archon gum >Priority: Blocker > Time Spent: 20m > Remaining Estimate: 0h > > > '2019-01-01' will consider to be '2019-01-01 00:00:00' which 154630080L > milliseconds in UTC. > But in other time zone such as '+08:00', do "select date('2019-01-01')" and > result.getObject() or result.getDate() will return java.sql.Date object but > java.sql.Date.getTime() return 154627200L instead of 154630080L which > it's 8 hours earlier. > Currently, ExecuteSQL return "(java.sql.Date.getTime() - 0) / 8640" as > epoch days -- This message was sent by Atlassian JIRA (v7.6.14#76016)
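The zone sensitivity in NIFI-6442 comes from deriving epoch days via java.sql.Date.getTime(), which reflects local midnight in the JVM's default zone. The sketch below contrasts that with a zone-independent conversion through LocalDate; this is illustrative only and may not match the committed fix exactly:

```java
import java.sql.Date;

public class EpochDaysDemo {
    // Zone-sensitive: getTime() is the UTC millis of *local* midnight, so
    // integer division by a day's millis lands on the previous day for
    // zones east of UTC.
    public static long epochDaysViaMillis(Date date) {
        return date.getTime() / 86_400_000L;
    }

    // Zone-independent: go through LocalDate, which carries no zone offset.
    public static long epochDaysViaLocalDate(Date date) {
        return date.toLocalDate().toEpochDay();
    }
}
```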
[jira] [Updated] (NIFI-6334) PutBigqueryBatch Throws Java Error
[ https://issues.apache.org/jira/browse/NIFI-6334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6334: Resolution: Fixed Fix Version/s: 1.10.0 Status: Resolved (was: Patch Available) > PutBigqueryBatch Throws Java Error > -- > > Key: NIFI-6334 > URL: https://issues.apache.org/jira/browse/NIFI-6334 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Christopher Gambino >Assignee: Pierre Villard >Priority: Minor > Fix For: 1.10.0 > > Attachments: image-2019-05-30-14-20-39-647.png > > Original Estimate: 2h > Time Spent: 40m > Remaining Estimate: 1h 20m > > PutBigqueryBatch Throws a java.lang.UnsupportedOperationException when no > "Project ID" is entered. Entering a "Project ID" attribute resolves the > error and the processor functions normally. Recommend making it a required > property to resolve issue > !image-2019-05-30-14-20-39-647.png! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-5970) PutSQL with batch size > 1 + DBCPConnectionPoolLookup results in missing database.name
[ https://issues.apache.org/jira/browse/NIFI-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-5970: Status: Patch Available (was: In Progress) > PutSQL with batch size > 1 + DBCPConnectionPoolLookup results in missing > database.name > -- > > Key: NIFI-5970 > URL: https://issues.apache.org/jira/browse/NIFI-5970 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.8.0 >Reporter: Bryan Bende >Assignee: Koji Kawamura >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > When the batch size is greater than 1 we end up passing null for the > attributes that get passed in to the DBCPConnectionPoolLookup, this is > because the attributes of each flow file could be different: > [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-extension-utils/nifi-processor-utils/src/main/java/org/apache/nifi/processor/util/pattern/Put.java#L97] > Part of the issue is that at this point in the code we have no idea if we are > using a DBCPConnectionPoolLookup that requires the database.name attribute, > or just a regular DBCPConnectionPool that doesn't. > https://stackoverflow.com/questions/54312773/dbcpconnectionpoollookup-complaining-about-missing-database-name-even-when-it-is -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (NIFI-6438) PutSQL complaining about missing database.name even when it is set
[ https://issues.apache.org/jira/browse/NIFI-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6438. - Resolution: Fixed [~Behrouz] Thanks for reporting this issue. As you already noticed, NIFI-5970 is a duplicate. I will try fixing this issue with NIFI-5970. > PutSQL complaining about missing database.name even when it is set > -- > > Key: NIFI-6438 > URL: https://issues.apache.org/jira/browse/NIFI-6438 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.9.2 > Environment: Linux >Reporter: Behrouz >Priority: Major > Fix For: 1.9.2 > > > I am going to use PutSQL with DBCPConnectionPoolLookup, but sometimes PutSQL > doesn't process the incoming flow and shows this error: > {code:java} > Failed to process session due to Attributes must contain an attribute name > 'database.name': org.apache.nifi.processor.exception.ProcessException: > Attributes must contain an attribute name 'database.name'{code} > I set the 'database.name' correctly but it doesn't work. > I use several PutSQL processors in the flow; some of them work but a few > don't. > This error happened when I sent several (more than 60) FlowFiles to the > PutSQL processor. > Somebody else has reported this issue: > [http://apache-nifi.1125220.n5.nabble.com/DBCPConnectionPool-Not-looking-up-td25170.html] -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (NIFI-5970) PutSQL with batch size > 1 + DBCPConnectionPoolLookup results in missing database.name
[ https://issues.apache.org/jira/browse/NIFI-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-5970: --- Assignee: Koji Kawamura > PutSQL with batch size > 1 + DBCPConnectionPoolLookup results in missing > database.name > -- > > Key: NIFI-5970 > URL: https://issues.apache.org/jira/browse/NIFI-5970 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.8.0 >Reporter: Bryan Bende >Assignee: Koji Kawamura >Priority: Minor > > When the batch size is greater than 1 we end up passing null for the > attributes that get passed in to the DBCPConnectionPoolLookup, this is > because the attributes of each flow file could be different: > [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-extension-utils/nifi-processor-utils/src/main/java/org/apache/nifi/processor/util/pattern/Put.java#L97] > Part of the issue is that at this point in the code we have no idea if we are > using a DBCPConnectionPoolLookup that requires the database.name attribute, > or just a regular DBCPConnectionPool that doesn't. > https://stackoverflow.com/questions/54312773/dbcpconnectionpoollookup-complaining-about-missing-database-name-even-when-it-is -- This message was sent by Atlassian JIRA (v7.6.14#76016)
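Since each FlowFile in a batch may carry a different database.name, one possible direction (a hypothetical sketch, not necessarily the committed fix) is to group the batch by the lookup attribute so every sub-batch resolves to a single connection:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BatchGrouping {
    // Stand-in for FlowFiles: just their attribute maps. Grouping by the
    // 'database.name' attribute lets each sub-batch share one looked-up
    // connection instead of passing null attributes to the lookup service.
    public static Map<String, List<Map<String, String>>> groupByDatabase(
            List<Map<String, String>> flowFiles) {
        return flowFiles.stream()
                .collect(Collectors.groupingBy(ff -> ff.getOrDefault("database.name", "")));
    }
}
```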
[jira] [Updated] (NIFI-6432) HBase map cache service ignores column family and qualifiers on get
[ https://issues.apache.org/jira/browse/NIFI-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6432: Resolution: Fixed Status: Resolved (was: Patch Available) > HBase map cache service ignores column family and qualifiers on get > --- > > Key: NIFI-6432 > URL: https://issues.apache.org/jira/browse/NIFI-6432 > Project: Apache NiFi > Issue Type: Bug >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Major > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The HBase_1_1_2_ClientMapCacheService (and the 2.x equivalent) has properties > for Column Family and Column Qualifier. These are used in the put method when > inserting a new cache entry. However, they are not used in the get method. > This means that if the row has other families and qualifiers, then you may > not get back the correct value. > This also applies to the containsKey method. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16885747#comment-16885747 ] Koji Kawamura commented on NIFI-6439: - I had the exact same exception recently, but I don't remember how I fixed it or what the root cause was... I think I got the error while building NiFi again and again while switching between different Git branches and Java versions (Java 8 and 11). Which JDK version are you using? The current NiFi master branch requires Java 8 to build and run. NIFI-5176 tracks progress on supporting Java 11. One more possible cause is an unstable network. I remember that I encountered a WiFi connectivity issue, and it may have caused the XmlBeanDefinitionStoreException because the external DTD file couldn't be loaded from the internet. So my suggestions are to check the Java version and network connectivity. Hope this helps. > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build > Environment: Linux, >Reporter: Zinan Ma >Priority: Major > Labels: build > Attachments: image-2019-07-15-08-21-51-633.png > > Original Estimate: 96h > Remaining Estimate: 96h > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|https://nifi.apache.org/quickstart.html] > When I do mvn -T C2.0 clean install, the test cases failed (some Spring context test cases). I then did a mvn clean and > mvn install -DskipTests > I successfully built it, but then when I ran ./nifi.sh start, NiFi could not start, so I checked the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context 
initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. Any help would be greatly appreciated! > Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6436) StandardPublicPort throws NullPointerException when it reports a bulletin event
[ https://issues.apache.org/jira/browse/NIFI-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6436: Description: NIFI-2933 changed StandardPublicPort to use the 'processGroup' variable passed as its constructor argument at the EventReporter.reportEvent() method, but that variable is null when a public port is instantiated. EventReporter.reportEvent() should get the current ProcessGroup by calling getProcessGroup() method each time. If an error occurs while a public port is processing a request, it fails to report a bulletin event due to this NullPointerException. Due to this issue, public ports cannot report error details to users via bulletin messages like the following screenshot. !image-2019-07-12-18-45-11-524.png|width=680! was: NIFI-2933 changed StandardPublicPort to use the 'processGroup' variable passed as its constructor argument at the EventReporter.reportEvent() method, but that variable is null when a public port is instantiated. EventReporter.reportEvent() should get the current ProcessGroup by calling getProcessGroup() method each time. > StandardPublicPort throws NullPointerException when it reports a bulletin > event > --- > > Key: NIFI-6436 > URL: https://issues.apache.org/jira/browse/NIFI-6436 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.10.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Attachments: image-2019-07-12-18-45-11-524.png > > > NIFI-2933 changed StandardPublicPort to use the 'processGroup' variable > passed as its constructor argument at the EventReporter.reportEvent() method, > but that variable is null when a public port is instantiated. > EventReporter.reportEvent() should get the current ProcessGroup by calling > getProcessGroup() method each time. > If an error occurs while a public port is processing a request, it fails to > report a bulletin event due to this NullPointerException. 
Due to this issue, > public ports cannot report error details to users via bulletin messages like the > following screenshot. > !image-2019-07-12-18-45-11-524.png|width=680! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
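The fix described above, resolving the current ProcessGroup each time reportEvent() runs instead of capturing a constructor argument that is still null, can be sketched generically. A Supplier stands in for calling getProcessGroup(); the class and method names are illustrative, not the NiFi framework API.

```java
import java.util.Objects;
import java.util.function.Supplier;

// Sketch of the fix: do not capture a value that is null at construction time;
// resolve it through a supplier on every report. 'groupSupplier' stands in for
// calling getProcessGroup() inside each reportEvent() invocation.
public class LazyReporterSketch {
    private final Supplier<String> groupSupplier;

    LazyReporterSketch(Supplier<String> groupSupplier) {
        this.groupSupplier = groupSupplier;
    }

    String reportEvent(String message) {
        // Resolved at call time, after the port has been fully initialized.
        String group = Objects.requireNonNull(groupSupplier.get(), "process group not set yet");
        return "[" + group + "] " + message;
    }
}
```

Capturing `groupSupplier.get()` in the constructor instead would freeze the null value and reproduce the NullPointerException.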
[jira] [Updated] (NIFI-6436) StandardPublicPort throws NullPointerException when it reports a bulletin event
[ https://issues.apache.org/jira/browse/NIFI-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6436: Attachment: image-2019-07-12-18-45-11-524.png > StandardPublicPort throws NullPointerException when it reports a bulletin > event > --- > > Key: NIFI-6436 > URL: https://issues.apache.org/jira/browse/NIFI-6436 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.10.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Attachments: image-2019-07-12-18-45-11-524.png > > > NIFI-2933 changed StandardPublicPort to use the 'processGroup' variable > passed as its constructor argument at the EventReporter.reportEvent() method, > but that variable is null when a public port is instantiated. > EventReporter.reportEvent() should get the current ProcessGroup by calling > getProcessGroup() method each time. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFI-6436) StandardPublicPort throws NullPointerException when it reports a bulletin event
Koji Kawamura created NIFI-6436: --- Summary: StandardPublicPort throws NullPointerException when it reports a bulletin event Key: NIFI-6436 URL: https://issues.apache.org/jira/browse/NIFI-6436 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Koji Kawamura Assignee: Koji Kawamura NIFI-2933 changed StandardPublicPort to use the 'processGroup' variable passed as its constructor argument at the EventReporter.reportEvent() method, but that variable is null when a public port is instantiated. EventReporter.reportEvent() should get the current ProcessGroup by calling getProcessGroup() method each time. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6436) StandardPublicPort throws NullPointerException when it reports a bulletin event
[ https://issues.apache.org/jira/browse/NIFI-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6436: Affects Version/s: 1.10.0 > StandardPublicPort throws NullPointerException when it reports a bulletin > event > --- > > Key: NIFI-6436 > URL: https://issues.apache.org/jira/browse/NIFI-6436 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.10.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > NIFI-2933 changed StandardPublicPort to use the 'processGroup' variable > passed as its constructor argument at the EventReporter.reportEvent() method, > but that variable is null when a public port is instantiated. > EventReporter.reportEvent() should get the current ProcessGroup by calling > getProcessGroup() method each time. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (NIFI-6428) CaptureChangeMySQL throws IOException with BEGIN event due to lingering 'inTransaction' instance variable
[ https://issues.apache.org/jira/browse/NIFI-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883631#comment-16883631 ] Koji Kawamura commented on NIFI-6428: - Hi [~mattyb149], do you have any idea on how we can address this? My idea is resetting 'inTransaction' to false when the processor restarts consuming the binlog with a user-defined initial binlog position (instead of managed state). Also, add a description to note that specifying an initial binlog position in the middle of a transaction (between a BEGIN and a COMMIT) is not supported. What do you think? > CaptureChangeMySQL throws IOException with BEGIN event due to lingering > 'inTransaction' instance variable > - > > Key: NIFI-6428 > URL: https://issues.apache.org/jira/browse/NIFI-6428 > Project: Apache NiFi > Issue Type: Bug >Reporter: Koji Kawamura >Priority: Major > > Because the 'inTransaction' instance variable is not cleared when the processor > stops, when a user clears the processor state and restarts it with a specific > initial binlog position, the processor throws an IOException if a BEGIN event is received. > The processor should reset 'inTransaction' to false, along with other instance > variables, when it starts with an empty processor state. > The issue was reported on the NiFi user ML. > [https://mail-archives.apache.org/mod_mbox/nifi-users/201907.mbox/%3C2019070919385393266224%40geekplus.com.cn%3E] -- This message was sent by Atlassian JIRA (v7.6.14#76016)
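The proposed reset can be sketched in a few lines. This is an illustration of the idea, not the processor's actual code; the method names are made up.

```java
// Sketch of the proposed fix: when the processor starts without managed state
// (the user cleared state and supplied an initial binlog position), reset the
// lingering 'inTransaction' flag so a fresh BEGIN event is not mistaken for a
// nested transaction. Method names are illustrative, not the processor's API.
public class BinlogStateSketch {
    boolean inTransaction = false;

    void onStart(boolean hasManagedState) {
        if (!hasManagedState) {
            inTransaction = false; // clear state left over from the previous run
        }
    }

    void onBeginEvent() {
        if (inTransaction) {
            throw new IllegalStateException("BEGIN received while already in a transaction");
        }
        inTransaction = true;
    }
}
```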
[jira] [Updated] (NIFI-6419) AvroWriter single record with external schema results in data loss
[ https://issues.apache.org/jira/browse/NIFI-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6419: Resolution: Fixed Fix Version/s: 1.10.0 Status: Resolved (was: Patch Available) Thank you both [~pcgrenier] and [~turcsanyip] for reporting and fixing this issue! The PRs now have been merged to master. [~pcgrenier] I added 'Contributor' role to your Jira account so that you can assign yourself to NiFi JIRAs. > AvroWriter single record with external schema results in data loss > -- > > Key: NIFI-6419 > URL: https://issues.apache.org/jira/browse/NIFI-6419 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.9.0 >Reporter: Phillip Grenier >Assignee: Peter Turcsanyi >Priority: Major > Fix For: 1.10.0 > > Attachments: getSplitRecord.xml > > Time Spent: 2h 20m > Remaining Estimate: 0h > > When a split record tries to create single record splits with an external > schema, it does not use a record set and fails to write data. > I have attached a sample flow, if you run a simple flow file through with > first,last > fname,lname > another,name > you will see the resulting flow files have 0 bytes. > > Nifi 1.9.2 > java -version > openjdk version "1.8.0_201" > OpenJDK Runtime Environment (build 1.8.0_201-b09) > OpenJDK 64-Bit Server VM (build 25.201-b09, mixed mode) -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (NIFI-6271) ExecuteSQL incoming flowfile attributes not copied into output flowfiles when Output Batch Size is set
[ https://issues.apache.org/jira/browse/NIFI-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6271. - Resolution: Fixed Fix Version/s: 1.10.0 Thanks for your contribution [~hondawei]! I've added 'Contributor' role to your Jira account, so that you can assign yourself to NiFi Jira tickets. > ExecuteSQL incoming flowfile attributes not copied into output flowfiles when > Output Batch Size is set > -- > > Key: NIFI-6271 > URL: https://issues.apache.org/jira/browse/NIFI-6271 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Arnaud Rivero >Assignee: HondaWei >Priority: Major > Labels: easyfix, features, usability > Fix For: 1.10.0 > > Original Estimate: 0.5h > Time Spent: 2h > Remaining Estimate: 0h > > When using the executeSQL and executeSQLRecord processors, we can use input > flowfiles with a certain number of attributes. If we don't set the Output > Batch Size, all these attributes are copied to the output flowfile. However, > if we set it, only the flowfiles from the first batch will have the > attributes copied to. The flowfiles in the following batches will only have > the default attributes. > h2. 
Root cause > In the source code of the method _onTrigger_ in the class > _AbstractExecuteSQL,_ we have the following piece of code that is supposed to > create an output flowfile and copy the original attributes into it: > {code:java} > FlowFile resultSetFF; > if (fileToProcess == null) { > resultSetFF = session.create(); > } else { > resultSetFF = session.create(fileToProcess); > resultSetFF = session.putAllAttributes(resultSetFF, > fileToProcess.getAttributes()); > } > {code} > However, the fix for the issue NIFI-6040 introduced this snippet further down in > the same method: > > {code:java} > // If we've reached the batch size, send out the flow files > if (outputBatchSize > 0 && resultSetFlowFiles.size() >= outputBatchSize) { > session.transfer(resultSetFlowFiles, REL_SUCCESS); > // Need to remove the original input file if it exists > if (fileToProcess != null) { > session.remove(fileToProcess); > fileToProcess = null; > } > session.commit(); > resultSetFlowFiles.clear(); > } > {code} > As you can see, it sets the variable fileToProcess to null, preventing the > flowfiles in the following batches from copying its attributes. > > h2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
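One way to reconcile the two snippets quoted above is to copy the input FlowFile's attributes into a local map before the batch-commit path sets fileToProcess to null. The sketch below models that idea with plain maps; it is not the actual NiFi patch, and the method and variable names (other than fileToProcess) are made up.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch (not the actual NiFi patch) of the fix: capture the input FlowFile's
// attributes once, before the batch-commit path nulls fileToProcess, so the
// FlowFiles in every batch inherit them, not just the first.
public class BatchAttributeSketch {
    static List<Map<String, String>> runBatches(Map<String, String> inputAttributes, int batchCount) {
        Map<String, String> fileToProcess = inputAttributes;
        // Saved up front; survives fileToProcess being set to null below.
        final Map<String, String> inherited = new HashMap<>(fileToProcess);
        List<Map<String, String>> batches = new ArrayList<>();
        for (int i = 0; i < batchCount; i++) {
            batches.add(new HashMap<>(inherited)); // every batch gets the attributes
            fileToProcess = null; // the original input file is removed after the first batch
        }
        return batches;
    }
}
```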
[jira] [Assigned] (NIFI-6271) ExecuteSQL incoming flowfile attributes not copied into output flowfiles when Output Batch Size is set
[ https://issues.apache.org/jira/browse/NIFI-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6271: --- Assignee: HondaWei > ExecuteSQL incoming flowfile attributes not copied into output flowfiles when > Output Batch Size is set > -- > > Key: NIFI-6271 > URL: https://issues.apache.org/jira/browse/NIFI-6271 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Arnaud Rivero >Assignee: HondaWei >Priority: Major > Labels: easyfix, features, usability > Original Estimate: 0.5h > Time Spent: 2h > Remaining Estimate: 0h > > When using the executeSQL and executeSQLRecord processors, we can use input > flowfiles with a certain number of attributes. If we don't set the Output > Batch Size, all these attributes are copied to the output flowfile. However, > if we set it, only the flowfiles from the first batch will have the > attributes copied to. The flowfiles in the following batches will only have > the default attributes. > h2. 
Root cause > In the source code of the method _onTrigger_ in the class > _AbstractExecuteSQL,_ we have the following piece of code that is supposed to > create an output flowfile and copy the original attributes into it: > {code:java} > FlowFile resultSetFF; > if (fileToProcess == null) { > resultSetFF = session.create(); > } else { > resultSetFF = session.create(fileToProcess); > resultSetFF = session.putAllAttributes(resultSetFF, > fileToProcess.getAttributes()); > } > {code} > However the fix for the issue NIFI-6040 introduced this snippet way below in > the same method: > > {code:java} > // If we've reached the batch size, send out the flow files > if (outputBatchSize > 0 && resultSetFlowFiles.size() >= outputBatchSize) { > session.transfer(resultSetFlowFiles, REL_SUCCESS); > // Need to remove the original input file if it exists > if (fileToProcess != null) { > session.remove(fileToProcess); > fileToProcess = null; > } > session.commit(); > resultSetFlowFiles.clear(); > } > {code} > As you can see, it sets the variable fileToProcess to null, preventing the > flowfiles in the next batch to copy its attributes > > h2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6428) CaptureChangeMySQL throws IOException with BEGIN event due to lingering 'inTransaction' instance variable
Koji Kawamura created NIFI-6428: --- Summary: CaptureChangeMySQL throws IOException with BEGIN event due to lingering 'inTransaction' instance variable Key: NIFI-6428 URL: https://issues.apache.org/jira/browse/NIFI-6428 Project: Apache NiFi Issue Type: Bug Reporter: Koji Kawamura Because the 'inTransaction' instance variable is not cleared when the processor stops, when a user clears the processor state and restarts it with a specific initial binlog position, the processor throws an IOException if a BEGIN event is received. The processor should reset 'inTransaction' to false, along with other instance variables, when it starts with an empty processor state. The issue was reported on the NiFi user ML. [https://mail-archives.apache.org/mod_mbox/nifi-users/201907.mbox/%3C2019070919385393266224%40geekplus.com.cn%3E] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-6387) RetryFlowFile
[ https://issues.apache.org/jira/browse/NIFI-6387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6387. - Resolution: Fixed Fix Version/s: 1.10.0 Thanks [~tneeley] for your contribution! I've added the NiFi Jira project's 'Contributor' role to your account. You can now assign yourself to NiFi Jira issues. > RetryFlowFile > - > > Key: NIFI-6387 > URL: https://issues.apache.org/jira/browse/NIFI-6387 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Travis Neeley >Assignee: Travis Neeley >Priority: Minor > Fix For: 1.10.0 > > Original Estimate: 2h > Time Spent: 4h 10m > Remaining Estimate: 0h > > Processor takes a FlowFile. It will then look for a retry attribute on the > FlowFile. If none is present or if the designated retry attribute is set and > not a number, the retry attribute is set to "1" and passed to a retry > relationship. > A configurable PropertyDescriptor contains the maximum number of times the > FlowFile can be retried before being passed to a separate retries exceeded > relationship. This PropertyDescriptor should have a validator for allowable > positive integers for the retry. > Processor may also conditionally penalize the FlowFile on the retry > relationship. > > Many interactions with NiFi request some information from a service and retry > if the response isn't explicitly success. While it's quite common to use > UpdateAttribute followed by a RouteOnAttribute to do this, there could be > value in having a discrete processor for this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
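The routing rules described in the issue can be sketched in a few lines. The relationship names and the "relationship:count" return format below are illustrative; in the real processor the maximum and the attribute name are configurable properties.

```java
// Sketch of the described retry routing: a missing or non-numeric retry
// attribute resets the count to 1 and routes to 'retry'; once the configured
// maximum is reached, the FlowFile goes to a 'retries exceeded' relationship.
public class RetryRouteSketch {
    static final int MAX_RETRIES = 3; // a configurable PropertyDescriptor in the real processor

    // Returns "relationship:newCount" for easy inspection.
    static String route(String retryAttribute) {
        int count;
        try {
            count = Integer.parseInt(retryAttribute);
        } catch (NumberFormatException e) { // also thrown for a null attribute
            return "retry:1";
        }
        return count >= MAX_RETRIES ? "retries-exceeded:" + count : "retry:" + (count + 1);
    }
}
```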
[jira] [Assigned] (NIFI-6387) RetryFlowFile
[ https://issues.apache.org/jira/browse/NIFI-6387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6387: --- Assignee: Travis Neeley > RetryFlowFile > - > > Key: NIFI-6387 > URL: https://issues.apache.org/jira/browse/NIFI-6387 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Travis Neeley >Assignee: Travis Neeley >Priority: Minor > Original Estimate: 2h > Time Spent: 4h 10m > Remaining Estimate: 0h > > Processor takes a FlowFile. It will then look for a retry attribute on the > FlowFile. If none is present or if the designated retry attribute is set and > not a number, the retry attribute is set to "1" and passed to a retry > relationship. > A configurable PropertyDescriptor contains the maximum number of times the > FlowFile can be retried before being passed to a separate retries exceeded > relationship. This PropertyDescriptor should have a validator for allowable > positive integers for the retry. > Processor may also conditionally penalize the FlowFile on the retry > relationship. > > Many interactions with NiFi request some information from a service and retry > if the response isn't explicitly success. While it's quite common to use > UpdateAttribute followed by a RouteOnAttribute to do this, there could be > value in having a discrete processor for this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6413) nifi-prometheus-nar should use nifi-ssl-context-service-api
[ https://issues.apache.org/jira/browse/NIFI-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6413: Status: Patch Available (was: Open) > nifi-prometheus-nar should use nifi-ssl-context-service-api > --- > > Key: NIFI-6413 > URL: https://issues.apache.org/jira/browse/NIFI-6413 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.10.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Attachments: image-2019-07-02-10-33-57-946.png > > Time Spent: 10m > Remaining Estimate: 0h > > nifi-prometheus-reporting-task has a dependency to nifi-ssl-context-service. > Instead of depending on a concrete implementation, it should refer > nifi-ssl-context-service-api instead. > Because of this, SSLContextService implementations are bundled within > nifi-prometheus-nar. Following image shows two implementations, one from > nifi-ssl-context-service-nar, the other from nifi-prometheus-nar > !image-2019-07-02-10-33-57-946.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-6413) nifi-prometheus-nar should use nifi-ssl-context-service-api
[ https://issues.apache.org/jira/browse/NIFI-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6413: --- Assignee: Koji Kawamura > nifi-prometheus-nar should use nifi-ssl-context-service-api > --- > > Key: NIFI-6413 > URL: https://issues.apache.org/jira/browse/NIFI-6413 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.10.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Attachments: image-2019-07-02-10-33-57-946.png > > > nifi-prometheus-reporting-task has a dependency to nifi-ssl-context-service. > Instead of depending on a concrete implementation, it should refer > nifi-ssl-context-service-api instead. > Because of this, SSLContextService implementations are bundled within > nifi-prometheus-nar. Following image shows two implementations, one from > nifi-ssl-context-service-nar, the other from nifi-prometheus-nar > !image-2019-07-02-10-33-57-946.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-6174) ListenBeats should expose a Client Auth property for TLS/SSL
[ https://issues.apache.org/jira/browse/NIFI-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6174. - Resolution: Fixed Fix Version/s: 1.10.0 Thank you [~dima_k] for your contribution! The PR has been merged to master. Also, I've added NiFi Jira 'Contributor' role to your Jira account. You can assign yourself to any NiFi Jira issues now. > ListenBeats should expose a Client Auth property for TLS/SSL > > > Key: NIFI-6174 > URL: https://issues.apache.org/jira/browse/NIFI-6174 > Project: Apache NiFi > Issue Type: Bug >Reporter: Dima Kovalyov >Assignee: Dima Kovalyov >Priority: Major > Fix For: 1.10.0 > > Time Spent: 40m > Remaining Estimate: 0h > > ListenBeats currently hard codes the client auth to REQUIRED when creating an > RestrictedSSLContext: > {code:java} > sslContext = > sslContextService.createSSLContext(SSLContextService.ClientAuth.REQUIRED); > {code} > It should expose a Client Auth property like ListenTCP does and use that: > {code:java} > public static final PropertyDescriptor CLIENT_AUTH = new > PropertyDescriptor.Builder() > .name("Client Auth") > .description("The client authentication policy to use for the SSL > Context. Only used if an SSL Context Service is provided.") > .required(false) > .allowableValues(SSLContextService.ClientAuth.values()) > .defaultValue(SSLContextService.ClientAuth.REQUIRED.name()) > .build();{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-6174) ListenBeats should expose a Client Auth property for TLS/SSL
[ https://issues.apache.org/jira/browse/NIFI-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6174: --- Assignee: Dima Kovalyov > ListenBeats should expose a Client Auth property for TLS/SSL > > > Key: NIFI-6174 > URL: https://issues.apache.org/jira/browse/NIFI-6174 > Project: Apache NiFi > Issue Type: Bug >Reporter: Dima Kovalyov >Assignee: Dima Kovalyov >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > ListenBeats currently hard codes the client auth to REQUIRED when creating an > RestrictedSSLContext: > {code:java} > sslContext = > sslContextService.createSSLContext(SSLContextService.ClientAuth.REQUIRED); > {code} > It should expose a Client Auth property like ListenTCP does and use that: > {code:java} > public static final PropertyDescriptor CLIENT_AUTH = new > PropertyDescriptor.Builder() > .name("Client Auth") > .description("The client authentication policy to use for the SSL > Context. Only used if an SSL Context Service is provided.") > .required(false) > .allowableValues(SSLContextService.ClientAuth.values()) > .defaultValue(SSLContextService.ClientAuth.REQUIRED.name()) > .build();{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6413) nifi-prometheus-nar should use nifi-ssl-context-service-api
Koji Kawamura created NIFI-6413: --- Summary: nifi-prometheus-nar should use nifi-ssl-context-service-api Key: NIFI-6413 URL: https://issues.apache.org/jira/browse/NIFI-6413 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.10.0 Reporter: Koji Kawamura Attachments: image-2019-07-02-10-33-57-946.png nifi-prometheus-reporting-task has a dependency to nifi-ssl-context-service. Instead of depending on a concrete implementation, it should refer nifi-ssl-context-service-api instead. Because of this, SSLContextService implementations are bundled within nifi-prometheus-nar. Following image shows two implementations, one from nifi-ssl-context-service-nar, the other from nifi-prometheus-nar !image-2019-07-02-10-33-57-946.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-6004) PutFileProcessor parent directory creation should have configurable ownership and permissions
[ https://issues.apache.org/jira/browse/NIFI-6004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876035#comment-16876035 ] Koji Kawamura commented on NIFI-6004: - [~adyoun2] I just merged the PR to master. Also, I added NiFi project 'Contributor' role to your Jira account. You can assign yourself to NiFi Jira tickets. Thanks for your contribution! > PutFileProcessor parent directory creation should have configurable ownership > and permissions > - > > Key: NIFI-6004 > URL: https://issues.apache.org/jira/browse/NIFI-6004 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Adam >Assignee: Adam >Priority: Major > Fix For: 1.10.0 > > Time Spent: 50m > Remaining Estimate: 0h > > When PutFile creates parent directories, it currently uses the default user > umask, where it should allow the use of configuration. Either > # Assume the best match for the configured file permissions/owner is correct. > # Allow additional configuration to specify these properties differently for > parent paths. > I prefer option 1, but not by so much I would argue hard. I also have local > changes for it I can create a pull request for. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-6004) PutFileProcessor parent directory creation should have configurable ownership and permissions
[ https://issues.apache.org/jira/browse/NIFI-6004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura resolved NIFI-6004. - Resolution: Fixed Fix Version/s: 1.10.0 > PutFileProcessor parent directory creation should have configurable ownership > and permissions > - > > Key: NIFI-6004 > URL: https://issues.apache.org/jira/browse/NIFI-6004 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Adam >Assignee: Adam >Priority: Major > Fix For: 1.10.0 > > Time Spent: 50m > Remaining Estimate: 0h > > When PutFile creates parent directories, it currently uses the default user > umask, where it should allow the use of configuration. Either > # Assume the best match for the configured file permissions/owner is correct. > # Allow additional configuration to specify these properties differently for > parent paths. > I prefer option 1, but not by so much I would argue hard. I also have local > changes for it I can create a pull request for. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-6004) PutFileProcessor parent directory creation should have configurable ownership and permissions
[ https://issues.apache.org/jira/browse/NIFI-6004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6004: --- Assignee: Adam > PutFileProcessor parent directory creation should have configurable ownership > and permissions > - > > Key: NIFI-6004 > URL: https://issues.apache.org/jira/browse/NIFI-6004 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Adam >Assignee: Adam >Priority: Major > Time Spent: 50m > Remaining Estimate: 0h > > When PutFile creates parent directories, it currently uses the default user > umask, where it should allow the use of configuration. Either > # Assume the best match for the configured file permissions/owner is correct. > # Allow additional configuration to specify these properties differently for > parent paths. > I prefer option 1, but not by so much I would argue hard. I also have local > changes for it I can create a pull request for. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6385) Wait processor ignores all FlowFiles except for the first one in the queue.
[ https://issues.apache.org/jira/browse/NIFI-6385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-6385: Status: Patch Available (was: Open) > Wait processor ignores all FlowFiles except for the first one in the queue. > --- > > Key: NIFI-6385 > URL: https://issues.apache.org/jira/browse/NIFI-6385 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Payne >Assignee: Koji Kawamura >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > If the Wait processor has two FlowFiles queued up, and the Notify processor > then notifies that the second FlowFile is ready to be released, the Wait > processor ignores this until the first FlowFile is released. At that point, > it will release both of them. When the second FlowFile is ready to be > released, the processor should immediately release it, regardless of whether > or not it's the first in the queue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-6385) Wait processor ignores all FlowFiles except for the first one in the queue.
[ https://issues.apache.org/jira/browse/NIFI-6385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-6385: --- Assignee: Koji Kawamura > Wait processor ignores all FlowFiles except for the first one in the queue. > --- > > Key: NIFI-6385 > URL: https://issues.apache.org/jira/browse/NIFI-6385 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Payne >Assignee: Koji Kawamura >Priority: Major > > If the Wait processor has two FlowFiles queued up, and the Notify processor > then notifies that the second FlowFile is ready to be released, the Wait > processor ignores this until the first FlowFile is released. At that point, > it will release both of them. When the second FlowFile is ready to be > released, the processor should immediately release it, regardless of whether > or not it's the first in the queue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-6362) Upgrade puppycrawl checkstyle lib
[ https://issues.apache.org/jira/browse/NIFI-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura resolved NIFI-6362.
    Resolution: Fixed
    Fix Version/s: 1.10.0

> Upgrade puppycrawl checkstyle lib
>
>                 Key: NIFI-6362
>                 URL: https://issues.apache.org/jira/browse/NIFI-6362
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Security
>            Reporter: Nathan Gough
>            Assignee: Nathan Gough
>            Priority: Minor
>              Labels: checkstyle
>             Fix For: 1.10.0
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Upgrade com.puppycrawl.tools:checkstyle version to 8.18.
[jira] [Resolved] (NIFI-6243) Add AtomicDistributedMapCacheClient Support to HBase 2.x Client
[ https://issues.apache.org/jira/browse/NIFI-6243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura resolved NIFI-6243.
    Resolution: Fixed
    Fix Version/s: 1.10.0

I've added the NiFi JIRA 'Contributor' role to your account. You can now assign yourself to NiFi JIRAs. Thanks [~Absolutesantaja] for your contribution!

> Add AtomicDistributedMapCacheClient Support to HBase 2.x Client
>
>                 Key: NIFI-6243
>                 URL: https://issues.apache.org/jira/browse/NIFI-6243
>             Project: Apache NiFi
>          Issue Type: Improvement
>            Reporter: Shawn Weeks
>            Assignee: Shawn Weeks
>            Priority: Major
>             Fix For: 1.10.0
>          Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> It looks like you should be able to support the Atomic Replace requirements
> with checkAndMutate in the HBase Client.
[jira] [Updated] (NIFI-6243) Add AtomicDistributedMapCacheClient Support to HBase 2.x Client
[ https://issues.apache.org/jira/browse/NIFI-6243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura updated NIFI-6243:
    Priority: Major  (was: Minor)

> Add AtomicDistributedMapCacheClient Support to HBase 2.x Client
>
>                 Key: NIFI-6243
>                 URL: https://issues.apache.org/jira/browse/NIFI-6243
>             Project: Apache NiFi
>          Issue Type: Improvement
>            Reporter: Shawn Weeks
>            Priority: Major
>          Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> It looks like you should be able to support the Atomic Replace requirements
> with checkAndMutate in the HBase Client.
[jira] [Updated] (NIFI-6370) Allow PutDatabaseRecord to execute multiple SQL statements if using SQL field
[ https://issues.apache.org/jira/browse/NIFI-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura updated NIFI-6370:
    Resolution: Fixed
    Fix Version/s: 1.10.0
    Status: Resolved  (was: Patch Available)

> Allow PutDatabaseRecord to execute multiple SQL statements if using SQL field
>
>                 Key: NIFI-6370
>                 URL: https://issues.apache.org/jira/browse/NIFI-6370
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Extensions
>            Reporter: Matt Burgess
>            Assignee: Matt Burgess
>            Priority: Major
>             Fix For: 1.10.0
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> PutDatabaseRecord can be configured to point at a field in a record that
> contains a SQL statement, and execute that statement rather than an
> INSERT/UPDATE/DELETE statement generated from the fields in the record. It
> would be helpful if the SQL field could contain multiple SQL statements
> (delimited by semicolons) and have each of the statements executed, rolling
> back the whole thing if any errors occur.
> To maintain current behavior, I propose adding a boolean property, "Allow
> Multiple SQL Statements", that is only used when the statement type is SQL.
> This is also necessary because we can't fully parse the SQL statements;
> instead we can just split the SQL field by semicolons (like PutHiveQL does).
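The approach proposed above can be sketched in plain JDBC. This is an illustrative sketch, not PutDatabaseRecord's actual implementation: the `SqlBatch` class and its method names are hypothetical, but the technique is the one described, a naive semicolon split (no SQL parsing) followed by execution of every statement in a single transaction that rolls back if any statement fails.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper illustrating the NIFI-6370 proposal: split the SQL
// field on semicolons, then run all statements atomically.
public class SqlBatch {

    // Split a SQL field on semicolons, trimming whitespace and dropping
    // empty fragments. Like PutHiveQL, this is a simple split, not a
    // parser: it does not handle semicolons inside string literals.
    public static List<String> split(String sqlField) {
        List<String> statements = new ArrayList<>();
        for (String part : sqlField.split(";")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                statements.add(trimmed);
            }
        }
        return statements;
    }

    // Execute all statements in one transaction: commit only if every
    // statement succeeds, otherwise roll back the whole batch.
    public static void executeAll(Connection conn, String sqlField) throws SQLException {
        boolean previousAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            for (String sql : split(sqlField)) {
                stmt.execute(sql);
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback(); // undo everything executed so far
            throw e;
        } finally {
            conn.setAutoCommit(previousAutoCommit);
        }
    }
}
```

The split-only behavior is easy to verify in isolation; the transactional wrapper requires a live `Connection` from whatever database the flow targets.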
[jira] [Resolved] (NIFI-6361) PutFile doesn't check the restriction of the max files properly when replace strategy is activated
[ https://issues.apache.org/jira/browse/NIFI-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura resolved NIFI-6361.
    Resolution: Fixed
    Fix Version/s: 1.10.0

I've added the NiFi project 'Contributor' role to your JIRA account so that you can assign yourself to NiFi JIRAs. Thanks again [~andres.garagiola]!

> PutFile doesn't check the restriction of the max files properly when replace
> strategy is activated
>
>                 Key: NIFI-6361
>                 URL: https://issues.apache.org/jira/browse/NIFI-6361
>             Project: Apache NiFi
>          Issue Type: Bug
>            Reporter: Andres Garagiola
>            Assignee: Andres Garagiola
>            Priority: Minor
>             Fix For: 1.10.0
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> In the PutFile processor, when the conflict resolution strategy is set to
> replace, the maximum-files check behaves incorrectly. If the folder has a
> limit of X files and already contains X files, overwriting one of them
> fails because the processor counts the incoming file against the limit,
> even though replacing an existing file does not increase the total number
> of files in the folder.
[jira] [Assigned] (NIFI-6361) PutFile doesn't check the restriction of the max files properly when replace strategy is activated
[ https://issues.apache.org/jira/browse/NIFI-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura reassigned NIFI-6361:
    Assignee: Andres Garagiola

> PutFile doesn't check the restriction of the max files properly when replace
> strategy is activated
>
>                 Key: NIFI-6361
>                 URL: https://issues.apache.org/jira/browse/NIFI-6361
>             Project: Apache NiFi
>          Issue Type: Bug
>            Reporter: Andres Garagiola
>            Assignee: Andres Garagiola
>            Priority: Minor
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> In the PutFile processor, when the conflict resolution strategy is set to
> replace, the maximum-files check behaves incorrectly. If the folder has a
> limit of X files and already contains X files, overwriting one of them
> fails because the processor counts the incoming file against the limit,
> even though replacing an existing file does not increase the total number
> of files in the folder.
[jira] [Resolved] (NIFI-6349) MergeRecord does not merge fragmented files correctly
[ https://issues.apache.org/jira/browse/NIFI-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura resolved NIFI-6349.
    Resolution: Fixed
    Fix Version/s: 1.10.0

I've added the NiFi project 'Contributor' role to your JIRA account so that you can assign yourself to NiFi JIRAs. Thanks again [~evanthx]!

> MergeRecord does not merge fragmented files correctly
>
>                 Key: NIFI-6349
>                 URL: https://issues.apache.org/jira/browse/NIFI-6349
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.9.2
>            Reporter: Evan Reynolds
>            Assignee: Evan Reynolds
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.10.0
>         Attachments: Merge_Test_Flow.xml
>   Original Estimate: 24h
>          Time Spent: 1.5h
>  Remaining Estimate: 22.5h
>
> When MergeRecord tries to merge fragments it closes the bin after a single
> run. This means that fragments that arrive seconds apart will not be merged.
> To replicate this, import the attached flow template and run the
> GenerateFlowFile processor. It will generate 2000 flow files. MergeRecord
> should merge them back together, but a number of them will fail.
> To replicate this in a unit test, go to the testDefragment method and change
> "runner.run(1)" to "runner.run(2)". It will fail.
> In the code, the bin uses the minimum number of records, which defaults to
> 1, to decide whether it is full. Once a single fragment comes in, the bin
> reports full and is marked complete.
[jira] [Assigned] (NIFI-6349) MergeRecord does not merge fragmented files correctly
[ https://issues.apache.org/jira/browse/NIFI-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura reassigned NIFI-6349:
    Assignee: Evan Reynolds

> MergeRecord does not merge fragmented files correctly
>
>                 Key: NIFI-6349
>                 URL: https://issues.apache.org/jira/browse/NIFI-6349
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.9.2
>            Reporter: Evan Reynolds
>            Assignee: Evan Reynolds
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: Merge_Test_Flow.xml
>   Original Estimate: 24h
>          Time Spent: 1.5h
>  Remaining Estimate: 22.5h
>
> When MergeRecord tries to merge fragments it closes the bin after a single
> run. This means that fragments that arrive seconds apart will not be merged.
> To replicate this, import the attached flow template and run the
> GenerateFlowFile processor. It will generate 2000 flow files. MergeRecord
> should merge them back together, but a number of them will fail.
> To replicate this in a unit test, go to the testDefragment method and change
> "runner.run(1)" to "runner.run(2)". It will fail.
> In the code, the bin uses the minimum number of records, which defaults to
> 1, to decide whether it is full. Once a single fragment comes in, the bin
> reports full and is marked complete.
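The bin-completion logic described in NIFI-6349 can be sketched in a few lines. This is a minimal illustration, not NiFi's actual bin implementation; the `FragmentBin` class and its fields are hypothetical. It shows the intended invariant for defragment mode: a bin is complete only when every expected fragment (per the fragment.count attribute) has arrived, never merely because the minimum record count (default 1) has been met.

```java
// Hypothetical sketch of the defragment-mode bin check: completion is
// driven by the expected fragment count, not the minimum record count.
public class FragmentBin {
    private final int expectedFragments; // from the fragment.count attribute
    private int received = 0;

    public FragmentBin(int expectedFragments) {
        this.expectedFragments = expectedFragments;
    }

    // Record the arrival of one fragment FlowFile.
    public void add() {
        received++;
    }

    // A bin holding a single fragment of a larger group must stay open
    // across processor runs until its siblings arrive.
    public boolean isComplete() {
        return received >= expectedFragments;
    }
}
```

With this check, the second fragment arriving on a later run (`runner.run(2)` in the unit-test scenario above) still lands in the same open bin instead of a prematurely completed one.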
[jira] [Resolved] (NIFI-6319) Improve docs around Site-to-Site changes (URLs, batch settings, remote ports)
[ https://issues.apache.org/jira/browse/NIFI-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura resolved NIFI-6319.
    Resolution: Fixed
    Fix Version/s: 1.10.0

> Improve docs around Site-to-Site changes (URLs, batch settings, remote ports)
>
>                 Key: NIFI-6319
>                 URL: https://issues.apache.org/jira/browse/NIFI-6319
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Documentation Website
>            Reporter: Andrew Lim
>            Assignee: Andrew Lim
>            Priority: Minor
>             Fix For: 1.10.0
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> I noticed that there isn't any documentation for the ability to reference
> multiple URLs in an RPG and RPG port batch settings. New screenshots are
> needed and current screenshots need to be updated.
[jira] [Updated] (NIFI-6218) Support setting transactional.id in PublishKafka/PublishKafkaRecord
[ https://issues.apache.org/jira/browse/NIFI-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura updated NIFI-6218:
    Resolution: Fixed
    Fix Version/s: 1.10.0
    Status: Resolved  (was: Patch Available)

> Support setting transactional.id in PublishKafka/PublishKafkaRecord
>
>                 Key: NIFI-6218
>                 URL: https://issues.apache.org/jira/browse/NIFI-6218
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>    Affects Versions: 1.9.0
>            Reporter: Ferenc Szabo
>            Assignee: Ferenc Szabo
>            Priority: Major
>             Fix For: 1.10.0
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Add support for specifying *transactional.id* within
> PublishKafka/PublishKafkaRecord. Currently a random UUID is generated each
> time the processor is scheduled.
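The configuration this feature exposes can be sketched with plain Kafka producer properties. This is an illustrative sketch, not the processor's code: the `TxnIdConfig` class, the broker address, and the example id value are hypothetical, while `transactional.id` and `enable.idempotence` are real Kafka producer configuration keys. A stable, user-specified `transactional.id` (instead of a random UUID regenerated on each scheduling) is what lets a producer resume and fence its transactions across restarts.

```java
import java.util.Properties;

// Hypothetical sketch of the producer configuration NIFI-6218 enables:
// a caller-supplied, stable transactional.id.
public class TxnIdConfig {
    public static Properties producerProps(String transactionalId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // example broker
        // Real Kafka config key; a fixed value identifies the same logical
        // producer across restarts so in-flight transactions can be fenced.
        props.put("transactional.id", transactionalId);
        // Idempotence is required when transactions are used.
        props.put("enable.idempotence", "true");
        return props;
    }
}
```

In the processor, the id value would come from a new processor property (with the random-UUID behavior kept as the default when it is unset).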
[jira] [Updated] (NIFI-6315) Track remote input/output ports at any level when creating versioned flows
[ https://issues.apache.org/jira/browse/NIFI-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura updated NIFI-6315:
    Resolution: Fixed
    Status: Resolved  (was: Patch Available)

> Track remote input/output ports at any level when creating versioned flows
>
>                 Key: NIFI-6315
>                 URL: https://issues.apache.org/jira/browse/NIFI-6315
>             Project: Apache NiFi
>          Issue Type: Improvement
>            Reporter: Bryan Bende
>            Assignee: Bryan Bende
>            Priority: Major
>             Fix For: 1.10.0
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> NIFI-2933 added the ability to create remote input/output ports at any
> level. This information needs to be tracked when creating versioned flows
> that are saved to registry, otherwise it will be lost on import.
> The registry 0.4.0 release added a boolean to VersionedPort for this
> purpose:
> [https://github.com/apache/nifi-registry/blob/master/nifi-registry-core/nifi-registry-data-model/src/main/java/org/apache/nifi/registry/flow/VersionedPort.java#L26]
> NiFi's master branch needs to be updated to the 0.4.0 client, which is part
> of NIFI-6311:
> [https://github.com/apache/nifi/pull/3485]
> CC: [~ijokarumawak] [~markap14]