[ https://issues.apache.org/jira/browse/NIFI-14125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17909294#comment-17909294 ]

Peter Kimberley commented on NIFI-14125:
----------------------------------------

After examining the logs further, I suspect the Fabric8 Kubernetes Client is 
returning HTTP status 409 (Conflict) because all nodes attempt to update the 
state _ConfigMap_ concurrently.
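For context, the Kubernetes API server enforces optimistic concurrency on ConfigMap writes: each update must carry the `resourceVersion` obtained from an earlier read, and a write with a stale version is rejected with 409 Conflict. A minimal simulation of that compare-and-swap behaviour (the `ConfigMapStore` class and its methods are illustrative stand-ins, not the Fabric8 API):

```python
# Simulates the API server's optimistic-concurrency check on a ConfigMap.
# All names here (ConfigMapStore, Conflict) are illustrative only.

class Conflict(Exception):
    """Stands in for an HTTP 409 response."""

class ConfigMapStore:
    def __init__(self, data):
        self.data = dict(data)
        self.resource_version = 1

    def get(self):
        # A client reads the data together with the current resourceVersion.
        return dict(self.data), self.resource_version

    def update(self, new_data, resource_version):
        # The write is accepted only if it carries the latest resourceVersion.
        if resource_version != self.resource_version:
            raise Conflict(f"409: stale resourceVersion {resource_version}")
        self.data = dict(new_data)
        self.resource_version += 1

store = ConfigMapStore({"state": "old"})

# Three nodes read the same version, then all try to clear the state.
reads = [store.get() for _ in range(3)]
results = []
for data, version in reads:
    try:
        store.update({}, version)
        results.append("200")
    except Conflict:
        results.append("409")

print(results)  # ['200', '409', '409'] -- first writer wins, the rest see 409
```

This matches the pattern in the logs below: one node's request succeeds while the other two report 409.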

Perhaps only the coordinating node should be responsible for performing the 
update when cluster-scoped state is involved?
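Alternatively (or in addition), the client side could retry on conflict by re-reading the ConfigMap and re-applying the change, which is the usual remedy for 409s from the API server. A hedged sketch of such a loop, using hypothetical callables rather than the actual Fabric8 API:

```python
# Sketch of a read-modify-write loop that retries on 409, the standard
# client-side remedy for optimistic-concurrency conflicts. The callables
# passed in (get_config_map, replace_config_map) are hypothetical stand-ins,
# not the Fabric8 Kubernetes Client API.

class Conflict(Exception):
    """Stands in for an HTTP 409 response."""

def clear_state_with_retry(get_config_map, replace_config_map, max_attempts=5):
    for attempt in range(max_attempts):
        data, version = get_config_map()     # re-read the latest resourceVersion
        try:
            replace_config_map({}, version)  # attempt the clear
            return attempt + 1               # number of attempts it took
        except Conflict:
            continue                         # another writer won; read and retry
    raise Conflict(f"still conflicting after {max_attempts} attempts")

# Demo against a fake store where two concurrent writers win first.
class FakeStore:
    def __init__(self):
        self.version = 1
        self.conflicts_left = 2

    def get(self):
        return {"state": "old"}, self.version

    def replace(self, data, version):
        if self.conflicts_left > 0:
            self.conflicts_left -= 1
            self.version += 1                # another writer bumped the version
            raise Conflict("409")
        self.version += 1

store = FakeStore()
print(clear_state_with_retry(store.get, store.replace))  # 3
```

With either fix (coordinator-only writes or retry-on-conflict), the 409s would no longer be surfaced as node failures, so the cluster would not request reconnection.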

Relevant log entries follow. The coordinating node in this case is _test-nifi-1_.

{{*test-nifi-0 (nifi-request.log):*}}
{{10.42.115.29 - test.user [02/Jan/2025:11:49:59 +0000] "POST 
/nifi-api/processors/1b5f4233-0194-1000-0000-00003610e403/state/clear-requests 
HTTP/2.0" *409* 149 "https://nifi.example.com/nifi/" "Apache NiFi/2.1.0"}}

{{*test-nifi-1 (nifi-app.log):*}}
{{2025-01-02 22:49:59,291 WARN [Replicate Request Thread-176] 
o.a.n.c.c.node.NodeClusterCoordinator The following nodes failed to process URI 
/nifi-api/processors/1b5f4233-0194-1000-0000-00003610e403/state/clear-requests 
'[test-nifi-0.test-nifi.nifi:8443, test-nifi-2.test-nifi.nifi:8443]'.  
Requesting each node reconnect to cluster.}}

{{*test-nifi-2 (nifi-request.log):*}}
{{10.42.115.29 - test.user [02/Jan/2025:11:49:59 +0000] "POST 
/nifi-api/processors/1b5f4233-0194-1000-0000-00003610e403/state/clear-requests 
HTTP/2.0" *409* 149 "https://nifi.example.com/nifi/" "Apache NiFi/2.1.0"}}

> Clearing state with Kubernetes state provider causes cluster nodes to 
> reconnect
> -------------------------------------------------------------------------------
>
>                 Key: NIFI-14125
>                 URL: https://issues.apache.org/jira/browse/NIFI-14125
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 2.1.0
>         Environment: Kubernetes
>            Reporter: Peter Kimberley
>            Priority: Major
>         Attachments: image-2025-01-01-19-35-01-388.png
>
>
> When using the Kubernetes state provider implementation, clearing a 
> component's state via the UI causes the following error to be logged by 
> NiFi:
> {{2025-01-01 19:10:30,773 WARN [Replicate Request Thread-260] 
> o.a.n.c.c.node.NodeClusterCoordinator The following nodes failed to process 
> URI 
> /nifi-api/processors/1b5f4233-0194-1000-0000-00003610e403/state/clear-requests
>  '[test-nifi-2.test-nifi.nifi:8443, test-nifi-1.test-nifi.nifi:8443]'. 
> Requesting each node reconnect to cluster.}}
> Following this, the coordinating node is re-elected and all nodes reconnect, 
> as indicated by error bulletins in the UI:
> !image-2025-01-01-19-35-01-388.png!
> This issue is not present when using the Zookeeper state provider.
> Inspecting the browser's API request to the _clear-requests_ endpoint shows 
> the request reported as a success (status code 200).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)