[ https://issues.apache.org/jira/browse/ARTEMIS-4325?focusedWorklogId=868783&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-868783 ]

ASF GitHub Bot logged work on ARTEMIS-4325:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 30/Jun/23 20:07
            Start Date: 30/Jun/23 20:07
    Worklog Time Spent: 10m 
      Work Description: jbertram commented on PR #4522:
URL: 
https://github.com/apache/activemq-artemis/pull/4522#issuecomment-1615143921

   Isn't the use-case in view here live-to-live failback after failover 
(implemented recently via 
[ARTEMIS-4251](https://issues.apache.org/jira/browse/ARTEMIS-4251))? I'd leave 
live/backup pairs out of this completely since there are already semantics in 
place for that use-case.
   
   > would this work if the nodeID of the broker changes (say the broker 
cluster is part of a blue/green deploy or similar, such that FQDN/IP remains 
the same but the underlying broker is replaced by one started on a new journal)?
   
   The `org.apache.activemq.artemis.api.core.client.TopologyMember` has the 
node ID as well as the `TransportConfiguration`, which you should be able to 
use to identify host details such as the host name and/or IP address. There 
are also the `isMember` methods, which may be useful in this use-case.
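
   The real `TopologyMember` and `TransportConfiguration` types live in 
artemis-core-client; as a rough sketch of how a client-side failback decision 
could use the host details they expose, the snippet below models them with 
minimal local stand-ins (the `Connector`/`Member` classes and the 
`shouldFailBack` helper are hypothetical illustrations, not Artemis API). It 
matches on the connector's host rather than the node ID, which is what the 
blue/green question above is getting at: a replacement broker started on a 
fresh journal gets a new node ID but keeps the same FQDN/IP.

```java
import java.util.HashMap;
import java.util.Map;

public class FailbackSketch {

    // Stand-in for TransportConfiguration: just the host/port connector params.
    static final class Connector {
        final Map<String, Object> params = new HashMap<>();
        Connector(String host, int port) {
            params.put("host", host);
            params.put("port", port);
        }
    }

    // Stand-in for TopologyMember: a node ID plus its primary connector.
    static final class Member {
        final String nodeId;
        final Connector primary;
        Member(String nodeId, Connector primary) {
            this.nodeId = nodeId;
            this.primary = primary;
        }
    }

    // Decide whether a nodeUP topology event is the preferred broker coming
    // back. Matching on host (not node ID) keeps failback working when the
    // broker was replaced on a new journal and therefore has a new node ID.
    static boolean shouldFailBack(Member up, String preferredHost, String currentNodeId) {
        if (up.nodeId.equals(currentNodeId)) {
            return false; // already connected to this node, nothing to do
        }
        return preferredHost.equals(up.primary.params.get("host"));
    }

    public static void main(String[] args) {
        // Replacement broker: new node ID, same host -> fail back.
        Member replaced = new Member("fresh-node-id",
            new Connector("broker1.example.com", 61616));
        System.out.println(shouldFailBack(replaced, "broker1.example.com", "other-node-id")); // prints true

        // A different broker in the cluster -> stay put.
        Member other = new Member("n2", new Connector("broker2.example.com", 61616));
        System.out.println(shouldFailBack(other, "broker1.example.com", "other-node-id")); // prints false
    }
}
```

   In the real client you would hang this decision off a 
`ClusterTopologyListener` registered on the `ServerLocator`, reading the host 
from the `TransportConfiguration` params of the member that came up.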




Issue Time Tracking
-------------------

    Worklog Id:     (was: 868783)
    Time Spent: 50m  (was: 40m)

> Ability for core client to failback after failover
> --------------------------------------------------
>
>                 Key: ARTEMIS-4325
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-4325
>             Project: ActiveMQ Artemis
>          Issue Type: New Feature
>            Reporter: Anton Roskvist
>            Priority: Major
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> This would be similar to the "priorityBackup" functionality in ActiveMQ 
> "Classic."
> The primary use case for this is to more easily maintain a good distribution 
> of consumers and producers across a broker cluster over time.
> The intended behavior for my own purposes would be something like:
> * Ensure an even distribution across the broker cluster when first connecting 
> a high throughput client.
> * When a broker becomes unavailable (network outage, patch, crash, whatever), 
> move affected client workers to another broker in the cluster to maintain 
> throughput.
> * When the original broker comes back, move the recently failed-over 
> resources to the original broker again.
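
For reference, the ActiveMQ "Classic" behavior the description compares against 
is driven by the failover transport's `priorityBackup` option: the client 
prefers the first broker in the list and automatically fails back to it when it 
returns. Broker hosts below are placeholders:

```
failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=false&priorityBackup=true
```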



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
