[jira] [Created] (ARTEMIS-4285) Disable Redelivery Persistence for new broker installations.

2023-05-18 Thread Clebert Suconic (Jira)
Clebert Suconic created ARTEMIS-4285:


 Summary: Disable Redelivery Persistence for new broker 
installations.
 Key: ARTEMIS-4285
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4285
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Clebert Suconic


We should allow disabling persisted redelivery on messages.
Every time a message is redelivered and scheduled redelivery is used, an update 
record is stored in the broker.
This may add a big burden on the journal or JDBC journal.

We will keep this as true by default (in Java); however, the new broker.xml 
configuration will have this as false, so we keep current users' expectations 
while setting it to false for any new broker installation.
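For illustration only, a broker.xml fragment of the kind this change implies. The ticket does not name the actual configuration element, so {{persist-redelivery}} below is a placeholder (both name and placement are assumptions, not the real setting):

{code:xml}
<!-- hypothetical fragment: disables persisting redelivery/scheduled-redelivery
     updates to the journal for new broker installations -->
<persist-redelivery>false</persist-redelivery>
{code}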

This is borderline between a bug and an improvement.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723995#comment-17723995
 ] 

Justin Bertram edited comment on ARTEMIS-4276 at 5/18/23 6:28 PM:
--

bq. I see it not as a weakness but rather as an incomplete solution...

Fair enough. I'm not looking to debate semantics here. :slightly_smiling_face:

bq. I think it is still better to have a local cache than nothing.

Assuming the cache is simple to implement and doesn't incur a meaningful 
runtime cost (e.g. in CPU or memory), then I would agree. It is better than 
nothing.

bq. There are some third parties that have this out of the box. For instance, I 
have seen that Kafka has idempotent consumers.

The idempotency that Kafka may provide is not what I'm talking about in this 
context. There are definitely measures that client libraries and brokers can 
take to help make consuming and producing messages idempotent (see [duplicate 
detection|https://activemq.apache.org/components/artemis/documentation/latest/duplicate-detection.html]
 for one example related to idempotent producers). However, those measures only 
apply to the actual _messaging_ operations. Once you add another kind of 
resource like a database or even another message broker there's nothing that 
the client library can do to make the consumer idempotent _overall_. As noted, 
the application developer must implement this kind of idempotency. Technologies 
like XA were invented to deal with this kind of use case. It's worth noting that 
Kafka does not, in fact, support XA.
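For concreteness, the duplicate-detection feature linked above is driven by a message property: the producer sets {{_AMQ_DUPL_ID}} to a unique value and the broker drops any later message carrying the same value. A minimal sketch with the Artemis JMS client (the broker URL and queue name are assumptions):

{code:java}
import java.util.UUID;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class DuplicateDetectionExample {
   public static void main(String[] args) throws Exception {
      ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed URL
      Connection connection = cf.createConnection();
      try {
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageProducer producer = session.createProducer(session.createQueue("orders")); // assumed queue

         TextMessage message = session.createTextMessage("order payload");
         // The broker's duplicate-detection cache is keyed on this property;
         // re-sending a message with the same value is silently ignored.
         message.setStringProperty("_AMQ_DUPL_ID", UUID.randomUUID().toString());
         producer.send(message);
      } finally {
         connection.close();
      }
   }
}
{code}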


was (Author: jbertram):
bq. I see it not as a weakness but rather as an incomplete solution...

Fair enough. I'm not looking to debate semantics here. :slightly_smiling_face:

bq. I think it is still better to have a local cache than nothing.

Assuming the cache is simple to implement and doesn't incur a meaningful 
runtime cost (e.g. in CPU or memory), then I would agree. It is better than 
nothing.

bq. There are some third parties that have this out of the box. For instance, I 
have seen that Kafka has idempotent consumers.

The idempotency that Kafka may provide is not what I'm talking about in this 
context. There are definitely measures that client libraries can take to help 
make consuming and producing messages idempotent. However, those measures only 
apply to the actual _messaging_ operations. Once you add another kind of 
resource like a database or even another message broker there's nothing that 
the client library can do to make the consumer idempotent _overall_. As noted, 
the application developer must implement this kind of idempotency. Technologies 
like XA were invented to deal with this kind of use case. It's worth noting that 
Kafka does not, in fact, support XA.

> Message Group does not replicate properly during failover
> -
>
> Key: ARTEMIS-4276
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4276
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> We are currently migrating our software from Classic to Artemis and we plan 
> to use failover functionality.
> We were using the message group functionality by setting *JMSXGroupID* and 
> this was working as expected. However, after a failover switch I noticed that 
> messages are sent to the wrong consumers.
> Our gateway/interface application is actually a collection of servers:
>  * gateway adapter server: receives messages from external systems and puts 
> them on a specific/virtual topic
>  * gateway loader server (can be balanced): picks up the messages from the 
> topic and does the processing
>  * gateway fail queue: monitors all messages that failed processing and 
> provides functionality for resubmitting the message (users will correct the 
> processing errors and then resubmit the transaction)
> *JMSXGroupID* is used to ensure that during a message resubmit the message is 
> processed by the same consumer/loader that originally processed it.
> However, if a message resubmit happens during a failover switch, we have 
> noticed that the message is not sent to the right consumer as it should be. 
> Basically, the first available consumer is used, which is not what we want.
> I have searched for configuration changes but couldn't find any relevant 
> information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723995#comment-17723995
 ] 

Justin Bertram commented on ARTEMIS-4276:
-

bq. I see it not as a weakness but rather as an incomplete solution...

Fair enough. I'm not looking to debate semantics here. :slightly_smiling_face:

bq. I think it is still better to have a local cache than nothing.

Assuming the cache is simple to implement and doesn't incur a meaningful 
runtime cost (e.g. in CPU or memory), then I would agree. It is better than 
nothing.

bq. There are some third parties that have this out of the box. For instance, I 
have seen that Kafka has idempotent consumers.

The idempotency that Kafka may provide is not what I'm talking about in this 
context. There are definitely measures that client libraries can take to help 
make consuming and producing messages idempotent. However, those measures only 
apply to the actual _messaging_ operations. Once you add another kind of 
resource like a database or even another message broker there's nothing that 
the client library can do to make the consumer idempotent _overall_. As noted, 
the application developer must implement this kind of idempotency. Technologies 
like XA were invented to deal with this kind of use case. It's worth noting that 
Kafka does not, in fact, support XA.

> Message Group does not replicate properly during failover
> -
>
> Key: ARTEMIS-4276
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4276
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> We are currently migrating our software from Classic to Artemis and we plan 
> to use failover functionality.
> We were using the message group functionality by setting *JMSXGroupID* and 
> this was working as expected. However, after a failover switch I noticed that 
> messages are sent to the wrong consumers.
> Our gateway/interface application is actually a collection of servers:
>  * gateway adapter server: receives messages from external systems and puts 
> them on a specific/virtual topic
>  * gateway loader server (can be balanced): picks up the messages from the 
> topic and does the processing
>  * gateway fail queue: monitors all messages that failed processing and 
> provides functionality for resubmitting the message (users will correct the 
> processing errors and then resubmit the transaction)
> *JMSXGroupID* is used to ensure that during a message resubmit the message is 
> processed by the same consumer/loader that originally processed it.
> However, if a message resubmit happens during a failover switch, we have 
> noticed that the message is not sent to the right consumer as it should be. 
> Basically, the first available consumer is used, which is not what we want.
> I have searched for configuration changes but couldn't find any relevant 
> information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723982#comment-17723982
 ] 

Liviu Citu edited comment on ARTEMIS-4276 at 5/18/23 5:32 PM:
--

> This is a weakness in the application design which will lead to the same 
> problems with duplicate messages as you have when a broker failure causes the 
> consumer-group relationship to change.

I see it not as a weakness but rather as an incomplete solution, since we might 
still have duplicated messages (as you said) which in the end will fail during 
database import (see below). However, I think it is still better to have a 
local cache than nothing. There might be other, more complex solutions when 
load balancing is used, but this local cache will still help reduce unnecessary 
database insert failures.

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, our database IO meta layer has a mechanism in place to 
avoid the same transaction being imported twice (the database records have an 
audit trail which includes the transaction version). This is because the same 
database tables can also be affected by other applications that are part of our 
software (UI, batch utilities, etc.), so it is not only the gateway interface 
that imports data into the system. I just wanted to pinpoint a potential issue 
that could arise in applications during a failover switch.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, I 
have seen that Kafka has idempotent consumers. I have never used them, though.


was (Author: JIRAUSER300236):
> This is a weakness in the application design which will lead to the same 
> problems with duplicate messages as you have when a broker failure causes the 
> consumer-group relationship to change.

I see it not as a weakness but rather as an incomplete solution, since we might 
still have duplicated messages (as you said) which in the end will fail during 
database import (see below). However, in the context of load balancing I think 
it is still better to have a local cache than nothing. There might indeed be 
other, more complex solutions, but this local cache will still help, especially 
for those applications which do not have load balancing.

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, our database IO meta layer has a mechanism in place to 
avoid the same transaction being imported twice (the database records have an 
audit trail which includes the transaction version). This is because the same 
database tables can also be affected by other applications that are part of our 
software (UI, batch utilities, etc.), so it is not only the gateway interface 
that imports data into the system. I just wanted to pinpoint a potential issue 
that could arise in applications during a failover switch.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, I 
have seen that Kafka has idempotent consumers. I have never used them, though.

> Message Group does not replicate properly during failover
> -
>
> Key: ARTEMIS-4276
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4276
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> We are currently migrating our software from Classic to Artemis and we plan 
> to use failover functionality.
> We were using the message group functionality by setting *JMSXGroupID* and 
> this was working as expected. However, after a failover switch I noticed that 
> messages are sent to the wrong consumers.
> Our gateway/interface application is actually a collection of servers:
>  * gateway adapter server: receives messages from external systems and puts 
> them on a specific/virtual topic
>  * gateway loader server (can be balanced): picks up the messages from the 
> topic and does the processing
>  * gateway fail queue: monitors all messages that failed processing and 
> provides functionality for resubmitting the message (users will correct the 
> processing errors and then resubmit the transaction)
> *JMSXGroupID* is used to ensure that during a message resubmit the message is 
> processed by the same consumer/loader that originally processed it.
> However, if a message resubmit happens during a failover switch, we have 
> noticed that the message is not sent to the right consumer as it should be. 
> Basically, the first available consumer is used, which is not what we want.
> I have searched for configuration changes but couldn't find any relevant 
> information.

[jira] [Comment Edited] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723982#comment-17723982
 ] 

Liviu Citu edited comment on ARTEMIS-4276 at 5/18/23 5:32 PM:
--

> This is a weakness in the application design which will lead to the same 
> problems with duplicate messages as you have when a broker failure causes the 
> consumer-group relationship to change.

I see it not as a weakness but rather as an incomplete solution, since we might 
still have duplicated messages (as you said) which in the end will fail during 
database import (see below). However, I think it is still better to have a 
local cache than nothing. There might be other, more complex solutions when 
load balancing is used, but this local cache will still help reduce unnecessary 
database insert failures.

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, our database IO meta layer has a mechanism in place to 
avoid the same transaction being imported twice (the database records have an 
audit trail which includes the transaction version). This is because the same 
database tables can also be affected by other applications that are part of our 
software (UI, batch utilities, etc.), so it is not only the gateway interface 
that imports data into the system. I just wanted to pinpoint a potential issue 
that could arise in applications during a failover switch.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, I 
have seen that Kafka has idempotent consumers. I have never used them, though.


was (Author: JIRAUSER300236):
> This is a weakness in the application design which will lead to the same 
> problems with duplicate messages as you have when a broker failure causes the 
> consumer-group relationship to change.

I see it not as a weakness but rather as an incomplete solution, since we might 
still have duplicated messages (as you said) which in the end will fail during 
database import (see below). However, I think it is still better to have a 
local cache than nothing. There might be other, more complex solutions when 
load balancing is used, but this local cache will still help reduce unnecessary 
database insert failures.

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, our database IO meta layer has a mechanism in place to 
avoid the same transaction being imported twice (the database records have an 
audit trail which includes the transaction version). This is because the same 
database tables can also be affected by other applications that are part of our 
software (UI, batch utilities, etc.), so it is not only the gateway interface 
that imports data into the system. I just wanted to pinpoint a potential issue 
that could arise in applications during a failover switch.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, I 
have seen that Kafka has idempotent consumers. I have never used them, though.

> Message Group does not replicate properly during failover
> -
>
> Key: ARTEMIS-4276
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4276
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> We are currently migrating our software from Classic to Artemis and we plan 
> to use failover functionality.
> We were using the message group functionality by setting *JMSXGroupID* and 
> this was working as expected. However, after a failover switch I noticed that 
> messages are sent to the wrong consumers.
> Our gateway/interface application is actually a collection of servers:
>  * gateway adapter server: receives messages from external systems and puts 
> them on a specific/virtual topic
>  * gateway loader server (can be balanced): picks up the messages from the 
> topic and does the processing
>  * gateway fail queue: monitors all messages that failed processing and 
> provides functionality for resubmitting the message (users will correct the 
> processing errors and then resubmit the transaction)
> *JMSXGroupID* is used to ensure that during a message resubmit the message is 
> processed by the same consumer/loader that originally processed it.
> However, if a message resubmit happens during a failover switch, we have 
> noticed that the message is not sent to the right consumer as it should be. 
> Basically, the first available consumer is used, which is not what we want.
> I have searched for configuration changes but couldn't find any relevant 
> information.

[jira] [Comment Edited] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723982#comment-17723982
 ] 

Liviu Citu edited comment on ARTEMIS-4276 at 5/18/23 5:31 PM:
--

> This is a weakness in the application design which will lead to the same 
> problems with duplicate messages as you have when a broker failure causes the 
> consumer-group relationship to change.

I see it not as a weakness but rather as an incomplete solution, since we might 
still have duplicated messages (as you said) which in the end will fail during 
database import (see below). However, in the context of load balancing I think 
it is still better to have a local cache than nothing. There might indeed be 
other, more complex solutions, but this local cache will still help, especially 
for those applications which do not have load balancing.

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, our database IO meta layer has a mechanism in place to 
avoid the same transaction being imported twice (the database records have an 
audit trail which includes the transaction version). This is because the same 
database tables can also be affected by other applications that are part of our 
software (UI, batch utilities, etc.), so it is not only the gateway interface 
that imports data into the system. I just wanted to pinpoint a potential issue 
that could arise in applications during a failover switch.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, I 
have seen that Kafka has idempotent consumers. I have never used them, though.


was (Author: JIRAUSER300236):
Regarding:

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, our database IO meta layer has a mechanism in place to 
avoid the same transaction being imported twice (the database records have an 
audit trail which includes the transaction version). This is because the same 
database tables can also be affected by other applications that are part of our 
software (UI, batch utilities, etc.), so it is not only the gateway interface 
that imports data into the system. I just wanted to pinpoint a potential issue 
that could arise in applications during a failover switch.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, I 
have seen that Kafka has idempotent consumers. I have never used them, though.

> Message Group does not replicate properly during failover
> -
>
> Key: ARTEMIS-4276
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4276
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> We are currently migrating our software from Classic to Artemis and we plan 
> to use failover functionality.
> We were using the message group functionality by setting *JMSXGroupID* and 
> this was working as expected. However, after a failover switch I noticed that 
> messages are sent to the wrong consumers.
> Our gateway/interface application is actually a collection of servers:
>  * gateway adapter server: receives messages from external systems and puts 
> them on a specific/virtual topic
>  * gateway loader server (can be balanced): picks up the messages from the 
> topic and does the processing
>  * gateway fail queue: monitors all messages that failed processing and 
> provides functionality for resubmitting the message (users will correct the 
> processing errors and then resubmit the transaction)
> *JMSXGroupID* is used to ensure that during a message resubmit the message is 
> processed by the same consumer/loader that originally processed it.
> However, if a message resubmit happens during a failover switch, we have 
> noticed that the message is not sent to the right consumer as it should be. 
> Basically, the first available consumer is used, which is not what we want.
> I have searched for configuration changes but couldn't find any relevant 
> information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723982#comment-17723982
 ] 

Liviu Citu edited comment on ARTEMIS-4276 at 5/18/23 5:25 PM:
--

Regarding:

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, our database IO meta layer has a mechanism in place to 
avoid the same transaction being imported twice (the database records have an 
audit trail which includes the transaction version). This is because the same 
database tables can also be affected by other applications that are part of our 
software (UI, batch utilities, etc.), so it is not only the gateway interface 
that imports data into the system. I just wanted to pinpoint a potential issue 
that could arise in applications during a failover switch.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, I 
have seen that Kafka has idempotent consumers. I have never used them, though.


was (Author: JIRAUSER300236):
Actually I think I understood what you meant :)

Regarding:

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, our database IO meta layer has a mechanism in place to 
avoid the same transaction being imported twice (the database records have an 
audit trail which includes the transaction version). This is because the same 
database tables can also be affected by other applications that are part of our 
software (UI, batch utilities, etc.), so it is not only the gateway interface 
that imports data into the system. I just wanted to pinpoint a potential issue 
that could arise in applications during a failover switch.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, Kafka 
has idempotent consumers.

 

> Message Group does not replicate properly during failover
> -
>
> Key: ARTEMIS-4276
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4276
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> We are currently migrating our software from Classic to Artemis and we plan 
> to use failover functionality.
> We were using the message group functionality by setting *JMSXGroupID* and 
> this was working as expected. However, after a failover switch I noticed that 
> messages are sent to the wrong consumers.
> Our gateway/interface application is actually a collection of servers:
>  * gateway adapter server: receives messages from external systems and puts 
> them on a specific/virtual topic
>  * gateway loader server (can be balanced): picks up the messages from the 
> topic and does the processing
>  * gateway fail queue: monitors all messages that failed processing and 
> provides functionality for resubmitting the message (users will correct the 
> processing errors and then resubmit the transaction)
> *JMSXGroupID* is used to ensure that during a message resubmit the message is 
> processed by the same consumer/loader that originally processed it.
> However, if a message resubmit happens during a failover switch, we have 
> noticed that the message is not sent to the right consumer as it should be. 
> Basically, the first available consumer is used, which is not what we want.
> I have searched for configuration changes but couldn't find any relevant 
> information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723982#comment-17723982
 ] 

Liviu Citu edited comment on ARTEMIS-4276 at 5/18/23 5:16 PM:
--

Actually I think I understood what you meant :)

Regarding:

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, our database IO meta layer has a mechanism in place to 
avoid the same transaction being imported twice (the database records have an 
audit trail which includes the transaction version). This is because the same 
database tables can also be affected by other applications that are part of our 
software (UI, batch utilities, etc.), so it is not only the gateway interface 
that imports data into the system. I just wanted to pinpoint a potential issue 
that could arise in applications during a failover switch.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, Kafka 
has idempotent consumers.

 


was (Author: JIRAUSER300236):
Actually I think I understood what you meant :)

Regarding:

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, in our database IO meta layer we have a mechanism in place 
to avoid the same transaction being imported twice (the database records have 
an audit trail which includes the transaction version). This is because the 
same database tables can also be affected by other applications that are part 
of our software (UI, batch utilities, etc.), so it is not only the gateway 
interface that imports data into the system. I just wanted to pinpoint a 
potential issue that could arise in applications.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, Kafka 
has idempotent consumers.

 

> Message Group does not replicate properly during failover
> -
>
> Key: ARTEMIS-4276
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4276
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> We are currently migrating our software from Classic to Artemis and we plan 
> to use failover functionality.
> We were using the message group functionality by setting *JMSXGroupID* and 
> this was working as expected. However, after a failover switch I noticed that 
> messages are sent to the wrong consumers.
> Our gateway/interface application is actually a collection of servers:
>  * gateway adapter server: receives messages from external systems and puts 
> them on a specific/virtual topic
>  * gateway loader server (can be balanced): picks up the messages from the 
> topic and does the processing
>  * gateway fail queue: monitors all messages that failed processing and 
> provides functionality for resubmitting the message (users will correct the 
> processing errors and then resubmit the transaction)
> *JMSXGroupID* is used to ensure that during a message resubmit the message is 
> processed by the same consumer/loader that originally processed it.
> However, if a message resubmit happens during a failover switch, we have 
> noticed that the message is not sent to the right consumer as it should be. 
> Basically, the first available consumer is used, which is not what we want.
> I have searched for configuration changes but couldn't find any relevant 
> information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723982#comment-17723982
 ] 

Liviu Citu commented on ARTEMIS-4276:
-

Actually I think I understood what you meant :)

Regarding:

>  This leads to the same transaction being imported in the database twice..."

What I actually meant is that *_it will try to import_* the record into the 
database. Of course, in our database IO meta layer we have a mechanism in place 
to avoid the same transaction being imported twice (the database records have 
an audit trail which includes the transaction version). This is because the 
same database tables can also be affected by other applications that are part 
of our software (UI, batch utilities, etc.), so it is not only the gateway 
interface that imports data into the system. I just wanted to pinpoint a 
potential issue that could arise in applications.

Regarding:

> Idempotency is something you, as the application developer, must implement.

There are some third parties that have this out of the box. For instance, Kafka 
has idempotent consumers.

 

> Message Group does not replicate properly during failover
> -
>
> Key: ARTEMIS-4276
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4276
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> We are currently migrating our software from Classic to Artemis and we plan 
> to use failover functionality.
> We were using the message group functionality by setting *JMSXGroupID* and 
> this was working as expected. However, after a failover switch I noticed that 
> messages are sent to the wrong consumers.
> Our gateway/interface application is actually a collection of servers:
>  * gateway adapter server: receives messages from external systems and puts 
> them on a specific/virtual topic
>  * gateway loader server (can be balanced): picks up the messages from the 
> topic and does the processing
>  * gateway fail queue: monitors all messages that failed processing and 
> provides functionality for resubmitting the message (users will correct the 
> processing errors and then resubmit the transaction)
> *JMSXGroupID* is used to ensure that during a message resubmit the message is 
> processed by the same consumer/loader that originally processed it.
> However, if a message resubmit happens during a failover switch, we have 
> noticed that the message is not sent to the right consumer as it should be. 
> Basically, the first available consumer is used, which is not what we want.
> I have searched for configuration changes but couldn't find any relevant 
> information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723979#comment-17723979
 ] 

Justin Bertram commented on ARTEMIS-4276:
-

I think you've misunderstood much of what I wrote. Here are some additional 
comments and clarifications...

bq. Our plan during the migration from Classic ActiveMQ to Artemis is to modify 
the source code as little as possible to reduce the regression impact.

Fair enough.

bq. Our software is C++ based and we are using the CMS API (ActiveMQ CPP) as a 
client.

The CMS API was originally based on JMS 1.1 and I don't believe it has been 
updated since JMS 2 was released 10 years ago. Therefore, I wouldn't expect it 
to have the methods for creating a shared subscription.

bq. We do not want to process the same message in more than one group (please 
correct me if I am wrong)...

The whole point of sharing a subscription between multiple consumers is to 
ensure that the same message is not processed more than once.

I recommended the move to JMS 2 shared subscriptions assuming you were using a 
JMS client. This would make your code more portable and easier to understand. 
However, since you're using CMS, that's obviously out of the question.
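For readers on a JMS 2 client, the shared-subscription API being referred to looks roughly like the sketch below (the broker URL, topic name, and subscription name are assumptions):

{code:java}
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SharedSubscriptionExample {
   public static void main(String[] args) throws Exception {
      ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed URL
      try (JMSContext context = cf.createContext()) {
         Topic topic = context.createTopic("gateway.events"); // assumed topic

         // Both consumers share the subscription "loaders": every message on the
         // topic is delivered to exactly one of them, never to both.
         JMSConsumer loader1 = context.createSharedConsumer(topic, "loaders");
         JMSConsumer loader2 = context.createSharedConsumer(topic, "loaders");
         loader1.setMessageListener(m -> System.out.println("loader1: " + m));
         loader2.setMessageListener(m -> System.out.println("loader2: " + m));

         Thread.sleep(10_000); // keep the context open long enough to receive something
      }
   }
}
{code}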

bq. ActiveMQ CPP does not have idempotent consumers.

Idempotency is something you, as the application developer, must implement. It 
is not something inherent to the client implementation which you use to 
communicate with the broker (i.e. ActiveMQ CPP).

bq. Indeed the CMS consumer gets restored during failover but the object is not 
recreated so our wrapper is still valid and the cache still stands in this 
context.

The scenario where the primary broker fails and the client switches to the 
backup broker (i.e. "failover") is _not_ what I was describing. The problem I 
was trying to describe is what happens when some kind of failure renders the 
cache invalid. This could happen for any number of reasons, some of which I 
outlined in my previous comment. This is a weakness in the application design 
which will lead to the same problems with duplicate messages as you have when a 
broker failure causes the consumer-group relationship to change.

bq. The synchronization problem between the database and the JMS broker is not 
necessarily related to failover or Artemis usage.

Yes, of course. This is a general problem in computing, which is why XA 
transactions were invented in the first place. Their use is certainly not 
restricted to databases and message brokers, or even to Java. They are used 
across the industry in many different kinds of resources and in many different 
programming languages.

Typically the need for consistency between resources is identified before 
implementation and is part of the fundamental application design. XA is not 
simple and care is needed when fitting all the pieces together.
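As a rough illustration of that shape (not a recommendation for this particular application), a JMS send and a JDBC insert enlisted in one XA transaction might look like the sketch below. The TransactionManager must come from a real JTA implementation (a container or a standalone coordinator), and the queue and table names are made up:

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.XAConnection;
import javax.jms.XAConnectionFactory;
import javax.jms.XASession;
import javax.sql.XADataSource;
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

public class XaShapeSketch {

   // Coordinates one message send and one database insert atomically.
   static void sendAndInsert(TransactionManager tm,
                             XAConnectionFactory jmsFactory,
                             XADataSource dataSource,
                             String payload) throws Exception {
      tm.begin();
      Transaction tx = tm.getTransaction();

      XAConnection jmsConnection = jmsFactory.createXAConnection();
      XASession jmsSession = jmsConnection.createXASession();
      javax.sql.XAConnection sqlConnection = dataSource.getXAConnection();

      try {
         // Enlist both resources so they commit or roll back together.
         tx.enlistResource(jmsSession.getXAResource());
         tx.enlistResource(sqlConnection.getXAResource());

         Queue queue = jmsSession.createQueue("gateway.out"); // assumed name
         MessageProducer producer = jmsSession.createProducer(queue);
         producer.send(jmsSession.createTextMessage(payload));

         try (Connection db = sqlConnection.getConnection();
              PreparedStatement insert =
                    db.prepareStatement("INSERT INTO transactions(payload) VALUES (?)")) {
            insert.setString(1, payload);
            insert.executeUpdate();
         }

         tm.commit();
      } catch (Exception e) {
         tm.rollback(); // simplified error handling for the sketch
         throw e;
      } finally {
         jmsConnection.close();
         sqlConnection.close();
      }
   }
}
{code}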

bq. At the database level we have a protection with primary keys and indeed the 
same transaction cannot be inserted twice.

This seems to flatly contradict what you said in your previous comment, "This 
leads to the same transaction being imported in the database twice..." Please 
clarify.

bq. We just wanted to explore the possibility of having a way to remove these 
"fake" failures caused by failover, or somehow to distinguish them from real 
business failures.

The "fake" failures are the result of your application design (i.e. the 
consumers are not idempotent). To be clear, even _if_ the broker maintained the 
consumer-group relationship during failover you'd still have the risk of these 
kinds of "fake" failures in other scenarios.

That said, the client knows when a failover has occurred so it knows that, at 
least for a little while, there is a fair chance of duplicate messages and 
therefore primary key violations on the database. It could either add this 
context to the failure notification to help whoever reads it or it could simply 
ignore the primary key violations for a time.
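A minimal sketch of that suppression idea, in plain Java for brevity; the wiring that actually detects failover (e.g. a transport or exception listener) depends on the client library and is assumed, not shown:

{code:java}
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.atomic.AtomicReference;

// Remembers when the last failover was observed so duplicate-key failures
// shortly afterwards can be flagged as likely "technical" rather than
// business failures.
public class FailoverAwareFailureClassifier {

   private final Duration gracePeriod;
   private final AtomicReference<Instant> lastFailover = new AtomicReference<>(Instant.EPOCH);

   public FailoverAwareFailureClassifier(Duration gracePeriod) {
      this.gracePeriod = gracePeriod;
   }

   /** Call this from whatever callback the client fires when a failover completes. */
   public void onFailover() {
      lastFailover.set(Instant.now());
   }

   /** True if a duplicate-key error is probably just a failover artifact. */
   public boolean isLikelyFailoverDuplicate(Instant failureTime) {
      return Duration.between(lastFailover.get(), failureTime).compareTo(gracePeriod) <= 0;
   }
}
{code}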

> Message Group does not replicate properly during failover
> -
>
> Key: ARTEMIS-4276
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4276
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> We are currently migrating our software from Classic to Artemis and we plan 
> to use failover functionality.
> We were using the message group functionality by setting *JMSXGroupID* and 
> this was working as expected. However, after a failover switch I noticed that 
> messages are sent to the wrong consumers.
> Our gateway/interface application is actually a collection of servers:
>  * gateway adapter server: receives messages from external systems and puts 
> them on a specific/virtual topic
>  * gateway loader server (can be balanced): 

[jira] [Commented] (AMQ-9254) KahaDB minor fix when db files may be larger than max length

2023-05-18 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723961#comment-17723961
 ] 

ASF subversion and git services commented on AMQ-9254:
--

Commit cfbea60d6d4f934e7fbe85915183a2f211414b82 in activemq's branch 
refs/heads/main from Matt Pavlovich
[ https://gitbox.apache.org/repos/asf?p=activemq.git;h=cfbea60d6d ]

[AMQ-9258] Update kahadb corruption test to account for new fix from AMQ-9254 
(#1007)



> KahaDB minor fix when db files may be larger than max length
> 
>
> Key: AMQ-9254
> URL: https://issues.apache.org/jira/browse/AMQ-9254
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Reporter: Matt Pavlovich
>Assignee: Matt Pavlovich
>Priority: Minor
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Log message:
> {noformat}
> Caused by: java.io.IOException: Invalid location size: 54:33554460, size: 2412
> at 
> org.apache.activemq.store.kahadb.disk.journal.DataFileAccessor.readRecord(DataFileAccessor.java:88)
>  ~[?:?]
> at 
> org.apache.activemq.store.kahadb.disk.journal.Journal.read(Journal.java:953) 
> ~[?:?]
> at 
> org.apache.activemq.store.kahadb.MessageDatabase.load(MessageDatabase.java:1197)
>  ~[?:?]
> at 
> org.apache.activemq.store.kahadb.KahaDBStore.loadMessage(KahaDBStore.java:1401)
>  ~[?:?]
> ... 74 more
> {noformat}
> db-54.log size: 33556877
> Note: This read would have succeeded otherwise.
> Reproducible test case:
> ref: https://github.com/mattrpav/activemq-jira-9254



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (AMQ-9258) Fix flaky kahadb test that can fail on busy CI server

2023-05-18 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723960#comment-17723960
 ] 

ASF subversion and git services commented on AMQ-9258:
--

Commit cfbea60d6d4f934e7fbe85915183a2f211414b82 in activemq's branch 
refs/heads/main from Matt Pavlovich
[ https://gitbox.apache.org/repos/asf?p=activemq.git;h=cfbea60d6d ]

[AMQ-9258] Update kahadb corruption test to account for new fix from AMQ-9254 
(#1007)



> Fix flaky kahadb test that can fail on busy CI server
> -
>
> Key: AMQ-9258
> URL: https://issues.apache.org/jira/browse/AMQ-9258
> Project: ActiveMQ
>  Issue Type: Improvement
>Reporter: Matt Pavlovich
>Assignee: Matt Pavlovich
>Priority: Minor
>
> 1. Add NPE guarding to anonymous log4j2 appender that may receive other log 
> messages when running on slower CI servers
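For illustration, the kind of guard being described might look like the sketch below; this is not the project's actual test code, and the appender name and matched log text are made up:

{code:java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.Logger;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.Property;

public class GuardedTestAppender {

   public static void main(String[] args) {
      List<String> captured = new CopyOnWriteArrayList<>();

      AbstractAppender appender = new AbstractAppender(
            "test-capture", null, null, false, Property.EMPTY_ARRAY) {
         @Override
         public void append(LogEvent event) {
            // Guard against unrelated or partially-populated events that can show
            // up when other tests log concurrently on a busy CI host.
            if (event == null || event.getMessage() == null) {
               return;
            }
            String formatted = event.getMessage().getFormattedMessage();
            if (formatted != null && formatted.contains("corruption detected")) { // hypothetical marker
               captured.add(formatted);
            }
         }
      };
      appender.start();

      Logger logger = (Logger) LogManager.getRootLogger();
      logger.addAppender(appender);

      LogManager.getLogger(GuardedTestAppender.class).warn("corruption detected in db-1.log");
      System.out.println("captured: " + captured);
   }
}
{code}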



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-4275) _AMQ_ConsumerName is missing from Consumer Created/Closed notifications

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17722663#comment-17722663
 ] 

Liviu Citu edited comment on ARTEMIS-4275 at 5/18/23 3:29 PM:
--

Hi Justin,

I saw your commit regarding _AMQ_ConsumerName and I have added a comment to it. 
Please have a look. Thanks.

[https://github.com/apache/activemq-artemis/commit/7da9bdf0a9b0d416ab4fa53c421ace27f3a44d0b#diff-96cdf8c4ff8d61ac9690fd5bfe2baefb4207074fc2bcd8a86d9122cb2f1ee1c2]


was (Author: JIRAUSER300236):
Hi Bertram,

I saw your commit regarding _AMQ_ConsumerName and I have added a comment to it. 
Please have a look, Thanks

https://github.com/apache/activemq-artemis/commit/7da9bdf0a9b0d416ab4fa53c421ace27f3a44d0b#diff-96cdf8c4ff8d61ac9690fd5bfe2baefb4207074fc2bcd8a86d9122cb2f1ee1c2

> _AMQ_ConsumerName is missing from Consumer Created/Closed notifications
> ---
>
> Key: ARTEMIS-4275
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4275
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> The *_AMQ_ConsumerName* property is missing from the *CONSUMER_CREATED / 
> CONSUMER_CLOSED* notification messages. This property is necessary to 
> identify the *ConsumerId*. In a subscription-model design the server needs to 
> know when a certain subscription (consumer) gets created or closed. I have 
> tried to use *_AMQ_RoutingName* but it seems to serve a different purpose 
> (sometimes it is simply equal to *_AMQ_Address*).
> *_AMQ_ConsumerName* was available in the Advisory Message but it does not 
> seem to be part of the Notification Message. Therefore, this is a regression 
> compared to Classic ActiveMQ.
> Regards
> Liviu
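For anyone reproducing this, consumer lifecycle notifications can be observed by consuming from the broker's management notification address (by default {{activemq.notifications}}, assuming it has not been reconfigured) and dumping each notification's properties to see which identifiers are, or are not, present. A small sketch with the Artemis JMS client (broker URL is an assumption):

{code:java}
import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class NotificationWatcher {
   public static void main(String[] args) throws Exception {
      ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed URL
      Connection connection = cf.createConnection();
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
      Topic notifications = session.createTopic("activemq.notifications"); // default address
      MessageConsumer consumer = session.createConsumer(notifications);
      connection.start();

      consumer.setMessageListener(message -> {
         try {
            // _AMQ_NotifType distinguishes CONSUMER_CREATED, CONSUMER_CLOSED, etc.
            System.out.println("type: " + message.getStringProperty("_AMQ_NotifType"));
            Enumeration<?> names = message.getPropertyNames();
            while (names.hasMoreElements()) {
               String name = (String) names.nextElement();
               System.out.println("  " + name + " = " + message.getObjectProperty(name));
            }
         } catch (Exception e) {
            e.printStackTrace();
         }
      });

      Thread.sleep(60_000); // keep running long enough to observe notifications
   }
}
{code}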



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4282) Sending Large ApplicationProperties section in a transactional session may break the server.

2023-05-18 Thread Timothy A. Bish (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy A. Bish updated ARTEMIS-4282:
-
Summary: Sending Large ApplicationProperties section in a transactional 
session may break the server.  (was: Sending Large ApplicationProperty section 
in a transactional session may break the server.)

> Sending Large ApplicationProperties section in a transactional session may 
> break the server.
> 
>
> Key: ARTEMIS-4282
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4282
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.28.0
>Reporter: Clebert Suconic
>Priority: Major
> Fix For: 2.29.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4282) Sending Large ApplicationProperty section in a transactional session may break the server.

2023-05-18 Thread Timothy A. Bish (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy A. Bish updated ARTEMIS-4282:
-
Summary: Sending Large ApplicationProperty section in a transactional 
session may break the server.  (was: Sending Large Header in a transactional 
session may break the server.)

> Sending Large ApplicationProperty section in a transactional session may 
> break the server.
> --
>
> Key: ARTEMIS-4282
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4282
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.28.0
>Reporter: Clebert Suconic
>Priority: Major
> Fix For: 2.29.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (ARTEMIS-4284) Openwire prefetched messages can be out of order for a single consumer

2023-05-18 Thread Gary Tully (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved ARTEMIS-4284.
-
Resolution: Fixed

> Openwire prefetched messages can be out of order for a single consumer
> --
>
> Key: ARTEMIS-4284
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4284
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: OpenWire
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>
> It is an anti-pattern, but a new-consumer-per-message loop can fail with 
> OpenWire. The remove is non-blocking, so a new consumer can co-exist with the 
> async cancel/add_sorted of the previous consumer. This breaks the ordering 
> that is required for the delivery-count logic around unconsumed prefetched 
> messages.
> The workaround is to use prefetch=1, but the underlying problem is real: in 
> 5.x the cancel/add_sorted is done in the same thread as the remove. In 
> Artemis, the storage manager handles this asynchronously.
> A potential fix is to wait for the operation context complete on handling the 
> removeConsumer command.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4284) Openwire prefetched messages can be out of order for a single consumer

2023-05-18 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723916#comment-17723916
 ] 

ASF subversion and git services commented on ARTEMIS-4284:
--

Commit b664022a1ed772e74b518dd5d12f44ca53fb2f16 in activemq-artemis's branch 
refs/heads/main from Gary Tully
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=b664022a1e ]

ARTEMIS-4284 - sync openwire remove consumer with the operation context to 
ensure prefetched messages are available to the next consumer in order

This closes #4483


> Openwire prefetched messages can be out of order for a single consumer
> --
>
> Key: ARTEMIS-4284
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4284
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: OpenWire
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>
> It is an anti-pattern, but a new-consumer-per-message loop can fail with 
> OpenWire. The remove is non-blocking, so a new consumer can co-exist with the 
> async cancel/add_sorted of the previous consumer. This breaks the ordering 
> that is required for the delivery-count logic around unconsumed prefetched 
> messages.
> The workaround is to use prefetch=1, but the underlying problem is real: in 
> 5.x the cancel/add_sorted is done in the same thread as the remove. In 
> Artemis, the storage manager handles this asynchronously.
> A potential fix is to wait for the operation context complete on handling the 
> removeConsumer command.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4284) Openwire prefetched messages can be out of order for a single consumer

2023-05-18 Thread Gary Tully (Jira)
Gary Tully created ARTEMIS-4284:
---

 Summary: Openwire prefetched messages can be out of order for a 
single consumer
 Key: ARTEMIS-4284
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4284
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: OpenWire
Affects Versions: 2.28.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 2.29.0


It is an anti-pattern, but a new-consumer-per-message loop can fail with 
OpenWire. The remove is non-blocking, so a new consumer can co-exist with the 
async cancel/add_sorted of the previous consumer. This breaks the ordering that 
is required for the delivery-count logic around unconsumed prefetched messages.

The workaround is to use prefetch=1, but the underlying problem is real: in 5.x 
the cancel/add_sorted is done in the same thread as the remove. In Artemis, the 
storage manager handles this asynchronously.

A potential fix is to wait for the operation context complete on handling the 
removeConsumer command.
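The actual change is in the linked commit; purely as a sketch of the "wait for the operation context to complete" idea using Artemis' storage-manager callback API, with a made-up helper class and method name:

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.activemq.artemis.core.io.IOCallback;
import org.apache.activemq.artemis.core.persistence.StorageManager;

public final class OperationContextSync {

   // Blocks until the pending storage operations of the current operation
   // context have completed (or the timeout elapses), so that work done after
   // this call observes the cancelled/re-added prefetched messages.
   public static boolean awaitStorageOperations(StorageManager storageManager,
                                                long timeout, TimeUnit unit)
         throws InterruptedException {
      CountDownLatch latch = new CountDownLatch(1);
      storageManager.afterCompleteOperations(new IOCallback() {
         @Override
         public void done() {
            latch.countDown();
         }

         @Override
         public void onError(int errorCode, String errorMessage) {
            latch.countDown();
         }
      });
      return latch.await(timeout, unit);
   }
}
{code}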



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (ARTEMIS-4280) Kubernetes JaaS LoginModule extract role info from review groups

2023-05-18 Thread Gary Tully (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved ARTEMIS-4280.
-
Resolution: Fixed

>  Kubernetes JaaS LoginModule extract role info from review groups
> -
>
> Key: ARTEMIS-4280
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4280
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: JAAS
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>
> The role mapping is via a properties file, which could be achieved via the 
> properties login module; however, the roles also map nicely from the token 
> review user's groups.
> Making the role file optional and using the group info as roles by default is 
> more intuitive.
> Adding an optional ignoreTokenReviewRoles setting to disable the role 
> association from the group information gives full flexibility.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4283) Fail fast CORE client connect on closing

2023-05-18 Thread Domenico Francesco Bruscino (Jira)
Domenico Francesco Bruscino created ARTEMIS-4283:


 Summary: Fail fast CORE client connect on closing
 Key: ARTEMIS-4283
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4283
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Domenico Francesco Bruscino
Assignee: Domenico Francesco Bruscino


ServerLocatorImpl waits for the topology after connecting a new session 
factory. It should interrupt the wait for the topology when it is closed, so 
that it fails fast.
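An illustration of the fail-fast pattern being described; the names are illustrative, not the actual ServerLocatorImpl internals:

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// A close() call releases any thread still waiting for the cluster topology
// instead of letting it block until the full call timeout.
public class TopologyWait {

   private final CountDownLatch topologyArrived = new CountDownLatch(1);
   private volatile boolean closed;

   public void topologyReceived() {
      topologyArrived.countDown();
   }

   public void close() {
      closed = true;
      topologyArrived.countDown(); // wake up waiters so they can fail fast
   }

   /** Returns true if topology arrived; throws if closed while waiting. */
   public boolean awaitTopology(long timeout, TimeUnit unit) throws InterruptedException {
      boolean arrived = topologyArrived.await(timeout, unit);
      if (closed) {
         throw new IllegalStateException("ServerLocator closed while waiting for topology");
      }
      return arrived;
   }
}
{code}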



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723810#comment-17723810
 ] 

Liviu Citu edited comment on ARTEMIS-4276 at 5/18/23 7:23 AM:
--

*Virtual Topics vs Shared Topic Consumers*

Our plan during the migration from *Classic ActiveMQ* to *Artemis* is to modify 
the source code as little as possible to reduce the regression impact.

Our software is C++ based and we are using the *CMS API* (*ActiveMQ CPP*) as a 
client. I am unable to find a CMS API to create a shared topic consumer, so I 
am not sure if it exists. At the same time, I am not very sure that the 
behavior of such load balancing using shared subscriptions is what we want in 
our Gateway Loader Servers. We do not want to process the same message in more 
than one group (please correct me if I am wrong): 
[http://jmesnil.net/weblog/2013/06/27/jms-20-shared-subscription/]

Nonetheless we were using virtual topics with Classic ActiveMQ and they work as 
expected with Artemis too (the setup changes are trivial).

*Idempotent consumer using local, volatile LRU cache*

*ActiveMQ CPP* does not have idempotent consumers. Nonetheless, in our software 
we have a wrapper over the CMS consumer and a wrapper over the CMS consumer 
listener. The LRU cache is part of our listener. Indeed, the *CMS consumer* gets 
restored during failover but the object is not recreated, so our wrapper is 
still valid and the cache still stands in this context. This might not be the 
best option for handling duplicated messages, but when there is no load 
balancing it works OK. The problem arises when more than one consumer is 
involved for the same topic.
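A minimal sketch of such a local, volatile LRU de-duplication cache, shown in Java for brevity (the real implementation wraps a CMS/C++ listener); note it only protects a single consumer instance, is lost on restart, and is not shared between load-balanced consumers:

{code:java}
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class SeenMessageCache {

   private final Set<String> seen;

   public SeenMessageCache(int capacity) {
      // Bounded, access-ordered LRU set of recently seen message IDs.
      this.seen = Collections.newSetFromMap(
            new LinkedHashMap<String, Boolean>(capacity, 0.75f, true) {
               @Override
               protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                  return size() > capacity;
               }
            });
   }

   /** Returns true if the message should be processed, false if it is a duplicate. */
   public synchronized boolean firstTimeSeen(String messageId) {
      return seen.add(messageId);
   }
}
{code}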

*XA transaction*

The synchronization problem between the database and the JMS broker is not 
necessarily related to failover or Artemis usage. We also have this with 
Classic ActiveMQ (for instance, if there is a network glitch, or when the 
ActiveMQ broker goes down and the message has already reached the database). We 
were exploring the usage of XA transactions; however, the code changes needed 
to implement them in existing software are huge and practically impossible.

*Handle duplicate messages at database level*

At the database level we have protection with primary keys, and indeed the same 
transaction cannot be inserted twice. The problem with this is more of a user 
notification problem.

As I have explained in the description of this issue, we also have a Gateway 
Fail Queue Monitor where users can find all messages that failed during 
processing (including duplicates that failed during insertion).

We just wanted to explore the possibility of having a way to remove these 
"fake" failures caused by failover, or somehow to distinguish them from real 
business failures. These are technical failures (caused by failover in this 
case), and users looking at the Fail Queue Monitor might get confused when 
seeing such duplicated messages without understanding what went wrong (whether 
the same duplicated transaction was actually received from the external system 
or the duplicated message was caused by failover). I suppose they will have to 
deal with this as a system limitation.


was (Author: JIRAUSER300236):
*Virtual Topics vs Shared Topic Consumers*

Our plan during the migration from *Classic ActiveMQ* to *Artemis* is to modify 
the source code as little as possible to reduce the regression impact.

Our software is C++ based and we are using the *CMS API* (*ActiveMQ CPP*) as a 
client. I am unable to find a CMS API to create a shared topic consumer, so I 
am not sure if it exists. At the same time, I am not very sure that the 
behavior of such load balancing using shared subscriptions is what we want in 
our Gateway Loader Servers. We do not want to process the same message in more 
than one group (please correct me if I am wrong): 
[http://jmesnil.net/weblog/2013/06/27/jms-20-shared-subscription/]

Nonetheless we were using virtual topics with Classic ActiveMQ and they work as 
expected with Artemis too (the setup changes are trivial).

*Idempotent consumer using local, volatile LRU cache*

*ActiveMQ CPP* does not have idempotent consumers. Nonetheless, in our software 
we have a wrapper over the CMS consumer and a wrapper over the CMS consumer 
listener. The LRU cache is part of our listener. Indeed, the *CMS consumer* gets 
restored during FailOver but the object is not recreated, so our wrapper is 
still valid and the cache still stands in this context. This might not be the 
best option for handling duplicated messages, but when there is no load 
balancing it works OK. The problem arises when more than one consumer is 
involved for the same topic.

*XA transaction*

The synchronization problem between the database and the JMS broker is not 
necessarily related to FailOver or Artemis usage. We also have this with 
Classic ActiveMQ (for instance, if there is a network glitch or when 

[jira] [Comment Edited] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723810#comment-17723810
 ] 

Liviu Citu edited comment on ARTEMIS-4276 at 5/18/23 7:20 AM:
--

*Virtual Topics vs Shared Topic Consumers*

Our plan during the migration from *Classic ActiveMQ* to *Artemis* is to modify 
the source code as little as possible to reduce the regression impact.

Our software is C++ based and we are using the *CMS API* (*ActiveMQ CPP*) as a 
client. I am unable to find a CMS API to create a shared topic consumer, so I 
am not sure if it exists. At the same time, I am not very sure that the 
behavior of such load balancing using shared subscriptions is what we want in 
our Gateway Loader Servers. We do not want to process the same message in more 
than one group (please correct me if I am wrong): 
[http://jmesnil.net/weblog/2013/06/27/jms-20-shared-subscription/]

Nonetheless we were using virtual topics with Classic ActiveMQ and they work as 
expected with Artemis too (the setup changes are trivial).

*Idempotent consumer using local, volatile LRU cache*

*ActiveMQ CPP* does not have idempotent consumers. Nonetheless, in our
software we have a wrapper over the CMS consumer and a wrapper over the CMS
consumer listener, and the LRU cache is part of our listener. The *CMS
consumer* gets restored during failover but the object is not recreated, so
our wrapper is still valid and the cache still stands in this context. This
might not be the best option for handling duplicated messages, but when there
is no load balancing it works fine. The problem arises when more than one
consumer is involved for the same topic.
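
A minimal sketch of such a de-duplicating listener wrapper is shown below. The
inner listener and the cache capacity are application-specific assumptions,
and keying on the CMS message ID is just one option (any stable business key
would work the same way); eviction here is simply oldest-first:

{code}
// Minimal sketch of a de-duplicating wrapper around an existing CMS
// MessageListener. A bounded cache remembers recently seen message IDs and
// silently drops redeliveries (e.g. after a failover reconnect).
#include <cstddef>
#include <list>
#include <string>
#include <unordered_set>

#include <cms/Message.h>
#include <cms/MessageListener.h>

class DedupListener : public cms::MessageListener {
public:
    DedupListener(cms::MessageListener* inner, std::size_t capacity)
        : inner(inner), capacity(capacity) {}

    void onMessage(const cms::Message* message) override {
        const std::string id = message->getCMSMessageID();
        if (seen.count(id) != 0) {
            return;                      // duplicate delivery: drop it
        }
        remember(id);
        inner->onMessage(message);       // delegate to the business listener
    }

private:
    void remember(const std::string& id) {
        order.push_front(id);
        seen.insert(id);
        if (order.size() > capacity) {   // evict the oldest remembered ID
            seen.erase(order.back());
            order.pop_back();
        }
    }

    cms::MessageListener* inner;
    std::size_t capacity;
    std::list<std::string> order;        // remembered IDs, newest first
    std::unordered_set<std::string> seen;
};
{code}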

*XA transaction*

The synchronization problem between the database and the JMS broker is not
necessarily related to failover or Artemis usage. We have this with Classic
ActiveMQ as well (for instance, if there is a network glitch or *ActiveMQ*
goes down after the message has reached the database). We explored using XA
transactions, however the code changes needed to implement them in existing
software are huge and practically impossible.

*Handle duplicate messages at database level*

At the database level we have protection through primary keys, so the same
transaction cannot be inserted twice. What remains is more of a user
notification problem.

As I explained in the description of this issue, we also have a Gateway Fail
Queue Monitor where users can find all messages that failed during processing
(including duplicates that failed during insertion).

We just wanted to explore the possibility of removing these "fake" failures
caused by failover, or somehow distinguishing them from real business
failures. These are technical failures (caused by failover in this case), and
users looking at the Fail Queue Monitor might get confused when seeing such
duplicated messages without understanding what went wrong. I suppose they will
have to treat this as a system limitation.

 


was (Author: JIRAUSER300236):
*Virtual Topics vs Shared Topic Consumers*

Our plan during the migration from *Classic ActiveMQ* to *Artemis* is to
modify the source code as little as possible to reduce the regression impact.

Our software is C++ based and we are using the *CMS API* ({*}ActiveMQ CPP{*})
as a client. I am unable to find a CMS API for creating a shared topic
consumer, so I am not sure one exists. At the same time, I am not sure that
the load-balancing behavior of such a shared subscription is what we want in
our Gateway Loader Servers. We do not want to process the same message in more
than one group (please correct me if I am wrong):
[http://jmesnil.net/weblog/2013/06/27/jms-20-shared-subscription/]

Nonetheless, we were using virtual topics with Classic ActiveMQ and they work
as expected with Artemis too (the setup changes are trivial).

*Idempotent consumer using local, volatile LRU cache*

*ActiveMQ CPP* does not have idempotent consumers. Nonetheless, in our
software we have a wrapper over the CMS consumer and a wrapper over the CMS
consumer listener, and the LRU cache is part of our listener. The *CMS
consumer* gets restored during failover but the object is not recreated, so
our wrapper is still valid and the cache still stands in this context. This
might not be the best option for handling duplicated messages, but when there
is no load balancing it works fine. The problem arises when more than one
consumer is involved for the same topic.

*XA transaction*

The synchronization problem between the database and the JMS broker is not
necessarily related to failover or Artemis usage. We have this with Classic
ActiveMQ as well (for instance, if there is a network glitch or *ActiveMQ*
goes down after the message has reached the database). We explored using XA
transactions, however the code changes

[jira] [Comment Edited] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723810#comment-17723810
 ] 

Liviu Citu edited comment on ARTEMIS-4276 at 5/18/23 7:18 AM:
--

*Virtual Topics vs Shared Topic Consumers*

Our plan during the migration from *Classic ActiveMQ* to *Artemis* is to
modify the source code as little as possible to reduce the regression impact.

Our software is C++ based and we are using the *CMS API* ({*}ActiveMQ CPP{*})
as a client. I am unable to find a CMS API for creating a shared topic
consumer, so I am not sure one exists. At the same time, I am not sure that
the load-balancing behavior of such a shared subscription is what we want in
our Gateway Loader Servers. We do not want to process the same message in more
than one group (please correct me if I am wrong):
[http://jmesnil.net/weblog/2013/06/27/jms-20-shared-subscription/]

Nonetheless, we were using virtual topics with Classic ActiveMQ and they work
as expected with Artemis too (the setup changes are trivial).

*Idempotent consumer using local, volatile LRU cache*

*ActiveMQ CPP* does not have idempotent consumers. Nonetheless, in our
software we have a wrapper over the CMS consumer and a wrapper over the CMS
consumer listener, and the LRU cache is part of our listener. The *CMS
consumer* gets restored during failover but the object is not recreated, so
our wrapper is still valid and the cache still stands in this context. This
might not be the best option for handling duplicated messages, but when there
is no load balancing it works fine. The problem arises when more than one
consumer is involved for the same topic.

*XA transaction*

The synchronization problem between the database and the JMS broker is not
necessarily related to failover or Artemis usage. We have this with Classic
ActiveMQ as well (for instance, if there is a network glitch or *ActiveMQ*
goes down after the message has reached the database). We explored using XA
transactions, however the code changes needed to implement them in existing
software are huge and practically impossible. However, at the database level
we have protection through primary keys, so the same transaction cannot be
processed twice.

As I explained in the description of this issue, we also have a Gateway Fail
Queue Monitor where users can find all messages that failed during processing
(including duplicates that failed during insertion).

We just wanted to explore the possibility of removing these "fake" failures
caused by failover, or somehow distinguishing them from real business
failures. These are technical failures (caused by failover in this case), and
users looking at the Fail Queue Monitor might get confused when seeing such
duplicated messages without understanding what went wrong. I suppose they will
have to treat this as a system limitation.

 


was (Author: JIRAUSER300236):
*Virtual Topics vs Shared Topic Consumers*

Our plan during the migration from *Classic ActiveMQ* to *Artemis* is to
modify the source code as little as possible to reduce the regression impact.

Our software is C++ based and we are using the *CMS API* ({*}ActiveMQ CPP{*})
as a client. I am unable to find a CMS API for creating a shared topic
consumer, so I am not sure one exists. At the same time, I am not sure that
the load-balancing behavior of such a shared subscription is what we want in
our Gateway Loader Servers. We do not want to process the same message in more
than one group (please correct me if I am wrong):
http://jmesnil.net/weblog/2013/06/27/jms-20-shared-subscription/

Nonetheless, we were using virtual topics with Classic ActiveMQ and they work
as expected with Artemis too (the setup changes are trivial).

*Idempotent consumer using local, volatile LRU cache*

*ActiveMQ CPP* does not support idempotent consumers. Nonetheless, in our
software we have a wrapper over the CMS consumer and a wrapper over the CMS
consumer listener, and the LRU cache is part of our listener. The *CMS
consumer* gets restored during failover but the object is not recreated, so
our wrapper is still valid and the cache still stands in this context. This
might not be the best option for handling duplicated messages, but when there
is no load balancing it works fine. The problem arises when more than one
consumer is involved for the same topic.

*XA transaction*

The synchronization problem between the database and the JMS broker is not
necessarily related to failover or Artemis usage. We have this with Classic
ActiveMQ as well (for instance, if there is a network glitch or *ActiveMQ*
goes down after the message has reached the database). We explored using XA
transactions, however the code changes needed to implement them in existing
software are huge and practically impossible. However, at the

[jira] [Commented] (ARTEMIS-4276) Message Group does not replicate properly during failover

2023-05-18 Thread Liviu Citu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17723810#comment-17723810
 ] 

Liviu Citu commented on ARTEMIS-4276:
-

*Virtual Topics vs Shared Topic Consumers*

Our plan during the migration from *Classic ActiveMQ* to *Artemis* is to
modify the source code as little as possible to reduce the regression impact.

Our software is C++ based and we are using the *CMS API* ({*}ActiveMQ CPP{*})
as a client. I am unable to find a CMS API for creating a shared topic
consumer, so I am not sure one exists. At the same time, I am not sure that
the load-balancing behavior of such a shared subscription is what we want in
our Gateway Loader Servers. We do not want to process the same message in more
than one group (please correct me if I am wrong):
http://jmesnil.net/weblog/2013/06/27/jms-20-shared-subscription/

Nonetheless, we were using virtual topics with Classic ActiveMQ and they work
as expected with Artemis too (the setup changes are trivial).

*Idempotent consumer using local, volatile LRU cache*

*ActiveMQ CPP* does not support idempotent consumers. Nonetheless, in our
software we have a wrapper over the CMS consumer and a wrapper over the CMS
consumer listener, and the LRU cache is part of our listener. The *CMS
consumer* gets restored during failover but the object is not recreated, so
our wrapper is still valid and the cache still stands in this context. This
might not be the best option for handling duplicated messages, but when there
is no load balancing it works fine. The problem arises when more than one
consumer is involved for the same topic.

*XA transaction*

The synchronization problem between the database and the JMS broker is not
necessarily related to failover or Artemis usage. We have this with Classic
ActiveMQ as well (for instance, if there is a network glitch or *ActiveMQ*
goes down after the message has reached the database). We explored using XA
transactions, however the code changes needed to implement them in existing
software are huge and practically impossible. However, at the database level
we have protection through primary keys, so the same transaction cannot be
processed twice.

As I explained in the description of this issue, we also have a Gateway Fail
Queue Monitor where users can find all messages that failed during processing
(including duplicates that failed during insertion).

We just wanted to explore the possibility of removing these "fake" failures
caused by failover, or somehow distinguishing them from real business
failures. These are technical failures (caused by failover in this case), and
users looking at the Fail Queue Monitor might get confused when seeing such
duplicated messages without understanding what went wrong. I suppose they will
have to treat this as a system limitation.

 

> Message Group does not replicate properly during failover
> -
>
> Key: ARTEMIS-4276
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4276
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.28.0
>Reporter: Liviu Citu
>Priority: Major
>
> Hi,
> We are currently migrating our software from Classic to Artemis and we plan
> to use the failover functionality.
> We were using the message group functionality by setting *JMSXGroupID* and
> this was working as expected. However, after a failover switch I noticed
> that messages are sent to the wrong consumers.
> Our gateway/interface application is actually a collection of servers:
>  * gateway adapter server: receives messages from external systems and puts
> them on a specific/virtual topic
>  * gateway loader server (can be balanced): picks up the messages from the
> topic and does the processing
>  * gateway fail queue: monitors all messages that failed processing and
> provides functionality for resubmitting the message (users correct the
> processing errors and then resubmit the transaction)
> *JMSXGroupID* is used to ensure that during a message resubmit the message
> is processed by the same consumer/loader that originally processed it.
> However, if the message resubmit happens during a failover switch, we have
> noticed that the message is not sent to the right consumer as it should be.
> Basically, the first available consumer is used, which is not what we want.
> I have searched for configuration changes but couldn't find any relevant
> information.
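
For reference, a minimal CMS producer-side sketch of the grouping described in
the quoted report; the broker URL, topic name, and group id below are
placeholders, and the group is set as an ordinary message property:

{code}
// Minimal sketch: publish with a JMSXGroupID so that all messages of one
// group are pinned to a single consumer. Broker URL, topic name and group
// id are placeholders.
#include <memory>

#include <activemq/library/ActiveMQCPP.h>
#include <activemq/core/ActiveMQConnectionFactory.h>
#include <cms/Connection.h>
#include <cms/Session.h>
#include <cms/Topic.h>
#include <cms/MessageProducer.h>
#include <cms/TextMessage.h>

int main() {
    activemq::library::ActiveMQCPP::initializeLibrary();
    {
        // Inner scope ensures CMS objects are destroyed before shutdownLibrary().
        activemq::core::ActiveMQConnectionFactory factory(
            "failover:(tcp://broker-a:61616,tcp://broker-b:61616)");
        std::unique_ptr<cms::Connection> connection(factory.createConnection());
        connection->start();

        std::unique_ptr<cms::Session> session(
            connection->createSession(cms::Session::AUTO_ACKNOWLEDGE));
        std::unique_ptr<cms::Topic> topic(
            session->createTopic("gateway.adapter.out"));
        std::unique_ptr<cms::MessageProducer> producer(
            session->createProducer(topic.get()));

        std::unique_ptr<cms::TextMessage> message(
            session->createTextMessage("payload"));
        message->setStringProperty("JMSXGroupID", "transaction-42");  // pin the group
        producer->send(message.get());

        connection->close();
    }
    activemq::library::ActiveMQCPP::shutdownLibrary();
    return 0;
}
{code}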



--
This message was sent by Atlassian Jira
(v8.20.10#820010)