riccardomodanese commented on PR #4021: URL: https://github.com/apache/activemq-artemis/pull/4021#issuecomment-1119584696
About the exception raised by Paho, I think it's good news (though I'm not deep enough into the MQTT specification to be sure). In one of my tests, n clients connect more or less at the same time with the same clientId. The test verifies that all the clients are able to connect and that, after a while, only one of them is still connected (a rough sketch of this test is at the bottom of this comment). Before your fix I saw a lot of exceptions (n-1) during the connect calls; now no client throws an exception during its connect call. I think that's correct, since the MQTT protocol says it is the *previous* client with the same clientId that should be disconnected, not the current one.

Unfortunately I cannot run the Kapua ITs on our CI against an Artemis snapshot, but the manual test I ran today confirmed that the fix is fine.

I also added a test that connects n clients (with a delay between one connection and the next) with the same clientId but on different accounts, so the Kapua security plugin rewrites the clientId as accountId:clientId. All n clients are able to connect and stay connected together. In the plugin log I can see:

```
### authorizing address: $EDC/stealing-link-7/client-stealing-multi-account/#::e2w9YPXivHY|client-stealing-multi-account.$EDC/stealing-link-7/client-stealing-multi-account/# - check type: CONSUME
### authorizing address: $EDC/stealing-link-7/client-stealing-multi-account/#::e2w9YPXivHY|client-stealing-multi-account.$EDC/stealing-link-7/client-stealing-multi-account/# - check type: CREATE_DURABLE_QUEUE
```

which is correct, since the clientId in the durable queue address is the rewritten (accountId:clientId) form: `e2w9YPXivHY|client-stealing-multi-account`.

One last question about the ETA: are you expecting to release this fix with 2.23? Thanks a lot!
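For reference, here is a minimal sketch of the stealing-link test described above, using the Paho Java (MQTT v3) client. The broker URL, clientId, and client count are placeholder assumptions for illustration, not the actual values from the Kapua test:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class StealingLinkSketch {

    public static void main(String[] args) throws Exception {
        final String broker = "tcp://localhost:1883";      // assumed broker URL
        final String clientId = "stealing-link-client";    // shared clientId
        final int n = 5;                                   // assumed client count

        MqttClient[] clients = new MqttClient[n];
        for (int i = 0; i < n; i++) {
            clients[i] = new MqttClient(broker, clientId, new MemoryPersistence());
            MqttConnectOptions opts = new MqttConnectOptions();
            opts.setCleanSession(false);
            // Before the fix, most of these connect() calls threw an exception.
            // With the fix, every connect() succeeds: the broker disconnects
            // the *previous* holder of the clientId, not the incoming client.
            clients[i].connect(opts);
        }

        // Give the broker a moment to close the stolen links.
        Thread.sleep(2000);

        int connected = 0;
        for (MqttClient client : clients) {
            if (client.isConnected()) {
                connected++;
            }
        }
        // Expected: exactly 1 (the last client to connect wins).
        System.out.println("Still connected: " + connected + " of " + n);
    }
}
```

With the old behavior, n-1 of the connect() calls would throw; with the fix, all of them succeed and the final count should be exactly 1.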
