Hi HasithaH / Srinath,

Thanks a lot for your thoughts.

@HasithaH - I have tried scheduling the deletion of content, and even though it works at a slow subscription rate, it fails when publishing/subscribing with 10 threads and 10,000 messages.

@Srinath - Agreed on maintaining the queue state via Hazelcast, since we can then infer the purged state at the delivery stage itself as a failsafe. I have discussed how to implement this with Sewwandi and HasithaH. Will post the proposed flow soon.

Thanks

On Fri, Oct 24, 2014 at 7:17 AM, Srinath Perera <[email protected]> wrote:

> How about:
>
> 1) remembering that the queue is purged via Hazelcast, with a timeout
> 2) before getting the content, or after an error happens, checking whether the queue is purged, and if so dropping the message
>
> IMHO it is too complicated to try to look into every queue and processing pipeline and clean things up. Rather, let's filter them out when we process.
>
> --Srinath
>
> On Fri, Oct 24, 2014 at 6:43 AM, Hasitha Hiranya <[email protected]> wrote:
>
>> Hi,
>>
>> This is my suggestion:
>>
>> 1. From the purging node, delete the metadata row.
>> 2. From the purging node, clear all in-memory lists holding messages (the trackings and the middle buffer). (We might have scheduled them for delivery. Maybe we should cancel the jobs?)
>> 3. Send a cluster notification that the queue is purged. (This is already done; you just need to implement that case of the switch, I think.)
>> 4. Schedule the deletion of content. Content is in different rows, so we cannot delete it in one go. Let's schedule it; it will be deleted offline. During the schedule interval *we think* the cluster notification has gone out and the metadata purge has happened everywhere (which is a loose assumption - fair enough when dealing with a cluster).
>>
>> Thanks
>>
>>
>> On Fri, Oct 24, 2014 at 1:24 AM, Hasitha Amal De Silva <[email protected]> wrote:
>>
>>> Hi Ramith,
>>>
>>> Thanks for bringing this up. I had missed notifying the cluster during the purge.
>>> Since the purge flow is defined for one node, we only need to trigger the same at the QueueListener through Hazelcast. I will add it.
>>>
>>> But we still can't distinguish the messages in active delivery threads for a given queue, so that their message content can be kept until delivery is complete.
>>>
>>> Thanks
>>>
>>>
>>> On Fri, Oct 24, 2014 at 12:41 AM, Ramith Jayasinghe <[email protected]> wrote:
>>>
>>>> I'm thinking out loud here:
>>>> What if, when a queue is deleted, we fire an event across the cluster and let everyone know the queue is about to get deleted? All further processing related to the queue should end there. What do you guys think?
>>>> regards
>>>> Ramith
>>>>
>>>>
>>>> On Thu, Oct 23, 2014 at 10:34 PM, Hasitha Amal De Silva <[email protected]> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> Given our message processing model, we do not enqueue message content in memory; we only keep message metadata. So, at the final point of a message delivery, we retrieve the message content and send it.
>>>>>
>>>>> However, if a user purges a queue while subscribers are receiving from it, all message content of that queue is deleted from the database, even though some messages might be at the final delivery stage. So when the message content of such a message is looked up, an NPE / NoSuchElementException is thrown.
>>>>>
>>>>> We cannot infer whether the exception is due to a purge or something else, because message content can also be lost for other reasons (e.g. a message being acknowledged while its second delivery attempt is on the way).
>>>>>
>>>>> I could think of the following ways to handle this:
>>>>>
>>>>> 1. Catch the exception and add a general trace log explaining all possible reasons -> clear the message from the in-memory collections, since we can safely say it has already been acked / purged.
>>>>>
>>>>> 2. Figure out and skip deleting the (message content + metadata) of enqueued / redelivered messages in memory, and assume they will be deleted later through the normal delivery flow. This means that all the in-memory, undelivered messages will still be delivered even after the queue is purged. (A user could interpret this as an issue.)
>>>>>
>>>>> Better suggestions? The ideal solution would be to remove exactly the set of undelivered messages (in-store and in-memory) at the moment of purge, but this is difficult since the in-memory message buffer may be handed off very quickly to delivery jobs.
>>>>>
>>>>> As of now, I feel that "option 1" would be the most feasible solution.
>>>>>
>>>>> WDYT?
>>>>>
>>>>>
>>>>> --
>>>>> Cheers,
>>>>>
>>>>> Hasitha Amal De Silva
>>>>> Software Engineer
>>>>> Mobile : 0772037426
>>>>> Blog : http://devnutshell.tumblr.com/
>>>>> WSO2 Inc.: http://wso2.com ( lean.enterprise.middleware. )
>>>>
>>>>
>>>> --
>>>> Ramith Jayasinghe
>>>> Technical Lead
>>>> WSO2 Inc., http://wso2.com
>>>> lean.enterprise.middleware
>>>>
>>>> E: [email protected]
>>>> P: +94 777542851
>>>
>>>
>>> --
>>> Cheers,
>>>
>>> Hasitha Amal De Silva
>>> Software Engineer
>>> Mobile : 0772037426
>>> Blog : http://devnutshell.tumblr.com/
>>> WSO2 Inc.: http://wso2.com ( lean.enterprise.middleware. )
>>
>>
>> --
>> *Hasitha Abeykoon*
>> Senior Software Engineer; WSO2, Inc.; http://wso2.com
>> *cell:* *+94 719363063*
>> *blog: **abeykoon.blogspot.com* <http://abeykoon.blogspot.com>
>
>
> --
> ============================
> Blog: http://srinathsview.blogspot.com twitter: @srinath_perera
> Site: http://people.apache.org/~hemapani/
> Photos: http://www.flickr.com/photos/hemapani/
> Phone: 0772360902

--
Cheers,

Hasitha Amal De Silva
Software Engineer
Mobile : 0772037426
Blog : http://devnutshell.tumblr.com/
WSO2 Inc.: http://wso2.com ( lean.enterprise.middleware. )
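The Hazelcast-based approach the thread converges on (flag the queue as purged with a timeout, then filter at the delivery stage) could be sketched roughly as below. This is a single-node stand-in: `PurgedQueueRegistry` is a hypothetical name, and a `ConcurrentHashMap` plays the role of the cluster-wide Hazelcast `IMap` (which can expire entries on its own via the TTL overload of `IMap.put`), so the manual expiry check here would not be needed in the real implementation.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/**
 * Single-node stand-in for the proposed cluster-wide purge flag.
 * In the actual fix this map would be a Hazelcast IMap populated with a
 * per-entry TTL (IMap.put(key, value, ttl, unit)), so the "purged" marker
 * expires on every node without explicit cleanup.
 */
class PurgedQueueRegistry {

    // queue name -> time (epoch millis) after which the purge flag expires
    private final ConcurrentMap<String, Long> purgeDeadlines = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    PurgedQueueRegistry(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    /** Called on the node that handles the purge request. */
    void markPurged(String queueName) {
        purgeDeadlines.put(queueName, System.currentTimeMillis() + timeoutMillis);
    }

    /**
     * Called from the delivery path: before fetching content (or after a
     * content-read error), drop the message if its queue was recently purged.
     */
    boolean isPurged(String queueName) {
        Long deadline = purgeDeadlines.get(queueName);
        if (deadline == null) {
            return false;
        }
        if (System.currentTimeMillis() > deadline) {
            // Flag has outlived the timeout; forget it (Hazelcast's TTL
            // eviction would do this automatically).
            purgeDeadlines.remove(queueName, deadline);
            return false;
        }
        return true;
    }
}
```

The timeout only has to outlive the window in which stale metadata for the purged queue can still reach the delivery stage, which keeps the registry small.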
_______________________________________________
Dev mailing list
[email protected]
http://wso2.org/cgi-bin/mailman/listinfo/dev
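For completeness, "option 1" from the first mail (catch the failed content lookup, trace-log the possible causes, and drop the message from the in-memory collections) might look something like the sketch below. `ContentStore`, `readContent`, and the tracking set are hypothetical stand-ins for the broker's actual store interface and in-memory buffers.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.logging.Logger;

/**
 * Sketch of "option 1": when content retrieval fails at the final delivery
 * step, assume the message was already purged or acknowledged, log a trace
 * entry listing the possible causes, and forget the message in memory.
 */
class DeliveryStep {

    private static final Logger log = Logger.getLogger(DeliveryStep.class.getName());

    /** Hypothetical stand-in for the broker's message store. */
    interface ContentStore {
        byte[] readContent(long messageId);
    }

    private final ContentStore store;
    private final Set<Long> inFlightMessages = ConcurrentHashMap.newKeySet();

    DeliveryStep(ContentStore store) {
        this.store = store;
    }

    void track(long messageId) {
        inFlightMessages.add(messageId);
    }

    boolean isTracked(long messageId) {
        return inFlightMessages.contains(messageId);
    }

    /** Returns the content, or null if the message should be silently dropped. */
    byte[] fetchForDelivery(long messageId) {
        try {
            return store.readContent(messageId);
        } catch (java.util.NoSuchElementException | NullPointerException e) {
            // Content can vanish if the queue was purged, or if the message
            // was acknowledged while a redelivery attempt was in flight; in
            // either case it is safe to stop tracking it.
            log.fine("Content missing for message " + messageId
                    + "; queue purged or message already acknowledged. Dropping.");
            inFlightMessages.remove(messageId);
            return null;
        }
    }
}
```

Combined with the cluster-wide purge flag, the `isPurged` check would handle the common case and this catch block would remain only as the last-resort failsafe.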
