+1. Let me set up a time.

Regards,
Manoj

On Thursday, December 19, 2013, Srinath Perera wrote:

> We need Azeez's feedback. Shall you, Azeez, and I chat sometime and
> decide on the first architecture design?
>
>
> On Thu, Dec 19, 2013 at 11:55 AM, Manoj Fernando <[email protected]> wrote:
>
> Hi Srinath,
>
> That sounds like a much cleaner solution.  We can perhaps use the native
> declarative map-store [1], which I think does something similar.  It may
> sound a little silly to ask... but are we keeping Hazelcast active in a
> single-node environment as well? :) Otherwise we will have to handle
> persistence on a single node in a different way.  This is under the
> assumption that we need to persist throttle data in a single-node
> environment as well (though questioning whether we really need to do that
> is not totally invalid, IMO).
>
> Shall we go ahead with the Hazelcast option targeting cluster deployments
> then?
>
> - Manoj
>
> [1] https://code.google.com/p/hazelcast/wiki/MapPersistence
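To make that option concrete, here is a rough sketch of what the declarative map-store from [1] could look like in hazelcast.xml. The map name, store class, and delay value are illustrative assumptions, not decided values:

```xml
<map name="throttleData">
  <map-store enabled="true">
    <!-- Class implementing Hazelcast's MapStore interface; does the actual persistence -->
    <class-name>org.example.throttle.ThrottleDataStore</class-name>
    <!-- 0 = write-through (persist on every update);
         >0 = write-behind (batch writes every N seconds) -->
    <write-delay-seconds>5</write-delay-seconds>
  </map-store>
</map>
```

With write-delay-seconds greater than zero, Hazelcast batches the persistence calls, which also lines up with the earlier point about persisting periodically instead of on every request.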
>
>
> On Thu, Dec 19, 2013 at 10:51 AM, Srinath Perera <[email protected]> wrote:
>
> Another way to do this is to use Hazelcast and then use its write-through
> cache or change listeners for persistence.
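A minimal sketch of what that persistence hook could look like. The MapStore interface below is a trimmed stand-in for Hazelcast's real com.hazelcast.core.MapStore (a real implementation would implement that interface directly), and the in-memory "database" plus the ThrottleDataStore name are illustrative placeholders:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Trimmed stand-in for Hazelcast's com.hazelcast.core.MapStore interface;
// a real implementation would implement that interface from the Hazelcast jar.
interface MapStore<K, V> {
    void store(K key, V value); // called by Hazelcast when an entry is written
    V load(K key);              // called by Hazelcast on a cache miss
}

// Illustrative persistence hook for throttle counters. The "database" here is
// an in-memory map; a real store would write to an RDBMS or the registry.
public class ThrottleDataStore implements MapStore<String, Long> {
    private final Map<String, Long> database = new ConcurrentHashMap<>();

    @Override
    public void store(String callerId, Long count) {
        database.put(callerId, count);
    }

    @Override
    public Long load(String callerId) {
        return database.get(callerId);
    }
}
```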
>
> --Srinath
>
>
> On Tue, Dec 17, 2013 at 4:49 PM, Manoj Fernando <[email protected]> wrote:
>
> +1 for persisting through a single (elected?) node, and letting Hazelcast
> do the replication.
>
> I took into consideration the need to persist periodically instead of at
> each and every request (by spawning a separate thread that has access to
> the callerContext map)...  so yes... we should think in the same way for
> replicating the counters across the cluster as well.
>
> Instead of using a global counter, can we perhaps use the last updated
> timestamp of each CallerContext?  It's actually not a single counter we
> need to deal with, and each CallerContext instance will have separate
> counters mapped to their throttling policy AFAIK.  Therefore, I think it's
> probably better to update CallerContext instances based on the last update
> timestamp.
>
> WDYT?
>
> If we agree, then I need to figure out how to do delayed replication on
> Hazelcast (is it through the hazelcast.heartbeat.interval.seconds config
> item?)
>
> Regards,
> Manoj
>
>
> On Tue, Dec 17, 2013 at 4:22 PM, Srinath Perera <[email protected]> wrote:
>
> We need to think about whether, in a cluster setup, we need persistence at
> all, since we can get replication using Hazelcast.
>
> If we do need persistence, I think it is good if a single node persists the
> current throttling values, and if that node fails, another node takes its
> place.
>
> The current implementation syncs the values across the cluster for each
> message, which introduces significant overhead. I think we should move to a
> model where each node collects and updates the values once every few seconds.
>
> The idea is:
> 1) There is a global counter that we use to throttle.
> 2) Each node keeps a local counter; periodically it updates the global
> counter using the value in the local counter, resets the local counter, and
> reads the current global counter.
> 3) Until the next update, each node makes decisions based on the global
> counter value it has already read.
>
> This will mean that the throttling will throttle close to the limit, not
> exactly at the limit. However, IMHO, that is not a problem for throttling
> use case.
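A minimal sketch of steps 1-3 above, assuming the shared global counter is something like a Hazelcast IAtomicLong in practice (here just a static AtomicLong so the sketch is self-contained); class and field names are illustrative:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the proposed scheme: each node counts requests locally and only
// periodically folds its local count into the shared global counter, then
// throttles against the last global snapshot it read. Between syncs the
// check is approximate, which matches the "close to the limit, not exactly
// at the limit" trade-off described above.
public class LocalThrottle {

    // Stand-in for the cluster-wide counter (in practice a Hazelcast
    // IAtomicLong or a distributed map entry).
    static final AtomicLong globalCounter = new AtomicLong();

    final long limit;
    final AtomicLong localCounter = new AtomicLong(); // requests since last sync
    volatile long globalSnapshot = 0;                 // last global value read

    LocalThrottle(long limit) { this.limit = limit; }

    // Called per request: decides using only local state, no cluster traffic.
    boolean allowRequest() {
        if (globalSnapshot + localCounter.get() >= limit) {
            return false;
        }
        localCounter.incrementAndGet();
        return true;
    }

    // Called once every few seconds: push the local delta to the global
    // counter, reset the local counter, and read back the new global value.
    void sync() {
        long delta = localCounter.getAndSet(0);
        globalSnapshot = globalCounter.addAndGet(delta);
    }
}
```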
>
> --Srinath
>
>
>
>
> On Mon, Dec 16, 2013 at 7:20 PM, Manoj Fernando <[email protected]> wrote:
>
>

-- 
Manoj Fernando
Director - Solutions Architecture

Contact:
LK -  +94 112 145345
Mob: +94 773 759340
www.wso2.com
_______________________________________________
Architecture mailing list
[email protected]
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
