As someone mentioned, the biggest issue would be making sure the data 
read from the file system isn't stale. If a node has been down for an 
extended period of time, this process might actually slow things down.

We'd have to implement some rsync-like algorithm which checks the local 
data against the cluster data, and this might be costly if the data set 
is small. If it's big, then that cost would be amortized by the smaller 
deltas sent over the network to update the local cache.
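To make the cost concrete, here is a minimal sketch of the kind of rsync-style reconciliation I mean: the restarting node computes a digest per key from its local store, and the coordinator ships back only entries that are new or whose digest differs. All class and method names here are made up for illustration; this is not Infinispan code, and a real version would batch digests rather than send the whole map.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical rsync-style reconciliation sketch (not Infinispan API):
// the joining node sends one digest per key; the coordinator replies
// only with entries that are new or whose digest no longer matches.
public class DigestReconciliation {

    static byte[] digest(String value) {
        try {
            return MessageDigest.getInstance("SHA-256")
                    .digest(value.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Coordinator side: entries that must be shipped to refresh the local cache. */
    static Map<String, String> delta(Map<String, String> clusterState,
                                     Map<String, byte[]> localDigests) {
        Map<String, String> toSend = new HashMap<>();
        for (Map.Entry<String, String> e : clusterState.entrySet()) {
            byte[] local = localDigests.get(e.getKey());
            // Key unknown locally, or value changed while the node was down.
            if (local == null || !MessageDigest.isEqual(local, digest(e.getValue())))
                toSend.put(e.getKey(), e.getValue());
        }
        return toSend;
    }
}
```

The point about amortization follows directly: the digest pass costs roughly one hash per entry regardless, so for a small data set you may as well transfer everything, while for a big one the delta is much smaller than full state.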

I don't think this makes sense, as (1) data sets in replicated mode are 
usually small and (2) Infinispan's focus is on distributed data.


On 5/17/11 11:25 AM, Galder Zamarreño wrote:
>
> On May 16, 2011, at 1:18 PM, Sanne Grinovero wrote:
>
>> 2011/5/16 Galder Zamarreño<gal...@redhat.com>:
>>> Not sure if the idea has come up, but while at GeeCON last week I was 
>>> talking with one of the attendees about state transfer improvements in 
>>> replicated environments:
>>>
>>> The idea is that in a replicated environment, if a cache manager shuts 
>>> down, it would dump its memory contents to a cache store (i.e. a local 
>>> filesystem) and when it starts up, instead of going over the network to do 
>>> state transfer, it would load the state from the local filesystem which 
>>> would be much quicker. Obviously, at times the cache manager would crash or 
>>> have some failure dumping the memory contents, so in that case it would 
>>> fall back on state transfer over the network. I think it's an interesting 
>>> idea since it could reduce the amount of state transfer to be done. It's 
>>> true though that there are other tricks if you're having issues with state 
>>> transfer, such as the use of a cluster cache loader.
>>>
>>> WDYT?
>>
>> Well, if it's a shared cache store, then we're using the network at some
>> level anyway. If we're talking about a non-shared cache store, how do
>> you know which keys/values are still valid and which were not updated? And
>> what about the new keys?
>
> I see this only being useful with a local cache store, because if you need to go 
> remote over the network, you might as well just do state transfer.
>
> Not sure if the timestamp of creation/update is available for all entries 
> (I'd need to check the code, but maybe immortals do not store it...), but 
> anyway, assuming that a timestamp was stored in the local cache store, on 
> startup the node could send this timestamp and the coordinator could send 
> anything created/updated after that timestamp.
>
> This would be particularly efficient in situations where you have to quickly 
> restart a machine for whatever reason and so the deltas are very small, or 
> when the caches are big and state transfer would cost a lot from a bandwidth 
> perspective.
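
For reference, the timestamp scheme Galder describes above could be sketched roughly like this. The names are hypothetical, not Infinispan's actual state transfer code: the joining node reports the newest timestamp in its local store, and the coordinator replies with everything created or updated after it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of timestamp-based delta state transfer; these
// classes are illustrative only, not Infinispan's StateTransferManager.
public class TimestampDelta {

    record Entry(String value, long lastModified) {}

    /** Joining node side: newest modification timestamp in the local cache store. */
    static long highWaterMark(Map<String, Entry> localStore) {
        return localStore.values().stream()
                .mapToLong(Entry::lastModified)
                .max().orElse(0L);
    }

    /** Coordinator side: everything created/updated after the node's high-water mark. */
    static Map<String, Entry> entriesSince(Map<String, Entry> clusterState, long sinceMillis) {
        Map<String, Entry> delta = new HashMap<>();
        for (Map.Entry<String, Entry> e : clusterState.entrySet())
            if (e.getValue().lastModified() > sinceMillis)
                delta.put(e.getKey(), e.getValue());
        return delta;
    }
}
```

Note that a scheme like this still cannot detect entries that were *removed* while the node was down, which is one part of Sanne's objection that timestamps alone don't answer.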

-- 
Bela Ban
Lead JGroups / Clustering Team
JBoss
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
