I'm not sure about the topology cache, but I don't think this would be useful for the cluster registry.
The cluster registry is a global component, so it's only stopped *after* all
the caches, and other components are not supposed to know that the cluster
registry implementation uses a cache.

Cheers
Dan

On Tue, Aug 19, 2014 at 5:02 PM, Tristan Tarrant <ttarr...@redhat.com> wrote:
> Thanks, this actually has multiple issues currently:
>
> - the default cache is stopped last (why?)
> - some "service" caches need to be handled manually: e.g. the registry
>   and the topology cache.
>
> A generic ref counting system would be a great improvement.
>
> Tristan
>
> On 15/08/14 14:29, Sanne Grinovero wrote:
> > The goal being to resolve ISPN-4561, I was thinking to expose a very
> > simple reference counter in the AdvancedCache API.
> >
> > As you know, the Query module - which triggers on indexed caches - can
> > use the Infinispan Lucene Directory to store its indexes in a
> > (different) Cache.
> > When the CacheManager is stopped, if the index storage caches are
> > stopped before the indexed cache, then stopping the indexed cache might
> > need to flush/close some pending state on the index, and this results
> > in an illegal operation because the storage is already shut down.
> >
> > We could either implement a complex dependency graph, or add a method
> > like:
> >
> > boolean incRef();
> >
> > on AdvancedCache.
> >
> > When the Cache#close() method is invoked, it will do an internal
> > decrement, and only when hitting zero will it really close the cache.
> >
> > A CacheManager shutdown will loop through all caches and invoke
> > close() on all of them; close() should return something so that the
> > CacheManager shutdown loop understands whether it really closed all
> > caches. If not, it loops again through all caches, and keeps looping
> > until all cache instances are really closed.
> > The return type of close() doesn't necessarily need to be exposed in
> > the public API; it could be an internal-only variant.
> >
> > Could we do this?
> >
> > --Sanne
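For clarity, here's a minimal Java sketch of the refcounting scheme Sanne
describes above (class and method names are illustrative, not actual
Infinispan API): close() only really stops the cache once the counter
reaches zero, and the manager keeps looping over the caches until every
close() reports that the cache was actually stopped.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical sketch only - not Infinispan code.
    class RefCountedCache {
       // starts at 1 for the cache's own "reference"
       private final AtomicInteger refCount = new AtomicInteger(1);
       private volatile boolean closed;

       /** A dependent component declares it needs this cache to stay open. */
       boolean incRef() {
          if (closed)
             return false;                 // too late, already stopped
          refCount.incrementAndGet();
          return true;
       }

       /** Decrements the counter; really stops the cache only at zero. */
       boolean close() {
          if (!closed && refCount.decrementAndGet() <= 0) {
             // really stop the cache: flush pending state, release resources
             closed = true;
          }
          return closed;                   // tells the caller if we really closed
       }
    }

    class CacheManagerShutdown {
       /** Loops over all caches until every one reports it really closed. */
       static void stopAll(List<RefCountedCache> caches) {
          boolean allClosed;
          do {
             allClosed = true;
             for (RefCountedCache cache : caches)
                allClosed &= cache.close();
          } while (!allClosed);
       }
    }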