Actually, looking into this closer, I have found a way to completely remove the expiration interceptor with minimal drawback. The drawback is that a spurious remove-expired command will be generated if a read that finds the entry expired fires concurrently with a write for the same key. But I would say this should happen so infrequently that it probably shouldn't matter.
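For concreteness, here is a minimal, hypothetical sketch of that race; this is not the actual Infinispan code, and the class and method names below are made up:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch, not real Infinispan code: expiration is checked
// only on the read path, so no per-write expiration map (and no
// expiration interceptor) is needed.
class ExpirationOnReadSketch<K, V> {

   static final class Entry<V> {
      final V value;
      final long created;
      final long lifespanMillis;

      Entry(V value, long created, long lifespanMillis) {
         this.value = value;
         this.created = created;
         this.lifespanMillis = lifespanMillis;
      }

      boolean isExpired(long now) {
         return lifespanMillis >= 0 && now > created + lifespanMillis;
      }
   }

   private final ConcurrentMap<K, Entry<V>> data = new ConcurrentHashMap<>();

   V get(K key) {
      Entry<V> e = data.get(key);
      if (e == null) return null;
      if (e.isExpired(System.currentTimeMillis())) {
         // The remove-expired command is generated here, on the read
         // path. A write to the same key can land between the check
         // above and the removal below; in that case the removal is a
         // no-op (remove(key, e) compares against the exact expired
         // entry), but the command itself has already been fired.
         // That is the rare, harmless drawback described above.
         fireRemoveExpiredCommand(key, e);
         return null;
      }
      return e.value;
   }

   void put(K key, V value, long lifespanMillis) {
      data.put(key, new Entry<>(value, System.currentTimeMillis(), lifespanMillis));
   }

   private void fireRemoveExpiredCommand(K key, Entry<V> expired) {
      // Stand-in for the real (clustered) remove-expired command: a
      // conditional removal that succeeds only if the expired entry is
      // still the current mapping.
      data.remove(key, expired);
   }
}

The conditional removal makes the race itself safe; the only cost of losing it is the extra command.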
I have put it all on [1] and all the tests seem to pass fine. I want to
double check a few things but this should be pretty good.

[1] https://github.com/wburns/infinispan/commits/expiration_listener

On Mon, Nov 30, 2015 at 5:46 PM Sanne Grinovero <[email protected]> wrote:
> Wouldn't it be an interesting compromise to make sure we calculate
> things like the key's hash only once?
>
> On 30 November 2015 at 21:54, William Burns <[email protected]> wrote:
> > I am not sure there is an easy way to consolidate these into a single
> > map, since some of these are written to on reads, some on writes, and
> > some only conditionally. And then, as Dan said, they are possibly
> > cleaned up at different times.
> >
> > We could do something like states (based on which ones would have
> > written to the map), but I think it will get quite complex, especially
> > if we ever add more of these map-type requirements.
> >
> > On a similar note, I had actually thought of possibly moving the
> > expiration check out of the data container and into the entry wrapping
> > interceptor or the like. This would allow us to remove the expiration
> > map completely, since we would only raise the extra expiration commands
> > on reads and not on writes. But this would change the API, and I am
> > thinking we can only do this for 9.0.
> >
> > On Mon, Nov 30, 2015 at 2:18 PM Dan Berindei <[email protected]> wrote:
> >>
> >> The first problem that comes to mind is that context entries are also
> >> stored in a map, at least in transactional mode. So access through the
> >> context would only be faster in non-tx caches; in tx caches it would
> >> not add any benefit.
> >>
> >> I also have some trouble imagining how these temporary entries would
> >> be released, since locks, L1 requestors, L1 synchronizers, and write
> >> registrations all have their own rules for cleaning up.
> >>
> >> Finally, I'm not sure how much this would help. I actually removed the
> >> write registration for everything except RemoveExpiredCommand when
> >> testing the HotRod server performance, but I didn't get any
> >> significant improvement on my machine. Which was kind of expected,
> >> since the benchmark doesn't seem to be CPU-bound, and JFR was showing
> >> it with < 1.5% of CPU.
> >>
> >> Cheers
> >> Dan
> >>
> >> On Fri, Nov 27, 2015 at 11:28 AM, Radim Vansa <[email protected]> wrote:
> >> > No thoughts at all? @wburns, could I have your view on this?
> >> >
> >> > Thanks
> >> >
> >> > Radim
> >> >
> >> > On 11/23/2015 04:26 PM, Radim Vansa wrote:
> >> >> Hi again,
> >> >>
> >> >> examining some flame graphs I've found out that the
> >> >> ExpirationInterceptor has recently been added, which registers an
> >> >> ongoing write in a hashmap. So at this point we have a map for locks,
> >> >> a map for writes used for expiration, another two key-addressed maps
> >> >> in L1ManagerImpl, one in L1NonTxInterceptor, and maybe other maps
> >> >> elsewhere.
> >> >>
> >> >> This makes me think that we could spare map lookups and expensive
> >> >> writes by providing a *single map for temporary per-key data*. A
> >> >> reference to the entry could be stored in the context to save the
> >> >> lookups. An extreme case would be to put this into the DataContainer,
> >> >> but I think that would prove too tricky in practice.
> >> >>
> >> >> A downside would be the loss of encapsulation (any component could
> >> >> theoretically access e.g. locks), but I don't find that too dramatic.
> >> >>
> >> >> WDYT?
> >> >>
> >> >> Radim
> >> >>
> >> >
> >> >
> >> > --
> >> > Radim Vansa <[email protected]>
> >> > JBoss Performance Team
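For concreteness, a minimal Java sketch of the single map for temporary per-key data proposed above, folding in Sanne's suggestion of computing the key's hash only once. All class and field names here are hypothetical, nothing below is actual Infinispan code, and the cleanup problem Dan raises is exactly the weak point, as flagged in the comments:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical: one shared per-key record with a slot per component
// (locks, expiration, L1, ...). A reference to the record could be
// cached in the invocation context to avoid repeated map lookups.
final class PerKeyState {
   // Computed once per key, per Sanne's suggestion.
   final int keyHash;
   // One slot per component; each has its own lifecycle, which is
   // exactly the cleanup problem Dan points out above.
   volatile ReentrantLock lock;      // lock map
   volatile Object pendingWrite;     // expiration write registration
   volatile Object l1Requestors;     // L1ManagerImpl bookkeeping
   volatile Object l1Synchronizer;   // L1NonTxInterceptor bookkeeping

   PerKeyState(int keyHash) { this.keyHash = keyHash; }

   // A record can be reclaimed only when every slot has been released.
   boolean isEmpty() {
      return lock == null && pendingWrite == null
            && l1Requestors == null && l1Synchronizer == null;
   }
}

final class PerKeyStateRegistry<K> {
   private final ConcurrentMap<K, PerKeyState> states = new ConcurrentHashMap<>();

   // Single lookup per operation; the caller (e.g. the invocation
   // context) holds on to the returned reference afterwards.
   PerKeyState acquire(K key) {
      return states.computeIfAbsent(key, k -> new PerKeyState(k.hashCode()));
   }

   // Best-effort cleanup once a component releases its slot. Note this
   // check-then-remove is racy; a real implementation would need
   // something stronger, e.g. occupancy counting inside compute().
   void tryRelease(K key, PerKeyState state) {
      if (state.isEmpty()) {
         states.remove(key, state);
      }
   }
}

Whether the tryRelease() race can be closed cheaply is the open question; consolidating the maps saves lookups but does not remove the per-component lifecycle rules.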
