Re: [infinispan-dev] Preloading from disk versus state transfer Re: ISPN-1384 - InboundInvocationHandlerImpl should wait for cache to be started? (not just defined)

2011-10-26 Thread Dan Berindei
On Mon, Oct 24, 2011 at 4:42 PM, Sanne Grinovero sa...@infinispan.org wrote:
 On 24 October 2011 12:58, Dan Berindei dan.berin...@gmail.com wrote:
 Hi Galder

 On Mon, Oct 24, 2011 at 1:46 PM, Galder Zamarreño gal...@redhat.com wrote:

 On Oct 24, 2011, at 12:04 PM, Dan Berindei wrote:

 ISPN-1470 (https://issues.jboss.org/browse/ISPN-1470) raises an
 interesting question: if the preloading happens before joining, the
 preloading code won't know anything about the consistent hash. It will
 load everything from the cache store, including the keys that are
 owned by other nodes.

 It's been defined to work that way:
 https://docs.jboss.org/author/display/ISPN/CacheLoaders

 Tbh, that will only happen in shared cache stores. In non-shared ones, 
 you'll only have data that belongs to that node.


 Not really... in distributed mode, every time the cache starts it will
 have another position on the hash wheel.
 That means even with a non-shared cache store, it's likely most of the
 stored keys will no longer be local.

 Actually I just noticed that you've fixed ISPN-1404, which looks like
 it would solve my problem when the cache is created by a HotRod
 server. I would like to extend it to work like this by default, e.g.
 by using the transport's nodeName as the seed.

 I think there is a check in place already so that the joiner won't
 push stale data from its cache store to the other nodes, but we should
 also discard the keys that don't map locally or we'll have stale data
 (since we don't have a way to check if those keys are stale and
 register to receive invalidations for those keys).

 +1, only for shared cache stores.


 What do you think, should I discard the non-local keys with the fix
 for ISPN-1470 or should I let them be and warn the user about
 potentially stale data?

 Discard only for shared cache stores.

 Cache configurations should be symmetrical, so if other nodes preload, 
 they'll preload only data local to them with your change.


 Discarding works fine from the correctness POV, but for performance
 it's not that great: we may do a lot of work to preload keys and have
 nothing to show for it at the end.

 Can't you just skip loading state and be happy with the state you
 receive from peers? More data will be lazily loaded.
 Applying of course only when you're not the only/first node in the
 grid, in which case you have to load.


Right, we could preload only on the first node. With a shared cache
store this should work great, we just have to start preloading after
we connect to the cluster and before we send the join request.

But I have trouble visualizing how a persistent (purgeOnStartup =
false) non-shared cache store should work until we have some
validation mechanism like in
https://issues.jboss.org/browse/ISPN-1195. Should we even allow this
kind of setup?
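
A minimal sketch of the preload-only-on-first-node idea in plain Java (the
member-list check and the store/data-container types below are simplified
stand-ins, not the actual Infinispan internals):

  import java.util.List;
  import java.util.Map;

  final class PreloadDecision {
     // Preload from the store only when we are the first node in the cluster;
     // otherwise rely on state transfer and lazy loading from peers.
     static void maybePreload(List<String> clusterMembers,
                              Map<Object, Object> store,
                              Map<Object, Object> dataContainer) {
        boolean firstNode = clusterMembers.size() <= 1;
        if (!firstNode) {
           return;
        }
        // First node: load everything the store has, there is no one to get state from.
        dataContainer.putAll(store);
     }
  }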


 The only alternative I see is to be able to find the boundaries of
 keys you own, and change the CacheLoader API to load keys by the
 identified range - this should work with multiple boundaries too for
 virtual nodes, but it's something that not all CacheLoaders will be
 able to implement, so it should be an optional API; for now I'd stick
 with the first option above as I don't see how we can be more
 efficient in loading the state from CacheLoaders than via JGroups.

 Sanne

 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Cache ops after view changes result in RehashInProgressException very easily

2011-10-26 Thread Dan Berindei
Hi Galder, sorry it took so long to reply.

On Mon, Oct 24, 2011 at 4:16 PM, Galder Zamarreño gal...@redhat.com wrote:
 Btw, forgot to attach the log:




 On Oct 24, 2011, at 3:13 PM, Galder Zamarreño wrote:

 Hi Dan,

 Re: http://goo.gl/TGwrP

 There are a few of these in the Hot Rod server+client testsuites. It's easy to 
 replicate it locally. Seems like cache operations right after a cache has 
 started are rather problematic.

 In local execution of HotRodReplicationTest, I was able to replicate the 
 issue when trying to test topology changes. Please find attached the log 
 file, but here're the interesting bits:

 1. A new view installation is being prepared with NodeA and NodeB:
 2011-10-24 14:36:09,046 4221  TRACE 
 [org.infinispan.cacheviews.CacheViewsManagerImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) 
 ___hotRodTopologyCache: Preparing cache view CacheView{viewId=4, 
 members=[NodeA-63227, NodeB-15806]}, committed view is CacheView{viewId=3, 
 members=[NodeA-63227, NodeB-15806, NodeC-17654]}
 …
 2011-10-24 14:36:09,047 4222  DEBUG 
 [org.infinispan.statetransfer.StateTransferLockImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Blocking new 
 transactions
 2011-10-24 14:36:09,047 4222  TRACE 
 [org.infinispan.statetransfer.StateTransferLockImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Acquiring 
 exclusive state transfer shared lock, shared holders: 0
 2011-10-24 14:36:09,047 4222  TRACE 
 [org.infinispan.statetransfer.StateTransferLockImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Acquired state 
 transfer lock in exclusive mode

 2. The cluster coordinator discovers a view change and requests NodeA and 
 NodeB to remove NodeC from the topology view:
 2011-10-24 14:36:09,048 4223  TRACE 
 [org.infinispan.interceptors.InvocationContextInterceptor] 
 (OOB-3,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Invoked with 
 command RemoveCommand{key=NodeC-17654, value=null, flags=null} and 
 InvocationContext [NonTxInvocationContext{flags=null}]

 3. NodeB has not yet finished installing the cache view, so that remove 
 times out:
 2011-10-24 14:36:09,049 4224  ERROR 
 [org.infinispan.interceptors.InvocationContextInterceptor] 
 (OOB-3,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) ISPN000136: 
 Execution error
 org.infinispan.distribution.RehashInProgressException: Timed out waiting for 
 the transaction lock

 A way to solve this is to avoid relying on cluster view changes, but instead 
 wait for the cache view to be installed, and only then do the operations. Is 
 there any way to wait till then?

 One way would be to have some CacheView installed callbacks or similar. This 
 could be a good option cos I could have a CacheView listener for the hot rod 
 topology cache whose callbacks I can check for isPre=false and then do the 
 cache ops safely.


Initially I was thinking of allowing multiple cache view listeners for
each cache and making StateTransferManager one of them but I decided
against it because I realized it needs a different interface than our
regular listeners. I know that it was only a matter of time until
someone needed it...

An alternative solution would be to retry all operations, like we do
with commits now, when we receive a RehashInProgressException
from the remote node. That's what I was planning to do first
as it helps in other use cases as well.
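
As a rough sketch, the retry could look like this (a hypothetical helper; the
real remote-invocation plumbing is different, and the backoff is arbitrary):

  import java.util.concurrent.Callable;
  import org.infinispan.distribution.RehashInProgressException;

  final class RetryOnRehash {
     static <T> T invokeWithRetry(Callable<T> remoteOperation, int maxRetries) throws Exception {
        for (int attempt = 1; ; attempt++) {
           try {
              return remoteOperation.call();
           } catch (RehashInProgressException e) {
              if (attempt >= maxRetries) throw e;
              // The remote node is still installing the cache view; back off and retry.
              Thread.sleep(100L * attempt);
           }
        }
     }
  }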

 Otherwise, code like the one I used for keeping the Hot Rod topology is 
 gonna be racing against your cache view installation code.

 You seem to have some pieces in place for this, i.e. CacheViewListener, but 
 it seems only designed for internal core/ work.

 Any other suggestions?

 Cheers,
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Preloading from disk versus state transfer Re: ISPN-1384 - InboundInvocationHandlerImpl should wait for cache to be started? (not just defined)

2011-10-26 Thread Sanne Grinovero

 Can't you just skip loading state and be happy with the state you
 receive from peers? More data will be lazily loaded.
 Applying of course only when you're not the only/first node in the
 grid, in which case you have to load.


 Right, we could preload only on the first node. With a shared cache
 store this should work great, we just have to start preloading after
 we connect to the cluster and before we send the join request.

 But I have trouble visualizing how a persistent (purgeOnStartup =
 false) non-shared cache store should work until we have some
 validation mechanism like in
 https://issues.jboss.org/browse/ISPN-1195. Should we even allow this
 kind of setup?

Right, I don't think it makes much sense. The current node might have
been down for a long time and its dedicated cache loader will likely
contain stale values; we might update older values via versioning or
optimistic locking, but we won't be able to remove those which should
have been removed.
I don't think we should support that, at least until these problems are solved.
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Cache ops after view changes result in RehashInProgressException very easily

2011-10-26 Thread Galder Zamarreño

On Oct 26, 2011, at 9:46 AM, Dan Berindei wrote:

 Hi Galder, sorry it took so long to reply.
 
 On Mon, Oct 24, 2011 at 4:16 PM, Galder Zamarreño gal...@redhat.com wrote:
 Btw, forgot to attach the log:
 
 
 
 
 On Oct 24, 2011, at 3:13 PM, Galder Zamarreño wrote:
 
 Hi Dan,
 
 Re: http://goo.gl/TGwrP
 
  There are a few of these in the Hot Rod server+client testsuites. It's easy to 
  replicate it locally. Seems like cache operations right after a cache has 
  started are rather problematic.
 
 In local execution of HotRodReplicationTest, I was able to replicate the 
 issue when trying to test topology changes. Please find attached the log 
 file, but here're the interesting bits:
 
 1. A new view installation is being prepared with NodeA and NodeB:
 2011-10-24 14:36:09,046 4221  TRACE 
 [org.infinispan.cacheviews.CacheViewsManagerImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) 
 ___hotRodTopologyCache: Preparing cache view CacheView{viewId=4, 
 members=[NodeA-63227, NodeB-15806]}, committed view is CacheView{viewId=3, 
 members=[NodeA-63227, NodeB-15806, NodeC-17654]}
 …
 2011-10-24 14:36:09,047 4222  DEBUG 
 [org.infinispan.statetransfer.StateTransferLockImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Blocking new 
 transactions
 2011-10-24 14:36:09,047 4222  TRACE 
 [org.infinispan.statetransfer.StateTransferLockImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Acquiring 
 exclusive state transfer shared lock, shared holders: 0
 2011-10-24 14:36:09,047 4222  TRACE 
 [org.infinispan.statetransfer.StateTransferLockImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Acquired 
 state transfer lock in exclusive mode
 
 2. The cluster coordinator discovers a view change and requests NodeA and 
 NodeB to remove NodeC from the topology view:
 2011-10-24 14:36:09,048 4223  TRACE 
 [org.infinispan.interceptors.InvocationContextInterceptor] 
 (OOB-3,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Invoked with 
 command RemoveCommand{key=NodeC-17654, value=null, flags=null} and 
 InvocationContext [NonTxInvocationContext{flags=null}]
 
 3. NodeB has not yet finished installing the cache view, so that remove 
 times out:
 2011-10-24 14:36:09,049 4224  ERROR 
 [org.infinispan.interceptors.InvocationContextInterceptor] 
 (OOB-3,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) ISPN000136: 
 Execution error
 org.infinispan.distribution.RehashInProgressException: Timed out waiting 
 for the transaction lock
 
  A way to solve this is to avoid relying on cluster view changes, but 
  instead wait for the cache view to be installed, and only then do the 
  operations. Is there any way to wait till then?
 
 One way would be to have some CacheView installed callbacks or similar. 
 This could be a good option cos I could have a CacheView listener for the 
 hot rod topology cache whose callbacks I can check for isPre=false and then 
 do the cache ops safely.
 
 
 Initially I was thinking of allowing multiple cache view listeners for
 each cache and making StateTransferManager one of them but I decided
 against it because I realized it needs a different interface than our
 regular listeners. I know that it was only a matter of time until
 someone needed it...
 
 An alternative solution would be to retry all operations, like we do
 with commits now, when we receive a RehashInProgressException
 from the remote node. That's what I was planning to do first
 as it helps in other use cases as well.

Ok, do you have time to include this today ahead of the BETA3 release? 

I think this is a very important fix cos as you can see in the testsuite, it's 
very easy to get this error with Hot Rod servers.

 
 Otherwise, code like the one I used for keeping the Hot Rod topology 
 is gonna be racing against your cache view installation code.
 
 You seem to have some pieces in place for this, i.e. CacheViewListener, but 
 it seems only designed for internal core/ work.
 
 Any other suggestions?
 
 Cheers,
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Preloading from disk versus state transfer Re: ISPN-1384 - InboundInvocationHandlerImpl should wait for cache to be started? (not just defined)

2011-10-26 Thread Galder Zamarreño

On Oct 24, 2011, at 12:58 PM, Dan Berindei wrote:

 Hi Galder
 
 On Mon, Oct 24, 2011 at 1:46 PM, Galder Zamarreño gal...@redhat.com wrote:
 
 On Oct 24, 2011, at 12:04 PM, Dan Berindei wrote:
 
 ISPN-1470 (https://issues.jboss.org/browse/ISPN-1470) raises an
 interesting question: if the preloading happens before joining, the
 preloading code won't know anything about the consistent hash. It will
 load everything from the cache store, including the keys that are
 owned by other nodes.
 
 It's been defined to work that way:
 https://docs.jboss.org/author/display/ISPN/CacheLoaders
 
 Tbh, that will only happen in shared cache stores. In non-shared ones, 
 you'll only have data that belongs to that node.
 
 
 Not really... in distributed mode, every time the cache starts it will
 have another position on the hash wheel.
 That means even with a non-shared cache store, it's likely most of the
 stored keys will no longer be local.
 
 Actually I just noticed that you've fixed ISPN-1404, which looks like
 it would solve my problem when the cache is created by a HotRod
 server. I would like to extend it to work like this by default, e.g.
 by using the transport's nodeName as the seed.
 
 I think there is a check in place already so that the joiner won't
 push stale data from its cache store to the other nodes, but we should
 also discard the keys that don't map locally or we'll have stale data
 (since we don't have a way to check if those keys are stale and
 register to receive invalidations for those keys).
 
 +1, only for shared cache stores.
 
 
 What do you think, should I discard the non-local keys with the fix
 for ISPN-1470 or should I let them be and warn the user about
 potentially stale data?
 
 Discard only for shared cache stores.
 
 Cache configurations should be symmetrical, so if other nodes preload, 
 they'll preload only data local to them with your change.
 
 
 Discarding works fine from the correctness POV, but for performance
 it's not that great: we may do a lot of work to preload keys and have
 nothing to show for it at the end.

I agree, I thought of that when replying to this. It'd be great if you could 
only load the data that will belong to you, but for that we'd need to store 
the hash of the key as well.
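
A rough illustration of that idea (StoredEntry and the owned-range check are
hypothetical, not part of the cache store SPI):

  import java.util.List;

  final class HashAwarePreload {
     // Entry persisted together with the hash of its key, recorded at store time.
     static final class StoredEntry {
        final Object key;
        final Object value;
        final int keyHash;
        StoredEntry(Object key, Object value, int keyHash) {
           this.key = key;
           this.value = value;
           this.keyHash = keyHash;
        }
     }

     // ownedRanges: [start, end) intervals on the hash wheel owned by this node
     // (one per virtual node). Preload would skip entries that fail this check.
     static boolean ownedLocally(int keyHash, List<int[]> ownedRanges) {
        for (int[] range : ownedRanges) {
           if (keyHash >= range[0] && keyHash < range[1]) {
              return true;
           }
        }
        return false;
     }
  }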

 
 Enabling the fixed hash seed by default should make the performance
 issue go away. I think it would also require virtual nodes enabled by
 default and a way to ensure that the nodeNames are unique across the
 cluster.
 
 Cheers
 Dan
 
 
 
 Cheers
 Dan
 
 
 On Mon, Oct 3, 2011 at 3:09 AM, Manik Surtani ma...@jboss.org wrote:
 
 On 28 Sep 2011, at 10:56, Dan Berindei wrote:
 
 I'm not sure if the comment is valid though, since the old
 StateTransferManager had priority 55 and it also cleared the data
 container before applying the state from the coordinator. I'm not sure
 how preloading and state transfer are supposed to interact, maybe
 Manik can help clear this up?
 
 Hmm - this is interesting.  I think preloading should happen first, since
 the cache store may contain old data.
 --
 Manik Surtani
 ma...@jboss.org
 twitter.com/maniksurtani
 Lead, Infinispan
 http://www.infinispan.org
 
 
 
 
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Preloading from disk versus state transfer Re: ISPN-1384 - InboundInvocationHandlerImpl should wait for cache to be started? (not just defined)

2011-10-26 Thread Galder Zamarreño

On Oct 24, 2011, at 2:42 PM, Sanne Grinovero wrote:

 On 24 October 2011 12:58, Dan Berindei dan.berin...@gmail.com wrote:
 Hi Galder
 
 On Mon, Oct 24, 2011 at 1:46 PM, Galder Zamarreño gal...@redhat.com wrote:
 
 On Oct 24, 2011, at 12:04 PM, Dan Berindei wrote:
 
 ISPN-1470 (https://issues.jboss.org/browse/ISPN-1470) raises an
 interesting question: if the preloading happens before joining, the
 preloading code won't know anything about the consistent hash. It will
 load everything from the cache store, including the keys that are
 owned by other nodes.
 
 It's been defined to work that way:
 https://docs.jboss.org/author/display/ISPN/CacheLoaders
 
 Tbh, that will only happen in shared cache stores. In non-shared ones, 
 you'll only have data that belongs to that node.
 
 
 Not really... in distributed mode, every time the cache starts it will
 have another position on the hash wheel.
 That means even with a non-shared cache store, it's likely most of the
 stored keys will no longer be local.
 
 Actually I just noticed that you've fixed ISPN-1404, which looks like
 it would solve my problem when the cache is created by a HotRod
 server. I would like to extend it to work like this by default, e.g.
 by using the transport's nodeName as the seed.
 
 I think there is a check in place already so that the joiner won't
 push stale data from its cache store to the other nodes, but we should
 also discard the keys that don't map locally or we'll have stale data
 (since we don't have a way to check if those keys are stale and
 register to receive invalidations for those keys).
 
 +1, only for shared cache stores.
 
 
 What do you think, should I discard the non-local keys with the fix
 for ISPN-1470 or should I let them be and warn the user about
 potentially stale data?
 
 Discard only for shared cache stores.
 
 Cache configurations should be symmetrical, so if other nodes preload, 
 they'll preload only data local to them with your change.
 
 
 Discarding works fine from the correctness POV, but for performance
 it's not that great: we may do a lot of work to preload keys and have
 nothing to show for it at the end.
 
 Can't you just skip loading state and be happy with the state you
 receive from peers? More data will be lazily loaded.
 Applying of course only when you're not the only/first node in the
 grid, in which case you have to load.
 
 The only alternative I see is to be able to find the boundaries of
 keys you own, and change the CacheLoader API to load keys by the
 identified range - this should work with multiple boundaries too for
 virtual nodes, but it's something that not all CacheLoaders will be
 able to implement, so it should be an optional API; for now I'd stick
 with the first option above as I don't see how we can be more
 efficient in loading the state from CacheLoaders than via JGroups.

Before, when state transfer meant that state came from a single node, that node 
could be overloaded, so cache loader access might have been more efficient, 
particularly with a non-shared store that's available on the local machine.

The benefit of loading state from the cache loader is that the rest of the nodes 
don't have to stop what they're doing, whereas when loading it from other nodes, 
in the current design, they have to.
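
For reference, the optional range-aware loader API Sanne describes might look
roughly like this (illustrative names, not the existing CacheLoader SPI):

  import java.util.Set;

  interface RangeAwareCacheLoader {
     /**
      * Load only the keys whose consistent-hash position falls in [start, end).
      * With virtual nodes the caller would invoke this once per owned range.
      */
     Set<Object> loadKeysInRange(int start, int end);
  }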

 
 Sanne

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Syncing to central

2011-10-26 Thread Galder Zamarreño

On Oct 24, 2011, at 12:04 PM, Sanne Grinovero wrote:

 1. We hijack javadoc processing to generate the RHQ xml file, which means 
 that the annotations need to be available beyond the compile phase. This 
 forces the annotation and RHQ annotation dependencies to be required. 
 At the time of writing this, I wasn't aware of annotation processors and 
 that would have been the right way to solve this issue. That should avoid 
 dependencies going beyond compilation time, but would bring other issues, 
 i.e. annotation processor discovery (we'd need one for JBoss Logging and 
 one for this)…etc.
 
 What has annotation processor discovery to do with RHQ?
 
 All JMX methods are annotated with JMX annotations and RHQ annotations too. 
 The latter are used to generate the RHQ configuration file, which is what 
 tells RHQ what a JMX value is used for: whether it's a numeric measurement, a 
 text field…etc.
 
 I see, but this package only includes plain annotations, right? Even
 looking into the infinispan-tool package I don't see annotation
 PROCESSORS, are we confusing the terms here?
 Also, when I explicitly disabled annotation processor discovery to
 list only the jboss-logger, this didn't break.

Yeah, it only contains annotations. I might have been confusing it with the actual 
RHQ dependencies that the rhq-plugin/ module has, which might have a Hibernate 
3.x dependency.
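
For what it's worth, the annotation-processor route mentioned above could look
roughly like this. It's only a sketch: the output file name and XML shape are
placeholders, and it assumes the existing @ManagedAttribute annotation would be
visible to a processor.

  import java.io.IOException;
  import java.io.Writer;
  import java.util.Set;
  import javax.annotation.processing.AbstractProcessor;
  import javax.annotation.processing.RoundEnvironment;
  import javax.annotation.processing.SupportedAnnotationTypes;
  import javax.annotation.processing.SupportedSourceVersion;
  import javax.lang.model.SourceVersion;
  import javax.lang.model.element.Element;
  import javax.lang.model.element.TypeElement;
  import javax.tools.Diagnostic;
  import javax.tools.StandardLocation;

  @SupportedAnnotationTypes("org.infinispan.jmx.annotations.ManagedAttribute")
  @SupportedSourceVersion(SourceVersion.RELEASE_6)
  public class RhqDescriptorProcessor extends AbstractProcessor {
     @Override
     public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        if (annotations.isEmpty()) {
           return false;
        }
        Writer out = null;
        try {
           // Write a descriptor next to the compiled classes; no runtime dependency needed.
           out = processingEnv.getFiler()
                 .createResource(StandardLocation.CLASS_OUTPUT, "", "rhq-plugin.xml")
                 .openWriter();
           out.write("<plugin>\n");
           for (TypeElement annotation : annotations) {
              for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                 out.write("  <metric property=\"" + element.getSimpleName() + "\"/>\n");
              }
           }
           out.write("</plugin>\n");
        } catch (IOException e) {
           processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR, e.getMessage());
        } finally {
           if (out != null) {
              try { out.close(); } catch (IOException ignored) {}
           }
        }
        return false;
     }
  }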

 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Cache ops after view changes result in RehashInProgressException very easily

2011-10-26 Thread Dan Berindei
I was gunning for beta3 but I don't think I'm going to make it.

On Wed, Oct 26, 2011 at 12:38 PM, Galder Zamarreño gal...@redhat.com wrote:

 On Oct 26, 2011, at 9:46 AM, Dan Berindei wrote:

 Hi Galder, sorry it took so long to reply.

 On Mon, Oct 24, 2011 at 4:16 PM, Galder Zamarreño gal...@redhat.com wrote:
 Btw, forgot to attach the log:




 On Oct 24, 2011, at 3:13 PM, Galder Zamarreño wrote:

 Hi Dan,

 Re: http://goo.gl/TGwrP

 There are a few of these in the Hot Rod server+client testsuites. It's easy 
 to replicate it locally. Seems like cache operations right after a cache 
 has started are rather problematic.

 In local execution of HotRodReplicationTest, I was able to replicate the 
 issue when trying to test topology changes. Please find attached the log 
 file, but here're the interesting bits:

 1. A new view installation is being prepared with NodeA and NodeB:
 2011-10-24 14:36:09,046 4221  TRACE 
 [org.infinispan.cacheviews.CacheViewsManagerImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) 
 ___hotRodTopologyCache: Preparing cache view CacheView{viewId=4, 
 members=[NodeA-63227, NodeB-15806]}, committed view is CacheView{viewId=3, 
 members=[NodeA-63227, NodeB-15806, NodeC-17654]}
 …
 2011-10-24 14:36:09,047 4222  DEBUG 
 [org.infinispan.statetransfer.StateTransferLockImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Blocking new 
 transactions
 2011-10-24 14:36:09,047 4222  TRACE 
 [org.infinispan.statetransfer.StateTransferLockImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Acquiring 
 exclusive state transfer shared lock, shared holders: 0
 2011-10-24 14:36:09,047 4222  TRACE 
 [org.infinispan.statetransfer.StateTransferLockImpl] 
 (OOB-1,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Acquired 
 state transfer lock in exclusive mode

 2. The cluster coordinator discovers a view change and requests NodeA and 
 NodeB to remove NodeC from the topology view:
 2011-10-24 14:36:09,048 4223  TRACE 
 [org.infinispan.interceptors.InvocationContextInterceptor] 
 (OOB-3,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) Invoked with 
 command RemoveCommand{key=NodeC-17654, value=null, flags=null} and 
 InvocationContext [NonTxInvocationContext{flags=null}]

 3. NodeB has not yet finished installing the cache view, so that remove 
 times out:
 2011-10-24 14:36:09,049 4224  ERROR 
 [org.infinispan.interceptors.InvocationContextInterceptor] 
 (OOB-3,Infinispan-Cluster,NodeB-15806:___hotRodTopologyCache) ISPN000136: 
 Execution error
 org.infinispan.distribution.RehashInProgressException: Timed out waiting 
 for the transaction lock

 A way to solve this is to avoid relying on cluster view changes, but 
 instead wait for the cache view to be installed, and only then do the 
 operations. Is there any way to wait till then?

 One way would be to have some CacheView installed callbacks or similar. 
 This could be a good option cos I could have a CacheView listener for the 
 hot rod topology cache whose callbacks I can check for isPre=false and 
 then do the cache ops safely.


 Initially I was thinking of allowing multiple cache view listeners for
 each cache and making StateTransferManager one of them but I decided
 against it because I realized it needs a different interface than our
 regular listeners. I know that it was only a matter of time until
 someone needed it...

 An alternative solution would be to retry all operations, like we do
 with commits now, when we receive a RehashInProgressException
 from the remote node. That's what I was planning to do first
 as it helps in other use cases as well.

 Ok, do you have time to include this today ahead of the BETA3 release?

 I think this is a very important fix cos as you can see in the testsuite, 
 it's very easy to get this error with Hot Rod servers.


 Otherwise, code like the one I used for keeping the Hot Rod topology 
 is gonna be racing against your cache view installation code.

 You seem to have some pieces in place for this, i.e. CacheViewListener, 
 but it seems only designed for internal core/ work.

 Any other suggestions?

 Cheers,
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 

Re: [infinispan-dev] Abstracting javax.cache.annotation.CacheKey away from the user?

2011-10-26 Thread Kevin Pollet
Hi,

On 25 October 2011 18:11, Galder Zamarreño gal...@redhat.com wrote:


 On Oct 24, 2011, at 4:58 PM, Kevin Pollet wrote:

  On 24 October 2011 16:51, Peter Muir pm...@redhat.com wrote:
  If we didn't use any jsr107 annotations with this cache.
 
  Without any jsr107 annotations it works fine :-)
 
   I've just got an idea, what do you think about providing a
  SingleCacheKey<T> interface (which extends CacheKey) used when only one
  value is used as a key?
  
   This interface could provide an additional method T getValue()?

 +1

  Should be pretty straightforward to implement for single-type-based keys as
  well.


I've opened ISPN-1491 https://issues.jboss.org/browse/ISPN-1491 for
that. Any comments are welcome!
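
For reference, the proposed interface could be as small as this (the name and
method come from the thread above; everything else is an assumption about the
final shape):

  import javax.cache.annotation.CacheKey;

  public interface SingleCacheKey<T> extends CacheKey {
     // The single value the key was generated from, e.g. a String.
     T getValue();
  }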


  
 
  --
  Pete Muir
  http://in.relation.to/Bloggers/Pete
 
  On 24 Oct 2011, at 16:38, Kevin Pollet pollet.ke...@gmail.com wrote:
 
  On 24 October 2011 16:30, Pete Muir pm...@redhat.com wrote:
  This is just because you are interacting with the JSR-107 managed cache.
 If we used a general purpose cache, this wouldn't be a problem right?
 
  This is because the interceptors are defined like that in JSR-107.
   I'm not sure I understand "If we used a general purpose cache, this
  wouldn't be a problem"?
 
 
  On 24 Oct 2011, at 16:25, Kevin Pollet wrote:
 
   Hi Galder,
  
   On 24 October 2011 15:15, Galder Zamarreño gal...@redhat.com wrote:
   Pete/Kevin,
  
   Looking at the Infinispan CDI quickstart, I see:
  
  @GreetingCache
  private Cache<CacheKey, String> cache;
  
   The key that the user really uses here is String. So, could that be
 defined like this?
  
  @GreetingCache
  private Cache<String, String> cache;
  
   Btw, I've just tried this and when using the key I get:
  
   Caused by: java.lang.ClassCastException:
 org.infinispan.cdi.interceptor.DefaultCacheKey cannot be cast to
 java.lang.String
  
    Are we forcing the user to decipher what's in CacheKey? Related to
  this, looking at org.infinispan.cdi.interceptor.DefaultCacheKey I see no way
  to retrieve individual elements of a key.
  
    That's how it's defined in the JSR-107 specification: "All generated cache
  keys must implement the CacheKey interface."
  
    If you look at the CacheKey contract there are no methods defined to
  retrieve the content of the key. Here we could provide our own methods, but
  then the user would be implementation dependent. Maybe you could raise this
  point on the JSR-107 mailing list; an unwrap method could be defined in the
  CacheKey contract to use specific implementation features.
  
   Pete, Manik?
  
  
    My point here is whether we can avoid leaking
  javax.cache.annotation.CacheKey to the user cos it can do little with it
  without being able to get its contents.
    I see there's a way to define a custom key, but that should not be
  necessary for a simple key based on a String, for example.
  
   I'm not sure we can avoid the use of CacheKey since it's defined like
 that in the spec. As said before we can provide at least our own methods in
 the DefaultCacheKey implementation (open an issue and I'll do it).
  
  
   Cheers,
   --
   Galder Zamarreño
   Sr. Software Engineer
   Infinispan, JBoss Cache
  
  
   --Kevin
 
 
 
  ___
  infinispan-dev mailing list
  infinispan-dev@lists.jboss.org
  https://lists.jboss.org/mailman/listinfo/infinispan-dev

 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


--Kevin
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Refactoring API and Commons

2011-10-26 Thread Galder Zamarreño
First of all, what is the problem that the Hot Rod client has depending on 
core/ as it is? It's not clear from the JIRA.

On Oct 25, 2011, at 3:56 PM, Tristan Tarrant wrote:

 Hi all,
 
 I've been looking into refactoring certain interfaces and common classes 
 as part of https://issues.jboss.org/browse/ISPN-1490
 I have come across a couple of snags (more will come I'm sure).
 
 Firstly all modules use org.infinispan.util.logging.LoggingFactory to 
 get a logger. Unfortunately the logger in question implements the 
 org.infinispan.util.logging.Log interface which contains a ton of 
 logging methods mostly related to core functionality, and therefore 
 irrelevant for things such as the remote APIs.

Some are irrelevant, some could potentially be re-used by other modules :) 

(* playing devil's advocate)

 My suggestion here is 
 that each module either uses a specialized LoggingFactory or creates a 
 common one which returns implementations of BasicLogger (which is the 
 root interface of our Logs).

Tbh, I'm not fussed about it. I think the reason I originally designed it this way 
was to limit the number of changes associated with using JBoss Logging. 
So, by having a common LogFactory for all, I was saving quite a bit of 
refactoring! 

As you can see from this commit, integrating JBoss Logging was no small task: 
https://github.com/infinispan/infinispan/pull/275/files
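
For illustration, the "common factory returning BasicLogger" option could be as
small as this (the class name is made up for the example):

  import org.jboss.logging.BasicLogger;
  import org.jboss.logging.Logger;

  public final class BasicLogFactory {
     private BasicLogFactory() {
     }

     public static BasicLogger getLog(Class<?> clazz) {
        // org.jboss.logging.Logger implements BasicLogger, so modules that don't need
        // the core Log interface can program against BasicLogger only.
        return Logger.getLogger(clazz);
     }
  }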

 Another one is related to org.infinispan.util.FileLookupFactory which 
 references OSGi classes, even though the org.osgi dependency is marked 
 as optional in the infinispan-core POM. In my opinion the OsgiFileLookup 
 should be put in an external class and loaded via reflection so that we 
 don't get NoClassDefFoundErrors.
 
 I've also introduced at the API level a BasicCache<K,V> which Cache<K,V> 
 now extends. BasicCache<K,V> knows nothing about Lifecycle, 
 Listenable, AdvancedCache, Configuration, eviction, batching and is 
 intended to be the base for the RemoteCache<K,V> interface.

I don't necessarily disagree with these changes, but I'm a little worried about 
changing all this in the middle of the 5.x series without a good reason. 

I think we can look into this again for 6.0.
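
To make the proposed split concrete, a rough sketch (the exact method set is
illustrative, not the final API):

  import java.util.concurrent.ConcurrentMap;
  import java.util.concurrent.TimeUnit;

  // Minimal contract shared by embedded and remote caches.
  interface BasicCache<K, V> extends ConcurrentMap<K, V> {
     String getName();
     V put(K key, V value, long lifespan, TimeUnit unit);
  }

  interface Lifecycle { void start(); void stop(); }
  interface Listenable { void addListener(Object listener); }

  // The embedded Cache keeps the richer contract on top of BasicCache;
  // AdvancedCache, batching, eviction etc. would stay at this level.
  interface Cache<K, V> extends BasicCache<K, V>, Lifecycle, Listenable {
  }

  // RemoteCache would extend BasicCache directly, without the embedded-only bits.
  interface RemoteCache<K, V> extends BasicCache<K, V> {
  }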

 
 Suggestions, recommendations, etc.
 
 Tristan
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Refactoring API and Commons

2011-10-26 Thread Michal Linhard
On 10/26/2011 06:29 PM, Galder Zamarreño wrote:
 First of all, what is the problem that the Hot Rod client has depending on 
 core/ as it is? It's not clear from the JIRA.
Personally, what I don't like about the infinispan-core dependency is 
that it makes you falsely think that you should keep the infinispan-core 
version in sync on both sides.

But that's not true. You should be able to use an infinispan-core as old as 
the Hot Rod protocol version allows, even with the newest Infinispan. 
OTOH, this problem will actually exist even with the refactored packaging.

Actually, what I don't like is the fact that we say that this client is 
e.g. Infinispan 5.1.0.BETA2, but actually we should be saying that this 
client supports Hot Rod v1 - the Infinispan version shouldn't bother us on 
the client side...

m.
 On Oct 25, 2011, at 3:56 PM, Tristan Tarrant wrote:

 Hi all,

 I've been looking into refactoring certain interfaces and common classes
 as part of https://issues.jboss.org/browse/ISPN-1490
 I have come across a couple of snags (more will come I'm sure).

 Firstly all modules use org.infinispan.util.logging.LoggingFactory to
 get a logger. Unfortunately the logger in question implements the
 org.infinispan.util.logging.Log interface which contains a ton of
 logging methods mostly related to core functionality, and therefore
 irrelevant for things such as the remote APIs.
 Some are irrelevant, some could potentially be re-used by other modules :)

 (* playing devil's advocate)

  My suggestion here is
  that each module either uses a specialized LoggingFactory or creates a
  common one which returns implementations of BasicLogger (which is the
  root interface of our Logs).
  Tbh, I'm not fussed about it. I think the reason I originally designed it this way 
  was to limit the number of changes associated with using JBoss Logging. 
  So, by having a common LogFactory for all, I was saving quite a bit of 
  refactoring!

 As you can see from this commit, integrating JBoss Logging was no small task: 
 https://github.com/infinispan/infinispan/pull/275/files

 Another one is related to org.infinispan.util.FileLookupFactory which
 references OSGi classes, even though the org.osgi dependency is marked
 as optional in the infinispan-core POM. In my opinion the OsgiFileLookup
 should be put in an external class and loaded via reflection so that we
 don't get NoClassDefFoundErrors.

  I've also introduced at the API level a BasicCache<K,V> which Cache<K,V> 
  now extends. BasicCache<K,V> knows nothing about Lifecycle,
  Listenable, AdvancedCache, Configuration, eviction, batching and is
  intended to be the base for the RemoteCache<K,V> interface.
  I don't necessarily disagree with these changes, but I'm a little worried about 
  changing all this in the middle of the 5.x series without a good reason.

 I think we can look into this again for 6.0

 Suggestions, recommendations, etc.

 Tristan


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache


 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


-- 
Michal Linhard
Quality Assurance Engineer
JBoss Enterprise Datagrid

Red Hat Czech s.r.o.
Purkynova 99 612 45 Brno, Czech Republic
phone: +420 532 294 320 ext. 62320
mobile: +420 728 626 363

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev