Re: [infinispan-dev] compiler warning: code cache is full

2012-01-20 Thread Tristan Tarrant
The infinispan-spring module is quite late in the sequence of modules to
be compiled, so it could be that there is a bug in your version of
OpenJDK's compiler (incidentally, which one is it?). Maybe just adding
fork to the maven-compiler-plugin would work around it. Can you try
with the Oracle JDK for 1.6.x?
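
For reference, a minimal sketch of what forking the compiler could look like
in the pom (untested here; the parameter names are the standard
maven-compiler-plugin ones, the memory values are just placeholders):

   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-compiler-plugin</artifactId>
     <version>2.3.2</version>
     <configuration>
       <fork>true</fork>
       <!-- optional: memory settings for the forked javac process -->
       <meminitial>128m</meminitial>
       <maxmem>512m</maxmem>
     </configuration>
   </plugin>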

Tristan

On 01/20/2012 09:11 AM, Dan Berindei wrote:
 Never seen it before Sanne.

 Are you using -XX:CompileThreshold with a really low threshold? That
 could explain it, as the JIT would compile a lot more code (that would
 otherwise be interpreted).

 It could also be because Maven uses a separate classloader when
 compiling each module and it keeps loading the same classes over and
 over again...

 Anyway, using -XX:ReservedCodeCacheSize=64m probably wouldn't hurt.
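
 For instance, a sketch of how that flag could be passed to the build (64m is
 just the value mentioned above; pick whatever you're comfortable with):

    # MAVEN_OPTS applies to the Maven JVM, which also runs the in-process compiler
    export MAVEN_OPTS="-XX:ReservedCodeCacheSize=64m $MAVEN_OPTS"
    mvn clean install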

 Cheers
 Dan


 On Fri, Jan 20, 2012 at 9:14 AM, Sanne Grinovero sa...@infinispan.org wrote:
 No, the build is reported as successful. I'm wondering if it's true.

 On Jan 20, 2012 6:52 AM, Manik Surtani ma...@jboss.org wrote:
 I have no idea, I've never seen this before.  Does it stop the build?

 On 20 Jan 2012, at 04:55, Sanne Grinovero wrote:

 Hi,
 I just noticed this warning in my build logs; I don't read them often
 so it might have been there for a while:

 [INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @
 infinispan-spring ---
 [INFO] Compiling 22 source files to
 /home/sanne/workspaces/infinispan/infinispan/spring/target/classes
 OpenJDK 64-Bit Server VM warning: CodeCache is full. Compiler has been
 disabled.
 OpenJDK 64-Bit Server VM warning: Try increasing the code cache size
 using -XX:ReservedCodeCacheSize=
 Code Cache  [0x7fb8da00, 0x7fb8dff9, 0x7fb8e000)
 total_blobs=34047 nmethods=33597 adapters=367 free_code_cache=1036Kb
 largest_free_block=511552
 [INFO]
 [INFO]  exec-maven-plugin:1.2.1:java (serialize_component_metadata)
 @ infinispan-spring

 I've never seen such a warning before, anything to be concerned about?




Re: [infinispan-dev] Keeping track of locked nodes

2012-01-20 Thread Dan Berindei
In this particular case it just means that the commit is sync even if
it's configured to be async.

But there are more places where we check the remote lock nodes list,
e.g. BaseRpcInterceptor.shouldInvokeRemotely or
AbstractEnlistmentAdapter - which, incidentally, could probably settle
with a hasRemoteLocks boolean flag instead of the nodes collection.
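
Purely as an illustration of that flag idea (names are made up, this is not
the actual Infinispan code):

   // Hypothetical sketch: remember only *whether* remote locks exist,
   // not the full collection of Addresses.
   public class LocalTransactionSketch {
      private boolean hasRemoteLocks;

      public void markRemoteLockAcquired() {
         hasRemoteLocks = true;
      }

      // Enough for decisions like "must the commit be sync?" or
      // "do we need to RPC at all?", without keeping the node list around.
      public boolean hasRemoteLocks() {
         return hasRemoteLocks;
      }
   }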

TransactionXaAdapter.forgetSuccessfullyCompletedTransaction does use
it properly when recovery is enabled - if we didn't keep the
collection around it would have to compute it again from the list of
keys.

The same with PessimisticLockingInterceptor.releaseLocksOnFailureBeforePrepare,
but there we're on the failure path already so recomputing the address
set wouldn't hurt that much.

I may have missed others...

Cheers
Dan


On Fri, Jan 20, 2012 at 8:52 AM, Manik Surtani ma...@jboss.org wrote:
 Well, a view change may not mean that the nodes prepared on have changed.
 But still, there would almost certainly be a better way to do this than a
 collection of addresses. So the goal is to determine whether the set of
 nodes the prepare has been sent to differs from the set of affected nodes
 at the time of commit? And what are the consequences of getting this
 (pessimistically) wrong, just that the prepare gets repeated on some
 nodes?


 On 20 Jan 2012, at 03:54, Mircea Markus wrote:

 On 19 Jan 2012, at 19:22, Sanne Grinovero wrote:
 I just noticed that org.infinispan.transaction.LocalTransaction is
 keeping track of the Addresses on which locks were acquired.
 That surprises me .. why should it ever be interested in the
 specific Address? I'd expect it to be able to figure that out when
 needed, especially since the Address owning the lock might change over
 time. I don't understand why we track a specific node.
 This information is required at least here[1], but I'm not sure whether we
 could instead rely on the viewId which is now associated with every
 transaction to replace that logic.
 [1] http://bit.ly/wgMIHH
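
 A rough sketch of the viewId idea (hypothetical helper, not the existing
 API): record the view id at prepare time and compare it at commit time to
 decide whether the lock owners must be recomputed.

    // Hypothetical: if the view changed between prepare and commit, the
    // prepare-time set of owners may be stale and should be recomputed
    // from the locked keys.
    final class ViewIdCheck {
       static boolean mustRecomputeOwners(int viewIdAtPrepare, int currentViewId) {
          return currentViewId != viewIdAtPrepare;
       }
    }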



 --
 Manik Surtani
 ma...@jboss.org
 twitter.com/maniksurtani

 Lead, Infinispan
 http://www.infinispan.org







Re: [infinispan-dev] moving asyncMarshalling to jgroups

2012-01-20 Thread Galder Zamarreño
The last person I remember testing async marshalling extensively was Craig
Bomba, who was indeed the one to spot the re-ordering.

I can't remember 100% because we discussed this on IRC, but I think he used to
see a nice performance improvement with async marshalling; the reordering,
however, made it unusable for him.

On Jan 20, 2012, at 10:14 AM, Galder Zamarreño wrote:

 
 On Jan 19, 2012, at 9:08 PM, Mircea Markus wrote:
 
 On 19 Jan 2012, at 19:42, Bela Ban wrote:
 
 On 1/19/12 6:36 PM, Galder Zamarreño wrote:
 
 
 On Jan 19, 2012, at 3:43 PM, Bela Ban wrote:
 
 This may not give you any performance increase:
 
 #1 In my experience, serialization is way faster than de-serialization.
 Unless you're doing something fancy in your serializer
 
 No. I think Mircea didn't explain this very well. What really happens here 
 is that when asyncMarshalling is turned on (the name is confusing...), 
 async transport sending is activated.  What does this mean?
 
 When the request needs to be passed on to JGroups, this is done in a
 separate thread, which indirectly results in marshalling happening in a
 different thread.
 
 
 How is this thread created? I assume you use a thread pool with a
 *bounded* queue.
 yes, these[1] are the default initialisation values for the pool.
 
 Nope, the default values are not the ones pointed out but rather the ones
 indicated in
 https://docs.jboss.org/author/display/ISPN/Default+Values+For+Property+Based+Attributes
 
 Which comes from: 
 https://github.com/galderz/infinispan/blob/master/core/src/main/java/org/infinispan/factories/KnownComponentNames.java#L59
 
 Returning to the original question, can't we achieve the same functionality
 (i.e. marshalling not happening in the caller's thread - that's for async
 replication *only*) by passing the marshaller (an adapter) to JGroups, which
 would then serialize it in the (async) replication thread?
 
 [1]http://bit.ly/zUFIEb
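
 To make the pattern concrete, a generic sketch of "hand the marshal+send off
 to a bounded pool" (the pool sizes and the Marshaller/Transport interfaces
 are placeholders, not Infinispan's actual types or defaults):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class AsyncTransportSketch {
       interface Marshaller { byte[] toBytes(Object command); }
       interface Transport  { void send(byte[] payload); }

       // Bounded queue, as Bela suggests; CallerRunsPolicy gives back-pressure.
       private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
             1, 25, 60, TimeUnit.SECONDS,
             new ArrayBlockingQueue<Runnable>(10000),
             new ThreadPoolExecutor.CallerRunsPolicy());

       // Async replication: marshalling runs in the pool thread, not the caller's.
       public void replicateAsync(final Object command, final Marshaller m,
                                  final Transport t) {
          pool.execute(new Runnable() {
             public void run() {
                t.send(m.toBytes(command));
             }
          });
       }
    }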
 
 --
 Galder Zamarreño
 Sr. Software Engineer
 Infinispan, JBoss Cache
 
 

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache




Re: [infinispan-dev] Keeping track of locked nodes

2012-01-20 Thread Mircea Markus
On 20 Jan 2012, at 09:11, Dan Berindei wrote:
 In this particular case it just means that the commit is sync even if
 it's configured to be async.
 
 But there are more places where we check the remote lock nodes list,
 e.g. BaseRpcInterceptor.shouldInvokeRemotely or
 AbstractEnlistmentAdapter - which, incidentally, could probably settle
 with a hasRemoteLocks boolean flag instead of the nodes collection.
 
 TransactionXaAdapter.forgetSuccessfullyCompletedTransaction does use
 it properly when recovery is enabled - if we didn't keep the
 collection around it would have to compute it again from the list of
 keys.
 
 The same with 
 PessimisticLockingInterceptor.releaseLocksOnFailureBeforePrepare,
 but there we're on the failure path already so recomputing the address
 set wouldn't hurt that much.
 
 I may have missed others...
As mentioned, these can be determined based on the set of locks held by that
transaction.
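
Roughly the kind of derivation meant here (illustrative only; ConsistentHash
and Address below are stand-ins, not the real Infinispan interfaces):

   import java.util.Collection;
   import java.util.HashSet;
   import java.util.Set;

   public class LockOwnerSketch {
      interface Address {}
      interface ConsistentHash { Address locateOwner(Object key); }

      // Recompute the remote lock owners from the keys the transaction has
      // locked, instead of caching the Address collection on the transaction.
      Set<Address> owners(Collection<Object> lockedKeys, ConsistentHash ch) {
         Set<Address> result = new HashSet<Address>();
         for (Object key : lockedKeys) {
            result.add(ch.locateOwner(key));
         }
         return result;
      }
   }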


Re: [infinispan-dev] Keeping track of locked nodes

2012-01-20 Thread Mircea Markus

On 20 Jan 2012, at 09:33, Manik Surtani wrote:

 
 On 20 Jan 2012, at 14:41, Dan Berindei wrote:
 
 In this particular case it just means that the commit is sync even if
 it's configured to be async.
 
 But there are more places where we check the remote lock nodes list,
 e.g. BaseRpcInterceptor.shouldInvokeRemotely or
 AbstractEnlistmentAdapter - which, incidentally, could probably settle
 with a hasRemoteLocks boolean flag instead of the nodes collection.
 
 TransactionXaAdapter.forgetSuccessfullyCompletedTransaction does use
 it properly when recovery is enabled - if we didn't keep the
 collection around it would have to compute it again from the list of
 keys.
 
 The same with 
 PessimisticLockingInterceptor.releaseLocksOnFailureBeforePrepare,
 but there we're on the failure path already so recomputing the address
 set wouldn't hurt that much.
 
 I may have missed others…
 
 Cool.  Mircea, reckon we can patch this quickly and with low risk?  Or is it 
 high risk at this stage?
I don't think it's a good moment for this right now. I'm not even convinced 
that this is the way to go, as it might be cheaper to cache this information than
to calculate it when needed.




Re: [infinispan-dev] [jgroups-dev] experiments with NAKACK2

2012-01-20 Thread Sanne Grinovero
inline:

On 20 January 2012 08:43, Bela Ban b...@redhat.com wrote:
 Hi Sanne,

 (redirected back to infinispan-dev)

 Hello,
 I've run the same Infinispan benchmark mentioned today on the
 Infinispan mailing list, but having the goal to test NAKACK2
 development.

 Infinispan 5.1.0 at 2d7c65e with JGroups 3.0.2.Final :

 Done 844,952,883 transactional operations in 22.08 minutes using
 5.1.0-SNAPSHOT
  839,810,425 reads and 5,142,458 writes
  Reads / second: 634,028
  Writes/ second: 3,882

 Same Infinispan, with JGroups b294965 (and reconfigured for NAKACK2):


 Done 807,220,989 transactional operations in 18.15 minutes using
 5.1.0-SNAPSHOT
  804,162,247 reads and 3,058,742 writes
  Reads / second: 738,454
  Writes/ second: 2,808

 same versions and configuration, run it again as I was too surprised:

 Done 490,928,700 transactional operations in 10.94 minutes using
 5.1.0-SNAPSHOT
  489,488,339 reads and 1,440,361 writes
  Reads / second: 745,521
  Writes/ second: 2,193

 So the figures aren't very stable, I might need to run longer tests,
 but there seems to be a trend of this new protocol speeding up Read
 operations at the cost of writes.



 This is really strange !

 In my own tests with 2 members on the same box (using MPerf), I found that
 the blockings on Table.add() and Table.removeMany() were much smaller than
 in the previous tests, and now the TP.TransferQueueBundler.send() method was
 the culprit #1 by far ! Of course, still being much smaller than the
 previous highest blockings !

I totally believe you, I'm wondering if the fact that JGroups is more
efficient is making Infinispan writes slower. Consider as well that
these read figures are stellar, it's never been that fast before (on
this test on my laptop), which makes me think of some unfair lock acquired
by readers so that writers are not getting a chance to make any
progress.
Manik, Dan, any such lock around? If I profile monitors, these
figures change dramatically..

We could be in a situation in which the faster JGroups gets, the worse
the write numbers I get - not sure why, but I'm suspecting that we
shouldn't use this test to evaluate JGroups code effectiveness,
there is too much other stuff going on.


 I'll run Transactional on my own box today, to see the diffs between various
 versions of JGroups.
 Can you send me your bench.sh ? If I don't change the values, the test takes
 forever !

This is exactly how I'm running it:
https://github.com/Sanne/InfinispanStartupBenchmark/commit/c4efbc66abb6bf2db4da75cb21f4d2b55676a468

Attention on:
  CFG=-Dbench.loops=1 $CFG
  (sets the amount of max iterations to perform)
And:
  versions=infinispan-5.1.SNAPSHOT
to avoid running the previous versions, which are significantly slower
Set some good MAVEN_OPTS options as well.. careful with the specific
paths for my system, which are the reason not to merge this patch into
master.





 --
 Bela Ban
 Lead JGroups (http://www.jgroups.org)
 JBoss / Red Hat


Re: [infinispan-dev] Keeping track of locked nodes

2012-01-20 Thread Sanne Grinovero
On 20 January 2012 11:49, Mircea Markus mircea.mar...@jboss.com wrote:

 On 20 Jan 2012, at 09:33, Manik Surtani wrote:


 On 20 Jan 2012, at 14:41, Dan Berindei wrote:

 In this particular case it just means that the commit is sync even if
 it's configured to be async.

 But there are more places where we check the remote lock nodes list,
 e.g. BaseRpcInterceptor.shouldInvokeRemotely or
 AbstractEnlistmentAdapter - which, incidentally, could probably settle
 with a hasRemoteLocks boolean flag instead of the nodes collection.

 TransactionXaAdapter.forgetSuccessfullyCompletedTransaction does use
 it properly when recovery is enabled - if we didn't keep the
 collection around it would have to compute it again from the list of
 keys.

 The same with 
 PessimisticLockingInterceptor.releaseLocksOnFailureBeforePrepare,
 but there we're on the failure path already so recomputing the address
 set wouldn't hurt that much.

 I may have missed others…

 Cool.  Mircea, reckon we can patch this quickly and with low risk?  Or is it 
 high risk at this stage?
 I don't think it's a good moment for this right now. I'm not even convinced 
 that this is the way to go, as it might be cheaper to cache this information
 than to calculate it when needed.

Just to clarify, I wasn't reporting a contention point or a
performance issue. I was just puzzled by the design as it's very
different than what I was expecting. I think we should move towards a
design for which we don't really consider the locks to be positioned
on a specific node, they should be free to move around (still
deterministically, I mean on rehash).
Not asking for urgent changes!

Cheers,
Sanne


Re: [infinispan-dev] [jgroups-dev] experiments with NAKACK2

2012-01-20 Thread Manik Surtani

On 20 Jan 2012, at 17:57, Sanne Grinovero wrote:

 inline:
 
 On 20 January 2012 08:43, Bela Ban b...@redhat.com wrote:
 Hi Sanne,
 
 (redirected back to infinispan-dev)
 
 Hello,
 I've run the same Infinispan benchmark mentioned today on the
 Infinispan mailing list, but having the goal to test NAKACK2
 development.
 
 Infinispan 5.1.0 at 2d7c65e with JGroups 3.0.2.Final :
 
 Done 844,952,883 transactional operations in 22.08 minutes using
 5.1.0-SNAPSHOT
  839,810,425 reads and 5,142,458 writes
  Reads / second: 634,028
  Writes/ second: 3,882
 
 Same Infinispan, with JGroups b294965 (and reconfigured for NAKACK2):
 
 
 Done 807,220,989 transactional operations in 18.15 minutes using
 5.1.0-SNAPSHOT
  804,162,247 reads and 3,058,742 writes
  Reads / second: 738,454
  Writes/ second: 2,808
 
 same versions and configuration, run it again as I was too surprised:
 
 Done 490,928,700 transactional operations in 10.94 minutes using
 5.1.0-SNAPSHOT
  489,488,339 reads and 1,440,361 writes
  Reads / second: 745,521
  Writes/ second: 2,193
 
 So the figures aren't very stable, I might need to run longer tests,
 but there seems to be a trend of this new protocol speeding up Read
 operations at the cost of writes.
 
 
 
 This is really strange !
 
 In my own tests with 2 members on the same box (using MPerf), I found that
 the blockings on Table.add() and Table.removeMany() were much smaller than
 in the previous tests, and now the TP.TransferQueueBundler.send() method was
 the culprit #1 by far ! Of course, still being much smaller than the
 previous highest blockings !
 
 I totally believe you, I'm wondering if the fact that JGroups is more
 efficient is making Infinispan writes slower. Consider as well that
 these read figures are stellar, it's never been that fast before (on
 this test on my laptop), which makes me think of some unfair lock acquired
 by readers so that writers are not getting a chance to make any
 progress.
 Manik, Dan, any such lock around? If I profile monitors, these
 figures change dramatically..

Yes, our (transactional) reads are phenomenally fast now.  I think it has to do 
with contention on the CHMs in the transaction table being optimised.  In terms 
of JGroups, perhaps writer threads being faster reduce the contention on these 
CHMs so more reads can be squeezed through.  This is REPL mode though.  In DIST 
our reads are about the same as 5.0.

 
 We could be in a situation in which the faster JGroups gets, the worse
 the write numbers I get

That's the fault of the test.  In a real-world scenario, faster reads will 
always be good, since the reads (per timeslice) are finite.  Once they are 
done, they are done, and the writes can proceed.  To model this in your test, 
fix the number of reads and writes that will be performed.  Maybe even per 
timeslice like per minute or something, and then measure the average time per 
read or write operation.
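
A rough sketch of that fixed-workload approach (the cache, keys and values are
placeholders; this is not the benchmark's actual code):

   import java.util.concurrent.TimeUnit;

   public class FixedWorkloadSketch {
      interface Cache { Object get(Object k); void put(Object k, Object v); }

      // Perform a fixed number of reads and writes, then report the average
      // time per operation instead of total operations per run.
      static void run(Cache cache, int reads, int writes) {
         long readNanos = 0, writeNanos = 0;
         for (int i = 0; i < reads; i++) {
            long t0 = System.nanoTime();
            cache.get("key-" + (i % 1000));
            readNanos += System.nanoTime() - t0;
         }
         for (int i = 0; i < writes; i++) {
            long t0 = System.nanoTime();
            cache.put("key-" + (i % 1000), "value-" + i);
            writeNanos += System.nanoTime() - t0;
         }
         System.out.printf("avg read:  %d us%n",
               TimeUnit.NANOSECONDS.toMicros(readNanos / Math.max(1, reads)));
         System.out.printf("avg write: %d us%n",
               TimeUnit.NANOSECONDS.toMicros(writeNanos / Math.max(1, writes)));
      }
   }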

 - not sure why, but I'm suspecting that we
 shouldn't use this test to evaluate JGroups code effectiveness,
 there is too much other stuff going on.


 
 
 I'll run Transactional on my own box today, to see the diffs between various
 versions of JGroups.
 Can you send me your bench.sh ? If I don't change the values, the test takes
 forever !
 
 This is exactly how I'm running it:
 https://github.com/Sanne/InfinispanStartupBenchmark/commit/c4efbc66abb6bf2db4da75cb21f4d2b55676a468
 
 Attention on:
  CFG=-Dbench.loops=1 $CFG
  (sets the amount of max iterations to perform)
 And:
  versions=infinispan-5.1.SNAPSHOT
 to avoid running the previous versions, which are significantly slower
 Set some good MAVEN_OPTS options as well.. careful with the specific
 paths for my system, which are the reason not to merge this patch into
 master.
 
 
 
 
 
 --
 Bela Ban
 Lead JGroups (http://www.jgroups.org)
 JBoss / Red Hat
 

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org






Re: [infinispan-dev] Keeping track of locked nodes

2012-01-20 Thread Mircea Markus
 Cool.  Mircea, reckon we can patch this quickly and with low risk?  Or is 
 it high risk at this stage?
 I don't think it's a good moment for this right now. I'm not even convinced 
 that this is the way to go, as it might be cheaper to cache this information
 than to calculate it when needed.
 
 Just to clarify, I wasn't reporting a contention point or a
 performance issue. I was just puzzled by the design as it's very
 different than what I was expecting. I think we should move towards a
 design for which we don't really consider the locks to be positioned
 on a specific node, they should be free to move around (still
 deterministically, I mean on rehash).
 Not asking for urgent changes!
+1, you've been quite convincing about this in Lisbon :-)
However, ATM lock failover is mainly managed by the transaction originator,
and locks are not migrated across nodes during topology changes.


Re: [infinispan-dev] [jgroups-dev] experiments with NAKACK2

2012-01-20 Thread Sanne Grinovero
On 20 January 2012 12:40, Manik Surtani ma...@jboss.org wrote:

 On 20 Jan 2012, at 17:57, Sanne Grinovero wrote:

 inline:

 On 20 January 2012 08:43, Bela Ban b...@redhat.com wrote:
 Hi Sanne,

 (redirected back to infinispan-dev)

 Hello,
 I've run the same Infinispan benchmark mentioned today on the
 Infinispan mailing list, but having the goal to test NAKACK2
 development.

 Infinispan 5.1.0 at 2d7c65e with JGroups 3.0.2.Final :

 Done 844,952,883 transactional operations in 22.08 minutes using
 5.1.0-SNAPSHOT
  839,810,425 reads and 5,142,458 writes
  Reads / second: 634,028
  Writes/ second: 3,882

 Same Infinispan, with JGroups b294965 (and reconfigured for NAKACK2):


 Done 807,220,989 transactional operations in 18.15 minutes using
 5.1.0-SNAPSHOT
  804,162,247 reads and 3,058,742 writes
  Reads / second: 738,454
  Writes/ second: 2,808

 same versions and configuration, run it again as I was too surprised:

 Done 490,928,700 transactional operations in 10.94 minutes using
 5.1.0-SNAPSHOT
  489,488,339 reads and 1,440,361 writes
  Reads / second: 745,521
  Writes/ second: 2,193

 So the figures aren't very stable, I might need to run longer tests,
 but there seems to be a trend of this new protocol speeding up Read
 operations at the cost of writes.



 This is really strange !

 In my own tests with 2 members on the same box (using MPerf), I found that
 the blockings on Table.add() and Table.removeMany() were much smaller than
 in the previous tests, and now the TP.TransferQueueBundler.send() method was
 the culprit #1 by far ! Of course, still being much smaller than the
 previous highest blockings !

 I totally believe you, I'm wondering if the fact that JGroups is more
 efficient is making Infinispan writes slower. Consider as well that
 these read figures are stellar, it's never been that fast before (on
 this test on my laptop), which makes me think of some unfair lock acquired
 by readers so that writers are not getting a chance to make any
 progress.
 Manik, Dan, any such lock around? If I profile monitors, these
 figures change dramatically..

 Yes, our (transactional) reads are phenomenally fast now.  I think it has to 
 do with contention on the CHMs in the transaction table being optimised.  In 
 terms of JGroups, perhaps writer threads being faster reduce the contention 
 on these CHMs so more reads can be squeezed through.  This is REPL mode 
 though.  In DIST our reads are about the same as 5.0.


 We could be in a situation in which the faster JGroups gets, the worse
 the write numbers I get

 That's the fault of the test.  In a real-world scenario, faster reads will 
 always be good, since the reads (per timeslice) are finite.  Once they are 
 done, they are done, and the writes can proceed.  To model this in your test, 
 fix the number of reads and writes that will be performed.  Maybe even per 
 timeslice like per minute or something, and then measure the average time per 
 read or write operation.

+1000, the good effects of a cocktail @ seaside. Stupid me, I even
thought about that yesterday, as it's a common problem..
But this test seemed to be designed from the beginning to stress
contention as much as possible to identify bottlenecks; for that
kind of performance measurement, isn't RadarGun more suited?


Re: [infinispan-dev] Keeping track of locked nodes

2012-01-20 Thread Sanne Grinovero
On 20 January 2012 12:41, Mircea Markus mircea.mar...@jboss.com wrote:
 Cool.  Mircea, reckon we can patch this quickly and with low risk?  Or is 
 it high risk at this stage?
 I don't think it's a good moment for this right now. I'm not even convinced 
 that this is the way to go, as it might be cheaper to cache this information
 than to calculate it when needed.

 Just to clarify, I wasn't reporting a contention point or a
 performance issue. I was just puzzled by the design as it's very
 different than what I was expecting. I think we should move towards a
 design for which we don't really consider the locks to be positioned
 on a specific node, they should be free to move around (still
 deterministically, I mean on rehash).
 Not asking for urgent changes!
 +1, you've been quite convincing about this in Lisbon :-)
 However, ATM lock failover is mainly managed by the transaction originator,
 and locks are not migrated across nodes during topology changes.

So if locks are failed-over, is your transaction originator able to
find the new owner to unlock it? That's the core of my question on
keeping the Address.. not performance related.


Re: [infinispan-dev] compiler warning: code cache is full

2012-01-20 Thread Sanne Grinovero
On 20 January 2012 08:18, Tristan Tarrant ttarr...@redhat.com wrote:
 The infinispan-spring module is quite late in the sequence of modules to
 be compiled, so it could be that there is a bug in your version of
 OpenJDK's compiler (incidentally, which one is it?). Maybe just adding
 fork to the maven-compiler-plugin would work around it. Can you try
 with the Oracle JDK for 1.6.x?

Right, I'll make a note for next week on this. I won't be doing any
release soon anyway, so all I ask is that whoever makes one checks their
logs for it, at least if there is some risk to the quality of the
built jars.

Sanne


 Tristan

 On 01/20/2012 09:11 AM, Dan Berindei wrote:
 Never seen it before Sanne.

 Are you using -XX:CompileThreshold with a really low threshold? That
 could explain it, as the JIT would compile a lot more code (that would
 otherwise be interpreted).

 It could also be because Maven uses a separate classloader when
 compiling each module and it keeps loading the same classes over and
 over again...

 Anyway, using -XX:ReservedCodeCacheSize=64m probably wouldn't hurt.

 Cheers
 Dan


 On Fri, Jan 20, 2012 at 9:14 AM, Sanne Grinovero sa...@infinispan.org
 wrote:
 No, the build is reported as successful. I'm wondering if it's true.

 On Jan 20, 2012 6:52 AM, Manik Surtani ma...@jboss.org wrote:
 I have no idea, I've never seen this before.  Does it stop the build?

 On 20 Jan 2012, at 04:55, Sanne Grinovero wrote:

 Hi,
 I just noticed this warning in my build logs; I don't read them often
 so it might have been there for a while:

 [INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @
 infinispan-spring ---
 [INFO] Compiling 22 source files to
 /home/sanne/workspaces/infinispan/infinispan/spring/target/classes
 OpenJDK 64-Bit Server VM warning: CodeCache is full. Compiler has been
 disabled.
 OpenJDK 64-Bit Server VM warning: Try increasing the code cache size
 using -XX:ReservedCodeCacheSize=
 Code Cache  [0x7fb8da00, 0x7fb8dff9, 0x7fb8e000)
 total_blobs=34047 nmethods=33597 adapters=367 free_code_cache=1036Kb
 largest_free_block=511552
 [INFO]
 [INFO]  exec-maven-plugin:1.2.1:java (serialize_component_metadata)
 @ infinispan-spring

 I've never seen such a warning before, anything to be concerned about?




Re: [infinispan-dev] [jgroups-dev] experiments with NAKACK2

2012-01-20 Thread Manik Surtani

On 20 Jan 2012, at 18:16, Sanne Grinovero wrote:

 On 20 January 2012 12:40, Manik Surtani ma...@jboss.org wrote:
 
 On 20 Jan 2012, at 17:57, Sanne Grinovero wrote:
 
 inline:
 
 On 20 January 2012 08:43, Bela Ban b...@redhat.com wrote:
 Hi Sanne,
 
 (redirected back to infinispan-dev)
 
 Hello,
 I've run the same Infinispan benchmark mentioned today on the
 Infinispan mailing list, but having the goal to test NAKACK2
 development.
 
 Infinispan 5.1.0 at 2d7c65e with JGroups 3.0.2.Final :
 
 Done 844,952,883 transactional operations in 22.08 minutes using
 5.1.0-SNAPSHOT
  839,810,425 reads and 5,142,458 writes
  Reads / second: 634,028
  Writes/ second: 3,882
 
 Same Infinispan, with JGroups b294965 (and reconfigured for NAKACK2):
 
 
 Done 807,220,989 transactional operations in 18.15 minutes using
 5.1.0-SNAPSHOT
  804,162,247 reads and 3,058,742 writes
  Reads / second: 738,454
  Writes/ second: 2,808
 
 same versions and configuration, run it again as I was too surprised:
 
 Done 490,928,700 transactional operations in 10.94 minutes using
 5.1.0-SNAPSHOT
  489,488,339 reads and 1,440,361 writes
  Reads / second: 745,521
  Writes/ second: 2,193
 
 So the figures aren't very stable, I might need to run longer tests,
 but there seems to be a trend of this new protocol speeding up Read
 operations at the cost of writes.
 
 
 
 This is really strange !
 
 In my own tests with 2 members on the same box (using MPerf), I found that
 the blockings on Table.add() and Table.removeMany() were much smaller than
 in the previous tests, and now the TP.TransferQueueBundler.send() method 
 was
 the culprit #1 by far ! Of course, still being much smaller than the
 previous highest blockings !
 
 I totally believe you, I'm wondering if the fact that JGroups is more
 efficient is making Infinispan writes slower. Consider as well that
 these read figures are stellar, it's never been that fast before (on
 this test on my laptop), which makes me think of some unfair lock acquired
 by readers so that writers are not getting a chance to make any
 progress.
 Manik, Dan, any such lock around? If I profile monitors, these
 figures change dramatically..
 
 Yes, our (transactional) reads are phenomenally fast now.  I think it has to 
 do with contention on the CHMs in the transaction table being optimised.  In 
 terms of JGroups, perhaps writer threads being faster reduce the contention 
 on these CHMs so more reads can be squeezed through.  This is REPL mode 
 though.  In DIST our reads are about the same as 5.0.
 
 
 We could be in a situation in which the faster JGroups gets, the worse
 the write numbers I get
 
 That's the fault of the test.  In a real-world scenario, faster reads will 
 always be good, since the reads (per timeslice) are finite.  Once they are 
 done, they are done, and the writes can proceed.  To model this in your 
 test, fix the number of reads and writes that will be performed.  Maybe even 
 per timeslice like per minute or something, and then measure the average 
 time per read or write operation.
 
 +1000, the good effects of a cocktail @ seaside.

All manner of problems get solved this way.  ;)

 Stupid me, I even
 thought about that yesterday, as it's a common problem..
 But this test seemed to be designed from the beginning to stress
 contention as much as possible to identify bottlenecks; for that
 kind of performance measurement, isn't RadarGun more suited?

Yes, but RadarGun isn't that easy to profile.


--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org



