Re: [infinispan-dev] [ISPN1797] Cannot find ispn core metadata file

2013-01-03 Thread Tristan Tarrant

That file is generated by the build machinery:

mvn -pl parent,core clean install -DskipTests

will build it and package it for you
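
If you want a quick sanity check that the file was actually generated, the command below looks for it under the core module's build output (the target/ location is just the standard Maven layout, so treat the path as an assumption):

find core/target -name 'infinispan-core-component-metadata.dat'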

Tristan

On 01/03/2013 03:29 AM, Guillaume SCHEIBEL wrote:

Hi guys,

I'm finishing the last version of the MongoDB cache store module and 
I'm facing a problem when the core module tries to load 
infinispan-core-component-metadata.dat, but I know nothing about it.
The weird thing is that the last time I ran the test (XML parsing) the problem 
didn't show up.


Any idea on this?
Thanks
Guillaume


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev



Re: [infinispan-dev] MFC/UFC credits in default config

2013-01-03 Thread Bela Ban
Let's make sure though that we have a meaningful default that's not 
optimized for an edge case. Also, if we use TCP, we can remove UFC from 
the config, as TCP already performs point-to-point flow control.
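
For illustration only, the tail of a TCP stack with UFC dropped would look roughly like this (protocols trimmed and attribute values are placeholders, not a recommendation):

<UNICAST2 />
<pbcast.STABLE />
<pbcast.GMS />
<!-- UFC removed: TCP already does point-to-point flow control -->
<MFC max_credits="200K" min_threshold="0.4" />
<FRAG2 frag_size="60K" />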

On 1/3/13 11:29 AM, Radim Vansa wrote:
 20k credits seems to be the best choice for this test:

 10k: bad performance
 20k: Average of 2.79 requests / sec (27.87MB / sec), 358.81 ms /request 
 (prot=UNICAST2)
 30k: Average of 2.52 requests / sec (25.18MB / sec), 397.15 ms /request 
 (prot=UNICAST2)
 50k: Average of 2.35 requests / sec (23.47MB / sec), 426.10 ms /request 
 (prot=UNICAST2)
 80k: Average of 1.29 requests / sec (12.94MB / sec), 772.78 ms /request 
 (prot=UNICAST2)
 200k: bad performance

 (as a reminder: 4 nodes in hyperion; for these results I've set the frag size 
 to 8k)

 I held the dot key for the duration of the test so you can see how long each 
 state application took, as the dots were inserted into the console at a constant 
 rate (lame ASCII chart). See attachments.

 Radim

 - Original Message -
 | From: Dan Berindei dan.berin...@gmail.com
 | To: infinispan -Dev List infinispan-dev@lists.jboss.org
 | Sent: Monday, December 24, 2012 8:01:26 AM
 | Subject: Re: [infinispan-dev] MFC/UFC credits in default config
 |
 |
 |
 |
 | This is weird, I would have expected problems with the last message,
 | but not in the middle of the sequence (that's why I suggested
 | sending only 1 message). Maybe we need an even lower
 | max_credits...
 |
 | Merry Christmas to you, too!
 |
 | Dan
 | On 21 Dec 2012 16:41, Radim Vansa  rva...@redhat.com  wrote:
 |
 |
 | Hi Dan,
 |
 | I have run the test on 4 nodes in hyperion (just for the start to see
 | how it will behave) but with 100 messages (1 message is nothing for
 | a statistician) each 10MB and I see a weird behaviour - there are
 | about 5-10 messages received in fast succession and then nothing is
 | received for several seconds. I experience this behaviour
 | for both 200k and 500k credits. Is this really how it should
 | perform?
 |
 | Merry Christmas and tons of snow :)
 |
 | Radim
 |
 | ☃
 |
 | - Original Message -
 | | From: Dan Berindei  dan.berin...@gmail.com 
 | | To: infinispan -Dev List  infinispan-dev@lists.jboss.org 
 | | Sent: Tuesday, December 18, 2012 8:57:08 AM
 | | Subject: Re: [infinispan-dev] MFC/UFC credits in default config
 | |
 | |
 | | Hi Radim
 | |
 | | If you run the test with only 2 nodes and FC disabled, it's going
 | | to
 | | perform even better. But then as you increase the number of nodes,
 | | the speed with no FC will drop dramatically (when we didn't have
 | | RSVP enabled, with only 3 nodes, it didn't manage to send 1 x 10MB
 | | message in 10 minutes).
 | |
 | | Please run the tests with as many nodes as possible and just 1
 | | message x 10MB. If 500k still performs better, create a JIRA to
 | | change the default.
 | |
 | | Cheers
 | | Dan
 | |
 | |
 | |
 | |
 | |
 | | On Mon, Dec 17, 2012 at 4:55 PM, Radim Vansa  rva...@redhat.com 
 | | wrote:
 | |
 | |
 | | Sorry I haven't specified the amount, I am a stupido... my tests
 | | are
 | | working with 500k credits.
 | |
 | | UUPerf (JGroups 3.2.4.Final-redhat-1) from one computer in perflab
 | | to
 | | another, 2 threads (default), 1000x sends 10MB message (default
 | | chunkSize = 1 * our entry size is usually 1kB) executed 3x
 | |
 | | 200k: Average of 6.02 requests / sec (60.19MB / sec), 166.13 ms
 | | /request (prot=UNICAST2)
 | | Average of 5.61 requests / sec (56.09MB / sec), 178.30 ms /request
 | | (prot=UNICAST2)
 | | Average of 5.49 requests / sec (54.94MB / sec), 182.03 ms /request
 | | (prot=UNICAST2)
 | |
 | | 500k: Average of 7.93 requests / sec (79.34MB / sec), 126.04 ms
 | | /request (prot=UNICAST2)
 | | Average of 8.18 requests / sec (81.82MB / sec), 122.23 ms /request
 | | (prot=UNICAST2)
 | | Average of 8.41 requests / sec (84.09MB / sec), 118.92 ms /request
 | | (prot=UNICAST2)
 | |
 | | Can you also reproduce such results? I think that suggests that 500k
 | | really does perform better.
 | |
 | | Radim
 | |
 | |
 | |
 | |
 | | - Original Message -
 | | | From: Dan Berindei  dan.berin...@gmail.com 
 | | | To: infinispan -Dev List  infinispan-dev@lists.jboss.org 
 | | | Sent: Monday, December 17, 2012 12:43:37 PM
 | | | Subject: Re: [infinispan-dev] MFC/UFC credits in default config
 | | |
 | | |
 | | |
 | | |
 | | |
 | | | On Mon, Dec 17, 2012 at 1:28 PM, Bela Ban  b...@redhat.com 
 | | | wrote:
 | | |
 | | |
 | | | Dan reduced those values to 200K, IIRC it was for UUPerf, which
 | | | behaved
 | | | best with 200K. I don't know if this is still needed. Dan?
 | | |
 | | |
 | | |
 | | |
 | | | I haven't run UUPerf in a while...
 | | |
 | | |
 | | |
 | | |
 | | | On 12/17/12 12:19 PM, Radim Vansa wrote:
 | | |  Hi,
 | | | 
 | | |  recently I have synchronized our jgroups configuration with the
 | | |  default one shipped with Infinispan
 | | |  (core/src/main/resources/jgroups-(tcp|udp).xml) and it has
 | | |  

Re: [infinispan-dev] MFC/UFC credits in default config

2013-01-03 Thread Dan Berindei
Bela, I'm pretty sure these tests use UDP. I'd be really surprised if we
could improve TCP performance by lowering max_credits.

We do have a JIRA to change the state transfer behaviour to request state
from only a few nodes at a time (perhaps only 1):
https://issues.jboss.org/browse/ISPN-2580. Adrian is working on it ATM, and
once it's integrated it would make UUPerf performance largely irrelevant.

Even if Adrian's fix doesn't make it into Final, I think a max_credits of
only 20k would impact performance in the stable state (i.e. what UPerf is
testing). So maybe we can find a workaround, like lowering Infinispan's
stateTransfer.chunkSize.
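
For illustration, that knob in the cache configuration would look roughly like this (assuming the 5.x XML schema; the cache name and the value are placeholders, not recommendations):

<namedCache name="exampleDistCache">
   <clustering mode="distribution">
      <!-- chunkSize = number of cache entries sent per state transfer chunk -->
      <stateTransfer fetchInMemoryState="true" chunkSize="512"/>
   </clustering>
</namedCache>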

I wonder if we could automate UPerf and UUPerf, like RadarGun does (or
maybe make them RadarGun test scenarios?), so we can gather more data
points. At the moment there's a lot of manual work involved in running the
tests with all the possible configurations (TCP/UNICAST2, TCP/UNICAST2/UFC,
UDP/UNICAST, UDP/UNICAST/UFC, UDP/UNICAST2/UFC, UDP/UNICAST2/UFC/RSVP, each
protocol with several tweakable attributes) and figuring out which
configuration is best.

Cheers
Dan



On Thu, Jan 3, 2013 at 12:42 PM, Bela Ban b...@redhat.com wrote:

 Let's make sure though that we have a meaningful default that's not
 optimized for an edge case. Also, if we use TCP, we can remove UFC from
 the config, as TCP already performs point-to-point flow control.


Re: [infinispan-dev] MFC/UFC credits in default config

2013-01-03 Thread Radim Vansa
| 
| 
| Bela, I'm pretty sure these tests use UDP. I'd be really surprised if
| we could improve TCP performance by lowering max_credits.

True, they do.

| 
| We do have a JIRA to change the state transfer behaviour to request
| state from only a few nodes at a time (perhaps only 1):
| https://issues.jboss.org/browse/ISPN-2580 . Adrian is working on it
| ATM, and once it's integrated it would make UUPerf performance
| largely irrelevant.

I don't think so; I expect that e.g. pulling state transfer from 3 nodes is a 
perfectly reasonable scenario, and as these tests are run with 4 nodes, this is the case.

| 
| Even if Adrian's fix doesn't make it into Final, I think a
| max_credits of only 20k would impact performance in the stable
| state (i.e. what UPerf is testing). So maybe we can find a
| workaround, like lowering Infinispan's stateTransfer.chunkSize.

Yeah, I have used 10MB messages for testing; I should do that for smaller ones 
as well.

| 
| I wonder if we could automate UPerf and UUPerf, like RadarGun does
| (or maybe make them RadarGun test scenarios?), so we can gather more
| data points. At the moment there's a lot of manual work involved in
| running the tests with all the possible configurations
| (TCP/UNICAST2, TCP/UNICAST2/UFC, UDP/UNICAST, UDP/UNICAST/UFC,
| UDP/UNICAST2/UFC, UDP/UNICAST2/UFC/RSVP, each protocol with several
| tweak-able attributes) and figuring out which configuration is
| best.

This sounds good; using the JGroups cache wrapper I could just do a GET on one 
slave in a loop, right? The only modification required is that JGroupsWrapper.get 
should do dispatcher.callRemoteMethods(...) with all members instead of just a 
single invocation.
I think I could grab some time for this next week.
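
A minimal sketch of that modification, assuming a wrapper that already holds a JGroups RpcDispatcher with a GET method registered under a method id (the class name, GET_METHOD_ID and the timeout are placeholders, not the actual RadarGun code):

import java.util.List;
import org.jgroups.Address;
import org.jgroups.blocks.MethodCall;
import org.jgroups.blocks.RequestOptions;
import org.jgroups.blocks.ResponseMode;
import org.jgroups.blocks.RpcDispatcher;
import org.jgroups.util.RspList;

public class AllMembersGet {
    // Placeholder id under which the GET method is assumed to be registered.
    private static final short GET_METHOD_ID = 1;

    // Instead of invoking GET on a single owner, call it on every member of
    // the current view, so each node has to answer the request.
    public static RspList<Object> getFromAllMembers(RpcDispatcher dispatcher, Object key) throws Exception {
        List<Address> members = dispatcher.getChannel().getView().getMembers();
        MethodCall get = new MethodCall(GET_METHOD_ID, key);
        RequestOptions opts = new RequestOptions(ResponseMode.GET_ALL, 10000); // sync, 10s timeout
        return dispatcher.callRemoteMethods(members, get, opts);
    }
}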

Radim


Re: [infinispan-dev] MFC/UFC credits in default config

2013-01-03 Thread Dan Berindei
On Thu, Jan 3, 2013 at 3:46 PM, Radim Vansa rva...@redhat.com wrote:

 |
 |
 | Bela, I'm pretty sure these tests use UDP. I'd be really surprised if
 | we could improve TCP performance by lowering max_credits.

 True, they do.


So you are running the tests with TCP?


 |
 | We do have a JIRA to change the state transfer behaviour to request
 | state from only a few nodes at a time (perhaps only 1):
 | https://issues.jboss.org/browse/ISPN-2580 . Adrian is working on it
 | ATM, and once it's integrated it would make UUPerf performance
 | largely irrelevant.

 I don't think so; I expect that e.g. pulling state transfer from 3 nodes is a
  perfectly reasonable scenario, and as these tests are run with 4 nodes, this
  is the case.


Based on the test results we have so far, I think it will be very hard to
come up with a configuration that performs better with 3 state transfer
sources than with 2 sources. That's even without considering the effects on
performance when there isn't a state transfer in progress.

So we could spend a lot of time on improving the performance with 3
sources, and never quite get to the 2-sources performance, or we could just
make 2 the default and not recommend changing the value. (We could also
hard-code the number of sources, but exposing the setting will make it
easier to test different values and confirm which one is best).


 |
 | Even if Adrian's fix doesn't make it into Final, I think a
 | max_credits of only 20k would impact performance in the stable
 | state (i.e. what UPerf is testing). So maybe we can find a
 | workaround, like lowering Infinispan's stateTransfer.chunkSize.

 Yeah, I have used 10MB messages for testing, I should do that for smaller
 ones as well.

 |
 | I wonder if we could automate UPerf and UUPerf, like RadarGun does
 | (or maybe make them RadarGun test scenarios?), so we can gather more
 | data points. At the moment there's a lot of manual work involved in
 | running the tests with all the possible configurations
 | (TCP/UNICAST2, TCP/UNICAST2/UFC, UDP/UNICAST, UDP/UNICAST/UFC,
 | UDP/UNICAST2/UFC, UDP/UNICAST2/UFC/RSVP, each protocol with several
 | tweak-able attributes) and figuring out which configuration is
 | best.

 This sounds good; using the JGroups cache wrapper I could just do a GET on one
  slave in a loop, right? The only modification required is that
  JGroupsWrapper.get should do dispatcher.callRemoteMethods(...) with all
  members instead of just a single invocation.
  I think I could grab some time for this next week.


I think to make it really like state transfer you'd have to keep one GET
target, but make all nodes pick the same target (e.g. the first node) and
make the key really big. Making all nodes targets would work as well, but
you'd have to do that on only one node to mimic a single joiner asking for
state.
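
Continuing the hypothetical sketch from Radim's message above (same imports and GET_METHOD_ID placeholder), the single-fixed-target variant could look roughly like this:

// Every node asks the same fixed target (the first member of the view) for a
// large value, so one node has to push a lot of data out, much like a single
// donor serving state.
public static Object getFromFirstMember(RpcDispatcher dispatcher, Object bigKey) throws Exception {
    Address target = dispatcher.getChannel().getView().getMembers().get(0);
    MethodCall get = new MethodCall(GET_METHOD_ID, bigKey);
    return dispatcher.callRemoteMethod(target, get, new RequestOptions(ResponseMode.GET_ALL, 10000));
}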




Re: [infinispan-dev] MFC/UFC credits in default config

2013-01-03 Thread Radim Vansa

| 
| | 
| | 
| | Bela, I'm pretty sure these tests use UDP. I'd be really surprised
| | if
| | we could improve TCP performance by lowering max_credits.
| 
| True, they do.
| 
| So you are running the tests with TCP?
| 

No, I have confirmed the first sentence. The tests use UDP.

| 
| | 
| | We do have a JIRA to change the state transfer behaviour to request
| | state from only a few nodes at a time (perhaps only 1):
| | https://issues.jboss.org/browse/ISPN-2580 . Adrian is working on it
| 
| | ATM, and once it's integrated it would make UUPerf performance
| | largely irrelevant.
| 
| I don't think so; I expect that e.g. pulling state transfer from 3
| nodes is a perfectly reasonable scenario, and as these tests are run
| with 4 nodes, this is the case.
| 
| 
| 
| Based on the test results we have so far, I think it will be very
| hard to come up with a configuration that performs better with state
| transfer 3 sources than with 2 sources. That's even without
| considering the effects on performance when there isn't a state
| transfer in progress.
| 
| 
| So we could spend a lot of time on improving the performance with 3
| sources, and never quite get to the 2-sources performance, or we
| could just make 2 the default and not recommend changing the
| value. (We could also hard-code the number of sources, but exposing
| the setting will make it easier to test different values and confirm
| which one is best).
| 

I must agree, or rather the results are the only right judge, not any of our 
(~my) assumptions.

| 
| | 
| | Even if Adrian's fix doesn't make it into Final, I think a
| | max_credits of only 20k would impact performance in the stable
| | state (i.e. what UPerf is testing). So maybe we can find a
| | workaround, like lowering Infinispan's stateTransfer.chunkSize.
| 
| Yeah, I have used 10MB messages for testing, I should do that for
| smaller ones as well.
| 
| 
| | 
| | I wonder if we could automate UPerf and UUPerf, like RadarGun does
| | (or maybe make them RadarGun test scenarios?), so we can gather
| | more
| | data points. At the moment there's a lot of manual work involved in
| | running the tests with all the possible configurations
| | (TCP/UNICAST2, TCP/UNICAST2/UFC, UDP/UNICAST, UDP/UNICAST/UFC,
| | UDP/UNICAST2/UFC, UDP/UNICAST2/UFC/RSVP, each protocol with several
| | tweak-able attributes) and figuring out which configuration is
| | best.
| 
| This sounds good; using the JGroups cache wrapper I could just do a GET on
| one slave in a loop, right? The only modification required is that
| JGroupsWrapper.get should do dispatcher.callRemoteMethods(...)
| with all members instead of just a single invocation.
| I think I could grab some time for this next week.
| 
| 
| I think to make it really like state transfer you'd have to keep one
| GET target, but make all nodes pick the same target (e.g. the first
| node) and make the key really big. Making all nodes targets would
| work as well, but you'd have to do that on only one node to mimic a
| single joiner asking for state.
| 

Single joiner flooded by data was the problem, wasn't it? We could test both, 
of course: a single joiner joining a big cluster, and superelasticity, where many 
nodes try to request data from a single node. Still, the second one is not 
problematic for flow control, because the source will supply the data as fast as 
it can, but each node only has to handle a fraction of that data.

Radim
