Let's make sure, though, that we have a meaningful default that's not 
optimized for an edge case. Also, if we use TCP, we can remove UFC from 
the config, since TCP already performs point-to-point flow control.
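
For illustration, a TCP stack without UFC might look like this fragment 
(a sketch only, not the shipped file; the max_credits value is a 
placeholder, not a tested recommendation):

    <!-- UFC removed: TCP's own flow control already covers
         point-to-point traffic. MFC stays, because multicasts over a
         TCP stack still need credit-based flow control. -->
    <MFC max_credits="2M" min_threshold="0.4"/>
    <!-- <UFC .../> intentionally omitted -->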

On 1/3/13 11:29 AM, Radim Vansa wrote:
> 20k credits seems to be the best choice for this test:
>
> 10k: bad performance
> 20k: Average of 2.79 requests / sec (27.87MB / sec), 358.81 ms /request 
> (prot=UNICAST2)
> 30k: Average of 2.52 requests / sec (25.18MB / sec), 397.15 ms /request 
> (prot=UNICAST2)
> 50k: Average of 2.35 requests / sec (23.47MB / sec), 426.10 ms /request 
> (prot=UNICAST2)
> 80k: Average of 1.29 requests / sec (12.94MB / sec), 772.78 ms /request 
> (prot=UNICAST2)
> 200k: bad performance
>
> (for the record: 4 nodes in hyperion; for these results I set the frag 
> size to 8k)
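>
> For reference, the fragment I was varying looks roughly like this (a 
> sketch, assuming a stack with FRAG2 and both UFC and MFC; other 
> attributes left at their defaults):
>
>   <FRAG2 frag_size="8000"/>
>   <UFC max_credits="20K" min_threshold="0.4"/>
>   <MFC max_credits="20K" min_threshold="0.4"/>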
>
> I held the dot key down for the duration of the test, so you can see how 
> long each state application took: the dots were inserted into the console 
> at a constant rate (lame ASCII chart). See attachments.
>
> Radim
>
> ----- Original Message -----
> | From: "Dan Berindei" <dan.berin...@gmail.com>
> | To: "infinispan -Dev List" <infinispan-dev@lists.jboss.org>
> | Sent: Monday, December 24, 2012 8:01:26 AM
> | Subject: Re: [infinispan-dev] MFC/UFC credits in default config
> |
> | This is weird; I would have expected problems with the last message,
> | but not in the middle of the sequence (that's why I suggested
> | sending only 1 message). Maybe we need an even lower
> | max_credits...
> |
> | Merry Christmas to you, too!
> |
> | Dan
> | On 21 Dec 2012 16:41, "Radim Vansa" < rva...@redhat.com > wrote:
> |
> |
> | Hi Dan,
> |
> | I ran the test on 4 nodes in hyperion (just for a start, to see
> | how it behaves), but with 100 messages (1 message is nothing for
> | a statistician), each 10MB, and I see weird behaviour: there are
> | about 5-10 messages received in fast succession, and then
> | nothing is received for several seconds. I see this behaviour
> | with both 200k and 500k credits. Is this really how it should
> | perform?
> |
> | Merry Christmas and tons of snow :)
> |
> | Radim
> |
> | ☃
> |
> | ----- Original Message -----
> | | From: "Dan Berindei" < dan.berin...@gmail.com >
> | | To: "infinispan -Dev List" < infinispan-dev@lists.jboss.org >
> | | Sent: Tuesday, December 18, 2012 8:57:08 AM
> | | Subject: Re: [infinispan-dev] MFC/UFC credits in default config
> | |
> | |
> | | Hi Radim
> | |
> | | If you run the test with only 2 nodes and FC disabled, it's going
> | | to
> | | perform even better. But then as you increase the number of nodes,
> | | the speed with no FC will drop dramatically (when we didn't have
> | | RSVP enabled, with only 3 nodes, it didn't manage to send 1 x 10MB
> | | message in 10 minutes).
> | |
> | | Please run the tests with as many nodes as possible and just 1
> | | message x 10MB. If 500k still performs better, create a JIRA to
> | | change the default.
> | |
> | | Cheers
> | | Dan
> | |
> | | On Mon, Dec 17, 2012 at 4:55 PM, Radim Vansa < rva...@redhat.com >
> | | wrote:
> | |
> | |
> | | Sorry, I didn't specify the amount, stupid me... my tests are
> | | working with 500k credits.
> | |
> | | UUPerf (JGroups 3.2.4.Final-redhat-1) from one computer in perflab
> | | to another, 2 threads (default), 1000 sends of a 10MB message
> | | (default chunkSize = 10000, and our entry size is usually 1kB,
> | | hence 10MB), executed 3x:
> | |
> | | 200k: Average of 6.02 requests / sec (60.19MB / sec), 166.13 ms
> | | /request (prot=UNICAST2)
> | | Average of 5.61 requests / sec (56.09MB / sec), 178.30 ms /request
> | | (prot=UNICAST2)
> | | Average of 5.49 requests / sec (54.94MB / sec), 182.03 ms /request
> | | (prot=UNICAST2)
> | |
> | | 500k: Average of 7.93 requests / sec (79.34MB / sec), 126.04 ms
> | | /request (prot=UNICAST2)
> | | Average of 8.18 requests / sec (81.82MB / sec), 122.23 ms /request
> | | (prot=UNICAST2)
> | | Average of 8.41 requests / sec (84.09MB / sec), 118.92 ms /request
> | | (prot=UNICAST2)
> | |
> | | Can you reproduce such results as well? I think this suggests that
> | | 500k really does behave better.
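> | |
> | | In config terms, these runs correspond to something like this (a
> | | sketch; assuming both protocols in the stack, other attributes at
> | | their defaults):
> | |
> | |   <UFC max_credits="500K" min_threshold="0.4"/>
> | |   <MFC max_credits="500K" min_threshold="0.4"/>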
> | |
> | | Radim
> | |
> | | ----- Original Message -----
> | | | From: "Dan Berindei" < dan.berin...@gmail.com >
> | | | To: "infinispan -Dev List" < infinispan-dev@lists.jboss.org >
> | | | Sent: Monday, December 17, 2012 12:43:37 PM
> | | | Subject: Re: [infinispan-dev] MFC/UFC credits in default config
> | | |
> | | | On Mon, Dec 17, 2012 at 1:28 PM, Bela Ban < b...@redhat.com >
> | | | wrote:
> | | |
> | | |
> | | | Dan reduced those values to 200K; IIRC it was for UUPerf, which
> | | | behaved best with 200K. I don't know if this is still needed.
> | | | Dan?
> | | |
> | | | I haven't run UUPerf in a while...
> | | |
> | | | On 12/17/12 12:19 PM, Radim Vansa wrote:
> | | | > Hi,
> | | | >
> | | | > recently I synchronized our jgroups configuration with the
> | | | > default one shipped with Infinispan
> | | | > (core/src/main/resources/jgroups-(tcp|udp).xml), and it turned
> | | | > out that 200k credits in UFC/MFC (I keep the two values in sync)
> | | | > is not enough even for our smallest resilience test (killing one
> | | | > of four nodes). The state transfer was often blocked waiting for
> | | | > more credits, which meant it did not complete within the time
> | | | > limit.
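> | | | >
> | | | > For reference, the lines in question in the default stack look
> | | | > roughly like this (a sketch; the exact attribute set may differ
> | | | > between versions):
> | | | >
> | | | >   <UFC max_credits="200K" min_threshold="0.4"/>
> | | | >   <MFC max_credits="200K" min_threshold="0.4"/>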
> | | | > Therefore, I'd like to suggest increasing the amount of credits
> | | | > in the default configuration as well, because we simply cannot
> | | | > use the lower setting, and it's preferable to keep the
> | | | > configurations as close as possible. The only settings we need
> | | | > to keep different are the thread pool sizes, addresses and
> | | | > ports.
> | | | >
> | | |
> | | |
> | | | What value would you like to use instead?
> | | |
> | | | Can you try UUPerf with 200k and your proposed configuration and
> | | | compare the results?
> | | |
> | | | Cheers
> | | | Dan
> | | |
> | | |

-- 
Bela Ban, JGroups lead (http://www.jgroups.org)

_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
