Where *both* TotalSchedulers and OnlineSchedulers should be set to 50% of your 
logical cores.
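A minimal sketch of that rule, assuming GNU coreutils' `nproc` is available (the 4-scheduler floor comes from the leveldb wiki guidance quoted below):

```shell
# Compute the recommended +S value: half the logical core count,
# with a floor of 4 per the "go no lower than +S 4:4" guidance.
logical=$(nproc)            # total logical cores, hyperthreads included
sched=$(( logical / 2 ))
if (( sched < 4 )); then sched=4; fi
echo "+S ${sched}:${sched}"
```

For the 8-thread i7 discussed below this yields "+S 4:4"; for the 24-CPU example, "+S 12:12".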

Sam

On 14 Aug 2013, at 4:14PM, Sean Cribbs <[email protected]> wrote:

> http://www.erlang.org/doc/man/erl.html#+S
> 
> +S TotalSchedulers:OnlineSchedulers
> 
> 
> On Wed, Aug 14, 2013 at 9:57 AM, Guido Medina <[email protected]> 
> wrote:
> Hi Matthew,
> 
> It is a bit confusing, because let's say "+S C:T" is defined as:
> 
> C = physical cores?
> T = total threads, or total minus physical cores?
> 
> Is it the physical core count, the total thread count, or some combination? 
> That's the confusing part. Say you have a server with 8 physical cores and no 
> hyper-threading, so the total thread count is also 8: would that be "+S 4:4", 
> "+S 8:8" or "+S 8:0"?
> 
> Thanks,
> 
> Guido.
> 
> 
> On 14/08/13 15:41, Matthew Von-Maszewski wrote:
> "threads=8" is the key phrase … +S 4:4
> 
> On Aug 14, 2013, at 10:04 AM, Guido Medina <[email protected]> wrote:
> 
> For the following information should it be +S 4:4 or +S 4:8?
> 
> root@somehost# lshw -C processor
>   *-cpu
>        description: CPU
>        product: Intel(R) Core(TM) i7 CPU         930  @ 2.80GHz
>        vendor: Intel Corp.
>        physical id: 4
>        bus info: cpu@0
>        version: Intel(R) Core(TM) i7 CPU         930  @ 2.80GHz
>        serial: To Be Filled By O.E.M.
>        slot: CPU 1
>        size: 1600MHz
>        capacity: 1600MHz
>        width: 64 bits
>        clock: 133MHz
>        capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce 
> cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 
> ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good 
> nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 
> ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ida dtherm tpr_shadow vnmi 
> flexpriority ept vpid cpufreq
>        configuration: cores=4 enabledcores=4 threads=8
> 
> Thanks,
> 
> Guido.
> 
> On 14/08/13 01:38, Matthew Von-Maszewski wrote:
> ** The following is copied from Basho's leveldb wiki page:
> 
> https://github.com/basho/leveldb/wiki/Riak-tuning-1
> 
> 
> 
> Summary:
> 
> leveldb has higher read and write throughput in Riak if the Erlang 
> scheduler count is limited to half the number of CPU cores. Tests have 
> demonstrated throughput improvements of 15% to 80%.
> 
> The scheduler limit is set in the vm.args file:
> 
> +S x:x
> 
> where "x" is the number of schedulers Erlang may use. Erlang's default value 
> of "x" is the total number of CPUs in the system. For Riak installations 
> using leveldb, the recommendation is to set "x" to half the number of CPUs. 
> Virtual environments are not yet tested.
> 
> Example: for 24 CPU system
> 
> +S 12:12
> 
> Discussion:
> 
> We have tested a limited number of CPU configurations and customer loads. In 
> all cases, there is a performance increase when the +S option is added to the 
> vm.args file to reduce the number of Erlang schedulers. The working 
> hypothesis is that the Erlang schedulers perform enough "busy wait" work that 
> they force context switches away from leveldb even when leveldb is actually 
> the only system task with real work to do.
> 
> The tests included 8 CPU (no hyper-threading, physical cores only) and 24 CPU 
> (12 physical cores with hyper-threading) systems. All were 64-bit Intel 
> platforms. Generalized findings:
> 
>         • servers running a higher number of vnodes (64) had larger performance 
> gains than those with fewer (8)
>         • servers running SSD arrays had larger performance gains than those 
> running SATA arrays
>         • Get and Write operations showed performance gains; 2i query 
> operations (leveldb iterators) were unchanged
>         • not recommended for servers with fewer than 8 CPUs (go no lower than 
> +S 4:4)
> 
> Performance improvements were as high as 80% over extended, heavily loaded 
> intervals on servers with SSD arrays and 64 vnodes. No test resulted in worse 
> performance due to the addition of +S x:x.
> 
> The +S x:x configuration change does not have to be applied to an entire Riak 
> cluster simultaneously. The change may be applied to a single server for 
> verification. Steps: update the vm.args file, then restart the Riak node. 
> Changing the scheduler count from the Erlang command line at runtime was 
> ineffective.
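> The steps above amount to a one-line vm.args change; a sketch for the 24-CPU 
> example (the vm.args location varies by packaging and is an assumption here):
> 
> ```
> ## vm.args: limit Erlang to 12 of 24 schedulers
> +S 12:12
> ```
> 
> followed by a restart of that Riak node.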
> 
> This configuration change has been running in at least one large, 
> multi-datacenter production environment for several months.
> 
> 
> _______________________________________________
> riak-users mailing list
> [email protected]
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> 
> -- 
> Sean Cribbs <[email protected]>
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/

