Ooh, looks familiar. We did some work on the stats aggregator recently.

Sent from the ocean floor
On 13 Oct 2012, at 14:03, Pekka Olavi <[email protected]> wrote:

> Hello again, list! I upgraded to R15B02 and the problem persists. I
> started couchdb with -i (interactive mode) and this is what I found
> out with etop:start().:
>
> ========================================================================================
>  nonode@nohost                                                                  12:47:59
>  Load:  cpu         4               Memory:  total       11272    binary         378
>         procs     139                        processes    4560    code          3664
>         runq        0                        atom          242    ets            789
>
> Pid            Name or Initial Func    Time     Reds   Memory MsgQ Current Function
> ----------------------------------------------------------------------------------------
> <0.125.0>      couch_stats_aggregat     '-' ********  2543060    0 gen_server:loop/6
> <0.3275.0>     gstk:init/1              '-'  4569667    14856    0 gstk:loop/1
> <0.87.0>       timer_server             '-'  2367605    12376    0 gen_server:loop/6
> <0.3.0>        erl_prim_loader          '-'  1015895    88060    0 erl_prim_loader:loop
> <0.3276.0>     gstk_port_handler:in     '-'   516414   142180    0 gstk_port_handler:id
> <0.4634.0>     etop_gui:init/1          '-'   431074    21184    0 etop:update/1
> <0.3274.0>     gs_frontend              '-'   331345    12336    0 gs_frontend:loop/1
> <0.20.0>       code_server              '-'   266949   131884    0 code_server:loop/1
> <0.85.0>       disksup                  '-'    93042    27612    0 gen_server:loop/6
> <0.7.0>        application_controll     '-'    71123   213244    0 gen_server:loop/6
> <0.86.0>       couch_config             '-'    55385   372116    0 gen_server:loop/6
> <0.11.0>       kernel_sup               '-'    34725    61240    0 gen_server:loop/6
> <0.130.0>      proc_lib:init_p/5        '-'    25750    13348    0 prim_inet:accept0/2
> <0.131.0>      proc_lib:init_p/5        '-'    25750    13348    0 prim_inet:accept0/2
> <0.132.0>      proc_lib:init_p/5        '-'    25750    13348    0 prim_inet:accept0/2
> <0.133.0>      proc_lib:init_p/5        '-'    25750    13348    0 prim_inet:accept0/2
> <0.134.0>      proc_lib:init_p/5        '-'    25750    13348    0 prim_inet:accept0/2
> <0.135.0>      proc_lib:init_p/5        '-'    25750    13348    0 prim_inet:accept0/2
> <0.136.0>      proc_lib:init_p/5        '-'    25750    13348    0 prim_inet:accept0/2
> <0.137.0>      proc_lib:init_p/5        '-'    25750    13348    0 prim_inet:accept0/2
> ========================================================================================
>
> The reductions for couch_stats_aggregator are around 115400000. Is this
> normal behaviour for Couch? It reached this figure in a couple of
> hours. Any more ideas on where to look?
>
> .p
>
> On Wed, Oct 10, 2012 at 8:35 PM, Pekka Olavi <[email protected]> wrote:
>> Yeah, I wasn't thinking that either. But I'm hoping running a more
>> state-of-the-art Erlang would be a good try to fix this :-)
>>
>> On Wed, Oct 10, 2012 at 5:25 PM, Robert Newson <[email protected]> wrote:
>>> Sorry, I meant that R15B02's scheduler could be the cause of this, not
>>> a solution. Since you're not using it, it's obviously not that.
>>>
>>> If you're making no requests and /_active_tasks is empty and beam is
>>> still chewing CPU, then that's a bit of a puzzle.
>>>
>>> Sent from the ocean floor
>>>
>>> On 10 Oct 2012, at 14:37, Pekka Olavi <[email protected]> wrote:
>>>
>>>> Thanks Dave and Robert!
>>>>
>>>> Actually, of the ten threads spawned, two are doing this, one
>>>> about 10x more than the other. As far as I understand, the engine
>>>> should be doing nothing (it's almost empty, just one db with 2 design
>>>> docs and 4 normal ones), so this scheduling thingy Robert mentioned
>>>> seems like a good candidate to start with. I'm currently at R14B02;
>>>> I'll upgrade and see what happens with a newer version.
>>>>
>>>> .p
>>>>
>>>> On Wed, Oct 10, 2012 at 4:24 PM, Robert Newson <[email protected]> wrote:
>>>>> http://dieswaytoofast.blogspot.com.es/2012/09/cpu-utilization-in-erlang-r15b02.html?m=1
>>>>>
>>>>> Sent from the ocean floor
>>>>>
>>>>> On 10 Oct 2012, at 14:23, Robert Newson <[email protected]> wrote:
>>>>>
>>>>> I recall R15B02, perhaps earlier, introduced a scheduler that kept the
>>>>> CPU hot to eliminate delays when changing state from idle. I read that
>>>>> somewhere recently, but can't find the link.
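
[Editor's note: a quick way to see how fast the aggregator is actually accumulating work, independent of etop, is to sample its reduction count twice from the attached shell. A minimal sketch, assuming the process is locally registered as couch_stats_aggregator (as the etop output above suggests); the 60-second interval is arbitrary:

    1> Pid = whereis(couch_stats_aggregator).
    2> {reductions, R1} = erlang:process_info(Pid, reductions).
    3> timer:sleep(60000).
    4> {reductions, R2} = erlang:process_info(Pid, reductions).
    5> (R2 - R1) div 60.   %% approximate reductions per second

Millions of reductions per minute on an otherwise idle node would suggest the aggregator is polling far more often than expected; the [stats] section of default.ini controls its sample rate.]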
>>>>>
>>>>> Sent from the ocean floor
>>>>>
>>>>> On 10 Oct 2012, at 13:05, Dave Cottlehuber <[email protected]> wrote:
>>>>>
>>>>> On 10 October 2012 13:50, Pekka Olavi <[email protected]> wrote:
>>>>>> Hello folks, I run a couch on my desktop for testing purposes.
>>>>>> Everything else is fine and dandy and I'm actually loving developing
>>>>>> for the web with couch. There is one gripe though: the beam.smp
>>>>>> process is bleeding the CPU, for some reason I have no proficiency to
>>>>>> analyse.
>>>>>>
>>>>>> http://pastebin.com/eqtUyNZS
>>>>>>
>>>>>> I start the server with "sudo couchdb" and it shows up in my ps aux like so:
>>>>>>
>>>>>> /usr/lib/erlang/erts-5.8.3/bin/beam.smp -Bd -K true -A 4 -- -root
>>>>>> /usr/lib/erlang -progname erl -- -home /home/halides -- -noshell
>>>>>> -noinput -os_mon start_memsup false start_cpu_sup false
>>>>>> disk_space_check_interval 1 disk_almost_full_threshold 1 -sasl
>>>>>> errlog_type error -couch_ini /usr/local/etc/couchdb/default.ini
>>>>>> /usr/local/etc/couchdb/local.ini -s couch
>>>>>>
>>>>>> Any help appreciated!
>>>>>>
>>>>>> .p
>>>>>
>>>>> Hi Pekka,
>>>>>
>>>>> What is couchdb doing at the time? e.g. are you view indexing,
>>>>> whatever. Anything in couch.log when running in debug mode?
>>>>>
>>>>> None of this will fix the problem, but it might be helpful to note
>>>>> what OS you're running as well, and how Erlang was compiled (or
>>>>> rpm'd).
>>>>>
>>>>> Some of the flags you are using seem wrong if you are intending to
>>>>> enable kernel polling and increase the IO scheduler threads.
>>>>>
>>>>> -A 4 should be +A 4
>>>>> -K true should be +K true
>>>>> ditto for your -Bd option (should be +Bd)
>>>>>
>>>>> You might be interested in some of the tricks in here
>>>>> http://erlang-in-production.herokuapp.com/#16 from archaelus, and let
>>>>> us know what processes are hogging.
>>>>>
>>>>> After that, I think your best bet will be to hop on irc in #erlang or
>>>>> #erlounge and get some other smart ideas.
>>>>>
>>>>> A+
>>>>> Dave
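
[Editor's note: emulator flags take a `+` prefix; a `-` prefix makes them plain init arguments, which beam silently ignores for this purpose, so kernel polling and the extra async threads were never actually enabled. For reference, the invocation from the ps output above with the three options corrected would read (same arguments otherwise, not a tuning recommendation):

    /usr/lib/erlang/erts-5.8.3/bin/beam.smp +Bd +K true +A 4 -- -root
    /usr/lib/erlang -progname erl -- -home /home/halides -- -noshell
    -noinput -os_mon start_memsup false start_cpu_sup false
    disk_space_check_interval 1 disk_almost_full_threshold 1 -sasl
    errlog_type error -couch_ini /usr/local/etc/couchdb/default.ini
    /usr/local/etc/couchdb/local.ini -s couch

These flags normally come from the couchdb wrapper script's ERL_START_OPTIONS, so that is the place to fix them rather than the command line.]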
