There were some issues around IdleNotification that made it prefer
forcing a full, non-incremental GC (which makes perfect sense if the
embedder only calls IdleNotification when it really is idle for a long
time, and not so much sense when it is not).

We have decreased the aggressiveness of that behavior, so it might help
you and remove the stalls.

Node is pretty aggressive about calling IdleNotification when it is not
actually idle, so you might be even better off if you disable that
altogether and just let allocations drive the GC.
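
For illustration, here is a rough C++ sketch of the embedder-side
policy I mean (this is not Node's actual source; the hook name and the
"really idle" check are made up): only hand V8 idle time when the
process genuinely has nothing else to do, and otherwise let allocation
pressure trigger collections.

    #include <v8.h>

    // Hypothetical embedder hook, called when the event loop has
    // nothing queued. Only give V8 idle time if we have really been
    // idle for a while; otherwise just let allocations drive the GC.
    void OnLoopIdle(bool really_idle_for_a_while) {
      if (!really_idle_for_a_while)
        return;
      // IdleNotification() returns true once V8 has no GC work left,
      // so keep feeding it idle slices until then.
      while (!v8::V8::IdleNotification()) {
      }
    }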

Another thing: you might be allocating objects with a high scavenger
survival rate, which does not allow the incremental marker to keep up
and finish marking before the heap becomes too big.
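
As a rough analogy in plain C++ (illustrative only, not V8 or Node
code; the type and the counts are made up), the difference between an
allocation pattern where almost everything dies young and one where
almost everything survives looks like this:

    #include <memory>
    #include <string>
    #include <vector>

    struct Entity { std::string state; };

    // Mostly short-lived garbage: the analogue of temporaries that die
    // young, which a scavenge reclaims without promoting anything.
    void MostlyDiesYoung() {
      for (int i = 0; i < 100000; ++i) {
        auto scratch = std::make_unique<Entity>();
        scratch->state = "temporary";
      }  // every Entity is dropped right away
    }

    // High survival rate: nearly everything allocated stays reachable,
    // so in a generational GC these objects get promoted to the old
    // generation and the incremental marker has to keep pace with the
    // growing heap.
    std::vector<std::unique_ptr<Entity>> g_world;
    void MostlySurvives() {
      for (int i = 0; i < 100000; ++i) {
        g_world.push_back(std::make_unique<Entity>());
      }
    }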

Anyway, any GC problem requires deep investigation and tweaking of GC
parameters. There is no GC that fits every allocation pattern.
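
For what it's worth, V8 exposes its GC parameters as flags, and an
embedder can set them programmatically with V8::SetFlagsFromString.
The sketch below is only an example; the flag names and sensible
values depend on the V8 version you are running, so verify them
against your own build.

    #include <cstring>
    #include <v8.h>

    // Example only: flag names and defaults differ between V8
    // versions, so check them against the build you actually run.
    void ApplyGcFlags() {
      // Log every collection to see which ones cause the stalls.
      const char trace[] = "--trace_gc";
      v8::V8::SetFlagsFromString(trace,
                                 static_cast<int>(std::strlen(trace)));

      // Raise the old-space limit (in MB): more memory, fewer full GCs.
      const char old_space[] = "--max_old_space_size=1024";
      v8::V8::SetFlagsFromString(old_space,
                                 static_cast<int>(std::strlen(old_space)));
    }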

--
Vyacheslav Egorov


On Sun, Jul 15, 2012 at 11:13 PM, Jimb Esser <[email protected]> wrote:
> Just to add some anecdotal experience to the contrary... admittedly we're
> still on node 0.6, and it sounds like the GC has had a little love between
> then and the latest version, so this may not be totally relevant.
>
> We do no giant allocations, just lots of JS objects, some of which are
> constantly being modified (using a native physics module to modify the JS
> objects and synchronize them to other servers).  I'm looking at one specific
> node process that's been running 20 minutes or so (we don't let them run
> much longer than that, because the GC starts going crazy, and it takes less
> time to serialize the entire state of all objects into a new process than a
> single GC takes at that point).  Its JS heap size is about 400 MB (used and
> total).  For running a manual garbage collect, since getting command-line
> arguments on our launched processes is a pain, I just use the one exposed by
> the mtrace native module; it just calls "while (!V8::IdleNotification());",
> but I'm assuming that's effectively the same thing.
>
> Anyway, running a manual garbage collect on this server took 1470ms.  Also,
> looking at the logs, about once every 5-10 seconds, the server stalls for
> around 1.4s, which the profiler shows as time spent in a garbage collect.
> This is definitely contrary to the comments stated above (we allocate many
> small objects, no giant objects, and it regularly stalls for a full GC).  We
> usually see times of around 500ms for GCs, but, as I said, this particular
> process has been running longer than most of ours.
>
> Side note: this has been said before, but it's worth repeating. Don't
> use node for hard-real-time apps.  We are, and it's kind of working,
> but it's rather insane, and the GC is really not happy with us.  If it
> weren't so easy to develop on, we would have switched to an all-native
> server for this part of our server stack long ago ;).
>
>   Jimb Esser
>   Cloud Party, Inc
>
>
>>  If you allocate many objects it should not be a problem for V8's
>> incremental GC
