On Thu, Oct 1, 2009 at 7:51 PM, Mike Belshe <[email protected]> wrote:
> I think we have a new constraint which we previously were ignoring.
> Spikes in memory usage of 500MB (which could be released) are no longer
> tolerable, ever.  (I'd like to get agreement on how to define this - some
> amount of cached memory is okay, of course, but 500MB is not).
> Some of the algorithms listed have these spikes, others do not.  It is true
> that algorithms which are more stable (less spikiness) will generally have
> some performance tradeoff.

Data point.  Peak Working Set for various allocators:

jemalloc:                                  315.304K
current tcmalloc:                          569.588K
tcmalloc which decommits all free pages:   320.512K

So not decommitting gives less than 2x overhead.  That's definitely
bad, but how many sites are that demanding?  For example (a completely
unscientific benchmark: I opened my corp GMail and navigated through a
couple of threads, labels, and quick searches), the numbers are
51.324K/65.012K (the two most memory-consuming of 4 processes) with the
always-decommitting tcmalloc vs. 41.236K/65.768K for the
never-decommitting tcmalloc.  10M might be a tolerable price.
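
(By "decommit" I mean returning the physical pages of a free span to
the OS while keeping the address range reserved, roughly like the
sketch below.  This is not the actual tcmalloc code, just an
illustration; page alignment and error handling are omitted.)

  #include <cstddef>
  #if defined(_WIN32)
  #include <windows.h>
  #else
  #include <sys/mman.h>
  #endif

  // Give the physical pages backing [start, start + length) back to
  // the OS, but keep the virtual range reserved so the allocator can
  // commit it again the next time the span is handed out.
  static void DecommitRange(void* start, size_t length) {
  #if defined(_WIN32)
    // After MEM_DECOMMIT the pages must be committed again
    // (VirtualAlloc with MEM_COMMIT) before they can be touched.
    VirtualFree(start, length, MEM_DECOMMIT);
  #else
    // The kernel may reclaim the pages lazily; the mapping stays in
    // place and the next access faults in fresh zero pages.
    madvise(start, length, MADV_DONTNEED);
  #endif
  }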

yours,
anton.


> Mike
>
> On Thu, Oct 1, 2009 at 6:14 AM, Vitaly Repeshko <[email protected]>
> wrote:
>>
>> On Thu, Oct 1, 2009 at 4:44 PM, Anton Muhin <[email protected]> wrote:
>> > Guys, just to summarize the discussion.
>> >
>> > There are several ways we can tweak tcmalloc:
>> >
>> > 1) decommit everything that is free;
>> > 2) keep spans in a mixed state (some pages committed, some not;
>> > coalescing neither commits nor decommits pages). That should address
>> > Jim's main argument;
>> > 3) commit on coalescing, but purge aggressively (like WebKit does:
>> > once every 5 seconds unless something else has been committed in the
>> > meantime, or during idle pauses); see the sketch right after this
>> > list.
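>> >
>> > A minimal sketch of the purge in 3), with made-up names and a POSIX
>> > madvise standing in for the platform decommit call (the real
>> > scavenger would also need locking around the span list):
>> >
>> >   #include <sys/mman.h>
>> >   #include <cstddef>
>> >   #include <list>
>> >
>> >   struct Span { void* start; size_t length; bool committed; };
>> >
>> >   // Spans currently sitting on the allocator's free lists.
>> >   static std::list<Span> free_spans;
>> >   // Set whenever the allocator has to commit memory for a request.
>> >   static bool committed_recently = false;
>> >
>> >   // Run every ~5 seconds or from an idle pause.  If we had to
>> >   // commit memory since the last run, the working set is still in
>> >   // flux, so skip this round instead of paying a decommit/recommit
>> >   // ping-pong.
>> >   void ScavengeFreeSpans() {
>> >     if (committed_recently) {
>> >       committed_recently = false;
>> >       return;
>> >     }
>> >     for (std::list<Span>::iterator it = free_spans.begin();
>> >          it != free_spans.end(); ++it) {
>> >       if (it->committed) {
>> >         madvise(it->start, it->length, MADV_DONTNEED);
>> >         it->committed = false;
>> >       }
>> >     }
>> >   }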
>> >
>> > To my knowledge, performance-wise 1) is slower (how much slower we
>> > still have to measure), and 2) is slightly faster than 3) (though
>> > that might be just statistical noise).  Of course, my benchmark is
>> > quite special.
>> >
>> > Memory-wise I think 2) and 3) with aggressive scavenging should be
>> > mostly the same: we could keep a higher number of committed pages
>> > than in 1), but only for short periods of time, and I'm not
>> > convinced that's a bad thing.
>> >
>> > Overall I'm pro 2) and 3), but I am definitely biased.
>> >
>> > What do you think?
>> [...]
>>
>> I'd like to explain in what way decommitting everything (option #1
>> above) is slow. It hurts us most during garbage collection in V8.
>> When a DOM wrapper is collected, the C++ DOM object it wraps gets
>> deleted and we decommit the memory (usually only to commit some of it
>> again later). So we add extra low-level memory management cost to
>> garbage collections, which ideally should be as fast as possible to
>> avoid hurting the interactivity of the browser. This is one of the
>> reasons to avoid aggressive decommitting.
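>>
>> (To make the cost concrete: with #1 the free path itself talks to the
>> OS, so a GC sweep that deletes many DOM objects turns into a burst of
>> syscalls.  A rough sketch with invented names, using madvise as the
>> decommit call:)
>>
>>   #include <sys/mman.h>
>>   #include <cstddef>
>>
>>   struct Span { void* start; size_t length; };
>>
>>   // Option #1: a span that becomes completely free is decommitted
>>   // immediately.  During a GC sweep this runs once per freed span,
>>   // and the pages are often committed again shortly afterwards.
>>   void ReturnSpanToFreeList(Span span) {
>>     // ... link the span back onto the allocator's free list ...
>>     madvise(span.start, span.length, MADV_DONTNEED);
>>   }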
>>
>> So I think having something like #2 with periodic on-idle decommits
>> like in #3 should be good both for performance in benchmarks and real
>> apps and for memory usage.
>>
>>
>> -- Vitaly
>
>
