On Thu, Oct 1, 2009 at 8:51 AM, Mike Belshe <[email protected]> wrote:
> I think we have a new constraint that we were previously ignoring.
> Spikes in memory usage of 500MB (which could be released) are no longer
> tolerable, ever.  (I'd like to get agreement on how to define this - some
> amount of cached memory is okay, of course, but 500MB is not.)

How about as a percentage of working set?  Maybe if you legitimately
had a 6GB working set, 500MB of cache might be OK.  For a 100MB
working set, what's an OK amount of cache overhead?  10MB?  25MB?
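
To make that concrete, here's a rough sketch of such a policy (the
fraction, floor, and names are placeholders, not anything tcmalloc
actually implements):

    #include <stddef.h>
    #include <algorithm>

    // Hypothetical policy: let the allocator keep at most some
    // fraction of the application's working set as committed but
    // unused (cached) memory, with a small floor so tiny processes
    // still get a useful cache.
    size_t MaxCacheBytes(size_t working_set_bytes) {
      const double kCacheFraction = 0.10;      // 10% of working set
      const size_t kMinCacheBytes = 8u << 20;  // 8MB floor
      const size_t budget =
          static_cast<size_t>(working_set_bytes * kCacheFraction);
      return std::max(budget, kMinCacheBytes);
    }

    // A 100MB working set would then allow ~10MB of cache; a 6GB
    // working set would allow ~600MB.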

Erik


> Some of the algorithms listed have these spikes, others do not.  It is true
> that more stable (less spiky) algorithms generally involve some performance
> tradeoff.
> Mike
>
> On Thu, Oct 1, 2009 at 6:14 AM, Vitaly Repeshko <[email protected]>
> wrote:
>>
>> On Thu, Oct 1, 2009 at 4:44 PM, Anton Muhin <[email protected]> wrote:
>> > Guys, just to summarize the discussion.
>> >
>> > There are several ways we can tweak tcmalloc:
>> >
>> > 1) decommit everything that is free;
>> > 2) keep spans in a mixed state (some pages committed, some not;
>> > coalescing neither commits nor decommits)---that should address Jim's
>> > main concern (rough sketch below);
>> > 3) commit on coalescing, but purge aggressively (like WebKit does:
>> > once every 5 seconds unless something else has been committed, or in
>> > idle pauses).
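>> >
>> > To make 2) concrete, here is a rough sketch (hypothetical names and
>> > layout, not our actual span code):
>> >
>> >   #include <stddef.h>
>> >   #include <vector>
>> >
>> >   // A free span tracks commit state per page, so two adjacent
>> >   // spans can be coalesced without committing or decommitting
>> >   // anything: we just concatenate their per-page commit bits.
>> >   struct Span {
>> >     size_t start_page;            // first page of the span
>> >     std::vector<bool> committed;  // one entry per page
>> >   };
>> >
>> >   // Coalescing involves no syscalls at all.
>> >   Span Coalesce(const Span& left, const Span& right) {
>> >     Span merged;
>> >     merged.start_page = left.start_page;
>> >     merged.committed = left.committed;
>> >     merged.committed.insert(merged.committed.end(),
>> >                             right.committed.begin(),
>> >                             right.committed.end());
>> >     return merged;
>> >   }
>> >
>> >   // Allocation commits only the pages whose bit is false; a
>> >   // scavenger can later decommit committed-but-free pages in bulk.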
>> >
>> > To my knowledge, performance-wise 1) is slower (how much slower we
>> > still need to measure), and 2) is slightly faster than 3) (though that
>> > might just be statistical noise).  Of course, my benchmark is quite
>> > specialized.
>> >
>> > Memory-wise, I think 2) and 3) with aggressive scavenging should be
>> > mostly the same---we could keep a higher number of committed pages
>> > than in 1), but only for short periods of time, and I'm not convinced
>> > that's a bad thing.
>> >
>> > Overall I'm pro 2) and 3), but I am definitely biased.
>> >
>> > What do you think?
>> [...]
>>
>> I'd like to explain in what way decommitting everything (option #1
>> above) is slow. This hurts us most during garbage collection in V8.
>> When a DOM wrapper is collected, the C++ DOM object that it wraps gets
>> deleted and we decommit memory (usually only to commit some of it
>> again later). So we add extra low-level memory-management cost to
>> garbage collections, which ideally should be as fast as possible to
>> avoid hurting the interactivity of the browser. This is one of the
>> reasons to avoid aggressive decommitting.
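>>
>> To see where that cost lands, here is a sketch (made-up names; the
>> real path goes through tcmalloc's page heap, and on Windows the
>> decommit would be VirtualFree with MEM_DECOMMIT):
>>
>>   #include <stddef.h>
>>   #include <sys/mman.h>
>>
>>   // With option #1, freeing a large object returns its pages to
>>   // the OS immediately: a syscall right on the free path.
>>   void DecommitPages(void* addr, size_t len) {
>>     madvise(addr, len, MADV_DONTNEED);  // decommit now
>>   }
>>
>>   // Frees happen from V8's weak-handle callbacks (which delete the
>>   // C++ DOM objects), so each such syscall is paid inside the GC
>>   // pause, and the same pages are often committed again moments
>>   // later.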
>>
>> So I think having something like #2, with periodic on-idle decommits
>> like #3, should be good both for performance (in benchmarks and in
>> real apps) and for memory usage.
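>>
>> Roughly what I have in mind, as a sketch (made-up names; the real
>> thing would hook into tcmalloc's page heap and the browser's idle
>> notifications):
>>
>>   #include <stddef.h>
>>   #include <sys/mman.h>
>>   #include <vector>
>>
>>   // Freed pages stay committed on the free list; instead of a
>>   // syscall per free, an idle/periodic task releases them in bulk.
>>   struct FreeRange { void* addr; size_t len; };
>>   static std::vector<FreeRange> g_free_but_committed;
>>
>>   void ScavengeOnIdle() {
>>     for (size_t i = 0; i < g_free_but_committed.size(); ++i) {
>>       const FreeRange& r = g_free_but_committed[i];
>>       madvise(r.addr, r.len, MADV_DONTNEED);  // decommit in batch
>>     }
>>     g_free_but_committed.clear();
>>   }
>>
>>   // Run ScavengeOnIdle() from an idle callback or a ~5s timer, as
>>   // in WebKit's scavenger, keeping the GC and free paths free of
>>   // per-call syscalls.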
>>
>>
>> -- Vitaly
