Actually, I had done testing like that in a VM to get a sense of raw
process/thread limits on a low-memory system. It's relatively easy to
achieve a "fair" level of confidence by restoring a live VM snapshot and
reusing the same timings for each measurement.
The measurement timing is important since the OS will trigger a page flush
after X number of seconds. If you are not in a hurry, taking measurements
60 seconds after your test goes idle will usually be quite stable.
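As a sketch of that timing discipline: wait until the reading has been unchanged for the full idle window before trusting it. This is illustrative only; `sample` stands in for whatever memory counter you poll, and the clock/sleep parameters exist so the loop can be driven deterministically.

```python
import time

def measure_when_stable(sample, idle_seconds=60, poll=1.0,
                        now=time.monotonic, sleep=time.sleep):
    """Return the sampled value only after it has held steady for
    `idle_seconds`, so OS page flushing has had a chance to settle."""
    last = sample()
    stable_since = now()
    while now() - stable_since < idle_seconds:
        sleep(poll)
        current = sample()
        if current != last:
            last = current
            stable_since = now()  # activity seen: restart the idle clock
    return last
```

With the defaults this simply polls once a second and returns after a full quiet minute, matching the 60-second rule of thumb above.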

M-A

On Sat, Jun 27, 2009 at 7:39 PM, Mike Belshe <[email protected]> wrote:

> This one is the hardest to test; you need a pristinely clean system to
> execute it.
> Also - don't forget to make the browser window sizes the same (and with the
> same amount of visible window) for all browsers under test, because if the
> kernel can't offload to the graphics card, the display memory will be
> counted here.
>
> But yeah, if you can make all that work, then it is a good test!
> mike
>
>
> On Sat, Jun 27, 2009 at 2:50 PM, Linus Upson <[email protected]> wrote:
>
>> If I recall correctly, the best way we found to measure the total memory
>> usage of a multi-process system like Chrome was to measure the total
>> commit charge of Windows as you run the test. This will correctly account
>> for shared memory, mapped pages that have been touched, kernel memory,
>> etc. I don't recall if it includes virtual-alloced pages that haven't
>> been made real. The big limitation is that your test needs to be the
>> only thing running on the machine.
>> Linus
>>
>>
>> On Thu, Jun 25, 2009 at 4:11 PM, Mike Beltzner <[email protected]> wrote:
>>
>>>
>>> On 25-Jun-09, at 7:02 PM, Mike Belshe wrote:
>>>
>>> > This screen actually confuses me a little, as the Summary statistics
>>> > don't match the summation of the process based statistics. Do you
>>> > mean to say your summary statistics take into account the memory
>>> > that's being shared across the various processes?
>>> >
>>> > Correct.
>>> >
>>> > The "shared" across all processes is a bit of a hack, because you
>>> > can't know exactly which pages are shared across every single
>>> > process.  We do a heuristic.
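(The actual heuristic lives in the Chrome source; purely to illustrate why per-process summing double-counts, here is a toy model in which each process's working set is a set of page addresses, with made-up numbers.)

```python
PAGE_SIZE = 4096  # typical x86 page size

def naive_total(working_sets):
    """Sum each process's working set independently -- shared pages
    get counted once per process that maps them."""
    return sum(len(ws) for ws in working_sets) * PAGE_SIZE

def deduped_total(working_sets):
    """Count every distinct page exactly once across all processes."""
    all_pages = set().union(*working_sets)
    return len(all_pages) * PAGE_SIZE

# Two processes sharing pages 0x1000 and 0x2000 (e.g. mapped DLLs):
browser = {0x1000, 0x2000, 0x3000}
renderer = {0x1000, 0x2000, 0x4000}
# naive_total counts the two shared pages twice; deduped_total once.
```

In practice you cannot enumerate every page of every process cheaply, which is exactly why an approximation is needed.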
>>>
>>> Cool! Good to know. I'll take a peek into that code you mentioned to
>>> see what the heuristic is that you're using.
>>>
>>> > Interestingly, as I watched this value change while webpages were
>>> > loading, it tracked the same pattern of growth/decline as "Memory
>>> > (Private Working Set)" in the Task Manager, though the values were
>>> > usually about 2x or so more. I suppose this is due to the heap
>>> > sharing you were speaking of earlier?
>>> >
>>> > I'm not quite sure what you mean.
>>>
>>> I'm basically being lazy. I'd like to not have to make my own counter
>>> for Private Working Set, so I watched the values of "Memory (Private
>>> Working Set)" and "Commit Size" in the Task Manager as the test ran,
>>> and noticed that they increased/decreased at the same time, and the
>>> delta between them was a near constant 2x. Since my interest here is
>>> developing a metric that can help us understand when we're regressing/
>>> improving memory usage, the exact value isn't as important to me as
>>> the delta. If the deltas are simply off by a constant factor, I could
>>> live with that.
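(The lazy-metric argument can be made concrete: if one counter stays a roughly constant multiple of the other, their deltas flag the same regressions. Synthetic numbers below, not real measurements.)

```python
private_ws = [50, 52, 60, 58]     # MB, hypothetical samples over a test run
commit     = [100, 104, 120, 116] # roughly 2x the private working set

def deltas(series):
    """Step-to-step changes in a series of measurements."""
    return [b - a for a, b in zip(series, series[1:])]

# The deltas differ only by the constant factor, so either counter
# exposes the same growth/decline pattern for regression tracking.
```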
>>>
>>> As I said: lazy!
>>>
>>> >
>>> > The "Working Set - Private" counter doesn't seem to have a structure
>>> > according to the MSDN document; that's what maps to the "Memory
>>> > (Private Working Set)" column in the TaskManager.
>>> >
>>> > Right, I think you have to use QueryWorkingSet, walk the pages and
>>> > categorize them yourself.
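(A rough model of that walk, with synthetic data: per the PSAPI docs, each QueryWorkingSet entry packs flags into the low bits of the page address; the Shared flag is assumed here to sit at bit 8 of PSAPI_WORKING_SET_BLOCK -- check the headers before relying on that layout.)

```python
PAGE_SIZE = 4096
SHARED_BIT = 1 << 8  # PSAPI_WORKING_SET_BLOCK.Shared (assumed bit position)

def categorize(entries):
    """Split a working-set dump into (private_bytes, shared_bytes)."""
    private = shared = 0
    for entry in entries:
        if entry & SHARED_BIT:
            shared += PAGE_SIZE
        else:
            private += PAGE_SIZE
    return private, shared

# Three pages: two private, one shared (addresses are illustrative).
sample = [0x00400000, 0x00401000, 0x00402000 | SHARED_BIT]
```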
>>> >
>>> > OK, I can look into trying that. Though I'm wondering if it's worth
>>> > the bother, as the meta-pattern, to me, is more interesting than the
>>> > precise megabyte count.
>>> >
>>> > For a single-process browser, it's not worth the effort; for a
>>> > multi-process browser, I think it's the only way to account for
>>> > shared memory.
>>>
>>>
>>> > The closest thing I can find is the "Working Set" counter, which
>>> > uses the PROCESS_MEMORY_COUNTERS_EX.WorkingSetSize structure and
>>> > shows up in the Vista Task Manager as "Working Set (Memory)"
>>> >
>>> > For multi-proc browsers like Chrome, this will way overstate RAM;
>>> > there is a good 5-6MB of shared working set in each process.  So for
>>> > 10 tabs, you'd count an extra 50MB for Chrome if you do it this way.
>>> >
>>> > Looking both in Task Manager and about:memory, when I have 30 tabs
>>> > open I'm not seeing 30 processes. Are you sure you're right about
>>> > this point?
>>> >
>>> > You don't always get a new process for every tab.  If two tabs are
>>> > connected via javascript, then they'll be in the same process (the
>>> > about:memory shows which tabs are in the same process).  So,
>>> > clicking a link, for example, will open in the same tab, but typing
>>> > the URL in the omnibox will create a new process.  Others could tell
>>> > you more about the exact policy for when you get a new process and
>>> > when you don't.
>>>
>>> Someone just did in IRC, actually. Apparently in addition to what you
>>> said, as soon as a page is in cache, processes get pooled. I clear
>>> caches between test runs, but it sounds like since we're calling these
>>> with window.open() in our test, they all get placed in the same process.
>>>
>>> Overall, though, that should mean that we're *not* double counting
>>> memory. In fact, when I observed as the test ran, there were only
>>> three processes: one for the browser, one for the single content
>>> process from which all tabs were spawned, and one for Shockwave/Flash.
>>> Good news, I guess, in terms of reporting accurately!
>>>
>>> > OK - I think this might basically use one renderer process in
>>> > chrome?  Because of the new-process creation policy, it may not be
>>> > representative of real world usage.  Darin?
>>>
>>> Right, but AIUI, it's an erring on the side of reporting less, not
>>> more. If there's a better way to automate pageloads that represents
>>> real world usage, please let me know.
>>>
>>> > The whole while, we measure the amount of memory taken using the
>>> > PROCESS_MEMORY_COUNTERS structure, summing over processes when
>>> > multiple exist (as they do in the case of Internet Explorer 8 and
>>> > Chrome 2).
>>> >
>>> > Ok - that will double count shared memory.  I'd estimate 3-5MB per
>>> > process.
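(That estimate suggests a back-of-the-envelope correction to the per-process sum; the 3-5MB figure is from the message above, and the midpoint is an assumption.)

```python
SHARED_PER_PROCESS_MB = 4  # midpoint of the 3-5MB per-process estimate

def corrected_total(per_process_mb):
    """Subtract the shared working set that a naive per-process sum
    re-counts once for every process beyond the first."""
    n = len(per_process_mb)
    naive = sum(per_process_mb)
    return naive - (n - 1) * SHARED_PER_PROCESS_MB

# Three processes (browser + renderer + plugin) summing to 90MB naively:
# two extra processes each re-count ~4MB of shared memory.
```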
>>>
>>> So we're talking about over-reporting by 9-15MB across the test.
>>> Again, good to know.
>>>
>>> > I'll try to take a closer look at your test, but I'm not sure when
>>> > I'll have time :-(
>>>
>>> No rush here, and I appreciate your time and candor to date!
>>>
>>> cheers,
>>> mike
>>>
>>>
>>>
>>
>
> >
>

--~--~---------~--~----~------------~-------~--~----~
Chromium Developers mailing list: [email protected] 
View archives, change email options, or unsubscribe: 
    http://groups.google.com/group/chromium-dev
-~----------~----~----~----~------~----~------~--~---