We haven't yet accumulated a good sample of post-mortem cases to know what 
the important data in a dump might be. From a very general point of view, 
it seems useful to capture the main global objects. For example, if 
there is memory corruption and the local view of the objects on the stack 
cannot be inspected meaningfully, then being able to identify the expected 
managed heap structure from the global objects and then brute-force search 
through memory could allow some level of reconstruction and 
cross-validation. If capturing this data as key-value pairs is sufficient 
and produces a lighter dump, I'm not against that at all. 
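To illustrate the brute-force idea: a minimal sketch that scans a raw dump for aligned words matching a known tag value, standing in for an expected object header. The tag value, record layout, and alignment here are invented for illustration; real v8 object layouts differ.

```python
import struct

# Hypothetical tag standing in for a recognizable object-header value.
TAG = 0xDEADBEEF

def find_candidates(dump: bytes, tag: int = TAG):
    """Return offsets of aligned 4-byte words equal to `tag`.

    Assumes 4-byte alignment and little-endian layout; a real scan
    would validate more of the surrounding structure at each hit.
    """
    hits = []
    for off in range(0, len(dump) - 3, 4):
        (word,) = struct.unpack_from("<I", dump, off)
        if word == tag:
            hits.append(off)
    return hits

# Example: a fake dump with two tagged "objects" amid unrelated bytes.
dump = b"\x00" * 8 + struct.pack("<I", TAG) + b"junk" + struct.pack("<I", TAG)
print(find_candidates(dump))  # -> [8, 16]
```

Candidate offsets would then be cross-validated against the heap structure reachable from the global objects, discarding false positives.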

Agree that the default dumps should be as small as possible, perhaps 
containing just the minimal data needed to identify the crashing stack and 
differentiate crashes for triage (as seems to be the current situation 
for v8) -- let's call such dumps "triage". However, I still think 
it might be valuable to have a "full" dump mode that can be enabled in some 
environments (e.g. lab runs) or occasionally requested from end users 
on an opt-in basis. By a "small" dump in my previous reply I meant some 
kind of intermediate between triage and full. Could the full mode be 
managed by crashpad and be as simple as collecting the full heap of the 
running process? Is it valuable to have the small (= intermediate) mode with 
either randomized data or more data as hinted by v8? 

What were the main post-mortem cases that drove the previous discussions 
about v8 and crashpad integration?

-- 
v8-dev mailing list
[email protected]
http://groups.google.com/group/v8-dev