On 5/13/19 23:46, 臧琳 wrote:
The file option is described as: file=<file> dump data to <file>
It does not tell anything about the path.

With the new design, I suggest first parsing <file_name>: if the value contains a folder path, use the specified path; if not, use the same path as the "file=" value; and if "file=" is not set, use the output stream. (The reason I prefer to use the same path as "file=" is that I assume users prefer to save all data files under the same folder.)

It needs to be clearly specified. What statements do you suggest?

One idea of simplification is to get rid of the default <file_name> and to require it to be always specified (non-optional). Then we could replace this:
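The path-resolution rule proposed above could be sketched roughly as follows. This is only an illustration of the rule under discussion; the class and method names are hypothetical and not taken from the actual patch:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch of the proposed rule: an explicit folder inside
// <file_name> wins; otherwise reuse the folder of the "file=" value;
// if "file=" is not set, fall back to the bare name (the real tool
// would use the output stream in that case).
public class IncrementalPathSketch {
    static Path resolveIncrementalPath(String fileNameOpt, String fileOpt) {
        Path name = Paths.get(fileNameOpt);
        if (name.getParent() != null) {
            return name;                    // <file_name> already contains a folder path
        }
        if (fileOpt != null) {
            Path dir = Paths.get(fileOpt).getParent();
            if (dir != null) {
                return dir.resolve(name);   // same folder as the "file=" value
            }
        }
        return name;                        // no "file=" set: plain file name
    }

    public static void main(String[] args) {
        System.out.println(resolveIncrementalPath("IncrementalHisto.dump", "/tmp/heap.txt"));
    }
}
```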
with this:
> > - chunksize=<N>, size of objects (in KB) will be dumped in one chunk.
> > Q2: Should it be chunk of dump, not chunk of objects?

The purpose of "chunksize" is to decide how much object info is dumped at once. For example, with "chunksize=1" on an "-Xmx1m" heap there will be at most 1MB/1KB = 1000 chunks, which means there will be up to 1000 file writes during "jmap -histo".

I hardly understand the point of knowing the max number of objects that can be dumped at once. It is more important to know how much memory in the file it is going to take. How much of the dump will one object take? Does it vary (does it depend on object types)?

> > - maxfilesize=<N>, size of the incremental data dump file (in KB); when the data size
> >   is larger than maxfilesize, the file is erased and the latest data will be written.
> Q3: What is the relation and limitations between chunksize and maxfilesize?
>   Should the maxfilesize be a multiple of the chunksize?

The question Q3 above was not answered. But never mind. Please see the suggestion below.

> Q4: The sentence "the file is erased and latest data will be written" is not clear enough.
>   Why does the whole file need to be erased?
>   Should the incremental file behave like a cyclic buffer?
>   If so, then only the next chunk needs to be erased.
>   Then the chunks need to be numbered in order, so the earliest one can be found.

The "maxfilesize" controls the file size so that it does not grow too large: when the dumped data is larger than "maxfilesize", the file is erased and the latest data is written. The reason I erase the whole file is that chunk data is accumulative, so the latest data includes the previous statistics. This also makes the file easy to read. I agree that we can add ordered numbers to the chunks; I think it more or less helps the user to know how objects are distributed in the heap.
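The "erase and rewrite" behavior described above (each chunk is cumulative, so once the file would exceed maxfilesize the whole file is truncated and only the latest chunk is kept) could be sketched like this. It is an illustration of the described semantics, not the actual jmap code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative sketch: write one cumulative chunk, erasing the file when
// appending it would push the file size past maxFileSizeBytes. Erasing is
// safe here because the latest chunk already includes all earlier statistics.
public class EraseOnOverflowSketch {
    static void writeChunk(Path file, byte[] cumulativeChunk, long maxFileSizeBytes)
            throws IOException {
        long current = Files.exists(file) ? Files.size(file) : 0;
        if (current + cumulativeChunk.length > maxFileSizeBytes) {
            // Erase the whole file and keep only the latest cumulative data.
            Files.write(file, cumulativeChunk,
                        StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);
        } else {
            Files.write(file, cumulativeChunk,
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }
}
```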
I think maybe it is reasonable to have the incremental file behave like the GC log: when maxfilesize is reached, the file is renamed with a numbered suffix and a new file is created. So there can be IncrementalHisto.dump.0, IncrementalHisto.dump.1, etc. for a large heap. What do you think?

I think it is not a bad idea.

In general, the new incremental feature design does not look simple and clear enough. It feels like another step of simplification is needed. What about getting rid of the maxfilesize option? Then each chunk can be recorded to a separate file, IncrementalHisto.dump.<chunk_number>. A couple of questions to clarify:
- Do we want all chunks or just the latest chunk to be saved?
- If we save all chunks, then what is the point of having the full dump recorded as well?

The advantage of this approach is that there is no need to describe:
- the relationship between chunksize and maxfilesize
- the recording behavior for multiple chunks in the incremental file
- which chunks have been recorded into the incremental file

But again, this still needs to be clearly specified. It would be nice to reach a consensus on a design first.

Thanks,
Serguei

Thanks,
Lin

________________________________________
From: [email protected] <[email protected]>
Sent: Saturday, May 11, 2019 2:17:41 AM
To: 臧琳; Hohensee, Paul; JC Beyler
Cc: [email protected]
Subject: Re: [RFR]8215623: Add incremental dump for jmap histo

Dear Lin,

Sorry for the late reply. I've edited the CSR a little bit to fix some incorrect spots. Now, a couple of spots are not clear to me.

> - incremental[:<file_name>], enable the incremental dump of heap; dumped
>   data will be saved to <file_name>, by default "IncrementalHisto.dump"

Q1: Should the <file_name> be a full path or a short name? Is there any default path? What is the path of the "IncrementalHisto.dump" file?

> - chunksize=<N>, size of objects (in KB) will be dumped in one chunk.

Q2: Should it be chunk of dump, not chunk of objects?
> - maxfilesize=<N>, size of the incremental data dump file (in KB); when the data size
>   is larger than maxfilesize, the file is erased and the latest data will be written.

Q3: What is the relation and limitations between chunksize and maxfilesize? Should the maxfilesize be a multiple of the chunksize?

Q4: The sentence "the file is erased and latest data will be written" is not clear enough.
- Why does the whole file need to be erased?
- Should the incremental file behave like a cyclic buffer?
- If so, then only the next chunk needs to be erased.
- Then the chunks need to be numbered in order, so the earliest one can be found.

(I do not want you to accept my suggestions right away. It is just a discussion point. You need to prove that your approach is good and clean enough.)

If we resolve the questions (or get into agreement) then I'll update the CSR as needed.

Thanks,
Serguei

On 5/5/19 00:34, 臧琳 wrote:
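The GC-log-style rotation idea discussed earlier in the thread (rename the full file to a numbered suffix and start a fresh one) could be sketched as below. This is a hypothetical illustration of the proposal, not code from the patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch: when appending a chunk would exceed maxFileSizeBytes,
// rename the current file to <base>.<n> (IncrementalHisto.dump.0, .1, ...)
// and continue writing into a fresh <base>, GC-log style.
public class RotationSketch {
    private int nextSuffix = 0;

    void writeChunk(Path base, byte[] chunk, long maxFileSizeBytes) throws IOException {
        long current = Files.exists(base) ? Files.size(base) : 0;
        if (current > 0 && current + chunk.length > maxFileSizeBytes) {
            Path rotated = base.resolveSibling(base.getFileName() + "." + nextSuffix++);
            Files.move(base, rotated, StandardCopyOption.REPLACE_EXISTING);
        }
        Files.write(base, chunk, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```

One consequence of this design, as noted above, is that the chunk/maxfilesize relationship no longer needs to be specified: each rotated file simply holds whatever chunks fit before the limit was hit.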
