Hi Jason, I agree, but sometimes you need to hunt big allocations, and in our experience (high-frequency and algorithmic trading) cutting helps a lot.
On Mon, Mar 17, 2014 at 10:38 PM, Jason Evans <jas...@canonware.com> wrote:
> On Mar 17, 2014, at 1:13 AM, Evgeniy Ivanov <i...@eivanov.com> wrote:
>> We use opt.prof_accum to profile memory allocations. Sometimes the performance
>> degradation is too high because getting stacks is a heavy operation. In the
>> resulting backtraces we see some relatively small allocations we are not
>> interested in. With DTrace our usual approach is to cut allocations smaller
>> than 16 KB. I would like to add an option to cut allocations smaller than a
>> specified size, e.g. "opt.prof_cut" (ssize_t). Will it be accepted upstream?
>
> The way to reduce backtracing overhead is to increase the sample interval,
> e.g. MALLOC_CONF=lg_prof_sample:20 doubles the default from an average of
> one sample per 2^19 bytes (512 KiB) to 2^20 bytes (1 MiB). If you
> systematically ignore small allocations you bias the sample, which
> invalidates the math that allows sample-based profiling to converge on
> reality.
>
> Jason

-- 
Cheers,
Evgeniy
_______________________________________________
jemalloc-discuss mailing list
jemalloc-discuss@canonware.com
http://www.canonware.com/mailman/listinfo/jemalloc-discuss
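For anyone following the thread, Jason's suggestion could be applied roughly like this (a sketch only; `./app` is a placeholder for the profiled binary, and the exact option list will depend on your build):

```shell
# Hypothetical invocation: enable heap profiling with cumulative stats
# and raise the average sample interval from the default 2^19 bytes
# (512 KiB) to 2^21 bytes (2 MiB), sampling ~4x less often and so
# spending ~4x less time collecting backtraces.
MALLOC_CONF="prof:true,prof_accum:true,lg_prof_sample:21" ./app
```

Unlike a size cutoff, this keeps the sample unbiased: every allocation still has a probability of being sampled proportional to its size, so the profile still converges on the true allocation distribution.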