segvn is the generic vnode segment driver, typically the most heavily used segment driver in a system. It manages vnode mappings for various address space segments, including text, data, heap, and stack segments, and is used for mmap/munmap operations.
I am not aware of any parameters that set (or limit) the memory used by segvn - the data associated with a vnode managed by segvn will be cached in physical memory.

Before we go nuts, can you quantify "so many xcalls"? What does the mpstat data report on a per-second basis? xcalls are relatively fast and cheap, and modern processors are capable of handling high xcall rates.

What does your code do? The stack sample indicates you did 2494 munmap system calls. I do not know how long you sampled, so it's impossible to say whether that is a large number from a rate perspective. But I assume that where there are munmaps, there are mmaps. It may be interesting to track that rate and see how it correlates to xcall activity:

dtrace -n 'syscall::mmap:entry { @ = count(); } tick-1sec { printa("mmaps per second: %@d\n",@); trunc(@); }'

You can add a predicate to test for the execname if you wish to track mmaps for just that process.

As you indicated, if you are using ZFS, segmap does not apply.

What version of Solaris is this? Are you running 64-bit or 32-bit?

Thanks,
/jim

On Aug 13, 2010, at 5:35 PM, Kishore Kumar Pusukuri wrote:

> Hi,
> I observed that one multi-threaded application is generating so many
> cross-calls (xcalls) on my AMD multi-core machine. A snapshot of the stack
> trace is shown below. I think this is because of "segvn" activity, i.e.,
> unmapping pages and generating cross-call activity to maintain MMU-level
> coherence across the processors (from the Solaris Internals book). I read
> that by increasing the segmap cache size, we can improve the performance
> of some multi-threaded applications (which produce serious file I/O).
> Using adb, I am able to change the size of the segmap cache. However, we
> get this benefit only on file systems other than ZFS. I have ZFS, and I
> am also unable to change the size of segvn. I don't know whether ZFS uses
> a segvn cache or not.
> However, I tried to change the size of the segvn cache like segmap using
> adb, but failed. It gives the message shown below.
>
> $ pfexec adb -kw /dev/ksyms /dev/mem
> physmem 7ff23f
> segmapsize/D
> segmapsize: 67108864
>
> segvnsize/D
> adb: failed to dereference symbol: unknown symbol name
>
> Could anyone tell me how I can increase the size of the segvn cache on my
> machine?
>
> $ pfexec dtrace -n 'xcalls /execname=="my_multithreaded"/ { @[stack()] =
> count() }'
> dtrace: description 'xcalls ' matched 2 probes
>
>               unix`xc_do_call+0x135
>               unix`xc_call+0x4b
>               unix`hat_tlb_inval+0x2af
>               unix`unlink_ptp+0x92
>               unix`htable_release+0xfa
>               unix`hat_unload_callback+0x1d8
>               genunix`segvn_unmap+0x255
>               genunix`as_unmap+0xf2
>               genunix`munmap+0x80
>               unix`sys_syscall32+0x101
>               377
>
>               unix`xc_do_call+0x135
>               unix`xc_call+0x4b
>               unix`hat_tlb_inval+0x2af
>               unix`x86pte_update+0x69
>               unix`hati_update_pte+0x10c
>               unix`hat_pagesync+0x169
>               genunix`pvn_getdirty+0x5d
>               zfs`zfs_putpage+0x1c7
>               genunix`fop_putpage+0x74
>               genunix`segvn_sync+0x137
>               genunix`as_ctl+0x200
>               genunix`memcntl+0x764
>               unix`sys_syscall32+0x101
>               946
>
>               unix`xc_do_call+0x135
>               unix`xc_call+0x4b
>               unix`hat_tlb_inval+0x2af
>               unix`unlink_ptp+0x92
>               unix`htable_release+0xfa
>               unix`hat_unload_callback+0x24a
>               genunix`segvn_unmap+0x255
>               genunix`as_unmap+0xf2
>               genunix`munmap+0x80
>               unix`sys_syscall32+0x101
>               2494
> .........
> ....
> --
> This message posted from opensolaris.org
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org