Hi, I have observed that a multi-threaded application is generating a very large number of cross-calls (xcalls) on my AMD multi-core machine. A snapshot of the stack traces is shown below. I think this is due to "segvn" activity, i.e. unmapping pages and generating cross-call activity to maintain MMU-level coherence across the processors (as described in the Solaris Internals book).

I have read that increasing the size of the segmap cache can improve the performance of some multi-threaded applications that do heavy file I/O. Using adb, I am able to change the size of the segmap cache. However, that only helps on file systems other than ZFS, and my file system is ZFS. I don't know whether ZFS uses a segvn cache at all, but I tried to change the size of the segvn cache with adb in the same way as segmap, and it failed with the message shown below.
$ pfexec adb -kw /dev/ksyms /dev/mem
physmem 7ff23f
segmapsize/D
segmapsize:     67108864
segvnsize/D
adb: failed to dereference symbol: unknown symbol name

Could anyone tell me how I can increase the size of the segvn cache on my machine?

$ pfexec dtrace -n 'xcalls /execname=="my_multithreaded"/ {@[stack()] = count()}'
dtrace: description 'xcalls ' matched 2 probes

              unix`xc_do_call+0x135
              unix`xc_call+0x4b
              unix`hat_tlb_inval+0x2af
              unix`unlink_ptp+0x92
              unix`htable_release+0xfa
              unix`hat_unload_callback+0x1d8
              genunix`segvn_unmap+0x255
              genunix`as_unmap+0xf2
              genunix`munmap+0x80
              unix`sys_syscall32+0x101
              377

              unix`xc_do_call+0x135
              unix`xc_call+0x4b
              unix`hat_tlb_inval+0x2af
              unix`x86pte_update+0x69
              unix`hati_update_pte+0x10c
              unix`hat_pagesync+0x169
              genunix`pvn_getdirty+0x5d
              zfs`zfs_putpage+0x1c7
              genunix`fop_putpage+0x74
              genunix`segvn_sync+0x137
              genunix`as_ctl+0x200
              genunix`memcntl+0x764
              unix`sys_syscall32+0x101
              946

              unix`xc_do_call+0x135
              unix`xc_call+0x4b
              unix`hat_tlb_inval+0x2af
              unix`unlink_ptp+0x92
              unix`htable_release+0xfa
              unix`hat_unload_callback+0x24a
              genunix`segvn_unmap+0x255
              genunix`as_unmap+0xf2
              genunix`munmap+0x80
              unix`sys_syscall32+0x101
              2494

              .........
              ....
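In case it is useful, here is the same one-liner keyed on the user-level stack instead of the kernel stack (only ustack() substituted for stack(), nothing else changed); it should show which code paths in the application issue the munmap()/memcntl() calls behind these cross-calls:

$ pfexec dtrace -n 'xcalls /execname=="my_multithreaded"/ {@[ustack()] = count()}'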
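For completeness, the same symbol lookups under mdb -k (the newer debugger, which accepts the same /D syntax) presumably fail the same way, since it reads the same kernel symbol table; this is only a sketch of the equivalent session, not output from my machine:

$ pfexec mdb -k
> segmapsize/D
> segvnsize/D
> $q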