On Fri, Jul 3, 2009 at 8:18 AM, c0re dumped <ez.c...@gmail.com> wrote:

> So, I never had a problem with this server, but recently it started
> giving me the following message *every* minute:
>
> Jul  3 10:04:00 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul  3 10:05:00 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul  3 10:06:00 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul  3 10:07:01 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul  3 10:08:01 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul  3 10:09:01 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul  3 10:10:01 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
> Jul  3 10:11:01 squid kernel: Approaching the limit on PV entries,
> consider increasing either the vm.pmap.shpgperproc or the
> vm.pmap.pv_entry_max tunable.
>
> This server is running Squid + dansguardian. The users are complaining
> about slow navigation, and they are driving me crazy!
>
> Has anyone faced this problem before?
>
> Some info:
>
> # uname -a
> FreeBSD squid 7.2-RELEASE FreeBSD 7.2-RELEASE #0: Fri May  1 08:49:13
> UTC 2009     r...@walker.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC
> i386
>
> # sysctl vm
> vm.vmtotal:
> System wide totals computed every five seconds: (values in kilobytes)
> ===============================================
> Processes:              (RUNQ: 1 Disk Wait: 1 Page Wait: 0 Sleep: 230)
> Virtual Memory:         (Total: 19174412K, Active 9902152K)
> Real Memory:            (Total: 1908080K Active 1715908K)
> Shared Virtual Memory:  (Total: 647372K Active: 10724K)
> Shared Real Memory:     (Total: 68092K Active: 4436K)
> Free Memory Pages:      88372K
>
> vm.loadavg: { 0.96 0.96 1.13 }
> vm.v_free_min: 4896
> vm.v_free_target: 20635
> vm.v_free_reserved: 1051
> vm.v_inactive_target: 30952
> vm.v_cache_min: 20635
> vm.v_cache_max: 41270
> vm.v_pageout_free_min: 34
> vm.pageout_algorithm: 0
> vm.swap_enabled: 1
> vm.kmem_size_scale: 3
> vm.kmem_size_max: 335544320
> vm.kmem_size_min: 0
> vm.kmem_size: 335544320
> vm.nswapdev: 1
> vm.dmmax: 32
> vm.swap_async_max: 4
> vm.zone_count: 84
> vm.swap_idle_threshold2: 10
> vm.swap_idle_threshold1: 2
> vm.exec_map_entries: 16
> vm.stats.misc.zero_page_count: 0
> vm.stats.misc.cnt_prezero: 0
> vm.stats.vm.v_kthreadpages: 0
> vm.stats.vm.v_rforkpages: 0
> vm.stats.vm.v_vforkpages: 340091
> vm.stats.vm.v_forkpages: 3604123
> vm.stats.vm.v_kthreads: 53
> vm.stats.vm.v_rforks: 0
> vm.stats.vm.v_vforks: 2251
> vm.stats.vm.v_forks: 19295
> vm.stats.vm.v_interrupt_free_min: 2
> vm.stats.vm.v_pageout_free_min: 34
> vm.stats.vm.v_cache_max: 41270
> vm.stats.vm.v_cache_min: 20635
> vm.stats.vm.v_cache_count: 5734
> vm.stats.vm.v_inactive_count: 242259
> vm.stats.vm.v_inactive_target: 30952
> vm.stats.vm.v_active_count: 445958
> vm.stats.vm.v_wire_count: 58879
> vm.stats.vm.v_free_count: 16335
> vm.stats.vm.v_free_min: 4896
> vm.stats.vm.v_free_target: 20635
> vm.stats.vm.v_free_reserved: 1051
> vm.stats.vm.v_page_count: 769244
> vm.stats.vm.v_page_size: 4096
> vm.stats.vm.v_tfree: 12442098
> vm.stats.vm.v_pfree: 1657776
> vm.stats.vm.v_dfree: 0
> vm.stats.vm.v_tcached: 253415
> vm.stats.vm.v_pdpages: 254373
> vm.stats.vm.v_pdwakeups: 14
> vm.stats.vm.v_reactivated: 414
> vm.stats.vm.v_intrans: 1912
> vm.stats.vm.v_vnodepgsout: 0
> vm.stats.vm.v_vnodepgsin: 6593
> vm.stats.vm.v_vnodeout: 0
> vm.stats.vm.v_vnodein: 891
> vm.stats.vm.v_swappgsout: 0
> vm.stats.vm.v_swappgsin: 0
> vm.stats.vm.v_swapout: 0
> vm.stats.vm.v_swapin: 0
> vm.stats.vm.v_ozfod: 56314
> vm.stats.vm.v_zfod: 2016628
> vm.stats.vm.v_cow_optim: 1959
> vm.stats.vm.v_cow_faults: 584331
> vm.stats.vm.v_vm_faults: 3661086
> vm.stats.sys.v_soft: 23280645
> vm.stats.sys.v_intr: 18528397
> vm.stats.sys.v_syscall: 1990471112
> vm.stats.sys.v_trap: 8079878
> vm.stats.sys.v_swtch: 105613021
> vm.stats.object.bypasses: 14893
> vm.stats.object.collapses: 55259
> vm.v_free_severe: 2973
> vm.max_proc_mmap: 49344
> vm.old_msync: 0
> vm.msync_flush_flags: 3
> vm.boot_pages: 48
> vm.max_wired: 255475
> vm.pageout_lock_miss: 0
> vm.disable_swapspace_pageouts: 0
> vm.defer_swapspace_pageouts: 0
> vm.swap_idle_enabled: 0
> vm.pageout_stats_interval: 5
> vm.pageout_full_stats_interval: 20
> vm.pageout_stats_max: 20635
> vm.max_launder: 32
> vm.phys_segs:
> SEGMENT 0:
>
> start:     0x1000
> end:       0x9a000
> free list: 0xc0cca168
>
> SEGMENT 1:
>
> start:     0x100000
> end:       0x400000
> free list: 0xc0cca168
>
> SEGMENT 2:
>
> start:     0x1025000
> end:       0xbc968000
> free list: 0xc0cca060
>
> vm.phys_free:
> FREE LIST 0:
>
>  ORDER (SIZE)  |  NUMBER
>                |  POOL 0  |  POOL 1
> --            -- --      -- --      --
>  10 (  4096K)  |       0  |       0
>   9 (  2048K)  |       0  |       0
>   8 (  1024K)  |       0  |       0
>   7 (   512K)  |       0  |       0
>   6 (   256K)  |       0  |       0
>   5 (   128K)  |       0  |       0
>   4 (    64K)  |       0  |       0
>   3 (    32K)  |       0  |       0
>   2 (    16K)  |       0  |       0
>   1 (     8K)  |       0  |       0
>   0 (     4K)  |      24  |    3562
>
> FREE LIST 1:
>
>  ORDER (SIZE)  |  NUMBER
>                |  POOL 0  |  POOL 1
> --            -- --      -- --      --
>  10 (  4096K)  |       0  |       0
>   9 (  2048K)  |       0  |       0
>   8 (  1024K)  |       0  |       0
>   7 (   512K)  |       0  |       0
>   6 (   256K)  |       0  |       0
>   5 (   128K)  |       0  |       2
>   4 (    64K)  |       0  |       3
>   3 (    32K)  |       6  |      11
>   2 (    16K)  |       6  |      21
>   1 (     8K)  |      14  |      35
>   0 (     4K)  |      20  |      70
>
> vm.reserv.reclaimed: 187
> vm.reserv.partpopq:
> LEVEL     SIZE  NUMBER
>
>   -1:  71756K,     19
>
> vm.reserv.freed: 35575
> vm.reserv.broken: 94
> vm.idlezero_enable: 0
> vm.kvm_free: 310374400
> vm.kvm_size: 1073737728
> vm.pmap.pmap_collect_active: 0
> vm.pmap.pmap_collect_inactive: 0
> vm.pmap.pv_entry_spare: 50408
> vm.pmap.pv_entry_allocs: 38854797
> vm.pmap.pv_entry_frees: 37052501
> vm.pmap.pc_chunk_tryfail: 0
> vm.pmap.pc_chunk_frees: 130705
> vm.pmap.pc_chunk_allocs: 136219
> vm.pmap.pc_chunk_count: 5514
> vm.pmap.pv_entry_count: 1802296
> vm.pmap.pde.promotions: 0
> vm.pmap.pde.p_failures: 0
> vm.pmap.pde.mappings: 0
> vm.pmap.pde.demotions: 0
> vm.pmap.shpgperproc: 200
> vm.pmap.pv_entry_max: 2002224
> vm.pmap.pg_ps_enabled: 0
>
> Both vm.pmap.shpgperproc and vm.pmap.pv_entry_max are at their
> default values. I read here
> (http://lists.freebsd.org/pipermail/freebsd-hackers/2003-May/000695.html)
> that it's not a good idea to increase these values arbitrarily.
>

There are two things that you can do:

(1) Enable superpages by setting vm.pmap.pg_ps_enabled to "1" in
/boot/loader.conf (a sample loader.conf line is sketched below).  A 4MB
superpage mapping on i386 consumes a single PV entry instead of the 1024
entries that would be consumed by mapping 4MB of 4KB pages.  Whether or not
this helps depends on aspects of Squid and Dansguardian that I can't predict.

(2) You shouldn't be afraid of increasing vm.pmap.pv_entry_max.  However,
watch vm.kvm_free as you do this: it will decrease in proportion to the
increase in vm.pmap.pv_entry_max.  Don't let vm.kvm_free drop too close to
zero; I would consider anything on the order of 25-50MB too close.
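
To make (1) concrete, the loader.conf entry is a single line, something
like the following.  It only takes effect at the next boot, and you can
verify it afterwards with "sysctl vm.pmap.pg_ps_enabled":

    vm.pmap.pg_ps_enabled=1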
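
For (2), the kernel message itself calls vm.pmap.pv_entry_max a tunable, so
it should be settable the same way in /boot/loader.conf.  The number below
is only an illustration (roughly 1.5 times your current default of 2002224),
not a recommendation:

    vm.pmap.pv_entry_max=3000000

After rebooting, you can watch the headroom with:

    # sysctl vm.kvm_free vm.pmap.pv_entry_count vm.pmap.pv_entry_max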

Regards,
Alan

 
