On Wed, Nov 16, 2016 at 11:53:55AM -0600, Alan Cox wrote:
> On 11/16/2016 10:59, Ruslan Bukin wrote:
> > On Wed, Nov 16, 2016 at 06:53:43PM +0200, Konstantin Belousov wrote:
> > > On Wed, Nov 16, 2016 at 01:37:18PM +0000, Ruslan Bukin wrote:
> > > > I have a panic with this on RISC-V. Any ideas?
> > > How did you check that the revision you replied to causes the problem?
> > > Note that the backtrace below is not reasonable.
> > I reverted this commit like so and rebuilt the kernel:
> > git show 2fa36073055134deb2df39c7ca46264cfc313d77 | patch -p1 -R
> >
> > So the problem is reproducible on a dual-core system with a 32MB mdroot.
> This change amounted to dead code removal, so I'm not sure how it could
> have an effect. There were only a couple of places where the changes were
> other than mechanical in nature. Also, that the number of cores matters
> is no less puzzling.
>
> Can you send Kostik and me the output of "sysctl vm.stats.vm" from
> shortly after boot on the kernel with the patch reverted?
Here is the result with the patch reverted:

# sysctl vm.stats.vm
vm.stats.vm.v_vm_faults: 1578
vm.stats.vm.v_io_faults: 213
vm.stats.vm.v_cow_faults: 135
vm.stats.vm.v_cow_optim: 0
vm.stats.vm.v_zfod: 360
vm.stats.vm.v_ozfod: 0
vm.stats.vm.v_swapin: 0
vm.stats.vm.v_swapout: 0
vm.stats.vm.v_swappgsin: 0
vm.stats.vm.v_swappgsout: 0
vm.stats.vm.v_vnodein: 0
vm.stats.vm.v_vnodeout: 0
vm.stats.vm.v_vnodepgsin: 0
vm.stats.vm.v_vnodepgsout: 0
vm.stats.vm.v_intrans: 0
vm.stats.vm.v_reactivated: 0
vm.stats.vm.v_pdwakeups: 0
vm.stats.vm.v_pdpages: 2
vm.stats.vm.v_pdshortfalls: 0
vm.stats.vm.v_tcached: 0
vm.stats.vm.v_dfree: 0
vm.stats.vm.v_pfree: 142
vm.stats.vm.v_tfree: 340
vm.stats.vm.v_page_size: 4096
vm.stats.vm.v_page_count: 235637
vm.stats.vm.v_free_reserved: 356
vm.stats.vm.v_free_target: 5064
vm.stats.vm.v_free_min: 1533
vm.stats.vm.v_free_count: 231577
vm.stats.vm.v_wire_count: 3779
vm.stats.vm.v_active_count: 251
vm.stats.vm.v_inactive_target: 7596
vm.stats.vm.v_inactive_count: 29
vm.stats.vm.v_laundry_count: 0
vm.stats.vm.v_cache_count: 0
vm.stats.vm.v_pageout_free_min: 34
vm.stats.vm.v_interrupt_free_min: 2
vm.stats.vm.v_forks: 4
vm.stats.vm.v_vforks: 0
vm.stats.vm.v_rforks: 0
vm.stats.vm.v_kthreads: 20
vm.stats.vm.v_forkpages: 132
vm.stats.vm.v_vforkpages: 0
vm.stats.vm.v_rforkpages: 0
vm.stats.vm.v_kthreadpages: 0
#

And here is the output with the patch not reverted, but with 800MB of physical memory:

# sysctl vm.stats.vm
vm.stats.vm.v_vm_faults: 1580
vm.stats.vm.v_io_faults: 213
vm.stats.vm.v_cow_faults: 135
vm.stats.vm.v_cow_optim: 0
vm.stats.vm.v_zfod: 362
vm.stats.vm.v_ozfod: 0
vm.stats.vm.v_swapin: 0
vm.stats.vm.v_swapout: 0
vm.stats.vm.v_swappgsin: 0
vm.stats.vm.v_swappgsout: 0
vm.stats.vm.v_vnodein: 0
vm.stats.vm.v_vnodeout: 0
vm.stats.vm.v_vnodepgsin: 0
vm.stats.vm.v_vnodepgsout: 0
vm.stats.vm.v_intrans: 0
vm.stats.vm.v_reactivated: 0
vm.stats.vm.v_pdwakeups: 0
vm.stats.vm.v_pdpages: 4
vm.stats.vm.v_pdshortfalls: 0
vm.stats.vm.v_tcached: 0
vm.stats.vm.v_dfree: 0
vm.stats.vm.v_pfree: 142
vm.stats.vm.v_tfree: 340
vm.stats.vm.v_page_size: 4096
vm.stats.vm.v_page_count: 179753
vm.stats.vm.v_free_reserved: 284
vm.stats.vm.v_free_target: 3872
vm.stats.vm.v_free_min: 1181
vm.stats.vm.v_free_count: 176074
vm.stats.vm.v_wire_count: 3396
vm.stats.vm.v_active_count: 253
vm.stats.vm.v_inactive_target: 5808
vm.stats.vm.v_inactive_count: 29
vm.stats.vm.v_laundry_count: 0
vm.stats.vm.v_cache_count: 0
vm.stats.vm.v_pageout_free_min: 34
vm.stats.vm.v_interrupt_free_min: 2
vm.stats.vm.v_forks: 4
vm.stats.vm.v_vforks: 0
vm.stats.vm.v_rforks: 0
vm.stats.vm.v_kthreads: 20
vm.stats.vm.v_forkpages: 132
vm.stats.vm.v_vforkpages: 0
vm.stats.vm.v_rforkpages: 0
vm.stats.vm.v_kthreadpages: 0
#

Ruslan
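
P.S. In case it is useful for comparing runs, the counters above can also
be read directly with sysctlbyname(3) instead of parsing sysctl(8) output.
A rough, untested sketch (the counter names are just a handful picked from
the output above) would be something like:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
    /* A few of the vm.stats.vm counters shown above; they are u_int. */
    const char *names[] = {
        "vm.stats.vm.v_page_count",
        "vm.stats.vm.v_free_count",
        "vm.stats.vm.v_wire_count",
        "vm.stats.vm.v_free_min",
        "vm.stats.vm.v_free_target",
    };
    u_int val;
    size_t i, len;

    for (i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
        len = sizeof(val);
        if (sysctlbyname(names[i], &val, &len, NULL, 0) == -1)
            err(1, "sysctlbyname(%s)", names[i]);
        printf("%s: %u\n", names[i], val);
    }
    return (0);
}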
