Processes blocked on 6.1-R with amr(4)
Recently I tried to update 5.4 to 6.1 from sources. After a successful install I found that all processes I run (pkg_delete, pkg_add, cp, tar, etc.) hang, mostly in the getblk state. I reinstalled 5.4 and it works fine. So I installed only the 6.1-R kernel and booted it in single-user mode; the problem appeared again. Here is the debugger output:

db> ps
  pid proc uid ppid pgrp flag stat wmesg wchan cmd
   57 c270c62404857 0004002 [SLPQ wdrain 0xc0990ea4][SLP] bsdtar
   48 c2650a3c04248 0004002 [SLPQ wait 0xc2650a3c][SLP] bash
   42 c270c8300 142 0004002 [SLPQ wait 0xc270c830][SLP] sh
   41 c270ca3c0 0 0 204 [SLPQ - 0xca1f5d08][SLP] schedcpu
   40 c270cc480 0 0 204 [SLPQ - 0xc0998c6c][SLP] nfsiod 3
   39 c270d0000 0 0 204 [SLPQ - 0xc0998c68][SLP] nfsiod 2
   38 c2617c480 0 0 204 [SLPQ - 0xc0998c64][SLP] nfsiod 1
   37 c264f0000 0 0 204 [SLPQ - 0xc0998c60][SLP] nfsiod 0
   36 c264f20c0 0 0 204 [SLPQ sdflush 0xc099e314][SLP] softdepflush
   35 c264f4180 0 0 204 [SLPQ syncer 0xc098bc3c][SLP] syncer
   34 c264f6240 0 0 204 [SLPQ vlruwt 0xc264f624][SLP] vnlru
   33 c264f8300 0 0 204 [SLPQ psleep 0xc0990e6c][SLP] bufdaemon
   32 c264fa3c0 0 0 20c [SLPQ pgzero 0xc099f284][SLP] pagezero
   31 c264fc480 0 0 204 [SLPQ psleep 0xc099edd4][SLP] vmdaemon
   30 c2650 0 0 204 [SLPQ psleep 0xc099ed90][SLP] pagedaemon
   29 c265020c0 0 0 204 [IWAIT] swi0: sio
   28 c26504180 0 0 204 [IWAIT] irq7: ppc0
   27 c25136240 0 0 204 [SLPQ - 0xc24ea83c][SLP] fdc0
   26 c25138300 0 0 204 [IWAIT] irq12: psm0
   25 c2513a3c0 0 0 204 [IWAIT] irq1: atkbd0
   24 c2513c480 0 0 204 [IWAIT] irq14: ata0
   23 c26170000 0 0 204 [SLPQ idle 0xc24e9600][SLP] aic_recovery1
    9 c261720c0 0 0 204 [SLPQ idle 0xc24e9600][SLP] aic_recovery1
    8 c26174180 0 0 204 [SLPQ idle 0xc24e9000][SLP] aic_recovery0
   22 c26176240 0 0 204 [IWAIT] irq15: ahc0 ahc1+
    7 c26178300 0 0 204 [SLPQ idle 0xc24e9000][SLP] aic_recovery0
   21 c2617a3c0 0 0 204 [IWAIT] irq9: fxp0
   20 c24f420c0 0 0 204 [IWAIT] irq11: amr0
   19 c24f44180 0 0 204 [IWAIT] swi2: cambio
   18 c24f46240 0 0 204 [IWAIT] swi5: +
    6 c24f48300 0 0 204 [SLPQ - 0xc25ae480][SLP] thread taskq
   17 c24f4a3c0 0 0 204 [IWAIT] swi6: +
   16 c24f4c480 0 0 204 [IWAIT] swi6: task queue
    5 c25130000 0 0 204 [SLPQ - 0xc24ed500][SLP] kqueue taskq
   15 c251320c0 0 0 204 [SLPQ - 0xc0986c00][SLP] yarrow
    4 c25134180 0 0 204 [SLPQ - 0xc09893c8][SLP] g_down
    3 c24ee0000 0 0 204 [SLPQ - 0xc09893c4][SLP] g_up
    2 c24ee20c0 0 0 204 [SLPQ - 0xc09893bc][SLP] g_event
   14 c24ee4180 0 0 204 [IWAIT] swi3: vm
   13 c24ee6240 0 0 20c [IWAIT] swi4: clock sio
   12 c24ee8300 0 0 204 [IWAIT] swi1: net
   11 c24eea3c0 0 0 20c [APU 0] idle
    1 c24eec480 0 1 0004200 [SLPQ wait 0xc24eec48][SLP] init
   10 c24f40000 0 0 204 [SLPQ ktrace 0xc0989e18][SLP] ktrace
    0 c09894c00 0 0 200 [IWAIT] swapper

db> show lockedvnods
Locked vnodes
0xc283a990: tag ufs, type VREG
    usecount 1, writecount 1, refcount 232 mountedhere 0
    flags ()
    v_object 0xc1029420 ref 0 pages 920
    lock type ufs: EXCL (count 1) by thread 0xc2651000 (pid 57)
        ino 290899, on dev amrd0s1e

db> trace 57
Tracing pid 57 tid 100045 td 0xc2651000
sched_switch(c2651000,0,1) at sched_switch+0x14b
mi_switch(1,0,c2651000,ca211944,c0672612) at mi_switch+0x1ba
sleepq_switch(c0990ea4) at sleepq_switch+0x86
sleepq_wait(c0990ea4,0,c2651000,6404,c56bd758) at sleepq_wait+0x36
msleep(c0990ea4,c0990ec0,44,c08b74e3,0) at msleep+0x235
waitrunningbufspace(c12ac830,c56facc8,ca21199c,c0699637,c56bd758) at waitrunningbufspace+0x62
bufwrite(c56bd758) at bufwrite+0x121
bawrite(c56bd758,ca2119cc,c283aa0c,c283aa0c,c283aa0c) at bawrite+0x13
cluster_wbuild(c283a990,4000,e5,0,8) at cluster_wbuild+0x6f0
cluster_write(c283a990,c56facc8,394000,0,7f) at cluster_write+0x4db
ffs_write(ca211bec,0,0,ca211ba0,4) at ffs_write+0x504
VOP_WRITE_APV(c09601a0,ca211bec) at VOP_WRITE_APV+0xce
vn_write(c2725630,ca211cbc,c24ece80,0,c2651000) at vn_write+0x1ea
dofilewrite(c2651000,3,c2725630,ca211cbc,) at dofilewrite+0x77
kern_writev(c2651000,3,ca211cbc,8056000,1800) at kern_writev+0x3b
write(c2651000,ca211d04,3,1c,212) at write+0x45
syscall(3b,3b,3b,8052040,2800) at syscall+0x2b7
Xint0x80_syscall() at Xint0x80_syscall+0x1f

Another one:

db> ps
  pid proc [output truncated]
panic: smbiod
Approximately once a month or two this server panics.

# mount | grep smbfs
//[EMAIL PROTECTED]/FTPEXCHANGE on /mnt/exchange (smbfs)
//[EMAIL PROTECTED]/FREEBSD on /mnt/FreeBSD (smbfs)

# dmesg -a can be found at http://www.dp.uz.gov.ua/dmesg.isc-cache
kernel config - http://www.dp.uz.gov.ua/kernel.isc-cache

# uname -a
FreeBSD isc-cache.dp.uz.gov.ua 5.4-RELEASE-p4 FreeBSD 5.4-RELEASE-p4 #1: Fri Nov 4 11:37:34 EET 2005 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/ISC-CACHE_KERNEL i386

db> where
Tracing pid 91914 tid 100078 td 0xc125ec00
turnstile_head(0,c1c7f900,c1e67a00,c1c7f900,cc32fb54) at turnstile_head+0x6
_mtx_unlock_sleep(c1e67a94,0,c1d21ec2,61) at _mtx_unlock_sleep+0x40
_mtx_unlock_flags(c1e67a94,0,c1d21ec2,61,c1c7f900) at _mtx_unlock_flags+0x2e
smb_iod_invrq(c1c7f900,c1c7f900,c1c7f900,cc32fba4,c1d16766) at smb_iod_invrq+0x43
smb_iod_dead(c1c7f900,c1c7f400,c1c7f488,c1e67a00,cc32fbbc) at smb_iod_dead+0x1a
smb_iod_addrq(c1e67a00,c1e67a00,0,c154f200,cc32fbd4) at smb_iod_addrq+0x6e
smb_rq_enqueue(c1e67a00,c1e67a18,c0fae3e0,c154f200,cc32fce4) at smb_rq_enqueue+0x2d
smb_rq_simple(c1e67a00,c1e67a00,c1e67a18,c154f200,c1d21cd9) at smb_rq_simple+0x2a
smb_smb_treeconnect(c1c7f400,c1c7f950,c1b461a0,c1c7f900,c1c7f958) at smb_smb_treeconnect+0x24e
smb_iod_treeconnect(c1c7f900,c1c7f400,c1c7f900,c125d710,c1d16d3c) at smb_iod_treeconnect+0x7b
smb_iod_thread(c1c7f900,cc32fd48) at smb_iod_thread+0xf4
fork_exit(c1d16d3c,c1c7f900,cc32fd48) at fork_exit+0x74
fork_trampoline() at fork_trampoline+0x8
--- trap 0x1, eip = 0, esp = 0xcc32fd7c, ebp = 0 ---

db> ps
...
91914 c125d7100 1 0 204 [APU 0] smbiod4
...

What additional information can I provide? Thanks!

--
Best regards,
Palij Oleg, ISC (Pridn railway)
xmpp://[EMAIL PROTECTED]
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]
permanent per month panic on 5.4-p4 related to smbfs
Approximately once a month or two this server panics. We use it for InterSystems Cache` (under linux emulation) and as a samba server (not heavily loaded). We also use smbfs:

# mount | grep smbfs
//[EMAIL PROTECTED]/FTPEXCHANGE on /mnt/exchange (smbfs)
//[EMAIL PROTECTED]/FREEBSD on /mnt/FreeBSD (smbfs)

Panics mostly happen while smbfs is in active use.

# dmesg -a can be found at http://www.dp.uz.gov.ua/dmesg.isc-cache
kernel config - http://www.dp.uz.gov.ua/kernel.isc-cache

# uname -a
FreeBSD isc-cache.dp.uz.gov.ua 5.4-RELEASE-p4 FreeBSD 5.4-RELEASE-p4 #1: Fri Nov 4 11:37:34 EET 2005 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/ISC-CACHE_KERNEL i386

db> where
Tracing pid 46479 tid 100104 td 0xc12a1600
turnstile_head(0,c11dd700,c1c42700,c11dd700,cc396b54) at turnstile_head+0x6
_mtx_unlock_sleep(c1c42794,0,c1d6eec2,61) at _mtx_unlock_sleep+0x40
_mtx_unlock_flags(c1c42794,0,c1d6eec2,61,c11dd700) at _mtx_unlock_flags+0x2e
smb_iod_invrq(c11dd700,c11dd700,c11dd700,cc396ba4,c1d63766) at smb_iod_invrq+0x43
smb_iod_dead(c11dd700,c1597400,c1597488,c1c42700,cc396bbc) at smb_iod_dead+0x1a
smb_iod_addrq(c1c42700,c1c42700,0,c1042a00,cc396bd4) at smb_iod_addrq+0x6e
smb_rq_enqueue(c1c42700,c1c42718,c102c5c0,c1042a00,cc396ce4) at smb_rq_enqueue+0x2d
smb_rq_simple(c1c42700,c1c42700,c1c42718,c1042a00,c1d6ecd9) at smb_rq_simple+0x2a
smb_smb_treeconnect(c1597400,c11dd750,c19a28f0,c11dd700,c11dd758) at smb_smb_treeconnect+0x24e
smb_iod_treeconnect(c11dd700,c1597400,c11dd700,c1342a98,c1d63d3c) at smb_iod_treeconnect+0x7b
smb_iod_thread(c11dd700,cc396d48) at smb_iod_thread+0xf4
fork_exit(c1d63d3c,c11dd700,cc396d48) at fork_exit+0x74
fork_trampoline() at fork_trampoline+0x8

db> ps
  pid proc uid ppid pgrp flag stat wmesg wchan cmd
38438 c1ba3e200 38437 38384 0004000 [SLPQ 90evw 0xc19a28f0][SLP] df
38437 c1491c5c0 38393 38384 0004000 [SLPQ wait 0xc1491c5c][SLP] sh
38395 c12a0a980 38394 38384 0004000 [SLPQ piperd 0xc10b8600][SLP] mail
38394 c1ba31c40 38386 38384 000 [SLPQ wait 0xc1ba31c4][SLP] sh
38393 c12a08d40 38386 38384 000 [SLPQ wait 0xc12a08d4][SLP] sh
38386 c1ba3a980 38384 38384 0004000 [SLPQ wait 0xc1ba3a98][SLP] sh
38384 c11268d40 38380 38384 0004000 [SLPQ wait 0xc11268d4][SLP] sh
38380 c1ba61c40 433 433 000 [SLPQ piperd 0xc1341180][SLP] cron
35977 c129e1c4 1007 35975 35974 0004003 [SLPQ ttyin 0xc1061210][SLP] cache
35975 c129e54c 1007 35974 35974 0004002 [SLPQ wait 0xc129e54c][SLP] sh
35974 c0f5dc5c0 35973 35974 0004102 [SLPQ wait 0xc0f5dc5c][SLP] login
35973 c1491e200 551 35973 0004000 [SLPQ select 0xc06c4784][SLP] telnetd
35406 c1ba60000 497 35406 101 [RUNQ] cache
34146 c14911c40 451 451 101 [SLPQ select 0xc06c4784][SLP] smbd
 1133 c1ba30000 1 1133 0004002 [SLPQ ttyin 0xc1043810][SLP] getty
91944 c13423880 1 91944 0004002 [SLPQ ttyin 0xc0e9c810][SLP] getty
46479 c1342a980 1 0 204 [APU 0] smbiod1
  564 c129e3880 1 564 0004002 [SLPQ ttyin 0xc0e9c410][SLP] getty
  551 c125854c0 1 551 000 [SLPQ select 0xc06c4784][SLP] inetd
  531 c129ea980 497 531 102 [SLPQ select 0xc06c4784][SLP] cache
  530 c125ce200 497 530 102 [SLPQ select 0xc06c4784][SLP] cache
  527 c125ca980 497 527 102 [SLPQ accept 0xc126d2c2][SLP] cache
  525 c125c8d40 524 525 0004002 [SLPQ select 0xc06c4784][SLP] clmanager
  524 c125c7100 497 524 102 [SLPQ select 0xc06c4784][SLP] cache
  510 c129ec5c0 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  509 c129ee200 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  508 c12a0 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  507 c12587100 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  506 c108aa980 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  505 c1258e200 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  504 c125c3880 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  503 c108a0000 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  502 c0f5d1c40 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  501 c1258a980 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  500 c125c1c40 497 497 102 [SLPQ semwait 0xc0fb][SLP] cache
  497 c125c0000 1 497 0004002 [SLPQ msgwait 0xc0f89000][SLP] cache
  455 c108ae200 1 455 001 [SLPQ select 0xc06c4784][SLP] winbindd
  453 c108ac5c0 1 453 001 [SLPQ select 0xc06c4784][SLP] nmbd
  451 c12580000 1 451 001 [SLPQ select 0xc06c4784][SLP] smbd
  417 c0f5da980 1 417 100 [SLPQ select 0xc06c4784][SLP] sshd
  342 c0f5d710 64 340 340 100 [SLPQ bpf 0xc10d8100][SLP] pflogd
  340 c0f5d8d40 1 340 000 [SLPQ sbwait 0xc10d36ec][SLP] pflogd
  271 c0f5de200 1 271 000 [SLPQ select 0xc06c4784][SLP] syslogd
  253 c0f5d3880 1 253 [output truncated]
6.0-R random freezes
Hi! I have a machine that runs OpenBSD without any problems. When I put FreeBSD 6.0-R on this machine it started freezing randomly, with and without load. It does not respond to the keyboard (ctrl-alt-esc, ctrl-alt-del, with KDB in the kernel) or to the network. So I rebuilt the kernel with DDB, attached a serial console, and sent a break from minicom when it froze.

KDB: enter: Line break on console
[thread pid 22 tid 100011 ]
Stopped at kdb_enter+0x2b: nop

db> where
Tracing pid 22 tid 100011 td 0xc0d91300
kdb_enter(c064968f) at kdb_enter+0x2b
siointr1(c0e4f800) at siointr1+0xce
siointr(c0e4f800) at siointr+0x38
intr_execute_handlers(c067c140,c5a38c7c,c9f38068,c0e33400,6) at intr_execute_ha5
atpic_handle_intr(4) at atpic_handle_intr+0x96
Xatpic_intr4() at Xatpic_intr4+0x20
--- interrupt, eip = 0xc0484d03, esp = 0xc5a38cc0, ebp = 0xc5a38cd4 ---
lnc_tint(c0e33400) at lnc_tint+0x47
lncintr(c0e33400) at lncintr+0xf7
ithread_loop(c0d89680,c5a38d38) at ithread_loop+0x159
fork_exit(c04dd348,c0d89680,c5a38d38) at fork_exit+0x70
fork_trampoline() at fork_trampoline+0x8
--- trap 0x1, eip = 0, esp = 0xc5a38d6c, ebp = 0 ---

db> show lockedvnods
Locked vnodes

db> ps
  pid proc uid ppid pgrp flag stat wmesg wchan cmd
  695 c104fc480 692 695 0004002 [SLPQ ttyin 0xc0e4fc10][SLP] bash
  692 c0ec5418 10641 691 692 0004002 [SLPQ wait 0xc0ec5418][SLP] bash
  691 c104f20c0 1 691 0004102 [SLPQ wait 0xc104f20c][SLP] login
  475 c0f84c48 100 435 475 0004000 [SLPQ piperd 0xc0ef2330][SLP] unlinkd
  474 c0f8420c0 1 1 0004000 [SLPQ nanslp 0xc068752c][SLP] getty
  472 c0f844180 1 472 0004002 [SLPQ ttyin 0xc0e57010][SLP][SWAP] getty
  471 c0f840000 1 471 0004002 [SLPQ ttyin 0xc0e56010][SLP][SWAP] getty
  470 c0f846240 1 470 0004002 [SLPQ ttyin 0xc0e55c10][SLP][SWAP] getty
  435 c0ec6624 100 433 433 0004000 [SLPQ select 0xc068bf64][SLP] squid
  433 c0ec620c 100 1 433 000 [SLPQ wait 0xc0ec620c][SLP][SWAP] squid
  432 c0ec6c48 1100 418 432 0004002 [SLPQ select 0xc068bf64][SLP] mlnet-real
  418 c0ec68300 143 0004102 [SLPQ wait 0xc0ec6830][SLP][SWAP] su
  398 c0e138300 1 398 000 [SLPQ nanslp 0xc068752c][SLP] cron
  385 c0e136240 1 385 100 [SLPQ select 0xc068bf64][SLP] sshd
  242 c0e13a3c0 1 242 000 [SLPQ select 0xc068bf64][SLP][SWAP] devd
  151 c0ec520c0 1 151 000 [SLPQ pause 0xc0ec5240][SLP][SWAP] adjkz
   42 c0ec58300 0 0 204 [SLPQ - 0xc5e85d08][SLP] schedcpu
   41 c0ec5a3c0 0 0 204 [SLPQ syncer 0xc068729c][SLP] syncer
   40 c0ec5c480 0 0 204 [SLPQ vlruwt 0xc0ec5c48][SLP] vnlru
   39 c0ec60000 0 0 204 [SLPQ psleep 0xc068c4ac][SLP] bufdaemon
   38 c0db4c480 0 0 20c [SLPQ pgzero 0xc068fc24][SLP] pagezero
   37 c0e120000 0 0 204 [SLPQ psleep 0xc068f774][SLP] vmdaemon
   36 c0e1220c0 0 0 204 [SLPQ psleep 0xc068f730][SLP] pagedaemon
   35 c0e124180 0 0 204 [IWAIT] swi0: sio
   34 c0e126240 0 0 204 [SLPQ - 0xc0d8203c][SLP] fdc0
    9 c0e128300 0 0 204 [SLPQ - 0xc0e20400][SLP] thread taskq
   33 c0e12a3c0 0 0 204 [IWAIT] swi6:+
   32 c0e12c480 0 0 204 [IWAIT] swi6: task queue
    8 c0e130000 0 0 204 [SLPQ - 0xc0d8a480][SLP] acpi_task2
    7 c0e1320c0 0 0 204 [SLPQ - 0xc0d8a480][SLP] acpi_task1
    6 c0e134180 0 0 204 [SLPQ - 0xc0d8a480][SLP] acpi_task0
    5 c0da56240 0 0 204 [SLPQ - 0xc0d8a580][SLP] kqueue taskq
   31 c0da58300 0 0 204 [IWAIT] swi5:+
   30 c0da5a3c0 0 0 204 [SLPQ - 0xc0683fe0][SLP] yarrow
    4 c0da5c480 0 0 204 [SLPQ - 0xc0684a48][SLP] g_down
    3 c0db40000 0 0 204 [SLPQ - 0xc0684a44][SLP] g_up
    2 c0db420c0 0 0 204 [SLPQ - 0xc0684a3c][SLP] g_event
   28 c0db46240 0 0 204 [IWAIT] swi3: vmt
   27 c0db48300 0 0 20c [RUNQ] swi4: clock sio
   25 c0d9020c0 0 0 204 [IWAIT] irq14: ata0
   24 c0d904180 0 0 204 [IWAIT] irq13:
   23 c0d906240 0 0 204 [IWAIT] irq12:
   22 c0d908300 0 0 204 [APU 0] irq11: lnc0
   21 c0d90a3c0 0 0 204 [IWAIT] irq10:
   20 c0d90c480 0 0 204 [IWAIT] irq9: acpi0
   19 c0da50000 0 0 204 [IWAIT] irq8: rtc
   18 c0da520c0 0 0 204 [IWAIT] irq7: ppc0
   17 c0da54180 0 0 204 [IWAIT] irq6: fdc0
   16 c0d8b0000 0 0 204 [IWAIT] irq5:
   15 c0d8b20c0 0 0 204 [IWAIT] irq4: sio0
   14 c0d8b4180 0 0 204 [IWAIT] irq3:
   13 c0d8b6240 0 0 204 [IWAIT] irq1: atkbd0
   12 c0d8b8300 0 0 204 [IWAIT] irq0: clk
   11 c0d8ba3c0 0 0 20c [Can run] idle
    1 c0d8bc480 0 [output truncated]
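For anyone reproducing the serial-break setup described above, this is a sketch of the kernel options involved; the option names are as documented for 6.x GENERIC/NOTES, so verify them against your own source tree:

```
options KDB                 # debugger framework
options DDB                 # interactive in-kernel debugger
options BREAK_TO_DEBUGGER   # a BREAK on the serial console enters ddb

# Then boot with the serial console active, e.g. boot -h at the loader
# prompt, or console="comconsole" in /boot/loader.conf, and send the
# break from the terminal program (Ctrl-A F in minicom).
```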
permanent per month panic on 5.4-p4
Approximately once a month or two this server panics. We use it for InterSystems Cache` (under linux emulation) and as a samba server (not heavily loaded).

# dmesg -a can be found at http://www.dp.uz.gov.ua/dmesg.isc-cache
kernel config - http://www.dp.uz.gov.ua/kernel.isc-cache

# uname -a
FreeBSD isc-cache.dp.uz.gov.ua 5.4-RELEASE-p4 FreeBSD 5.4-RELEASE-p4 #1: Fri Nov 4 11:37:34 EET 2005 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/ISC-CACHE_KERNEL i386

# kgdb kernel.debug_5.4-p4_2005-11-04 vmcore.6
[GDB will not be able to debug user-mode threads: /usr/lib/libthread_db.so: Undefined symbol "ps_pglobal_lookup"]
#0  doadump () at pcpu.h:159
159     pcpu.h: No such file or directory.
        in pcpu.h
(kgdb) where
#0  doadump () at pcpu.h:159
#1  0xc050a102 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:410
#2  0xc050a3c8 in panic (fmt=0xc0656ebc "%s") at /usr/src/sys/kern/kern_shutdown.c:566
#3  0xc063ca14 in trap_fatal (frame=0xcbe69ae4, eva=0) at /usr/src/sys/i386/i386/trap.c:817
#4  0xc063c141 in trap (frame={tf_fs = 24, tf_es = -1061093360, tf_ds = -1050869744, tf_edi = -1052353024, tf_esi = -1053737324, tf_ebp = -874079452, tf_isp = -874079472, tf_ebx = 0, tf_edx = -1057524608, tf_ecx = -1053737324, tf_eax = 0, tf_trapno = 12, tf_err = 0, tf_eip = -1068333178, tf_cs = 8, tf_eflags = 65606, tf_esp = -874079428, tf_ss = -1068492228}) at /usr/src/sys/i386/i386/trap.c:255
#5  0xc062f0fa in calltrap () at /usr/src/sys/i386/i386/exception.s:140
#6  0x0018 in ?? ()
#7  0xc0c10010 in ?? ()
#8  0xc15d0010 in ?? ()
#9  0xc1465e00 in ?? ()
#10 0xc1313e94 in ?? ()
#11 0xcbe69b24 in ?? ()
#12 0xcbe69b10 in ?? ()
#13 0x in ?? ()
#14 0xc0f77480 in ?? ()
#15 0xc1313e94 in ?? ()
#16 0x in ?? ()
#17 0x000c in ?? ()
#18 0x in ?? ()
#19 0xc0528786 in turnstile_head (ts=0x0) at /usr/src/sys/kern/subr_turnstile.c:763
#20 0xc0501a3c in _mtx_unlock_sleep (m=0xc1313e94, opts=0, file=0xc1db7ec2 "/usr/src/sys/modules/smbfs/../../netsmb/smb_iod.c", line=97) at /usr/src/sys/kern/kern_mutex.c:659
#21 0xc050188a in _mtx_unlock_flags (m=0x0, opts=0, file=0xc1db7ec2 "/usr/src/sys/modules/smbfs/../../netsmb/smb_iod.c", line=97) at /usr/src/sys/kern/kern_mutex.c:364
#22 0xc1dabe27 in ?? ()
#23 0xc1313e94 in ?? ()
#24 0x in ?? ()
#25 0xc1db7ec2 in ?? ()
#26 0x0061 in ?? ()
#27 0xc1465e00 in ?? ()
#28 0xc1a14800 in ?? ()
#29 0xc1313e00 in ?? ()
---Type <return> to continue, or q <return> to quit---
#30 0xcbe69b8c in ?? ()
#31 0xc1dabef6 in ?? ()
#32 0xc1465e00 in ?? ()
#33 0xc1465e00 in ?? ()
#34 0xc1465e00 in ?? ()
#35 0xcbe69ba4 in ?? ()
#36 0xc1dac766 in ?? ()
#37 0xc1465e00 in ?? ()
#38 0xc19abe00 in ?? ()
#39 0xc19abe88 in ?? ()
#40 0xc1313e00 in ?? ()
#41 0xcbe69bbc in ?? ()
#42 0xc1daa261 in ?? ()
#43 0xc1313e00 in ?? ()
#44 0xc1313e00 in ?? ()
#45 0x in ?? ()
#46 0xc1a14800 in ?? ()
#47 0xcbe69bd4 in ?? ()
#48 0xc1daa202 in ?? ()
#49 0xc1313e00 in ?? ()
#50 0xc1313e18 in ?? ()
#51 0xc19ff9e0 in ?? ()
#52 0xc1a14800 in ?? ()
#53 0xcbe69ce4 in ?? ()
#54 0xc1da8f12 in ?? ()
#55 0xc1313e00 in ?? ()
#56 0xc1313e00 in ?? ()
#57 0xc1313e18 in ?? ()
#58 0xc1a14800 in ?? ()
#59 0xc1db7cd9 in ?? ()
#60 0x in ?? ()
#61 0x in ?? ()
#62 0x0001 in ?? ()
#63 0x in ?? ()
#64 0x in ?? ()
#65 0xc1313e00 in ?? ()
#66 0xc1da79c3 in ?? ()
#67 0xc15da288 in ?? ()
---Type <return> to continue, or q <return> to quit---
#68 0x in ?? ()
#69 0xcbe69c30 in ?? ()
#70 0x in ?? ()
#71 0xcbe69c2c in ?? ()
#72 0xc0519e6a in sched_choose () at /usr/src/sys/kern/sched_4bsd.c:1137
Previous frame inner to this frame (corrupt stack?)

What additional information can I provide? Thanks!
--
Best regards,
Palij Oleg, ISC (Pridn railway)
xmpp://[EMAIL PROTECTED]
Re: permanent per month panic on 5.4-p4
Thu, Dec 08, 2005 at 02:44:40AM +0200, [EMAIL PROTECTED] wrote:
> On Wed, Dec 07, 2005 at 05:18:21PM +0200, Oleg Palij wrote:
>> Approximately once per month or two this server panics. We use it for
>> InterSystems Cache` (under linux emulation) and as samba-server (not
>> heavy-loaded).
>> # dmesg -a can be found at http://www.dp.uz.gov.ua/dmesg.isc-cache
>> kernel config - http://www.dp.uz.gov.ua/kernel.isc-cache
>
> This is a samba server? It looks like you're using smbfs.

Yes, we use it as a samba server, and yes, we use smbfs:

# mount | grep smbfs
//[EMAIL PROTECTED]/FTPEXCHANGE on /mnt/exchange (smbfs)
//[EMAIL PROTECTED]/FREEBSD on /mnt/FreeBSD (smbfs)

--
Best regards,
Palij Oleg, ISC (Pridn railway)
xmpp://[EMAIL PROTECTED]
Re: permanent per month panic on 5.4-p4
Wed, Dec 07, 2005 at 10:31:32PM +0200, [EMAIL PROTECTED] wrote:
> On Wed, Dec 07, 2005 at 05:18:21PM +0200, Oleg Palij wrote:
>> Approximately once per month or two this server panics. We use it for
>> InterSystems Cache` (under linux emulation) and as samba-server (not
>> heavy-loaded).
>> #5  0xc062f0fa in calltrap () at /usr/src/sys/i386/i386/exception.s:140
>> #6  0x0018 in ?? ()
>> #7  0xc0c10010 in ?? ()
>> #8  0xc15d0010 in ?? ()
>> #9  0xc1465e00 in ?? ()
>> #10 0xc1313e94 in ?? ()
>> #11 0xcbe69b24 in ?? ()
>> #12 0xcbe69b10 in ?? ()
>> #13 0x in ?? ()
>> #14 0xc0f77480 in ?? ()
>> #15 0xc1313e94 in ?? ()
>> #16 0x in ?? ()
>> #17 0x000c in ?? ()
>> #18 0x in ?? ()
>
> Unfortunately this trace looks corrupted. Are you building your kernel
> with -O2?

I guess not:

# cat /etc/make.conf
WITHOUT_X11=yes
WITHOUT_GUI=yes
DISTDIR=/data/install/FreeBSD/distfiles
NO_INET6=yes
#NO_MODULES=1
# added by use.perl 2005-10-12 09:20:03
PERL_VER=5.8.7
PERL_VERSION=5.8.7
# set | grep -- '-O'
#

I built the kernel with "make kernel KERNCONF=...". Also, I noticed that on my work computer (6.0-R) all the dumps I obtained seem to be corrupted too. I cannot even guess why this happens.

--
Best regards,
Palij Oleg, ISC (Pridn railway)
xmpp://[EMAIL PROTECTED]