panic messages:
---
panic: lockmgr: locking against myself
panic: from debugger
Uptime: 10m26s
Dumping 191 MB
16 32 48 64 80 96 112 128 144 160 176
---
Reading symbols from
/usr/src/sys/i386/compile/AKLEMM/modules/usr/src/sys/modules/acpi/acpi.ko.debug...done.
Loaded symbols for
/usr/src
I had a similar panic on 5.0-R. Do you happen to have any snapshots on any of the filesystems to which you specified
(kgdb) exec-file kernel
(kgdb) core-file /var/crash/vmcore.0
panic messages:
---
panic: lockmgr: locking against myself
syncing disks, buffers remaining... 882 882 880 880 880 880 880 880 880
880 880 822 823 822 822 822 822 822 unknown: device timeout
unknown: DMA timeout
824 822 827 822 822 822 822 822 822 822 822 822 822 822 822 822 822 822
822 822
Jeff Roberson writes:
On Mon, 17 Mar 2003, Andrew Gallatin wrote:
I'm seeing the following panic under heavy NFS client usage on an SMP
w/kernel sources from Weds. evening. Has this been fixed?
Thanks,
Drew
I believe that is fixed in nfs_vnops.c 1.200.
Yep, it
panic: lockmgr: locking against myself
cpuid = 0; lapic.id =
Debugger(panic)
Stopped at Debugger+0x55: xchgl %ebx,in_Debugger.0
db> t
Andrew Gallatin wrote:
I'm seeing the following panic under heavy NFS client usage on an SMP
w/kernel sources from Weds. evening. Has this been fixed?
If I'm not mistaken, this is the problem Jeff fixed in revision 1.134 of
vfs_cluster.c.
Cheers,
Maxime
To Unsubscribe: send mail to [EMAIL PROTECTED]
Maxime Henrion writes:
If I'm not mistaken, this is the problem Jeff fixed in revision 1.134 of
vfs_cluster.c.
Great!
flag@newluxor$ uname -a
FreeBSD newluxor.skynet.org 5.0-RC FreeBSD 5.0-RC #6: Sun Dec 8 13:45:45 CET 2002
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/NEWLUXOR i386
flag@newluxor$
I get a panic every time I launch tin to read the newsgroup:
panic: lockmgr: locking against myself
syncing disks, buffers remaining... panic: bdwrite: buffer is not busy
and here is the dump:
Script started on Sun Dec 8 15:22:34 2002
bash-2.05b# gdb -k /usr/obj/usr/src/sys/NEWLUXOR/kernel.debug vmcore.0
GNU gdb 5.2.1 (FreeBSD
Hi!
During local package initialization I get a panic. World and kernel from today.
This I found in messages:
Sep 30 15:49:55 leeloo kernel: panic: lockmgr: locking against myself
Sep 30 15:49:55 leeloo kernel: syncing disks... panic: bremfree: bp 0xd381a080
This problem is gone with today's kernel.
Marc
In message [EMAIL PROTECTED], Don Lewis writes:
After looking at ufs_inactive(), I'd like to add a fourth proposal
Ian Dowse wrote:
And I've just remembered a fifth :-) I think the old BSD code had
both an `open' count and a reference count. The open count is a
count of the real users of the vnode (it is what ufs_inactive really
wants to compare against 0), and the reference count is just for
places that
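The two-counter scheme Ian describes can be sketched as a toy in C. The struct and function names below are invented for this illustration and do not correspond to any real FreeBSD fields; the point is only that inactivation keys off the count of real users, not transient references:

```c
#include <assert.h>

/* Two counters, as in the old-BSD scheme: an open count of real
 * users (what ufs_inactive really wants to compare against 0) and a
 * reference count for transient holders such as name-cache entries
 * or in-progress lookups. */
struct toy_vnode {
    int opencount;   /* real users of the vnode */
    int refcount;    /* transient references only */
};

/* Inactivation keys off the open count alone, so a transient
 * reference taken during cleanup cannot retrigger it. */
static int toy_should_inactivate(const struct toy_vnode *vp)
{
    return vp->opencount == 0;
}
```

With this split, a lookup holding three transient references but no opens would still allow inactivation, while a single open file would block it.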
On 13 Sep, Ian Dowse wrote:
For example, if you hold the reference count at 1 while calling the
cleanup function, it allows that function to safely add and drop
references, but if that cleanup function has a bug that drops one
too many references then you end up recursing instead of
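The hazard Ian points out can be made concrete with a toy release routine; everything here is invented for the sketch, not real kernel code. Holding the count at 1 lets the cleanup add and drop references safely, and the explicit depth check turns the off-by-one-drop bug into an assertion failure instead of silent recursion:

```c
#include <assert.h>

struct toy_vnode {
    int refcount;
    int cleanup_depth;   /* detects re-entry into cleanup */
};

static void toy_release(struct toy_vnode *vp);

static void toy_cleanup(struct toy_vnode *vp)
{
    vp->cleanup_depth++;
    /* A correct cleanup may freely add and drop references while the
     * caller holds the count at 1: */
    vp->refcount++;
    toy_release(vp);
    vp->cleanup_depth--;
}

static void toy_release(struct toy_vnode *vp)
{
    assert(vp->refcount > 0);            /* catches an extra drop */
    if (vp->refcount == 1) {
        /* Count is held at 1 while cleanup runs.  A buggy cleanup
         * that drops one reference too many would land back here
         * with refcount == 1 and recurse; the depth check makes
         * that recursion a detectable bug. */
        assert(vp->cleanup_depth == 0);
        toy_cleanup(vp);
    }
    vp->refcount--;
}
```

The inner `toy_release()` sees a count of 2 and takes the plain-decrement path, so cleanup runs exactly once.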
I've gotten a couple of "lockmgr: locking against myself" panics today and
it seems to be somewhat reproducible. I had the serial console hooked
up for the last one, so I was able to get a stack trace:
panic: lockmgr: locking against myself
Debugger(panic)
Stopped at Debugger+0x45: xchgl
On 10 Sep, Don Lewis wrote:
It looks like the call to vrele() from vn_close() is executing the
following code:
if (vp->v_usecount == 1) {
        vp->v_usecount--;
        /*
         * We must call VOP_INACTIVE with the node locked.
         * If
->a_vp->v_interlock);
Deepankar
In message [EMAIL PROTECTED], Don Lewis writes:
A potentially better solution just occurred to me. It looks like it
would be better if vrele() waited to decrement v_usecount until *after*
the call to VOP_INACTIVE() (and after the call to VI_LOCK()). If that
were done, nfs_inactive() wouldn't
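The ordering Don proposes can be sketched with a toy reference counter. This is only an illustration under invented names, not the real vrele()/VOP_INACTIVE machinery (which also involves the vnode interlock and hold count), but it shows why decrementing after the inactive call avoids the re-entry that ends in the lockmgr panic:

```c
#include <assert.h>

/* Toy vnode: "usecount" stands in for v_usecount; everything else
 * about the real struct vnode is omitted. */
struct toy_vnode {
    int usecount;
    int inactive_depth;   /* detects re-entry into the inactive path */
};

static void toy_inactive(struct toy_vnode *vp);

/* Sketch of the proposed ordering: run the inactive routine *before*
 * dropping the last reference.  A nested reference taken and released
 * inside toy_inactive() then never sees the count fall to zero, so it
 * cannot re-enter the inactive path and try to take the vnode lock
 * recursively. */
static void toy_vrele(struct toy_vnode *vp)
{
    if (vp->usecount == 1) {
        toy_inactive(vp);   /* reference still held here */
        vp->usecount--;     /* decrement only afterwards */
    } else {
        vp->usecount--;
    }
}

static void toy_inactive(struct toy_vnode *vp)
{
    vp->inactive_depth++;
    assert(vp->inactive_depth == 1);  /* would fire on recursion */
    /* A cleanup that briefly takes and drops its own reference: */
    vp->usecount++;
    toy_vrele(vp);   /* usecount is 2 here, so no recursion */
    vp->inactive_depth--;
}
```

Under the original ordering (decrement first), the nested release inside the inactive routine would see the count hit zero and re-enter it, which is the recursion behind "lockmgr: locking against myself".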
.
Additional TCP options:.
Starting background filesystem checks
Sun Jul 7 05:41:23 PDT 2002
FreeBSD/i386 (freebeast.catwhisker.org) (cuaa0)
login: panic: lockmgr: locking against myself
cpuid = 0; lapic.id =
Debugger(panic)
Stopped at Debugger+0x46: xchgl %ebx,in_Debugger.0
pid 232 (newfs), uid 0: exited on signal 8 (core dumped)
apache cvsupd
This is exactly what's