Re: cvs commit: src/sys/kern vfs_subr.c vfs_vnops.c src/sys/sys
On 19-Dec-01 Matthew Dillon wrote:
>:> The structure is being bzero()'d before its dynamic flag gets checked.
>:> I've included a patch below.  Josef, I would appreciate it if you would
>:> apply the patch and try your system with the various procfs devices
>:> mounted again.  It's an obvious bug so I'm committing it to -current now,
>:> the question is: Is it the *only* bug?
>:>
>:>                                        -Matt
>:
>:Hmm, why bzero at all if you are just going to free it?  Why not move the
>:bzero to an else after the ISDYNSTRUCT check?  (Not that this is really
>:all that important, but... :)
>:
>:--
>:
>:John Baldwin <[EMAIL PROTECTED]>  <><  http://www.FreeBSD.org/~jhb/
>
>     He is invalidating the structure to catch references to deleted sbufs.
>     (see the assert_sbuf_integrity() calls).

Fair enough.  In theory an INVARIANTS kernel with 0xdeadc0de should do that
for you though. :)

--

John Baldwin <[EMAIL PROTECTED]>  <><  http://www.FreeBSD.org/~jhb/
"Power Users Use the Power to Serve!"  -  http://www.FreeBSD.org/

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-current" in the body of the message
Re: cvs commit: src/sys/kern vfs_subr.c vfs_vnops.c src/sys/sys vnode.h
On Wed, Dec 19, 2001 at 11:05:22AM -0800, Matthew Dillon wrote:
>     The structure is being bzero()'d before its dynamic flag gets checked.
>     I've included a patch below.  Josef, I would appreciate it if you would
>     apply the patch and try your system with the various procfs devices
>     mounted again.  It's an obvious bug so I'm committing it to -current
>     now, the question is: Is it the *only* bug?

Patching now - I'll let you know how it fares.  Thanks :)

Joe

[Attachment: pgp0.pgp - PGP signature]
Re: cvs commit: src/sys/kern vfs_subr.c vfs_vnops.c src/sys/sys
:> The structure is being bzero()'d before its dynamic flag gets checked.
:> I've included a patch below.  Josef, I would appreciate it if you would
:> apply the patch and try your system with the various procfs devices
:> mounted again.  It's an obvious bug so I'm committing it to -current now,
:> the question is: Is it the *only* bug?
:>
:>                                        -Matt
:
:Hmm, why bzero at all if you are just going to free it?  Why not move the
:bzero to an else after the ISDYNSTRUCT check?  (Not that this is really
:all that important, but... :)
:
:--
:
:John Baldwin <[EMAIL PROTECTED]>  <><  http://www.FreeBSD.org/~jhb/

    He is invalidating the structure to catch references to deleted sbufs.
    (see the assert_sbuf_integrity() calls).

    Josef, I take back my last request... I think there's another bug.
    Your vmstat -m output showed that both the sbuf pool and the VFS
    cache pool were blown up.  The fix I just made will probably only
    solve the sbuf pool issue.  If you still want to test it, observe
    both pools in the vmstat -m output carefully!

                                        -Matt
                                        Matthew Dillon
                                        <[EMAIL PROTECTED]>
Re: cvs commit: src/sys/kern vfs_subr.c vfs_vnops.c src/sys/sys
On 19-Dec-01 Matthew Dillon wrote:
>
>:> hmm.. ok, there are some subsystems using sbuf's:
>:>
>:>     linprocfs
>:>     procfs
>:>     pseudofs
>:>
>:> I think someone may have broken something in pseudofs, procfs,
>:> and/or linprocfs that is causing the VFS cache and sbuf MALLOC
>:> area to run away.
>:>
>:> Anybody have any ideas?
>:
>:This certainly appears to be the case.  I've unmounted procfs and
>:linprocfs and I've got an uptime of 5 hours now.  Result! :)
>:
>:Joe
>
>     Excellent!  Can you narrow the problem down further, to either
>     procfs or linprocfs?
>
>     I think I *may* have found it.  Or at least I've found one error.
>     There could be more:
>
> void
> sbuf_delete(struct sbuf *s)
> {
>         assert_sbuf_integrity(s);
>         /* don't care if it's finished or not */
>
>         if (SBUF_ISDYNAMIC(s))
>                 SBFREE(s->s_buf);
>         bzero(s, sizeof *s);
>         if (SBUF_ISDYNSTRUCT(s))
>                 SBFREE(s);
> }
>
>     The structure is being bzero()'d before its dynamic flag gets
>     checked.  I've included a patch below.  Josef, I would appreciate
>     it if you would apply the patch and try your system with the
>     various procfs devices mounted again.  It's an obvious bug so I'm
>     committing it to -current now, the question is: Is it the *only*
>     bug?
>
>                                        -Matt

Hmm, why bzero at all if you are just going to free it?  Why not move the
bzero to an else after the ISDYNSTRUCT check?  (Not that this is really
all that important, but... :)

--

John Baldwin <[EMAIL PROTECTED]>  <><  http://www.FreeBSD.org/~jhb/
"Power Users Use the Power to Serve!"  -  http://www.FreeBSD.org/
Re: cvs commit: src/sys/kern vfs_subr.c vfs_vnops.c src/sys/sys vnode.h
:> hmm.. ok, there are some subsystems using sbuf's:
:>
:>     linprocfs
:>     procfs
:>     pseudofs
:>
:> I think someone may have broken something in pseudofs, procfs,
:> and/or linprocfs that is causing the VFS cache and sbuf MALLOC
:> area to run away.
:>
:> Anybody have any ideas?
:
:This certainly appears to be the case.  I've unmounted procfs and
:linprocfs and I've got an uptime of 5 hours now.  Result! :)
:
:Joe

    Excellent!  Can you narrow the problem down further, to either
    procfs or linprocfs?

    I think I *may* have found it.  Or at least I've found one error.
    There could be more:

void
sbuf_delete(struct sbuf *s)
{
        assert_sbuf_integrity(s);
        /* don't care if it's finished or not */

        if (SBUF_ISDYNAMIC(s))
                SBFREE(s->s_buf);
        bzero(s, sizeof *s);
        if (SBUF_ISDYNSTRUCT(s))
                SBFREE(s);
}

    The structure is being bzero()'d before its dynamic flag gets
    checked.  I've included a patch below.  Josef, I would appreciate
    it if you would apply the patch and try your system with the
    various procfs devices mounted again.  It's an obvious bug so I'm
    committing it to -current now, the question is: Is it the *only*
    bug?

                                        -Matt

Index: kern/subr_sbuf.c
===================================================================
RCS file: /home/ncvs/src/sys/kern/subr_sbuf.c,v
retrieving revision 1.13
diff -u -r1.13 subr_sbuf.c
--- kern/subr_sbuf.c	10 Dec 2001 05:51:45 -0000	1.13
+++ kern/subr_sbuf.c	19 Dec 2001 19:01:26 -0000
@@ -461,12 +461,15 @@
 void
 sbuf_delete(struct sbuf *s)
 {
+	int isdyn;
+
 	assert_sbuf_integrity(s);
 	/* don't care if it's finished or not */
 
 	if (SBUF_ISDYNAMIC(s))
 		SBFREE(s->s_buf);
+	isdyn = SBUF_ISDYNSTRUCT(s);
 	bzero(s, sizeof *s);
-	if (SBUF_ISDYNSTRUCT(s))
+	if (isdyn)
 		SBFREE(s);
 }
Re: cvs commit: src/sys/kern vfs_subr.c vfs_vnops.c src/sys/sys vnode.h
On Wed, Dec 19, 2001 at 12:02:49AM -0800, Matthew Dillon wrote:
>     hmm.. ok, there are some subsystems using sbuf's:
>
>         linprocfs
>         procfs
>         pseudofs
>
>     I think someone may have broken something in pseudofs, procfs,
>     and/or linprocfs that is causing the VFS cache and sbuf MALLOC
>     area to run away.
>
>     Anybody have any ideas?

This certainly appears to be the case.  I've unmounted procfs and
linprocfs and I've got an uptime of 5 hours now.  Result! :)

Joe

[Attachment: pgp0.pgp - PGP signature]
Re: cvs commit: src/sys/kern vfs_subr.c vfs_vnops.c src/sys/sys vnode.h
    hmm.. ok, there are some subsystems using sbuf's:

        linprocfs
        procfs
        pseudofs

    I think someone may have broken something in pseudofs, procfs,
    and/or linprocfs that is causing the VFS cache and sbuf MALLOC
    area to run away.

    Anybody have any ideas?

                                        -Matt
Re: cvs commit: src/sys/kern vfs_subr.c vfs_vnops.c src/sys/sys vnode.h
    This is the problem:

    vfscache  781251  49097K  73745K  85234K  2411259     0     0  64,128,256,256K

    The VFS cache is running away.  When it hits the 85MB kernel malloc
    limit the machine will lock up.  The sbuf usage is also scary:

    sbuf     1074836  33589K  33590K  85234K  2149672     0     0  32,1K,4K

    I am going to add freebsd-current back to the list, maybe this will
    ring a bell with someone else.  Something is obviously blowing up,
    but I have no idea what.

                                        -Matt
                                        Matthew Dillon
                                        <[EMAIL PROTECTED]>

:Memory statistics by type                          Type  Kern
:        Type  InUse MemUse HighUse  Limit Requests Limit Limit Size(s)
:       USBHC      3     1K      1K  85234K        3     0     0  32
:      USBdev      2     3K      3K  85234K        5     0     0  128,512,2K
:         USB     28    24K     26K  85234K   755711     0     0  16,32,64,128,256,512,1K,2K,4K
:       linux     13     1K      1K  85234K       13     0     0  32
: pfs_vncache    205     7K      7K  85234K     2568     0     0  32
:  pfs_fileno      2    40K     40K  85234K        2     0     0  32K
:   pfs_nodes     44     6K      6K  85234K       44     0     0  128
:    acpitask      0     0K      1K  85234K     1407     0     0  32
:      acpica   1583   209K    223K  85234K    94060     0     0  16,64,128,256,512,1K,2K,8K
:     acpidev     55     1K      1K  85234K       55     0     0  16,32
:     acpipwr      2     1K      1K  85234K        2     0     0  32
:   acpicmbat      5     1K      1K  85234K        5     0     0  16,128,256
:    acpibatt      2     1K      1K  85234K        2     0     0  16
:         agp      1     1K      1K  85234K        1     0     0  16
:    atkbddev      2     1K      1K  85234K        2     0     0  32
:    nexusdev      5     1K      1K  85234K        5     0     0  16
:     memdesc      1     4K      8K  85234K       11     0     0  32,4K
:        ZONE     14     2K      2K  85234K       14     0     0  128
:   VM pgdata      1    64K     64K  85234K        1     0     0  64K
: ISOFS mount      1   128K    128K  85234K        1     0     0  128K
:      isadev     27     2K      2K  85234K       27     0     0  64
: MSDOSFS FAT      1   128K    128K  85234K        1     0     0  128K
:   UFS mount     15    30K     42K  85234K       19     0     0  256,2K,4K,8K,16K
:   UFS ihash      1   128K    128K  85234K        1     0     0  128K
: UFS dirhash    453   102K    104K  85234K      870     0     0  16,32,64,128,256,512,1K,2K
:    FFS node   9750  2438K   2495K  85234K   235134     0     0  256
:   newdirblk      0     0K      1K  85234K        3     0     0  16
:      dirrem      0     0K      3K  85234K     2139     0     0  32
:       mkdir      1     1K      1K  85234K       28     0     0  32
:      diradd      7     1K      2K  85234K     2183     0     0  32
:    freefile      0     0K      3K  85234K     1286     0     0  32
:    freeblks      0     0K     13K  85234K     1419     0     0  128
:    freefrag      0     0K     24K  85234K    30536     0     0  32
:  allocindir      0     0K   1289K  85234K   249790     0     0  64
:    indirdep      0     0K   1027K  85234K     1798     0     0  32,8K
: allocdirect      7     1K     21K  85234K     9917     0     0  64
:   bmsafemap      2     1K      3K  85234K     1904     0     0  32
:      newblk      1     1K      1K  85234K   259708     0     0  32,256
:    inodedep     11   130K    146K  85234K     3175     0     0  128,128K
:     pagedep      4    17K     18K  85234K      895     0     0  64,16K
:    p1003.1b      1     1K      1K  85234K        1     0     0  16
:MSDOSFS mount      2    65K     65K  85234K        2     0     0  1K,64K
: MSDOSFS node      0     0K      1K  85234K        1     0     0  128
:       DEVFS    150    25K     25K  85234K      150     0     0  16,128,8K
:   AD driver      1     1K      2K  85234K   170069     0     0  64,1K
:   in6_multi      6     1K      1K  85234K        6     0     0  64
:    syncache      1     8K      8K  85234K        1     0     0  8K
:   tseg_qent      0     0K      2K  85234K     8833     0     0  32
:    in_multi      2     1K      1K  85234K        2     0     0  32
:    routetbl     49     7K     13K  85234K      133     0     0  16,32,64,128,256
:          lo      1     1K      1K  85234K        1     0     0  512
: ether_multi     28     2K      2K  85234K       28     0     0  16,32,64
:      ifaddr     23     8K      8K  85234K       24     0     0  32,256,512,2K
:         BPF      7     9K      9K  85234K        7     0     0  128,256,4K
:      vnodes     25     6K      6K  85234K      308     0     0  16,32,64,128,256
:       mount      9     5K      5K  85234K       11     0     0  16,128,512
:cluster_save buffer  0  0K      1K  85234K     4120     0     0  32,64,128
:    vfscache 781251 49097K  73745K  85234K  2411259     0     0  6