Hey Sam,

Here is some more information. If you want more of the output I cut, I can
supply it, but it filled dmesg. I've also attached pvfs2-config.h.

dmesg output with the debugging turned on:

.
.
.
pvfs2_put_inode: pvfs2_inode: f4fb2d08 (inode = 2147483621) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f4fb2d08 destroying inode 2147483621
pvfs2_put_inode: pvfs2_inode: f532754c (inode = 1048576) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f532754c destroying inode 1048576
pvfs2_kill_sb: (WARNING) number of inode allocs (4100) != number of inode
deallocs (2542)
pvfs2_kill_sb: returning normally
pvfs2_kill_sb: called
Removing SB f6b54a00 from pvfs2 superblocks
pvfs2_put_inode: pvfs2_inode: f4e127e0 (inode = 1073741824) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f4e127e0 destroying inode 1073741824
pvfs2_put_inode: pvfs2_inode: f4e1254c (inode = 2147483622) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f4e1254c destroying inode 2147483622
.
.
.
pvfs2_put_inode: pvfs2_inode: f4e14024 (inode = 1073741784) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f4e14024 destroying inode 1073741784
pvfs2_put_inode: pvfs2_inode: f4e12a74 (inode = 2147483621) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f4e12a74 destroying inode 2147483621
pvfs2_put_inode: pvfs2_inode: f5091024 (inode = 1048576) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f5091024 destroying inode 1048576
pvfs2_kill_sb: (WARNING) number of inode allocs (4100) != number of inode
deallocs (2583)
pvfs2_kill_sb: returning normally
pvfs2_kill_sb: called
Removing SB c3695a00 from pvfs2 superblocks
pvfs2_put_inode: pvfs2_inode: f4fbd54c (inode = 1073741824) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f4fbd54c destroying inode 1073741824
pvfs2_put_inode: pvfs2_inode: f4fbd2b8 (inode = 2147483622) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f4fbd2b8 destroying inode 2147483622
pvfs2_put_inode: pvfs2_inode: f4fc6d08 (inode = 1073741768) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f4fc6d08 destroying inode 1073741768
.
.
.
pvfs2_put_inode: pvfs2_inode: f4fbd7e0 (inode = 2147483621) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f4fbd7e0 destroying inode 2147483621
pvfs2_put_inode: pvfs2_inode: f6c712b8 (inode = 1048576) = 1 (nlink=1)
pvfs2_destroy_inode: deallocated f6c712b8 destroying inode 1048576
pvfs2_kill_sb: (WARNING) number of inode allocs (4100) != number of inode
deallocs (2624)
pvfs2_kill_sb: returning normally
pvfs2_kill_sb: called
pvfs2_put_inode: pvfs2_inode: f6e80d08 (inode = 1073741778) = 2 (nlink=1)
pvfs2_put_inode: pvfs2_inode: f6e80a74 (inode = 1073741776) = 2 (nlink=1)
pvfs2_put_inode: pvfs2_inode: f48c4d08 (inode = 1073741778) = 2 (nlink=1)
pvfs2_put_inode: pvfs2_inode: f48c4a74 (inode = 1073741776) = 2 (nlink=1)
.
.
.
pvfs2_put_inode: pvfs2_inode: f4e62d08 (inode = 1073741778) = 2 (nlink=1)
pvfs2_put_inode: pvfs2_inode: f4e62a74 (inode = 1073741776) = 2 (nlink=1)
pvfs2_put_inode: pvfs2_inode: f4f98024 (inode = 1073741778) = 2 (nlink=1)
pvfs2_put_inode: pvfs2_inode: f4f99d08 (inode = 1073741776) = 2 (nlink=1)
pvfs2_put_inode: pvfs2_inode: f4feed08 (inode = 1073741778) = 2 (nlink=1)
pvfs2_put_inode: pvfs2_inode: f4feea74 (inode = 1073741776) = 2 (nlink=1)


Bart.



On Fri, May 2, 2008 at 3:22 PM, Phil Carns <[EMAIL PROTECTED]> wrote:

> The dmesg output probably will help, but I think that the inode
> alloc/dealloc check itself is a little trigger happy:
>
> https://trac.mcs.anl.gov/projects/pvfs/ticket/7
>
> I didn't think it would actually break anything other than printing out an
> unnecessary warning, but I never followed up on it.  There may actually be
> something unrelated going wrong.
>
> -Phil
>
> Sam Lang wrote:
>
> >
> > Hi Bart,
> >
> > After loading the pvfs2 kmod, can you do:
> >
> > echo "1" > /proc/sys/pvfs2/debug
> >
> > Then run the same test, and send the dmesg output to me?  This should
> > show where the inode allocs/deallocs are going awry.
> >
> > Thanks,
> > -sam
> >
> > On May 2, 2008, at 4:00 PM, Bart Taylor wrote:
> >
> >  Hey guys,
> > >
> > > I have been running some tests against the 271 release, and I am
> > > having some trouble with multiple mounts on one client.  My setup has
> > > 2 servers (both meta and io servers on local disk) and one client, all
> > > of which are running RHEL4 update 6. All that was done on the test
> > > client is loading the kernel module and starting pvfs2-client.  I can
> > > mount the file system once and use it without any problem, but I have
> > > attached a test script - it takes file system information and a number
> > > of times to mount it - that keeps failing.  Here are the steps it
> > > executes:
> > >
> > > - For the number of mounts requested
> > >   - Create a new directory (defaults to /tmp/mount_limit.#)
> > >   - Mount the specified file system on the new dir
> > >
> > > - For the number of mounts requested
> > >   - Do a recursive ls comparison (keep a copy the first time through
> > > and compare subsequent mounts to the first)
> > >   - Unmount the dir
> > >   - Delete the dir
> > >
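The loop described above can be sketched in shell roughly as follows. This is not the attached test-mount-limit.pl, just a minimal illustration of the same steps; the server spec matches the one used in the report, and `DRYRUN=echo` makes it print the commands instead of running them (clear it to actually mount, which needs root and a running pvfs2-client).

```shell
#!/bin/sh
# Sketch of the mount-limit test: mount the same fs COUNT times,
# then list/compare, unmount, and remove each mount point.
FS="pvfs2-server1:3334/pvfs2-fs"
COUNT=3
DRYRUN=echo   # set to "" to really run mount/umount

# Pass 1: create a mount point per iteration and mount the fs on it
i=1
while [ "$i" -le "$COUNT" ]; do
    dir="/tmp/mount_limit.$i"
    $DRYRUN mkdir -p "$dir"
    $DRYRUN mount -t pvfs2 "$FS" "$dir"
    i=$((i + 1))
done

# Pass 2: recursively list each mount (the real script keeps the
# first listing and diffs the rest against it), then unmount and
# delete the directory -- the umount is where the reported hang
# shows up
i=1
while [ "$i" -le "$COUNT" ]; do
    dir="/tmp/mount_limit.$i"
    $DRYRUN ls -R "$dir"
    $DRYRUN umount "$dir"
    $DRYRUN rmdir "$dir"
    i=$((i + 1))
done
```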
> > > I have been able to consistently reproduce the problem running the
> > > attached script like this:
> > > ./test-mount-limit.pl pvfs2-server1:3334/pvfs2-fs 100
> > > It stalls every time with either 36 or 37 mounts remaining.  The
> > > script has been successfully run on previous versions of pvfs2 up to
> > > several thousand mounts.
> > >
> > > The problem comes at the umount step.  Eventually the process just
> > > hangs, strands a bunch of mounts, and umount doesn't work as expected
> > > after that, even from the command line.  When it stalls, I start
> > > seeing messages like this one in dmesg and syslog:
> > > May  2 15:02:44 client-node kernel: pvfs2_kill_sb: (WARNING) number of
> > > inode allocs (4100) != number of inode deallocs (2665)
> > >
> > > I am running this against an almost empty file system, since the
> > > recursive ls would take a while if it were large. Am I doing something
> > > wrong/strange here, or is there a client/kernel problem? The test
> > > seems pretty straightforward, and I've never had an issue with the
> > > script before.  I'm not sure if it was run against the 2.7.0 release,
> > > though.
> > >
> > > Bart.
> > > <test-mount-limit.pl>
> > >
> >
> >
>
>

Attachment: pvfs2-config.h
Description: Binary data

_______________________________________________
Pvfs2-developers mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-developers
