Bill:

Please send me some information: the version of PVFS2 you are running,
your conf file, the server log files, your OS version, and the version of
Berkeley DB you are using.

It appears that the servers are working, i.e., communicating with each
other, based on the ping output.  So it seems to me that the metadata for
your system is somehow hosed.  There may or may not be a way to recover;
in the meantime, I will look into the "Resource temporarily unavailable"
message and let you know why you might be receiving it.

Also, if you have to create a new filesystem anyway, I recommend that you
set up each server as both a metadata and an I/O server.  If you are
using OrangeFS (PVFS 2.8.3), you can put the metadata storage space and
the data storage space on different drives and still have one server
access both.  If you are not using OrangeFS, then you will have to put
both metadata and data in the same directory; even so, simply spreading
the metadata functionality across all the servers will enhance
performance, regardless of where the metadata is stored.
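For what it's worth, here is a rough sketch of the relevant pieces of an
fs.conf for that layout.  The section and option names are from my memory
of OrangeFS 2.8.3 (pvfs2-genconfig will produce the authoritative
version), and the paths, alias, and handle ranges below are placeholders:

```
<Defaults>
    # Placeholder paths -- put these on different drives
    MetadataStorageSpace /mnt/ssd/pvfs2-meta
    DataStorageSpace /mnt/data/pvfs2-data
</Defaults>

<Aliases>
    # One alias per server; repeat for all of your nodes
    Alias node1 tcp://node1:3334
</Aliases>

<Filesystem>
    Name pvfs2-fs
    ID 9
    RootHandle 1048576
    # Give every alias a range in BOTH sections, so each
    # server handles metadata as well as I/O
    <MetaHandleRanges>
        Range node1 4-2147483650
    </MetaHandleRanges>
    <DataHandleRanges>
        Range node1 2147483651-4294967297
    </DataHandleRanges>
</Filesystem>
```

With pre-2.8.3 PVFS there is only the single StorageSpace option, so
metadata and data share a directory, but listing every server in both
handle-range sections still gets you the performance benefit.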

Becky
-- 
Becky Ligon
PVFS Developer
Clemson University
864-650-4065

> Just not having any luck anymore with PVFS!   Besides the fsck issues
> which remain unresolved, now we are all having trouble just creating
> directories on this filesystem (which initiated the original fsck to
> begin with).   Maybe I need to recreate the entire filesystem?
>
> # pvfs2-mkdir /scratch/pvfs2/bill
> [E 09:59:44.774443] mkdir failed with error: Resource temporarily
> unavailable
> PVFS_sys_mkdir: Resource temporarily unavailable (error class: 0)
> cannot create [/scratch/pvfs2/bill]
>
> # df | grep pvfs
> tcp://della3:3334/pvfs2-fs
>                      4932108288  22208512 4909899776   1% /scratch/pvfs2
>
> The setup is one meta separate from the 16 I/O servers.  We mount using
> the kernel modules as well.  But it doesn't seem to matter if we use the
> kernel hooks or the user space hooks, the "Resource temporarily
> unavailable" messages disallow all things.
>
>
>
> pvfs2-ping returns all the goodness a healthy filesystem typically does.
>
> I'm at a complete loss with this fairly new install.
>
> Bill
> _______________________________________________
> Pvfs2-users mailing list
> [email protected]
> http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
>

