On 01/11/2012 07:59 PM, Becky Ligon wrote:
The information for the mount command is only used once when the client starts up to know where to get the config file. So, unless you have hundreds or thousands of clients that you start at once, it really doesn't matter. If you are in a situation where you have lots of nodes starting the client simultaneously, then balancing of nodes per server is a good idea, just as you have described.
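If you do want to balance them, here is a minimal sketch of what I mean (untested, and assuming a hypothetical naming scheme where each slave's hostname ends in a number, e.g. n001, n002, ...):

    # Derive a server name from this node's hostname so that the
    # mounts are spread round-robin over m001..m004.
    NUM=$(hostname | tr -cd '0-9')                    # e.g. n042 -> 042
    SERVER=$(printf 'm%03d' $(( 10#$NUM % 4 + 1 )))   # -> m001..m004
    mount -t pvfs2 ib://${SERVER}:3335/pvfs2-fs /SCRATCH

Since the mount target is only read once at startup, this just spreads the config-file requests across the servers; the file system you end up with is the same.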
ok, perfect, thanks.

I'm not following your question about disk space. Please give me more information.

ok, I will try to be more clear.

Over the last two days, my pvfs2 file system has seemed very, very slow. It takes a long time
to copy small files, and even a simple ls takes a while to return an answer.

Looking at the file system, I see that it is nearly full (95%), so I was wondering whether the fact that all disks participating in the file system are nearly full could impact the speed of the file system itself.
I think this happens with other file systems, ZFS for example.
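For reference, the per-server usage can be checked with the pvfs2 admin tools (assuming they are installed on a client; I am quoting the -m flag from memory, so please check the tool's help output):

    pvfs2-statfs -m /SCRATCH    # should report capacity and usage per server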

However, interestingly, just now it works much better, while it is still 95% full. So I'm trying to find another explanation. In fact, two days ago, two of the four server daemons (pvfs2-server) crashed. I restarted them. Do you think there could be some link between the slowness of the file system and these
crashes?
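To verify that all four servers are responding again after the restart, I believe the standard client tool can be used (exact syntax from memory):

    pvfs2-ping -m /SCRATCH    # contacts the meta and I/O servers for this mount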

Thanks,

yves

Becky

On Wed, Jan 11, 2012 at 11:29 AM, Yves Revaz <[email protected]> wrote:

    Dear List,

    I have been using pvfs2 successfully on a cluster for about 2-3 months now,
    with only one or two minor problems.

    However, I have two questions to ask:

    1) I have 4 masters that provide disks (all are metadata
    servers), say:

    m001,m002,m003,m004

    and n slaves that must access the servers.
    pvfs2 is mounted using the kernel module.
    Currently, for each slave, I use the following command:

    mount -t pvfs2 ib://m001:3335/pvfs2-fs /SCRATCH

    Is it a problem that all slaves go through m001? Maybe it would
    be better if 1/4 of the slaves used m001, 1/4 m002, etc.,
    in order to avoid a bottleneck, no? I have no idea how this works.

    2) These last days, the pvfs2 disk space seems very low. Can it
    simply be due to
    the fact that all disks are nearly full (currently 95%)?

    Filesystem            Size  Used Avail Use% Mounted on
                         3.4T  3.3T  189G  95% /SCRATCH



    With best regards,


    yves

--
Becky Ligon
OrangeFS Support and Development
Omnibond Systems
Anderson, South Carolina

--
                                                 (o o)
--------------------------------------------oOO--(_)--OOo-------
  Yves Revaz
  Laboratory of Astrophysics EPFL
  Observatoire de Sauverny     Tel : ++ 41 22 379 24 28
  51. Ch. des Maillettes       Fax : ++ 41 22 379 22 05
  1290 Sauverny             e-mail : [email protected]
  SWITZERLAND                  Web : http://www.lunix.ch/revaz/
----------------------------------------------------------------

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
