The server named in the mount command is only contacted once, when the client
starts up, to find out where to get the config file.  So, unless you have
hundreds or thousands of clients starting at once, it really doesn't
matter which server you point them at.  If you are in a situation where lots
of nodes start the client simultaneously, then balancing the nodes across
the servers, just as you described, is a good idea.
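
For example (just a sketch, not something from the docs; adjust the hostname
pattern, port, and mount point to your setup), each node could pick its
server from its own node number so the mounts spread round-robin over
m001-m004:

  # run at boot on each compute node (assumes hostnames like n001, n042, ...)
  NODE_NUM=$(hostname | sed 's/[^0-9]//g')        # digits of the hostname
  SERVER=m00$(( 10#${NODE_NUM:-0} % 4 + 1 ))      # -> m001 .. m004
  mount -t pvfs2 ib://${SERVER}:3335/pvfs2-fs /SCRATCH

Any scheme that spreads the initial config-file requests across the four
servers would work just as well.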

I'm not following your question about disk space.  Please give me more
information.
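
For example, if the pvfs2 command-line utilities are installed on one of the
clients, the output of pvfs2-statfs against the mount point would be useful:

  pvfs2-statfs -m /SCRATCH

That reports the total and free space the servers see, which should make it
easier to tell whether the low free space is simply the disks being 95% full
or something else.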

Becky

On Wed, Jan 11, 2012 at 11:29 AM, Yves Revaz <[email protected]> wrote:

> Dear List,
>
> I have been using pvfs2 successfully on a cluster for about 2-3 months now,
> with only one or two minor problems.
>
> However, I have two questions to ask:
>
> 1) I have 4 masters that provide disks (all are metadata servers), say:
>
> m001,m002,m003,m004
>
> and n slaves that must access the servers.
> pvfs2 is mounted using the kernel module.
> Currently, on each slave, I use the following command:
>
> mount -t pvfs2 ib://m001:3335/pvfs2-fs /SCRATCH
>
> Is it a problem that all slaves go through m001? Would it be
> better if 1/4 of the slaves used m001, 1/4 m002, etc.,
> in order to avoid a bottleneck? I have no idea how this works.
>
> 2) These last few days, the free disk space on pvfs2 seems very low. Can it
> simply be due to the fact that all the disks are nearly full (currently 95%)?
>
> Filesystem            Size  Used Avail Use% Mounted on
>                      3.4T  3.3T  189G  95% /SCRATCH
>
>
>
> With best regards,
>
>
> yves
>
>
>
>
> --
>                                                (o o)
> --------------------------------------------oOO--(_)--OOo-------
>  Dr. Yves Revaz
>  Laboratory of Astrophysics
>  Ecole Polytechnique Fédérale de Lausanne (EPFL)
>  Observatoire de Sauverny     Tel : ++ 41 22 379 24 28
>  51. Ch. des Maillettes       Fax : ++ 41 22 379 22 05
>  1290 Sauverny             e-mail : [email protected]
>  SWITZERLAND                  Web : http://www.lunix.ch/revaz/
> -----------------------------------------------------------------
>
> ______________________________**_________________
> Pvfs2-users mailing list
> [email protected]
> http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
>



-- 
Becky Ligon
OrangeFS Support and Development
Omnibond Systems
Anderson, South Carolina
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
