Thanks for your answer. Yes, I forgot to mention I was planning on having
each node be both a metadata and an I/O node.
 
OK, I'll back up the metadata too. I am not sure how I'd recover from a node
problem, though; I guess I need to run a few experiments. (Two scenarios:
a) a node goes down and everything gets stopped; b) a node goes down and I
decide to continue working with the other one.)
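For reference, the client-side backup I have in mind (question 3 below) would look roughly like the sketch that follows. The paths are made up, and on the real client mount I would probably use rsync rather than cp; the point is just that the backup ends up as an ordinary directory tree that can be restored onto a plain local file system.

```shell
# Sketch of a client-side backup (paths are hypothetical).
# /tmp/orangefs-mount stands in for the real OrangeFS client mount point.
mkdir -p /tmp/orangefs-mount/home/user
echo "results" > /tmp/orangefs-mount/home/user/run1.out

# Copy everything as the client sees it, preserving permissions and
# timestamps (rsync -a would be the usual choice on the real mount;
# cp -a keeps this sketch self-contained).
mkdir -p /tmp/backup
cp -a /tmp/orangefs-mount/. /tmp/backup/

# After a node failure, the backup is just regular files:
cat /tmp/backup/home/user/run1.out   # prints "results"
```

Restoring would then simply mean copying the tree back onto whatever file system (local or shared) I end up with after the failure.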


Best,

R.




On Sat, 19 May 2012 22:16:11 -0400, Boyd Wilson <[email protected]> wrote:
> OrangeFS works fine across the compute nodes as storage; many people use it
> that way, so you are fine.  As for backups, make sure you take regular
> backups of the metadata as well.  The easiest setup would be to have each
> node be both a metadata and an I/O node.

> -boyd

> On Sat, May 19, 2012 at 6:53 PM, Ramon Diaz-Uriarte <[email protected]> wrote:

> >
> > Dear All,
> >
> > I have some general questions about whether, in my setup, it makes sense
> > to use OrangeFS and, if it makes sense, the recommended usage patterns.
> >
> >
> > Context
> > =======
> >
> > We have a two-node cluster (a Dell PowerEdge C6145). Each node has four
> > SAS HDs (600 GB each), which I'll most likely combine into a single
> > virtual disk using the RAID card. Each node also has four AMD Opteron
> > 6276 sockets (16 cores per socket) and 256 GB RAM. The two nodes are
> > connected via Infiniband.
> >
> >
> > We will be using the cluster for bioinformatics/statistics computing,
> > including programs that use MPI and OpenMP, as well as giving access (via
> > web-based applications) to those same bioinfo/stats computing programs.
> >
> >
> > The main reason for considering OrangeFS is to provide a single
> > shared-disk file system for homes, application code, result storage,
> > scratch space, tmp files, etc.
> >
> >
> > Questions:
> > ==========
> >
> >
> > 1.  I've seen it recommended that computing and storage nodes be different
> >    (e.g., FAQ, 3.6). However, having separate computing and storage nodes
> >    would not make sense in my case. Is there anything I am missing?
> >
> >
> > 2. Is it overkill to use OrangeFS in my scenario?
> >
> >
> > 3. If one of the nodes fails, and I keep a backup of the data, can I
> >   recover by just copying the shared-disk backup and mounting it as a
> >   regular, local file system? (To allow this, I think I need to create
> >   the backups from one of the clients, not just back up the data held by
> >   a single server; i.e., I would not use the approach in FAQ 9.1.)
> >
> >
> > 4. Do I understand correctly that things will be much cleaner if I use a
> >  dedicated partition (on each machine) as a brick, instead of a directory?
> >
> >
> > 5. What other options might I consider? I am also thinking about GlusterFS
> >   (and asking similar questions on their list). (Lustre definitely seems
> >   to discourage running client and OSS on the same node.)
> >
> >
> > Any other comments or suggestions for this setup are welcome.
> >
> >
> > Best,
> >
> > R.
> >
> > --
> > Ramon Diaz-Uriarte
> > Department of Biochemistry, Lab B-25
> > Facultad de Medicina
> > Universidad Autónoma de Madrid
> > Arzobispo Morcillo, 4
> > 28029 Madrid
> > Spain
> >
> > Phone: +34-91-497-2412
> >
> > Email: [email protected]
> >       [email protected]
> >
> > http://ligarto.org/rdiaz
> >
> >
> > _______________________________________________
> > Pvfs2-users mailing list
> > [email protected]
> > http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
> >
-- 
Ramon Diaz-Uriarte
Department of Biochemistry, Lab B-25
Facultad de Medicina 
Universidad Autónoma de Madrid 
Arzobispo Morcillo, 4
28029 Madrid
Spain

Phone: +34-91-497-2412

Email: [email protected]
       [email protected]

http://ligarto.org/rdiaz

