On Dec 28, 2005, at 2:37 PM, Paul Robins wrote:
I'll reply in one if that's OK (sorry for the top-posting)
No problem ... :-)
I would expect a disk to be the thing to go, to be honest, but
regardless, I want some system where parity data is stored on
other nodes in this group of machines. Basically, networked
RAID5 would be perfect, as that would give me ~400 GB of
space while still being able to handle a machine vanishing from
the network (the whole machine dies when the disk does; cheap
whiteboxes, don't you know)
:-)
I understand the way AFS works with regard to clients seeing /afs,
and I did see read-only replication, and then running a command to
change a read-only node(?) into a read-write node (I'm sorry if I'm
talking nonsense; I'd read the wiki if I could). This is why I figured
it could perhaps be implemented with some sort of networked RAID5,
giving me a lot more storage than just RO-mirroring one server to
the other three, while still being redundant.
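(For reference, the read-only replication being described is driven by OpenAFS's vos tool. A minimal sketch, with hypothetical server, partition, and volume names:)

```shell
# Sketch of AFS read-only replication. Hypothetical names:
# fileserver "fs1", replica servers "fs2".."fs4", volume "data".

# Create the read-write volume on the first server:
vos create fs1 /vicepa data

# Register read-only replica sites (one on the RW server itself
# is conventional, plus the other machines):
vos addsite fs1 /vicepa data
vos addsite fs2 /vicepa data
vos addsite fs3 /vicepa data
vos addsite fs4 /vicepa data

# Push the current RW contents out to all RO sites:
vos release data
```

Note that this gives redundant read access only; clients still write through the single RW volume, which is why it doesn't add capacity the way RAID5 would.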
You're not talking nonsense at all. It's exactly the kind of
statement I was trying to provoke.
(Just the term 'node' comes from cluster terminology, not AFS, but OK.
AFS doesn't care about nodes; there are only volumes on fileservers.) :-)
I don't really recommend that conversion of RO volumes to RWs for
backups. If you search the archives for that topic, you'll find some
of my old statements on it.
IIRC I said something like: use that only if your disk is gone, as
in 'the dog ate your hard disk'. (I didn't look it up ... ;-) )
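(For completeness, the conversion in question is vos convertROtoRW, available in newer OpenAFS releases. A last-resort recovery sketch, with hypothetical server, partition, and volume names:)

```shell
# Assuming the RW site is gone and a read-only copy of volume
# "data" survives on fileserver "fs2" (hypothetical names).

# Promote the surviving RO copy in place to read-write:
vos convertROtoRW fs2 /vicepa data

# Re-sync the volume location database with what is actually
# on the server afterwards:
vos syncvldb fs2
vos syncserv fs2
```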
Well, reading your idea, I think I have to repeat myself.
It's doable, especially with something like ENBD, but you have to
'pay' for it with some performance, as far as I can judge that
design.
You'll have one fileserver then (or maybe two) and will transfer a
lot of data over the network to those fileserver(s), just to transfer
it from there to your AFS clients, in the worst case over the same network.
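(As a sketch of what that design would look like on the fileserver side, using the standard nbd tools and mdadm rather than ENBD specifically; all hostnames, ports, and device names are hypothetical:)

```shell
# On each of the three exporting whiteboxes: export a local disk
# as a network block device on some port.
nbd-server 2000 /dev/hdb

# On the machine that will be the AFS fileserver: attach the
# three remote disks ...
nbd-client box1 2000 /dev/nbd0
nbd-client box2 2000 /dev/nbd1
nbd-client box3 2000 /dev/nbd2

# ... assemble them, plus a local disk, into a RAID5 array
# (capacity = 3 of the 4 disks; any one machine may vanish) ...
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/hda3 /dev/nbd0 /dev/nbd1 /dev/nbd2

# ... and put an AFS vice partition on top of the array:
mkfs.ext3 /dev/md0
mount /dev/md0 /vicepa
```

This is exactly where the double network transfer comes from: every client write crosses the network once to reach the fileserver and again when md writes the stripes out to the nbd devices.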
It _could_ work, even if it's not really what the designers of AFS
had in mind when they built the system. ;-)
Horst
_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info