Jonathan Dobbie wrote:
Thank you for the responses; we're still designing our storage system,
and AFS still seems like the best option. I've installed AFS on a
test server (a G4 running OSX - I figured I'd start with the craziest
platform we might want it on).
I'm hoping that if I toss my plan out now, people can point out the
holes before I invest too much time in it. The end goal is uptime
more than performance.
Our AFS servers would probably all be running linux. Clients are OSX,
windows and Linux.
There would be an MS KDC that would trust the main MIT KDC. I need to
talk to the Windows admin about NTLM, but if we need to sync the
passwords, that's fine (password changes are done via a web page or
command-line tools that we wrote, not passwd); if not, the MS KDC will
have random passwords.
We only have one small chunk of data that (I think) lends itself to a
R/O replica: a network library that is automounted by all OSX
computers. All other data changes often enough that people wouldn't
want to wait for me to release it. Am I missing a way to set up R/O
replicas? It'd be nice if they would mirror changes automatically.
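For what it's worth, my understanding is that R/O replicas only pick up
changes when the R/W volume is explicitly released with "vos release",
so one common workaround is to script the release from cron. A minimal
sketch - the volume name and script path are made up, and this assumes
it runs with -localauth on a database server:

```shell
#!/bin/sh
# Hypothetical sketch: push the current R/W contents of the library
# volume out to all of its R/O replica sites. "osx.library" is an
# example volume name, not a real one.
vos release osx.library -localauth

# Run from cron, e.g. every 15 minutes, so replicas lag the R/W
# volume by at most that interval:
#   */15 * * * * /usr/local/sbin/release-library.sh
```

This doesn't give true mirroring, but for data that changes on a known
schedule it may be close enough.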
Part of what I want is to be able to have any one piece of hardware
die, and either route around it automatically, or bring it back up
remotely.
Here is my current idea (I'm not hugely fond of it, so I'm really
hoping that someone has a better one). There will be two FC storage
devices (we currently have one Xraid; if we can't get much cash, the
second will be another Xraid, otherwise something better). These will
be kept in sync with DRBD, at least at the partition level (which
seems a little silly). Heartbeat will be used so that if anything goes
wrong with one server or its storage, the other server will restart
its AFS fileserver and start serving the downed server's volumes
automatically. (It'll certainly end up more complicated than this, but
that's the basic idea.)
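In case it helps the discussion, a heartbeat v1-style haresources line
for that failover might look roughly like this; the node name, service
address, DRBD resource name, mount point, and init-script name are all
assumptions and will differ on your systems:

```shell
# /etc/ha.d/haresources (sketch, not a working config)
# On failover, the surviving node takes the service IP, activates the
# DRBD device, mounts it as /vicepd, and starts the AFS fileserver.
afs1 IPaddr::192.168.1.50 drbddisk::r0 \
     Filesystem::/dev/drbd0::/vicepd::ext3 openafs-fileserver
```

The fileserver init-script name in particular varies by distribution,
so treat that token as a placeholder.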
Could someone please point out the holes in this plan? Is there a
simpler way to do this with R/O replicas that might require me to
manually promote a replica to R/W, but would be less error-prone?
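On the manual-promotion idea: depending on your OpenAFS version, vos
may offer a convertROtoRW subcommand that turns an existing R/O site
into the R/W volume in place - check whether your release has it
before relying on it. A hedged sketch, with made-up server, partition,
and volume names:

```shell
# Hypothetical manual failover, assuming server1 (which held the R/W
# volume) is dead and server2 still has a current R/O copy on /vicepd.
# All names below are examples.

# Promote the R/O site on the surviving server to R/W.
# (convertROtoRW exists only in newer vos releases; verify first.)
vos convertROtoRW server2 vicepd home.jdobbie -localauth

# Then reconcile the VLDB with what's actually on the servers.
vos syncvldb server2 -localauth
vos syncserv server2 -localauth
```

This only works if the replica was released recently, so it trades
some data currency for a much simpler setup than DRBD + heartbeat.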
Most of the data involved is home directories and departmental
shares. If it can be fixed remotely in <5 minutes, it's probably good
enough.
I keep thinking that there should be a clever way to use GFS (not
Google's, the Red Hat one) instead of DRBD to keep the volumes in
sync. All of the machines have two gigabit NICs, but it still seems
like a waste not to use FC.
More precisely, would this be possible:
/vicepd is on GFS on both RAID arrays (A and B);
it's mounted on servers 1 (rw) and 2 (ro);
if A dies, B serves the data and no one notices;
if 1 dies, heartbeat promotes 2 to serve both rw and ro.
And, if it is possible, what would users notice?
I've read other people's remarks that syncing /vicepx is bad, but I
don't know for myself.
FC isn't necessary. Just add more disks, FC or not, or add a new server
with more disks. You simply add more volumes and mount them in any path
you want in /afs. The AFS client automatically does failover for R/O
volumes. It doesn't do failover for R/W volumes, though. You can also
create a backup volume, which is a snapshot of a volume. You can have as
many R/O replicas as you have servers.
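To make that concrete, the usual command sequence looks something like
the following; the server, partition, volume, and cell names are all
placeholders:

```shell
# Add an R/O replica site for a volume on a second server, then
# release it so the replica is populated. Clients fail over between
# R/O sites automatically if one server goes down.
vos addsite server2 vicepa dept.share -localauth
vos release dept.share -localauth

# Create a backup (snapshot) volume; it's conventional to mount it
# as a .backup subdirectory so users can restore files themselves.
vos backup user.jdobbie -localauth
fs mkmount /afs/example.com/user/jdobbie/.backup user.jdobbie.backup
```

The backup volume is a point-in-time snapshot, so it's typically
refreshed nightly (e.g. "vos backupsys" from cron), not kept live.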
As a last question, completely out of left field: does anyone know if
AFS stores Apple metadata? I've seen some references to it doing so
via AppleDouble files, but nothing concrete.
I'm not sure, but Apple tends to store the resource metadata for
"filename" in "._filename" when the filesystem doesn't support
resource forks natively.
Jason
_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info