Nathan Rawling wrote:

Steve,

*every* machine that wants to access files that live in /afs needs to
be an AFS client.  This includes AFS server machines.

ok.  Since we have a small number of machines, what would be the
situation if we ran *all* (max 10) machines as AFS servers, each
acting as its own client?

My understanding is that you want to keep the number of DB servers down
because of performance issues when you have too many. I suspect that
three DB servers would probably be sufficient for your application. You
want more than one to eliminate the risk of a single point of failure;
two is not recommended for technical reasons, so three is a good bet.
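For reference, pointing a cell at three DB servers is just a matter of listing all three in the CellServDB file on every machine. A sketch (the cell name, IPs, and hostnames below are made up for illustration):

```
# /usr/vice/etc/CellServDB (client) / /usr/afs/etc/CellServDB (server)
>example.com            # Hypothetical cell with three DB servers
10.0.0.1                #afsdb1.example.com
10.0.0.2                #afsdb2.example.com
10.0.0.3                #afsdb3.example.com
```

Every client and server in the cell should carry the same list, so the Ubik quorum can survive the loss of any one DB server.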

Thanks for that Nathan.

Are the 'performance issues' Ethernet-related, or processor-related? We can easily place this traffic on a private network segment.


With that said, you could run the file server portion on each of your
terminal servers if you so desire. If your prime goal is to distribute
load, that might be the way to go.
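As a sketch of that layout (cell name, server names, volume names, and partition letters here are all hypothetical), each terminal server would run a fileserver and hold a share of the user volumes, mounted into the cell's home tree:

```shell
# Create user volumes, spread across the terminal servers' fileservers
# (server names and /vicepa partition are assumptions for illustration)
vos create ts1.example.com /vicepa user.alice
vos create ts2.example.com /vicepa user.bob

# Mount each volume under the cell's home directory tree
fs mkmount /afs/example.com/home/alice user.alice
fs mkmount /afs/example.com/home/bob   user.bob
```

The upshot is that /home can be a tree of mount points rather than one big volume, which is what makes the load (and failure) distribution work.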

The prime goal is redundancy and simplicity. I understand AFS is much more complex than an NFS solution, but we can mostly package the system pre-configured, with the hardware/networking pre-specified.


ok.  Will the clients still sync if the server is down?  Will we still
have our single point of failure?  Can more of, or all of, the Terminal
Servers act as OpenAFS servers, as well as clients for the cell?  Can
we then `mount` this cell as /home?

Well, with one server, you would definitely have a single point of
failure. As Dan Pritts mentioned, even with a multiple-server solution
as described above, a server outage would result in some user home
directories becoming unavailable.

As you spread the user volumes across more fileservers, you reduce the
impact of the loss of one of them. If you use every one of your terminal
servers as a file server, and you have 10 servers, you only lose 10% of
your user volumes if/when a server dies.
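The arithmetic above can be sketched in one line (assuming volumes are spread evenly and there are no read-only replicas):

```shell
# Back-of-envelope: losing one of N fileservers takes out 1/N of the
# home volumes, assuming an even spread and no replication.
N=10
echo "$((100 / N))% of user volumes offline"   # prints "10% of user volumes offline"
```

With 5 servers the figure would be 20%, with 2 servers 50%, and so on.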

Ok. I would have mounted /home/ as *one* volume. Can we have each server maintain its own synced (cached) copy of the same (/home/) fileset?

I only need all the directories under /home/


I have heard that NetApp has an implementation of NFS with working
read-write failover, and I know that Veritas has a solution. However, I'm
sure that either would charge big bucks.

eeeh, ah well.


We simply want 2-10 Terminal/Application Servers that keep /home/* +
UIDs + GIDs sync'ed at all times.  All these Terminal/App Servers are on
the same physical Ethernet Segment, firewalled in, on one site, and
probably in the same rack - although it would be nice to distribute
across the complex/campus near their associated set of Terminals.

You should watch out for the technical limitations of AFS as a solution.
If you have users with files larger than 2 GB, AFS is probably not the
best solution.

2 gig? Shivers... there would be no HDD space left! Nah, seriously, one 700 MB CD ISO at the most..

Byte-range file locking is another common area of concern.

I'm not up on that at this stage.. I trust I can safely ignore it for now? (famous last words...)

The largest administration hassle with AFS, in my experience, has been
managing the backups. The bundled backup system is cumbersome (although it
has probably been greatly improved since I last used it), so frequently
you have to work out a solution that matches your environment.
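As a hedged sketch of the roll-your-own route (volume name and file paths below are hypothetical), the usual building blocks are `vos backup` and `vos dump`, which preserve the AFS ACLs and volume metadata that a plain `cp` through the client would lose:

```shell
# Clone a user volume, then dump the clone to a file that your
# regular backup system can pick up (volume/paths are assumptions)
vos backup user.alice                  # creates/refreshes user.alice.backup
vos dump -id user.alice.backup -file /backups/user.alice.dump

# A later restore would look roughly like:
# vos restore ts1.example.com /vicepa user.alice -file /backups/user.alice.dump
```

Dumping the `.backup` clone rather than the live volume means users can keep working while the dump runs.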

<shrug> Why can't I just `init 3 && cp -a /home /somewhere/else && init 5` ??

Sorry if that's an ignorant question, but it seems logical to me..

<spacer between questions>

I *thought* I heard that there is no such thing as UIDs + GIDs on an AFS filesystem? (I expect I'm really showing my ignorance now..)
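For what it's worth, AFS does keep its own user and group IDs, in the protection database managed with `pts`, separate from the local Unix UIDs/GIDs in /etc/passwd; sites commonly create the `pts` entries with IDs matching the Unix UIDs so the two stay aligned. A sketch (names and IDs are made up):

```shell
# Create protection-database entries mirroring hypothetical Unix accounts
pts createuser -name alice -id 1001    # match alice's Unix UID
pts creategroup -name staff -id -201   # AFS group IDs are negative
pts adduser -user alice -group staff   # group membership lives in pts too
```

Access to files in /afs is then governed by AFS ACLs referencing these pts entries, not by the Unix mode bits.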



Thank you people. Please, anything that anyone would like to add, feel free...

regards,
Steve


_______________________________________________
OpenAFS-info mailing list
[EMAIL PROTECTED]
https://lists.openafs.org/mailman/listinfo/openafs-info
