Chaz Chandler wrote:

>> - if most of the users do mostly reading, and mostly smallish files, you
>> might get by using volume replication at each site, leaving the RW
>> volume wherever it is, and relying on some tricks to keep the user
>> confused during writes...i think, in general, the idea of moving RW
>> volumes all the time is a bad one...it will put big loads on your
>> fileservers if you have lots of users
> 
> For the most part, yes.  We try to keep the large stuff (databases) 
> separately and have the db apps take care of synchronization in their own way.
> 
> I thought as much, but I'm getting very poor performance even for reading.  
> It seems that the clients don't do a good job of prioritizing the local 
> fileserver first.  Some posts mention lower-numbered IP addresses getting 
> priority, but I'm not sure how that would work for servers on different 
> subnets.  Shouldn't the choice have something to do with lower latency?

In theory the answer is 'yes'.  Unfortunately, there is no logic in the
client to determine the latency between any given set of connections.
The server preferences can be set manually using the
"fs setserverprefs" command:

  http://www.openafs.org/pages/manpages/1/fs_setserverprefs.html

You can provide your users with scripts that set the preferences
appropriate for their current location.
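A minimal sketch of such a per-site script (server names and rank
values here are placeholders; lower rank means higher priority, and the
ranks follow the conventions described in the fs_setserverprefs man
page):

```shell
#!/bin/sh
# Hypothetical per-site preference script: bias this client toward
# the local fileserver.  Replace the hostnames with your own servers.
# Lower rank = preferred; 20000 is the base rank for a same-subnet
# server, so we give the remote server a much higher (worse) rank.
fs setserverprefs afs-local.example.com 20000 \
                  afs-remote.example.com 40000

# Show the preferences now in effect, to confirm the change took:
fs getserverprefs
```

Users at each site would run the variant of the script that ranks
their own site's server first.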

>> - check the top of your afs tree and make sure root.afs, root.cell and
>> probably most other volumes at the top of the tree contain only
>> mountpoints, and are all replicated, and don't change them much...in
>> other words, you want stability at the top of the tree, to reduce the
>> changes that clients have to pay attention to
> 
> These are all mounted R/O, right?  Should mostly-static volumes be mounted 
> R/O and then only access the R/W vol through /afs/.domain/ for the occasional 
> updates?  What about for the software repository scenario below?
> 
> The way I understand the volume access is that even if a volume is mounted 
> R/W, its R/O vol is accessed until a write operation, in which case all 
> further access (including reads) are from the R/W vol.  If so, then a user 
> should ideally be able to use the local server for most reads.

There are two types of mount points: normal and read-write.  If all of
the mount points are normal, then a read-only replica is preferred over
the read-write volume.  If there is no read-only replica, then the
read-write volume is used.

Once the user crosses an explicit read-write mount point, then the
client will always prefer the read-write volume over the readonly.
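The two mount-point types can be illustrated with "fs mkmount" (the
cell, path, and volume names below are placeholders):

```shell
# Normal mount point: clients follow it to a read-only replica
# of proj.software if one exists, falling back to the read-write.
fs mkmount /afs/example.com/software proj.software

# Explicit read-write mount point (-rw flag), conventionally placed
# under the /afs/.cellname/ tree: clients always use the read-write
# volume from here on down.
fs mkmount /afs/.example.com/software proj.software -rw

# Inspect an existing mount point; the output shows '#' for a normal
# mount point and '%' for a read-write one.
fs lsmount /afs/example.com/software
```

This is why the /afs/.cell/ convention mentioned above works: everyday
access goes through the normal (read-only-preferring) path, while
updates are made through the read-write path and then released.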

> We haven't made any changes to the default cache options, so maybe this would 
> be a good next step?

You will need to tune the cache for your users' usage patterns.  At the
very least, the size of the cache should be large enough to hold the
working set required by the applications being used.
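For example, you might check the current cache parameters and raise the
size (the 2 GB figure below is just an illustration; the permanent
setting lives in the cacheinfo file read by afsd at startup):

```shell
# Report the current cache size and how much of it is in use:
fs getcacheparms

# Temporarily raise the cache size until the next restart.
# The value is in 1-kilobyte blocks, so 2097152 = 2 GB.
fs setcachesize 2097152
```

Make sure the partition holding the cache directory actually has the
space you configure, or afsd will refuse to start.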

>> how many users? what kinds of files and what kinds of usage? how much
>> sharing? these can make a difference in how you set things up, too. (i
>> worked on a cell once with clients in sweden, south america, and on the
>> european continent...one server at each site, whenever the south
>> american site became master db server, followed by transatlantic link
>> going down...the european users weren't happy....basically, as long as
>> you have good network connectivity between client systems and server
>> systems, there is no need for physical proximity...)
> 
> We've got about 5 users, so a pretty small setup.  They're mostly very 
> non-technical, and expect it to Just Work (tm).
> 
> The files regular users are concerned with are mostly documents, images, and 
> presentations.  Their heavy sharing is usually confined to a small set of 
> small files (usually less than 1MB each); sets change as projects change 
> (weekly/monthly).  Still, it takes about 1 minute to transfer 1MB across the 
> VPN -- not insignificant.

What operating system?

Jeffrey Altman
