Christopher D. Clausen wrote:

Read/write data is also cached. If not changed, it remains cached. If changed and pushed back to the file server, it is marked stale in the other clients' caches and fetched again by those clients only if it is requested again.

Adnoh <[EMAIL PROTECTED]> wrote:

Caching is whole-file, so if a large file is changed on client A and was previously cached on client B, then when client B requests the file again it notices its cached copy is marked stale and fetches the entire file from the file server. (Since client A made the changes, the file is already cached on client A. Writes to cached AFS files go first to the AFS cache and then back to the AFS file server on flushes, closes, or syncs.)

Two other factors affect whether a given replicated volume will be accessed as RO or RW.

By default, the AFS client prefers to use readonly volumes, so if you create a replica of a volume, the data will immediately become readonly.

Given mount points /a/b/c/d, the volume mounted at d/ will NOT be accessed as RO if:

a) any of the mount points above it are RW mount points, or
b) any of the volumes mounted above it are not replicated. (This is determined by the VLDB entry for the volume. If I have five replicas on five different file servers but the VLDB entry for the volume does not show any RO sites, then the volume is treated as unreplicated. It's important that the VLDB be in the correct state.)

It's easy to check. Assume that /afs mounts root.afs and root.afs is replicated, that cellname mounts root.cell and root.cell is replicated, and that <next> is in a volume that is also replicated, where <next> might be a directory in /afs/cellname or perhaps a mount point to a volume called '<next>':

cd /afs
fs lq
cd cellname
fs lq
cd <next>
fs lq

Each 'fs lq' command should return a .readonly volume name.
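The check above can also be scripted: look for the '.readonly' suffix in the volume name printed by 'fs lq'. A minimal sketch follows; the fs_lq function fakes the output of 'fs lq' so the sketch runs without an AFS client, and on a real cell you would replace it with a call to `fs lq "$1"`:

```shell
#!/bin/sh
# Sketch: decide whether a directory is backed by the RO or the RW instance
# by checking for the '.readonly' suffix in the volume name from `fs lq`.
# fs_lq below simulates `fs lq` output for illustration only; on a real
# cell, replace the function body with: fs lq "$1"
fs_lq() {
  printf '%s\n' 'Volume Name                   Quota      Used %Used  Partition'
  printf '%s\n' 'root.cell.readonly             5000        24    0%        48%'
}

# First field of the second output line is the volume name.
volume=$(fs_lq /afs/cellname | awk 'NR==2 {print $1}')

case "$volume" in
  *.readonly) echo "RO instance in use: $volume" ;;
  *)          echo "RW instance in use: $volume" ;;
esac
```

With the simulated output above this prints "RO instance in use: root.cell.readonly"; an unreplicated or RW-accessed volume would take the second branch.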
Example:

[EMAIL PROTECTED] afs]$ cd /afs
[EMAIL PROTECTED] afs]$ fs lq
Volume Name                   Quota      Used %Used  Partition
root.afs.readonly              5000         7    0%        48%
[EMAIL PROTECTED] afs]$ cd ccre.com
[EMAIL PROTECTED] ccre.com]$ fs lq
Volume Name                   Quota      Used %Used  Partition
root.cell.readonly             5000        24    0%        48%
[EMAIL PROTECTED] ccre.com]$ cd user
[EMAIL PROTECTED] user]$ fs lq
Volume Name                   Quota      Used %Used  Partition
root.cell.readonly             5000        24    0%        48%
[EMAIL PROTECTED] user]$ cd k
[EMAIL PROTECTED] k]$ fs lq
Volume Name                   Quota      Used %Used  Partition
root.cell.readonly             5000        24    0%        48%

-- Up to here all directory nodes are in a replicated volume and all mount points are 'regular' (not RW) mount points.
-- The next volume is not replicated, and no .readonly suffix is returned by 'fs lq':

[EMAIL PROTECTED] k]$ cd kim
[EMAIL PROTECTED] kim]$ fs lq
Volume Name                   Quota      Used %Used  Partition
user.k.kim                 no limit   1074945    0%         4%
[EMAIL PROTECTED] kim]$

-- To further illustrate: we know that root.afs is replicated ('fs lq' returned .readonly at /afs). However, the rule is "from a RW to a RW," so if I make a mount point to root.afs here (/afs/ccre.com/user/k/kim) in the RW user volume, it will take us to the RW root.afs without using the -rw switch when I create the mount point:

[EMAIL PROTECTED] kim]$ pwd
/afs/ccre.com/user/k/kim
[EMAIL PROTECTED] kim]$ fs mkm AFSroot root.afs

--- A RW mount point is indicated by a % sign in front of the volume name.
--- Here the regular mount point is indicated by the # in front of the volume name.
[EMAIL PROTECTED] kim]$ fs lsm AFSroot
'AFSroot' is a mount point for volume '#root.afs'
[EMAIL PROTECTED] kim]$ cd AFSroot
[EMAIL PROTECTED] AFSroot]$ fs lq
Volume Name                   Quota      Used %Used  Partition
root.afs                       5000         7    0%        23%
[EMAIL PROTECTED] AFSroot]$

--- No .readonly suffix; we're in the RW root.afs instance.

Such a mount point affects all clients, of course, regardless of their location, since the mount point is in the AFS file system and not local to each client.

You can, however, manually force a mount point to be RW (the -rw option to fs mkm) and this way still be able to clone the data to other servers using vos release. Note that there is still only one instance of the RW volume, not a RW volume in each location. Only one instance of the RW is allowed -- that is, only one file server and only one partition will have the RW volume of a given name.

The convention for writing to replicated volumes is to create the so-called 'dot path' and to use the dot path for writes. The dot path is conventionally created under /afs and is a RW mount point:

/afs/cellname  --> follows the 'volume traversal rules' (basically, from a RO to a RO, with some caveats)
/afs/.cellname --> the RW mount point causes the client to ignore replicas from this node on down

The dot path is created with 'fs mkmount' using the -rw switch:

fs mkm .cellname root.cell -rw
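The '#' versus '%' marker in 'fs lsm' output is easy to check mechanically. A minimal sketch, with the sample 'fs lsm' line hard-coded so it runs without an AFS client (on a real cell you would feed it the output of `fs lsm <dir>`):

```shell
#!/bin/sh
# Sketch: classify a mount point from `fs lsm` output.
# '#' marks a regular mount point, '%' marks a RW mount point.
# The sample line is hard-coded for illustration; on a real cell use:
#   lsm_out=$(fs lsm /afs/.cellname)
lsm_out="'.cellname' is a mount point for volume '%root.cell'"

# Extract the quoted volume spec, e.g. %root.cell or #root.afs.
volspec=$(printf '%s\n' "$lsm_out" | sed "s/.*volume '\([^']*\)'.*/\1/")

case "$volspec" in
  '%'*) kind="RW mount point" ;;
  '#'*) kind="regular mount point" ;;
  *)    kind="unrecognized" ;;
esac
echo "$kind: ${volspec#[%#]}"
```

With the sample line above this prints "RW mount point: root.cell", which is exactly what a dot-path mount created with 'fs mkm ... -rw' looks like.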
This isn't correct. If the RO goes down, the RW is not used. If another RO site is available, the client will automatically fail over to it, but it will not fail over from any RO to the RW. This is by design.

You might need to use fs setserverprefs to have the clients on each side use the correct server. Also, note that the AFS client will NOT switch between using the RO and RW automatically (well, if the RO goes down, the RW will be used, but that isn't likely what you want to happen in this case).

I got tired of accidentally removing the RW when I intended to remove a RO, so even though it is considered 'best practice' I don't follow it. YMMV.

If you are using the "dot path," all reads and writes will go to the RW volume. Generally, it's a "best practice" to have a RO clone on the same server as the RW as well. Not sure if you did that or not.

When a replica is updated, only changed files are copied from the RW site to the RO sites. If the '-f' switch is used on the 'vos release' command, the entire RO volume is copied over. If all of the files in a RO volume have changed, then the entire RO volume is copied over (file by file).

If the files are relatively small, then limited bandwidth isn't going to be a 'killer.' Limited bandwidth applies to AFS files just as it does to files served from Samba or NFS servers.

If volume size is an issue, try using smaller volumes. Under /afs/domain/data/it/controlling, for example, do you have other directories? If so, you can create a volume and mount it at the directory node. This keeps volumes smaller. The same applies to Samba and NFS, unless you use multiple instances of the RW file and merge changes from different Samba/NFS servers.

AFS can do what you want, but the performance over the WAN links is likely going to be poor. And since the RW volume can live on only a single server, someone is going to be stuck with the slow connection. We had file servers all over the planet. Remote administration is easy enough.
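The replication workflow discussed above can be sketched as a short command sequence. The server, partition, and volume names here are hypothetical; RUN defaults to 'echo' so the script only prints the commands, and you would set RUN to empty on a real cell with admin tokens:

```shell
#!/bin/sh
# Sketch of the vos replication workflow. All names are hypothetical.
# RUN=echo (the default) makes this a dry run that just prints commands.
RUN=${RUN:-echo}

$RUN vos addsite fs1.example.com a data.it   # RO site on the RW's own server (the 'best practice' clone)
$RUN vos addsite fs2.example.com a data.it   # RO site at the remote location
$RUN vos release data.it                     # incremental release: copies only changed files
$RUN vos release data.it -f                  # forced release: recreates the RO volumes in full
```

The incremental 'vos release' is what keeps WAN traffic down; the '-f' form is the one that pushes the entire volume and is the case where limited bandwidth hurts.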
Aircraft mechanics could power cycle a server if required.

_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info
