John Hascall <[EMAIL PROTECTED]>:
  * The fs flush/flushv commands do not work reliably.  Our cell looks
    like: (root.afs)(root.cell)(users)(user.username).  We recently
    replicated the volume users, and *nothing* short of a reboot got
    clients to recognize this properly (in some cases, the AFS
    cache had to be "rm -rf"ed too).
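
    For concreteness, a sketch of what we tried (the cache path below
    is the conventional default, /usr/vice/cache, and may differ per
    machine):

       % fs flush /afs/iastate.edu/users/XX/YY/USERNAME
       % fs flushvolume /afs/iastate.edu/users/XX/YY/USERNAME

    and, when all else failed, with the client shut down:

       % rm -rf /usr/vice/cache/*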

[EMAIL PROTECTED]:
  I *believe* the command you wanted was fs checkv.  The checkv command
  breaks the pseudo-callback on RO volumes (in pre-3.3 releases),
  forcing the cache manager to refetch RO data from the server.  Waiting
  an hour would do the same thing.  This may not be the case with a
  newly released volume, but I suspect it is.

       We tried "fs checkv" too.  And just about every other command
       that looked like it had a prayer of working.
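
       For reference, the long form is "fs checkvolumes"; it takes no
       arguments, so there is little to get wrong (the exact output
       wording may vary by release):

          % fs checkvolumes
          All volumeID/name mappings checked.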

  Also, I'm not sure of the 3.3 implications given that there are now
  real callbacks on the volume level for ROs.

       I am actually wondering if this isn't the root of the problem.


Wallace Colyer <[EMAIL PROTECTED]>:
  I have seen this problem many times over the years.  It is far less
  frequent now than in the past (it happens once a year instead of once a
  week).  Anyway, there seem to be situations where, with read-only
  volumes, the cache gets corrupted and it is extremely difficult to fix
  the problem.

       It is happening to us several times a day on various client
       machines.  It is not making our user community very happy...   :(

  What I do is move the file to something with a .NEW extension, then copy
  the file back to its old name.  With a corrupt directory, I move the
  directory, create a new one, then move the contents of the old
  directory into the new one.
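
  Spelled out as commands (FILE and DIR standing in for the affected
  names), that workaround is roughly:

     % mv FILE FILE.NEW
     % cp FILE.NEW FILE

  and, for a corrupt directory:

     % mv DIR DIR.NEW
     % mkdir DIR
     % mv DIR.NEW/* DIR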

       Our directory(cell) structure is:

                /afs/iastate.edu/users/XX/YY/USERNAME
          (root.afs) (root.cell)(users)      (user.USERNAME)

       One perhaps odd thing is that the volume "users" contains
       no real files: only the numbered directories XX and YY
       (where XX and YY range from 00 to 31) and the mountpoints
       for the user volumes.
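
       (Each of those mountpoints would have been created with
       something like

          % fs mkmount /afs/.iastate.edu/users/XX/YY/USERNAME user.USERNAME

       and the trouble seems to start after a "vos release" of the
       replicated volume, per the above.)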

       The symptom is that the user's directory appears 
       not to exist as /afs/iastate.edu/users/XX/YY/USERNAME, but it
       does exist as  /afs/.iastate.edu/users/XX/YY/USERNAME
                                 
       The "XX" and "YY" directories do seem OK (other than
       "ls -l" fails in "YY", of course).

       And cmdebug shows this, which looks fishy to me:

          ** Cache entry @ 0xc3a11100 for 1.536870943.19552.25019
              0 bytes     DV 0 refcnt 0
              callback 00000000   expires 0
              0 opens     0 writers
              normal file
              states (0x4), read-only
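
          (That dump came from something like "cmdebug <client-host>
          -long"; without -long, cmdebug prints only the locked cache
          entries.)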

       536870943 is "users.readonly"
       536870943.19552 is inode #2051168

       % ls -ldi /afs/iastate.edu/users/17/26/gewilson
       2051168 drwx------ 12 gewilson     2048 Mar 19 13:54 \
       /afs/iastate.edu/users/17/26/gewilson
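
       An aside: that inode number is at least consistent with the
       cache manager fabricating inodes from the FID.  Assuming the
       scheme is (volume << 16) + vnode, truncated to 32 bits (an
       assumption on my part, not checked against the source), the
       arithmetic reproduces it exactly:

          % echo $(( ((536870943 << 16) + 19552) & 0xFFFFFFFF ))
          2051168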

       What strikes me as odd is the "0 bytes" part.
       (BTW, what does "DV" stand for?)


John
