>Hm.. You'd care when you started getting the complaints about weird cache
>corruption.
I'm perfectly willing to admit that I don't understand everything about
AFS, but I'd be glad if you explained this to me.
I'd actually be willing to trade guaranteed consistency for a 2x
increase in write performance, even if that means things can go wrong
when two clients try writing a new file at the same time (I'm not
really sure it's guaranteed now, to be honest). In my experience, this
happens so rarely that it's not really something to worry about.
(The times I am aware that we had people doing this, it didn't work
right anyway, so I don't think anything would be lost). Judging
from the amount of email I got from everyone else the first time
I posted about this, I'm not the only one who feels this way.
It's also not clear to me whether omitting the read RPCs for new files
was intentional (if so, why is the code still there?) or whether it just
got broken along the way. Everything I've seen indicates the latter.
--Ken