Hello, I am running an OpenAFS 1.4.2 server on Debian Etch (AMD64) Linux.
Recently, I was extending a RAID5 volume that held one partition, sda1, which serves as the physical volume of an LVM2 volume group; that group in turn holds the LVM2 logical volumes backing the vicep? partitions:

[sda1] <- [physical volume] <- [volume group] <- [logical volumes for vicep?]

The sda1 partition was not touched during the expansion; afterwards, I only added an sda2 partition in the remaining free space of the expanded sda RAID5 volume. The OS was rebooted a few times without starting the OpenAFS fileserver at all (single-user mode). All went well, but after the OpenAFS fileserver was started, the salvager kicked in and ran for hours.

Is this by design - e.g. did the fileserver detect that the mount count on the vicep? partitions had changed and therefore salvage? This is by no means a rant; I am just trying to understand why this happened, as it was unexpected...

Kind regards,
Vlad

_______________________________________________
OpenAFS-info mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-info
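For readers following along, an expansion like the one described above usually maps onto a few LVM commands. The sketch below only prints a plan rather than running anything, since these commands modify disks; the device, volume-group, and logical-volume names (/dev/sda2, afs_vg, vicepa) are illustrative assumptions, not taken from the original message.

```shell
# Hypothetical plan for growing the LVM stack after a RAID5 expansion:
# the new space appears as a second partition (sda2), which becomes a PV,
# joins the existing VG, and can then grow a vicep* logical volume.
# All names below are illustrative.
PLAN="
pvcreate /dev/sda2                        # initialise the new partition as a PV
vgextend afs_vg /dev/sda2                 # add it to the existing volume group
lvextend -l +100%FREE /dev/afs_vg/vicepa  # grow a logical volume into the new space
resize2fs /dev/afs_vg/vicepa              # grow the filesystem on it
"
# Print the plan instead of executing it.
echo "$PLAN"
```

Note that in the situation described above only sda2 was added and sda1 was left alone, so no pvresize of the existing PV would have been needed.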
