Modulok <[EMAIL PROTECTED]> wrote:
> Couple questions for anyone on the list who has a moment (and the answer to
> any of these):
> Objective: I need to kick people off of a storage drive (we'll say
> /dev/ad4), without corrupting the file system and without bringing the
> entire system down. I need to safely umount the file systems, even if my
> users have processes which have files open.
> 1. If I use "umount -f /dev/ad4s1a" to forcefully umount a file system, does
> this jeopardize the integrity of said file system? Like...will it jerk the
rug out from under a process in the middle of a disk write, thus leaving a
> half written file, or will it wait until the write is complete? (I guess
> this would largely depend on the disk controller?)
I don't believe there are any guarantees if you -f it. The filesystem
itself will probably be OK, but I would expect files that were open for
writing to end up corrupt.
> 2. How do I get a list of processes that are accessing a specific file
> system, e.g. /dev/ad4s1a?
fstat(1) is your friend.
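For example, a small sketch of using fstat(1) to pull out the PIDs with
files open on a given mount point (the mount point here is just an
example; the awk field number matches the PID column of FreeBSD's fstat
output):

```shell
# Print the unique PIDs of processes holding files open on a mount point.
# Assumes FreeBSD fstat(1); column 3 of its output is the PID, and the
# first line is a header.
pids_on_mount() {
  fstat -f "$1" | awk 'NR > 1 { print $3 }' | sort -un
}

# Usage (mount point is hypothetical):
#   pids_on_mount /mnt/storage
```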
> 3. Is there any safe way to unconditionally umount a file system, even if a
> run-away process is writing to it (as bad of an idea as this is)?
I would write a script that pulls fstat data, then kills any processes
with files open, then attempts to unmount the filesystem. If that fails,
go through the fstat data again and kill -9 the processes this time.
If all that fails, you can finally choose to umount -f.
You could also try some looping constructs, kill/umount four or five
times before switching to kill -9/umount, etc.
It all depends on how desperate you are to get the filesystem unmounted,
how long you're willing to wait, and how much data you're willing to lose.
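The escalation described above could be sketched roughly like this (a
sketch only; the signal sequence, sleep interval, and mount point are
assumptions, and on a real box you'd want to be careful not to kill
anything critical that happens to have a file open there):

```shell
# Escalating unmount: try a clean umount, kill the processes fstat
# reports, retry, switch to kill -9, and only then fall back to umount -f.
force_umount() {
  fs="$1"
  for sig in TERM TERM KILL; do
    # try a clean unmount first; if it works, we're done
    umount "$fs" 2>/dev/null && return 0
    # otherwise signal every process fstat reports on that filesystem
    fstat -f "$fs" | awk 'NR > 1 { print $3 }' | sort -un |
      while read -r pid; do
        kill -"$sig" "$pid" 2>/dev/null
      done
    sleep 2
  done
  # last resort: forced unmount (open files may end up corrupt)
  umount "$fs" 2>/dev/null || umount -f "$fs"
}

# Usage (mount point is hypothetical):
#   force_umount /mnt/storage
```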