I ran into an interesting dilemma recently and I'm wondering if anyone here
can shed some light on why this happened.

I have a number of pools, including the root pool, on the server's on-board
disks. I also have one pool on a SAN disk, outside the system. Last night the
SAN crashed, and shortly thereafter the system ran a number of cron jobs, most
of which operated on the pool that lives on the SAN. This caused a number of
problems, most notably that when the SAN eventually came back up, those cron
jobs finished and then crashed the system again.

Only by running zfs destroy on the newly created zfs file systems that the
cron jobs had produced was the system able to boot up again. As long as those
corrupted zfs file systems remained on the SAN disk, not even the rpool would
come up correctly: none of the zfs file systems would mount, and most services
were disabled. Once I destroyed the newly created zfs file systems, everything
mounted instantly and all services started.
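
For reference, the recovery amounted to roughly the following (this is only a
sketch of what I did; "sanpool" and "sanpool/nightly_export" are placeholders
for my actual SAN pool and the datasets the cron jobs created, not the real
names):

    # From a failsafe/single-user boot, bring in the SAN pool and see
    # what the cron jobs left behind.
    zpool import sanpool
    zfs list -r sanpool

    # Destroy the datasets that were created while the SAN was down.
    zfs destroy -r sanpool/nightly_export

    # After that, the remaining pools and datasets mounted normally.
    zfs mount -a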

Question: why would those few zfs file systems prevent ALL pools from
mounting, even though they are on different disks and file systems, and
prevent all services from starting? I thought ZFS was more resilient to this
sort of thing. I will have to edit my scripts and add a SAN check to make sure
it is up before they execute, to prevent this from happening again (something
along the lines of the sketch below). Luckily I still had all the raw data the
cron jobs were working with, so I was able to quickly re-create what they had
originally produced. Although this happened on Solaris 10, the discussion is
probably applicable to OpenSolaris as well (I use both).
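
A minimal sketch of the kind of guard I have in mind, assuming the check is
based on the pool's health as reported by zpool; the pool name "sanpool" and
the script path are placeholders, not my real configuration:

    #!/bin/sh
    # Wrapper run from cron: only do the real work if the SAN-backed
    # pool is healthy. "sanpool" stands in for the actual pool name.

    POOL=sanpool

    # zpool list -H -o health prints the pool's health (ONLINE,
    # DEGRADED, UNAVAIL, ...); bail out unless it is ONLINE.
    health=`zpool list -H -o health "$POOL" 2>/dev/null`

    if [ "$health" != "ONLINE" ]; then
        logger -p daemon.err "cron job skipped: $POOL health is ${health:-unknown}"
        exit 1
    fi

    # SAN pool looks healthy -- hand off to the real job.
    # /path/to/real_cron_job.sh is a stand-in for the actual script.
    exec /path/to/real_cron_job.sh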