We have the same problem, and I have just moved back to UFS because of
this issue. The engineer at Sun that I spoke with implied that there is
an RFE open internally to address this problem.

The issue is this:

When a zpool is configured with a single vdev and ZFS times out a write
operation to the pool/filesystem for whatever reason, possibly just a
hold-back or a retryable error, the ZFS module forces a kernel panic
because it sees no other mirrors in the pool to write to.
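
For illustration, this is the kind of non-redundant layout that hits the
problem, sketched with a hypothetical pool name and device name:

    # one top-level vdev; all redundancy lives in the hardware RAID array
    zpool create tank c2t0d0

    # shows a single top-level vdev, so a failed or timed-out write
    # leaves ZFS with nothing to retry against
    zpool status tank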

The way around this is to configure the zpools with mirrors, which
negates the benefit of the hardware RAID array and sends twice as much
data down to the RAID cache as is actually required (because of the
mirroring at the ZFS layer). In our case it was a little old Sun
StorEdge 3511 FC SATA Array, but the principle applies to any RAID array
that is not configured as JBOD.
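
The workaround looks something like this sketch (again, the pool name
and LUN device names are hypothetical): present two LUNs from the array
and let ZFS mirror them.

    # ZFS-level mirror across two LUNs exported by the RAID array
    zpool create tank mirror c2t0d0 c3t0d0

    # every block now goes down the FC links twice, even though the
    # array is already providing its own RAID protection underneath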



Victor Engle wrote:
> Roshan,
> 
> Could you provide more detail, please? The host and ZFS should be
> unaware of any EMC array-side replication, so this sounds more like an
> EMC misconfiguration than a ZFS problem. Did you look in the messages
> file to see if anything happened to the devices that were in your
> zpools? If something did happen to the devices, that wouldn't be a ZFS
> error. If your EMC devices fall offline because of something happening
> on the array or fabric, then ZFS is not to blame. The same thing would
> have happened with any other filesystem built on those devices.
> 
> What kind of pools were in use, raidz, mirror or simple stripe?
> 
> Regards,
> Vic
> 
> 
> 
> 
> On 6/19/07, Roshan Perera <[EMAIL PROTECTED]> wrote:
>> Hi All,
>>
>> We have come across a problem at a client where ZFS brought the system
>> down with a write error on an EMC device, due to mirroring being done
>> at the EMC level and not in ZFS. The client is totally committed to EMC
>> and not too happy to use ZFS for mirroring/RAID-Z. I have seen the
>> notes below about ZFS and SAN-attached devices and understand the ZFS
>> behaviour.
>>
>> Can someone help me with the following Questions:
>>
>> Is this the way ZFS will work in the future?
>> Is there going to be any compromise to let the SAN do the RAID and
>> ZFS do the rest?
>> If so, when? And if possible, could you give details?
>>
>>
>> Many Thanks
>>
>> Rgds
>>
>> Roshan
>>
>> > Does ZFS work with SAN-attached devices?
>> >
>> >     Yes, ZFS works with either direct-attached devices or SAN-attached
>> > devices. However, if your storage pool contains no mirror or RAID-Z
>> > top-level devices, ZFS can only report checksum errors but cannot
>> > correct them. If your storage pool consists of mirror or RAID-Z
>> > devices built using storage from SAN-attached devices, ZFS can report
>> > and correct checksum errors.
>> >
>> > This says that if we are not using ZFS RAID-Z or mirroring, then the
>> > expected behaviour would be for ZFS to report, but not fix, the error.
>> > In our case the system kernel panicked, which is something different.
>> > Is the FAQ wrong, or is there a bug in ZFS?
>>
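
As a side note on the FAQ text quoted above: the "report but not
correct" behaviour it describes shows up in the CKSUM column of zpool
status. A minimal check, with the pool name hypothetical:

    # read back every block in the pool and verify checksums
    zpool scrub tank

    # the CKSUM column shows detected errors; without a ZFS mirror or
    # RAID-Z top-level vdev they can be reported but not repaired
    zpool status -v tank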

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
