Gary,

> I realize that this configuration is not supported.

The configuration is supported, but not in the manner mentioned below.

If there are two (or more) instances of ZFS in the end-to-end data path, each instance is responsible for its own redundancy and error recovery. There is no in-band communication between one instance of ZFS and any other instance of ZFS located elsewhere in the same end-to-end data path.

The key point is that a ZVOL provides the same block I/O semantics as any other Solaris block device. Therefore, when a ZVOL is configured as an iSCSI target and that target is accessed by an iSCSI initiator as an LU, the initiator has no awareness that a ZVOL is the backing store of that LU.
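
For reference, a minimal sketch of the configuration being discussed, using the shareiscsi property available on OpenSolaris. The pool names, volume size, discovery address, and device name are placeholders, not values from this thread:

```shell
# On the file server: create a ZVOL and export it as an iSCSI target.
# "tank/iscsivol" and the 20 GB size are example values.
zfs create -V 20g tank/iscsivol
zfs set shareiscsi=on tank/iscsivol

# On the application server: discover the target and build a pool on it.
# Replace the address and device name with your own.
iscsiadm add discovery-address 192.168.1.10
iscsiadm modify discovery --sendtargets enable
zpool create apppool c2t1d0
```

Note that the pool created on the application server sees only a plain block device; as described above, it has no way to signal a checksum failure back to the ZFS instance on the file server.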

Although not quite the same issue, this ZFS discussion list has raised similar questions about configuring ZFS on RAID-enabled storage arrays, and whether using simple JBODs might be a better solution.

Jim Dunham
Engineering Manager
Sun Microsystems, Inc.
Storage Platform Software Group

> What's required
> to make it work?  Consider a file server running ZFS that exports a
> volume with Iscsi.  Consider also an application server that imports
> the LUN with Iscsi and runs a ZFS filesystem on that LUN.  All of the
> redundancy and disk management takes place on the file server, but
> end-to-end error detection takes place on the application server.
> This is a reasonable configuration, is it not?
>
> When the application server detects a checksum error, what information
> does it have to return to the file server so that it can correct the
> error?  The file server could then retry the read from its redundant
> source, which might be a mirror or might be synthetic data from
> RAID-5.  It might also indicate that a disk must be replaced.
>
> Must any information accompany each block of data sent to the
> application server so that the file server can identify the source
> of the data in the event of an error?
>
> Does this additional exchange of information fit into the Iscsi
> protocol, or does it have to flow out of band somehow?
>
> -- 
> -Gary Mills-    -Unix Support-    -U of M Academic Computing and Networking-
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

