> Each drive is freshly formatted with one 2G file copied to it. 

How are you creating each of these files?

Also, would you please include the output of the isalist(1) command?

> These are snapshots of iostat -xnczpm 3 captured somewhere in the
> middle of the operation.

Have you double-checked that this isn't a measurement problem by
measuring zfs with zpool iostat (see zpool(1M)) and verifying that
outputs from both iostats match?
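For example, you could run the two side by side while the copy is in progress and compare the bandwidth columns (the pool name "tank" below is a placeholder; substitute your pool's name from zpool list):

    # In one terminal: the view from ZFS itself
    zpool iostat tank 3

    # In another terminal: the device-level view you captured above
    iostat -xnczpm 3

If the read bandwidth zpool iostat reports roughly matches the kr/s column from iostat, the numbers are probably real and not a measurement artifact.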

> single drive, zfs file
>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>  258.3    0.0 33066.6    0.0 33.0  2.0  127.7    7.7 100 100 c0d1
> 
> Now that is odd. Why so much waiting? Also, unlike with raw or UFS, kr/s /
> r/s gives 256K, as I would imagine it should.

Not sure.  If we can figure out why ZFS is slower than raw disk access
in your case, it may explain why you're seeing these results.

> What if we read a UFS file from the PATA disk and ZFS from SATA:
>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>  792.8    0.0 44092.9    0.0  0.0  1.8    0.0    2.2   1  98 c1d0
>  224.0    0.0 28675.2    0.0 33.0  2.0  147.3    8.9 100 100 c0d0
> 
> Now that is confusing! Why did SATA/ZFS slow down too? I've retried this a
> number of times, not a fluke.

This could be cache interference.  ZFS and UFS use separate caches (the
ARC and the page cache, respectively), so running both at once can
create memory pressure.

How much memory is in this box?

> I have no idea what to make of all this, except that ZFS has a problem
> with this hardware/these drivers that UFS and other traditional file
> systems don't. Is it a bug in the driver that ZFS is inadvertently
> exposing? A specific feature that ZFS assumes the hardware to have, but
> it doesn't? Who knows!

This may be a more complicated interaction than just ZFS and your
hardware.  There are a number of layers of drivers underneath ZFS that
may also be interacting with your hardware in an unfavorable way.

If you'd like to do a little poking with MDB, we can see the features
that your SATA disks claim they support.

As root, type "mdb -k", and then at the ">" prompt that appears, enter
the following command (this is one very long line, wrapped here by mail;
enter it as a single line):

*sata_hba_list::list sata_hba_inst_t satahba_next | ::print sata_hba_inst_t 
satahba_dev_port | ::array void* 32 | ::print void* | ::grep ".!=0" | ::print 
sata_cport_info_t cport_devp.cport_sata_drive | ::print -a sata_drive_info_t 
satadrv_features_support satadrv_settings satadrv_features_enabled

This should show satadrv_features_support, satadrv_settings, and
satadrv_features_enabled for each SATA disk on the system.

The values for these variables are defined in:

http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/sys/sata/impl/sata.h

This is the relevant snippet for interpreting these values:

/*
 * Device feature_support (satadrv_features_support)
 */
#define SATA_DEV_F_DMA                  0x01
#define SATA_DEV_F_LBA28                0x02
#define SATA_DEV_F_LBA48                0x04
#define SATA_DEV_F_NCQ                  0x08
#define SATA_DEV_F_SATA1                0x10
#define SATA_DEV_F_SATA2                0x20
#define SATA_DEV_F_TCQ                  0x40    /* Non NCQ tagged queuing */

/*
 * Device features enabled (satadrv_features_enabled)
 */
#define SATA_DEV_F_E_TAGGED_QING        0x01    /* Tagged queuing enabled */
#define SATA_DEV_F_E_UNTAGGED_QING      0x02    /* Untagged queuing enabled */

/*
 * Drive settings flags (satadrv_settings)
 */
#define SATA_DEV_READ_AHEAD             0x0001  /* Read Ahead enabled */
#define SATA_DEV_WRITE_CACHE            0x0002  /* Write cache ON */
#define SATA_DEV_SERIAL_FEATURES        0x8000  /* Serial ATA feat.  enabled */
#define SATA_DEV_ASYNCH_NOTIFY          0x2000  /* Asynch-event enabled */
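Decoding the hex values mdb prints is just bit-masking against the
defines above. As a sketch (the 0x3f value below is made up for
illustration; substitute what mdb actually reports for your drive):

```shell
# Decode a hypothetical satadrv_features_support value. 0x3f is an
# invented example, not output from any real drive.
val=0x3f

# Each entry pairs a flag bit with its name from sata.h.
for entry in 0x01:SATA_DEV_F_DMA 0x02:SATA_DEV_F_LBA28 \
             0x04:SATA_DEV_F_LBA48 0x08:SATA_DEV_F_NCQ \
             0x10:SATA_DEV_F_SATA1 0x20:SATA_DEV_F_SATA2 \
             0x40:SATA_DEV_F_TCQ; do
    bit=${entry%%:*}
    name=${entry#*:}
    # Print the flag name if its bit is set in val.
    if [ $(( val & bit )) -ne 0 ]; then
        echo "$name"
    fi
done
```

A value of 0x3f would decode to DMA, LBA28, LBA48, NCQ, SATA1, and
SATA2 supported, but not TCQ; the same masking works for
satadrv_settings and satadrv_features_enabled with their own defines.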

This may give us more information if this is indeed a problem with
hardware/drivers supporting the right features.

-j
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
