Hi Bob,
You are using a non-Sun SCSI HBA. Could you please be more specific
about the HBA model and driver?
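If you are not sure, something along these lines (run on the host; the grep patterns are only examples) should show which driver is bound to the controllers:

prtconf -D | grep -i scsi    # device tree with the bound driver name for each node
cfgadm -al                   # attachment points, including c3 and c4
modinfo | grep -i scsi       # loaded SCSI/HBA driver modules and versions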
You are getting roughly the same high CPU load when writing to the single-disk
UFS filesystem as when writing to raid-z. This may mean that the problem is not with ZFS itself.
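One way to narrow it down: repeat the write test and watch where the time goes while it runs. A rough sketch (the dd target, block size and count are only placeholders, and I am assuming the pool is mounted at its default /z):

dd if=/dev/zero of=/z/testfile bs=1024k count=2048 &
mpstat 5         # usr vs sys CPU time per processor during the write
iostat -xnz 5    # per-disk throughput and service times on c3/c4

Running the same dd against the UFS mount afterwards would give a direct comparison.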
Victor
Bob Evans wrote:
Robert,
Sorry about not being clearer.
The storage unit I am using is configured as follows:
X X X X X X X X X X X X X X
(Each X is an 18 GB SCSI Disk)
The first 7 disks were used for the ZFS raidz; I used the last disk (#14)
for my UFS target. The first 7 are on one SCSI channel, the next 7 are on the
other channel.
Here is the output of zpool status:
  pool: z
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        z           ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0
            c3t5d0  ONLINE       0     0     0
            c3t8d0  ONLINE       0     0     0

errors: No known data errors
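For reference, the pool was created along these lines (reconstructed from the status output above, so take the exact command with a grain of salt):

zpool create z raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t8d0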
Here is the format output for each of the 14 disks in the array:
partition> print
Current partition table (original):
Total disk sectors available: 35548662 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                34      16.95GB          35548662
  1 unassigned    wm                 0          0                   0
  2 unassigned    wm                 0          0                   0
  3 unassigned    wm                 0          0                   0
  4 unassigned    wm                 0          0                   0
  5 unassigned    wm                 0          0                   0
  6 unassigned    wm                 0          0                   0
  8   reserved    wm          35548663       8.00MB          35565046
I created a UFS filesystem on the target disk and mounted it as follows:

newfs /dev/rdsk/c4t8d0s0
mount /dev/dsk/c4t8d0s0 /foo
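For completeness, the same mount with the filesystem type spelled out, plus a quick check that it is really there, would be:

mount -F ufs /dev/dsk/c4t8d0s0 /foo
df -k /foo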
Thanks!
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss