Sascha Brechenmacher wrote:

>
> Am 13.02.2007 um 22:46 schrieb Ian Collins:
>
>> Looks like poor hardware, how was the pool built?  Did you give ZFS  the
>> entire drive?
>>
>> On my nForce4 Athlon64 box with two 250G SATA drives,
>>
>> zpool status tank
>>   pool: tank
>>  state: ONLINE
>>  scrub: none requested
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         tank        ONLINE       0     0     0
>>           mirror    ONLINE       0     0     0
>>             c3d0    ONLINE       0     0     0
>>             c4d0    ONLINE       0     0     0
>>
>> Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
>>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>> bester           4G 45036  21 47972   8 32570   5 83134  80 97646  12 253.9   0
>>
>> dd from the mirror gives about 77MB/s
>>
>> Ian.
>>
>
> I use the entire drive for the zpools:
>
>   pool: data
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         data        ONLINE       0     0     0
>           mirror    ONLINE       0     0     0
>             c1d0    ONLINE       0     0     0
>             c1d1    ONLINE       0     0     0
>
> errors: No known data errors
>
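
For reference, a whole-disk mirror like that would typically have been built with something along these lines (device names taken from your status output; just a sketch, not necessarily the exact command you used). Giving ZFS whole disks also lets it enable the drives' write cache:

  # mirrored pool built from two whole disks
  zpool create data mirror c1d0 c1d1
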
So it really looks like your hardware isn't up to the job.

> how could I dd from the zpools, where is the block device?


I didn't read a block device; I just dd'd a DVD ISO file stored on the pool.
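
If you want a comparable test, something along these lines works as a rough sequential-read check (the path and size are only examples, assuming the pool is mounted at /data and compression is left at its default of off; use a file larger than RAM so the ARC cache doesn't flatter the numbers):

  # write a large file to the pool, then read it back under time(1);
  # MB/s is the file size divided by the elapsed seconds
  dd if=/dev/zero of=/data/bigfile bs=1024k count=8192
  time dd if=/data/bigfile of=/dev/null bs=1024k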

Ian
