This seems to have worked, but it is showing an abnormal amount of space.

r...@fsk-backup:~# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
ambry  3.62T   132K  3.62T     0%  ONLINE  -

r...@fsk-backup:~# df -h | grep ambry
ambry                 2.7T   27K  2.7T   1% /ambry
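If I'm reading it right, zpool list counts the raw capacity of all four
disks while df shows the usable space left after one disk's worth of
raidz1 parity, so the gap may just be parity accounting. Rough arithmetic,
assuming four 1 TB (~931 GiB) drives:

echo $((4 * 931))   # ~3724 GiB, roughly the 3.62T zpool list reports (raw, parity included)
echo $((3 * 931))   # ~2793 GiB, roughly the 2.7T df reports (usable after parity)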

This happened the last time I created a raidz1 as well. Before I
continue: is this incredibly abnormal, or is there something I'm
missing and this is actually normal behavior?

Thanks, Jonny

Wes Morgan wrote:
> On Thu, 15 Jan 2009, Jonny Gerold wrote:
>
>> Hello,
>> I was hoping that this would work:
>> http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror
>>
>> I have 4x (1 TB) disks, one of which is filled with 800 GB of data (that I
>> can't delete or back up somewhere else).
>>
>>> r...@fsk-backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
>>> /dev/lofi/1
>>> r...@fsk-backup:~# zpool list
>>> NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
>>> ambry   592G   132K   592G     0%  ONLINE  -
>> I get this (592 GB???). I bring the virtual device offline and the pool
>> becomes degraded, yet then I won't be able to copy my data over. I was
>> wondering if anyone else had a solution.
>>
>> Thanks, Jonny
>>
>> P.S. Please let me know if you need any extra information.
>
> Are you certain that you created the sparse file at the correct size?
> If I had to guess, it is only in the range of about 150 GB. The
> smallest device will limit the total size of your array. Try
> using this for your sparse file and recreating the raidz:
>
> dd if=/dev/zero of=fakedisk bs=1k seek=976762584 count=0
> lofiadm -a fakedisk
>
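Double-checking the suggested numbers (just my own arithmetic, assuming the
real disks are 1 TB drives of 1,000,204,886,016 bytes, i.e. 976762584 KiB,
which is a common raw size): with bs=1k, count=0 and seek=976762584, dd
creates a sparse file whose apparent size matches the real disks, so the lofi
device should no longer be the smallest device limiting the raidz, while
taking essentially no space on disk:

echo $((976762584 * 1024))   # 1000204886016 bytes, ~931.5 GiB, typical for a 1 TB drive
ls -l fakedisk               # apparent size should match the real 1 TB disks
du -k fakedisk               # close to zero blocks actually allocated, since the file is sparse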
