Hi,
Due to my carelessness, I added two disks to a raid-z2 zpool as normal data
disks, when in fact I wanted to use them as ZIL (log) devices.
Is there any remedy?
Many thanks.
Fred
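For reference, the difference between what happened and what was intended comes down to one keyword on the command line. Pool and disk names below are hypothetical:

```shell
# What was run (adds the disks as new top-level data vdevs,
# striped alongside the raidz2 -- at this time this cannot be undone):
zpool add cn03 c22t2d0 c22t3d0

# What was intended (adds the disks as a mirrored log device;
# log vdevs, unlike data vdevs, can later be removed with "zpool remove"):
zpool add cn03 log mirror c22t2d0 c22t3d0
```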
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fred Liu
>
> Due to my carelessness, I added two disks to a raid-z2 zpool as normal
> data disks, when in fact I wanted to use them as ZIL devices.
That's a huge bummer, and it's the main reason why device removal has been a
priority request for such a long time... There is no solution. You can only
destroy & recreate your pool, or learn to live with it that way.

Sorry...
>
> That's a huge bummer, and it's the main reason why device removal has
> been a
> priority request for such a long time... There is no solution. You
> can
> only destroy & recreate your pool, or learn to live with it that way.
>
> Sorry...
>
Yeah, I also realized this when I sent out this message. In NetApp, it is so
easy to change the raid group size. There is still a long way for ZFS to go.
Hope I can see that in the future.
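The "destroy & recreate" route usually means replicating the datasets somewhere else first. A rough sketch, assuming a second pool named backup with enough free space (all names hypothetical):

```shell
# Take a recursive snapshot of everything in the misconfigured pool.
zfs snapshot -r cn03@migrate

# Replicate the whole dataset tree, with properties, to the scratch pool.
zfs send -R cn03@migrate | zfs recv -F backup/cn03

# Destroy and recreate cn03 with the intended layout, e.g.:
zpool destroy cn03
# zpool create cn03 raidz2 <disks...> log mirror <ssd1> <ssd2>

# Send the data back into the rebuilt pool.
zfs send -R backup/cn03@migrate | zfs recv -F cn03
```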
> From: Fred Liu [mailto:fred_...@issi.com]
>
> Yeah, I also realized this when I sent out this message. In NetApp, it is
> so easy to change the raid group size. There is still a long way for ZFS
> to go. Hope I can see that in the future.
This one missing feature of ZFS, IMHO, does not result in "a long way for
zfs to go" in relation to netapp. I shut off my netapp 2 years ago in favor
of ZFS, because ZFS performs so darn much better, and has such immensely
greater robustness. Try doing ndmp, cifs, nfs, iscsi on netapp.
>
> You can add mirrors to those lonely disks.
>
Can it repair the pool?
Thanks.
Fred
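"Add mirrors" here presumably means zpool attach. It cannot undo the mistake, and it does not repair the pool, but it does give the two stray top-level disks redundancy again. A sketch with hypothetical names for the spare disks:

```shell
# Each accidentally-added disk is now a single-disk top-level vdev.
# Attaching a spare disk to each turns it into a two-way mirror, so a
# single disk failure no longer takes down the whole pool.
zpool attach cn03 c22t2d0 c22t4d0
zpool attach cn03 c22t3d0 c22t5d0

zpool status cn03    # watch the resilver complete
```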
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fred Liu
>
> Due to my carelessness, I added two disks to a raid-z2 zpool as normal
> data disks, when in fact I wanted to use them as ZIL devices.
> -Original Message-
> From: Fred Liu [mailto:fred_...@issi.com]
>
> I also did another
>
> So... You accidentally added non-redundant disks to a pool. They were
> not
> part of the raidz2, so the redundancy in the raidz2 did not help you.
> You
> removed the non-redundant disks, and now the pool is faulted.
>
> The only thing you can do is:
> Add the disks back to the pool (re-insert them).
On Mon, Sep 19, 2011 at 9:29 AM, Fred Liu wrote:
> Yes, I have connected them back to the server. But it does not help.
> I am really sad now...
I cringed a little when I read the thread title. I did this on
accident once as well, but "lucky" for me, I had enough scratch
storage around in various sizes.
> From: Krunal Desai [mailto:mov...@gmail.com]
>
> On Mon, Sep 19, 2011 at 9:29 AM, Fred Liu wrote:
> > Yes. I have connected them back to server. But it does not help.
> > I am really sad now...
I'll tell you what does not help. This email. Now that you know what you're
trying to do, why don't you post the results of your "zpool import" command?
How about an error message, and how you're trying to go about fixing your
pool? Nobody here can help you without information.
>
> I'll tell you what does not help. This email. Now that you know what
> you're trying to do, why don't you post the results of your "zpool
> import" command? How about an error message, and how you're trying to
> go about fixing your pool? Nobody here can help you without
> information.
>
I also ran zpool import -fFX cn03 on b134 and on b151a (via the SX11 live CD).
It resulted in a core dump and a reboot after about 15 minutes.
I could see all the LEDs blinking on the HDDs during those 15 minutes.
Can replacing the empty ZIL devices help?
Thanks.
Fred
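For context on the flags used above, a sketch of the recovery-import options (semantics paraphrased from the zpool man page; pool name from the thread):

```shell
zpool import -F -n cn03   # dry run: report whether discarding the last
                          # few transactions would make the pool importable
zpool import -F cn03      # rewind to the last consistent txg and import,
                          # losing the last few seconds of writes
zpool import -FX cn03     # extreme rewind: try progressively older txgs;
                          # slow and a last resort (it core-dumped here)
```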
> -Original Message-
> From: Fred Liu
The core dump:
r10: ff19a5592000 r11:0 r12:0
r13:0 r14:0 r15: ff00ba4a5c60
fsb: fd7fff172a00 gsb: ff19a5592000 ds:0
es:0 fs:
On Sep 19, 2011, at 12:10 AM, Fred Liu wrote:
> Hi,
>
> For my carelessness, I added two disks into a raid-z2 zpool as normal data
> disk, but in fact
> I want to make them as zil devices.
You don't mention which OS you are using, but for the past 5 years of
[Open]Solaris releases, the system prints a warning message and will not
allow this to occur without using the force option (-f).
I use opensolaris b134.
Thanks.
Fred
> -Original Message-
> From: Richard Elling [mailto:richard.ell...@gmail.com]
> Sent: Monday, September 19, 2011 22:21
> To: Fred Liu
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] remove wrongly added device from zpool
>
> On Sep 19, 2011, at 12:10 AM, Fred Liu wrote:
>
> You don't mention which OS you are using, but for the past 5 years of
> [Open]Solaris
> releases, the system prints a warning message and will not allow this
> to occur
> without using the force option (-f).
> -- richard
>
Yes, there is a warning message; I used zpool add -f.
Thanks.
Fred
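For readers wondering what that warning looks like, a paraphrase from memory (exact wording varies by release; names hypothetical):

```shell
$ zpool add cn03 c22t2d0 c22t3d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk

$ zpool add -f cn03 c22t2d0 c22t3d0   # -f suppresses the check -- the
                                      # step that caused the problem here
```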
I made some good progress, as follows:
zpool import
pool: cn03
id: 1907858070511204110
state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-6X
And:
format
Searching for disks...done
c22t2d0: configured with capacity of 1.77GB
AVAILABLE DISK SELECTIONS:
0. c4t5000C5003AC39D5Fd0
/scsi_vhci/disk@g5000c5003ac39d5f
1. c4t5000C50039F0B447d0
/scsi_vhci/disk@g5000c50039f0b447
2. c4t5000C5000970B70Bd0
On Sep 19, 2011, at 8:34 AM, Fred Liu wrote:
> I made some good progress, as follows:
>
> zpool import
> pool: cn03
>id: 1907858070511204110
> state: UNAVAIL
> status: One or more devices are missing from the system.
> action: The pool cannot be imported. Attach the missing
>device
>
> For each disk, look at the output of "zdb -l /dev/rdsk/DISKNAMEs0".
> 1. Confirm that each disk provides 4 labels.
> 2. Build the vdev tree by hand and look to see which disk is missing
>
> This can be tedious and time consuming.
Do I need to export the pool first?
Can you give more details?
On Sep 19, 2011, at 9:16 AM, Fred Liu wrote:
>>
>> For each disk, look at the output of "zdb -l /dev/rdsk/DISKNAMEs0".
>> 1. Confirm that each disk provides 4 labels.
>> 2. Build the vdev tree by hand and look to see which disk is missing
>>
>> This can be tedious and time consuming.
>
> Do I need to export the pool first?

No, but your pool is not imported.
>
> No, but your pool is not imported.
>
YES. I see.
> What do you mean by "build the vdev tree by hand and look to see which
> disk is missing"?
>
> The label, as displayed by "zdb -l", contains the hierarchy of the
> expected pool config.
> The contents are used to build the output you see in the "zpool import"
> or "zpool status"
> commands.
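The two steps above can be scripted roughly like this (the device glob is hypothetical; a healthy disk carries four label copies, two at the front and two at the back):

```shell
# Dump the ZFS labels from every candidate disk and count how many
# unpack cleanly; each intact disk should report all four.
for d in /dev/rdsk/c4t*d0s0 /dev/rdsk/c22t*d0s0; do
  echo "== $d =="
  zdb -l "$d" | grep -c '^ *version'   # roughly one match per readable label
done
# Disks reporting "failed to unpack label" are the ones to chase; compare
# the guid/children entries in the good labels against the "zpool import"
# output to work out which vdev is missing.
```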
I have a new answer: interaction between dataset encryption and L2ARC
and ZIL.
1. I am pretty sure (but not completely sure) that data stored in the
ZIL is encrypted if the destination dataset uses encryption. Can
anybody confirm?
2. What happens w
On 19/09/11 19:45, Jesus Cea wrote:
> I have a new answer: interaction between dataset encryption and
> L2ARC and ZIL.
Question, a new question... :)
> -Original Message-
> From: Richard Elling [mailto:richard.ell...@gmail.com]
> Sent: Tuesday, September 20, 2011 3:57
> To: Fred Liu
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] remove wrongly added device from zpool
>
> more below…
>
> On Sep 19, 2011, at 9:51 AM, Fred Liu wrote:
zdb -l /dev/rdsk/c22t2d0s0
LABEL 0
failed to unpack label 0
LABEL 1
failed to unpack label 1
--
Hi,
I did this:
1): prtvtoc /dev/rdsk/c22t3d0s0 | fmthard -s - /dev/rdsk/c22t2d0s0
2): zpool import cn03
3): zpool status
pool: cn03
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
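Step 1 above is the key trick: it copies the partition table from a surviving disk onto the blank one, so that slice 0 exists where ZFS expects to find its labels. Broken down, with the same device names as above:

```shell
# Print the VTOC (partition table) of the good disk, human-readable,
# to verify it looks sane before copying it anywhere.
prtvtoc /dev/rdsk/c22t3d0s0

# Pipe that table into fmthard, which stamps it onto the blank disk.
# "-s -" tells fmthard to take the slice layout from stdin.
prtvtoc /dev/rdsk/c22t3d0s0 | fmthard -s - /dev/rdsk/c22t2d0s0
```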